\section{Acknowledgment window size analysis}
Two findings emerge from the results in Section 4.
First, the smaller the BER, the larger the PHY rate
from which 11ax/256 outperforms 11ax/64. Second, the MCS from which
11ax/256 outperforms 11ax/64 does not depend on the MSDU size.
We investigate these phenomena further.
In the following analysis we use the above-mentioned
approximation from~\cite{SA}:
we neglect the rounding in the
denominator of Eq.~\ref{equ:thrtwole} and assume
that all the MPDUs contain the same number of MSDUs.
We also neglect the rounding of the MPDU size
and the addition of the 22 bits in the denominator.
With this approximation, Eq.~\ref{equ:thrtwole}
becomes Eq.~\ref{equ:tlthrerr}:
\begin{equation}
Thr=
\frac
{8 \cdot X \cdot Y \cdot L_{DATA} \cdot (1-BER)^{8 \cdot (O_M+Y \cdot Len)}}
{O_P + \frac{8 \cdot X \cdot (O_M+Y \cdot Len)}{R}}
\label{equ:tlthrerr}
\end{equation}
Notice from Eq.~\ref{equ:tlthrerr} that,
given a number $Y$ of MSDUs per MPDU, it is
worthwhile to include as many MPDUs as possible
in the A-MPDU frame, up to the limit on the PPDU transmission
time.
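This monotonicity is easy to check numerically. Below is a minimal sketch of Eq.~\ref{equ:tlthrerr} in Python; the values $O_M=36$ bytes and $O_P=160\,\mu s$ are our illustrative assumptions, since the text defines only the constituent fields of these overheads:

```python
import math

def thr_approx(X, Y, L_data, R, ber, O_M=36, O_P=160.0):
    """Approximate Throughput (Mbps) of Eq. tlthrerr for X MPDUs of Y MSDUs
    each.  Lengths are in bytes, times in microseconds, R in Mbps.
    O_M = 36 bytes and O_P = 160 us are assumed values."""
    Len = 4 * math.ceil((L_data + 14) / 4)
    mpdu = O_M + Y * Len                      # MPDU length in bytes
    return (8 * X * Y * L_data * (1 - ber) ** (8 * mpdu)) / (O_P + 8 * X * mpdu / R)

# For a fixed Y the Throughput grows monotonically with X, which is why it
# pays to pack as many MPDUs as the PPDU time limit allows:
vals = [thr_approx(X, 4, 1500, 1729.0, 1e-6) for X in (1, 16, 64, 256)]
```

The numerator of Eq.~\ref{equ:tlthrerr} is linear in $X$ while the denominator is affine in $X$, so the ratio is increasing in $X$; the sketch confirms this for a sample configuration.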
\subsubsection{Reliable channel, BER$=$0}
Let MCS$_C$ be the MCS from which 11ax/256 outperforms
11ax/64. For BER$=$0 it is possible to compute MCS$_C$
accurately. Recall that $O_M$ is the sum of the lengths of
the MAC Header, MPDU Delimiter and FCS fields in bytes,
and that $Len=4 \cdot \ceil{\frac{L_{DATA}+14}{4}}$.
Let $P_r$ be the length
of the Preamble in $\mu s$ ($64.8 \mu s$ in our case),
$R$ be the PHY rate and $T$ be the
limit on the transmission time of the PPDU ($5400 \mu s$ in
our case). Finally, let $Y_{max} = \floor{\frac{11454-O_M}{Len}}$
be the maximum possible number of MSDUs per MPDU.
For BER$=$0 it is most efficient to include $Y_{max}$ MSDUs
per MPDU and as many MPDUs as possible in the A-MPDU frame,
up to the limit $T$.
Then, assuming the PHY rate allows transmitting 64 MPDUs of $Y_{max}$ MSDUs
each, one obtains the following equation for 11ax/64:
$T = \frac{64 \cdot 8 \cdot (O_M +Y_{max} \cdot Len)}{R}+P_r$.
The largest PHY rate that enables the transmission of up to 64 MPDUs
is $R=\frac{64 \cdot 8 \cdot (O_M+Y_{max} \cdot Len)}{T-P_r}$.
For $L_{DATA}=1500$ bytes ($Len=1516$ bytes) it turns out
that $R=1021$ Mbps. Neglecting the rounding of $Y_{max}$
one obtains $R=\frac{64 \cdot 11454 \cdot 8}{T-P_r}$, which,
independently of $L_{DATA}$, equals 1099 Mbps for $T=5400 \mu s$ and
$P_r = 64.8 \mu s$. The range 1021--1099 Mbps falls between
MCS2 and MCS3, i.e. 11ax/256 outperforms 11ax/64 starting
from $MCS_C = MCS3$ for any
MSDU length $L_{DATA}$ up to 1500 bytes.
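The crossover computation above can be reproduced with a short script. The value $O_M=36$ bytes (4-byte MPDU Delimiter $+$ 28-byte MAC Header $+$ 4-byte FCS) is our assumption, since the text only names the fields; the remaining constants are taken from the text:

```python
import math

O_M = 36          # bytes; assumed field sizes (the text names only the fields)
T = 5400.0        # PPDU transmission time limit, microseconds
P_r = 64.8        # 11ax Preamble duration, microseconds
MPDU_MAX = 11454  # maximum MPDU size, bytes

def crossover_rate(L_data):
    """Largest PHY rate (Mbps) at which 64 MPDUs of Y_max MSDUs each
    still fit within the PPDU time limit T."""
    Len = 4 * math.ceil((L_data + 14) / 4)
    Y_max = (MPDU_MAX - O_M) // Len
    bits = 64 * 8 * (O_M + Y_max * Len)
    return bits / (T - P_r)

print(crossover_rate(1500))           # ~1021 Mbps for 1500-byte MSDUs
print(64 * MPDU_MAX * 8 / (T - P_r))  # ~1099 Mbps, rounding of Y_max neglected
```

Under this assumption the script reproduces both quoted rates, 1021 Mbps and 1099 Mbps.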
In Figure~\ref{fig:comptwole} the difference between
11ax/64 and 11ax/256 at MCS3 is too small to be
noticed; from MCS4, however, the difference is noticeable.
\subsubsection{Unreliable channel, BER$>$0}
For positive BERs the optimal number of MSDUs
per MPDU is not necessarily $Y_{max}$.
Therefore, we use the following approximation.
Since it is worthwhile to transmit PPDUs that are as long as possible,
let $X_{opt}$ and $Y_{opt}$
be the number of MPDUs and the number of
MSDUs per MPDU,
respectively, in the optimal A-MPDU, i.e. the A-MPDU that achieves the
largest Throughput. Then
Eqs.~\ref{equ:tl1} and~\ref{equ:tl2} give the relation between
$X_{opt}$ and $Y_{opt}$:
\begin{equation}
T=
\frac
{8 \cdot X_{opt} \cdot ( Y_{opt} \cdot Len + O_M )}
{R}
+ P_r
\label{equ:tl1}
\end{equation}
Or:
\begin{equation}
Y_{opt}=
\frac
{R \cdot (T - P_r) - 8 \cdot X_{opt} \cdot O_M}
{8 \cdot X_{opt} \cdot Len}
\label{equ:tl2}
\end{equation}
Using
Eqs.~\ref{equ:tl1} and~\ref{equ:tl2},
the search for the optimal A-MPDU
can consider only pairs of the number $X$ of MPDUs and the
number $Y$ of MSDUs per MPDU that satisfy
Eq.~\ref{equ:tl2}. Eq.~\ref{equ:tlthrerr}
can therefore be re-written as:
\begin{equation}
Thr=
\frac
{8 \cdot X \cdot
(
\frac
{R \cdot (T - P_r) - 8 \cdot X \cdot O_M}
{8 \cdot X \cdot Len}
)
\cdot L_{DATA} \cdot (1-BER)^{8 \cdot (O_M+
(
\frac
{R \cdot (T - P_r) - 8 \cdot X \cdot O_M}
{8 \cdot X \cdot Len}
)
\cdot Len)}}
{O_P - P_r + T }
\label{equ:tl3}
\end{equation}
Notice that the denominator of Eq.~\ref{equ:tl3}
is constant because we use the outcome that it is
most efficient to make the transmission time of the
PPDU as large as possible.
To find the largest Throughput we differentiate Eq.~\ref{equ:tl3}
with respect to $X$ and find that the optimal $X$ is the single
positive root
of a quadratic equation, which reveals that Eq.~\ref{equ:tl3}
is unimodal.
The optimal $X$, $X_{opt}$, is given by Eq.~\ref{equ:tl4}:
\begin{equation}
X_{opt}=
\frac
{R \cdot (T - P_r) \cdot \ln(1-BER)}
{2}
\cdot
\left(1-\sqrt{1-\frac{1}{2 \cdot O_M \cdot \ln(1-BER)}}\right)
\label{equ:tl4}
\end{equation}
If we now substitute the parameters
in Eq.~\ref{equ:tl4} by the values we use in this
paper, using BER$=10^{-7}, 10^{-6}, 10^{-5}$, we
get $X_{opt}= 0.0991 \cdot R, 0.3117 \cdot R$ and $0.9678 \cdot R$
respectively. $X_{opt}$ does not depend on the MSDU size
but is a function of the PHY rate $R$. Looking for
the PHY rates for which $X_{opt} > 64$, i.e. for which 11ax/256 outperforms
11ax/64, we get the PHY rates $645, 205$ and $66$ Mbps
respectively. This means that the corresponding $MCS_C$s are
{\it MCS2, MCS0, MCS0} respectively, as shown
in Figures~\ref{fig:comptwole5}-\ref{fig:comptwole7} respectively.
Notice that by the above it turns out that the $MCS_C$s
do not depend on the MSDUs' sizes, as is also observed in
Figures~\ref{fig:comptwole5}-\ref{fig:comptwole7}.
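As a sanity check, the quoted coefficients and crossover rates can be reproduced by evaluating the positive root of the quadratic numerically. The value $O_M=36$ bytes is our assumption (the text only names the constituent fields); the other constants come from the text:

```python
import math

T, P_r, O_M = 5400.0, 64.8, 36   # microseconds, microseconds, bytes (O_M assumed)

def x_opt_per_rate(ber):
    """Optimal number of MPDUs per unit PHY rate (R in Mbps), from the
    positive root of the quadratic obtained by differentiating the
    approximate Throughput with respect to X."""
    b = math.log(1 - ber)
    return (T - P_r) * b / 2 * (1 - math.sqrt(1 - 1 / (2 * O_M * b)))

for ber in (1e-7, 1e-6, 1e-5):
    c = x_opt_per_rate(ber)
    # coefficient of R, and the PHY rate from which X_opt exceeds 64 MPDUs
    print(ber, round(c, 4), int(64 / c))
```

With these assumptions the script yields the coefficients 0.0991, 0.3117 and 0.9678 and the crossover rates 645, 205 and 66 Mbps quoted above.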
\section{Introduction}
\indent
The latest IEEE 802.11-REVmc Standard (WiFi), created and maintained by
the IEEE LAN/MAN Standards Committee (IEEE 802.11)~\cite{IEEEBase1}
is currently the most
effective solution within the range of Wireless Local
Area Networks (LAN). Since its first release in 1997,
the standard provides the basis
for Wireless network products
using the WiFi brand, and has since been improved upon
in many ways. One of the main goals of these improvements
is to increase the Throughput achieved by users and to improve
its Quality-of-Service (QoS) capabilities.
To fulfill the promise of increasing
IEEE 802.11 performance and QoS capabilities
a new amendment IEEE 802.11ax, also
known as High Efficiency (HE) was introduced
recently~\cite{IEEEax}.
IEEE 802.11ax is the sixth generation of WLAN in the IEEE
802.11 set of WLAN types~\cite{DCC,B}
and it is the successor to IEEE 802.11ac~\cite{IEEEac}.
Currently this project is at a very early stage
of development and
it is due to be
publicly released in 2019.
IEEE 802.11ax is predicted
to have a top capacity of around 10 Gbps,
to operate in the 2.4 and/or 5 GHz bands,
and has the goal of providing 4 times
the Throughput of IEEE 802.11ac.
In this paper we compare between the Throughputs
of IEEE 802.11ax and IEEE 802.11ac in a scenario
where one user continuously transmits in a single user (SU)
operation mode to another user
without collisions, using aggregation.
In order to achieve 4 times the Throughput
of IEEE 802.11ac, around 10 Gbps, IEEE
802.11ax introduces several new features.
The first feature extends the IEEE 802.11ac OFDM
symbol duration by 4 times while preserving the IEEE 802.11ac
Guard Interval (GI). In addition, two new Modulation/Coding schemes are
introduced in IEEE 802.11ax, 1024 QAM 3/4 and 1024 QAM 5/6, denoted MCS10
and MCS11 respectively. In order to support the above
two new features the PHY Preamble in IEEE 802.11ax is longer
than that in IEEE 802.11ac, as we show in Section 2.
Next, in this paper we focus on Two-Level aggregation, first
introduced in IEEE 802.11n~\cite{IEEEBase} and later
extended in IEEE 802.11ac~\cite{IEEEac}
and IEEE 802.11ax~\cite{IEEEax}. In order to increase
the Throughput in IEEE 802.11ax the MAC acknowledgment window
is extended to 256 MAC Protocol Data Units (MPDUs), which
extends the IEEE 802.11ac aggregation capability.
In this paper we quantify the Throughput improvement
achieved in IEEE 802.11ax following the above new features.
Overall, the research on the performance
of IEEE 802.11ax in various scenarios is in
its first steps~\cite{QLYY}.
The paper is organized as follows: In Section 2 we describe in more
detail the new features of IEEE 802.11ax mentioned above
and describe the transmission scenario over which
we compare IEEE 802.11ax and IEEE 802.11ac.
We assume that the reader is familiar with
the basics of the PHY and MAC layers
of IEEE 802.11 described in previous papers, e.g.~\cite{SA}.
In Section 3 we analytically compute the Throughput
of the transmission scenario described in Section 2 and
in Section 4 we present the Throughputs of the
protocols and compare them.
In Section 5 we analytically compute the PHY rates
from which using a 256-MPDU acknowledgment window
in IEEE 802.11ax is better than using a 64-MPDU acknowledgment
window, and finally Section 6 summarizes the paper.
In the rest of the paper we denote IEEE 802.11ax and IEEE 802.11ac
by 11ax and 11ac respectively.
\section{Model}
In this paper we consider the Single User (SU) operation mode
in 11ax vs. that in 11ac.
In this operation mode every transmitted
PHY Protocol Data Unit (PPDU)
is destined to one user only.
As mentioned, there are several new features
in 11ax compared to 11ac in the PHY and
MAC layers in the SU operation mode.
Assuming an OFDM
based PHY layer,
every OFDM symbol is extended from $3.2 \mu s$ in
11ac to $12.8 \mu s$ in 11ax. Since
the same Guard Interval (GI) is added to every such symbol,
the overhead in 11ax due to the GI is lower.
Second, in 11ax there are two new
Modulation/Coding schemes (MCSs), 1024 QAM 3/4 and 1024 QAM 5/6,
MCS 10 and MCS 11 respectively, applicable for bandwidths
larger than 20 MHz. The above
two features increase the PHY rates of 11ax.
In this paper we focus on the Two-Level aggregation
scheme, first introduced
in IEEE 802.11n~\cite{IEEEBase}, in which several
MPDUs are transmitted in a
single PHY Service Data Unit (PSDU).
Such a PSDU
is denoted Aggregate MAC Protocol Data Unit (A-MPDU) frame.
In Two-Level aggregation
every MPDU contains several MAC Service Data Units (MSDU).
MPDUs are separated
by an MPDU Delimiter field of 4 bytes and each MPDU contains
MAC Header and Frame Control Sequence (FCS) fields.
MSDUs within an MPDU
are separated by a SubHeader field of 14 bytes. Every MSDU,
together with its SubHeader field,
is rounded to an integer multiple of 4 bytes. Every MPDU is also
rounded to an integer multiple of 4 bytes.
In 11ax and 11ac the size of an MPDU
is limited to 11454 bytes. In 11ac an A-MPDU
is limited to 1048575 bytes and this limit is
removed in 11ax. In both 11ac and
11ax the transmission time of the PPDU (PSDU and
its Preamble) is limited to $\sim 5.4ms$ ($5400 \mu s$)
due to the duration limit of L-SIG (one of the legacy
Preamble's fields)~\cite{IEEEBase1}.
In this paper we also assume
that all the MPDUs transmitted in an A-MPDU
frame are from the same Traffic Stream (TS).
In this case up
to 256 MPDUs are allowed in an A-MPDU frame of 11ax,
while
in 11ac up to only
64 MPDUs are allowed.
In Figure~\ref{fig:PPDUformat} we show the PPDU formats
in 11ax and 11ac in parts (A) and (B) respectively.
In the 11ax PPDU format there are HE-LTF fields,
the number of which equals the number of Spatial Streams (SSs)
in use. In this paper we assume that each such field is of the shortest
length possible, i.e.
$7.2 \mu s$~\cite{IEEEax}.
In the PPDU format of 11ac there are
the VHT-LTF fields, the number of which again equals
the number of SSs, and each is $4 \mu s$ long.
Notice that in SU mode and when using the
same number $S$ of SS, the
Preamble in 11ax is longer than that
in 11ac by $S \cdot (7.2-4)=S \cdot 3.2 \mu s$.
Notice also that the PSDU frame in 11ax contains
a Packet Extension (PE) field.
This field is mainly used in Multi-User (MU) mode
and so we assume that it is not present, i.e.
it is of length $0 \mu s$.
\begin{figure}
\vskip 5cm
\special{psfile=PPDUformat.ps voffset =-325 hoffset= -10 hscale = 80 vscale = 80}
\caption{The PPDU format in Single User (SU) mode in VHT and HE.}
\label{fig:PPDUformat}
\end{figure}
\begin{figure}
\vskip 4cm
\special{psfile=UDPtraffic.ps voffset =-425 hoffset= -10 hscale = 80 vscale = 80}
\caption{The UDP like traffic pattern.}
\label{fig:UDPtraffic}
\end{figure}
We also assume a UDP like traffic where the AP continuously
transmits Data MSDUs to a station, and the station responds with
the BAck control frame.
A transmission of a PPDU from the AP followed by a BAck
control frame from the station is denoted {\it Transmission Cycle}
and such a cycle repeats itself continuously,
as shown in Figure~\ref{fig:UDPtraffic}.
We also assume the compressed BAck
frame format and consider two cases: in one case the AP transmits
up to 64 MPDUs in every A-MPDU frame and so the BAck
frame is 32 bytes long. It contains an 8-byte bitmap, i.e. 64 bits,
each bit acknowledging one MPDU. In the second case, which is
relevant to 11ax only,
the AP can transmit up to 256 MPDUs in an A-MPDU
frame and so the BAck frame is 56 bytes long,
containing a 32-byte bitmap for acknowledging MPDUs.
The BAck frame is transmitted in legacy mode
using a 24 Mbps PHY rate. Therefore, its transmission
times are $31 \mu s$ and $39 \mu s$ in the above two
cases respectively.
Finally, we consider several channel conditions, which
are expressed by different values of the Bit Error Rate (BER),
i.e. the probability that a bit arrives erroneously
at its destination. We assume a model where these probabilities
are independent from bit to bit~\cite{L1}.
\section{Summary}
A comparison between the maximum Throughputs of
IEEE 802.11ax and IEEE 802.11ac in a single user
operation mode is performed, in a scenario where
one user transmits continuously to another user using
Two-Level aggregation. Concerning IEEE 802.11ax two
flavors are considered, using acknowledgment
windows of 256 and 64 MPDUs respectively.
IEEE 802.11ax outperforms IEEE 802.11ac by 48$\%$ and 29$\%$
in unreliable and reliable channels respectively.
Also, a detailed analysis comparing the
two flavors of IEEE 802.11ax is given.
This paper is one of the first to evaluate the performance
of IEEE 802.11ax and more are expected to come for other
scenarios such as the multi user operation mode.
\section{Throughput computation}
Let $X$ be the number
of MPDU frames in an A-MPDU frame, numbered $1,..,X$, and $Y_i$ be the number
of MSDUs in MPDU number $i$.
Also, let
$O_P = AIFS+BO+Preamble+SIFS+BAck$,
$O_M = MPDU Delimiter+MacHeader+FCS$,
$Len=4 \cdot \ceil{\frac{L_{DATA}+14}{4}}$
and
$C_i = 8 \cdot 4 \cdot \ceil{\frac{O_M+Y_i \cdot Len}{4}}$.
Then, the Throughput in both 11ax and 11ac is given by
Eq.~\ref{equ:thrtwole}~\cite{SA}:
\begin{equation}
Thr=
\frac
{8 \cdot L_{DATA} \cdot \sum_{i=1}^{X} Y_i \cdot (1-BER)^{C_i}}
{O_P + TSym \cdot \ceil{\frac{\left(\sum_{i=1}^{X} C_i\right) + 22}{TSym \cdot R }}}
\label{equ:thrtwole}
\end{equation}
\normalsize
$TSym$ is the length of an OFDM symbol and every transmission
must be of an integral number of OFDM symbols.
The additional 22 bits in the denominator
are due to the SERVICE and TAIL fields that the PHY layer adds
to every transmission~\cite{IEEEBase1}.
The function in Eq.~\ref{equ:thrtwole} is not continuous and so
it is difficult to find the optimal $X$ and $Y_i$. However, in~\cite{SA} it is
shown that if one neglects the rounding in the
denominator of Eq.~\ref{equ:thrtwole} then the optimal
solution has the property that all the MPDUs
contain almost the same number of MSDUs: the difference
between the largest and smallest number of MSDUs in the MPDUs
is at most 1. The difference is indeed 1
if the limit on the transmission time of the PPDU
does not allow transmitting
the same number of MSDUs in all the MPDUs.
If one neglects the rounding in the denominator
of Eq.~\ref{equ:thrtwole}, the resulting Throughput
for every $X$ and $Y$ is at least as large as that given
by Eq.~\ref{equ:thrtwole}; the difference depends
on the size of the denominator.
We therefore use the result in~\cite{SA} and look for the
maximum Throughput as follows: for every
$X$, $1 \le X \le 64$ (and also $1 \le X \le 256$ for 11ax),
and for every $Y$, $1 \le Y \le Y_{max}$, where $Y_{max}$ is the
maximum possible number of MSDUs in an MPDU,
we compute the resulting Throughput.
All is computed taking into account
the upper limit of $5.4 ms$ on the transmission time
of the PPDU (PSDU+Preamble). In case it is not possible
to transmit the same number of MSDUs in all
the MPDUs, some of the MPDUs have one more MSDU
than the others, up to the above upper limit
on the transmission time. We found that the smallest
denominator of any of the maximum Throughputs
is around $1000 \mu s$. Neglecting the rounding
in the denominator
reduces its size by at most $13.6 \mu s$ in 11ax
and $4 \mu s$ in 11ac. Thus, the error in the
resulting maximum Throughputs is at most on the order of
1.4$\%$.
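The search procedure above can be sketched in a few lines of Python. For simplicity every MPDU carries the same number $Y$ of MSDUs, and the overheads $O_M=36$ bytes and $O_P=160\,\mu s$ are illustrative assumptions (the text defines only their constituent fields); $TSym=13.6\,\mu s$ and $P_r=64.8\,\mu s$ are the 11ax values from the text:

```python
import math

def max_throughput(R, ber, L_data, X_limit, *, O_M=36, O_P=160.0,
                   T_sym=13.6, T_max=5400.0, P_r=64.8):
    """Brute-force search over X (MPDUs) and Y (MSDUs per MPDU) for the
    maximum of Eq. thrtwole, with every MPDU carrying the same Y.
    O_M and O_P are assumed values.  Lengths in bytes, times in
    microseconds, R in Mbps, so the result is in Mbps."""
    Len = 4 * math.ceil((L_data + 14) / 4)
    Y_max = (11454 - O_M) // Len
    best = 0.0
    for X in range(1, X_limit + 1):
        for Y in range(1, Y_max + 1):
            C = 8 * 4 * math.ceil((O_M + Y * Len) / 4)        # MPDU length, bits
            T_psdu = T_sym * math.ceil((X * C + 22) / (T_sym * R))
            if P_r + T_psdu > T_max:                          # PPDU time limit
                continue
            thr = 8 * X * Y * L_data * (1 - ber) ** C / (O_P + T_psdu)
            best = max(best, thr)
    return best

# 11ax/256 searches a superset of 11ax/64, so it can never do worse
# at the same PHY rate (here R = 1729 Mbps, the 11ax MCS4 rate):
r64  = max_throughput(1729.0, 1e-5, 1500, 64)
r256 = max_throughput(1729.0, 1e-5, 1500, 256)
print(r64, r256)
```

The sketch omits the refinement in which some MPDUs carry one extra MSDU, so it is a lower bound on the searched maximum rather than a reproduction of the reported figures.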
\section{Throughput comparison between IEEE 802.11ax and IEEE 802.11ac}
In Figures~\ref{fig:comptwole},~\ref{fig:comptwole5},~\ref{fig:comptwole6},~\ref{fig:comptwole7}
we show the maximum Throughputs of 11ax
and 11ac for four different channel
conditions: BER$=0, 10^{-7}, 10^{-6}, 10^{-5}$
respectively. Every figure contains results
for 3 different
sizes $L_{DATA}$ of MSDUs: $L_{DATA}=64, 512$ and
$1500$ octets in parts (A), (B) and (C)
respectively. There are results for 11ac,
with 64 MPDUs in every A-MPDU frame,
for 11ax with 64 MPDUs in every A-MPDU
frame and for 11ax with 256 MPDUs in every
A-MPDU frame. The last two flavors of 11ax
are denoted 11ax/64
and 11ax/256 respectively.
First notice that in every figure
the Throughput is shown as a function of
the MCSs on the x-axis. In every MCS
11ax and 11ac enable different PHY rates
and so the comparison criterion is the Throughput
of the two protocols
in every MCS in use. Also notice that MCS 10 and MCS 11 are
not possible in 11ac and so
11ac does not have results for these MCSs.
In 11ac the PHY rates for MCS0-MCS9 are
234, 468, 702, 936, 1404, 1872, 2106, 2340, 2808 and 3120 Mbps
respectively, assuming a 160MHz channel,
4 SSs and a $0.8 \mu s$ Guard Interval.
In 11ax the PHY rates for MCS0-MCS11 are
288, 576, 864, 1152, 1729, 2305, 2594, 2882, 3458, 3843, 4323
and 4803 Mbps respectively.
In all the figures the performance of 11ax is better
than that of 11ac. This is due to the larger PHY rates
that 11ax enables in every MCS compared to 11ac.
For BER$=$0 11ax/256 outperforms 11ac by 29$\%$,
and for BER$=10^{-5}$ the improvement reaches 48$\%$.
When comparing 11ax/64 and 11ax/256 one
can see that for BER$=$0 11ax/256 outperforms 11ax/64 only for
MCSs higher than MCS2. On the other hand
in the case of BER$=10^{-5}$ 11ax/256 outperforms 11ax/64
starting from MCS0. The reason for this difference
is as follows: for BER$=$0 it is worthwhile to transmit
MPDUs with as many MSDUs as possible. Thus,
not many MPDUs are transmitted when the maximum Throughput
is achieved, and the limiting parameter on the Throughput
is the limit on the PPDU transmission time. Therefore,
at small PHY rates, i.e. small MCSs, 11ax/256 has no advantage
over 11ax/64. Only when the PHY rates increase does the limit
of 64 MPDUs in 11ax/64 begin to be significant, and 11ax/256
begins to outperform 11ax/64. When BER$=10^{-5}$ it is worthwhile
to transmit short MPDUs because the failure
probability of an MPDU increases with its length.
At small PHY rates the limiting parameter is now
the number of MPDUs and not the limit on the
PPDUs' transmission times. Therefore, 11ax/256 outperforms
11ax/64 also at small-indexed MCSs.
Notice that 11ax/256 outperforms 11ac, in percentage,
more for BER$=10^{-5}$ than for BER$=$0. The main overhead
incurred in the transmissions is $O_P$. For BER$=$0
MPDUs are large, with relatively many MSDUs. On the
other hand, for BER$=10^{-5}$ MPDUs are short in order to
maintain large transmission success probabilities.
The ability to transmit more MPDUs is therefore more
significant for BER$=10^{-5}$ than for BER$=$0, and so is the
relative improvement in Throughput of 11ax/256 over 11ac.
\begin{figure}
\vskip 3cm
\special{psfile=bar10.ps voffset =-80 hoffset= -50 hscale = 40 vscale = 40}
\special{psfile=bar20.ps voffset =-80 hoffset= 110 hscale = 40 vscale = 40}
\special{psfile=bar30.ps voffset =-80 hoffset= 270 hscale = 40 vscale = 40}
\caption{Comparison between the maximum Throughputs of 802.11ax and 802.11ac in
the Two-level aggregation scheme, single user operation mode and different length MSDUs. BER=0.}
\label{fig:comptwole}
\end{figure}
\begin{figure}
\vskip 5cm
\special{psfile=bar17.ps voffset =-80 hoffset= -50 hscale = 40 vscale = 40}
\special{psfile=bar27.ps voffset =-80 hoffset= 110 hscale = 40 vscale = 40}
\special{psfile=bar37.ps voffset =-80 hoffset= 270 hscale = 40 vscale = 40}
\caption{Comparison between the maximum Throughputs of 802.11ax and 802.11ac in
the Two-level aggregation scheme, single user operation mode and different length MSDUs. BER=$10^{-7}$.}
\label{fig:comptwole5}
\end{figure}
\begin{figure}
\vskip 5cm
\special{psfile=bar16.ps voffset =-80 hoffset= -50 hscale = 40 vscale = 40}
\special{psfile=bar26.ps voffset =-80 hoffset= 110 hscale = 40 vscale = 40}
\special{psfile=bar36.ps voffset =-80 hoffset= 270 hscale = 40 vscale = 40}
\caption{Comparison between the maximum Throughputs of 802.11ax and 802.11ac in
the Two-level aggregation scheme, single user operation mode and different length MSDUs. BER=$10^{-6}$.}
\label{fig:comptwole6}
\end{figure}
\begin{figure}
\vskip 5cm
\special{psfile=bar15.ps voffset =-80 hoffset= -50 hscale = 40 vscale = 40}
\special{psfile=bar25.ps voffset =-80 hoffset= 110 hscale = 40 vscale = 40}
\special{psfile=bar35.ps voffset =-80 hoffset= 270 hscale = 40 vscale = 40}
\caption{Comparison between the maximum Throughputs of 802.11ax and 802.11ac in
the Two-level aggregation scheme, single user operation mode and different length MSDUs. BER=$10^{-5}$.}
\label{fig:comptwole7}
\end{figure}
\section{Introduction}
\label{sec:Intro}
The most-congested 25 cities in the United States are estimated to have an annual congestion economic loss of \$50 billion \cite{pishue2017us}. Intelligent transportation systems have the potential to reduce the economic losses incurred by growing transportation challenges. Accurate traffic forecasting is one of the key pillars of the intelligent transportation system. Although traffic forecasting is difficult because of the complexity of spatial and temporal dependencies, many deep learning models have achieved superior results for short-term traffic forecasts of up to 1 hour \cite{li2017diffusion, yu2017spatio, guo2019attention}. As the urban population rises, however, traffic dynamics and congestion usually last for hours. Hence, short-term forecasting alone cannot provide sufficiently rich information for proactive traffic management strategies, such as time signal control, to avoid potential traffic congestion \cite{yu2021long}. Modeling long-term traffic temporal dependency is difficult, however, since multiple temporal patterns entangle the long-term traffic dynamics.
Since intelligent transportation systems that use deep learning methods will affect everyone's life and governments' infrastructure investments, forecasting results must be trustworthy. Although most deep learning models have promising performance, they are labeled as ``black box,'' whose predictions cannot be explained.
For establishing trustworthiness, explainable artificial intelligence (XAI) methods are crucial. Moreover, traffic dynamics are highly nonlinear and can have complex spatial dependencies and temporal patterns. XAI methods can help traffic managers understand these complex traffic dynamics and bottlenecks, which are otherwise impossible to model analytically.
\begin{comment}
The self-attention layer can be considered as a fully connected layer with the weights generated from pairwise relations from inputs. However, applying self-attention to long-term time series forecasting is computationally prohibitive due to the quadratic complexity of sequence length $L$ in both memory and time. Many variants of the transformer \cite{kitaev2020reformer, zhou2021informer, liu2021pyraformer, cirstea2022triformer, https://doi.org/10.48550/arxiv.2106.13008} were developed to improve the efficiency of the self-attention mechanism, which is beneficial for long sequence prediction tasks.
Furthermore, the attention mechanism is also widely used to explain predictions from deep learning models. XAI methods for time series forecasting \cite{https://doi.org/10.48550/arxiv.1512.04150, https://doi.org/10.48550/arxiv.2004.12538, kashiparekh2019convtimenet} can be grouped into ante hoc and post hoc methods. The attention mechanism is one of the popular ante hoc methods to explain the time series model, assigning values corresponding to the importance of the different parts of the time series \cite{Karim_2018, choi2019prediction}.
\end{comment}
We develop Explainable Graph Pyramid Autoformer (X-GPA), a new spatial-temporal graph neural network (GNN) model. Our approach is based on transformers, which have achieved high performances for long-sequence time series forecasting based on a self-attention mechanism \cite{https://doi.org/10.48550/arxiv.1706.03762}.
To learn the temporal patterns for long-term traffic forecasting, we design a novel attention mechanism called pyramid autocorrelation attention. It extracts the temporal features hierarchically from time series using patch attention and uses an autocorrelation attention mechanism to combine the hierarchical features.
To model the spatial dependencies, we adopt a graph attention layer \cite{https://doi.org/10.48550/arxiv.1710.10903} to aggregate node features based on (driving) distances and the pairwise relationship between two locations. With the new temporal learning together with the spatial dependency modeling, our X-GPA model achieves a high forecasting accuracy for long-term traffic forecasting and simultaneously explains the predictions with attention scores.
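For illustration, the attention-weighted neighbor aggregation at the core of a graph attention layer can be sketched as follows. This is a generic numpy sketch of the mechanism in~\cite{https://doi.org/10.48550/arxiv.1710.10903}, not the X-GPA implementation, and all variable names are ours:

```python
import numpy as np

def graph_attention(H, A, W, a):
    """One attention head over node features H (N x D): scores are computed
    only for connected pairs (A[i, j] > 0) plus self-loops, softmax-normalized
    per node, then used to aggregate neighbor features."""
    Z = H @ W                                   # (N, D') projected node features
    N = Z.shape[0]
    e = np.full((N, N), -np.inf)                # -inf masks non-edges in the softmax
    for i in range(N):
        for j in range(N):
            if A[i, j] > 0 or i == j:           # restrict to the graph structure
                s = a @ np.concatenate([Z[i], Z[j]])
                e[i, j] = np.maximum(0.2 * s, s)  # LeakyReLU, slope 0.2
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)   # attention scores, rows sum to 1
    return alpha @ Z                            # attention-weighted aggregation

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))                 # 4 sensor nodes, 3 features each
A = np.eye(4, k=1) + np.eye(4, k=-1)            # a simple path graph
out = graph_attention(H, A, rng.standard_normal((3, 3)), rng.standard_normal(6))
```

The per-node attention scores `alpha` are exactly the quantities that an ante hoc explanation can read off: they state how much each neighboring location contributed to a node's aggregated representation.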
The contributions of this paper are as follows:
\begin{itemize}
\item We propose a novel pyramid autocorrelation attention mechanism to overcome the computational complexity associated with learning long time series in spatial-temporal GNNs.
\item Our proposed X-GPA method defines a new state-of-the-art method for long-term traffic forecasting. Specifically, the X-GPA method achieves up to 35\% improvement in accuracy over the state-of-the-art spatial-temporal GNNs developed for traffic forecasting.
\item Our X-GPA is the first ante hoc XAI method for long-term traffic forecasting, which provides attention-score-based spatial and temporal explanations for predictions.
\end{itemize}
\section{Related work \label{sec.related_works}}
Here we review the methods with respect to traffic forecasting, XAI for traffic forecasting, and long-term time series forecasting, and we highlight the key differences of the methods.
\paragraph{Traffic forecasting} Early data-driven methods for traffic forecasting include historical average (HA) \cite{ermagun2018spatiotemporal}
and ARIMA \cite{box2015time}.
In recent years, deep neural networks such as recurrent neural networks (RNNs) \cite{wen2017multi} and their variants (e.g., long short-term memory networks \cite{lai2018modeling}) have achieved superior performance.
However, these methods do not leverage spatial dependencies of the traffic network.
Recent works have shown that traffic forecasting requires both spatial and temporal dependencies. To that end, methods that combine GNNs with time series modeling have become state-of-the-art traffic forecasting methods. Diffusion-convolution recurrent neural network (DCRNN) \cite{li2017diffusion} used the diffusion convolution layer and gated recurrent layers to model the spatial dependencies and temporal correlations, respectively. Spatio-Temporal Graph Convolutional Networks (STGCN) \cite{yu2017spatio} combined graph and temporal convolution. Attention Based Spatial-Temporal Graph Convolutional Networks (ASTGCN) \cite{guo2019attention} introduced a self-attention mechanism into STGCN for performance improvement. Nevertheless, although these models have achieved state-of-the-art performance for short-term traffic forecasting (up to one hour), they are not suited for long-term traffic forecasting because they cannot learn patterns from long sequences without incurring high computational complexity. We show that our X-GPA approach with the new pyramid autocorrelation attention mechanism obtains accuracy values that are significantly better than those of existing spatial-temporal GNNs for long-term traffic forecasting.
Recently Yu et al.~\cite{yu2021long} proposed a novel graph neural network for long-term traffic prediction using manually curated historical data as the input sequence, thereby reducing the complexity of learning from long sequences. In contrast, our X-GPA method takes the last seven days as the input sequence and uses the attention mechanism to automatically capture the critical historical information, thereby reducing the manual effort required.
\paragraph{Explainable traffic forecasting}
To overcome the challenges of black box models, some researchers have started exploring explainable methods to predict the traffic dynamics. Lai et al.~\cite{li2021multistep} adopted dynamic graph convolution to simulate the dynamics of the traffic system for short-term traffic prediction, which was considered a partially explainable model. Cui et al.~\cite{cui2020graph} developed graph Markov processes to predict short-term traffic conditions under the setting of missing data. The parameters of the model after training are used to explain the traffic dynamics. However, all these spatial-temporal models are post hoc XAI methods, where the explanation extracted from the model is based on other numerical techniques. Moreover, most post hoc techniques rely on specific assumptions, which makes the interpretation of the models untrustworthy \cite{vale2022explainable}. Hence these models cannot provide an explanation for each prediction in the test dataset.
To the best of our knowledge, this paper presents the first use of the attention mechanism to develop an interpretable GNN model for traffic forecasting. The results show that our model can automatically detect important historical features for each prediction and simulate the periodic temporal pattern of the dynamical systems.
\paragraph{Long-term time series forecasting}
Many traditional time series prediction models fail to capture the long-term temporal dependencies and thus either accumulate extremely high errors or suffer from huge computation cost \cite{ermagun2018spatiotemporal, box2015time, wen2017multi}. In recent years, the attention mechanism has become the key component in neural networks for long-term time series forecasting \cite{https://doi.org/10.48550/arxiv.1706.03762, cirstea2022triformer}.
Among all the models using the attention mechanism, the transformer \cite{https://doi.org/10.48550/arxiv.1706.03762} using the self-attention mechanism has obtained state-of-the-art performance in time series data modeling with $O(L^2)$ computational complexity, where $L$ is the input sequence length of the time series.
The high computation complexity of the self-attention mechanism \cite{https://doi.org/10.48550/arxiv.1706.03762} is, however, an obstacle to applying the model for information extraction from long-sequence input data. Therefore, many works adapt the architecture of transformers for higher efficiency. Reformer \cite{kitaev2020reformer} reduced the complexity to $O(L \log L)$ using the local-sensitive hashing attention and reversible residual layers. Informer \cite{zhou2021informer} also achieved computation complexity of $O(L \log L)$ with a sparse self-attention mechanism. Pyraformer \cite{liu2021pyraformer} explored the multiresolution representation of the time series based on the pyramidal attention module; its time and space complexity scale linearly with time length $L$. Triformer \cite{cirstea2022triformer} also achieved linear complexity of $O(L)$ by developing a novel attention mechanism called patch attention.
Furthermore, some transformer variants were developed for better prediction performance by applying the attention mechanism in the frequency domain. Autoformer \cite{https://doi.org/10.48550/arxiv.2106.13008} proposed a novel autocorrelation attention mechanism by using a fast Fourier transform, which achieved significant improvement compared with non-frequency-based transformers.
Building on Autoformer and Triformer, our model adopts a new pyramid autocorrelation mechanism that combines patch attention and autocorrelation attention, resulting in computation complexity of $O(L)$ while maintaining the same level of information utilization capability. Moreover, the existing long-term forecasting methods do not leverage spatial dependencies, which are crucial for traffic forecasting. Equipped with the new pyramid autocorrelation mechanism and a graph attention layer, our proposed X-GPA achieves significantly better forecasting accuracy than the state-of-the-art Autoformer for long-term forecasting.
\section{Explainable Graph Pyramid Autoformer for
Long-Term Traffic Forecasting \label{sec:Med}}
We consider the most commonly adopted traffic forecasting setup \cite{li2017diffusion, yu2017spatio, guo2019attention}, wherein a traffic network is modeled as an undirected graph $G=(V, E, A)$, where $V$ is the set of all vertices representing (sensor) locations, $E$ is the set of all edges representing the connections between locations, and $A \in \mathbb{R}^{N \times N}$ is the adjacency matrix representing the connectivity among nodes, with $N=|V|$ the number of vertices. The sensors at the vertices collect measurements of the system state at a fixed sampling frequency, and each sensor collects $D$ system features. The collected feature sequence is $\bm{\chi}_{t-L+1}^t = \{ x^{t-L+1}, x^{t-L+2},...,x^t \}$, where $x^t \in \mathbb{R}^{N \times D}$ denotes the feature matrix of the graph observed at time $t$. The prediction task is to find a mapping function $\it f$ from the previously observed feature matrices to the future feature matrices:
$
\bm{\chi}_{t+1}^{t+Q} = \it{f} ( \bm{\chi}_{t-L+1}^t; A ).
$
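As a concrete illustration of this setup, the sketch below shows the tensor shapes involved. The dimensions and the placeholder forecaster are hypothetical, chosen only to make the shapes explicit; they are not part of the model described later.

```python
import numpy as np

# Hypothetical toy dimensions: N sensors, D features per sensor,
# input window L, forecast horizon Q (names follow the text above).
N, D, L, Q = 4, 1, 12, 3

rng = np.random.default_rng(0)
A = (rng.random((N, N)) < 0.5).astype(float)   # adjacency matrix
A = np.maximum(A, A.T)                          # undirected graph: symmetric
X_hist = rng.random((L, N, D))                  # chi_{t-L+1}^{t}

def f(X_hist, A):
    """Placeholder forecaster: repeat the last observation Q times."""
    return np.repeat(X_hist[-1:], Q, axis=0)

X_future = f(X_hist, A)                         # chi_{t+1}^{t+Q}
assert X_future.shape == (Q, N, D)
```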
\begin{comment}
In this section, we will introduce multiple attention-based modules to capture traffic data's spatial and temporal dependency. To explain different modules in a more general way, we denote $F_Q$, $F_K$, and $F_V$ as the query mapping, key mapping, and value mapping, which are all parameterized as neural networks.
\end{comment}
Instead of fitting a model to simulate the traffic evolution \cite{li2017diffusion, ermagun2018spatiotemporal}, our model extracts useful information from the input sequence based on the attention mechanism and predicts the future based on the spatial-temporal pattern of the historical input sequence.
\subsection{Pyramid autocorrelation attention for learning temporal patterns \label{subsec.ac_atten}}
The pyramid autocorrelation attention mechanism that we propose consists of two parts: patch attention to derive hierarchical representations of the time series and an autocorrelation attention mechanism to learn the periodic pattern of each representation.
In patch attention, the time series is divided into small patches, and attention over the hidden states of the timestamps in each patch is computed to derive the hidden state of a single pseudo timestamp, yielding an inverted-pyramid shape.
In the same way, patch attention is applied hierarchically to the derived pseudo time steps. Consequently, each level of the hierarchy aggregates temporal features at a different time resolution (for example, 5, 15, and 30 minutes). Autocorrelation attention is then applied at each level of the hierarchy to extract periodic information and patterns from the pseudo time series.
\subsubsection{Patch attention}
We build on the patch attention operator introduced in Triformer \cite{https://doi.org/10.48550/arxiv.2204.13767}, wherein patch attention is used to reduce the computational complexity of the transformer for long input sequences. We use patch attention both to reduce computation costs and to extract multiscale traffic time series patterns.
For a given input time series ${X}$ with time length $L$, applying patch attention of patch size ${ps}$ results in a pseudo time series ${Y}$ of time length $L/{ps}$. This is achieved as follows. The input time series of length $L$ is divided into $L/{ps}$ patches in the temporal direction. The $j$th patch is denoted as $P_j = \{ X_{(j-1) \cdot {ps}+1}, ..., X_{j \cdot {ps}}\}$, and $Y_j$ is the output of the $j$th patch.
The attention score $S_q^{p_j}$ of the $q$th value in the $j$th patch is calculated in two steps. First the keys for all the values in a patch except for the $q$th value are computed:
\[
Keys = \bm{\bigparallel}_{i \in P_j, i \neq q}^{ps} F_K(X_i; \theta_K),
\]
where $\bm{\bigparallel}$ denotes concatenation over the patch and $F_K(\cdot)$ is the key mapping with trainable parameters $\theta_K$. Then the attention score $S_q^{p_j}$ of the $q$th value is calculated as follows:
\[
S_q^{p_j} = \sigma(W_{patch} [F_Q(X_q; \theta_Q) | Keys]),
\]
where $|$ is the concatenation operation, $F_Q(\cdot)$ is the query mapping with parameters $\theta_Q$, $W_{patch}$ is a set of trainable parameters, and $\sigma$ is the nonlinear activation function.
Then a softmax function is applied to the scores, and the features are aggregated with the normalized scores as weights:
\begin{align*}
S_1^{'p_j}, ..., S_{ps}^{'p_j} &= \rm{softmax} \{ S_1^{p_j}, ..., S_{ps}^{p_j} \}, \\
Y_j &= \sum_q^{ps} S_q^{'p_j} \cdot F_V(X_q; \theta_V),
\end{align*}
where $S_q^{'p_j}$ is the normalized $q$th attention score in the $j$th patch and $F_V$ is the value mapping.
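A minimal sketch of one patch attention layer follows, assuming linear query/key/value mappings and $\tanh$ as the activation $\sigma$; these choices, and all the toy dimensions, are assumptions for illustration, and the actual Triformer implementation differs in details.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def patch_attention(X, ps, W_Q, W_K, W_V, w_patch):
    """Patch attention sketch: compress (L, D) series to (L // ps, D).
    F_Q, F_K, F_V are assumed linear; sigma is tanh."""
    L, D = X.shape
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    Y = []
    for j in range(L // ps):
        patch = slice(j * ps, (j + 1) * ps)
        scores = []
        for q in range(j * ps, (j + 1) * ps):
            # concatenate the keys of all other steps in the patch,
            # then score them jointly with the query of step q
            keys = np.concatenate(
                [K[i] for i in range(j * ps, (j + 1) * ps) if i != q])
            feat = np.concatenate([Q[q], keys])
            scores.append(np.tanh(w_patch @ feat))
        s = softmax(np.array(scores))          # normalized patch scores
        Y.append(s @ V[patch])                 # weighted value aggregation
    return np.stack(Y)

rng = np.random.default_rng(1)
L, D, ps = 8, 3, 4
X = rng.standard_normal((L, D))
new_mat = lambda: rng.standard_normal((D, D))
w_patch = rng.standard_normal(ps * D)   # one query + (ps - 1) keys, each D-dim
Y = patch_attention(X, ps, new_mat(), new_mat(), new_mat(), w_patch)
assert Y.shape == (L // ps, D)          # one pseudo timestamp per patch
```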
\subsubsection{Autocorrelation attention}
To extract the periodic temporal pattern of the input sequence, we use an autocorrelation attention mechanism \cite{https://doi.org/10.48550/arxiv.2106.13008}. For a real-valued discrete time series $\{ \bm{\chi_t} \}$ with time length $L$, we can obtain the autocorrelation $R_{\chi \chi} (\tau)$ as a function of the time shift $\tau$ efficiently by using the fast Fourier transform (FFT), based on the Wiener--Khinchin theorem \cite{wiener1930generalized}:
\[
R_{\chi \chi} (\tau) = \mathscr{F}^{-1} ( \mathscr{F} (\chi_t) \mathscr{F}^{conj} (\chi_t) ),
\]
where $\tau \in \{1,...,L\}$ is the delay length, $\mathscr{F}$ denotes the FFT, $\mathscr{F}^{-1}$ is its inverse, and $conj$ represents the conjugate operation. The normalized $ \frac{R_{\chi \chi} (\tau_i)}{\sum_k R_{\chi \chi} (\tau_k)}$ can be interpreted as the probability that the time series repeats itself with time shift $\tau_i$. Hence, we treat the future traffic prediction as a linear combination of time-shifted historical data weighted by these probabilities. Applying this concept to the attention mechanism, we calculate the autocorrelation from the queries and keys:
\[
R_{Q_t, K_t} (\tau) = \mathscr{F}^{-1} ( \mathscr{F} (Q_t) \mathscr{F}^{conj} (K_t) ),
\]
where $Q_t$ and $K_t$ are the time series obtained by applying the query mapping and key mapping, respectively, to the input time series $\chi_t$. Following the implementation of \cite{https://doi.org/10.48550/arxiv.2106.13008}, we choose the $k$ most probable delay values to aggregate the temporal features, which reduces the memory complexity:
\[
\tau_1, ..., \tau_k = \rm{arg_{\tau} Topk} \{ R_{Q_t, K_t} (\tau) \},
\]
where $\rm{arg_{\tau} Topk}(\cdot)$ returns the indices of the $k$ largest autocorrelations. Then we normalize the autocorrelations into temporal attention scores using the softmax function \cite{https://doi.org/10.48550/arxiv.1706.03762}:
\[
S_1, ..., S_k = \rm{softmax} \{ R_{Q_t, K_t} (\tau_1), ..., R_{Q_t, K_t} (\tau_k) \}.
\]
Now we can aggregate the temporal features to calculate the output feature $\chi^{out}$ based on the autocorrelation scores:
\[
\chi^{out} = \sum_i^k \rm{Roll} (V_t, \tau_i) * S_i,
\]
where $V_t = [v^1, ..., v^t]$ is the time series obtained by applying the value mapping to the input time series $\chi_t$, and $\rm{Roll} (V_t, \tau)$ denotes rolling $V_t$ along the temporal dimension with time delay $\tau$:
\[
\rm{Roll} ([v^1, ..., v^t], \tau) = [v^{\tau+1}, ..., v^t, v^1, ...,v^{\tau} ].
\]
After applying multiple patch attention layers, we derive multiple shorter sequence representations $\{Y^1, Y^2, ...\}$. We apply autocorrelation attention to each of these time series to calculate the hidden states of the predicted time steps. The architecture of the pyramid autocorrelation attention mechanism is shown in Figure \ref{patch_atten}.
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{img/Med/pyramid_attn.png}
\caption{\footnotesize Description of pyramid autocorrelation attention mechanism. The lower-dimensional representations of the input time series are first derived by using patch attention. Then we extract the periodic temporal pattern of each representation based on autocorrelation attention.}
\label{patch_atten}
\end{figure}
\subsection{Graph attention network for learning spatial dependency\label{subsec:GAL}}
To model the spatial dependency among the nodes in a graph, we adopt graph attention networks \cite{https://doi.org/10.48550/arxiv.1710.10903}.
The input is a set of node features $H^t = \{ h_1, h_2, ..., h_N \}, h_i \in \mathbb{R}^{D}$ at a specific time $t$, where $N$ is the number of nodes. The graph attention layer aggregates, for each node, the features of its neighbor nodes to produce new node features $H'^t = \{ h'_1, h'_2, ..., h'_N \}, h'_i \in \mathbb{R}^{D}$. We first compute the importance $I_{ij}$ of node $j$ to node $i$ (i.e., of $h_j$ to $h_i$) by
$
I_{ij} = \sigma(W_{sp} [ F_Q(h_i; \theta_Q) | F_K(h_j; \theta_K) | d_{ij} ]),
$
where $F_Q(\cdot): \mathbb{R}^D \shortrightarrow \mathbb{R}^D$ is the query mapping and $F_K(\cdot): \mathbb{R}^D \shortrightarrow \mathbb{R}^D$ is the key mapping, with trainable parameters $\theta_Q$ and $\theta_K$, respectively; $W_{sp} \in \mathbb{R}^{2D+1}$ is a set of trainable parameters; $d_{ij}$ is the distance between node $i$ and node $j$; $|$ is the concatenation operation; and $\sigma$ is the nonlinear activation function. We then inject the graph structure into the attention mechanism by performing masked attention. The attention coefficient $\alpha_{ij}$ from $h_j$ to $h_i$ is computed as follows:
$
\alpha_{ij} = \frac{\exp(I_{ij})} {\sum_{k \in \mathbb{N}_i} \exp(I_{ik})},
$
where $\mathbb{N}_i$ represents the neighbor nodes of node $v_i$. Then the new node feature is computed as follows:
$
h'_i = W_2 \sigma ( \sum_{j \in \mathbb{N}_i} \alpha_{ij} F_V(h_j; \theta_V)),
$
where $F_V: \mathbb{R}^D \shortrightarrow \mathbb{R}^D$ is the value mapping with parameters $\theta_V$ and $W_2 \in \mathbb{R}^{D \times D}$ is a set of trainable parameters.
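The masked graph attention step can be sketched as below. The linear mappings for $F_Q$, $F_K$, $F_V$, the $\tanh$ activation, and the fully connected toy graph with random distances are all assumptions made for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def graph_attention(H, A, Dist, W_Q, W_K, W_V, w_sp, W2):
    """Masked graph attention sketch.
    H: (N, D) node features; A: (N, N) adjacency; Dist: (N, N) distances."""
    N, D = H.shape
    Hq, Hk, Hv = H @ W_Q, H @ W_K, H @ W_V
    H_new = np.zeros_like(H)
    for i in range(N):
        nbrs = np.flatnonzero(A[i])        # masked attention: neighbors only
        # importance I_ij from [query_i | key_j | distance_ij], sigma = tanh
        I = np.array([np.tanh(w_sp @ np.concatenate([Hq[i], Hk[j], [Dist[i, j]]]))
                      for j in nbrs])
        alpha = softmax(I)                 # attention coefficients alpha_ij
        H_new[i] = np.tanh(alpha @ Hv[nbrs]) @ W2
    return H_new

rng = np.random.default_rng(3)
N, D = 5, 4
H = rng.standard_normal((N, D))
A = np.ones((N, N))                        # fully connected toy graph
Dist = rng.random((N, N))
H_new = graph_attention(H, A, Dist,
                        rng.standard_normal((D, D)), rng.standard_normal((D, D)),
                        rng.standard_normal((D, D)), rng.standard_normal(2 * D + 1),
                        rng.standard_normal((D, D)))
assert H_new.shape == (N, D)
```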
\subsection{Model architecture}
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{img/Med/Model_architecture.png}
\caption{\footnotesize Framework of graph pyramid autoformer.}
\label{model_framework}
\end{figure}
Our proposed X-GPA model architecture is shown in Figure \ref{model_framework}. The model can be divided into two parts. In the spatial-temporal feature aggregation module, we first apply a graph attention layer (Section \ref{subsec:GAL}) to aggregate the features of the neighbor nodes. The output $\chi^{gc_i}$ represents the aggregated features after $i$ graph attention layers. Then we apply temporal patch attention to aggregate the traffic features in the temporal dimension. The output $\chi^{p_j}$ represents the aggregated features after $j$ patch attention layers. After applying the spatial graph attention layers and temporal patch attention layers, we obtain a set of spatial-temporal aggregated traffic features $\{\chi^{gc_i,p_j}\}_{i=0,j=0}^{M_{gc}, M_{p}}$, where $M_{gc}$ is the number of graph attention layers and $M_{p}$ is the number of patch attention layers.
Using the autocorrelation mechanism, we derive the hidden features of the predicted time slots $\chi'$ by aggregating the historical traffic data based on the periodic pattern of the time series. All selected features are used for the prediction of each future timestep. We then utilize class activation mapping \cite{https://doi.org/10.48550/arxiv.1512.04150} to help derive the weights of all feature maps $\{w_{i,j}\}_{i=0,j=0}^{M_{gc}, M_{p}}$, where $w_{i,j}$ is the weight for the feature map $\chi^{gc_i,p_j}$. This is given by
$
\{w_{i,j}\}_{i=0,j=0}^{M_{gc}, M_{p}} = \exp[F(\{\chi'^{gc_i,p_j}\}_{i=0,j=0}^{M_{gc},M_{p}}; \bm{\theta})],
$
where $F$ is a neural network model parameterized by $\bm{\theta}$ and $\exp$ is the exponential operation to ensure that the weights are positive. The hidden feature of the predicted traffic data is calculated by
$
v^{t+j} = \sum_{i,j} w_{i,j} \chi'^{gc_i,p_j}.
$
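The class-activation-style weighting of feature maps above can be illustrated with a toy example; the linear scorer standing in for the network $F$ and the toy dimensions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
M_gc, M_p, D = 2, 3, 4
# One feature map chi'^{gc_i, p_j} per (i, j) combination of layer depths.
maps = rng.standard_normal((M_gc + 1, M_p + 1, D))

theta = rng.standard_normal(D)      # parameters of the stand-in scorer F
w = np.exp(maps @ theta)            # exp ensures positive weights w_{i,j}
v = (w[..., None] * maps).sum(axis=(0, 1))   # weighted hidden feature
assert v.shape == (D,)
assert (w > 0).all()                # every feature map gets positive weight
```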
\section{Explanation extraction}
The attention mechanism is a popular approach for explaining deep learning model predictions. In a traditional self-attention layer, the output is a linear combination of input features. To increase the explainability of the model, we develop two variants of the single-headed attention mechanism, obtained by removing the value mapping or by removing the value mapping and query mapping simultaneously, as shown in Figure \ref{explainable_attn}. In the first variant, we define the value mapping as the identity mapping $F_V(x) = x$. In the second variant, we use the same function for the query mapping and key mapping ($F_Q = F_K$) while also using the identity mapping as the value mapping. With these two attention mechanisms, even when we stack multiple graph attention layers and temporal attention layers, the final output remains a linear combination of the original input. In our model, we tested the three kinds of attention mechanism and adopted the most explainable one.
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{img/Med/single_head_attn.png}
\caption{\footnotesize Description of different attention mechanisms.}
\label{explainable_attn}
\end{figure}
Recall that in the graph attention layer we define $\alpha_{kl}$ as the importance of node $l$ to node $k$, and in the temporal attention layer we define $S_q$ as the attention score with respect to time delay $\tau_q$. We define $\mathbb{GC}$ as the operation of the graph attention layer and define $\mathbb{TC}$ as the operation of the temporal attention layer.
The input of each attention layer is defined as $\mathbb{I} \in \mathbb{R}^{N \times T_1 \times D}$, and the output is defined as $\mathbb{O} \in \mathbb{R}^{N \times T_2 \times D}$, where $N$ is the number of nodes, $T_1$ and $T_2$ are the numbers of time steps, and $D$ is the number of features. Hence we can derive the relationship between the input and the output of multiple graph attention layers as follows:
\[
\mathbb{O}_{k,:,:} = \mathbb{GC} ( \sum_{l \in \mathbb{N}_k} \alpha_{kl} \mathbb{I}_{l,:,:}).
\]
Similarly, the relationship between the input and the output of multiple temporal attention layers is
\[
\mathbb{O}_{:,p,:} = \mathbb{TC} (\sum_q S_q * \mathbb{I}_{:,p+\tau_q,:} ).
\]
Therefore, the prediction of node $k$ at time step $p$ can be formulated as
\[
\chi^{out}_{k,p,:} = \mathbb{TC} [\sum_q S_q * \mathbb{GC} ( \sum_{l \in \mathbb{N}_k} \alpha_{kl} \chi_{l,:,:})_{:,p+\tau_q,:} ].
\]
Hence, the importance of the feature of node $l$ at time step $p+\tau_q$ to the prediction of the feature of node $k$ at time step $p$ is $S_q * \alpha_{kl}$.
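A tiny numeric example of this combined importance score (the attention values are hypothetical):

```python
# If the temporal attention on delay tau_q is S_q = 0.6 and the spatial
# attention of node l for node k is alpha_kl = 0.5, then the feature of
# node l at time step p + tau_q contributes to the prediction of node k
# at time step p with importance S_q * alpha_kl = 0.3.
S_q, alpha_kl = 0.6, 0.5
importance = S_q * alpha_kl
assert abs(importance - 0.3) < 1e-12
```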
\section{Numerical experiments \label{sec:Traffic_Result}}
We used PeMS-BAY and Metr-LA, two real-world traffic datasets \cite{li2017diffusion} that have been used in a number of prior traffic forecasting studies \cite{li2017diffusion, yu2017spatio, guo2019attention}. The recorded traffic speeds were aggregated into 5-minute intervals. The PeMS-BAY dataset includes information from 325 sensors in the San Francisco Bay area collecting five months of data from Jan. 1 to May 31, 2017. The Metr-LA dataset includes information from 207 sensors on highways in Los Angeles County from March 1 to June 30, 2012.
We split the data into three parts: 15 weeks of traffic data for training (70\%), 2 weeks of traffic data for validation (10\%), and 4 weeks of traffic data for testing (20\%).
\begin{comment}
\begin{figure}[ht]
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{img/sensor_location_pems.png}
\label{sensor_loc_pems}
\caption{\footnotesize PeMS-BAY sensor locations}
\end{subfigure}
\begin{subfigure}[t]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{img/sensor_location_metr.png}
\label{sensor_loc_metr}
\caption{\footnotesize Metr-LA sensor locations}
\end{subfigure}
\caption{\footnotesize sensor locations of two datasets.}
\label{sensor_loc}
\end{figure}
\end{comment}
We used the following baselines to compare the performance of our proposed X-GPA: historical average \cite{ermagun2018spatiotemporal}, STGCN \cite{Yu_2018}, ASTGCN \cite{ye2022attention}, DCRNN \cite{li2017diffusion}, and Autoformer \cite{https://doi.org/10.48550/arxiv.2106.13008}.
\begin{comment}
\begin{table}[ht]
\caption{Setup of different numerical experiments}
\label{Cases description}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
{ } & Input time horizon & forecasting time horizon \\
\hline
Case 1 & Last 1 hour & future 1 hour \\
\hline
Case 2 & \makecell{Last 1 hour \\ daily periodic } & future 1 hour \\
\hline
Case 3 & Last 7 days & future 1 hour \\
\hline
Case 4 & Last 7 days & future 12 hours \\
\hline
\end{tabular}
\end{table}
\end{comment}
We used mean absolute error (MAE) as the criterion to quantify the forecasting error of each time step $t$: $MAE_t =\frac{\sum_j^N |x^t_j - x^{pre,t}_j|}{N}$,
where $N$ is the number of sensors, $x^t$ represents the ground truth traffic future data at time step $t$, and $x^{pre,t}$ represents the model forecast at time step $t$.
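For instance, the per-time-step MAE defined above can be computed as follows (the speed values are hypothetical):

```python
import numpy as np

def mae_t(x_true, x_pred):
    """MAE over N sensors at one time step, as defined above."""
    return np.abs(x_true - x_pred).mean()

x_true = np.array([60.0, 55.0, 65.0])   # ground-truth speeds (mph)
x_pred = np.array([58.0, 57.0, 64.0])   # model forecasts (mph)
# errors are 2, 2, and 1 mph, so the MAE is 5/3 mph
assert abs(mae_t(x_true, x_pred) - 5.0 / 3.0) < 1e-12
```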
We conducted experiments under four settings:
Case 1: the last 1 hour of traffic as the input time horizon; Case 2: the last 1 hour of traffic along with daily periodic data as the input time horizon (for example, to forecast 8--9 am traffic on a Monday, the traffic conditions of 7--8 am on that Monday are used along with the 8--9 am traffic of the preceding Monday through Sunday), which benefits model performance since traffic data has a strong daily periodic pattern; Case 3: the last 7 days as the input time horizon, which enables the models to leverage the strong weekly periodic pattern; and Case 4: the last 7 days as the input time horizon with a 12-hour forecast time horizon. Although X-GPA is designed for long-term forecasting, we first compared our approach with several state-of-the-art short-term traffic forecasting methods (Cases 1--3) and found that it is either superior or comparable to the existing methods.
We refer the reader to Appendix \ref{Sec.app} for more detailed short-term traffic forecasting results.
\subsection{Long-term traffic forecasting}
Here we show that X-GPA consistently outperforms previous state-of-the-art methods for long-term traffic forecasting.
The results for Case 4 are shown in Table \ref{long_term_speed_prediction_performance}. We used seven days of data as input and forecast the next 12 hours of traffic data. Because of the extremely high computation cost of DCRNN and ASTGCN (see the Appendix), we did not include them in the comparison.
The results show that X-GPA achieves accuracy values that are better than those of the other methods for all time horizons from 2 hours to 12 hours. HA and STGCN cannot achieve satisfactory accuracy for long-term traffic forecasts. Autoformer outperforms both HA and STGCN. However, X-GPA achieved much higher accuracy than the other three methods, with an MAE of only 2.86 mph for 12-hours-ahead prediction, a value that these three methods cannot achieve even for 2-hour forecasts. The trend is similar on the Metr-LA dataset. We observed that the overall accuracy improvement over the best baseline exceeds 35\%.
\begin{table}[h]
\caption{MAE (mph) comparison for Case 4}
\centering
\begin{tabular}{c c c c c}
\hline
{} & \multicolumn{4}{c}{PEMS-BAY dataset}\\
Model & 2 hours & 4 hours & 8 hours & 12 hours \\
\hline
HA & 5.42 $\pm$ 0.00 & 5.42 $\pm$ 0.00 & 5.42 $\pm$ 0.00 & 5.42 $\pm$ 0.00 \\
STGCN & 5.09 $\pm$ 0.19 & 5.12 $\pm$ 0.21 & 5.17 $\pm$ 0.31 & 5.20 $\pm$ 0.40 \\
Autoformer & 4.05 $\pm$ 0.11 & 4.06 $\pm$ 0.15 & 4.08 $\pm$ 0.20 & 4.12 $\pm$ 0.25 \\
{X-GPA} & \textbf{2.72 $\pm$ 0.08} & \textbf{2.77 $\pm$ 0.12} & \textbf{2.81 $\pm$ 0.16} & \textbf{2.86 $\pm$ 0.27} \\
\hline
\end{tabular}
\medskip
\begin{tabular}{c c c c c}
\hline
{} & \multicolumn{4}{c}{Metr-LA dataset}\\
Model & 2 hours & 4 hours & 8 hours & 12 hours \\
\hline
HA & 10.14 $\pm$ 0.00 & 10.14 $\pm$ 0.00 & 10.14 $\pm$ 0.00 & 10.14 $\pm$ 0.00 \\
STGCN & 9.02 $\pm$ 0.19 & 9.08 $\pm$ 0.21 & 9.11 $\pm$ 0.31 & 9.19 $\pm$ 0.40 \\
Autoformer & 7.15 $\pm$ 0.12 & 7.26 $\pm$ 0.15 & 7.48 $\pm$ 0.20 & 7.62 $\pm$ 0.27 \\
{X-GPA} & \textbf{4.80 $\pm$ 0.15} & \textbf{4.87 $\pm$ 0.19} & \textbf{4.92 $\pm$ 0.25} & \textbf{4.98 $\pm$ 0.27} \\
\hline
\end{tabular}
\label{long_term_speed_prediction_performance}
\end{table}
\begin{comment}
To better visualize the forecasting performance, we show the plots of comparison between predicted traffic of different models and ground truth future traffic conditions in Figure \ref{fig.Long_term_predicted_traffic}. All of the models achieved promising results for normal traffic conditions as shown in Figure \ref{fig.normal_traffic_plot}. For the traffic conditions of gradual change, our model had better performance than other baselines. However, the results showed that our model overestimated the traffic congestion in Figure \ref{fig.gradual_traffic_plot}. In Figure \ref{fig.abrupt_traffic_plot}, all of models could not achieve satisfactory results for the abrupt change of traffic conditions. Nevertheless, our model showed higher performance in capturing the traffic abrupt change than other baselines.
\begin{figure}[H]
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=7cm]{img/long_term_pred_plots/Wed_10_2_prediction.png}
\caption{Prediction of normal traffic conditions.}
\label{fig.normal_traffic_plot}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=7cm]{img/long_term_pred_plots/Wed_0_40_prediction.png}
\caption{Prediction of gradual change of traffic conditions.}
\label{fig.gradual_traffic_plot}
\end{subfigure}
\medskip
\begin{subfigure}[t]{0.45\linewidth}
\centering
\includegraphics[width=7cm]{img/long_term_pred_plots/Tue_8_2_prediction.png}
\caption{Prediction of abrupt change of traffic conditions.}
\label{fig.abrupt_traffic_plot}
\end{subfigure}
\caption{\footnotesize 12 hours ahead prediction performance comparison.}
\label{fig.Long_term_predicted_traffic}
\end{figure}
\end{comment}
\subsection{Explaining forecasts}
\subsubsection{Short-term forecasting}
\label{short_term_pred_interpret}
In this section we visualize the traffic segments that our model focuses on. To better show the model's ability to focus on important traffic features using the attention mechanism, we present additional results on the model attention scores for Case 3. We carried out this more detailed analysis only for the Case 3 model because it is given the most information (i.e., seven days of traffic data), which increases the difficulty of picking out useful information. In Figure \ref{short_term_spatial_atten} we plot the spatial attention distribution of the top three most important feature maps for future one-hour traffic prediction. We observe that the model focuses on the historical data of the target node for non-peak-hour prediction; on the other hand, it pays attention to the neighbor nodes for peak-hour prediction. We also observe that the nodes near the interconnections of different highways have a significant effect on the target node prediction, which is reasonable given the dynamics of traffic movement. Moreover, we observe that spatial attention is distributed not just around the target node but along a specific path. One possible reason is that the model captures the traffic movement pattern; this still requires more detailed validation.
\begin{figure}[ht]
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=7cm]{img/spatial_attns/spatial_new_attn_1w_to_12_epoch4_Tue_9.png}
\caption{9:00am on Tuesday}
\label{short_term_spatial_attn_peak}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=7cm]{img/spatial_attns/spatial_new_attn_1w_to_12_epoch4_Sun_12.png}
\caption{12:00pm on Sunday}
\label{short_term_spatial_attn_non_peak}
\end{subfigure}
\caption{\footnotesize Spatial distribution of one-hour-ahead prediction for peak hours (top) and non-peak hours (bottom). Most attention accumulates on the target node itself for short-term prediction (within one hour).}
\label{short_term_spatial_atten}
\end{figure}
For each feature map, we show the temporal attention distribution of the most influential node, defined as the node with the highest spatial attention (the deepest color in Figure \ref{short_term_spatial_atten}). We show the temporal attention over the historical traffic data of the most influential nodes in Figure \ref{short_term_temporal_atten}. Based on the second and third feature maps in Figure \ref{non_peak_hour_attn_12}, most attention accumulates on the same time from the last week and on the most recent traffic data of the target node itself. For peak-hour prediction, most attention accumulates on the recent traffic conditions in the target node's history data; the most recent traffic and the traffic conditions of the same time from the last week are also important for prediction.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=7cm]{img/temporal_attn/temporal_new_attn_1w_to_12_epoch4_Sun_12.png}
\caption{One-hour-ahead prediction for 12:00--1:00 pm on Sunday.}
\label{non_peak_hour_attn_12}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=7cm]{img/temporal_attn/temporal_new_attn_1w_to_12_epoch4_Tue_9.png}
\caption{One-hour-ahead prediction for 9:00--10:00 am on Tuesday.}
\label{peak_hour_attn_12}
\end{subfigure}
\caption{\footnotesize Comparison of attention distribution with the Pearson correlation for non-peak hours (top) and peak hours (bottom). Models pay attention to the most recent traffic and traffic data of the same time from the last week. Attention on the history will be higher for peak-hour traffic prediction.}
\label{short_term_temporal_atten}
\end{figure}
\subsubsection{Long-term forecasts}
We observe that for long-term traffic forecasts, the model pays more attention to the neighbor nodes rather than the target nodes themselves. We show the spatial attention for the long-term traffic prediction model (Case 4) in Figure \ref{long_term_spatial_atten}. For peak-hour traffic prediction, we observe that the history of the target node's traffic data is still important.
However, to predict non-peak-hour traffic, our model does not use the information of the target node but relies more on neighbor nodes. This indicates that spatial correlation is important for long-term prediction.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=6cm]{img/long_term_spatial_attns/spatial_new_attn_144_epoch4_Sun_12.png}
\caption{Prediction for 12:00 pm--00:00 am on Sunday}
\label{long_term_spatial_non_peak}
\end{subfigure}
\hspace{0.5cm}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=6cm]{img/long_term_spatial_attns/spatial_new_attn_144_epoch4_Mon_9.png}
\caption{Prediction for 9:00 am -- 9:00 pm on Monday}
\label{long_term_spatial_peak}
\end{subfigure}
\caption{\footnotesize Spatial attention distribution of the long-term prediction model for short-term traffic prediction (top) and long-term traffic prediction (bottom). The attention will shift from the target node to neighbor nodes for further future traffic prediction.}
\label{long_term_spatial_atten}
\end{figure}
The temporal attention of the most influential node for each feature map is shown in Figure \ref{Long_term_model_temporal_prediction}. We observe that the attention accumulates not only at the same time of the last week but also at the same time on other days. This shows that our model is capable of capturing weekly and daily periodic patterns. Moreover, there is no attention on the recent traffic data, which indicates that for long-term traffic prediction the most recent traffic is not as important as it is for short-term traffic prediction.
In Figure \ref{peak_hour_attn_144} we observe that to predict the peak hour of the weekday traffic, our model does not pay attention to the traffic conditions on the weekend, which shows that our model can recognize the difference between weekday traffic and weekend traffic.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=7cm]{img/long_term_temporal_attn/temporal_new_attn_144_epoch4_Sun_12.png}
\caption{Prediction for 12:00 pm--00:00 am on Sunday}
\label{non_peak_hour_attn_144}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=7cm]{img/long_term_temporal_attn/temporal_new_attn_144_epoch4_Mon_9.png}
\caption{Prediction for 9:00 am--9:00 pm on Monday}
\label{peak_hour_attn_144}
\end{subfigure}
\caption{\footnotesize Temporal attention distribution of 5-minutes-ahead prediction for non-peak hour (top) and peak hour (bottom).}
\label{Long_term_model_temporal_prediction}
\end{figure}
Comparing the attention distributions of the short-term and long-term prediction models, we conclude that spatial and temporal correlations are more important for long-term prediction than for short-term prediction. In short-term prediction, spatial correlation is also utilized. Based on the temporal attention distributions, however, we observe that the model did not actually extract much information from the neighbor nodes (only the information from the same time last week). This implies that short-term forecasting is a relatively easy task, so the model does not need much information to obtain good results.
\section{Conclusion}
We studied the long-term spatial-temporal traffic forecasting problem, which is essential for intelligent traffic systems. We developed the explainable graph pyramid autoformer (X-GPA), which combines novel pyramid autocorrelation temporal attention layers with graph spatial attention layers to learn temporal patterns from long traffic time series and spatial dependencies, respectively. Our X-GPA model achieved up to 35\% improvement in accuracy over the state-of-the-art spatial-temporal graph neural networks developed for traffic forecasting. We used the attention-based scores from the X-GPA model to derive both spatial and temporal explanations for its predictions, showing that the model can spatially identify the congestion locations of the network and capture the long-term periodic temporal pattern of the traffic system.
Our future work will include 1) X-GPA for dynamical systems such as weather forecasting; 2) uncertainty quantification capabilities; 3) scaling for large data sets with distributed training; and 4) transfer learning.
\clearpage
\newpage
\section{Appendix}
\label{Sec.app}
We tested the model performance on different short-term traffic forecasting tasks (Cases 1--3). Different GNN models are built on different inductive biases and thus need different input data. Models that forecast future traffic by simulating the evolution of the traffic system, such as HA, DCRNN, and Autoformer, use the setting of Case 1, whereas models that forecast future traffic by aggregating multiple historical traffic observations, such as STGCN and ASTGCN, use the setting of Case 2. To enable a fair comparison, we tested our proposed X-GPA under the same setting as each baseline model. All experiments were conducted in PyTorch \cite{https://doi.org/10.48550/arxiv.1912.01703} on a single NVIDIA TITAN RTX 16 GB GPU.
We report the MAE of 15-min, 30-min, and 60-min ahead forecasts for Cases 1--3 in Table \ref{fig.Case_1&2_pred_performance} and Table \ref{fig.Case_3_pred_performance}.
For Case 1, we observe that our proposed X-GPA method obtains better forecasting accuracy than HA and Autoformer. DCRNN achieves slightly better accuracy than X-GPA, but the observed difference is within 0.5 mph. For Case 2, X-GPA achieves accuracy values that are either better than or comparable to those of STGCN and ASTGCN. For Case 3, our X-GPA outperforms HA, STGCN, and Autoformer. DCRNN and ASTGCN were not included in the comparison because of their high computational cost. We estimated the total training time by multiplying the training time per epoch by the total number of training epochs: DCRNN needs 100 epochs at 126 hours of training time per epoch, and ASTGCN requires 20 epochs at 75.6 hours of training time per epoch. DCRNN was originally designed for short-sequence input and thus is unsuitable for analyzing long-sequence input, while ASTGCN uses the self-attention mechanism with $O(n^2)$ complexity and is computationally expensive---the key reasons these two models need such significant training time. Comparing the results of Cases 2 and 3, we find that selecting only important historical data as input improves model performance.
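The training-time estimates above are simple products of per-epoch cost and epoch count; for concreteness (a sketch using the per-epoch figures quoted in the text for the PEMS-bay dataset):

```python
# Estimated total training time = (training time per epoch) x (number of epochs).
def total_hours(epochs, hours_per_epoch):
    return epochs * hours_per_epoch

dcrnn_hours = total_hours(100, 126)    # DCRNN: 100 epochs x 126 h/epoch = 12600 h
astgcn_hours = total_hours(20, 75.6)   # ASTGCN: 20 epochs x 75.6 h/epoch = 1512 h
print(dcrnn_hours, astgcn_hours)
```

These totals match the Case 3 training times reported for PEMS-bay in the table below.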
\begin{table}[ht]
\begin{center}
\begin{tabular}{c c c c c c c}
\multicolumn{7}{c}{MAE (in mph) comparison of Case 1 } \\
\hline
{} & \multicolumn{3}{c}{PEMS-bay dataset} & \multicolumn{3}{c}{Metr-LA dataset} \\
model & 15 min & 30 min & 60 min & 15 min & 30 min & 60 min\\
\hline
HA & 2.42 $\pm$ 0.00 & 2.87 $\pm$ 0.00 & 3.65 $\pm$ 0.00 & 3.47 $\pm$ 0.00 & 3.78 $\pm$ 0.00 & 4.16 $\pm$ 0.00 \\
DCRNN & \textbf{1.48 $\pm$ 0.11} & \textbf{1.81 $\pm$ 0.15} & \textbf{2.24 $\pm$ 0.21} & \textbf{2.15 $\pm$ 0.09} & \textbf{2.83 $\pm$ 0.12} & \textbf{3.68 $\pm$ 0.19}\\
Autoformer & 2.05 $\pm$ 0.13 & 2.36 $\pm$ 0.18 & 2.93 $\pm$ 0.27 & 2.55 $\pm$ 0.17 & 3.24 $\pm$ 0.19 & 3.84 $\pm$ 0.25\\
X-GPA & 1.49 $\pm$ 0.08 & 1.87 $\pm$ 0.11 & 2.23 $\pm$ 0.16 & 2.29 $\pm$ 0.11 & 2.91 $\pm$ 0.15 & 3.76 $\pm$ 0.20 \\
\hline
\end{tabular}
\medskip
\begin{tabular}{c c c c c c c}
\multicolumn{7}{c}{MAE (in mph) comparison of Case 2} \\
\hline
{} & \multicolumn{3}{c}{PEMS-bay dataset} & \multicolumn{3}{c}{Metr-LA dataset} \\
model & 15 min & 30 min & 60 min & 15 min & 30 min & 60 min\\
\hline
STGCN & 1.36 $\pm$ 0.11 & 1.81 $\pm$ 0.14 & 2.49 $\pm$ 0.20 & 2.33 $\pm$ 0.14 & 2.94 $\pm$ 0.19 & 3.79 $\pm$ 0.26 \\
ASTGCN & 1.35 $\pm$ 0.08 & 1.70 $\pm$ 0.11 & 2.06 $\pm$ 0.14 & 2.14 $\pm$ 0.13 & 2.79 $\pm$ 0.16 & 3.58 $\pm$ 0.22\\
X-GPA & \textbf{1.32 $\pm$ 0.06} & \textbf{1.62 $\pm$ 0.10} & \textbf{1.95 $\pm$ 0.15} & \textbf{2.12 $\pm$ 0.10} & \textbf{2.72 $\pm$ 0.15} & \textbf{3.50 $\pm$ 0.21} \\
\hline
\end{tabular}
\end{center}
\caption{MAE (in mph) comparison for Case 1 and Case 2. }
\label{fig.Case_1&2_pred_performance}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{c c c c c c c}
\multicolumn{7}{c}{Performance comparison of Case 3} \\
\hline
{} & \multicolumn{3}{c}{PEMS-bay dataset} & \multicolumn{3}{c}{Metr-LA dataset} \\
model & 15 min & 30 min & 60 min & 15 min & 30 min & 60 min\\
\hline
HA & 5.40 $\pm$ 0.00 & 5.40 $\pm$ 0.00 & 5.40 $\pm$ 0.00 & 15.89 $\pm$ 0.00 & 15.89 $\pm$ 0.00 & 15.89 $\pm$ 0.00\\
STGCN & 1.76 $\pm$ 0.11 & 2.31 $\pm$ 0.17 & 2.89 $\pm$ 0.23 & 2.63 $\pm$ 0.16 & 3.44 $\pm$ 0.24 & 4.19 $\pm$ 0.31 \\
Autoformer & 1.87 $\pm$ 0.10 & 2.18 $\pm$ 0.13 & 2.74 $\pm$ 0.18 & 2.81 $\pm$ 0.14 & 3.67 $\pm$ 0.20 & 4.41 $\pm$ 0.29\\
X-GPA & \textbf{1.43 $\pm$ 0.08} & \textbf{1.79 $\pm$ 0.12} & \textbf{2.27 $\pm$ 0.19} & \textbf{2.49 $\pm$ 0.16} & \textbf{3.60 $\pm$ 0.13} & \textbf{4.07 $\pm$ 0.20}\\
\hline
\end{tabular}
\medskip
\begin{tabular}{c c c c c c c}
\multicolumn{7}{c}{Training time comparison for Case 3}\\
\hline
Model & HA & STGCN & DCRNN & ASTGCN & Autoformer & X-GPA \\
\hline
PEMS-bay dataset & 0 hours & 7.5 hours & 12600 hours & 1512 hours & 2.5 hours & 4.5 hours \\
Metr-LA dataset & 0 hours & 4.9 hours & 7052 hours & 896 hours & 1.5 hours & 3.5 hours \\
\hline
\end{tabular}
\end{center}
\caption{Model performance and efficiency comparison for Case 3. MAE is shown in mph.}
\label{fig.Case_3_pred_performance}
\end{table}
For Case 3, X-GPA and Autoformer have significantly lower training times than the other models. Although Autoformer, which does not need to model the spatial dependency, trains faster than X-GPA, its accuracy is lower than that of X-GPA. Our X-GPA model achieves results that are superior or comparable to state-of-the-art short-term traffic forecasting methods.
\bibliographystyle{unsrt}
\section{Introduction}
Twitter, and social media in general, is a vast space for users to express opinions and sentiments. The proliferation of social media networks accelerates the generation of public health data at an unprecedented rate, allowing big data computing approaches to achieve innovative and impactful research in the health sciences. Since \citet{paul2011you} proposed the use of Twitter data for public health informatics, several studies \citep{zou2016infectious,abbar2015you,nguyen2016building,nguyen2017twitter, sarma2019estimating} have discovered strong correlations between government statistics for specific diseases and tweets on specific topics. These studies suggest that a large number of relevant tweets can provide insight into the general health of a population. \\
\hspace*{\fill}\\
In a recent literature review, \citet{jordan2019using} showed that the majority of Twitter studies on public health informatics in the last decade have relied on keywords for classification or clustering. However, since tweets contain unpredictable noise such as slang, emoji, and misspellings, the tweets retrieved by keyword-matching models often exclude relevant or include irrelevant messages. The resulting uncertainty in the semantics of retrieved tweets leads to unreliable estimates of public health metrics \citep{jordan2019using, culotta2010towards}. Meanwhile, other studies use latent Dirichlet allocation to learn latent topics \citep{paul2014discovering, prier2011identifying}. These models depend on reliable word co-occurrence statistics and typically suffer from data sparsity when applied to short documents such as tweets \citep{sridhar2015unsupervised}. \\
\hspace*{\fill}\\
We propose using sentence embeddings learned by supervised deep learning methods to overcome this shortcoming. Hashtags, user-annotated labels that cluster tweets with shared topics regardless of their diverse textual patterns, provide natural supervision for training distributed representations of tweets. In this work, we adapt \textit{TagSpace} \citep{weston2014tagspace}, a convolutional neural network (CNN) that learns word and tweet embeddings in the same vector space using hashtags as supervised signals.
In order to demonstrate the feasibility of this method, we address the specific task of estimating state-level obesity from tweets characterizing actual dietary habits.
We use both embeddings to cluster tweets and to extract relevant textual features corresponding to population-level dietary habits from over two hundred million tweets. A regression on these textual features correlates strongly with state-level obesity prevalence surveyed by the Centers for Disease Control and Prevention \citep{cdc}.
Since our method is not specifically tailored to obesity research, our approach is applicable to a wide range of public health studies that involve Twitter data.
\section{Data acquisition and pre-processing}
We retrieve 272.8 million tweet records posted in 2014 using the Twitter API \footnote{https://developer.twitter.com/}, and we assign one state among the contiguous United States (48 states plus the District of Columbia) to each of the 261 million records based on user geolocation metadata. Non-English posts are removed from our dataset.
We use regex to perform the following steps:
\begin{enumerate}
\item Convert all alphabetical characters to lower case
\item Remove all URLs, user mentions, and special characters except the hashtag symbol \texttt{\#}
\item Remove numerical characters except those in hashtags
\item Add white space between consecutive emojis, and limit repeating mentions of words or emojis in a post
\end{enumerate}
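A minimal Python sketch of these four steps follows. The exact regular expressions used by the authors are not given, so the patterns below are illustrative assumptions (emoji spacing is omitted):

```python
import re

def preprocess(tweet: str) -> str:
    t = tweet.lower()                                  # 1. lower-case
    t = re.sub(r"https?://\S+", "", t)                 # 2. strip URLs
    t = re.sub(r"@\w+", "", t)                         #    strip user mentions
    t = re.sub(r"[^\w\s#]", "", t)                     #    strip special chars, keep '#'
    t = re.sub(r"\b\d+\b", "", t)                      # 3. strip free-standing numbers
    t = re.sub(r"(\b\w+\b)(\s+\1){2,}", r"\1 \1", t)   # 4. cap repeated words at two
    return " ".join(t.split())

print(preprocess("Check THIS out @bob https://t.co/x #Food2014 yum yum yum yum"))
# -> "check this out #food2014 yum yum"
```

Note that digits embedded in hashtags (e.g. \#food2014) survive step 3 because they are not free-standing tokens.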
After initial preprocessing, we find 9.5 million unique tokens (including hashtags and emojis), heavily tailed at scarce mentions: 6.3 million are mentioned only once, and 8.9 million fewer than ten times. A similar distribution is observed for hashtags. Including scarcely mentioned words not only results in a memory-demanding lookup table but also puts our model at risk of overfitting, as the model may memorize rare textual patterns found only in the training set. Hence we select the 500k most-mentioned words (excluding stop words) and 50k hashtags for our model, and all out-of-vocabulary words are tokenized as <UNKNOWN>. The data pre-processing pipeline is visualized in Figure \ref{pipeline}.
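The frequency-based vocabulary truncation can be sketched as follows (simplified: stop-word exclusion and the separate hashtag vocabulary are omitted; function names are ours):

```python
from collections import Counter

def build_vocab(tokenized_tweets, max_words):
    """Keep only the max_words most frequently mentioned tokens."""
    counts = Counter(tok for tweet in tokenized_tweets for tok in tweet)
    return {w for w, _ in counts.most_common(max_words)}

def map_oov(tweet, vocab):
    """Replace out-of-vocabulary tokens with the <UNKNOWN> token."""
    return [tok if tok in vocab else "<UNKNOWN>" for tok in tweet]

tweets = [["pizza", "is", "great"], ["pizza", "pizza", "is"]]
vocab = build_vocab(tweets, max_words=2)          # keeps {"pizza", "is"}
print(map_oov(["pizza", "quixotic"], vocab))      # ['pizza', '<UNKNOWN>']
```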
\begin{figure}
\centering
\captionsetup{type=figure}
\includegraphics[scale=0.6]{pipeline.png}
\caption{Twitter data pre-processing and keyword acquisition pipeline.}
\label{pipeline}
\end{figure}
\section{Methods}
We address the specific task of using the twitter data within a state to estimate the obesity prevalence in that state. Inspired by recent Twitter-derived public health studies \citep{zou2016infectious, nguyen2017twitter, nguyen2016building, abbar2015you}, we first compile a set of keywords related to dietary habits to form a feature space in a regression scenario. We then adapt two deep learning models to retrieve food-related tweets by the scoring between embeddings of tweets and keywords. Embeddings of food-related tweets within a state will be aggregated for extracting features later used in regression.
\subsection{Constructing feature space from keywords}
Following prior works by \citet{nguyen2017twitter}, we generate the keyword list from two sources: 1) the U.S. Department of Agriculture’s National Nutrient Database \citep{cdcfood} - from over 7000 food records found in the USDA database, we extract only the first-level information (e.g. "strawberry yogurt" and "nonfat yogurt" are both recorded as "yogurt"), which gives us 371 terms; and 2) popular food-related mentions in the press \footnote{App Spring Inc. List Challenges: Food, https://www.listchallenges.com/lists/food} - we add food (e.g. "sashimi" and "kimchi"), food-related slangs (e.g. "blt"), and chain restaurants (e.g. KFC and Starbucks) that are not included by the USDA database but frequently appear in user-generated contexts, which results in 131 additional terms. All of the 502 keywords in the list are reduced to their singular forms by NLTK lemmatizer \citep{nltk}, and words in the tweets are also lemmatized for keyword matching. We show the keyword acquisition process in Figure \ref{pipeline}.
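The first-level reduction of USDA records can be sketched as taking the head term of each entry. This is illustrative only: the actual pipeline uses the NLTK lemmatizer for singularization, whereas the crude suffix rule below is a stand-in:

```python
def first_level(term: str) -> str:
    # "strawberry yogurt" and "nonfat yogurt" both reduce to "yogurt"
    return term.split()[-1]

def singular(word: str) -> str:
    # crude stand-in for the NLTK lemmatizer used in the paper
    return word[:-1] if word.endswith("s") and not word.endswith("ss") else word

records = ["strawberry yogurt", "nonfat yogurt", "green beans"]
keywords = sorted({singular(first_level(r)) for r in records})
print(keywords)   # ['bean', 'yogurt']
```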
\subsection{Retrieving relevant tweets via deep learning}
\begin{figure}
\centering
\captionsetup{type=figure}
\includegraphics[scale=0.42]{forward.png}
\caption{The model architecture of \textit{TagSpace}\citep{weston2014tagspace}. Given an input tweet $w$ and its hashtag $t$, the forward pass outputs the scoring $f(w,t)$, where $N$ denotes vocabulary size, $d$ denotes embedding dimension, $l$ denotes number of words (i.e. max sequence length) of input tweet, $K$ denotes convolution window size, $H$ denotes hidden dimension.}
\label{forward}
\end{figure}
\paragraph{Keyword matching} The simple baseline regards tweets explicitly mentioning at least one of the keywords as relevant. We find that the relative frequency of food-related tweets ranges from 3.0\% to 6.2\% across all states with a mean value of 4.7\%, which is close to the statistics reported by \citet{nguyen2017twitter} and \citet{ghosh2013we}. This implies that our pre-processing pipeline, which sets a higher bar for word frequency than \citet{nguyen2017twitter} and \citet{ghosh2013we}, does not drastically change the distribution of food-related tweets.
\paragraph{TagSpace} Simple keyword matching results in questionable semantic relevancy of retrieved tweets (e.g. "that problem is a hard \textit{nut} to crack", "taylor swift is the \textit{cream} of the crop"). In contrast, hashtags provide the user's labeling of a tweet's themes. We adapt \textit{TagSpace}, a CNN model that learns the distributed representations of both words and tweets using hashtags as a supervised signal \citep{weston2014tagspace}. Given a tweet, the model convolves its unigrams' embeddings as input, and ranks the scoring (e.g. inner product) between the learnt embeddings of the tweet and candidate hashtags. The ranking is optimized by a pairwise hinge objective function as given by Algorithm \ref{warp}, which is optimized for retrieving top-ranked hashtags according to \citet{weston2014tagspace}. We show the model architecture in Figure \ref{forward}. In our study, we consider a tweet food-related if its top-ranked hashtags contain any of the keywords. Among the 50k hashtag candidates, 373 are included in our food keyword list. For this training task, we require the input tweets to mention hashtags in the candidate pool for prediction, which gives us 14.4 million tweets for training, 1.8 million held out for hyperparameter tuning, and 1.8 million for testing.
To prevent data leakage, hashtags in the input texts are substituted with the corresponding plain words.
\begin{algorithm}[H]
\SetAlgoLined
\KwData{$e_{conv}(w) \in \mathbb{R}^{N\times d}$: sentence embedding of twitter $w$; $e(t)\in \mathbb{R}^d$: word embedding of hashtag $t$; $T$: set of all hashtags in corpus; $T^+$: set of hashtags found in $w$; $m \in \mathbb{R}$: margin; $M$: max sample iterations}
\KwResult{Optimized CNN weights, $e_{conv}(w)$, and embedding of words in $w$}
Sample $t^+$ from $T^+$\\
Compute $f(w, t^+) = e_{conv}(w) \cdot e(t^+)$\\
Sample $t^-$ from $T\backslash T^+$\\
Compute $f(w, t^-) = e_{conv}(w) \cdot e(t^-)$\\
Initialize $i \leftarrow 0$\\
\While{$f(w, t^-) + m \leq f(w, t^+)$ and $i \leq M$} {
Resample $t^-$ from $T\backslash T^+$\;
Compute $f(w, t^-) = e_{conv}(w) \cdot e(t^-)$ \;
$i \leftarrow i + 1$
}
Compute $Loss= max(0, m-f(w, t^+)+f(w, t^-))$\\
Backward propagation on $Loss$
\caption{WARP ranking loss of \textit{TagSpace}}
\label{warp}
\end{algorithm}
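The core of Algorithm \ref{warp} can be sketched in plain NumPy. The function below is an illustrative re-implementation of the negative-resampling step, not the authors' code; the names, the margin value, and the single-negative-pool setup are assumptions:

```python
import numpy as np

def warp_loss(e_w, e_pos, neg_embs, margin=1.0, max_iters=10, rng=None):
    """WARP-style pairwise hinge loss: resample negative hashtags until one
    violates the margin, or the iteration cap is reached."""
    rng = rng or np.random.default_rng(0)
    f_pos = float(e_w @ e_pos)                         # score of the sampled true hashtag
    f_neg = -np.inf
    for _ in range(max_iters):
        e_neg = neg_embs[rng.integers(len(neg_embs))]  # sample a negative hashtag
        f_neg = float(e_w @ e_neg)
        if f_neg + margin > f_pos:                     # margin violated -> stop resampling
            break
    return max(0.0, margin - f_pos + f_neg)
```

In training, the returned hinge value is backpropagated through both the tweet embedding $e_{conv}(w)$ and the hashtag embeddings.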
\paragraph{Binary TagSpace} While \citet{weston2014tagspace} optimizes the prediction of $p(hashtag\ |\ tweet)$ over all hashtag candidates, we are only interested in the tweets' semantic relevancy with food (i.e. $p(hashtags\ about\ food\ |\ tweet)$). Based on whether a tweet contains hashtags found in our food keyword list, we label all tweets with hashtags as either food-related or not. The word and tweet embeddings learnt from the CNN discussed in the previous method are optimized for a binary classification objective function instead. We use the same training and testing sets as those of the previous method.
\subsection{Feature engineering and obesity estimation by elastic net}
For a given state, features are calculated from the scoring (e.g. inner product or cosine similarity) between the keyword embeddings and the average sentence-level embedding of food-related tweets within that state. The scoring function is the same for both CNN models. Both CNNs' objective functions internally train tweet embeddings in the word vector space \citep{weston2014tagspace, wu2018starspace}, and hence the scoring provides information about the semantic relevance between the tweets and the keywords. By aggregating food-related tweets (i.e. tweets with sentence embeddings that have a high score with keyword vectors) within a state, we represent the dietary characteristics of that state in the word vector space. For obesity prevalence estimation, we apply the elastic net, a regression method that combines L1 and L2 regularization and has been shown to surpass ridge or lasso regressions in text regression tasks \citep{zou2016infectious}. In particular, given the regression task
$$y_s = \textbf{w}^T\textbf{x}_s + \beta + \varepsilon$$
where $y_s \in \mathbb{R}$ denotes the obesity prevalence of a given state $s$, $\textbf{x}_s \in \mathbb{R}^{373}$ the vector of extracted textual features of state $s$, $\beta \in \mathbb{R}$ the intercept, and $\varepsilon \in \mathbb{R}$ independent, zero-centered noise, the weight vector $\textbf{w}$ is learnt by optimizing the objective function
$$\argmin_{\textbf{w}, \beta}\left(\sum_{s\in S}( \textbf{w}^T\textbf{x}_s + \beta - y_s)^2 + \lambda_1\sum_{k=1}^{373} |w_k|+ \lambda_2\sum_{k=1}^{373}w_k^2 \right) $$
where $\lambda_1$ and $\lambda_2$ are the regularization coefficients; in practice they are chosen by random search in the range $[10^{-5}, 10^{2}]$.
We randomly hold out four states for validation and eight states for testing, and apply cross-validations for training.
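The objective above can be minimized by coordinate descent with soft-thresholding. The sketch below is illustrative only (in practice a library implementation such as scikit-learn's \texttt{ElasticNet} would be used, and the paper's $\lambda_1, \lambda_2$ come from random search):

```python
import numpy as np

def soft_threshold(a, t):
    return np.sign(a) * max(abs(a) - t, 0.0)

def elastic_net(X, y, lam1, lam2, n_iter=300):
    """Coordinate descent for  sum_s (w.x_s + b - y_s)^2 + lam1*||w||_1 + lam2*||w||_2^2."""
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(n_iter):
        b = float(np.mean(y - X @ w))             # intercept update
        for j in range(p):
            r = y - b - X @ w + X[:, j] * w[j]    # partial residual excluding feature j
            w[j] = soft_threshold(X[:, j] @ r, lam1 / 2) / (X[:, j] @ X[:, j] + lam2)
    return w, b

# Toy check: y = 2x + 1 with tiny regularization recovers the true weights.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2 * X[:, 0] + 1
w, b = elastic_net(X, y, lam1=1e-6, lam2=1e-6)
```

The per-coordinate update follows from setting the subgradient of the objective with respect to $w_j$ to zero, which yields $w_j = S(\textbf{x}_j^T r,\ \lambda_1/2)\,/\,(\textbf{x}_j^T\textbf{x}_j + \lambda_2)$ with $S$ the soft-thresholding operator.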
\section{Results and discussion}
\subsection{Deep learning model performance}
\begin{table}[!htb]
\caption{Performance of CNNs on the test set}
\begin{subtable}{.5\linewidth}
\caption{Ranking by \textit{TagSpace}}
\centering
\begin{tabular}{ccc}
\toprule
Embedding dim & Precision@1 & Recall@10\\
\midrule
64 & 15.37\%&40.13\% \\
128& 28.39\%&43.97\%\\
256&\textbf{32.72\%}&\textbf{45.65\%}\\
\bottomrule
\end{tabular}
\end{subtable}%
\begin{subtable}{.5\linewidth}
\centering
\caption{Classification by \textit{Binary TagSpace}}
\begin{tabular}{ccc}
\toprule
Embedding dim & Precision & Recall\\
\midrule
64 &73.05\%&53.24\% \\
128 &84.35\%&62.14\% \\
256 &\textbf{87.48\%}&\textbf{66.37\%}\\
\bottomrule
\end{tabular}
\end{subtable}
\end{table}
We show the performance of the two deep learning models in Table 1, evaluated on their respective objectives. Table 1a evaluates the ranking performance of our adaptation of \textit{TagSpace}; the result is comparable to the implementation by \citet{weston2014tagspace} on less noisy text data, which yielded 37.42\% P@1 and 43.01\% R@10. This implies that \textit{TagSpace} maintains its ability to predict hashtags on short and noisy documents and hence is applicable to Twitter texts in general. For the binary version of \textit{TagSpace} shown in Table 1b, there are no prior studies for comparison. The low recall can be explained by the unbalanced labels, as on average only 9.4\% of tweets in the test set contain food-related hashtags. Because the precision of binary \textit{TagSpace} is high, we checked whether the model optimizes its objective by over-generalizing hashtag predictions (i.e., tagging tweets with only general and frequent hashtags such as \texttt{\#restaurant} and \texttt{\#diet}). As the model internally learns tweet embeddings, we use them to rank hashtags and find that the 100 most frequent food-related hashtags in the predictions account for only 11.3\% of the food-related tweets. This implies that binary \textit{TagSpace} gives more granular information about a tweet than merely whether it is food-related or not.
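The P@1 and R@10 metrics of Table 1a can be computed as follows (an illustrative sketch; the function names and the tiny candidate pool are ours):

```python
import numpy as np

def precision_at_1(scores, true_sets):
    """Fraction of tweets whose single top-ranked hashtag is a true hashtag."""
    hits = [np.argmax(s) in t for s, t in zip(scores, true_sets)]
    return sum(hits) / len(hits)

def recall_at_k(scores, true_sets, k=10):
    """Mean fraction of a tweet's true hashtags recovered among the top-k candidates."""
    recalls = []
    for s, t in zip(scores, true_sets):
        topk = set(np.argsort(s)[::-1][:k])   # indices of the k highest-scoring hashtags
        recalls.append(len(topk & t) / len(t))
    return sum(recalls) / len(recalls)

# Two tweets scored against three candidate hashtags, with true hashtag sets:
scores = [np.array([0.1, 0.9, 0.0]), np.array([0.5, 0.2, 0.3])]
true_sets = [{1}, {1, 2}]
print(precision_at_1(scores, true_sets), recall_at_k(scores, true_sets, k=2))
```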
\subsection{Estimating obesity prevalence by tweet embeddings}
\begin{table}[!htb]
\caption{Regressions on obesity prevalence by extracting features from word and tweet embeddings}
\centering
\begin{tabular}{cccc}
\toprule
Model & dim & MAE & Pearson Corr. \\
\midrule
Bag-of-Words & -&2.596 & 0.607 \\
\midrule
\phantom{-}&64 & 1.653&0.795 \\
\textit{TagSpace}&128&1.571&0.813\\
\phantom{-}&256&1.452&0.836\\
\midrule
\phantom{-}&64 & 1.239&0.871 \\
\textit{Binary TagSpace}&128& 1.018&0.904\\
\phantom{-}&256&\textbf{0.839}&\textbf{0.927}\\
\bottomrule
\end{tabular}
\end{table}
We evaluate the regression results using mean absolute error (MAE) and Pearson correlation with government obesity data. Since no prior study has used our dataset, we handcraft a \textit{Bag-of-Words} baseline that uses tweets filtered by the \textit{keyword matching} method and extracts features from the frequencies of keywords mentioned within a state. The BOW approach is used in previous Twitter-derived obesity research \citep{nguyen2017twitter, nguyen2016building}. The regression results of our baseline correlate moderately with government data, which agrees with prior works showing that dietary characteristics mined from Twitter data are informative for actual obesity estimation \citep{nguyen2017twitter,nguyen2016building,abbar2015you}. Both CNNs generate features resulting in more accurate estimation of state-level obesity compared to our baseline, and binary \textit{TagSpace} outperforms all other methods. Hence we are optimistic that word and tweet embeddings trained from \textit{TagSpace} models optimized for selective topics yield better indicators of specific diseases.
\subsection{Discovering dietary risk factors with obesity}
\begin{center}
\captionsetup{type=figure}
\includegraphics[scale=0.22]{foodcorr.png}
\captionof{figure}{Spearman correlations between selected features and obesity prevalence}
\label{foodcorr}
\end{center}
We are interested in features correlating with higher obesity prevalence, and we identify such features using the Spearman correlation, which quantifies the monotonic relationship between two variables. The highest positive correlations with obesity prevalence are given by \texttt{"\#macncheese"} (\textit{corr} = 0.4910), \texttt{"\#wendys"} (0.4853), \texttt{"\#doughnut"} (0.4796), \texttt{"\#blt"} (0.4359), and \texttt{"\#dominospizza"} (0.4307). We also observe that more general and frequently mentioned features (such as \texttt{\#dinner} and \texttt{\#food}) usually have a weaker monotonic relationship with our target variable, as shown in Figure \ref{foodcorr}. While the correlation values are modest, they point to the possibility of learning behavioral risk factors from Twitter data using sentence-level embeddings of tweets.
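The Spearman correlation is simply the Pearson correlation of rank-transformed data; a minimal sketch (ignoring tie handling, which a full implementation such as \texttt{scipy.stats.spearmanr} includes):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no tie handling)."""
    rx = np.argsort(np.argsort(x)).astype(float)   # rank of each element of x
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

# A monotonic but nonlinear relationship still gives a perfect Spearman score.
x = np.array([1.0, 2.0, 3.0, 4.0])
print(spearman(x, x ** 3))   # -> 1.0
```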
\section{Conclusion}
In conclusion, we propose a deep learning approach to extract textual features using sentence-level embeddings of tweets for public health monitoring. In the case study, our adaptations of the two CNNs perform reliably on Twitter data and provide informative textual features for obesity prevalence estimation. We have also shown that features constructed via word and tweet embeddings can potentially capture risk factors for specific diseases, which is useful for monitoring acute public health incidents such as influenza outbreaks, allergy ailments, and infectious diseases \citep{paul2011you, paul2014discovering, zou2016infectious}.
Our data acquisition and deep learning methods do not include any obesity-related settings, which implies that our approach can be applied to a wide range of Twitter-based public health studies and for various purposes. One limitation of our study is that the demographics of Twitter users over-represent younger age-groups, and one remedy is to standardize tweets based on user ages inferred from probabilistic models \citep{chamberlain2017probabilistic} for future work.
We hope this work will inspire future studies to explore the potential of using sentence-level embeddings of social media texts for a wide scope of public health surveillance.
\newpage
\medskip
\small |
\section*{}
\vspace{-1cm}
\footnotetext{\textit{$^{a}$~Thomas Young Centre and Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ, UK}}
\footnotetext{\textit{$^{b}$~Thomas Young Centre and Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ, UK}}
\footnotetext{\textit{$^{c}$~Department of Materials Science and Engineering, Yonsei University, Seoul 03722, Korea }}
\section{Introduction}
Covalent organic frameworks (COFs) are porous organic materials that can adopt various topologies, using linkers to form periodic skeletons and ordered nanopores in two and three dimensions (2D and 3D).\cite{cote2005porous, diercks2017atom,huang2016covalent,geng2020covalent,lohse2018covalent}
In layered COFs, which were reported in 2005\cite{cote2005porous}, the organic units are linked by strong in-plane covalent bonds to form 2D sheets, which can then be stacked into crystalline structures.\cite{winkler2021understanding} $\pi$-$\pi$ interactions between the stacked aromatic building blocks strongly affect both the atomic and electronic structure, determining the stacking sequence, band dispersion and band gap energy.\cite{kuc2020proximity} Proximity effects from the repulsive electrostatic interactions between hydrogen and the $\pi$ system of adjacent aromatic rings cause the fully eclipsed stacking of 2D COFs to be unfavourable.\cite{kuc2020proximity,hunter1990nature} In 2007, 3D COFs were successfully synthesized using alternative building-unit geometries which were strongly connected by covalent bonds.\cite{el2007designed} Both 2D and 3D COFs exhibit the advantages of flexible and customizable crystal structures, high chemical and thermal stability, and high porosity – making them promising candidates for applications such as energy storage\cite{liu2020porous,shi2020nitrogen,rojaee2020two,deblase2015rapid,kong2021redox,sun2020covalent,song2021covalent,lei2018boosting,ball2020triazine,luo2018microporous,wu20202d,yao2020two,an2021designs}, ion and molecule separation,\cite{li2020laminated, dey2020nanoparticle,tong2017exploring} optoelectronics\cite{li2018tuneable,lv2018direct} and catalysis\cite{bi2019two,liu2022covalent,xu2015stable,fu2020stable,zhu2020efficient}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{database_relaxed_structure.pdf}
\caption{\textbf{(a)} Planar experimentally-reported crystal structure and \textbf{(b)} wavy relaxed structure of Tp-Azo. \textbf{(c)} Planar experimentally-reported structure and \textbf{(d)} wavy relaxed structure of DAAQ-TFP COF. The upper figures are viewed looking down the $c$ axis, and the lower figures are viewed along the $ab$ plane. ``$*$'' refers to the relaxed crystal structures. The arrows in (b) and (d) are the displacement directions along the $a$ and $b$ unit cell vectors. }
\label{fig:1}
\end{figure*}
Various forms of disorder exist in experimentally-synthesized COFs, such as bond breakage, pore collapse and stacking faults. Such imperfections can significantly affect the properties of 2D COFs, causing loss of crystallinity, porosity, and conductivity.\cite{haase2017tuning, koo2012classification, spitler20112d, putz2020total,emmerling2021interlayer} In particular, the interlayer stacking modes of 2D aromatic COFs play a critical role in determining their properties. The stacking behaviour of COFs is not thoroughly understood, however, due to difficulties in experimental characterisation of the dynamic, low-crystallinity materials. For instance, powder X-ray diffraction (XRD) can only detect the existence of crystalline domains, making the extraction of accurate results difficult in the presence of low long-range order and sizeable thermal dynamics.\cite{kuc2020proximity,lukose2010reticular} As such, XRD measurements struggle to quantitatively distinguish crystalline structures from other similar aggregated structures, as a result of peak broadening in the diffraction pattern.\cite{kang2022aggregated,zhou2010structural, putz2020total, winkler2021understanding} To achieve greater resolution of COF layer stacking, Kang et al.\cite{kang2022aggregated} recently used $^{13}\textrm{C}$ solid-state nuclear magnetic resonance (ssNMR) to distinguish different aggregated structures by studying the interactions between atoms and chemical groups from adjacent layers.
Five different stacking modes in 2D COFs have been reported: eclipsed, inclined, zigzag, staggered and random stacked.\cite{putz2020total,mahringer2020taking} The eclipsed stacking (AA) corresponds to zero horizontal (coplanar) offset between neighbouring layers in the $ab$ plane, which has the highest symmetry and is the most often reported in experimental works. The inclined stacking (AA\textquoteright) corresponds to a constant, collinear offset between neighbouring layers. This stacking mode was observed using powder XRD and transmission electron microscopy (TEM) in SIOC-COF-8 and SIOC-COF-9.\cite{fan2017case} Zigzag stacking (AB) corresponds to an alternating offset direction between layers but it still retains high porosity in the stacking sequence. Staggered stacking is a special type of AB stacking, whereby the offset between layers is sufficient to make one layer's skeleton centered directly above the other pore, e.g.~a horizontal offset halfway along the $ab$ unit cell diagonal. This large offset between layers would reduce the porosity completely in the structures.\cite{putz2020total} These four stacking modes can be combined to form a random stacking sequence\cite{mahringer2020taking}, which is difficult to characterize experimentally or computationally due to limitations in equipment precision and computational demand.
Several studies have focused on stacking modes and their effect on properties for various 2D COFs.\cite{putz2020total,haase2017tuning,koo2012classification,spitler20112d,emmerling2021interlayer} It has been found that the AA stacking mode is the most energetically unfavorable as a result of strong repulsive interlayer orbital interactions.\cite{lukose2010reticular, koo2012classification, haase2017tuning} Koo et al. studied the potential energy surface (PES) of 33 COFs using molecular mechanics (MM) and density functional theory (DFT) approaches, finding that COFs are preferentially stacked with 1--2 \r{A} horizontal offsets between layers.\cite{koo2012classification} It has been reported that bulk COF structures have either inclined or zigzag stacking, which are more energetically favorable than eclipsed and staggered stacking.\cite{lukose2011structure, winkler2021understanding} The simulated XRD patterns of unidirectionally slipped (AA\textquoteright) and alternating slipped (AB) modes show a better agreement with the experimental XRD pattern than the eclipsed structures.\cite{lukose2011structure,putz2020total} More precisely, in many studies\cite{martinez2021understanding,fan2017case,zhou2010structural}, the predicted diffraction patterns of inclined stacking are more consistent with experiment than other stacking modes.
COFs of Tp-Azo\cite{chandra2014phosphoric} and DAAQ-TFP\cite{deblase2013beta}, whose experimentally reported crystal structures are shown in Fig.~\ref{fig:1}(a) and (c), have been reported with high energy capacity, good cycling performance and excellent stability as battery electrodes.\cite{zhao2020dual,an2021designs,wang2017exfoliation, deblase2013beta} It has been predicted that 30 Li$^+$ ions per unit cell can be inserted into and extracted from the porous Tp-Azo structure, using DFT simulations.\cite{zhao2020dual} DAAQ-TFP COF linked by $\beta$-ketoenamines\cite{kandambeth2012construction, chandra2013chemically} was the first COF to exhibit reversible redox behavior in energy storage systems and has the highest surface area of all COFs linked by either imines or enamines.\cite{deblase2015rapid,deblase2013beta} However, the basic structural properties, the stacking modes and the electronic structures in Tp-Azo and DAAQ-TFP COFs have not been reported. In this work, we present a theoretical study of the bulk properties and potential energy surface for stacking fault disorder of these two COFs. Furthermore, we investigate the effect of the stacking sequence on the electronic structure, rationalising the behaviour through consideration of the interlayer orbital interactions, and discuss the implications for COF material design for energy applications.
\section{Methods}
All electronic structure calculations were performed using Kohn--Sham DFT through the all-electron ``Fritz Haber Institute ab initio molecular simulations'' \textit{FHIaims} package.\cite{blum2009ab, ren2012resolution, havu2009efficient, levchenko2015hybrid} Both the semi-local functional of Perdew-Burke-Ernzerhof revised for solids (PBEsol)\cite{perdew2008restoring} and the hybrid Heyd-Scuseria-Ernzerhof (HSE06)\cite{heyd2003hybrid} functional were used, and the Tkatchenko-Scheffler correction was implemented to account for van der Waals (vdWs) interactions between layers. The PBEsol functional was used for geometry optimisation, having been shown to predict atomic structures and energies of solid materials with good accuracy.\cite{perdew2008restoring} The HSE06 functional was used for calculations of electronic band structures, having been shown to accurately reproduce the electronic structure across a range of semiconductors.\cite{pedro2020exchange} A \textit{k}-point grid of $1 \times 1 \times 10$ was used for the geometry optimisation with $6 \times 6 \times 6$ sampling used for electronic structure analysis.
An energy convergence criterion of 0.01 meV per unit cell was used with an atomic force tolerance of 0.01 eV/\r{A}.
The initial crystal structure parameters of Tp-Azo and DAAQ-TFP were obtained from the CoRE-COF database\cite{tong2017exploring}. These structures were first relaxed using the lighter Tier 1 numerical basis set, followed by a relaxation with the expanded Tier 2 basis set, before calculating the energetic and electronic properties. The well-converged conventional ``intermediate'' basis functions for each element species were used in the band structure calculations.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{grid_based_approach.pdf}
\caption{\textbf{(a)} The experimentally-reported crystal structure of DAAQ-TFP viewed along <0001>. \textbf{(b)} Grid-based approach for the displacement of the DAAQ-TFP COF along the $a$ and $b$ axes. Here the structure within each layer is held fixed as the layers are displaced. The angle between $a$ and $b$ in \textbf{(a)} and \textbf{(b)} corresponds to $\angle\gamma$ for unit cells of relaxed Tp-Azo and DAAQ-TFP. The blue dots on \textbf{(b)} represent the locations of single-point calculations using the PBEsol functional.}
\label{fig:2}
\end{figure}
The relaxed structures were modified to study the stacking fault behaviour. Layer displacement was modelled by changing the angles $\alpha$ and $\beta$ of the unit cell, thereby shifting the individual pseudo-hexagonal layers along the $ab$ plane to yield inclined stacking modes.\cite{haase2017tuning} Fig.~\ref{fig:2} shows the grid of displacements of one layer relative to the next along the $a$ and $b$ axes, with offsets from -6 \r{A} to 6 \r{A} in steps of 0.5 \r{A}. The distance between adjacent layers was kept fixed to that of the relaxed structures. The energies of the displaced structures were then calculated with fixed atomic positions and with relaxed atomic positions. The relaxed displaced structures were used to study the effect of the stacking faults on the physical properties.
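Generating this displacement grid can be sketched as follows (a minimal illustration; the offsets and step size are those stated above, and the resulting $25\times25$ point count is implied by that range and step):

```python
import numpy as np

# Displacement offsets from -6 to 6 Angstrom in 0.5 Angstrom steps
# along the a and b axes, as described above; each (da, db) pair is
# one single-point PBEsol calculation at fixed interlayer distance.
offsets = np.arange(-6.0, 6.0 + 0.25, 0.5)   # 25 displacements per axis
grid = [(da, db) for da in offsets for db in offsets]
print(len(grid))   # 625 displaced structures
```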
\section{Results and Discussion}
\subsection{Crystal structure optimisation}
The crystal structure of Tp-Azo was assigned to a hexagonal $P6/m$ space group, with eclipsed stacking of planar layers separated by a distance of 3.3 \r{A}, on the basis of powder XRD measurements (Fig.~\ref{fig:1}\textbf{(a)} and Supplementary Tab.~S1\textbf{(a)}).\cite{chandra2014phosphoric}
Similarly, the structure of DAAQ-TFP has been assigned to a $P6/m$ space group from Pawley refinement of powder XRD patterns (Fig.~\ref{fig:1}\textbf{(c)} and Supplementary Tab.~S1\textbf{(c)}).\cite{deblase2013beta}
Upon geometry optimisation, in both cases we find both a breaking of the planarity within layers, through an undulating distortion, and relative coplanar displacements between layers, as shown in Fig.~\ref{fig:1}\textbf{(b)} and \textbf{(d)}.
The space group symmetry lowers to P$\bar{1}$.
In Tp-Azo the interlayer distance decreases from 3.30 \r{A} to 3.23 \r{A}, and in DAAQ-TFP from 3.60 \r{A} to 3.33 \r{A}, during geometry relaxation from the reference structures.
The layer shift of Tp-Azo is -2.63 \r{A} along $a$ and 2.01 \r{A} along $b$; the resulting offset between neighbouring layers in the $ab$ plane is 2.73 \r{A}. The layer shift of DAAQ-TFP in the $ab$ plane is 2.29 \r{A}.
The horizontal offsets of both Tp-Azo and DAAQ-TFP are larger than those of other COFs, for which offsets of 1-2 \r{A} between neighbouring layers have been reported.\cite{koo2012classification,haase2017tuning}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{2_pes.pdf}
\caption{Contour maps of the potential energy surfaces for different displacements along the $a$ and $b$ axes of \textbf{(a)} Tp-Azo and \textbf{(c)} DAAQ-TFP. Blue corresponds to regions of low energy, while yellow regions are high energy. A zero (0 {\AA}, 0 {\AA}) shift at the centre of each plot represents a perfectly eclipsed geometry. \textbf{(b)} and \textbf{(d)} show the 1D cross-sections along the $x$ (blue line) and $y$ (orange line) axes. The blue and orange dashed lines in \textbf{(a)} and \textbf{(c)} correspond to those in \textbf{(b)} and \textbf{(d)}, respectively.}
\label{fig:3}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{2_band_gap_plot.pdf}
\caption{Contour maps of the electronic band gaps for displaced structures of \textbf{(a)} Tp-Azo and \textbf{(c)} DAAQ-TFP, normalized to the minimum band gap and calculated using hybrid DFT (HSE06). Blue / dark green corresponds to small band gaps and yellow represents large band gaps. A zero (0 {\AA}, 0 {\AA}) shift at the centre of each plot represents a perfectly eclipsed geometry. \textbf{(b)} and \textbf{(d)} plot the band gap variation upon displacement along the $x$ or $y$ axes, corresponding to the dashed lines in \textbf{(a)} and \textbf{(c)}, respectively.}
\label{fig:4}
\end{figure*}
\subsection{Binding between layers}
Due to their non-covalent interlayer interactions, the structural properties of COFs can be modified through exfoliation or tuning of interlayer distances.\cite{xu2015stable,wang2017exfoliation}
Single- or few-layer COFs are an emerging class of functional materials.\cite{li2020partitioning,deblase2015rapid}
For example, nanosheets of DAAQ-TFP show promise in battery cathodes due to shorter ion/electron migration pathways and higher ionic/electronic diffusion rates.\cite{wang2017exfoliation}
Hence, knowledge of the binding energy between layers in COFs is important for tuning their performance in device applications.
The binding between layers of Tp-Azo and DAAQ-TFP was calculated from the total energy difference between the relaxed monolayer and the bulk COFs. Due to the requirement of periodic boundary conditions, the layer distance was increased to 30 \r{A} to ensure negligible chemical interactions between repeating layers (Supplementary Fig.~S2).\cite{bjorkman2012van} The exfoliated COF layers were fully relaxed with this fixed interlayer distance. After relaxation, the undulating monolayer structure became planar again, indicating this distortion to be a result of interlayer interactions (Supplementary Fig.~S2).
The binding energy, $\gamma$, to form the monolayer can be calculated per unit area according to:
\begin{equation}\label{eq1}
\gamma = (E_\mathrm{monolayer}-N E_\mathrm{bulk}) / A,
\end{equation}
where $E_\mathrm{monolayer}$ is the total energy of the COF monolayer, $N$ is the total number of atoms in the monolayer, and $E_\mathrm{bulk}$ is the bulk energy per atom.\cite{bjorkman2012van, han2019surface} $A$ is the area of the bottom or the top surface of the monolayer. As there is only a single layer per unit cell, only a single layer surface area is needed in Equation \ref{eq1}. The binding energies between layers of the Tp-Azo and DAAQ-TFP COFs are 2.5 meV/\r{A}$^2$ and 3.1 meV/\r{A}$^2$, respectively. Compared with the binding energies of other 2D layered materials such as graphite (13 meV/\r{A}$^2$) and \ce{MoS2} (20 meV/\r{A}$^2$)\cite{bjorkman2012van}, Tp-Azo and DAAQ-TFP can be classified as ``easily exfoliable'' 2D materials (specifically, their binding energies are smaller than 30 meV/\r{A}$^2$).\cite{han2019surface, mounet2018two} The high porosity in the COF structure greatly contributes to this low interlayer binding energy, with $\gamma$ increasing to 8.5 meV/\r{A}$^2$ and 9.2 meV/\r{A}$^2$, respectively, when the pores are omitted from the framework surface area $A$ in Equation \ref{eq1}.
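As a concrete illustration of Equation \ref{eq1}, the following sketch evaluates $\gamma$ and applies the exfoliability criterion; the numerical inputs are placeholders for illustration only, not the DFT totals computed here:

```python
def binding_energy(e_monolayer, e_bulk_per_atom, n_atoms, area):
    """gamma = (E_monolayer - N * E_bulk) / A, following Equation (1).
    With energies in eV and area in Angstrom^2, gamma is in eV/Angstrom^2."""
    return (e_monolayer - n_atoms * e_bulk_per_atom) / area

# Placeholder inputs for illustration only (not the computed COF energies)
gamma = binding_energy(e_monolayer=-1000.000, e_bulk_per_atom=-10.0025,
                       n_atoms=100, area=10.0)
gamma_meV = 1000.0 * gamma          # eV/A^2 -> meV/A^2
print(round(gamma_meV, 1))          # 25.0
# 'Easily exfoliable' criterion used above: gamma < 30 meV/A^2
print(gamma_meV < 30.0)             # True
```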
\subsection{Potential energy surface for layer displacements}
A series of displaced structures were generated with the layers offset to varying amounts along the $a$ and $b$ axes (Fig.~\ref{fig:2}).
For each displaced structure, the internal geometry was relaxed and the energy minimum was set to 0 (Fig.~\ref{fig:3}).
The PES exhibits a characteristic hexagonal shape for both COFs, resembling a ``sombrero'' potential.\cite{koo2012classification, meier2020manifestation}
A similar scan of rigid layers without geometry relaxation is shown in Supplementary Fig. S3; a steeper and more fragmented PES is produced.
The interlayer $\pi$-$\pi$ interactions give rise to a stable hexagonal ring (dark blue in Fig.~\ref{fig:3}\textbf{(a)} and \textbf{(c)}) where the relative layer displacements maximise the attractive electrostatic interactions.
Eclipsed stacking of layers is significantly less energetically favourable than the displaced arrangements and represents a local maximum on the PES.
The center of the PES, corresponding to no displacement, is 0.20 $\textrm{eV}/\textrm{nm}^2$ higher than the minimum energy for Tp-Azo, and 0.17 $\textrm{eV}/\textrm{nm}^2$ for DAAQ-TFP.
This is shown most clearly from the 1D cross-sections in Fig.~\ref{fig:3}\textbf{(b)} and \textbf{(d)}.
The width of the low energy wells is approximately 2 \r{A} along both the $x$ and $y$ axes.
The suggested behaviour is distinct from typical stacking faults associated with discrete local minimum configurations, e.g.~mixtures of hexagonal (AB) and cubic (ABC) packing in close-packed crystals.
Here, a continuous range of configurations is accessible.
Random sampling of the low energy ring would produce an average structure that appears as eclipsed to macroscopic measurements,\cite{spitler20112d} yet in reality comprises locally offset COF layers.
Moreover, the soft ``sombrero'' PES suggests a high sensitivity of the actual COF structures to the synthesis and processing conditions.
\subsection{Electronic band gap opening}
Next, we consider the impact of these displacive instabilities on the underlying electronic structure of the COFs.
Remarkably, the band gap variation follows the inverse of the PES.
The smallest band gap is exhibited by the eclipsed structure with no displacements along the $a$ and $b$ axes (at the center of the heatmaps).
The HSE06 calculated band gap is 0.28 eV for eclipsed Tp-Azo and 1.29 eV for eclipsed DAAQ-TFP.
A band gap opening of 1.37 eV (to 1.65 eV) and 0.75 eV (to 2.04 eV) is found for displaced Tp-Azo and DAAQ-TFP, respectively.
A similar behaviour has previously been observed in COF-5.\cite{kuc2020proximity}
We note that monolayers of Tp-Azo and DAAQ-TFP exhibit even larger band gaps of 2.06 eV and 2.36 eV as a result of quantum confinement in the 2D sheets (Supplementary Fig.~S4).
These band gaps suggest that Tp-Azo and DAAQ-TFP COFs are semiconducting materials.
Fig.~\ref{fig:4}\textbf{(b)} and \textbf{(d)} show that the band gaps change sharply upon small displacement between layers within a deep well of 2 \r{A} width.
However, when the displacement is more than 2 \r{A} but less than 6 \r{A}, the band gap oscillates between 1.25 eV and 1.72 eV for Tp-Azo, and 0.71 eV and 0.97 eV for DAAQ-TFP.
Our analysis suggests a relatively small variation within the low energy ring that should be populated at room-temperature in thermal equilibrium.
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{band_structure_wavefunction.pdf}
\caption{Electronic band structures of eclipsed and displaced Tp-Azo \textbf{(a-b)} and DAAQ-TFP \textbf{(c-d)}, alongside the electronic wavefunctions (isosurface = 0.004 eV/\r{A}$^3$) of the valence band maximum and conduction band minimum at the $\Gamma$ point. The highest occupied band is indicated by the dashed horizontal line.}
\label{fig:5}
\end{figure*}
\subsection{Origins of strong electronic coupling to layer displacements}
The electronic band structures of the eclipsed and slipped Tp-Azo and DAAQ-TFP COFs are compared in Fig.~\ref{fig:5}.
Both COFs exhibit low band dispersion along the $\Gamma$-M-K-$\Gamma$ path in reciprocal space, which corresponds to in-plane directions.
The layer stacking direction, which is the shortest axis in real space, corresponds to the longer $\Gamma$-A line in the band structure.
For eclipsed stacking, the interlayer interactions produce dispersive bands, with a band width of 1.72 eV in the upper valence band (VB) and 1.56 eV in the lower conduction band (CB) along the $\Gamma$-A path of Tp-Azo. The dispersion of DAAQ-TFP is slightly reduced, giving rise to a band width of 1.35 eV in the VB and 0.47 eV in the CB in the interlayer direction.
Eclipsed Tp-Azo and DAAQ-TFP both have strongly indirect band gaps arising from the interlayer interactions between $\Gamma$ and A.
Layer slippage results in a pronounced change in the band dispersion.
The band structures remain weakly indirect between $\Gamma$ and A, but the dispersion itself is inverted with the VB maximum changing location.
Both the conduction and valence bands become much flatter upon layer displacement, particularly along the $\Gamma$-A path. In going from the eclipsed to displaced stacking mode, the widths of the topmost valence bands reduce from 1.72 to 0.53 eV (Tp-Azo) and from 1.35 to 0.28 eV (DAAQ-TFP), and from 1.65 to 0.15 eV (Tp-Azo) and 0.53 to 0.13 eV (DAAQ-TFP) for the bottom-most conduction bands.
The corresponding $\Gamma$ point wavefunctions are shown in the insets of Fig.~\ref{fig:5}.
They confirm that the band edges are formed from the C 2p$_z$ $\pi$ subsystem.
For the eclipsed structure, the interlayer interactions are strongly anti-bonding at the $\Gamma$ point.
This explains the strong downward dispersion towards A, where the phase of successive layers is reversed.
In the displaced structure, stronger interlayer $\pi$ bonding interactions are allowed at the $\Gamma$ point and the band dispersion is suppressed along the $\Gamma$-A line.
These changes result in a band gap that is weakly indirect and much larger in magnitude compared to the eclipsed structure.
\section{Conclusions}
It is convenient to represent and model covalent organic frameworks as an ordered sequence of eclipsed planar layers.
However, by taking the examples of Tp-Azo and DAAQ-TFP, we have shown that this representation can be highly misleading, in line with recent observations for other COFs.
A displaced stacking sequence of undulating layers both lowers the total energy of the frameworks and results in a large change in the electronic structure driven by interlayer $\pi$ orbital overlap.
Layer displacements produce a pronounced band gap opening in these frameworks.
The unusual ``sombrero'' potential energy surface for layer displacements, which mirrors the variation in band gap, has important implications.
Although macroscopically a given COF may appear to have an eclipsed structure, for example on the basis of diffraction measurements, locally a continuous range of stacking sequences is accessible.
The strong coupling between layer orientation and electronic structure highlights the potential for COF twistronics where longer range modulations in the crystal potential are harnessed.
These findings will be of particular importance when screening COFs for applications in energy storage and conversion where electrochemical and photochemical descriptors are significantly altered including accessible voltage ranges for batteries, stability windows for electrocatalysis, and visible light absorption for photoelectrochemical systems.
\section{Acknowledgements}
J.H.~thanks Chengcheng Xiao for suggestions relating to the computational workflow.
J.H.~acknowledges Imperial College London and the Chinese Scholarship Council (CSC) for providing a PhD scholarship.
S.R.K.~acknowledges the EPSRC Centre for Doctoral Training in the Advanced Characterisation of Materials (CDT-ACM)(EP/S023259/1) for funding a PhD studentship. K.T.~acknowledges the Independent Research Fund Denmark for funding through the International Postdoctoral grant (0164-00015B).
A.M.G~was supported by EPSRC Fellowship EP/T033231/1.
We are also grateful to the UK Materials and Molecular Modelling Hub for computational resources, which is partially funded by EPSRC (EP/P020194/1 and EP/T022213/1). Via our membership of the UK's HEC Materials Chemistry Consortium, which is funded by EPSRC (EP/L000202), this work used the ARCHER2 UK National Supercomputing Service (https://www.archer2.ac.uk).
\balance
\section{Supplementary Information}
\begin{table}[hbt!]
\caption{Lattice parameters and space groups of \textbf{(a)} planar experimentally-reported crystal structure and \textbf{(b)} wavy relaxed structure of Tp-Azo. The lattice parameters and space groups of \textbf{(c)} planar experimentally-reported crystal structure and \textbf{(d)} wavy relaxed structure of DAAQ-TFP.}
\begin{center}
\begin{tabular}{lccccccc}
\hline
\multicolumn{8}{|c|}{Lattice parameters} \\
\hline
-- & $a$ (\r{A}) & $b$ (\r{A}) & $c$ (\r{A}) & $\alpha$ ($^{\circ}$) & $\beta$ ($^{\circ}$) & $\gamma$ ($^{\circ}$) & Space group\\
\hline
\textbf{(a)} Tp-Azo\cite{chandra2014phosphoric} &31.50 & 31.50 & 3.30 & 90 & 90 & 120 & $P6/m$\\
\textbf{(b)} Tp-Azo$^*$ &33.28 & 33.36 & 4.23 & 61.71 & 128.44 & 121.17 & P$\bar{1}$\\
\textbf{(c)} DAAQ-TFP\cite{deblase2013beta} & 29.83 & 29.83 & 3.60 & 90 & 90 & 60 & $P6/m$\\
\textbf{(d)} DAAQ-TFP $^*$ &30.28 & 30.38 & 4.04 & 68.68 & 56.15 & 60.19 & P$\bar{1}$\\
\hline
\end{tabular}
\end{center}
\label{tab1:lattice_parameter}
\end{table}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{supporting_information/workflow.pdf}
\caption{Workflow of displacement process.}
\label{fig:1}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{supporting_information/monolayer_process.pdf}
\caption{Process of forming the monolayer and the binding energy calculation of the Tp-Azo COF. The upper figures are viewed looking down the $c$ axis, and the lower figures are viewed along the $ab$ plane. The interlayer distance was kept at 30 \r{A} during geometry relaxation.}
\label{fig:2}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{supporting_information/2_pes.pdf}
\caption{Potential energy surfaces (PES) for rigid layer displacements along the $a$ and $b$ axes of (a) Tp-Azo and (b) DAAQ-TFP. Note, the PES reported in the main text includes geometry relaxation.}
\label{fig:3}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=\textwidth]{supporting_information/monolayer_band_structures.pdf}
\caption{Electronic band structures of monolayer Tp-Azo (a) and monolayer DAAQ-TFP (b).}
\label{fig:4}
\end{figure*}
\clearpage
\section{Methodology, Modeling and Optimization}
\label{sec:Method}
\subsection{Income, Consumption and Savings}
The initial pre-disaster income, $i_o$, is:
\begin{equation}
\begin{aligned}
i_o &= i_o^L + i_o^{oth} + i_o^h \\
&= i_o^L + \pi k_o^{oth} + \pi k_o^h
\end{aligned}
\end{equation}
where $i_o^L, i_o^{oth}$ and $i_o^h$ are the initial pre-disaster incomes from labor, investments and housing respectively\footnote{Here, what we refer to as housing income is the imputed rent for homeowners, considered as a capital income.}, $k_o^{oth}$ and $k_o^h$ are capital stocks for investments and housing respectively and $\pi$ is the US average productivity of capital. The total income as a function of time, $i(t)$, is defined as follows:
\begin{equation}
\begin{aligned}
i(t) &= i_o - \Delta i(t) \\
&= i_o - \Delta i^L(t) + i^{UI}(t) + i^{CARES}(t)
\end{aligned}
\end{equation}
where $\Delta i^L(t)$ is the labor income loss over time due to the crisis and $i^{UI}(t)$ and $i^{CARES}(t)$ are the unemployment insurance income and CARES Act 2020 stimulus package income respectively. The initial pre-disaster household consumption, $c_o$ is:
\begin{equation}
c_o = i_o - p_o^{rent} - p_o^{mort}
\end{equation}
where $p_o^{rent}$ and $p_o^{mort}$ are the rent and mortgage payments\footnote{Rent and mortgage payments are removed since the housing income is included in $i_o$. As a simplification, it is assumed that all income that is not invested in housing is being consumed.}.
Initially, households also hold precautionary savings $S_o$, which they can use to smooth consumption in case of an income shock. It is assumed that the containment phase lasts for a duration $T_C$. After this period, incomes return to their pre-crisis level, and there is a recovery period of duration $T_R$ during which households rebuild their precautionary savings.
As a first exploration, this study assumes that there is no macroeconomic-level impact of the crisis: the only impact is a decline in the income of some households, either because they cannot work remotely or because demand has collapsed in their sector. People who are not directly affected through a drop in revenue or loss of job are assumed to have an unchanged income. These assumptions are acceptable over the short term, but become increasingly optimistic as the containment lasts longer. Over the longer term, one can expect all workers and firms to be affected as the impact of reduced incomes propagates through the economic system. These second-round effects will be explored in a second phase.
During the crisis and recovery period, households use and then rebuild their precautionary savings and the consumption as a function of time, $c(t)$, is as follows:
\begin{equation}
c(t) = \begin{cases}
c_o - \Delta i(t) + \displaystyle \frac{S_o - S_f}{T_C} & \mbox{if } 0 \le t \le T_C \\[10pt]
c_o - \displaystyle \frac{S_o - S_f}{T_R} & \mbox{if } T_C < t \le T_C + T_R \\
\end{cases}
\label{Eq:Consumption}
\end{equation}
where $S_o$ and $S_f$ are the initial and final savings respectively, and $T_C$ and $T_R$ are the durations of the crisis and recovery respectively. The adjusted consumption is the following:
\begin{equation}
c_{adj}(t) = \max(c(t), c_{min})
\end{equation}
where $c_{min}=10^{-3}$ represents the survival level of consumption, assuming people always have access to humanitarian assistance (e.g., food banks).
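The consumption path of Eq.~\ref{Eq:Consumption} and the survival floor can be transcribed directly (a minimal sketch; the parameter values are illustrative and uncalibrated, and $\Delta i(t)$ is taken constant for simplicity):

```python
def consumption(t, c_o, delta_i, S_o, S_f, T_C, T_R):
    """Piecewise consumption rule above: the savings draw-down
    (S_o - S_f)/T_C supplements consumption during the crisis, and
    rebuilding at rate (S_o - S_f)/T_R reduces it during recovery."""
    if 0 <= t <= T_C:
        return c_o - delta_i + (S_o - S_f) / T_C
    if T_C < t <= T_C + T_R:
        return c_o - (S_o - S_f) / T_R
    raise ValueError("t outside the crisis + recovery horizon")

def consumption_adj(t, c_min=1e-3, **kw):
    """Survival floor: c_adj(t) = max(c(t), c_min)."""
    return max(consumption(t, **kw), c_min)

# Illustrative, uncalibrated values
params = dict(c_o=1.0, delta_i=0.4, S_o=0.5, S_f=0.2, T_C=3.0, T_R=6.0)
print(consumption(1.0, **params))   # ~0.7 during the crisis
print(consumption(5.0, **params))   # ~0.95 during recovery
```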
Finally, the household savings as a function of time, $S(t)$, are:
\begin{equation}
S(t) = \begin{cases}
S_o - t \displaystyle \frac{S_o - S_f}{T_C} & \mbox{if } 0 \le t \le T_C \\[10pt]
S_f + \displaystyle \frac{t - T_C}{T_R} (S_o - S_f) & \mbox{if } T_C < t \le T_C + T_R \\
\end{cases}
\label{Eq:Savings}
\end{equation}
where $t$ is the time, which is initialized at the start of the crisis $t_o = 0$, and other terms are defined previously. The CARES stimulus individual paycheck (up to \$1,200) is added with a time delay directly into savings. The recovery time is based on an exogenous ability to save, assumed constant for all households:
\begin{equation}
T_R = \frac{S_o - S_f}{\gamma c_o}
\end{equation}
where $\gamma$ is the saving rate during recovery, until precautionary savings are back to their pre-crisis level (here we will assume $\gamma=0.10$).
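Correspondingly, the savings trajectory of Eq.~\ref{Eq:Savings} and the recovery time can be sketched as follows (illustrative values; the savings gap is written as $S_o - S_f$ so that $T_R$ is positive, consistent with Eq.~\ref{Eq:Savings}):

```python
def savings(t, S_o, S_f, T_C, T_R):
    """Piecewise-linear savings path above: drawn down linearly to S_f
    during the crisis, rebuilt linearly back to S_o during recovery."""
    if 0 <= t <= T_C:
        return S_o - t * (S_o - S_f) / T_C
    if T_C < t <= T_C + T_R:
        return S_f + (t - T_C) / T_R * (S_o - S_f)
    raise ValueError("t outside the crisis + recovery horizon")

def recovery_time(S_o, S_f, c_o, gamma=0.10):
    """Time to rebuild the savings gap S_o - S_f while saving a
    fraction gamma of pre-crisis consumption c_o."""
    return (S_o - S_f) / (gamma * c_o)

# Illustrative values
S_o, S_f, c_o, T_C = 0.5, 0.2, 1.0, 3.0
T_R = recovery_time(S_o, S_f, c_o)              # ~3.0 with these numbers
print(savings(T_C, S_o, S_f, T_C, T_R))         # ~S_f at the end of the crisis
print(savings(T_C + T_R, S_o, S_f, T_C, T_R))   # back to ~S_o
```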
The model, with household consumption and savings time series, is shown in Figure \ref{fig:Model}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{Figures/Model.pdf}
\caption{Household consumption and savings model with crisis and recovery periods. Highlighted zones indicate federal assistance and state unemployment insurance}
\label{fig:Model}
\end{figure}
\subsection{Utility Functions and Household Well-Being}
We assume people derive a utility from consumption, $u(t)$, and from precautionary savings, $v(t)$:\footnote{Utility from precautionary savings can be interpreted either as the value of ``peace of mind'' when people have precautionary savings, or as the net present value of the future use of precautionary savings for any shock that may occur in the future.}
\begin{equation}
u(t) = \frac{1}{1 - \eta} c(t)^{1 - \eta}
\end{equation}
\begin{equation}
v(t) = \frac{\alpha}{1 - \beta} S(t)^{1 - \beta}
\end{equation}
where $\eta$ is the elasticity of the marginal utility of consumption, $\alpha$ and $\beta$ represent statistically calibrated parameters for utility of precautionary savings (see Appendix).
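The two utility terms are straightforward to evaluate (a sketch with illustrative parameter values; with $\eta, \beta > 1$, both utilities are negative, increasing and concave in their arguments):

```python
def utility_consumption(c, eta):
    """u = c^(1-eta) / (1-eta), the CRRA utility of consumption."""
    return c ** (1.0 - eta) / (1.0 - eta)

def utility_savings(S, alpha, beta):
    """v = alpha * S^(1-beta) / (1-beta), utility of precautionary savings."""
    return alpha * S ** (1.0 - beta) / (1.0 - beta)

# Illustrative (uncalibrated) values
u = utility_consumption(1.0, eta=1.5)            # -2.0
v = utility_savings(0.5, alpha=0.05, beta=2.0)
print(u, v)
```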
The household well-being, $W$, is the sum of the household well-beings during crisis, $W_C$ and recovery phases, $W_R$:
\begin{equation}
\begin{aligned}
W & = W_C + W_R \\
&= \int_0^{T_C} e^{-\rho t} \left( u_C(t) + v_C(t) \right) dt + \int_{T_C}^{T_C+T_R} e^{-\rho t} \left( u_R(t) + v_R(t) \right) dt \\
W & = \int_0^{T_C} e^{-\rho t} \left( \frac{1}{1 - \eta} c_C(t)^{1 - \eta}
+ \frac{\alpha}{1 - \beta} S_C(t)^{1 - \beta} \right) dt \\
& \qquad \qquad \qquad + \int_{T_C}^{T_C+T_R} e^{-\rho t} \left( \frac{1}{1 - \eta} c_R(t)^{1 - \eta} + \frac{\alpha}{1 - \beta} S_R(t)^{1 - \beta} \right) dt \\
\end{aligned}
\end{equation}
where the household consumption $c_C, c_R$ and savings $S_C, S_R$ for the crisis and recovery periods are as defined previously. The household well-being losses, $\Delta W$, are defined as follows:
\begin{equation}
\Delta W = W_o - W
\end{equation}
where $W_o$ is the initial well-being defined as follows:
\begin{equation}
\begin{aligned}
W_o &= \int_0^{T_C + T_R} e^{-\rho t} \left( u_o + v_o \right) dt \\
&= \int_0^{T_C + T_R} e^{-\rho t} \left( \frac{1}{1 - \eta} c_o^{1 - \eta}
+ \frac{\alpha}{1 - \beta} S_o^{1 - \beta} \right) dt \\
W_o &= \frac{1}{\rho} \left( 1 - e^{-\rho (T_C + T_R) } \right) \left( \frac{1}{1 - \eta} c_o^{1 - \eta} + \frac{\alpha}{1 - \beta} S_o^{1 - \beta} \right) \\
\end{aligned}
\end{equation}
The utility at the minimum level of consumption $c_{min}$ provides a lower bound for people's utility. It also provides an upper bound on the well-being impact individuals can experience. Note that this model does not include mortality and morbidity, either due to COVID-19 or to health impacts from containment, such as under- and malnutrition due to insufficient income, mental health implications from isolation, or indirect health consequences from reduced access to health care (especially for people with chronic disease).
\subsection{Optimization Formulation}
In this study, we will assume that households deplete their savings to smooth consumption over time, in order to maximize their well-being. They do not use all their precautionary savings, because other shocks may affect them during or after the COVID-19 crisis, so the marginal utility derived from the remaining precautionary savings increases as they are drawn down in the current shock.
One important simplification here is that people are assumed to know in advance the duration of the containment phase. In reality, one challenge for households is to decide how to manage their precautionary savings in the context of a highly uncertain crisis, both in duration and magnitude. The assumption that the duration is known means that the results from the analysis are conservative, underestimating well-being and poverty consequences from containment.
The one-dimensional optimization problem, with simple bound constraints on the design variable, is the following:
\begin{equation}
\begin{array}{l l}
\text{maximize: } & W(S_f) = W_C(S_f) + W_R(S_f) \\[10pt]
\text{subject to: } & 0 \le S_f \le S_0 \\
\end{array}
\label{eq:optimization}
\end{equation}
where $S_f$ is the design variable, the savings at the end of the crisis. The optimization problem is sequentially solved for every census tract in the database representing Bay Area households.
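A minimal sketch of this optimization with SciPy, using a discretized well-being integral over a common time horizon (so that different choices of $S_f$ are comparable) and illustrative, uncalibrated parameters rather than the Bay Area inputs:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative, uncalibrated parameters (not the Bay Area inputs)
c_o, S_o, delta_i = 1.0, 0.5, 0.8      # consumption, savings, income loss
T_C, gamma, c_min = 3.0, 0.10, 1e-3
eta, alpha, beta, rho = 1.5, 0.05, 2.0, 0.05

def wellbeing(S_f):
    """Discretized W(S_f): discounted utility of consumption and savings,
    integrated on a fixed horizon covering crisis and maximal recovery."""
    g = S_o - S_f                              # savings drawn down
    T_R = max(g / (gamma * c_o), 1e-9)         # recovery duration
    T_end = T_C + S_o / (gamma * c_o)          # common horizon
    t = np.linspace(0.0, T_end, 4000)
    c = np.where(t <= T_C, c_o - delta_i + g / T_C,
                 np.where(t <= T_C + T_R, c_o - g / T_R, c_o))
    S = np.where(t <= T_C, S_o - t * g / T_C,
                 np.where(t <= T_C + T_R, S_f + (t - T_C) / T_R * g, S_o))
    c, S = np.maximum(c, c_min), np.maximum(S, 1e-9)
    u = c ** (1 - eta) / (1 - eta)
    v = alpha * S ** (1 - beta) / (1 - beta)
    dt = t[1] - t[0]
    return float(np.sum(np.exp(-rho * t) * (u + v)) * dt)

# Maximize W over the box 0 <= S_f <= S_o by minimizing -W
res = minimize_scalar(lambda s: -wellbeing(s), bounds=(1e-6, S_o),
                      method="bounded")
print(res.x)   # optimal end-of-crisis savings S_f
```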
\subsection{Convex Optimization Proof}
In this section, we prove that the optimization problem of Eq. \ref{eq:optimization} is convex. Since the only constraints are convex box constraints, to show the problem is convex we need to prove that $W(S_f)$ is a concave function of $S_f$ (maximization), or equivalently that $-W(S_f)$ is a convex function (minimization). Thus, we need to show that $-W_C(S_f)$ and $-W_R(S_f)$ are both convex functions. The following Lemmas are necessary for the derivation \citep{Boyd2004ConvexOptimization}:
\paragraph{\textbf{Lemma 1}}
Let $f(x,t):R^2 \rightarrow R$ be a convex function in $x$ for each $t \in [a, b]$, and $w(t) \ge 0$, then the function $\phi$, defined as follows:
\begin{equation}
\phi(x) = \int_a^b w(t) f(x,t) dt
\end{equation}
is a convex function in $x$.
\paragraph{\textbf{Lemma 2}}
Let $f(x): R \rightarrow R$ be a convex function in $x$ and $a, b \in R$. The function $g$ defined by:
\begin{equation}
g(x) = f(ax + b)
\end{equation}
is a composition of an affine function, $x \mapsto ax + b$, and $f$, and is convex in $x$. \\
\noindent Thus, since $e^{-\rho t} > 0$ for all $t$, $-W_C$ is convex if and only if $-u_C$ and $-v_C$ are convex functions, similarly for $-W_R$, $-u_R$ and $-v_R$ (Lemma 1).
Since $1-\eta < 0$, $1-\beta < 0$ and $\alpha>0$, $f(x) = \frac{-1}{1 - \eta} x^{1 - \eta}$ and $g(x) = \frac{-\alpha}{1 - \beta} x^{1 - \beta}$ are convex functions. In addition, since $c(t), S(t)$ are piecewise affine functions of $S_f$, $-u(t), -v(t)$ are compositions of affine functions with convex functions. By Lemma 2, $-u(t), -v(t)$ are convex functions of $S_f$.
To conclude, $-W_C(S_f)$ and $-W_R(S_f)$ are convex, thus $-W(S_f)$ is a convex function in $S_f$. The optimization problem is convex (QED). \hfill $\blacksquare$
\subsection{Calibration of Utility of Precautionary Savings}
The utility of precautionary savings is of the functional form:
\begin{equation}
v(t) = \frac{\alpha}{1 - \beta} S(t)^{1 - \beta}
\end{equation}
where $\alpha$ and $\beta$ are parameters to statistically calibrate. Assuming a power-law relation between consumption and savings:
\begin{equation}
S_o = a c_o^b
\end{equation}
where $a, b$ are parameters to calibrate that describe the power law. Equilibrium between savings and consumption is assumed initially, before the crisis:
\begin{equation}
\left. \frac{du}{dc} \right|_{c_o} = \left. \frac{dv}{dS} \right|_{S_o}
\end{equation}
which leads to the following relation $\beta = \eta / b$.
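This step can be made explicit; substituting the ansatz $S_o = a c_o^b$ into the marginal utilities gives
\begin{equation}
\left. \frac{du}{dc} \right|_{c_o} = c_o^{-\eta},
\qquad
\left. \frac{dv}{dS} \right|_{S_o}
= \alpha S_o^{-\beta}
= \alpha \left( a c_o^{b} \right)^{-\beta}
= \alpha a^{-\beta} c_o^{-b\beta},
\end{equation}
and for the equilibrium condition to hold across all consumption levels, the exponents of $c_o$ must match, giving $b\beta = \eta$.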
\noindent Using a variational form of the household well-being, we perturb consumption instantaneously at $t=0$:
\begin{equation}
c_\lambda (t) = c_o + \lambda \delta_0(t)
\end{equation}
where $\lambda$ is an instantaneous increase of consumption at $t=0$, funded by drawing down savings, and $\delta_0(t)$ is the Dirac delta function. The corresponding variational savings are:
\begin{equation}
S_\lambda (t) = S_o - \lambda
\end{equation}
\noindent The variational well-being is now:
\begin{equation}
\begin{aligned}
W(\lambda) &= \int_0^{+\infty} e^{-\rho t} \left( \frac{1}{1 - \eta} c_\lambda(t)^{1 - \eta}
+ \frac{\alpha}{1 - \beta} S_\lambda(t)^{1 - \beta} \right) dt \\
W(\lambda) &= \int_0^{+\infty} e^{-\rho t} \left( \frac{1}{1 - \eta} (c_o + \lambda \delta_0(t))^{1 - \eta} + \frac{\alpha}{1 - \beta} (S_o - \lambda)^{1 - \beta} \right) dt
\end{aligned}
\end{equation}
Taking the derivative in terms of $\lambda$ and setting to zero at equilibrium:
\begin{equation}
\pdv{W}{\lambda} = \int_0^{+\infty} e^{-\rho t} \left( \delta_0(t)(c_o + \lambda \delta_0(t))^{- \eta} - \alpha (S_o - \lambda)^{-\beta} \right) dt = 0
\end{equation}
\noindent Solving we get the following equation (setting $\lambda=0$ at equilibrium conditions):
\begin{equation}
\int_0^{+\infty} e^{-\rho t} \left( \delta_0(t) c_o^{-\eta} - \alpha S_o^{-\beta} \right) dt = 0
\end{equation}
\begin{equation}
c_o^{-\eta} - \frac{\alpha}{\rho} S_o^{-\beta} = 0
\end{equation}
\begin{equation}
\alpha = \rho \frac{c_o^{-\eta}}{S_o^{-\beta}}
\end{equation}
The power-law calibration of the utility of precautionary savings is shown in Figure \ref{fig:Fig1_SavingsCalibration}, with coefficients $a=3.710, b=0.638$ and a coefficient of determination of $R^2=0.9861$.
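Putting the calibration together (the fitted $a$ and $b$ are those above; $\eta$, $\rho$ and $c_o$ are illustrative placeholders here, since their calibrated values are given elsewhere):

```python
a, b = 3.710, 0.638      # fitted coefficients of S_o = a * c_o**b (above)
eta, rho = 1.5, 0.05     # illustrative placeholders, not from this section
c_o = 1.0                # normalized per-capita consumption

S_o = a * c_o ** b       # savings implied by the fitted relation
beta = eta / b           # from the equilibrium condition beta = eta / b
alpha = rho * c_o ** (-eta) / S_o ** (-beta)   # alpha = rho c_o^-eta / S_o^-beta
print(beta)              # ~2.351 with these placeholders
print(alpha > 0)         # True
```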
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{Figures/Fig1_SavingsCalibration.pdf}
\caption{Power-law calibration of the utility of per capita savings}
\label{fig:Fig1_SavingsCalibration}
\end{figure}
\section{California Labor Statistics}
\label{sec:CaliforniaLabor}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/UIR_California.pdf}
\caption{Insured unemployment rate (IUR) using a 13-week average from 1999 to 2019 for California according to the Employment Development Department (EDD), State of California, \url{https://www.edd.ca.gov/about_edd/quick_statistics.htm\#UIStatistics} }
\label{fig:UIR_California}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/CountyUnemploymentRate.pdf}
\caption{Map of annual average unemployment rate in California by county in 2018 according to the Employment Development Department (EDD), State of California, \\ \url{https://www.labormarketinfo.edd.ca.gov/file/Maps/County_UR_2018BM2018.pdf}}
\label{fig:CountyUnemploymentRate}
\end{figure}
\section{California Poverty Rate}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{Figures/PovertyCalifornia.png}
\caption{Poverty rates for California counties according to the California Poverty Measure (CPM) courtesy of the Public Policy Institute of California (PPIC) and Stanford's Center on Poverty and Inequality, \url{https://www.ppic.org/publication/poverty-in-california/}}
\label{fig:PovertyCalifornia}
\end{figure}
\section{Introduction}
COVID-19 has led to severe and acute losses in many economies around the world due to illness and government-mandated social distancing orders. The impact and duration of the economic crisis on individual households is difficult to predict, as many uncertainties surround the crisis duration (i.e., the length of ``stay-at-home'' orders), the industries affected, and post-crisis consumption and recovery.
There is a plethora of ongoing research on estimating the economic impact of COVID-19 in both emerging and developed countries. Due to widespread business closures, national economies are expected to contract, leading to a dramatic rise in unemployment and poverty rates, especially among lower income populations. A report from the World Bank estimated that 11 million people could fall into poverty across East Asia and the Pacific \citep{WorldBank2020EastCovid-19}. Analyzing the effect of the pandemic on poor communities across four continents, \cite{Buheji2020TheReview} estimates that 49 million individuals will be driven into extreme poverty in 2020 (living on less than \$1.90 per day).
The U.S. economy, where gross domestic product (GDP) fell by 4.8\% in the first quarter, is projected to fall into recession in 2020, with a contraction of 5.0\% in a likely scenario \citep{McKibbin2020TheScenarios, Fernandes2020EconomicEconomy}. The European Commission estimates that the euro area economy will decline by 7.25\% in 2020, with all countries expected to fall into a recession \citep{EuropeanCommission2020European2020}. Developing countries in South-East Asia are also vulnerable to the global economic disruption of the pandemic due to decreases in trade, foreign investment and tourism. According to the International Monetary Fund (IMF), the ASEAN-5, which consists of Indonesia, Malaysia, the Philippines, Thailand, and Vietnam, is predicted to contract by 0.6\% in 2020 \citep{InternationalMonetaryFund2020WorldLockdown}. Reductions in remittances from high-income countries to low- and middle-income countries are likely to have a significant impact in countries such as Nepal or the Philippines, where remittances represent a large share of many households' income.
In the six-week span of March 15 to April 25, a record 30.2 million Americans filed for unemployment benefits as first-time claimants, according to the U.S. Department of Labor. The unemployment rate in the U.S. officially hit a staggering 14.7\% in April according to statistics released by the U.S. Bureau of Labor Statistics, and some predictions estimate even higher unemployment rates, above 20\% \citep{Bick2020RealOutbreak}.
According to the Pew Research Center, the highest risks of layoffs are in the accommodations, retail trade, transportation services and arts, entertainment and recreation services sectors \citep{Kochhar2020YoungJobs}. Additionally, among the sectors that lost the most jobs in March are leisure and hospitality and health and educational services \citep{Burns2020HowDemographics}. Using a vector autoregression model based on data from recent disasters, \cite{Ludvigson2020Covid19Disasters} estimates a cumulative loss of 24 million jobs in the U.S. over the course of 10 months, largely due to a 17\% loss in service sector employment. Only 37\% of jobs in the U.S. can be performed at home, and many lower-income countries have an even lower share of jobs that can be performed remotely \citep{Dingel2020HowHome}. Consumer discretionary spending is in free fall as non-essential businesses are closed and individuals are saving more. Analyzing data from a personal finance website, \cite{Baker2020HowPandemic} found that consumer spending in the United States is highly dependent on the severity of the disease's outbreak in the state and the strength of the local government's response.
Although ongoing research is assessing the economic ramifications of COVID-19, most of these studies focus on the macroeconomic and financial impact of the coronavirus. Impacts on national economies are then translated into socio-economic impacts on individuals, including consumption and poverty rates (top-down approach). The goal of this study is to analyze the socio-economic impacts of the COVID-19 containment at the household level (bottom-up approach). While this approach is not expected to replace macro-level analyses, which can better capture the interactions across sectors and countries or the effects of macroeconomic aggregates, it can complement them by providing much finer estimates of the distributional impacts. It can also better account for households' coping capacity, the role of people's savings, and the higher resilience of multi-job households.
To understand the impacts of the loss of revenue on lower income populations, a household well-being formulation is adopted following the work of \cite{Hallegatte2016Unbreakable:Disasters}. The original household well-being model was developed for the disaster impact of an earthquake, and applied to the Bay Area in California \citep{Markhvida2020QuantificationLosses}. In addition, the household model has also been applied to estimate household-level resilience to natural disasters in Fiji, the Philippines, and Sri Lanka \citep{Walsh2018ClimateResilient, Walsh2019SocioeconomicAssessment, Walsh2020MeasuringLosses}. Here the economic shock of COVID-19 is represented by a loss of income, in certain industry sectors, during a pre-defined crisis period. The impact of the coronavirus on household consumption, savings and recovery time is analyzed, as well as changes in poverty rates and geospatial inequality distributions, under different assumptions regarding the social protection system. Since California was affected early and high-frequency data on the situation of households are available in the U.S., the Bay Area is a good case study to develop and validate the model, which we then plan to apply in other countries and regions.
\section{Case Study}
To evaluate the socio-economic impact of COVID-19 on individuals, a micro-economic model is
built to estimate the household consumption and well-being. The model is used to quantify the
effects of mandatory ``shelter-in-place'' orders, as well as the effectiveness of social benefits.
The San Francisco Bay Area is used as a case study, where impact of the lockdown and the
recently passed CARES Act federal stimulus package are evaluated in conjunction with state
unemployment insurance (UI) benefits. The model and approach can be easily transferred to other countries
and regions, even though differences in the availability of data (e.g., regarding financial savings)
may make it necessary to make further approximations in other contexts.
\subsection{Scope and Data}
The Bay Area enacted a shelter-in-place order on March 16, 2020 in six counties: San Francisco, Santa Clara, San Mateo, Marin, Alameda, and Contra Costa. Soon after, a mandatory stay-at-home order across California was issued on March 19, 2020. This major business perturbation has led to a sharp rise in unemployment and severe economic repercussions \citep{Schwartz2020NowherePandemic}. On March 27, 2020, the Coronavirus Aid, Relief and Economic Security (CARES) Act was signed into U.S. law, which, among other stimulus measures, extends unemployment benefits and issues one-time payments to individuals. The number of reported total cases per county across the Bay Area, as well as the daily cases, are shown in Figure \ref{fig:BayAreaCases}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{Figures/BayAreaCases.pdf}
\caption{Bay Area COVID-19 crisis timeline: total reported cases by county (top left) and daily reported cases from \textit{The New York Times} publicly available Coronavirus (COVID-19) Data in the United States \citep{TheNewYorkTimes2020AnU.S.} (bottom left) and Bay Area counties \citep{Rissman2008TheDatabase} (right)}
\label{fig:BayAreaCases}
\end{figure}
For the case study of the Bay Area, the socio-economic impacts of COVID-19 on the following nine counties are modeled, in alphabetical order: (1) Alameda, (2) Contra Costa, (3) Marin, (4) Napa, (5) San Francisco, (6) San Mateo, (7) Santa Clara, (8) Solano and (9) Sonoma County (Figure \ref{fig:BayAreaCases}). The total population of the Bay Area is more than 7,302,000 inhabitants. The household data are sourced from census tract information using the \texttt{SimplyAnalytics} platform, which details income, investments, savings, employment and other relevant data for the year 2016 \citep{SimplyAnalytics2016CensusData}.
\subsection{Income Shock}
An income loss schedule is implemented according to industry sectors to model the shock of the COVID-19 crisis on households from economic impacts related to illness, layoffs and loss of activity due to social distancing orders. The income drop per sector is modeled according to Table \ref{tab:IndustryShock} using the 15 aggregated industry sectors from the U.S. Bureau of Economic Analysis (BEA). The hardest hit industries are assumed to be construction, retail trade, transportation, arts and entertainment \citep{Leatherby2020HowMoney}. Affected individuals are assumed to have a 100\% loss of labor income during the shelter-in-place order, which in this study is referred to as the crisis period.
\begin{table}[h!]
\centering
\caption{Percent of affected individuals for each aggregated industry sector of the U.S. Bureau of Economic Analysis (BEA)}
\begin{adjustbox}{width=0.85\textwidth}
\small
\begin{tabular}{c c p{4in} c} \hline
\textbf{No.} & \textbf{Sector} & \textbf{Description} & \textbf{Affected}$^\dagger$ \\ \hline
1 & AGR & Agriculture, forestry, fishing, and hunting & 0\% \\
2 & MIN & Mining & 0\% \\
3 & UTI & Utilities & 0\% \\
4 & CON & Construction & 50\% \\
5 & MAN & Manufacturing & 10\% \\
6 & WHO & Wholesale trade & 10\% \\
7 & RET & Retail trade & 50\% \\
8 & TRA & Transportation and warehousing & 50\% \\
9 & INF & Information & 10\% \\
10 & FIN & Finance, insurance, real estate, rental and leasing & 10\% \\
11 & PRO & Professional and business services & 10\% \\
12 & EDU & Educational services, health care and social assistance & 10\% \\
13 & ART & Arts, entertainment, recreation, accommodation and food services & 80\% \\
14 & OTH & Other services, except government & 80\% \\
15 & GOV & Government & 0\% \\ \hline
\multicolumn{4}{l}{$\dagger$: Percent of individuals in sector affected, income drop is assumed to be 100\% for affected pop.} \\ \hline \hline
\end{tabular}
\end{adjustbox}
\label{tab:IndustryShock}
\end{table}
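The sector-level shock in Table \ref{tab:IndustryShock} can be applied at the individual level with a simple Bernoulli draw per worker. This is a minimal sketch of one plausible implementation; the study's actual assignment of affected individuals may differ:

```python
import random

# Share of affected individuals per aggregated BEA sector (Table values).
AFFECTED_SHARE = {
    "AGR": 0.0, "MIN": 0.0, "UTI": 0.0, "CON": 0.5, "MAN": 0.1,
    "WHO": 0.1, "RET": 0.5, "TRA": 0.5, "INF": 0.1, "FIN": 0.1,
    "PRO": 0.1, "EDU": 0.1, "ART": 0.8, "OTH": 0.8, "GOV": 0.0,
}

def is_affected(sector: str, rng: random.Random) -> bool:
    """Draw whether an individual in `sector` loses labor income."""
    return rng.random() < AFFECTED_SHARE[sector]

def crisis_income(monthly_income: float, sector: str, rng: random.Random) -> float:
    """Labor income during the crisis: zero if affected (100% loss), else unchanged."""
    return 0.0 if is_affected(sector, rng) else monthly_income
```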
\subsection{Policy Impacts}
To investigate the impacts of policies on per capita consumption and well-being, the following three case studies A, B and C are explicitly considered:
\begin{itemize}[leftmargin=*]
\item \textbf{Case A: Base.} This is the initial base case, where neither unemployment insurance nor a stimulus benefit package is considered. In this case, households smooth their consumption during the crisis by using their savings.
\item \textbf{Case B: UI.} This is the Unemployment Insurance (UI) case, where the regular California UI benefits are considered. In this case, individuals who lose their job can receive between \$40 and \$450 per week for a maximum duration of 26 weeks (6 months). According to California Law, an individual can claim UI benefits from a minimum gross income of \$900/quarter (\$300/month). The maximum UI benefit is capped for an individual earning \$11,676/quarter (\$3,892/month) or more.
\item \textbf{Case C: CARES.} This case considers regular California UI benefits in addition to the new CARES Act stimulus package, signed into law on March 27, 2020. Although the Coronavirus Aid, Relief and Economic Security (CARES) Act contains many programs, two specific aspects are explicitly modeled here:
\begin{enumerate}
\item Unemployment Insurance Extension
\begin{enumerate}
\item \textit{Pandemic Emergency Unemployment Compensation.} Eligible individuals who exhaust their regular California state UI benefits can receive up to an additional 13 weeks (3 months) of UI benefits at the original rate, for a total of 39 weeks (9 months) of state UI benefits.
\item \textit{Pandemic Unemployment Compensation.} Eligible individuals will benefit from an additional \$600/week flat rate of UI benefit until July 31 on top of the UI and Pandemic Emergency Unemployment Compensation. This unemployment assistance is a flat rate and does not depend on prior income.
\end{enumerate}
\item Stimulus Checks \\
The U.S. government is issuing direct payments of up to \$1,200 to most Americans through the U.S. Internal Revenue Service (IRS). The stimulus checks are based on annual gross income from the 2018 tax filing year. Individuals earning \$75,000 or less per year receive \$1,200. Individuals earning more than \$75,000 receive checks reduced by \$5 for every additional \$100 above that threshold, so individuals with a gross yearly income over \$99,000 receive no stimulus check. Couples filing jointly and additional benefits for dependent children under the age of 17 (\$500 per dependent child) are not explicitly modeled due to lack of data.
\end{enumerate}
\end{itemize}
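The benefit rules in cases B and C can be sketched as simple schedules. This is a simplified reading rather than the exact statutory rules: the UI schedule below assumes the weekly benefit is approximately $1/26$ of highest-quarter wages, which reproduces the \$40 floor near the \$900/quarter eligibility threshold and the \$450 cap near \$11,676/quarter; the stimulus phase-out rate of \$5 per \$100 is the one implied by full phase-out at \$99,000:

```python
def ca_ui_weekly_benefit(quarterly_wages: float) -> float:
    """Approximate California weekly UI benefit.

    Assumes the benefit is about 1/26 of highest-quarter wages, clipped
    to the $40 minimum and $450 maximum; individuals below the
    $900/quarter eligibility floor receive nothing.
    """
    if quarterly_wages < 900.0:
        return 0.0
    return min(max(quarterly_wages / 26.0, 40.0), 450.0)

def cares_stimulus_check(annual_gross_income: float) -> float:
    """CARES Act one-time payment for a single filer: $1,200 up to
    $75,000, reduced by $5 per $100 above, fully phased out at $99,000."""
    if annual_gross_income <= 75_000.0:
        return 1_200.0
    reduction = 5.0 * (annual_gross_income - 75_000.0) / 100.0
    return max(1_200.0 - reduction, 0.0)
```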
As in all analyses of post-disaster support and social protection impacts, it is critical to consider the practical implementation of the measures, and to include in the analysis unavoidable exclusion errors. In this model, the exclusion error is estimated at 40\%, based on 2019 unemployment insurance rates in California according to the Employment Development Department (EDD) (Appendix \ref{sec:CaliforniaLabor}). To investigate the importance of the implementation of response measures, various assumptions on this parameter are explored. Excluded individuals are assumed to receive neither state UI nor CARES benefits due to ineligibility, claim processing errors, exhaustion of UI benefits, or other reasons. Undocumented immigrants, who represent 9.0\% of the total labor force in California according to the Pew Research Center \citep{Passel2016SizeRecession}, are explicitly modeled and are also expected to receive neither state nor federal benefits. The unemployment benefits, at the state and federal levels, as well as the stimulus checks, are given to individuals with a random time delay representing the time from the onset of the crisis to the benefits arriving in individual bank accounts. The time delays are modeled randomly and independently assuming a lognormal distribution with a mean of six weeks and a standard deviation of three weeks.
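Sampling such a delay requires converting the desired mean and standard deviation (in weeks) to the parameters of the underlying normal distribution. A minimal sketch, using standard moment matching (the function names are ours, not the paper's):

```python
import math
import random

def lognormal_params(mean: float, sd: float) -> tuple[float, float]:
    """Parameters (mu, sigma) of the underlying normal distribution
    for a lognormal with the given mean and standard deviation."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

def sample_delay_weeks(rng: random.Random, mean: float = 6.0, sd: float = 3.0) -> float:
    """Draw a benefit-arrival delay in weeks (mean 6, sd 3 by default)."""
    mu, sigma = lognormal_params(mean, sd)
    return rng.lognormvariate(mu, sigma)
```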
\subsection{Poverty Levels}
To investigate the impact of COVID-19 on lower income populations, two poverty levels are considered: (1) poverty, corresponding to the Low Income Level (LIL) defined by the Department of Housing and Urban Development (HUD) and (2) deep poverty, defined as half the LIL income. According to HUD, the LIL corresponds to a household gross annual income below \$25,844; thus, deep poverty corresponds to a gross annual income below \$12,922. These poverty measures closely match the California Poverty Measure (CPM), developed by the Public Policy Institute of California \citep{Bohn2019PovertyCalifornia}. From the census tract data, the poverty rate of the Bay Area is 17.1\%, and the deep poverty rate is 1.68\%.
\subsection{Modeling}
Using the census tract data, a household-level economic model is built, divided into two periods: (1) the crisis period, which simulates the duration of the shelter-in-place order and the subsequent loss of income, and (2) the recovery period. During the crisis period, affected individuals suffer an income loss (Table \ref{tab:IndustryShock}) and use precautionary savings to replenish consumption. The income shock is assumed to begin on the first day of the crisis and last until full economic reopening. During recovery, income is assumed to be fully replenished to pre-crisis levels and savings are rebuilt using a marginal savings rate of 10\%. A household's recovery time is defined as the time it takes to fully replenish its savings to pre-crisis levels. Unemployment insurance, at the state and federal levels, replenishes consumption, while the CARES single paycheck is assumed to replenish savings directly. The methodology, modeling and optimization are presented in full detail in Appendix \ref{sec:Method}.
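Under these assumptions, a household's recovery time follows directly from its savings drawdown and the 10\% marginal savings rate. A simplified sketch, assuming a constant replenishment flow out of consumption (the paper's optimization may allow this flow to vary over time):

```python
def recovery_time_months(savings_drawdown: float,
                         monthly_consumption: float,
                         marginal_savings_rate: float = 0.10) -> float:
    """Months needed to rebuild savings to the pre-crisis level, assuming a
    constant 10% of consumption is set aside each month during recovery."""
    if savings_drawdown <= 0.0:
        return 0.0  # savings never fell below the pre-crisis level
    return savings_drawdown / (marginal_savings_rate * monthly_consumption)
```

For example, a household that drew down \$3,000 of savings and consumes \$3,000 per month would recover in 10 months under this simplification.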
\section{Household Consumption and Saving Losses}
In this section, the crisis is assumed to last three months ($T_C = 3$), representing a time period starting from the Bay Area shelter-in-place order on March 16, 2020 to a full reopening on June 16, 2020. At the time of writing this report, the California-mandated shelter-in-place order is to be maintained until at least the end of May. However, the order could remain in effect for longer, since daily new cases in the Bay Area are still close to peak levels (Figure \ref{fig:BayAreaCases}).
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/ConsumptionSaving.pdf}
\caption{Histograms of per capita consumption and savings, comparing initial pre-crisis and during the crisis period for (a) case A: base, (b) case B: unemployment insurance and (c) case C: unemployment insurance and CARES Act stimulus. The income thresholds for poverty and deep poverty are plotted for comparisons.}
\label{fig:ConsumptionSaving}
\end{figure}
The histograms of initial versus crisis per capita consumption [\$/month] and savings [\$] for each case (A: base, B: UI and C: CARES), along with the poverty income levels, are shown in Figure \ref{fig:ConsumptionSaving}. The median Bay Area initial per capita consumption is \$3,989 per month. Considering consumption during the crisis period with neither unemployment insurance (UI) benefits nor federal assistance, consumption drops for most individuals, with 643,000 people falling below the poverty level (per capita consumption lower than \$2,154 per month). The California state unemployment insurance (UI) benefits help maintain consumption during the crisis, with CARES (case C) having a very strong impact on consumption levels during the crisis.
Considering both unemployment insurance and CARES benefits, the standard deviation of crisis consumption across individuals is reduced. This is represented by the lower tail of the distribution in Figure \ref{fig:ConsumptionSaving}(c). Indeed, lower income individuals can gain more from the CARES Act than their pre-crisis job revenue, since the \$600/week unemployment benefit is a flat rate and not based on prior income.
The median per capita savings in the Bay Area before the crisis is \$6,092, which represents 7.0 weeks of pre-crisis consumption. With no benefits (case A), most individuals deplete their savings to smooth consumption, with some individuals fully exhausting their precautionary savings. For case B, considering UI benefits, the decrease in savings is smaller thanks to California benefits. Finally, the residual savings are much higher with both UI and CARES (case C), since the state and federal benefits are used as alternative cash flows instead of drawing down savings to replenish consumption.
\section{Recovery Time}
The histograms of recovery time in months for the affected population (i.e., those who have an income loss due to the COVID-19 crisis) in the Bay Area are shown in Figure \ref{fig:RecoveryTime} along with the average values. The recovery time is defined as the time needed to replenish the savings that were depleted during the crisis. The crisis period is assumed to last three months.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\linewidth]{Figures/RecoveryTime.pdf}
\caption{Histograms of recovery time for affected populations considering (a) case A: base, (b) case B: unemployment insurance and (c) case C: unemployment insurance and CARES Act stimulus.}
\label{fig:RecoveryTime}
\end{figure}
The median and quartile recovery times for affected individuals are shown in Table \ref{tab:AvgRecoveryTime}. The average recovery time for case A is almost a year (11.8 months), which illustrates the severity of the economic crisis due to COVID-19. State UI benefits (case B) reduce the average recovery time by 1.4 months, to 10.4 months, since individuals replenish their savings faster. The CARES Act (case C) further reduces the average recovery time to 6.7 months, and also reduces the depth of savings depletion. Most individuals have a recovery time of less than half a month in this situation, even though the situation is very heterogeneous. Although the median recovery time dramatically decreases with the addition of CARES, the third quartile (Q3) does not drop as much. Indeed, 25\% of affected individuals will take more than 11 months to fully recover from the crisis.
\begin{table}[h!]
\centering
\caption{Median and quartile recovery times in months for affected individuals.}
\begin{tabular}{l c c c} \hline
\textbf{Cases} & \textbf{A: base} & \textbf{B: UI} & \textbf{C: CARES} \\ \hline
Q1 &8.7 &7.7 &0.0 \\
median &11.5 &10.1 &6.3 \\
Q3 &15.1 &12.9 &11.2 \\ \hline \hline
\end{tabular}
\label{tab:AvgRecoveryTime}
\end{table}
Altogether, full recovery takes more than 12 months in all cases, as illustrated in Figure \ref{fig:RecoveryCurve}. The figure shows total household savings as a percentage of the pre-crisis level. The recovery remains long because, even under the most optimistic assumptions, some individuals take a long time to fully recover.
Moreover, it is important to note that this model assumes that individuals regain full employment and income as soon as the crisis is over, and it does not account for the longer-term macroeconomic repercussions of the pandemic. In reality, the drop in demand while households rebuild their assets (and firms rebuild their balance sheets), together with the uncertainty in the timeline of the COVID-19 pandemic, is likely to keep incomes depressed for a long period. Introducing this macroeconomic feedback will be a priority for future work.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{Figures/RecoveryCurve.pdf}
\caption{Recovery curve for the Bay Area using total household savings as a function of time for cases A (base), B (UI) and C (CARES). For case C, a confidence interval is shown based on uncertainty in exclusion rate (55\% to 10\%).}
\label{fig:RecoveryCurve}
\end{figure}
\section{Poverty and Policy Impact}
In this section, the impact of different policies on lower income individuals is evaluated, using the poverty and deep poverty levels defined above. Figure \ref{fig:IncomeLevels} shows the deep poverty and poverty rates, as well as the increase in the number of individuals under those poverty levels. The poverty rates for cases A, B and C are computed based on consumption levels at the end of the crisis, and represent temporary poverty rates, as opposed to the annual poverty rates that are traditionally reported. For case C, uncertainty in implementation is accounted for using a median exclusion rate of 40\%, in addition to worst-case and best-case scenarios of 55\% and 10\%.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth, center]{Figures/IncomeLevels.pdf}
\caption{Bay Area poverty and deep poverty rates, as well as increase in poverty populations considering initial, case A, case B and case C. For case C with CARES, uncertainty is given based on exclusion rate with a median of 40\% and upper and lower bounds of 55\% and 10\% respectively.}
\label{fig:IncomeLevels}
\end{figure}
Under case A, the deep poverty and poverty rates temporarily increase dramatically from 1.7\% to 9.5\% and from 17.1\% to 25.9\% respectively, from pre-crisis to post-crisis levels. With no social protection, the Bay Area could have an additional 643,000 people in poverty. Cases B and C reduce the increase in poverty rates by providing income and benefits to impoverished individuals. Under case C, with an assumed 40\% exclusion rate, the deep poverty rate drops from 9.5\% (assuming no social benefits) to 4.9\% with CARES. The poverty rate would still increase by 2.0 percentage points even with the implementation of state and federal assistance.
\begin{table}[h!]
\centering
\caption{Pre and post-crisis deep poverty and poverty rates in the Bay Area.}
\begin{tabular}{l c c c c} \hline
&\textbf{Initial} &\textbf{A: base} &\textbf{B: UI} &\textbf{C: CARES} \\ \hline
Deep poverty &1.7\% &9.5\% &7.1\% &4.9\% [2.5\% - 6.2\%]$^\dagger$ \\
Poverty &17.1\% &25.9\% &23.9\% &19.1\% [15.7\% - 20.8\%]$^\dagger$ \\ \hline
\multicolumn{5}{l}{$\dagger$: confidence interval on exclusion rate [55\% - 10\%]} \\ \hline \hline
\end{tabular}
\label{tab:PovertyRate}
\end{table}
Given the uncertainty around the duration of the crisis period, the effects of the different policies are evaluated for crisis periods of $T_C = $ 2, 3, 6 and 9 months. Figure \ref{fig:CrisisTimeImpact} illustrates the percentage of the Bay Area population under each lower income level for all three policies, evaluated using the four different crisis periods. Cases B and C reduce the number of individuals at each lower income level. For cases B and C, the duration of the crisis exacerbates the financial strain on the Bay Area population, reflected by the increase in the percentage of the population at each lower income level.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/CrisisTimeImpact.pdf}
\caption{Impact of crisis time on deep poverty and poverty rates in the Bay Area for cases A, B and C. The shaded area for case C represents the uncertainty in the exclusion rate with a likely estimate of 40\% and upper and lower bounds of 55\% and 10\% for worst-case and best-case implementation scenarios respectively.}
\label{fig:CrisisTimeImpact}
\end{figure}
Most notably, case C, considering both UI and CARES benefits, reduces the number of individuals at the poverty levels, with a minimum at 3 months. This is because the \$600/week extra UI benefit expires at approximately 4 months. Thus, individuals who are laid off and earn less than the UI benefit will get a bump in their income and consumption. Indeed, in the short term, some individuals may benefit from staying unemployed for up to 4 months, thanks to the higher income from social benefits.
\section{Inequality and Geospatial Distribution}
The average household consumption losses, both total [\$/month] and relative [\%], saving losses and recovery times (for affected individuals) by pre-crisis income quintile are shown in Figure \ref{fig:QuintileA} for cases A and C as a comparison, assuming a crisis period of three months and a 40\% exclusion rate for unemployment insurance and CARES. For the base case A with no assistance, although the total consumption losses [\$/month] are higher for higher income individuals, the reverse is true for relative consumption losses [\%]. The lowest income quintile has an average relative consumption loss of 18.3\%, compared to only 5.9\% for the highest income earning individuals. Furthermore, the average recovery time for affected individuals in the lowest income quintile is double that of the highest income quintile, at 14.3 versus 7.2 months. Without social protection, the lowest income population is the most impacted by the coronavirus crisis.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/QuintilePlotsA.pdf}
\caption{Household average consumption losses, total [\$/month] and relative [\%], saving losses and recovery time by income quintile for cases A (base) and C (CARES) assuming a crisis period of three months.}
\label{fig:QuintileA}
\end{figure}
On the other hand, with government assistance (case C), the consumption losses are smaller for all income quintiles, with the lowest drop for the lowest income individuals. Most likely, average consumption losses are the smallest for the lowest quintile since benefits from the assistance program can be superior to pre-crisis income in certain cases. In addition, unemployment insurance and the federal stimulus package lead to a more equal distribution of average saving losses and recovery time.
The geospatial distribution of average household consumption losses [\%] in the Bay Area is shown in Figure \ref{fig:ConsumptionLossesMap} for cases A (base) and C (CARES). Overall, the economic impacts of the business interruptions due to the coronavirus are felt throughout all Bay Area counties. The effects of COVID-19 are particularly felt in Alameda and Contra Costa counties, where the average consumption losses exceed 15\% in multiple regions. The cities of South San Francisco, Richmond, San Leandro and Concord are particularly affected. In comparison, the addition of unemployment insurance and individual benefits from the CARES Act leads to a lower average consumption loss across all counties in the Bay Area, assuming a 40\% exclusion rate for case C. Certain regions of the Bay Area even see a moderate increase in consumption, due to the social benefits of CARES. Contra Costa and Alameda counties remain the hardest hit areas, with consumption losses greater than 10\% in multiple regions.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/ConsumptionLossesMap.png}
\caption{Spatial distribution of average relative consumption change [\%] per capita in the Bay Area for case A: no benefits (left) and case C: CARES (right). Red indicates an average consumption loss (negative values) and blue indicates an average consumption gain (positive values).}
\label{fig:ConsumptionLossesMap}
\end{figure}
The geospatial distribution of average recovery time for affected individuals in the Bay Area is illustrated in Figure \ref{fig:RecoveryTimeMap} for cases A and C, assuming a crisis period of three months. With no social protection, vast regions of the Bay Area have an average recovery time of over 9 months, even exceeding 12 months in certain cases. CARES and unemployment insurance significantly diminish the recovery time of most regions in all nine counties of the San Francisco Bay Area. However, the average recovery time of affected individuals is very heterogeneous. Densely populated regions in San Jose, San Francisco and the East Bay can see average recovery times exceed a year, while some rural regions, such as those near Marin and Napa counties, drop below three months.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Figures/RecoveryTimeMap.png}
\caption{Spatial distribution of average recovery time [months] per capita for affected individuals in the Bay Area for case A: no benefits (left) and case C: CARES (right). Darker purple indicates a longer recovery time.}
\label{fig:RecoveryTimeMap}
\end{figure}
\section{Assumptions, Limitations and Future Work}
There are several limitations to the modeling presented in this study, specifically, (1) the distribution of income loss by industry sector, (2) the exclusion rate of unemployment insurance and CARES and (3) delay in payments for filed claims. The underlying assumptions, as well as the rationale, are discussed in this section.
Firstly, the average loss of income by industry sector, according to the broad categories of the Bureau of Economic Analysis (BEA), was implemented to reflect the direct impact of the coronavirus and business interruptions, which particularly hit non-essential service sectors. The hardest hit industry sectors were assumed to be arts, entertainment, recreation, accommodation and food services (BEA Sector 13 ART) and other services except government (BEA Sector 14 OTH), followed closely by construction (BEA Sector 4 CON), retail trade (BEA Sector 7 RET), and transportation and warehousing (BEA Sector 8 TRA); see Table \ref{tab:IndustryShock}. This is based on reports from the Bureau of Labor Statistics (BLS), which estimates that the hardest hit sectors are the hospitality and leisure services, especially the accommodation and food sectors \citep{Franck2020HereImpact}. In a survey of 10,000 participants, \cite{Coibion2020TheSpending} found that 50\% of respondents had an income loss averaging more than \$5,000 due to the coronavirus lockdown.
However, there is now growing evidence of second-round effects, with unemployment growing in sectors that are not directly impacted by the containment, but by supply chain effects (e.g., affected firms reducing demand for other firms) or the aggregate drop in demand. To estimate these effects, a next step is to connect the household model presented here to an Input-Output model, such as the one used in \cite{Guan2020} to look at how COVID-19 affects global supply chains, or a Computable General Equilibrium (CGE) model.
Income loss for affected individuals is assumed to start at the beginning of the crisis and lasts the duration of the crisis. At the start of the recovery phase, affected individuals regain full employment and the marginal rate of savings is assumed to be 10\% of consumption. This is a conservative estimate, as many employees who were laid off during the crisis will have difficulties re-entering the economy due to the large macroeconomic effects of the coronavirus and the potential for a recession \citep{Avalos2020CoronavirusLockdowns}. In addition, social distancing measures will progressively be relaxed and some service activities, such as food services, bars and performing arts will only be fully operational in the distant future. Furthermore, household consumption during the crisis is assumed to be constant, where households distribute income revenue, state and federal assistance and savings to replenish consumption. In reality, the crisis might exacerbate savings depletion at the onset due to the delay in assistance.
Under the assumed distribution of income loss by sector, and their representative share of the Bay Area, the unemployment rate due to coronavirus is estimated at 27.4\% by the end of the crisis. Certain estimates, such as the calculations from the Federal Reserve Bank of St. Louis, using filed unemployment claims from the Bureau of Labor Statistics (BLS), place the unemployment rate in the U.S. at as much as 32.1\% by the end of Q2 \citep{Faria-e-Castro2020Back-of-the-EnvelopeRate}. As of April 23, the Labor Department reported that 26 million people in the United States had filed for unemployment insurance in solely the last five weeks \citep{Cohen2020JoblessCrisis}; 3.4 million of these are in California \citep{Avalos2020CoronavirusLockdowns}. From the Bureau of Labor Statistics (BLS), the California workforce is 19,485,000 as of 2019 with 731,000 unemployed individuals, representing a 3.9\% unemployment rate. Adding the 3.4 million claimants yields a 21.2\% unemployment rate in California, but this only includes the individuals who had filed for unemployment as of April 23, 2020, and the number is likely to grow by the end of the crisis. Here we assume that the nine-county Bay Area is approximately representative of the state of California as a whole.
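The back-of-the-envelope arithmetic above can be reproduced directly, using only the figures cited in this paragraph:

```python
# Figures quoted in this paragraph (BLS workforce data and filed UI claims).
workforce = 19_485_000       # California workforce, 2019
unemployed_2019 = 731_000    # unemployed individuals, 2019
new_claims = 3_400_000       # California UI claims filed as of April 23, 2020

# Unemployment rate after adding the new claimants to the 2019 unemployed.
crisis_rate = (unemployed_2019 + new_claims) / workforce
print(f"unemployment rate including new claims: {crisis_rate:.1%}")  # 21.2%
```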
Secondly, the exclusion rate of state and federal assistance (CARES) is assumed to be 40\%. However, there are many uncertainties on this rate, considering the unprecedented scale of the stimulus package. With our median assumption, 40\% of all unemployed individuals will not receive state and/or federal unemployment assistance during the crisis. This is due to issues related to eligibility and to implementation challenges, such as erroneous data or system backlog. According to the Employment Development Department (EDD) of the State of California, the insured unemployment rate (IUR) is 2.19\% (Appendix \ref{sec:CaliforniaLabor}), while the unemployment rate is 3.9\% in 2019. This means that in 2019, before the crisis, the exclusion rate was 43.8\%. Other statistics estimate an exclusion rate as high as 59\% in California \citep{Badger2020StatesUndo.}. Here, it is assumed that the state UI exclusion rate will stay constant during the crisis.
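The implied pre-crisis exclusion rate follows directly from the two quoted 2019 rates; a minimal sketch:

```python
# Implied 2019 UI exclusion rate from the quoted IUR and unemployment rate.
iur = 0.0219               # insured unemployment rate (EDD, see Appendix)
unemployment_rate = 0.039  # California unemployment rate, 2019

# Share of unemployed NOT covered by UI: 1 - (insured / unemployed).
exclusion_rate = 1 - iur / unemployment_rate
print(f"implied exclusion rate: {exclusion_rate:.1%}")  # 43.8%
```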
The rate may, however, increase as a result of the number of people losing their jobs. It could also decrease, since the CARES Act introduced the Pandemic Unemployment Assistance (PUA) program to boost the percentage of unemployed individuals receiving benefits, especially gig workers, freelancers, contractors and self-employed individuals. However, as of April 10, the EDD of California had not received guidelines from the federal level on how to implement the program \citep{Castaneda2020AllCoronavirus}.
Thirdly, the delay in state and federal assistance is assumed to follow a lognormal distribution with a mean of six weeks from the start of the crisis and a standard deviation of three weeks. In reality, the delay in stimulus checks could be much longer: individuals who file taxes without linking their banking information with the IRS could receive checks as late as September. Due to the unprecedented volume of 26 million claims across the union in just five weeks, the processing of claims and disbursement of federal unemployment insurance (UI), as well as single-payment stimulus checks, is likely to be severely delayed \citep{Hernandez2020CoronavirusApplications, Wire2020NewlyBenefits}. Widespread website glitches, system crashes and backlogs on phone help lines have exacerbated the problem and contributed to the lag in UI payments being issued \citep{Badger2020StatesUndo.}.
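The two stated moments determine the lognormal parameters by moment matching. The sketch below is our illustration; the text specifies only the mean and standard deviation, and the moment-matched parameters $\mu$ and $\sigma$ of the underlying normal are derived here, not given in the text:

```python
import math
import random

# Moment-matched lognormal for the assistance delay: mean 6 weeks, std 3 weeks.
# For a lognormal with mean m and std s: sigma^2 = ln(1 + (s/m)^2),
# mu = ln(m) - sigma^2 / 2.
mean_weeks, std_weeks = 6.0, 3.0
sigma2 = math.log(1 + (std_weeks / mean_weeks) ** 2)
mu = math.log(mean_weeks) - sigma2 / 2
sigma = math.sqrt(sigma2)

random.seed(0)
delays = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
sample_mean = sum(delays) / len(delays)
print(f"mu={mu:.3f}, sigma={sigma:.3f}, sample mean = {sample_mean:.2f} weeks")
```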
There are many potential extensions and applications of this study. Firstly, the long-term macroeconomic impacts of COVID-19 were mainly ignored, but in reality they could depress income for a longer period than the crisis itself. The income drop could spread to other industry sectors that feel the secondary effects of the losses concentrated in the service and hospitality sectors. Additionally, the role of uncertainty in households' decision-making could change the rate of savings depletion and the severity of the impact of the crisis: households do not have perfect information about the duration or depth of the crisis. Finally, the impact of simultaneous exogenous shocks, such as natural disasters, is of great concern, since lower-income populations will have depleted most of their savings and are vulnerable to another shock. For instance, Puerto Rico experienced a 5.4 magnitude earthquake on May 2 during the lockdown, with several buildings damaged. Although mild, this scenario highlights the potential for a real crisis due to an added shock.
\section{Conclusion}
In this study, we propose a household-level model to assess the socio-economic impacts of COVID-19 on per capita consumption and savings, and the benefits from government interventions. Assuming an income loss distribution for various sectors, the model can provide estimates of households' consumption losses (a proxy for well-being), depletion of precautionary savings, and recovery time. The San Francisco Bay Area was used as a case study. The main findings of this study are the following.
First, without any social protection, COVID-19 would lead to a massive economic shock to the system. In simulations of a 3-month lockdown, the poverty rate increases from 17.1\% to 25.9\% during the crisis in the Bay Area. Household savings and consumption drop significantly, and the average recovery time for individuals is almost one year. The long recovery time after the crisis will be further exacerbated by a general decrease in demand, people’s change in consumption behavior, and general slowdown of economic activities.
Second, government benefits, both state UI and the federal CARES stimulus, decrease the amplitude and duration of the crisis. In the likely scenario of a 3-month crisis period, the increase in poverty can be limited to 19\% (from 17.1\% pre-crisis), and the average time of recovery almost halved to 6.7 months, thanks to the state UI and the federal stimulus package. However, the recovery is spatially heterogeneous, as certain communities will be impacted more than the average and could take over a year to replenish their lost savings.
A near perfect implementation of CARES Act, with 90\% of unemployed individuals receiving benefits, could even lead to a slight temporary decrease in the poverty rate in the Bay Area from 17.1\% to 16.5\%, since the unemployment compensation is higher than pre-crisis income for certain individuals.
Further work will explore the impact of indirect and macro-level effects, the role of uncertainty in households' decision-making, the consequences of multiple waves of social distancing, and the possible effects of simultaneous exogenous shocks (e.g., natural disasters). Indeed, these results are particularly important when considering the risk of multiple shocks: as the COVID-19 crisis forces most households to use their precautionary savings (especially in countries with weak social protection systems), the population becomes much more vulnerable to any other shock. These include natural disasters (e.g., tropical storms, with the Caribbean hurricane season starting on June 1st, or earthquakes, such as the 5.5 magnitude earthquake that hit Zagreb, Croatia on March 22, 2020 during the lockdown) as well as the secondary financial and economic impacts of the expected recession.
Beyond this first modeling exercise, the model can be used in other countries or regions, and provide assessment of the potential impact from the ``shelter-in-place mandates", as well as the benefits from different options to provide emergency income support.
\newpage
\section{Introduction}
Let $G$ be a compact connected Lie group with Lie algebra $\g$. Suppose that $M$ is a Hamiltonian $G$-space, i.e. a symplectic manifold equipped with a symplectic action of $G$ and equivariant moment map $\mu : M \longrightarrow \g^*.$ The symplectic or Marsden--Weinstein \cite{MarsdenWeinstein} quotient of $M$ by $G$ at level $\xi\in\g^*$ is the topological space $$M\sll{\xi}G\coloneqq\mu^{-1}(\xi)/G_{\xi},$$ where $G_{\xi}\subset G$ is the $G$-stabilizer of $\xi.$ If $G$ acts freely on $\mu^{-1}(\xi),$ then $M\sll{\xi}G$ is a smooth symplectic manifold. In the absence of this freeness assumption, $M\sll{\xi}G$ is a stratified symplectic space in the sense of Sjamaar--Lerman \cite{SjamaarLerman}.
The purpose of this paper is to show that certain integrable systems on $\g^*$ allow us to express generic symplectic quotients of a Hamiltonian $G$-space $M$ as symplectic quotients of the same manifold
$M$ by the action of a compact torus. Such integrable systems include the Gelfand--Cetlin systems constructed by Guillemin--Sternberg \cite{GuilleminSternbergGC,GuilleminSternbergThimm}
for unitary and special orthogonal groups, as well as Hoffman--Lane's more recent generalizations of Gelfand--Cetlin systems \cite{Lane} to arbitrary Lie type.
An example of our main result arises in classical mechanics \cite{GuilleminSternbergCollective}. Suppose that we are given a Hamiltonian $\operatorname{SO}(3)$-space $M$ with an invariant Hamiltonian function $H:M\longrightarrow\mathbb{R}$. The $\operatorname{SO}(3)$-action gives rise to two Poisson-commuting conserved quantities: the total angular momentum, and the angular momentum in some fixed direction in the Lie algebra of $\operatorname{SO}(3)$ corresponding to a choice of maximal torus. These quantities give the components of a moment map for a densely defined 2-torus action on $M,$ coming from the Gelfand--Cetlin system of Guillemin--Sternberg for the case of $\operatorname{SO}(3)$.\footnote{The square of the total angular momentum is a smooth function, but the orbits of its Hamiltonian flow do not have constant period, and so it does not generate a circle action. Taking the square root gives a function whose Hamiltonian flow generates a circle action, but which is only continuous, not differentiable, at zero. It therefore does not define a Hamiltonian flow at zero.} Our main result shows that for a non-zero value $\xi$ of the angular momentum, the symplectic quotient $M\sll{\xi}\operatorname{SO}(3)$ coincides with an appropriate symplectic quotient of $M$ under the densely defined torus action. See \cite{GuilleminSternbergCollective} for more examples of these techniques.
\subsection{Main result} We introduce the notion of a Gelfand--Cetlin datum $(\lambda_{\text{big}},\g^*_{\text{s-reg}})$ in Definition \ref{Definition: Main definition}. This amounts to $\lambda_{\text{big}}$ being a continuous map on $\g^*$ that restricts to a Poisson moment map for a Hamiltonian action of a compact torus $\mathbb{T}_{\text{big}}$ on an open dense subset $\g^*_{\text{s-reg}}\subset\g^*$, along with some extra conditions that capture salient properties of the classical Gelfand--Cetlin systems. One of these conditions is that the open, symplectic submanifold $M_{\text{s-reg}}\coloneqq\mu^{-1}(\g^*_{\text{s-reg}})\subset M$ be a Hamiltonian $\mathbb{T}_{\text{big}}$-space with moment map
$\lambda_M\coloneqq(\lambda_{\text{big}}\circ\mu)\big\vert_{M_{\text{s-reg}}}$ for any Hamiltonian $G$-space $M$ with moment map $\mu:M\longrightarrow\g^*$.
The results of Guillemin--Sternberg \cite{GuilleminSternbergGC,GuilleminSternbergThimm} imply that Gelfand--Cetlin data exist for all unitary and special orthogonal groups, while more recent results of Hoffman--Lane \cite{Lane} imply that such data exist in all Lie types.
The following is the main result of our paper.
\begin{theorem*}\label{Theorem: Main theorem}
Let $G$ be a compact connected Lie group, and $M$ a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. Suppose that $(\lambda_{\emph{big}},\g^*_{\emph{s-reg}})$ is a Gelfand--Cetlin datum, and consider a point $\xi\in\g^*_{\emph{s-reg}}$.
\begin{itemize}
\item[\textup{(i)}] The torus $\mathbb{T}_{\emph{big}}$ acts freely on $\lambda_M^{-1}(\lambda_{\emph{big}}(\xi))$ if and only if $G_{\xi}$ acts freely on $\mu^{-1}(\xi)$. In this case, there is a canonical symplectomorphism $M\sll{\xi}G\cong M_{\emph{s-reg}}\sll{\lambda_{\emph{big}}(\xi)}\mathbb{T}_{\emph{big}}$.
\item[\textup{(ii)}] There is a canonical isomorphism $M\sll{\xi}G\cong M_{\emph{s-reg}}\sll{\lambda_{\emph{big}}(\xi)}\mathbb{T}_{\emph{big}}$ of stratified symplectic spaces.
\end{itemize}
\end{theorem*}
Part (ii) is strictly more general than (i). Part (i) is included for the sake of exposition and accessibility.
One may regard this theorem as an approach to abelianizing the generic symplectic quotients of a Hamiltonian $G$-space $M$, i.e. to presenting such quotients as symplectic quotients by a compact torus. An alternative approach to abelianization is pursued in the work of Guillemin--Jeffrey--Sjamaar \cite{guillemin-jeffrey-sjamaar}.
\subsection{Organization}
Section \ref{Section: Background and conventions} briefly establishes some of our conventions concerning Lie theory and Hamiltonian geometry. Section \ref{Section: Gelfand--Cetlin data} subsequently motivates and contextualizes the notion of a Gelfand--Cetlin datum. Our main result is then proved in Section \ref{Section: The abelianization theorem} for smooth quotients. A generalization to stratified symplectic spaces is formulated and proved in Section \ref{Section: Generalization to stratified symplectic spaces}.
\subsection*{Acknowledgements} The authors would like to thank Megumi Harada and Jeremy Lane for exceedingly useful conversations. P.C. acknowledges support from a Utah State University startup grant, while J.W. acknowledges support from Simons Collaboration Grant \# 579801.
\section{Background and conventions}\label{Section: Background and conventions}
This section establishes some of our notation and conventions regarding Lie theory and Hamiltonian geometry.
\subsection{Tori}
The Lie algebra of the unitary group $\operatorname{U}(1)$ is the real vector space $i\mathbb{R}\subset\mathbb{C}$ of purely imaginary numbers. We will identify this vector space with $\mathbb{R}$ in the obvious way. It follows that $\mathbb{R}^k$ is the Lie algebra of $\operatorname{U}(1)^{k}$ for all non-negative integers $k$, and that $$\mathbb{R}^k\longrightarrow\operatorname{U}(1)^k,\quad (x_1,\ldots,x_{k})\mapsto (e^{ix_1},\ldots,e^{ix_k})$$ is the exponential map for $\operatorname{U}(1)^k$. In certain contexts, we will implicitly use the dot product to regard $\mathbb{R}^k$ as the dual of the Lie algebra of $\operatorname{U}(1)^k$.
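For concreteness, the exponential map and the resulting lattice can be sketched numerically, with $\operatorname{U}(1)$ realized as the unit complex numbers (an illustrative aside, not part of the paper's development):

```python
import cmath
import math

# The exponential map R^k -> U(1)^k, with U(1) realized as unit complex numbers.
def exp_torus(x):
    return tuple(cmath.exp(1j * t) for t in x)

# ker(exp) = (2*pi*Z)^k for this torus, so the lattice
# Lambda = (1/(2*pi)) ker(exp) is the integer lattice Z^k.
identity = exp_torus((2 * math.pi, -4 * math.pi, 0.0))
assert all(abs(w - 1) < 1e-9 for w in identity)  # lattice points map to the identity
print(exp_torus((math.pi / 2,)))  # a quarter turn in a single U(1) factor
```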
\subsection{General compact connected Lie groups}\label{Subsection: Lie-theoretic preliminaries}
Let $G$ be a compact connected Lie group with Lie algebra $\g$ and rank $\ell$. One has the adjoint representations $\mathrm{Ad}:G\longrightarrow\operatorname{GL}(\g)$ and $\mathrm{ad}:\g\longrightarrow\mathfrak{gl}(\g)$, as well as the coadjoint representations $\mathrm{Ad}^*:G\longrightarrow\operatorname{GL}(\g^*)$ and $\mathrm{ad}^*:\g\longrightarrow\mathfrak{gl}(\g^*)$. The $G$-representations induce $G$-actions on $\g$ and $\g^*$, and thereby give rise to stabilizer subgroups
$$G_x\coloneqq\{g\in G:\mathrm{Ad}_g(x)=x\}\quad\text{and}\quad G_{\xi}\coloneqq\{g\in G:\mathrm{Ad}_g^*(\xi)=\xi\}$$ of $G$ for all $x\in\g$ and $\xi\in\g^*$. On the other hand, the $\g$-representations allow us to define centralizers $$\g_x\coloneqq\{y\in\g:\mathrm{ad}_y(x)=0\}\quad\text{and}\quad\g_{\xi}\coloneqq\{y\in\g:\mathrm{ad}^*_y(\xi)=0\}$$ for all $x\in\g$ and $\xi\in\g^*$. It follows that $\g_x$ (resp. $\g_{\xi}$) is the Lie algebra of $G_x$ (resp. $G_{\xi}$). Let us also note that $\dim\g_x\geq\ell$ and $\dim\g_{\xi}\geq\ell$ for all $x\in\g$ and $\xi\in\g^*$. The regular loci $$\g_{\text{reg}}\coloneqq\{x\in\g:\dim\g_x=\ell\}\quad\text{and}\quad\g^*_{\text{reg}}\coloneqq\{\xi\in\g^*:\dim\g_{\xi}=\ell\}$$ are open, dense, $G$-invariant subsets of $\g$ and $\g^*$, respectively. An element $x\in\g$ (resp. $\xi\in\g^*$) then belongs to $\g_{\text{reg}}$ (resp. $\g_{\text{reg}}^*$) if and only if $\g_x$ (resp. $\g_{\xi}$) is a Cartan subalgebra of $\g$. This is equivalent to $G_x$ (resp. $G_{\xi}$) being a maximal torus of $G$.
A few remarks on maximal tori and Cartan subalgebras are warranted. Let $\mathfrak{t}\subset\g$ be a Cartan subalgebra, and write $T\subset G$ for the maximal torus with Lie algebra $\mathfrak{t}$. The exponential map $\exp:\g\longrightarrow G$ then restricts to a surjective homomorphism $\exp\big\vert_{\mathfrak{t}}:\mathfrak{t}\longrightarrow T$ of abelian groups. The kernel of the latter is a free $\mathbb{Z}$-submodule of $\mathfrak{t}$ with rank equal to $\ell$. It follows that the same is true of
$$\Lambda_{\mathfrak{t}}\coloneqq\frac{1}{2\pi}\mathrm{ker}\left(\exp\big\vert_{\mathfrak{t}}\right)\subset\mathfrak{t}.$$
\subsection{Hamiltonian geometry}
Let $(M,\sigma)$ be a Poisson manifold, i.e. $\sigma\in H^0(M,\Lambda^2TM)$ is a Poisson bivector field on the manifold $M$. Note that $\sigma$ may be regarded as a skew-symmetric bilinear map from two copies of $T^*M$ to the trivial rank-$1$ vector bundle over $M$. Contracting $\sigma$ with cotangent vectors in the first argument then determines a vector bundle morphism $\sigma^{\vee}:T^*M\longrightarrow TM$. One calls $(M,\sigma)$ \textit{non-degenerate} if $\sigma^{\vee}$ is an isomorphism. In this case, $(\sigma^{\vee})^{-1}=\omega^{\vee}$ for a unique symplectic form $\omega\in H^0(M,\Lambda^2 T^*M)$, where $\omega^{\vee}:TM\longrightarrow T^*M$ is the vector bundle morphism obtained by contracting $\omega$ with tangent vectors in the first argument. This process gives rise to a bijective correspondence between symplectic structures on $M$ and non-degenerate Poisson structures on $M$. We will thereby make no distinction between symplectic manifolds and non-degenerate Poisson manifolds.
If $(M,\sigma)$ is a Poisson manifold, then $\sigma$ can be recovered from the Poisson bracket $\{\cdot,\cdot\}$ that it induces. This bracket associates to smooth functions $f_1,f_2:M\longrightarrow\mathbb{R}$ the smooth function
$$\{f_1,f_2\}\coloneqq\sigma(\mathrm{d}f_1\wedge\mathrm{d}f_2):M\longrightarrow\mathbb{R}.$$ At the same time, one defines the Hamiltonian vector field of a smooth function $f: M\longrightarrow\mathbb{R}$ by $X_f\coloneqq-\sigma^{\vee}(\mathrm{d}f)\in H^0(M,TM)$. It follows that $$\{f_1,f_2\}=-X_{f_1}(f_2)=X_{f_2}(f_1)$$ for all smooth functions $f_1,f_2:M\longrightarrow\mathbb{R}$.
Now let $G$ be a compact connected Lie group with Lie algebra $\g$ and exponential map $\exp:\g\longrightarrow G$. If $G$ acts smoothly on a manifold $M$, then each $\eta\in\g$ determines a generating vector field $\eta_M\in H^0(M,TM)$ by
$$(\eta_M)_m\coloneqq\frac{d}{dt}\bigg\vert_{t=0}\exp(-t\eta)\cdot m$$ for all $m\in M$. A Poisson manifold $(M,\sigma)$ with a smooth $G$-action will be called a \textit{Poisson Hamiltonian $G$-space} if $\sigma$ is $G$-invariant and $M$ comes equipped with a \textit{moment map}. This last term refers to a $G$-equivariant smooth map $\mu: M\longrightarrow\g^*$ satisfying $X_{\mu^{\eta}}=\eta_M$ for all $\eta\in\g$, where $\mu^{\eta}:M\longrightarrow\mathbb{R}$ is the result of pairing $\mu$ with $\eta$ pointwise. We will reserve the term \textit{Hamiltonian $G$-space} for a Poisson Hamiltonian $G$-space whose underlying Poisson structure is symplectic.
It will be advantageous to recall the Poisson Hamiltonian $G$-space structure on $\g^*$. The Poisson bracket on $\g^*$ is given by
$$\{f_1,f_2\}(\xi)=\xi([\mathrm{d}_{\xi}f_1,\mathrm{d}_{\xi}f_2])$$ for all smooth functions $f_1,f_2:\g^*\longrightarrow\mathbb{R}$ and points $\xi\in\g^*$, where $\mathrm{d}_{\xi}f_1,\mathrm{d}_{\xi}f_2\in(\g^*)^*=\g$ denote the differentials of $f_1,f_2$ at $\xi$, respectively. One finds that $\g^*$ is a Poisson Hamiltonian $G$-space with respect to the coadjoint action, and with the identity $\g^*\longrightarrow\g^*$ serving as the Poisson moment map.
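For $\g=\mathfrak{so}(3)$, identifying $\mathfrak{so}(3)\cong(\mathbb{R}^3,\times)$ and $\g^*\cong\mathbb{R}^3$ via an invariant inner product, the bracket above reads $\{f_1,f_2\}(\xi)=\xi\cdot(\nabla f_1(\xi)\times\nabla f_2(\xi))$. The following is a minimal numerical sketch of this special case (our illustration, assuming that identification):

```python
# Lie-Poisson bracket on so(3)* = R^3: {f1,f2}(xi) = xi . (grad f1 x grad f2),
# with gradients computed by central finite differences.
def grad(f, xi, h=1e-6):
    g = []
    for i in range(3):
        up = list(xi); up[i] += h
        dn = list(xi); dn[i] -= h
        g.append((f(up) - f(dn)) / (2 * h))
    return g

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bracket(f1, f2, xi):
    return dot(xi, cross(grad(f1, xi), grad(f2, xi)))

xi = [1.0, 2.0, 3.0]
# Linear coordinate functions f_x(xi) = xi . x have d f = x, so
# {f_e1, f_e2}(xi) = xi . (e1 x e2) = xi_3 (up to finite-difference error):
f1 = lambda v: v[0]
f2 = lambda v: v[1]
print(bracket(f1, f2, xi))
# The Casimir |xi|^2 Poisson-commutes with everything: grad = 2*xi is
# parallel to xi, so the triple product vanishes.
casimir = lambda v: dot(v, v)
print(bracket(casimir, f1, xi))
```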
\section{Gelfand--Cetlin data}\label{Section: Gelfand--Cetlin data}
In this section, we define Gelfand--Cetlin data and introduce their main properties. This begins with the definition itself in \ref{Subsection: Definition}. The existence of Gelfand--Cetlin data is addressed in \ref{Subsection: Existence}, while concrete techniques for constructing such data are discussed in \ref{Subsection: Integrality} and \ref{Subsection: GC torus}. In \ref{Subsection: GS}, we describe concrete Gelfand--Cetlin data for unitary groups.
\subsection{Definition and relation to integrable systems}\label{Subsection: Definition}
Let $G$ be a compact connected Lie group with Lie algebra $\g$ and rank $\ell$. Consider the quantities
$$\mathrm{u}\coloneqq\frac{1}{2}(\dim\g-\ell)\quad\text{and}\quad\mathrm{b}\coloneqq\frac{1}{2}(\dim\g+\ell),$$ and introduce the following tori of \textit{small}, \textit{intermediate}, and \textit{big} ranks: $$\mathbb{T}_{\text{small}}\coloneqq\operatorname{U}(1)^{\ell},\quad\mathbb{T}_{\text{int}}\coloneqq\operatorname{U}(1)^{\mathrm{u}},\quad\text{and}\quad\mathbb{T}_{\text{big}}\coloneqq\mathbb{T}_{\text{small}}\times\mathbb{T}_{\text{int}}\cong\operatorname{U}(1)^{\mathrm{b}}.$$ The respective Lie algebras of these tori are
$$\mathbb{R}_{\text{small}}\coloneqq\mathbb{R}^{\ell},\quad\mathbb{R}_{\text{int}}\coloneqq\mathbb{R}^{\mathrm{u}},\quad\text{and}\quad\mathbb{R}_{\text{big}}\coloneqq\mathbb{R}_{\text{small}}\times\mathbb{R}_{\text{int}}\cong\mathbb{R}^{\mathrm{b}}.$$
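As a quick sanity check on these counts (our illustration, not from the paper): $\operatorname{U}(n)$ has $\dim\g=n^2$ and $\ell=n$, so $\mathrm{u}=n(n-1)/2$ and $\mathrm{b}=n(n+1)/2$, while $\operatorname{SO}(3)$ gives $\mathrm{u}=1$ and $\mathrm{b}=2$, matching the two Poisson-commuting conserved quantities in the angular-momentum example from the introduction.

```python
# u = (dim g - l)/2 and b = (dim g + l)/2 for a compact group of rank l.
def torus_ranks(dim_g, rank):
    return (dim_g - rank) // 2, (dim_g + rank) // 2

# U(n): dim g = n^2 and rank l = n, so u = n(n-1)/2 and b = n(n+1)/2.
for n in (2, 3, 4):
    print(f"U({n}):", torus_ranks(n * n, n))
# SO(3): dim g = 3 and l = 1, so u = 1 and b = 2.
print("SO(3):", torus_ranks(3, 1))
```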
\begin{definition}\label{Definition: Main definition}
A \textit{Gelfand--Cetlin datum} is a pair $(\lambda_{\text{big}},\g^*_{\text{s-reg}})$, consisting of a continuous map $\lambda_{\text{big}}=(\lambda_1,\ldots,\lambda_{\mathrm{b}}):\g^*\longrightarrow\mathbb{R}_{\text{big}}$ and open dense subset $\g^*_{\text{s-reg}}\subset\g^*$ that satisfy the following conditions:
\begin{itemize}
\item[\textup{(i)}] $\lambda_1,\ldots,\lambda_{\ell}$ are $G$-invariant on $\g^*$ and smooth on $\g^*_{\text{reg}}$;
\item[\textup{(ii)}] $\{\mathrm{d}_{\xi}\lambda_1,\ldots,\mathrm{d}_{\xi}\lambda_{\ell}\}$ is a $\mathbb{Z}$-basis of the lattice $\Lambda_{\g_{\xi}}\subset\g_{\xi}$ for all $\xi\in\g^*_{\text{reg}}$;
\item[\textup{(iii)}] $\g^*_{\text{s-reg}}\subset\g^*_{\text{reg}}$;
\item[\textup{(iv)}] $\lambda_{\text{big}}\big\vert_{\g^*_{\text{s-reg}}}:\g^*_{\text{s-reg}}\longrightarrow\mathbb{R}_{\text{big}}$ is a smooth submersion and moment map for a Poisson Hamiltonian $\mathbb{T}_{\text{big}}$-space structure on $\g^*_{\text{s-reg}}$;
\item[\textup{(v)}] $\lambda_{\text{big}}\big\vert_{\g^*_{\text{s-reg}}}:\g^*_{\text{s-reg}}\longrightarrow\lambda_{\text{big}}(\g^*_{\text{s-reg}})$ is a principal $\mathbb{T}_{\text{int}}$-bundle;
\item[\textup{(vi)}] if $M$ is a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$, then $$(\lambda_{\text{big}}\circ\mu)\big\vert_{\mu^{-1}(\g^*_{\text{s-reg}})}: \mu^{-1}(\g^*_{\text{s-reg}})\longrightarrow\mathbb{R}_{\text{big}}$$ is a moment map for a Hamiltonian $\mathbb{T}_{\text{big}}$-space structure on $\mu^{-1}(\g^*_{\text{s-reg}})$.
\end{itemize}
In this case, we adopt the notation
$$\lambda_{\text{small}}\coloneqq (\lambda_1,\ldots,\lambda_{\ell}):\g^*\longrightarrow\mathbb{R}_{\text{small}},\quad\lambda_{\text{int}}\coloneqq(\lambda_{\ell+1},\ldots,\lambda_{\mathrm{b}}):\g^*\longrightarrow\mathbb{R}_{\text{int}},$$ $$M_{\text{s-reg}}\coloneqq\mu^{-1}(\g^*_{\text{s-reg}}),\quad\text{and}\quad\lambda_M\coloneqq(\lambda_{\text{big}}\circ\mu)\big\vert_{M_{\text{s-reg}}}:M_{\text{s-reg}}\longrightarrow\mathbb{R}_{\text{big}}.$$ We also refer to the elements of $\g^*_{\text{s-reg}}$ as the \textit{strongly regular} elements of $\g^*$.
\end{definition}
\begin{remark}
Condition (v) in Definition \ref{Definition: Main definition} is only slightly weaker than the existence of global action-angle coordinates on $\g^*_{\text{s-reg}}$. This existence question features prominently in \cite{Duistermaat,Lane}.
\end{remark}
It is instructive to consider this definition in relation to the theory of completely integrable systems. One is thereby led to the following result.
\begin{proposition}
Let $(\lambda_{\emph{big}},\g^*_{\emph{s-reg}})$ be a Gelfand--Cetlin datum. If $\mathcal{O}\subset\g^*$ is a coadjoint orbit and $\mathcal{O}_{\emph{s-reg}}\coloneqq\mathcal{O}\cap\g^*_{\emph{s-reg}}$, then $$\lambda_{\emph{int}}\big\vert_{\mathcal{O}_{\emph{s-reg}}}:\mathcal{O}_{\emph{s-reg}}\longrightarrow\lambda_{\emph{int}}(\mathcal{O}_{\emph{s-reg}})\subset\mathbb{R}_{\emph{int}}$$ is a completely integrable system, principal $\mathbb{T}_{\emph{int}}$-bundle, and moment map for a Hamiltonian action of $\mathbb{T}_{\emph{int}}$ on $\mathcal{O}_{\emph{s-reg}}$.
\end{proposition}
\begin{proof}
Note that the Hamiltonian vector field of any smooth function $\g^*\longrightarrow\mathbb{R}$ is tangent to $\mathcal{O}$. It follows that $\mathcal{O}_{\text{s-reg}}$ is stable under the action of $\mathbb{T}_{\text{big}}$ on $\g^*_{\text{s-reg}}$. Definition \ref{Definition: Main definition}(iv) now implies that $\mathcal{O}_{\text{s-reg}}$ is a Hamiltonian $\mathbb{T}_{\text{big}}$-space with moment map $\lambda_{\text{big}}\big\vert_{\mathcal{O}_{\text{s-reg}}}$. We conclude that $\lambda_{\text{int}}\big\vert_{\mathcal{O}_{\text{s-reg}}}$ is a moment map for the Hamiltonian action of $\mathbb{T}_{\text{int}}\subset\mathbb{T}_{\text{big}}$ on $\mathcal{O}_{\text{s-reg}}$.
Since $\lambda_{\text{small}}$ is constant-valued on $\mathcal{O}$, Definition \ref{Definition: Main definition}(v) tells us that $\lambda_{\text{int}}\big\vert_{\mathcal{O}_{\text{s-reg}}}:\mathcal{O}_{\text{s-reg}}\longrightarrow\lambda_{\text{int}}(\mathcal{O}_{\text{s-reg}})$ is a principal $\mathbb{T}_{\text{int}}$-bundle. It therefore remains only to prove that $\dim\mathbb{T}_{\text{int}}=\frac{1}{2}\dim\mathcal{O}_{\text{s-reg}}$. This follows immediately from the fact that $\dim\mathbb{T}_{\text{int}}=\mathrm{u}=\frac{1}{2}(\dim\g-\ell)$.
\end{proof}
\subsection{Existence of Gelfand--Cetlin data}\label{Subsection: Existence}
It is natural to wonder about the generality in which Gelfand--Cetlin data exist. The earliest constructions are due to Guillemin--Sternberg \cite{GuilleminSternbergGC,GuilleminSternbergThimm}, and apply to all unitary groups $\operatorname{U}(n)$ and special orthogonal groups $\operatorname{SO}(n)$. The underlying techniques are based on Thimm's method, as described in \cite{GuilleminSternbergThimm}. Further details are outlined in Sections \ref{Subsection: Integrality}--\ref{Subsection: GS} of this paper. The case of symplectic groups is considerably more subtle, and is addressed in Harada's paper \cite{Harada}.
Some recent work of Hoffman--Lane \cite{Lane} implies the existence of Gelfand--Cetlin data for an arbitrary compact connected Lie group $G$; the reader is referred to \cite[Section 6.2]{Lane} for the relevant details. This Hoffman--Lane paper is part of a broader program aimed at generalizing the results of Harada--Kaveh \cite{HaradaKaveh}.
\subsection{Construction of Gelfand--Cetlin data: integrality}\label{Subsection: Integrality}
We now discuss the construction of functions $\lambda_1,\ldots,\lambda_{\ell}:\g^*\longrightarrow\mathbb{R}$ satisfying Conditions (i) and (ii) in Definition \ref{Definition: Main definition}. Let $G$ be a compact connected Lie group with Lie algebra $\g$ and rank $\ell$. Choose a $G$-invariant inner product on $\g$, a Cartan subalgebra $\mathfrak{t}\subset\g$, and a closed, fundamental Weyl chamber $\mathfrak{t}_{+}\subset\mathfrak{t}$. The chamber $\mathfrak{t}_{+}$ is known to be a fundamental domain for the adjoint action of $G$ on $\g$. Our inner product thereby identifies $\mathfrak{t}_{+}$ with a fundamental domain $\mathfrak{t}_+^*\subset\g^*$ for the $G$-action on $\g^*$. We may therefore define a continuous surjection
$\pi:\g^*\longrightarrow\mathfrak{t}_+^*$ by the property that $(G\cdot\xi)\cap\mathfrak{t}_+^*=\{\pi(\xi)\}$ for all $\xi\in\g^*$, where $G\cdot\xi\subset\g^*$ is the coadjoint orbit of $\xi$. One sometimes calls $\pi$ the \textit{sweeping map} on $\g^*$ with respect to $\mathfrak{t}$; its fibers are exactly the coadjoint orbits of $G$, and $\pi(\g^*_{\text{reg}})$ is the interior $(\mathfrak{t}_+^*)^{\circ}$ of $\mathfrak{t}_+^*$. One also finds that the commutative diagram
$$\begin{tikzcd}[row sep=large,column sep=large]
\g^*_{\text{reg}}\arrow[hookrightarrow]{r} \arrow[d, swap, "\pi\big\vert_{\g^*_{\text{reg}}}"] & \g^* \arrow[d, "\pi"] \\
(\mathfrak{t}_+^*)^{\circ} \arrow[hookrightarrow]{r} & \mathfrak{t}_+^*
\end{tikzcd}$$ is Cartesian. The left vertical map $$\pi\big\vert_{\g^*_{\text{reg}}}:\g^*_{\text{reg}}\longrightarrow(\mathfrak{t}_+^*)^{\circ}$$ is easily seen to be a smooth, surjective submersion.
Now choose a $\mathbb{Z}$-basis $\{\phi_1,\ldots,\phi_{\ell}\}$ of the $\mathbb{Z}$-submodule $\Lambda_{\mathfrak{t}}\subset\mathfrak{t}$. Note that the pairing between $\mathfrak{t}$ and $\mathfrak{t}^*$ allows one to regard $\phi_1,\ldots,\phi_{\ell}$ as functions on $\mathfrak{t}_+^*$. With this in mind, each $k\in\{1,\ldots,\ell\}$ determines a function $$\lambda_k\coloneqq\phi_k\circ\pi:\mathfrak{g}^*\longrightarrow\mathbb{R}.$$ The previous paragraph implies that $\lambda_1,\ldots,\lambda_{\ell}$ are smooth on $\g^*_{\text{reg}}$, while being $G$-invariant and continuous as functions on $\mathfrak{g}^*$. Given any $\xi\in\g^*_{\text{reg}}$, the differentials $\mathrm{d}_{\xi}\lambda_1,\ldots,\mathrm{d}_{\xi}\lambda_{\ell}\in(\g^*)^*=\g$ may be described as follows.
\begin{proposition}\label{Proposition: Lattice}
If $\xi\in\g_{\emph{reg}}^*$, then $\{\mathrm{d}_{\xi}\lambda_1,\ldots,\mathrm{d}_{\xi}\lambda_{\ell}\}$ is a $\mathbb{Z}$-basis of the lattice $\Lambda_{\g_{\xi}}\subset\g_{\xi}$.
\end{proposition}
\begin{proof}
Choose $g\in G$ for which $\mathrm{Ad}_g^*(\xi)\in(\mathfrak{t}_+^*)^{\circ}$. Since each function $\lambda_k$ is $G$-invariant, one has $$\mathrm{d}_{\xi}\lambda_k=\mathrm{d}_{\mathrm{Ad}_g^*(\xi)}\lambda_k\circ\mathrm{Ad}^*_g=\mathrm{Ad}_{g^{-1}}(\mathrm{d}_{\mathrm{Ad}_g^*(\xi)}\lambda_k)$$ for all $k\in\{1,\ldots,\ell\}$. We also observe that $\mathrm{Ad}_{g^{-1}}:\g\longrightarrow\g$ restricts to a $\mathbb{Z}$-module isomorphism $\Lambda_{\mathfrak{t}}\overset{\cong}\longrightarrow\Lambda_{\g_{\xi}}$.
It therefore suffices to take $\xi\in(\mathfrak{t}_+^*)^{\circ}$ and prove that $\mathrm{d}_{\xi}\lambda_k=\phi_k$ for all $k\in\{1,\ldots,\ell\}$.
Assume that $\xi\in(\mathfrak{t}_+^*)^{\circ}$, and fix $k\in\{1,\ldots,\ell\}$. Our invariant inner product allows us to regard $\mathfrak{t}^*$ as a subspace of $\g^*$. We then note that $\g^*=\mathfrak{t}^*\oplus T_{\xi}(G\cdot\xi)$, and that $T_{\xi}(G\cdot\xi)$ is contained in the kernel of $\mathrm{d}_{\xi}\lambda_k$. Let us also note that $T_{\xi}(G\cdot\xi)$ is the annihilator of $\mathfrak{t}$ in $\g^*$, and as such is contained in the kernel of $\phi_k$. These last two sentences reduce us to proving that $\mathrm{d}_{\xi}\lambda_k(\eta)=\phi_k(\eta)$ for all $\eta\in\mathfrak{t}^*$. On the other hand, we have $\mathrm{d}_{\xi}\lambda_k=\phi_k\circ\mathrm{d}_{\xi}\pi$. It therefore suffices to prove that $\mathrm{d}_{\xi}\pi(\eta)=\eta$ for all $\eta\in\mathfrak{t}^*$. But this is an immediate consequence of the following two observations: $(\mathfrak{t}_{+}^*)^{\circ}$ is an open subset of $\mathfrak{t}^*$, and $\pi(\eta)=\eta$ for all $\eta\in(\mathfrak{t}_{+}^*)^{\circ}$.
\end{proof}
\subsection{Construction of Gelfand--Cetlin data: Thimm's method}\label{Subsection: GC torus}
Retain the objects and notation discussed in Section \ref{Subsection: Integrality}. Let $G=G_0\supset G_1\supset\cdots\supset G_m$ be a descending filtration of $G$ by connected closed subgroups with respective Lie algebras $\g=\g_0\supset\g_1\supset\cdots\supset\g_m$. Let us also choose a Cartan subalgebra $\mathfrak{t}_j\subset\g_j$ and closed, fundamental Weyl chamber $(\mathfrak{t}_j)_{+}\subset\mathfrak{t}_j$ for each $j\in\{0,\ldots,m\}$. Our $G$-invariant inner product on $\g$ gives rise to a $G_j$-module isomorphism $\mathfrak{g}_j\cong\mathfrak{g}_j^*$, by means of which $\mathfrak{t}_j$ and $(\mathfrak{t}_j)_+$ correspond to subsets $\mathfrak{t}_j^*$ and $(\mathfrak{t}_j^*)_{+}$ of $\mathfrak{g}_j^*$. As in Section \ref{Subsection: Integrality}, one may define a continuous surjection $\pi_j:\mathfrak{g}_j^*\longrightarrow(\mathfrak{t}_j^*)_{+}$ by the condition that $(G_j\cdot\xi)\cap(\mathfrak{t}_j^*)_{+}=\{\pi_j(\xi)\}$ for all $\xi\in\mathfrak{g}_j^*$.
Let $\ell=\ell_0\geq\ell_1\geq\cdots\geq\ell_m$ be the ranks of $G=G_0\supset G_1\supset\cdots\supset G_m$, respectively. Let us also choose a $\mathbb{Z}$-basis $\{\phi_{j1},\ldots,\phi_{j\ell_j}\}$ of the lattice $\Lambda_{\mathfrak{t}_j}\subset\mathfrak{t}_j$ for each $j\in\{0,\ldots,m\}$. As in Section \ref{Subsection: Integrality}, we define the functions
$$\nu_{jk}\coloneqq\phi_{jk}\circ\pi_j:\g_j^*\longrightarrow\mathbb{R}$$ for $k\in\{1,\ldots,\ell_j\}$. The same section implies that $\nu_{j1},\ldots,\nu_{j\ell_j}$ are $G_j$-invariant and continuous on $\g_j^*$, as well as smooth on $(\mathfrak{g}_j^*)_{\text{reg}}$. We also have the following equivalent version of Proposition \ref{Proposition: Lattice}.
\begin{proposition}\label{Proposition: Lattice 2}
If $j\in\{0,\ldots,m\}$ and $\xi\in(\g_j^*)_{\emph{reg}}$, then $\{\mathrm{d}_{\xi}\nu_{j1},\ldots,\mathrm{d}_{\xi}\nu_{j\ell_j}\}$ is a $\mathbb{Z}$-basis of the lattice $\Lambda_{(\g_j)_{\xi}}\subset(\g_j)_{\xi}$.
\end{proposition}
Let $\sigma_j:\g^*\longrightarrow\g_j^*$ denote the transpose of the inclusion $\g_j\hookrightarrow\g$ for each $j\in\{0,\ldots,m\}$. Consider the functions on $\g^*$ defined by $$\lambda_{jk}\coloneqq\nu_{jk}\circ\sigma_j:\g^*\longrightarrow\mathbb{R}$$ for $j\in\{0,\ldots,m\}$ and $k\in\{1,\ldots,\ell_j\}$. It will be convenient to enumerate these functions as
\begin{equation}\label{Equation: Enumeration}\lambda_{\text{big}}\coloneqq(\lambda_1,\ldots,\lambda_{\mathrm{c}})\coloneqq(\lambda_{01},\ldots,\lambda_{0\ell},\lambda_{11},\ldots,\lambda_{1\ell_1},\ldots,\lambda_{m1},\ldots,\lambda_{m\ell_m}):\g^*\longrightarrow\mathbb{R}^{\mathrm{c}},\end{equation} where $\mathrm{c}\coloneqq\ell_0+\cdots+\ell_m$.
The discussion preceding Proposition \ref{Proposition: Lattice 2} implies that $\lambda_{\text{big}}$ is smooth on the open subset $$\mathcal{U}\coloneqq\bigcap_{j=0}^m\sigma_j^{-1}((\g_j^*)_{\text{reg}})\subset\g^*.$$ Let us also define $$\g^*_{\text{s-reg}}\coloneqq\{\xi\in\mathcal{U}:\mathrm{d}_{\xi}\lambda_{\text{big}}\text{ is surjective}\}.$$ These last few sentences give context for the following consequence of \cite[Theorem 3.4]{GuilleminSternbergGC}.
\begin{proposition}\label{Proposition: Poisson moment map}
Let $\lambda_{\emph{big}}$ and $\g^*_{\emph{s-reg}}$ be as defined above.
\begin{itemize}
\item[\textup{(i)}] The restriction $\lambda_{\emph{big}}\big\vert_{\g^*_{\emph{s-reg}}}:\g^*_{\emph{s-reg}}\longrightarrow\mathbb{R}^{\mathrm{c}}$ is a moment map for a Poisson Hamiltonian $\operatorname{U}(1)^{\mathrm{c}}$-space structure on $\g^*_{\emph{s-reg}}$.
\item[\textup{(ii)}] If $M$ is a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$, then $$(\lambda_{\emph{big}}\circ\mu)\big\vert_{\mu^{-1}(\g^*_{\emph{s-reg}})}: \mu^{-1}(\g^*_{\emph{s-reg}})\longrightarrow\mathbb{R}^{\mathrm{c}}$$ is a moment map for a Hamiltonian $\operatorname{U}(1)^{\mathrm{c}}$-space structure on $\mu^{-1}(\g^*_{\emph{s-reg}})$.
\end{itemize}
\end{proposition}
This result has the following immediate connection to Definition \ref{Definition: Main definition}: the pair $(\lambda_{\text{big}},\g^*_{\text{s-reg}})$ is a Gelfand--Cetlin datum if and only if $\mathrm{c}=\mathrm{b}$, $\g^*_{\text{s-reg}}$ is dense in $\g^*$, and $\lambda_{\text{big}}\big\vert_{\g^*_{\text{s-reg}}}:\g^*_{\text{s-reg}}\longrightarrow\lambda_{\text{big}}(\g^*_{\text{s-reg}})$ is a principal $\mathbb{T}_{\text{int}}$-bundle. Guillemin--Sternberg \cite{GuilleminSternbergGC,GuilleminSternbergThimm} explicitly show these conditions to be achievable for $G=\operatorname{U}(n)$ and $G=\operatorname{SO}(n)$. Our next section outlines the details of the Guillemin--Sternberg construction for $G=\operatorname{U}(n)$.
\subsection{Example of a Gelfand--Cetlin datum: the Gelfand--Cetlin system on $\mathfrak{u}(n)^*$}\label{Subsection: GS}
Fix a positive integer $n$. Consider the Lie group $G\coloneqq\mathrm{U}(n)$ of unitary $n\times n$ matrices, and its Lie algebra $\g\coloneqq\mathfrak{u}(n)$ of skew-Hermitian $n\times n$ matrices. Let us also consider the real $\operatorname{U}(n)$-module $\mathcal{H}(n)$ of Hermitian $n\times n$ matrices. In what follows, we will freely identify $\mathfrak{u}(n)^*$ with $\mathcal{H}(n)$ by means of the non-degenerate, $G$-invariant bilinear form \begin{equation}\label{Equation: Pairing}\mathfrak{u}(n)\otimes_{\mathbb{R}}\mathcal{H}(n)\longrightarrow\mathbb{R},\quad\eta\otimes \xi\mapsto-i\mathrm{tr}(\eta\xi).\end{equation}
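As a quick numerical sanity check (not part of the paper's argument), one can verify that the pairing \eqref{Equation: Pairing} is real-valued on $\mathfrak{u}(n)\otimes_{\mathbb{R}}\mathcal{H}(n)$ and invariant under simultaneous conjugation by a unitary matrix. The sketch below uses NumPy and randomly generated matrices; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# A random skew-Hermitian eta in u(n) and a random Hermitian xi in H(n).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
eta = (A - A.conj().T) / 2          # eta^* = -eta
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
xi = (B + B.conj().T) / 2           # xi^* = xi

# tr(eta xi) is purely imaginary, so -i tr(eta xi) is real.
pairing = -1j * np.trace(eta @ xi)
assert abs(pairing.imag) < 1e-12

# U(n)-invariance: conjugating both arguments by a unitary g fixes the pairing.
g, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
pairing_conj = -1j * np.trace((g @ eta @ g.conj().T) @ (g @ xi @ g.conj().T))
assert abs(pairing_conj - pairing) < 1e-9
```

The first assertion reflects the identity $\overline{\mathrm{tr}(\eta\xi)}=-\mathrm{tr}(\eta\xi)$ for $\eta^*=-\eta$ and $\xi^*=\xi$; the second is invariance of the trace form.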
Given an integer $j\in\{0,\ldots,n-1\}$, define the subgroup
$$G_j\coloneqq\left\{\left[\begin{array}{ c | c }
I_{j} & 0 \\
\hline
0 & A
\end{array}\right]:A\in\operatorname{U}(n-j)\right\}\subset G=\operatorname{U}(n).$$ The descending chain $\operatorname{U}(n)=G=G_0\supset G_1\supset\cdots\supset G_{n-1}$ induces a corresponding chain $\mathfrak{u}(n)=\g=\g_0\supset\g_1\supset\cdots\supset\g_{n-1}$ on the level of Lie algebras. We have
$$\g_j=\left\{\left[\begin{array}{ c | c }
0 & 0 \\
\hline
0 & x
\end{array}\right]:x\in\mathfrak{u}(n-j)\right\}\subset\mathfrak{u}(n)\quad\text{and}\quad\g_j^*=\left\{\left[\begin{array}{ c | c }
0 & 0 \\
\hline
0 & \xi
\end{array}\right]:\xi\in\mathcal{H}(n-j)\right\}\subset\mathcal{H}(n)$$ for all $j\in\{0,\ldots,n-1\}$, where the second equation implicitly uses \eqref{Equation: Pairing}.
The transpose $\sigma_j:\g^*\longrightarrow\g_j^*$ of $\g_j\subset\g$ then sends $\xi\in\g^*=\mathcal{H}(n)$ to the $(n-j)\times(n-j)$ submatrix in the bottom right-hand corner of $\xi$.
Now note that
$$\mathfrak{t}_j\coloneqq\left\{\left[\begin{array}{ c | ccc }
0 & & 0 & \\
\hline
& ia_1 & &\\
0 & & \ddots & \\
& & & ia_{n-j}
\end{array}\right]:a_1,\ldots,a_{n-j}\in\mathbb{R}\right\}\subset\g_j$$
is a Cartan subalgebra for each $j\in\{0,\ldots,n-1\}$. Let $\phi_{jk}\in\mathfrak{t}_j$ be the result of setting $a_k=1$ and $a_p=0$ for $p\neq k$, noting that $\{\phi_{j1},\ldots,\phi_{j(n-j)}\}$ is a $\mathbb{Z}$-basis of $\Lambda_{\mathfrak{t}_j}\subset\mathfrak{t}_j$. At the same time, consider the fundamental Weyl chamber
$$(\mathfrak{t}_j)_{+}\coloneqq\left\{\left[\begin{array}{ c | ccc }
0 & & 0 & \\
\hline
& ia_1 & &\\
0 & & \ddots & \\
& & & ia_{n-j}
\end{array}\right]:\let\scriptstyle\textstyle\substack{a_1,\ldots,a_{n-j}\in\mathbb{R}\\ a_1\geq\cdots\geq a_{n-j}}\right\}\subset\mathfrak{t}_j$$ for each $j\in\{0,\ldots,n-1\}$. Under the pairing \eqref{Equation: Pairing}, $(\mathfrak{t}_j)_{+}$ corresponds to the cone
$$(\mathfrak{t}^*_j)_{+}\coloneqq\left\{\left[\begin{array}{ c | ccc }
0 & & 0 & \\
\hline
& a_1 & &\\
0 & & \ddots & \\
& & & a_{n-j}
\end{array}\right]:\let\scriptstyle\textstyle\substack{a_1,\ldots,a_{n-j}\in\mathbb{R}\\ a_1\geq\cdots\geq a_{n-j}}\right\}\subset\mathcal{H}(n).$$ We then have sweeping maps $\pi_j:\g_j^*\longrightarrow(\mathfrak{t}^*_j)_{+}$ and compositions $\nu_{jk}\coloneqq\phi_{jk}\circ\pi_j:\g_j^*\longrightarrow\mathbb{R}$ for $j\in\{0,\ldots,n-1\}$ and $k\in\{1,\ldots,n-j\}$, as in Section \ref{Subsection: GC torus}. The functions
$$\lambda_{jk}\coloneqq\nu_{jk}\circ\sigma_j:\mathcal{H}(n)\longrightarrow\mathbb{R}$$ from Section \ref{Subsection: GC torus} are therefore given by the following condition: if $\xi\in\mathcal{H}(n)$ and $j\in\{0,\ldots,n-1\}$, then $\lambda_{j1}(\xi)\geq\lambda_{j2}(\xi)\geq\cdots\geq\lambda_{j(n-j)}(\xi)$ are the eigenvalues of the $(n-j)\times(n-j)$ submatrix in the bottom right-hand corner of $\xi$.
Observe that the number of maps $\lambda_{jk}$ is $$n+(n-1)+\cdots+1=\frac{n(n+1)}{2}=\frac{1}{2}(\dim\mathfrak{u}(n)+n).$$ Our enumeration \eqref{Equation: Enumeration} therefore takes the form \begin{align*}\lambda_{\text{big}} & \coloneqq(\lambda_1,\ldots,\lambda_{\frac{n(n+1)}{2}}) \\ & \coloneqq(\lambda_{01},\ldots,\lambda_{0n},\lambda_{11},\ldots,\lambda_{1(n-1)},\ldots,\lambda_{(n-2)1},\lambda_{(n-2)2},\lambda_{(n-1)1}):\mathcal{H}(n)\longrightarrow\mathbb{R}^{\frac{n(n+1)}{2}}.\end{align*} Let us also consider the open dense subset $$\mathcal{H}(n)_{\text{s-reg}}\coloneqq\left\{\xi\in\mathcal{H}(n):\let\scriptstyle\textstyle\substack{\lambda_{j1}(\xi)>\cdots>\lambda_{j(n-j)}(\xi)\text{ for all }j\in\{0,\ldots,n-1\}\\ \lambda_{0k}(\xi)>\cdots>\lambda_{(n-k)k}(\xi)\text{ for all }k\in\{1,\ldots,n\}}\right\}$$ of $\g^*=\mathcal{H}(n)$. By the paragraph following Proposition \ref{Proposition: Poisson moment map}, $(\lambda_{\text{big}},\mathcal{H}(n)_{\text{s-reg}})$ is a Gelfand--Cetlin datum if and only if $\lambda_{\text{big}}\big\vert_{\mathcal{H}(n)_{\text{s-reg}}}:\mathcal{H}(n)_{\text{s-reg}}\longrightarrow\lambda_{\text{big}}(\mathcal{H}(n)_{\text{s-reg}})$ is a principal bundle for $\mathbb{T}_{\text{int}}=\operatorname{U}(1)^{\frac{n(n-1)}{2}}$. This latter condition is verified in \cite[Section 5]{GuilleminSternbergGC}.
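To make the eigenvalue description of the maps $\lambda_{jk}$ concrete, the following numerical sketch (NumPy; not part of the construction above) computes the Gelfand--Cetlin pattern of a random $\xi\in\mathcal{H}(4)$ and checks the classical Cauchy interlacing inequalities, which always hold weakly; the set $\mathcal{H}(n)_{\text{s-reg}}$ is where they hold strictly.

```python
import numpy as np

def gc_pattern(xi):
    """Row j (for j = 0, ..., n-1) lists lambda_{j1} >= ... >= lambda_{j,n-j}:
    the eigenvalues of the bottom-right (n-j) x (n-j) submatrix sigma_j(xi)
    of the Hermitian matrix xi, in descending order."""
    n = xi.shape[0]
    return [np.linalg.eigvalsh(xi[j:, j:])[::-1] for j in range(n)]

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
xi = (A + A.conj().T) / 2                # a random element of H(4)

pattern = gc_pattern(xi)
assert sum(len(row) for row in pattern) == 4 * 5 // 2   # n(n+1)/2 functions

# Cauchy interlacing: lambda_{j,k} >= lambda_{j+1,k} >= lambda_{j,k+1}.
for j in range(3):
    for k in range(len(pattern[j + 1])):
        assert pattern[j][k] >= pattern[j + 1][k] >= pattern[j][k + 1]
```

For a random Hermitian matrix the inequalities are strict with probability one, so such a $\xi$ lies in $\mathcal{H}(4)_{\text{s-reg}}$.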
\section{The abelianization theorem}\label{Section: The abelianization theorem}
This section is devoted to the proof of our abelianization theorem for smooth quotients. Some preliminary results are established in Sections \ref{Subsection: Universal} and \ref{Subsection: Some supplementary results}, while the main proof appears in Section \ref{Subsection: Proof for smooth}.
\subsection{The universal maximal torus}\label{Subsection: Universal}
Adopt the notation and conventions in Section \ref{Subsection: Definition}, and let $(\lambda_{\text{big}},\g^*_{\text{s-reg}})$ be a Gelfand--Cetlin datum. Given any $\xi\in\g^*_{\text{reg}}$, Definition \ref{Definition: Main definition}(ii) tells us that $\{\mathrm{d}_{\xi}\lambda_1,\ldots,\mathrm{d}_{\xi}\lambda_{\ell}\}$ is a basis of $\g_{\xi}$. This basis determines a vector space isomorphism $$\kappa_{\xi}:\g_{\xi}\overset{\cong}\longrightarrow\mathbb{R}^{\ell},\quad x_1\mathrm{d}_{\xi}\lambda_1+\cdots+x_{\ell}\mathrm{d}_{\xi}\lambda_{\ell}\mapsto (x_1,\ldots,x_{\ell}).$$ The torus $\mathbb{T}_{\text{small}}$ is then a universal maximal torus in the following sense.
\begin{proposition}\label{Proposition: Universal maximal torus}
If $\xi\in\g^*_{\emph{reg}}$, then $\kappa_{\xi}$ integrates to a Lie group isomorphism $\tau_{\xi}:G_{\xi}\overset{\cong}\longrightarrow\mathbb{T}_{\emph{small}}$.
\end{proposition}
\begin{proof}
It suffices to prove that $\kappa_{\xi}$ restricts to a $\mathbb{Z}$-module isomorphism from the kernel of $\exp\big\vert_{\g_{\xi}}:\g_{\xi}\longrightarrow G_{\xi}$ to $(2\pi\mathbb{Z})^{\ell}\subset\mathbb{R}^{\ell}$. This is an immediate consequence of Proposition \ref{Proposition: Lattice}.
\end{proof}
\begin{proposition}\label{Proposition: Alternative action description}
Let $M$ be a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. Suppose that $\xi\in\g^*_{\emph{s-reg}}$. We then have $g\cdot m=\tau_{\xi}(g)\cdot m$ for all $g\in G_{\xi}$ and $m\in\mu^{-1}(\xi)$, where the left and right-hand sides denote the actions of $G_{\xi}\subset G$ on $M$ and $\mathbb{T}_{\emph{small}}\subset\mathbb{T}_{\emph{big}}$ on $M_{\emph{s-reg}}$, respectively.
\end{proposition}
\begin{proof}
Let $X_{\zeta}$ be the generating vector field on $M_{\text{s-reg}}$ determined by $\zeta\in\mathbb{R}_{\text{small}}$ via the action of $\mathbb{T}_{\text{small}}$ on $M_{\text{s-reg}}$. Write $Y_{\eta}$ for the generating vector field on $M$ determined by $\eta\in\g_{\xi}$ through the action of $G_{\xi}\subset G$ on $M$. It suffices to prove that $(X_{\kappa_{\xi}(\eta)})_m=(Y_{\eta})_m$ for all $\eta\in\g_{\xi}$ and $m\in\mu^{-1}(\xi)$. Setting $\gamma_j\coloneqq \mathrm{d}_{\xi}\lambda_j$ and letting $e_j\in\mathbb{R}^{\ell}=\mathbb{R}_{\text{small}}$ denote the $j^{\text{th}}$ standard basis vector, this is equivalent to establishing that $(X_{e_j})_m=(Y_{\gamma_j})_m$ for all $j\in\{1,\ldots,\ell\}$ and $m\in\mu^{-1}(\xi)$. On the other hand, $Y_{\gamma_j}$ (resp. $X_{e_j}$) is the Hamiltonian vector field on $M$ (resp. $M_{\text{s-reg}}$) associated to $\mu^{\gamma_j}$ (resp. the $j^{\text{th}}$ component $\mu^*\lambda_j$ of $\lambda_{\text{big}}\circ\mu$). This further reduces us to proving that $\mathrm{d}_m\mu^{\gamma_j}=\mathrm{d}_m\mu^*\lambda_j$. But it is clear that
$$\mathrm{d}_m\mu^*\lambda_j=\mathrm{d}_\xi\lambda_j\circ\mathrm{d}_m\mu=\mathrm{d}_m\mu^{\gamma_j}$$ for all $m\in\mu^{-1}(\xi)$ and $j\in\{1,\ldots,\ell\}$.
\end{proof}
\subsection{Some supplementary results}\label{Subsection: Some supplementary results}
We now prove two supplementary facts needed to establish the main result of this paper. We continue with the notation and conventions of Sections \ref{Subsection: Definition} and \ref{Subsection: Universal}.
\begin{proposition}\label{Proposition: Two parts}
Let $M$ be a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. Suppose that $\xi\in\g^*_{\emph{s-reg}}$.
\begin{itemize}
\item[\textup{(i)}] If $m\in\mu^{-1}(\xi)$ and $t\in\mathbb{T}_{\emph{int}}$ satisfy $t\cdot m\in\mu^{-1}(\xi)$, then $t=e$.
\item[\textup{(ii)}] The saturation of $\mu^{-1}(\xi)$ under the action of $\mathbb{T}_{\emph{int}}$ on $M_{\emph{s-reg}}$ is $\lambda_M^{-1}(\lambda_{\emph{big}}(\xi))$.
\end{itemize}
\end{proposition}
\begin{proof}
To verify (i), let $m\in\mu^{-1}(\xi)$ and $t\in\mathbb{T}_{\text{int}}$ be such that $t\cdot m\in\mu^{-1}(\xi)$. Let us also observe that $\mu$ is $\mathbb{T}_{\text{int}}$-equivariant when restricted to a map $M_{\text{s-reg}}\longrightarrow\g^*_{\text{s-reg}}$. These last two sentences imply that $\xi=t\cdot\xi$. Since $\mathbb{T}_{\text{int}}$ acts freely on $\g^*_{\text{s-reg}}$, we must have $t=e$.
We now verify (ii). To this end, note that $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$ is a $\mathbb{T}_{\text{int}}$-invariant subset of $M_{\text{s-reg}}$ that contains $\mu^{-1}(\xi)$. This implies that the saturation of $\mu^{-1}(\xi)$ is contained in $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$. For the opposite inclusion, suppose that $m\in\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$. Definition \ref{Definition: Main definition}(v) tells us that $t\cdot\mu(m)=\xi$ for some $t\in\mathbb{T}_{\text{int}}$. By the equivariance property of $\mu$ mentioned in the previous paragraph, we must have $t\cdot m\in\mu^{-1}(\xi)$. This completes the proof of (ii).
\end{proof}
Fix $\xi\in\g^*_{\text{s-reg}}$. In light of the previous proposition, we may define the map
$$\delta_{\xi}:\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}}\longrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi)),\quad (m,t)\mapsto t\cdot m.$$ The following result is an immediate consequence of the previous proposition.
\begin{corollary}\label{Corollary: Homeomorphism}
If $\xi\in\g^*_{\emph{s-reg}}$, then $\delta_{\xi}$ is a homeomorphism.
\end{corollary}
\subsection{Proof of the abelianization theorem}\label{Subsection: Proof for smooth}
Let us continue with the notation and conventions set in Sections \ref{Subsection: Definition}, \ref{Subsection: Universal}, and \ref{Subsection: Some supplementary results}.
\begin{theorem}\label{Theorem: Free case}
Let $M$ be a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. Suppose that $\xi\in\g^*_{\emph{s-reg}}$.
\begin{itemize}
\item[\textup{(i)}] The stabilizer $G_{\xi}$ acts freely on $\mu^{-1}(\xi)$ if and only if $\mathbb{T}_{\emph{big}}$ acts freely on $\lambda_M^{-1}(\lambda_{\emph{big}}(\xi))$.
\item[\textup{(ii)}] In the case of \emph{(i)}, there is a canonical symplectomorphism $M\sll{\xi} G\cong M_{\emph{s-reg}}\sll{\lambda_{\emph{big}}(\xi)}\mathbb{T}_{\emph{big}}$.
\end{itemize}
\end{theorem}
\begin{proof}
We begin by verifying (i). In light of Proposition \ref{Proposition: Universal maximal torus}, the multiplication map
$$\rho_{\xi}:G_{\xi}\times\mathbb{T}_{\text{int}}\longrightarrow\mathbb{T}_{\text{big}},\quad (g,t)\mapsto\tau_{\xi}(g)t$$ is a Lie group isomorphism.
We also note that the action of $G_{\xi}$ on $\mu^{-1}(\xi)$ and the multiplication action of $\mathbb{T}_{\text{int}}$ on itself define an action of $G_{\xi}\times\mathbb{T}_{\text{int}}$ on $\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}}$. By Proposition \ref{Proposition: Alternative action description}, the homeomorphism $\delta_{\xi}$ is equivariant in the following sense:
$$\delta_{\xi}((g,t)\cdot x)=\rho_{\xi}(g,t)\cdot\delta_{\xi}(x)$$ for all $(g,t)\in G_{\xi}\times\mathbb{T}_{\text{int}}$ and $x\in\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}}$. It follows that $G_{\xi}$ acts freely on $\mu^{-1}(\xi)$ if and only if $\mathbb{T}_{\text{big}}$ acts freely on $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$.
We now prove (ii). By Corollary \ref{Corollary: Homeomorphism}, the inclusion $\mu^{-1}(\xi)\longhookrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$ descends to a diffeomorphism $$\mathrm{f}:\mu^{-1}(\xi)\overset{\cong}\longrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))/\mathbb{T}_{\text{int}}.$$ We also note that the $\mathbb{T}_{\text{big}}$-action on $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$ induces a residual action of the subtorus $\mathbb{T}_{\text{small}}$ on $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))/\mathbb{T}_{\text{int}}$. Proposition \ref{Proposition: Alternative action description} then tells us that $$\mathrm{f}(g\cdot m)=\tau_{\xi}(g)\cdot \mathrm{f}(m)$$ for all $g\in G_{\xi}$ and $m\in\mu^{-1}(\xi)$. The map $\mathrm{f}$ therefore descends to a diffeomorphism
$$\varphi:M\sll{\xi}G\overset{\cong}\longrightarrow M_{\text{s-reg}}\sll{\lambda_{\text{big}}(\xi)}\mathbb{T}_{\text{big}}.$$ It therefore suffices to prove that $\varphi$ pulls the symplectic form $\beta$ on $M_{\text{s-reg}}\sll{\lambda_{\text{big}}(\xi)}\mathbb{T}_{\text{big}}$ back to the symplectic form $\alpha$ on $M\sll{\xi}G$.
We have a commutative diagram $$\begin{tikzcd}
\mu^{-1}(\xi)\arrow[r, "\mathrm{j}"] \arrow[d, "\pi"'] & \lambda_M^{-1}(\lambda_{\text{big}}(\xi)) \arrow[d, "\theta"] \\
M\sll{\xi}G \arrow[r, swap, "\varphi"] & M_{\text{s-reg}}\sll{\lambda_{\text{big}}(\xi)}\mathbb{T}_{\text{big}}
\end{tikzcd},$$
where $\pi:\mu^{-1}(\xi)\longrightarrow\mu^{-1}(\xi)/G_{\xi}=M\sll{\xi}G$ and $\theta:\lambda_M^{-1}(\lambda_{\text{big}}(\xi))\longrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))/\mathbb{T}_{\text{big}}=M_{\text{s-reg}}\sll{\lambda_{\text{big}}(\xi)}\mathbb{T}_{\text{big}}$ are the canonical quotient maps and $\mathrm{j}:\mu^{-1}(\xi)\longhookrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$ is the inclusion. We also have inclusion maps $\mathrm{k}:\mu^{-1}(\xi)\longhookrightarrow M$ and $\mathrm{l}:\lambda_M^{-1}(\lambda_{\text{big}}(\xi))\longhookrightarrow M$. Another consideration is that $\alpha$ (resp. $\beta$) is the unique $2$-form on $M\sll{\xi}G$ (resp. $M_{\text{s-reg}}\sll{\lambda_{\text{big}}(\xi)}\mathbb{T}_{\text{big}}$) for which $\pi^*\alpha=\mathrm{k}^*\omega$ (resp. $\theta^*\beta=\mathrm{l}^*\omega$), where $\omega$ is the symplectic form on $M$. It therefore suffices to prove that $\pi^*(\varphi^*\beta)=\mathrm{k}^*\omega$. On the other hand, our commutative diagram implies that
$$\pi^*(\varphi^*\beta)=\mathrm{j}^*(\theta^*\beta)=\mathrm{j}^*(\mathrm{l}^*\omega)=\mathrm{k}^*\omega.$$
This completes the proof.
\end{proof}
\section{Generalization to stratified symplectic spaces}\label{Section: Generalization to stratified symplectic spaces}
We now provide a generalization of Theorem \ref{Theorem: Free case} in the realm of stratified symplectic spaces \cite{SjamaarLerman}. In Section \ref{Subsection: Stratified symplectic}, we recall the immediately pertinent parts of Sjamaar and Lerman's more general theory of stratified symplectic spaces. The generalization of Theorem \ref{Theorem: Free case} to stratified symplectic spaces appears in Section \ref{Subsection: More general}.
\subsection{Stratified symplectic spaces}\label{Subsection: Stratified symplectic}
Let $X$ be a topological space on which a compact torus $T$ acts continuously. Given a closed subgroup $H\subset T$, let $$X_H\coloneqq\{x\in X:T_x=H\}$$ be the locus of points with $T$-stabilizer $T_x$ equal to $H$. Denote by $\mathrm{Stab}(T,X)$ the set of all closed subgroups $H\subset T$ for which $X_H\neq\emptyset$.
Now let $G$ be a compact connected Lie group with Lie algebra $\g$. Suppose that $M$ is a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. As discussed in the introduction to this paper, $M\sll{\xi}G$ is a stratified symplectic space \cite{SjamaarLerman} for all $\xi\in\g^*$. This means that $M\sll{\xi}G$ is naturally partitioned into symplectic manifolds satisfying certain compatibility conditions. While we refer the reader to \cite[Definition 1.12]{SjamaarLerman} for a precise definition and description of stratified symplectic spaces, the following exposition will be sufficient for our purposes.
Fix a point $\xi\in\mathfrak{g}^*_{\text{reg}}$, and recall that $G_{\xi}\subset G$ is a maximal torus. Adopt the more parsimonious notation $$\mathrm{Stab}(G,\xi)\coloneqq\mathrm{Stab}(G_{\xi},\mu^{-1}(\xi)),$$ and note that $\mu^{-1}(\xi)$ is the disjoint union
$$\mu^{-1}(\xi)=\bigsqcup_{H\in\mathrm{Stab}(G,\xi)}\mu^{-1}(\xi)_H.$$
The arguments in the proof of \cite[Theorem 2.1]{SjamaarLerman} imply that each subset $\mu^{-1}(\xi)_H$ is a locally closed, $G_{\xi}$-invariant submanifold of $M$. These arguments also imply that the topological quotient $(\mu^{-1}(\xi)_H)/G_{\xi}$ carries a unique manifold structure for which the canonical map $\pi:\mu^{-1}(\xi)_H\longrightarrow(\mu^{-1}(\xi)_H)/G_{\xi}$ is a surjective submersion. One further consequence of \cite[Theorem 2.1]{SjamaarLerman} is the existence of a symplectic form $\overline{\omega}$ on $(\mu^{-1}(\xi)_H)/G_{\xi}$ such that $\pi^*\overline{\omega}$ is the pullback of $\omega$ along the inclusion $\mu^{-1}(\xi)_H\longhookrightarrow M$. It follows that $M\sll{\xi}G=\mu^{-1}(\xi)/G_{\xi}$ is a disjoint union
\begin{equation}\label{Equation: Symplectic strata}M\sll{\xi}G=\bigsqcup_{H\in\mathrm{Stab}(G,\xi)}(\mu^{-1}(\xi)_H)/G_{\xi}\end{equation} of symplectic manifolds, called the \textit{symplectic strata} of $M\sll{\xi}G$.
\begin{remark}\label{Remark: Strata}
The quotients $(\mu^{-1}(\xi)_H)/G_{\xi}$ need not be manifolds in the traditional sense of the term; each may have connected components of different dimensions. To obtain a stratification into genuine symplectic manifolds, one must refine \eqref{Equation: Symplectic strata} and declare the symplectic strata to be the connected components of the quotients $(\mu^{-1}(\xi)_H)/G_{\xi}$. The distinction between \eqref{Equation: Symplectic strata} and this refined stratification will not materially affect any argument in this paper.
\end{remark}
\begin{definition}\label{Definition: Isomorphism}
Let $G$ and $K$ be compact connected Lie groups with respective Lie algebras $\g$ and $\mathfrak{k}$. Suppose that $M$ (resp. $N$) is a Hamiltonian $G$-space (resp. Hamiltonian $K$-space) with moment map $\mu:M\longrightarrow\g^*$ (resp. $\nu:N\longrightarrow\mathfrak{k}^*$). Take $\xi\in\g^*_{\text{reg}}$ and $\eta\in\mathfrak{k}^*_{\text{reg}}$. A pair of maps $\varphi:M\sll{\xi}G\longrightarrow N\sll{\eta}K$ and $\phi:\mathrm{Stab}(G,\xi)\longrightarrow\mathrm{Stab}(K,\eta)$ will be called an \textit{isomorphism of stratified symplectic spaces} if the following conditions are satisfied:
\begin{itemize}
\item[\textup{(i)}] $\varphi$ is a homeomorphism;
\item[\textup{(ii)}] $\phi$ is a bijection;
\item[\textup{(iii)}] $\varphi$ restricts to a symplectomorphism $(\mu^{-1}(\xi)_H)/G_{\xi}\longrightarrow(\nu^{-1}(\eta)_{\phi(H)})/K_{\eta}$ for each $H\in\mathrm{Stab}(G,\xi)$.
\end{itemize}
\end{definition}
\begin{remark}
Assume that this definition is satisfied. Equip $M\sll{\xi}G$ and $N\sll{\eta}K$ with the refined stratifications discussed in Remark \ref{Remark: Strata}. By (ii) and (iii), the association $S\mapsto\varphi(S)$ defines a bijection from the set of symplectic strata $S\subset M\sll{\xi}G$ to the set of symplectic strata in $N\sll{\eta}K$. Property (i) implies that this bijection is an isomorphism of partially ordered sets, i.e. any symplectic strata $S,T\subset M\sll{\xi}G$ satisfying $S\subset\overline{T}$ must also satisfy $\varphi(S)\subset\overline{\varphi(T)}$. We also know that $\varphi$ restricts to a symplectomorphism $S\longrightarrow\varphi(S)$ for all symplectic strata $S\subset M\sll{\xi}G$, as follows from (iii). In other words, an isomorphism in the sense of Definition \ref{Definition: Isomorphism} gives rise to an isomorphism between the refined symplectic stratifications on $M\sll{\xi}G$ and $N\sll{\eta}K$.
\end{remark}
\subsection{A more general abelianization theorem}\label{Subsection: More general}
Let us continue with the notation and conventions set in Section \ref{Section: The abelianization theorem}, as well as those in Section \ref{Subsection: Stratified symplectic} concerning stratified symplectic spaces.
In preparation for our next proposition, we encourage the reader to recall Proposition \ref{Proposition: Universal maximal torus} and Corollary \ref{Corollary: Homeomorphism}.
\begin{proposition}\label{Proposition: Strata}
Let $M$ be a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. Suppose that $\xi\in\g^*_{\emph{s-reg}}$.
\begin{itemize}
\item[\textup{(i)}] The association $H\mapsto\tau_{\xi}(H)$ defines a bijection $\mathrm{Stab}(G,\xi)\overset{\cong}\longrightarrow\mathrm{Stab}(\mathbb{T}_{\emph{big}},\lambda_{\emph{big}}(\xi))$.
\item[\textup{(ii)}] If $H\subset G_{\xi}$ is a closed subgroup, then $\delta_{\xi}$ restricts to a diffeomorphism
$$\mu^{-1}(\xi)_H\times\mathbb{T}_{\emph{int}}\overset{\cong}\longrightarrow\lambda_M^{-1}(\lambda_{\emph{big}}(\xi))_{\tau_{\xi}(H)}.$$
\end{itemize}
\end{proposition}
\begin{proof}
As in the proof of Theorem \ref{Theorem: Free case}(i), we have
$$\delta_{\xi}((g,t)\cdot x)=\rho_{\xi}(g,t)\cdot\delta_{\xi}(x)$$ for all $(g,t)\in G_{\xi}\times\mathbb{T}_{\text{int}}$ and $x\in\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}}$. It follows that $K\mapsto\rho_{\xi}(K)$ defines a bijection $$\mathrm{Stab}(G_{\xi}\times\mathbb{T}_{\text{int}},\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}})\overset{\cong}\longrightarrow\mathrm{Stab}(\mathbb{T}_{\text{big}},\lambda_{\text{big}}(\xi)),$$ and that $\delta_{\xi}$ restricts to a homeomorphism $$(\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}})_K\overset{\cong}\longrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\rho_{\xi}(K)}$$ for all closed subgroups $K\subset G_{\xi}\times\mathbb{T}_{\text{int}}$. On the other hand, we clearly have a bijection
$$\mathrm{Stab}(G,\xi)\overset{\cong}\longrightarrow\mathrm{Stab}(G_{\xi}\times\mathbb{T}_{\text{int}},\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}}),\quad H\mapsto H\times\{e\}\subset G_{\xi}\times\mathbb{T}_{\text{int}}.$$ We also note that $(\mu^{-1}(\xi)\times\mathbb{T}_{\text{int}})_K = \mu^{-1}(\xi)_H\times\mathbb{T}_{\text{int}}$ and $\rho_{\xi}(K)=\tau_{\xi}(H)$ for $K=H\times\{e\}$. These last three sentences combine to imply the desired results.
\end{proof}
The following is our generalization of Theorem \ref{Theorem: Free case} to stratified symplectic spaces.
\begin{theorem}
Let $M$ be a Hamiltonian $G$-space with moment map $\mu:M\longrightarrow\g^*$. If $\xi\in\g^*_{\emph{s-reg}}$, then there is a canonical isomorphism $M\sll{\xi} G\cong M_{\emph{s-reg}}\sll{\lambda_{\emph{big}}(\xi)}\mathbb{T}_{\emph{big}}$ of stratified symplectic spaces.
\end{theorem}
\begin{proof}
By Corollary \ref{Corollary: Homeomorphism} and Proposition \ref{Proposition: Strata}, the inclusion $\mu^{-1}(\xi)\longhookrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$ descends to a homeomorphism $$\mathrm{f}:\mu^{-1}(\xi)\overset{\cong}\longrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))/\mathbb{T}_{\text{int}}$$ whose restriction to $\mu^{-1}(\xi)_H$ is a diffeomorphism $$\mu^{-1}(\xi)_H\overset{\cong}\longrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)}/\mathbb{T}_{\text{int}}$$ for all $H\in\mathrm{Stab}(G,\xi)$. We also note that the $\mathbb{T}_{\text{big}}$-action on $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))$ induces a residual action of the subtorus $\mathbb{T}_{\text{small}}$ on $\lambda_M^{-1}(\lambda_{\text{big}}(\xi))/\mathbb{T}_{\text{int}}$. Proposition \ref{Proposition: Alternative action description} then tells us that $$\mathrm{f}(g\cdot m)=\tau_{\xi}(g)\cdot \mathrm{f}(m)$$ for all $g\in G_{\xi}$ and $m\in\mu^{-1}(\xi)$. The map $\mathrm{f}$ therefore descends to a homeomorphism
$$\varphi:M\sll{\xi}G\overset{\cong}\longrightarrow M_{\text{s-reg}}\sll{\lambda_{\text{big}}(\xi)}\mathbb{T}_{\text{big}}$$
whose restriction to $(\mu^{-1}(\xi)_H)/G_{\xi}$ is a diffeomorphism $$\varphi_H:(\mu^{-1}(\xi)_H)/G_{\xi}\overset{\cong}\longrightarrow(\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)})/\mathbb{T}_{\text{big}}$$ for all $H\in\mathrm{Stab}(G,\xi)$.
Now consider the bijection
$$\phi:\mathrm{Stab}(G,\xi)\overset{\cong}\longrightarrow\mathrm{Stab}(\mathbb{T}_{\text{big}},\lambda_{\text{big}}(\xi)),\quad H\mapsto \tau_{\xi}(H)$$ from Proposition \ref{Proposition: Strata}(i). We claim that $\varphi$ and $\phi$ define an isomorphism of stratified symplectic spaces, in the sense of Definition \ref{Definition: Isomorphism}. In light of the previous paragraph, it suffices to prove the following for all $H\in\mathrm{Stab}(G,\xi)$: $\varphi_H$ pulls the symplectic form $\beta$ on $(\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)})/\mathbb{T}_{\text{big}}$ back to the symplectic form $\alpha$ on $(\mu^{-1}(\xi)_H)/G_{\xi}$.
Proposition \ref{Proposition: Strata}(ii) implies that $\mu^{-1}(\xi)_H\subset\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)}$. This leads to the commutative diagram $$\begin{tikzcd}
\mu^{-1}(\xi)_H\arrow[r, "\mathrm{j}"] \arrow[d, "\pi"'] & \lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)} \arrow[d, "\theta"] \\
(\mu^{-1}(\xi)_H)/G_{\xi} \arrow[r, swap, "\varphi_H"] & (\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)})/\mathbb{T}_{\text{big}}
\end{tikzcd},$$
where $\pi:\mu^{-1}(\xi)_H\longrightarrow(\mu^{-1}(\xi)_H)/G_{\xi}$ and $\theta:\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)}\longrightarrow (\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)})/\mathbb{T}_{\text{big}}$ are the canonical quotient maps and $\mathrm{j}:\mu^{-1}(\xi)_H\longhookrightarrow\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)}$ is the inclusion. We also have inclusion maps $\mathrm{k}:\mu^{-1}(\xi)_H\longhookrightarrow M$ and $\mathrm{l}:\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)}\longhookrightarrow M$. Another consideration is that $\alpha$ (resp. $\beta$) is the unique $2$-form on $(\mu^{-1}(\xi)_H)/G_{\xi}$ (resp. $(\lambda_M^{-1}(\lambda_{\text{big}}(\xi))_{\tau_{\xi}(H)})/\mathbb{T}_{\text{big}}$) for which $\pi^*\alpha=\mathrm{k}^*\omega$ (resp. $\theta^*\beta=\mathrm{l}^*\omega$). It therefore suffices to prove that $\pi^*(\varphi_H^*\beta)=\mathrm{k}^*\omega$. On the other hand, our commutative diagram implies that
$$\pi^*(\varphi_H^*\beta)=\mathrm{j}^*(\theta^*\beta)=\mathrm{j}^*(\mathrm{l}^*\omega)=\mathrm{k}^*\omega.$$
This completes the proof.
\end{proof}
\bibliographystyle{acm}
\section{Solving patrolling problems with a fully connected environment}
\label{app-algorithm}
Let $\mathcal{G} = (U,T,\hat{u},E,d)$ be a patrolling problem where
$T = U$, $E = U \times U$, and let $S$ be the signature of $\mathcal{G}$.
We start by defining the semantics for the
``strategy expressions'' introduced in
Section~\ref{sec-modular-strategies} precisely.
\begin{itemize}
\item $\mathit{Circle}(\U{i,N},M,L)$ denotes the $c$-modular strategy where $c = L \cdot (N/M)$ such that
the distribution $\mu_\ell$, where $0 \leq \ell < c$, selects uniformly
among the elements of $\U{i+\hat{\ell} \cdot M, M}$ where $\hat{\ell} = \ell~\mathrm{mod}~(N/M)$.
In other words, $\mathit{Circle}(\U{i,N},M,L)$ is a strategy which splits $\U{i,N}$ into pairwise disjoint subsets of size~$M$
and then ``walks around'' these sets $L$ times (actually, $\mathit{Circle}(\U{i,N},M,L)$ can also be seen as
an $(N/M)$-modular strategy, but for technical reasons we prefer to interpret it as a $c$-modular strategy).
\item if $\theta_1$ and $\theta_2$ denote $c_1$-modular and $c_2$-modular strategies
with underlying distributions $\mu_0^1,\ldots,\mu_{c_1-1}^1$ and $\mu_0^2,\ldots,\mu_{c_2-1}^2$, respectively,
then $\theta_1;\theta_2$ denotes the $(c_1{+}c_2)$-modular strategy with the underlying distributions
$\mu_0^1,\ldots,\mu_{c_1-1}^1,\mu_0^2,\ldots,\mu_{c_2-1}^2$.
\item if $\theta_1$ and $\theta_2$ denote $c$-modular strategies
with underlying distributions $\mu_0^1,\ldots,\mu_{c-1}^1$ and $\mu_0^2,\ldots,\mu_{c-1}^2$, respectively, then
$\nu_p[\theta_1,\theta_2]$ denotes the $c$-modular strategy with the underlying distributions
$\mu_0,\ldots,\mu_{c-1}$, where $\mu_i = (1-\alpha(p)) \cdot \mu_i^1 + \alpha(p) \cdot \mu_i^2$ for all
$0 \leq i < c$.
\end{itemize}
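The three constructors above can be made concrete by representing a $c$-modular strategy simply as the list of its $c$ distributions. The following Python sketch is only an illustration of this semantics (all function names are ours, and we make the concrete assumption $\alpha(p)=p$):

```python
from fractions import Fraction

def circle(nodes, M, L):
    """Circle(U,M,L): split the N nodes into blocks of size M and walk
    around the blocks L times; yields c = L*(N/M) uniform distributions."""
    N = len(nodes)
    assert N % M == 0
    blocks = [nodes[i:i + M] for i in range(0, N, M)]
    one_round = [{u: Fraction(1, M) for u in b} for b in blocks]
    return one_round * L

def concat(theta1, theta2):
    """theta1;theta2: the (c1+c2)-modular strategy."""
    return theta1 + theta2

def mix(p, theta1, theta2):
    """nu_p[theta1,theta2], assuming alpha(p) = p: pointwise convex
    combination of two c-modular strategies."""
    assert len(theta1) == len(theta2)
    out = []
    for m1, m2 in zip(theta1, theta2):
        keys = set(m1) | set(m2)
        out.append({u: (1 - p) * m1.get(u, 0) + p * m2.get(u, 0) for u in keys})
    return out
```

For instance, `circle(list(range(4)), 2, 3)` produces the $6$ per-step distributions of a strategy that walks around two blocks of size $2$ three times.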
Now we give a detailed description of the algorithm of Section~\ref{sec-modular-strategies}.
We construct a recursive function
$\textsc{Defend}$ which takes as input a triple $(\U{i,N},D,e)$, where $\U{i,N}$ is the set of nodes to be defended,
$D$ is the number of steps available for defending $\U{i,N}$, and $e$ is an expression
which represents the ``weight'' of the constructed defending strategy
in the final distribution $\nu$. The procedure outputs a pair $(\theta,V)$ where
$\theta$ is an expression specifying a $D$-modular strategy for $\U{i,N}$, and $V$ is
an arithmetic expression representing the guaranteed ``coverage'' of the targets in $\U{i,N}$ when using $\theta$ with
the weight~$e$.
As a side effect, the function $\textsc{Defend}$ may produce
equations for the employed variables. The algorithm is invoked by
$\textsc{Defend}(\U{1,|U|},d,1)$, and the system of equations is initially empty.
A call $\textsc{Defend}(\U{i,N},D,e)$ is processed as follows:
\begin{itemize}
\item If $D \mid N$ and $N = k \cdot D$, then
$\theta = \mathit{Circle}(\U{i,N},k,1)$. Observe that every node of $\U{i,N}$
is visited at most once in $D$ steps, and this happens with probability $D/N$.
If the weight of $\theta$ is $e$, then this probability becomes $e\cdot(D/N)$ (since
$N$ and $D$ are constants, the expression $V = e\cdot(D/N)$ is parameterized just by
the variables of $e$). Hence, the function returns the pair $(\theta,V)$.
\item If $N \mid D$ and $D = k \cdot N$, then
$\theta = \mathit{Circle}(\U{i,N},1,k)$. Every node of $\U{i,N}$
is visited precisely $k$ times in $D$ steps. If the weight of $\theta$ is $e$, then the probability
of visiting a given node in $D$ steps is $V = 1- (1-e)^k$.
The function returns the pair $(\theta,V)$.
\item If $N > D$, $D \nmid N$, and $N = k \cdot D + c$ where $1 \leq c < D$, then
we split the set $\U{i,N}$ into disjoint subsets $\U{i,k \cdot D}$ and $\U{i + k \cdot D,c}$ with precisely
$k \cdot D$ and $c$ elements, respectively. Then, we pick a fresh variable
$p$ and issue two recursive calls:
\[
(\theta_1,V_1) = \textsc{Defend}(\U{i,k \cdot D},D,(1-p)\cdot e),\hspace*{2em}
(\theta_2,V_2) = \textsc{Defend}(\U{i + k \cdot D,c},D,p \cdot e)
\]
The set of equations is enriched by $V_1 = V_2$. That is, we require that $p$ is chosen so that
the nodes of $\U{i,k \cdot D}$ and $\U{i + k \cdot D,c}$ are protected equally well. Then, we put
$\theta = \nu_p[\theta_1,\theta_2]$ and we set $V = V_1$. The function returns
the pair $(\theta,V)$.
\item Finally, if $D > N$, $N \nmid D$, and $D = k \cdot N + c$ where $1 \leq c < N$, we issue two
recursive calls:
\[
(\theta_1,V_1) = \textsc{Defend}(\U{i,N},k\cdot N, e),\hspace*{2em}
(\theta_2,V_2) = \textsc{Defend}(\U{i,N},c,e)
\]
This is perhaps the most subtle part of our algorithm. Here we do not split the set $\U{i,N}$, but
the number of steps available to protect~$\U{i,N}$. Intuitively, the constructed strategy $\theta$ first
tries to loop over the targets of $\U{i,N}$ as long as possible (i.e., for the first $k\cdot N$ steps).
This is what $\theta_1$ does. Then, $\theta$ tries to exploit the remaining $c$ steps in the
best possible way, i.e., by employing $\theta_2$. That is, we put $\theta = \theta_1;\theta_2$.
If the weight of $\theta$ is $e$,
then the targets of $\U{i,N}$ are protected with probability at least $V = 1 - (1-V_1)(1-V_2)$. The function
returns the pair $(\theta,V)$.
\end{itemize}
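The four cases of $\textsc{Defend}$ can also be sketched numerically. Instead of emitting symbolic equations $V_1=V_2$ over fresh variables $p$, the sketch below resolves each $p$ on the fly by bisection (valid because $V_1$ is decreasing and $V_2$ is increasing in $p$); this is an illustrative simplification of the symbolic procedure, not the procedure itself:

```python
def defend(N, D, e):
    """Numeric sketch of Defend(U_{i,N}, D, e): the guaranteed coverage V
    of N fully connected nodes with D steps and weight e available."""
    if N % D == 0:                      # D | N: one Circle pass, each node hit once
        return e * D / N
    if D % N == 0:                      # N | D: k = D/N full loops over the nodes
        return 1 - (1 - e) ** (D // N)
    if N > D:                           # split the node set and equalise V1 = V2
        k, c = divmod(N, D)
        lo, hi = 0.0, 1.0
        for _ in range(100):            # bisection on the fresh variable p
            p = (lo + hi) / 2
            v1 = defend(k * D, D, (1 - p) * e)
            v2 = defend(c, D, p * e)
            if v1 > v2:
                lo = p
            else:
                hi = p
        return defend(k * D, D, (1 - p) * e)
    k, c = divmod(D, N)                 # D > N: split the time budget, theta1;theta2
    v1 = defend(N, k * N, e)
    v2 = defend(N, c, e)
    return 1 - (1 - v1) * (1 - v2)
```

As a sanity check, `defend(3, 2, 1.0)` converges to $(\sqrt{5}-1)/2\approx 0.618$: the unique $p$ with $1-p=1-(1-p)^2$ is $p=(3-\sqrt{5})/2$.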
\section{Computing finite-memory $\varepsilon$-optimal strategies}
\label{app-eps-opt}
\subsection{Proposition~\ref{cor:eps-opt-discr}}
Let us fix a patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$.
\begin{proposition}\label{cor:eps-opt-discr}
Given $\varepsilon>0$, there is an $\varepsilon$-optimal strategy
$\sigma^{\varepsilon}$ such that for every history $h$ and every $u\in U$ it
holds
\[
\sigma^{\varepsilon}(h)(u)=k\cdot \lceil (|U|\hat{d})/\varepsilon\rceil^{-1}\text{ for a suitable }k\in \mathbb{N}
\]
and
\[
\min_{u\in U}\ \ att\textrm{-}\val_h(\sigma^\varepsilon,u)\quad \geq\quad \mathit{val}-\varepsilon.
\]
\end{proposition}
\begin{proof}
Let $\sigma$ be an optimal strategy.
Let $U = \{u_1, u_2, \dots, u_{|U|}\}$ be the set of nodes of $\mathcal{G}$ and define $s=\lceil (|U|\hat{d})/\varepsilon\rceil^{-1}$.
For every history $h$ and every $1 \leq i \leq |U|$, we inductively define $\sigma^{\varepsilon}(h)(u_i) = k_i \cdot s$, where $k_i$ is the largest number satisfying
$$ k_i \cdot s \leq \sigma(h)(u_i) + \sum_{j=1}^{i-1}(\sigma(h)(u_j) - \sigma^{\varepsilon}(h)(u_j))~.$$
This rounding procedure guarantees
that $\sigma^{\varepsilon}(h)$ is indeed a probability distribution over $U$, i.e. $\sum_{u\in U}\sigma^{\varepsilon}(h)(u) = 1$ (note that simple rounding would not guarantee this property).
Further, once we observe that the invariant $0 \leq \sum_{j=1}^{i-1}(\sigma(h)(u_j) - \sigma^{\varepsilon}(h)(u_j)) < s$ holds for all $1 \leq i \leq |U|$, it is easy to see that $|\sigma(h)(u_i) - \sigma^{\varepsilon}(h)(u_i)| < s$, which is captured by the following claim.
\begin{claim}{A}
$|\sigma(h)(u) - \sigma^{\varepsilon}(h)(u)| < s$ for every $u\in U$.
\end{claim}
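The inductive rounding in the proof is essentially a carry-based quantisation: round each probability down to the grid of multiples of $s$, carrying the rounding error forward to the next node. A small sketch in exact arithmetic (the function name is ours):

```python
from fractions import Fraction
from math import floor

def discretise(dist, s):
    """Round a probability distribution onto the grid of multiples of s,
    carrying the rounding error forward (the k_i of the proof), so that
    the result still sums to 1 and each entry moves by less than s."""
    assert (Fraction(1) / s).denominator == 1   # 1 must be a multiple of s
    out, carry = [], Fraction(0)
    for p in dist:
        k = floor((p + carry) / s)              # largest k with k*s <= p + carry
        out.append(k * s)
        carry += p - k * s                      # invariant: 0 <= carry < s
    return out
```

Since every output entry and the total $1$ are multiples of $s$, the final carry is a multiple of $s$ in $[0,s)$, hence $0$, so the output is again a distribution; and zero entries stay zero, exactly as needed for $\mathcal{H}(\sigma^{\varepsilon}) \subseteq \mathcal{H}(\sigma)$.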
It follows from the definition of $\sigma^{\varepsilon}$ that whenever $\sigma(h)(u) = 0$, then also $\sigma^{\varepsilon}(h)(u) = 0$.
This means that any history executable using $\sigma^{\varepsilon}$ is also executable using $\sigma$, i.e. $\mathcal{H}(\sigma^{\varepsilon}) \subseteq \mathcal{H}(\sigma)$.
Now, knowing that $att\textrm{-}\val_h(\sigma)$ is defined if $att\textrm{-}\val_h(\sigma^{\varepsilon})$ is defined, we prove the following:
\begin{align}
att\textrm{-}\val_h(\sigma^{\varepsilon})
&\geq att\textrm{-}\val_h(\sigma) - \varepsilon\label{eq1:first} \\
&\geq val_{\mathit{last}(h)}(\sigma_h) - \varepsilon\label{eq1:second} \\
&\geq val(\sigma) - \varepsilon\label{eq1:third}~.
\end{align}
\begin{itemize}
\item The inequality \eqref{eq1:third} directly follows from Proposition~\ref{prop:optimal} as $h \in \mathcal{H}(\sigma)$.
\item The inequality \eqref{eq1:second} clearly holds as forcing the attacker to attack immediately cannot decrease the value of the game.
\item To prove the first inequality \eqref{eq1:first}, we have to analyze the impact of the rounding in the definition of $\sigma^{\varepsilon}$.
Denote by $R[\iota, h, t, k]$ the probability of reaching $t \in T$ from $last(h)$, $h \in \mathcal{H}$, in up to $k$ steps using the strategy $\iota_h$.
We prove by induction on $k$ that for all $h \in \mathcal{H}$, $t \in T$, and $k \in \mathbb{N}$ we have that
\begin{align}
R[\sigma, h, t, k] - R[\sigma^{\varepsilon}, h, t, k] \leq k|U|s~.\nonumber
\end{align}
The base case ($k = 1$) directly follows from Claim~A for all $u\in U$ and the fact that $R[\iota, h, t, 1] = \iota(h)(t)$ for every defender's strategy $\iota$.
Let us denote the difference $R[\sigma, h, t, k] - R[\sigma^{\varepsilon}, h, t, k]$ by $\Delta$.
For $k \geq 2$, we have
\begin{eqnarray}
\Delta
&=& \sigma(h)(t) + \sum_{u\in U \smallsetminus \{t\}}\sigma(h)(u)\cdot R[\sigma, hu, t, k - 1] - \label{eq1b:first} \\
& &\hspace{-11.2pt} -~\sigma^{\varepsilon}(h)(t) - \sum_{u\in U \smallsetminus \{t\}}\sigma^{\varepsilon}(h)(u)\cdot R[\sigma^{\varepsilon}, hu, t, k - 1] \nonumber \\
&\leq& s + \sum_{u\in U \smallsetminus \{t\}}(R[\sigma, hu, t, k - 1]\cdot (\sigma(h)(u) - \sigma^{\varepsilon}(h)(u)) \label{eq1b:second} + \\
& &\hspace{47.5pt} +~\sigma^{\varepsilon}(h)(u)\cdot (R[\sigma, hu, t, k - 1] - R[\sigma^{\varepsilon}, hu, t, k - 1])) \nonumber \\
&\leq& s + \sum_{u\in U \smallsetminus \{t\}} R[\sigma, hu, t, k - 1]\cdot s + \label{eq1b:third} \\ & &\hspace{7.6pt} +\hspace{-2pt}~\sum_{u\in U \smallsetminus \{t\}} \sigma^{\varepsilon}(h)(u)\cdot (k - 1)|U|s \nonumber \\
&\leq& s + (|U|-1)s + (k - 1)|U|s \label{eq1b:fourth} \\
&=& k|U|s\nonumber~.
\end{eqnarray}
\begin{itemize}
\item The equality \eqref{eq1b:first} follows from the definition of $R[\iota, h, t, k]$ as $R[\iota, h, t, k] = \iota(h)(t) + \sum_{u \in U \smallsetminus \{t\}} \iota(h)(u) \cdot R[\iota, hu, t, k - 1]$ for all $k \geq 2$.
\item The inequality \eqref{eq1b:second} is just an application of Claim~A and of the formula $ab - a'b' = b(a - a') + a'(b - b')$.
\item The inequality \eqref{eq1b:third} follows from Claim~A and from the induction hypothesis.
\item The inequality \eqref{eq1b:fourth} holds because $R[\sigma, hu, t, k - 1] \leq 1$ and $\sum_{u\in U \smallsetminus \{t\}} \sigma^{\varepsilon}(h)(u) \leq 1$.
\end{itemize}
So we have that $R[\sigma, h, t, d(t)] - R[\sigma^{\varepsilon}, h, t, d(t)] \leq \hat{d}|U|s$.
However, note that
$$att\textrm{-}\val_h(\iota) = \inf_{t \in T} R[\iota, h, t, d(t)]$$
and therefore
$$att\textrm{-}\val_h(\sigma) - att\textrm{-}\val_h(\sigma^{\varepsilon}) \leq \hat{d}|U|s \leq \varepsilon~.$$
\end{itemize}
\end{proof}
\subsection{Formal proof of Theorem~\ref{thm:eps-opt}}
\noindent
In order to make lengthy computations more succinct, we use the following shorthand notation:
Given a~characteristic $c=(\mathbf{r},\mathbf{s},\mathbf{c})\in \mathbf{Char}$, we define:
\begin{itemize}
\item $c(0,u)=1$ if $u=c_{\mathbf{r}}$, and $c(0,u)=0$ for all $u\not = c_{\mathbf{r}}$.
\item $c(1,u)=c_{\mathbf{s}}(u)$ for all $u\in U$.
\item $c(k,u)=c_{\mathbf{c}}(k,u)$ for all $2\leq k\leq \hat{d}$ and $u\in T$.
\end{itemize}
Also, we use functional notation to denote vectors of characteristics (i.e., successors). That is, we represent each $(c^v)_{v\in U}\in \mathbf{Char}^U$ as a function $\zeta:U\rightarrow \mathbf{Char}$ where $\zeta(v)=c^v$ for every $v\in U$.
Let us formally define the notion of a successor of a characteristic. We say that $\zeta:U\rightarrow \mathbf{Char}$ is a~{\em successor} of $c\in \mathbf{Char}$ if $\zeta(v)(0,v)=1$ for every $v\in U$, and for every $u\in T$ and $2\leq k\leq \hat{d}$ we have
\[
c(k,u)=c(1,u)+\sum_{v\not = u} c(1,v)\cdot \zeta(v)(k-1,u)
\]
A set of characteristics $B\subseteq \mathbf{Char}$ is {\em closed} if there is at least one $c\in B$ satisfying $c(0,\hat{u})=1$, and every $c\in B$ has a successor $\zeta:U\rightarrow B$.
Given a defender's strategy $\sigma$ and a~history $h$, we denote by $c[\sigma,h]$ the characteristic defined as follows: $c[\sigma,h](0,\mathit{last}(h))=1$, and $c[\sigma,h](1,u)=\sigma(h)(u)$ for every $u\in U$, and for every $2\leq k\leq \hat{d}$ and $u\in T$ we define
\[
c[\sigma,h](k,u)=\mathcal{P}^{\sigma}(\mathcal{R}(\{hh'\mid \mathit{last}(h')=u, 1\leq |h'|\leq k\})\mid \mathcal{R}(h))
\]
(Intuitively, for $k\geq 1$, the value $c[\sigma,h](k,u)$ is the probability of reaching $u$ in at least one and at most $k$ steps starting with the history $h$ and using $\sigma$.) Denote by $\mathbf{Char}[\sigma]$ the set of all characteristics $c[\sigma,h]$ where $h\in \mathcal{H}(\sigma)$.
\begin{lemma}
Given a defender's strategy $\sigma$, the set $\mathbf{Char}[\sigma]$ is closed.
\end{lemma}
\begin{proof}
By definition, $c[\sigma,\hat{u}](0,\hat{u})=1$. Now consider $c[\sigma,h]\in \mathbf{Char}[\sigma]$. Let $\xi(v)=c[\sigma,hv]$. Clearly, $\xi(v)\in \mathbf{Char}[\sigma]$, so it suffices to show that $\xi$ is a successor of $c[\sigma,h]$. By definition,
\[
\xi(v)(0,v)=c[\sigma,hv](0,v)=1
\]
and, clearly,
\begin{eqnarray*}
c[\sigma,h](k,u) & = & c[\sigma,h](1,u)+\sum_{v\not = u} c[\sigma,h](1,v)\cdot c[\sigma,hv](k-1,u)\\
& = & c[\sigma,h](1,u)+\sum_{v\not = u} c[\sigma,h](1,v)\cdot \xi(v)(k-1,u)\\
\end{eqnarray*}
which means that $\xi$ is a successor of $c[\sigma,h]$.
\end{proof}
\noindent
Let $C$ be a finite closed subset of $\mathbf{Char}$. We say that a finite-memory strategy $\sigma=(M,N,m_0,\xi)$ is {\em consistent} with $C$ if
\begin{itemize}
\item $M=C$,
\item for every $c\in C$ the function $K(c)$ defined by
\[
K(c)(u)=N(c,u)\text{ \quad for all } u\in U
\]
is a successor of $c$.
\item $m_0=\hat{c}$ for some $\hat{c}\in C$ satisfying $\hat{c}(0,\hat{u})=1$,
\item $\xi(c,u)=c(1,u)$ for all $u\in U$.
\end{itemize}
\begin{proposition}\label{prop:closed-set-strat}
Let $C$ be a finite closed set of characteristics and assume that $\sigma$ is consistent with $C$. Then $\mathit{val}(\sigma)\geq \min_{c\in C} \mathit{val}(c)$.
\end{proposition}
\begin{proof}
Let us first prove that $c[\sigma,h]\in C$ for every history $h\in \mathcal{H}(\sigma)$. Let us fix a history $h$.
We prove that $c[\sigma,h]=N(\hat{c},h)$, i.e. that $c[\sigma,h](k,u)=N(\hat{c},h)(k,u)$ for all $0\leq k\leq \hat{d}$.
It is easy to show that $N(\hat{c},h)(0,\mathit{last}(h))=1$. For $k>0$ we proceed by induction on $k$.
Immediately from definitions we obtain that for every $u\in U$
\[
c[\sigma,h](1,u)=\sigma(h)(u)=\xi(N(\hat{c},h),u)=N(\hat{c},h)(1,u)
\]
Now consider $2\leq k\leq \hat{d}$. We have
\begin{eqnarray*}
c[\sigma,h](k,u) & = & c[\sigma,h](1,u)+\sum_{v\not = u} c[\sigma,h](1,v)\cdot c[\sigma,hv](k-1,u)\\
& = & N(\hat{c},h)(1,u)+\sum_{v\not = u} N(\hat{c},h)(1,v)\cdot N(\hat{c},hv)(k-1,u)\\
& = & N(\hat{c},h)(1,u)+\sum_{v\not = u} N(\hat{c},h)(1,v)\cdot N(N(\hat{c},h),v)(k-1,u)\\
& = & N(\hat{c},h)(1,u)+\sum_{v\not = u} N(\hat{c},h)(1,v)\cdot K(N(\hat{c},h))(v)(k-1,u)\\
& = & N(\hat{c},h)(k,u)
\end{eqnarray*}
Here the second equality follows by induction, the last equality follows from the fact that $K(N(\hat{c},h))$ is a successor of $N(\hat{c},h)$.
This proves that $c[\sigma,h]\in C$ for every history $h\in \mathcal{H}(\sigma)$.
Now since every defender's strategy $\sigma$ satisfies
\[
att\textrm{-}\val_{h}(\sigma,u)=c[\sigma,h](d(u),u)
\]
we obtain
\begin{eqnarray*}
\mathit{val}(\sigma) & = & \inf_{\pi} \mathcal{P}^{\sigma}_{\hat{u}}(\mathcal{D}[\pi]) \\
& = & \inf_{\pi}\sum_{h\in \mathcal{H}(\sigma), \pi(h)\in U} \mathcal{P}^{\sigma}(\mathcal{R}(h))\cdot att\textrm{-}\val_h(\sigma,\pi(h)) \\
& = & \inf_{\pi}\sum_{h\in \mathcal{H}(\sigma), \pi(h)\in U} \mathcal{P}^{\sigma}(\mathcal{R}(h))\cdot
c[\sigma,h](d(\pi(h)),\pi(h)) \\
& \geq & \inf_{\pi}\sum_{h\in \mathcal{H}(\sigma), \pi(h)\in U} \mathcal{P}^{\sigma}(\mathcal{R}(h))\cdot \min_{c\in C}\mathit{val}(c) \\
& = & \min_{c\in C}\mathit{val}(c)
\end{eqnarray*}
\end{proof}
\noindent
Let $\mathbf{Char}_{\varepsilon}$ be the set of all characteristics $c$ such that $c(k,u)$ is an integer multiple of $s^k$, where $s=\lceil (|U|\hat{d})/\varepsilon\rceil^{-1}$, for every $1\leq k\leq \hat{d}$ and every $u\in U$.
\begin{lemma}\label{lem:closed-eps}
The set $\mathbf{Char}_{\varepsilon}$ contains a (finite) closed subset $C$ such that $\min_{c\in C} \mathit{val}(c)\geq \mathit{val}-\varepsilon$.
\end{lemma}
\begin{proof}
It suffices to consider $\sigma^{\varepsilon}$ of Proposition~\ref{cor:eps-opt-discr}. Then $\mathbf{Char}[\sigma^{\varepsilon}]\subseteq \mathbf{Char}_{\varepsilon}$ is a closed subset.
\end{proof}
\noindent
Given any closed subset $C$ of $\mathbf{Char}_{\varepsilon}$ satisfying $\min_{c\in C} \mathit{val}(c)\geq \mathit{val}-\varepsilon$, we obtain, via Proposition~\ref{prop:closed-set-strat}, a finite-memory $\varepsilon$-optimal strategy. So it remains to give an algorithm for computing such a closed subset $C$.
\subsection*{The Algorithm}
The following procedure computes a closed subset $C$ of $\mathbf{Char}_{\varepsilon}$ which maximizes $\min_{c\in C}\mathit{val}(c)$ among all closed subsets of $\mathbf{Char}_{\varepsilon}$ (so in particular, satisfies the desired bound $\min_{c\in C}\mathit{val}(c)\geq \mathit{val}-\varepsilon$).
Let $c_1,\ldots,c_n$ be all characteristics of $\mathbf{Char}_{\varepsilon}$ ordered so that $i\leq j$ implies $\mathit{val}(c_i)\leq \mathit{val}(c_j)$, i.e., in order of non-decreasing value.
The following procedure maintains the invariant that $A=\{c_1,\ldots,c_k\}$ for some $k\geq 0$ and computes the desired closed set $C$:
\begin{itemize}
\item[1.] Initialize $A:=\{c_1\}$.
\item[2.] Compute a closed subset of $A$, or indicate that $A$ does not contain a closed subset as follows:
\begin{itemize}
\item[a.] Initialize $B:=A$,
\item[b.] compute $B'$ as the set of all $c\in B$ that have successors in $B$,
\item[c.] depending on $B'$ do:
\begin{itemize}
\item if either $B'=\emptyset$, or there is no $c\in B'$ such that $c(0,\hat{u})=1$, then indicate that there is no closed subset of $A$ (and proceed to 3.),
\item else, if $B=B'$, then return $B$ as a closed subset of $A$ (and proceed to 3.),
\item else assign $B:=B'$ and go to b.
\end{itemize}
\end{itemize}
\item[3.] If $A=\{c_1,\ldots,c_k\}$ does not contain a closed subset, then add $c_{k+1}$ to $A$ and go to 2., else return the closed subset as a result.
\end{itemize}
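The procedure can be sketched over an abstract successor relation. In the sketch below (function names are ours), the expensive enumeration of all vectors $\zeta:U\rightarrow B$ is hidden behind a callback returning candidate successors, which is an illustrative simplification:

```python
def greatest_closed_subset(A, successors, is_initial):
    """Step 2: iteratively discard characteristics of A that have no
    successor inside the current set; return the fixpoint if it is
    nonempty and contains an initial characteristic, else None."""
    B = set(A)
    while True:
        B2 = {c for c in B
              if any(all(z in B for z in zeta.values()) for zeta in successors(c))}
        if not B2 or not any(is_initial(c) for c in B2):
            return None
        if B2 == B:
            return B
        B = B2

def smallest_closed_prefix(chars_by_value, successors, is_initial):
    """Steps 1-3: add characteristics in order of non-decreasing value
    until the prefix contains a closed subset; return that subset."""
    A = []
    for c in chars_by_value:
        A.append(c)
        C = greatest_closed_subset(A, successors, is_initial)
        if C is not None:
            return C
    return None
```

On a toy instance with three "characteristics" x, y, z over $U=\{0,1\}$, where z has no successor, x has the single candidate successor $\zeta(0)=x,\zeta(1)=y$, and y has $\zeta(0)=\zeta(1)=y$ with y initial, the procedure returns the closed set $\{x,y\}$.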
\subsubsection*{Correctness}
In step 2., the algorithm computes the greatest closed subset of $A$ using a straightforward iterative algorithm. As the characteristics are added to $A$ in the order of non-decreasing value and there exists a closed subset $C$ of $\mathbf{Char}_{\varepsilon}$ satisfying $\min_{c\in C} \mathit{val}(c)\geq \mathit{val}-\varepsilon$ (due to Lemma~\ref{lem:closed-eps}), a subset $C'$ satisfying $\min_{c\in C'} \mathit{val}(c)\geq \mathit{val}-\varepsilon$ is computed when $C\subseteq A$ for the first time.
\subsubsection*{Complexity}
Let us denote by $\Theta$ the size of $\mathbf{Char}_{\varepsilon}$. It is straightforward to show that $\Theta\in \left(\frac{|U|\hat{d}}{\varepsilon}\right)^{\mathcal{O}(|U|\hat{d}^2)}$. Now the computation in step 2. b. takes time in $\Theta^{\mathcal{O}(|U|)}$ (for every characteristic of $B$ one has to check all possible successors, i.e. vectors of the form $\zeta:U\rightarrow B$). The whole algorithm iterates at most $\Theta$ times through 1. -- 3. (a characteristic is added to $A$ in every iteration except the last one). So the total complexity is at most
\[
\Theta^{\mathcal{O}(|U|)}= \left(\frac{|U|\hat{d}}{\varepsilon}\right)^{\mathcal{O}(|U|^2\hat{d}^2)}
\]
\section{Complexity of finding the characteristic subdigraph}
\label{app-HAM}
In this section we prove two claims which, combined with
Theorem~\ref{thm-subdigraph}, yield Theorem~\ref{thm-connected}.
We will focus on a subclass of patrolling problems $\mathcal{G} = (U,T,\hat{u},E,d)$
such that $T = U$, $\mathit{supp}(S) = \{k\}$.
In such a case, for a well-formed attack signature $S$,
we have that $|U|=n$ is divisible by $k$ and that the characteristic digraph
$M_s$ has a particularly nice description:
$M_s$ has the node set $u_0,\dots,u_{n-1}$, and $u_iu_j$ is an arc iff
$j\equiv i+1 \pmod{k}$.
Our proofs will actually be expressed in terms of a {\em special equitable
$k$-colouring} of the complementary digraph $H=(U,\overline{E}\,)$
of the environment $E$ \,(i.e., $H$ having precisely those arcs, but not
the loops, which are absent in $E$):
Let $|U|=|V(H)|=a\cdot k$.
The task is to find a colouring $c:V(H)\to\{1,2,\dots,k\}$ of the node set
such that (a) $|c^{-1}(i)|=a$ for each $i=1,\dots,k$,
and (b) no arc $xy$ of $H$ receives colours
$c(x)=j$, $c(y)=(j\,\mathit{mod}\,k)+1$ for some $j\in\{1,\dots,k\}$
(while both $x,y$ might receive the same colour).
Comparing this with the definition of $M_s$ one immediately
concludes that $(U,E)$ contains a subdigraph isomorphic to $M_s$ if, and only if,
the complement $H$ has a special equitable $k$-colouring.
\begin{lemma}
For a simple digraph $H$ on an even number of nodes, one can find in
polynomial time a special equitable $2$-colouring of $H$, if it exists.
\end{lemma}
\begin{proof}
Note that our definition of a special $2$-colouring does not allow
for arcs having two distinct colours on their nodes, in either order.
Hence every weak component of $H$ must be monochromatic
(recall that a weak component is a connected component of the underlying
undirected graph of $H$).
The problem thus reduces to finding a subset of weak components of $H$
summing to exactly half of the nodes of $H$.
This we solve in polynomial time by two folklore algorithms;
finding the weak components by BFS, and solving the knapsack problem in
unary notation by standard dynamic programming.
\end{proof}
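The proof is constructive and straightforward to implement. A sketch (names are ours): weak components via BFS on the underlying undirected graph, then a subset-sum dynamic program over component sizes with back-pointers:

```python
from collections import deque

def special_equitable_2_colouring(n, arcs):
    """Find a special equitable 2-colouring of a digraph on nodes 0..n-1,
    or return None. Every weak component must be monochromatic, so we
    look for a set of components covering exactly n/2 nodes."""
    assert n % 2 == 0
    adj = [[] for _ in range(n)]
    for x, y in arcs:                       # underlying undirected graph
        adj[x].append(y)
        adj[y].append(x)
    comp, comps = [-1] * n, []
    for v in range(n):                      # weak components by BFS
        if comp[v] == -1:
            comp[v], queue, nodes = len(comps), deque([v]), [v]
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if comp[w] == -1:
                        comp[w] = len(comps)
                        queue.append(w)
                        nodes.append(w)
            comps.append(nodes)
    half = n // 2                           # subset-sum DP with back-pointers
    reach = {0: []}
    for i, nodes in enumerate(comps):
        for total, chosen in list(reach.items()):
            t = total + len(nodes)
            if t <= half and t not in reach:
                reach[t] = chosen + [i]
    if half not in reach:
        return None
    colour = [2] * n
    for i in reach[half]:
        for v in comps[i]:
            colour[v] = 1
    return colour
```

For example, on $4$ nodes with the single arc $(0,1)$ the components have sizes $2,1,1$, and the component $\{0,1\}$ alone covers half of the nodes; with arcs $(0,1),(1,2)$ the sizes are $3,1$ and no special equitable $2$-colouring exists.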
\begin{lemma}
Let $k \in \mathbb{N}$, $k\geq3$.
Assume a simple digraph $H$ such that $|V(H)|$ is divisible by~$k$.
Then it is $\textbf{NP}$-complete to decide whether $H$
has a special equitable $k$-colouring.
\end{lemma}
\begin{proof}
We note first that there does not seem to be an easy way to reduce the case
of $k\geq3$ to that of $k+1$, and so we have to provide a hardness reduction for
each considered value of~$k$.
We reduce from the folklore $\textbf{NP}$-complete problem of {\em
two-colouring a $3$-uniform hypergraph}:
Given is a ground set $X$ and a family $\mathcal{F}$ of $3$-element subsets of $X$
(hyperedges).
The task is to decide whether the elements of $X$ can be assigned one of two
colours each such that no set in $\mathcal{F}$ is monochromatic.
\subparagraph{$(k=3)$}
For such a $3$-uniform hypergraph $(X,\mathcal{F})$ we first construct an
equivalent instance $H$ of the special equitable $3$-colouring problem.
Let $a=3|\mathcal{F}|+|X|$.\,\footnote{Although the formula for $a$ might seem arbitrary now, this
precise expression will become relevant with the case of $k=6$.}
We denote by $A_3$ the digraph of $a'=a+|\mathcal{F}|$ nodes
$s_1,s_2,\dots,s_{a'}$ and of $a'-1$ arcs $s_1s_i$ for $i=2,\dots,a'$ ($A_3$ is a star),
and by $B$ the digraph on $a-|\mathcal{F}|$ nodes with no arcs at all.
Then we construct a digraph $G_3$ on the node set $X\cup\mathcal{F}^3$
where $\mathcal{F}^3$ is a set containing exactly three distinct copies
$f,f',f''$ of each hyperedge $f\in\mathcal{F}$.
The arcs of $G_3$ are given as follows;
for each $f=\{x_1,x_2,x_3\}\in\mathcal{F}$ there is a directed $6$-cycle
on the nodes $x_1,f,x_2,f',x_3,f''$ in this cyclic order
(a permutation of $x_1,x_2,x_3$ is irrelevant, though).
A digraph $H$ is constructed from the disjoint union of $A_3,B$ and $G_3$,
by adding arcs from the node $s_1$ to all the nodes in $X$ of $G_3$.
Then $H$ has exactly $a+|\mathcal{F}|+a-|\mathcal{F}|+|X|+3|\mathcal{F}|=3a$ nodes,
and we claim that $H$ has a special equitable $3$-colouring if, and only if,
$(X,\mathcal{F})$ is two-colourable.
In the forward direction,
up to symmetry between the colours, we may assume that $s_1$ gets colour
$1$, and so all nodes of $A_3$ have colours $1$ or $3$.
We argue the following properties:
\begin{itemize}
\item[(i)]
The nodes in $X$ can only receive colours $1,3$.
\item[(ii)]
Among the nodes of $G_3$ not in $X$, at least $|\mathcal{F}|$
of them must receive colour $2$.
\end{itemize}
Here (i) follows from the fact that each node in $X$ ends an arc starting
in $s_1$ (of $A_3$) of colour $1$.
To get (ii), notice that we have to assign colour $2$ to exactly $a$ nodes,
none of which can lie in $A_3$ or in $X$, since $s_1$ has colour $1$.
We can give colour $2$ to the nodes of $B$, yet, at least
$a-|B|=|\mathcal{F}|$ of the nodes of colour $2$ must be in $G_3\setminus X$.
Now we prove that if (i),(ii) hold true, then the hypergraph
$(X,\mathcal{F})$ is two-colourable.
Consider one of the $6$-cycles of $G_3$, say the one on the nodes
$x_1,f,x_2,f',x_3,f''$.
It cannot happen that $c(f)=c(f')=2$: in such a case, depending on the colour $c(x_2)$,
there would be an arc in $G_3$ coloured with a forbidden pair $1,2$ or $2,3$.
Hence each of the $6$-cycles defining the arcs of $G_3$
(for each $f=\{x_1,x_2,x_3\}\in\mathcal{F}$)
has at most one vertex of colour $2$, and so, by (ii), exactly one.
Up to symmetry, let $c(f)=2$ in (any) one of the cycles.
Then $c(x_1)\not=c(x_2)$, since otherwise $c(x_1)=c(x_2)\in\{1,3\}$ would
again give a forbidden pair of colours $1,2$ or $2,3$, respectively.
Consequently, taking the colouring $c$ restricted to $X$, no hyperedge in
$\mathcal{F}$ is monochromatic and $(X,\mathcal{F})$ is two-colourable.
Conversely, consider a two-colourable $3$-uniform hypergraph $(X,\mathcal{F})$.
Let the colours occurring in $X$ be $1$ and $3$.
We extend this to a special equitable $3$-colouring of our digraph $H$ as
follows.
If a hyperedge $f=\{x_1,x_2,x_3\}\in\mathcal{F}$ is coloured $1,1,3$,
then we assign colours $1,1,1,3,3,2$ in order to the $6$-cycle on the nodes
$x_1,f,x_2,f',x_3,f''$ in $G_3$.
If this $f=\{x_1,x_2,x_3\}\in\mathcal{F}$ is coloured $1,3,3$,
then we assign colours $1,1,3,3,3,2$ in order to the same $6$-cycle.
We finally assign colour $1$ to $s_1$, colour $2$ to all nodes of $B$
(and so $|c^{-1}(2)|=a$),
and an arbitrary choice of colours $1,3$ to the remaining nodes of $A_3$
in order to ``balance'' $|c^{-1}(1)|=|c^{-1}(3)|=a$.
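For concreteness, the $k=3$ gadget can be assembled mechanically from $(X,\mathcal{F})$. The sketch below (with a labelling of nodes by tuples that is our own choice) builds $H$ and illustrates the node count $|V(H)|=3a$:

```python
def build_H_for_k3(X, F):
    """Construct the digraph H of the k=3 reduction from a 3-uniform
    hypergraph (X, F); returns (nodes, arcs) with tuple-labelled nodes."""
    a = 3 * len(F) + len(X)
    a_prime = a + len(F)
    S = [('s', i) for i in range(1, a_prime + 1)]          # star A_3
    arcs = [(S[0], s) for s in S[1:]]                      # a'-1 arcs s_1 s_i
    B = [('b', i) for i in range(a - len(F))]              # arcless digraph B
    copies = [('f', j, r) for j in range(len(F)) for r in range(3)]
    nodes = S + B + [('x', x) for x in X] + copies
    for j, f in enumerate(F):                              # one 6-cycle per hyperedge
        x1, x2, x3 = sorted(f)
        cyc = [('x', x1), ('f', j, 0), ('x', x2),
               ('f', j, 1), ('x', x3), ('f', j, 2)]
        arcs += [(cyc[i], cyc[(i + 1) % 6]) for i in range(6)]
    arcs += [(S[0], ('x', x)) for x in X]                  # arcs s_1 -> X
    return nodes, arcs
```

On the single hyperedge $\mathcal{F}=\{\{x_1,x_2,x_3\}\}$ over $X=\{x_1,x_2,x_3\}$ we get $a=6$, hence $7+5+3+3=18=3a$ nodes, matching the count in the proof.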
\subparagraph{$(k=4)$}
Second, we modify the previous construction of $H$ for the case of $k=4$.
We use the same $B$ and $G_3$.
We replace $A_3$ with the digraph $A_4$, the complete digraph on
$a'=a+|\mathcal{F}|$ nodes $s_1,s_2,\dots,s_{a'}$.
Then we add a new digraph $C_4$ formed by $a$ nodes with no arcs between them.
$H$ is constructed from a disjoint union of $A_4,C_4,B$ and $G_3$ by
adding all the arcs from $s_1$ to $C_4$ and all the arcs between the nodes
of $A_4$ and of $X\subseteq V(G_3)$ in both directions.
Clearly, $H$ has $4a$ nodes.
Consider a special equitable $4$-colouring of $H$.
Again, up to symmetry, let the colour of $s_1$ be $1$.
Then the whole of $A_4$ and all of $X$ may only receive colours $1$ or $3$, and (i) holds
true again.
Since no node of $C_4$ may be coloured $2$ due to the existence of an arc
from $s_1$, and since $B$ (which may be coloured by $2$) has size $a-|\mathcal{F}|$,
we get (ii), too.
Now, notice that the argument following (i),(ii) above did not use the pair
of colours $3,1$ as forbidden, and so it applies now as well;
$(X,\mathcal{F})$ is two-colourable.
Conversely, consider a two-colourable $3$-uniform hypergraph $(X,\mathcal{F})$.
Then, exactly as in the case of $k=3$, we get a valid colouring of $A_4\cup
B\cup G_3$, which we complete by assigning colour $4$ to the whole of $C_4$.
This results in a special equitable $4$-colouring of $H$.
\subparagraph{$(k\geq5)$}
Third, we define a general construction for all the values $k=5,6,\dots$.
We use the same gadgets $G_3$, $B$, and $A_4$, and introduce $k-3$ disjoint
copies of $C_4$ which we denote by $C_4,C_5$ and $D_5,\dots,D_{k-1}$.
Again, on the disjoint union of all these digraphs (which has $k\cdot a$
nodes) we define $H$ by adding
\begin{itemize}
\item all the arcs between the nodes
of $A_4$ and of $D_5\cup\dots\cup D_{k-1}$ in both directions,
\item all the arcs between the nodes
of $A_4\cup D_5\cup\dots\cup D_{k-1}$ and the nodes $X$ of $G_3$ in both directions,
\item all the arcs between the nodes
of $D_5\cup\dots\cup D_{k-1}$ and of $B$ in both directions,
\item all the arcs between $s_1$ and the nodes of $C_4$ in both directions,
and the same between $s_2$ and $C_5$.
\end{itemize}
Consider a special equitable $k$-colouring of $H$.
For simplicity, we call the forbidden pairs of colours \mbox{$j$,
$(j\,\mathit{mod}\,k)+1$} {\em adjacent}.
Since $A_4\cup D_5\cup\dots\cup D_{k-1}$ has $(k-4)a+1$ nodes, at least $k-3$
distinct colours must occur there.
However, $A_4$ (of $>\!a$ nodes) itself gets at least two distinct
non-adjacent colours $c_1,c_2$ which cannot be adjacent to any of the colours
occurring in $D_5\cup\dots\cup D_{k-1}$ other than $c_1,c_2$.
A simple case analysis shows that the only valid choice of colours is
$c_1=1$, $c_2=3$ and remaining $5,6,\dots,k-1$, up to rotation symmetry.
Consequently, $A_4$ receives only colours $1,3$ and each of the colours
$5,6,\dots,k-1$ occurs somewhere in $D_5\cup\dots\cup D_{k-1}$.
In particular, no node of $A_4\cup D_5\cup\dots\cup D_{k-1}$ is coloured~$2$.
Which nodes could have colour $2$?
Due to the arcs to and from $s_1,s_2$ in $A_4$, all the nodes of colour $2$
belong to $B\cup(G_3\setminus X)$, and since $G_3\setminus X$ has
$3|\mathcal{F}|<a$ nodes, we have $c^{-1}(2)\cap B\not=\emptyset$.
This has a twofold consequence; first, (ii) holds true also in this case,
and second, colours $1,3$ cannot occur in $D_5\cup\dots\cup D_{k-1}$.
Then, by simple counting, $c^{-1}(5)\cup\dots\cup c^{-1}(k-1)$ must be exactly
the node set of $D_5\cup\dots\cup D_{k-1}$,
and hence $X$ cannot get any of the colours $5,\dots,k-1$.
Neither colours $2,4$ or $k$ could occur in $X$ due to the arcs to and from $A_4$,
which concludes that (i) holds true, too.
Therefore, $(X,\mathcal{F})$ is two-colourable.
Conversely, consider a two-colourable $3$-uniform hypergraph $(X,\mathcal{F})$.
We colour $G_3\cup A_4$ by $1,2,3$ as above while giving $c(s_1)=1$ and $c(s_2)=3$.
Then we assign colour $2$ to all of $B$, colour $4$ to all of $C_4$, colour
$k$ to all of $C_5$, and colour $j$ to all of $D_j$ for $j=5,\dots,k-1$.
Again, this results in a special equitable $k$-colouring of $H$.
\end{proof}
\section{The existence of an optimal defender's strategy}
\label{app-optimal}
\begin{reftheorem}{thm:optimal}
For every patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$ there exists an optimal defender's
strategy.
\end{reftheorem}
\begin{proof}
We construct an optimal strategy $\sigma^*$ as a point-wise limit of a sequence $\sigma^1,\sigma^2,\ldots$ of strategies where each $\sigma^k$ is $1/k$-optimal. More precisely, we prove the following.
\begin{claim}{}
There is a sequence of defender's strategies $\sigma^1,\sigma^2,\ldots$ and a defender's strategy $\sigma^*$ such that
\begin{itemize}
\item each $\sigma^i$ is $1/i$-optimal, i.e., $\mathit{val}(\sigma^i)\geq \mathit{val}-1/i$,
\item for every $h\in \mathcal{H}$ and every $u\in U$ we have that $\lim_{i\rightarrow \infty} \sigma^i(h)(u)=\sigma^*(h)(u)$. (In particular, the limit exists for every $h$ and $u$.)
\end{itemize}
\end{claim}
\begin{claimproof}
Assume a lexicographical ordering $\preceq$ on histories of $\mathcal{H}$. To simplify our notation, we consider an ``empty'' history $\epsilon$ such that $\epsilon\preceq h$ for every $h\in \mathcal{H}$.
We consider histories $h$ successively according to $\preceq$ and inductively define sequences $\sigma^{h,1},\sigma^{h,2},\ldots$ of defender's strategies so that the following holds:
\begin{itemize}
\item[A.] each $\sigma^{h,i}$ is $1/i$-optimal,
\item[B.] $\sigma^{h,1},\sigma^{h,2},\ldots$ is a subsequence of all preceding sequences $\sigma^{h',1},\sigma^{h',2},\ldots$ for $h'\preceq h$,
\item[C.] for every ${h'\preceq h}$ the sequence of distributions $\sigma^{h,1}(h'),\sigma^{h,2}(h'),\ldots$ converges (point-wisely) to a probability distribution.
\end{itemize}
Then it suffices to put $\sigma^i=\sigma^{h,i}$ where $h$ is the $i$-th history according to $\preceq$, and to define $\sigma^*(h)=\lim_{i\rightarrow \infty} \sigma^i(h)$.
We define $\sigma^{h,i}$ as follows:
\begin{itemize}
\item For every $i\in \mathbb{N}$, we define $\sigma^{\epsilon,i}$ to be an arbitrary $1/i$-optimal strategy.
\item Assume that $\sigma^{h',1},\sigma^{h',2},\ldots$ has already been defined for $h'$. Consider the next history $h$ according to $\preceq$. As the space of all probability distributions on $U$ is compact, there exists a~subsequence $\sigma^{h,1},\sigma^{h,2},\ldots$ of $\sigma^{h',1},\sigma^{h',2},\ldots$ such that $\sigma^{h,i}(h)$ converges (point-wise) to a probability distribution on $U$.
\end{itemize}
These sequences clearly satisfy the above conditions A, B, and C.
\end{claimproof}
\noindent
We prove that the defender's strategy $\sigma^*$ obtained in the above Claim is optimal.
Suppose that $\sigma^*$ is not optimal, i.e. $\mathit{val}(\sigma^*)\leq\mathit{val}-\delta$ for some $\delta>0$.
Then there is an attacker's strategy $\pi$ such that $\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi])\leq \mathit{val}-\delta/2$. For every $i\in \mathbb{N}$, let $\pi_i$ behave as $\pi$ on histories where $\pi$ attacks before the $i$-th step, and not attack at all otherwise.
\begin{claim}{}
$\lim_{i\rightarrow \infty} \mathcal{P}^{\sigma^*}(\mathcal{D}[\pi_i])=\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi])$
\end{claim}
\begin{claimproof}
Note that
\begin{eqnarray*}
\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi]) & = & \sum_{\substack{h\in \mathcal{H}(\sigma^*)\\ \pi(h)\not = \bot}} \mathcal{P}^{\sigma^*}(h)\cdot att\textrm{-}\val_h(\sigma^*,\pi(h)) \\
& = & \sum_{\substack{h\in \mathcal{H}(\sigma^*)\\ \pi(h)\not = \bot\\ |h|\leq i}} \mathcal{P}^{\sigma^*}(h)\cdot att\textrm{-}\val_h(\sigma^*,\pi(h))+\sum_{\substack{h\in \mathcal{H}(\sigma^*)\\ \pi(h)\not = \bot\\ |h|>i}} \mathcal{P}^{\sigma^*}(h)\cdot att\textrm{-}\val_h(\sigma^*,\pi(h))\\
& = & \mathcal{P}^{\sigma^*}(\mathcal{D}[\pi_i])-p_i+\sum_{\substack{h\in \mathcal{H}(\sigma^*)\\ \pi(h)\not = \bot\\ |h|>i}} \mathcal{P}^{\sigma^*}(h)\cdot att\textrm{-}\val_h(\sigma^*,\pi(h))
\end{eqnarray*}
where $p_i$ is the probability that the attacker starts his attack after the $i$-th step. Since the last sum lies between $0$ and $p_i$, we obtain $|\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi])-\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi_i])|\leq p_i$. Clearly, $p_i\rightarrow 0$ as $i\rightarrow \infty$, which proves the claim.
\end{claimproof}
Thus for a sufficiently large $i$ we have that $\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi_i])\leq \mathit{val}-\delta/4$.
Now observe that for all sufficiently large $k\in \mathbb{N}$ we have
$|\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi_i])-\mathcal{P}^{\sigma^k}(\mathcal{D}[\pi_i])|\leq \delta/8$ because the transition probabilities determined by $\sigma^*$ and $\sigma^k$ on the first $i+\hat{d}$ steps get arbitrarily close as $k$ grows. However, then we obtain that $\mathcal{P}^{\sigma^k}(\mathcal{D}[\pi_i])\leq \mathit{val}-\delta/8$, which means that $\sigma^k$ cannot be $1/k$-optimal for large $k$.
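Spelling out the final arithmetic (this display only combines the two estimates above):
\[
\mathcal{P}^{\sigma^k}(\mathcal{D}[\pi_i]) \;\leq\; \mathcal{P}^{\sigma^*}(\mathcal{D}[\pi_i]) + \delta/8 \;\leq\; \mathit{val}-\delta/4+\delta/8 \;=\; \mathit{val}-\delta/8\,,
\]
so $\mathit{val}(\sigma^k)\leq \mathit{val}-\delta/8 < \mathit{val}-1/k$ for every $k>8/\delta$, contradicting the $1/k$-optimality of~$\sigma^k$.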
\end{proof}
\begin{proposition}\label{prop:optimal}
Assume that $\hat{u}$ is a target. Then every optimal defender's strategy $\sigma^*$ satisfies
\begin{equation}\label{eq:atval-opt}
\inf_{h\in \mathcal{H}(\sigma^*)} \ \ \min_{u\in T}\ \ att\textrm{-}\val_h(\sigma^*,u)\quad \geq\quad \mathit{val}
\end{equation}
\end{proposition}
\begin{proof}
Recall that we denote by $\mathit{val}_u$ and $\mathit{val}_u(\sigma)$ the values of $\mathcal{G}$ and of $\sigma$, resp., when $u$ is used as the initial node instead of $\hat{u}$.
It suffices to prove Proposition~\ref{prop:optimal} under the assumption
that $\mathit{val}=\mathit{val}_{\hat{u}}=\max_{u\in T} \mathit{val}_{u}$, because then we obtain, as a~consequence, that $\mathit{val}_u=\mathit{val}_{\hat{u}}$ for all $u\in T$. Indeed, using $\sigma^*$, every target node has to be visited. So given $u\in T$, there is a history $h\in \mathcal{H}(\sigma^*)$ such that $u=\mathit{last}(h)$. However, note that~(\ref{eq:atval-opt}) holds also for $\sigma^*_h$ instead of $\sigma^*$, and thus $\mathit{val}_u(\sigma^*_h)\geq \mathit{val}$. As $\mathit{val}_{\hat{u}}=\mathit{val}$ is maximal, we obtain that $\mathit{val}_u=\mathit{val}_{\hat{u}}$.
So assume that $\mathit{val}=\mathit{val}_{\hat{u}}=\max_{u\in T} \mathit{val}_{u}$. Let $\sigma^*$ be an optimal strategy. Note that $\mathit{val}=\max_{u\in T}\mathit{val}_u$ implies $\mathit{val}_{\mathit{last}(h)}(\sigma^*_h)\leq \mathit{val}$ for every history $h\in \mathcal{H}(\sigma^*)$ such that $\mathit{last}(h)\in T$. We obtain that $\mathit{val}_{\mathit{last}(h)}(\sigma^*_h)\leq \mathit{val}$ {\em for every history} $h\in \mathcal{H}(\sigma^*)$ because even if $\mathit{last}(h)$ is not a target, $\sigma^*_h$ starting in $\mathit{last}(h)$ must visit a target almost surely and the attacker may wait until it happens.
We claim that $\sigma^*$ satisfies (\ref{eq:atval-opt}), i.e. that $att\textrm{-}\val_h(\sigma^*,u)\geq\mathit{val}$ for all $h\in \mathcal{H}(\sigma^*)$ and all $u\in T$. Indeed, assume that
$att\textrm{-}\val_{h}(\sigma^*,u) \leq\mathit{val}-\delta$ for some $\delta>0$, $h\in \mathcal{H}(\sigma^*)$, and $u\in T$. Assume, w.l.o.g., that $\sigma^*$ follows the history $h$ with probability at least $\delta$.
Note that since $\mathit{val}_{\mathit{last}(h')}(\sigma^*_{h'})\leq \mathit{val}$ for every $h'$, the deficiency of $\sigma^*$ at $h$ cannot be compensated on other histories.
More precisely, let $A$ be the set of all histories $h'$ of length $|h|$ (so, in particular, $h\in A$). Then
\begin{eqnarray*}
\mathit{val}(\sigma^*) & \leq & \sum_{h'\in A} \mathcal{P}^{\sigma^*}(h') \cdot \mathit{val}_{\mathit{last}(h')}(\sigma^*_{h'}) \\
& = & \mathcal{P}^{\sigma^*}(h) \cdot \mathit{val}_{\mathit{last}(h)}(\sigma^*_h) + \sum_{h'\in A\smallsetminus \{h\}} \mathcal{P}^{\sigma^*}(h') \cdot \mathit{val}_{\mathit{last}(h')}(\sigma^*_{h'}) \\
& \leq & \mathcal{P}^{\sigma^*}(h) \cdot \mathit{val}_{\mathit{last}(h)}(\sigma^*_h) + \sum_{h'\in A\smallsetminus \{h\}} \mathcal{P}^{\sigma^*}(h') \cdot \mathit{val} \\
& \leq & \mathcal{P}^{\sigma^*}(h) \cdot\min_{u\in T}\ att\textrm{-}\val_h(\sigma^*,u) + \sum_{h'\in A\smallsetminus \{h\}} \mathcal{P}^{\sigma^*}(h') \cdot \mathit{val} \\
& \leq & \mathcal{P}^{\sigma^*}(h) \cdot (\mathit{val}-\delta) + \sum_{h'\in A\smallsetminus \{h\}} \mathcal{P}^{\sigma^*}(h') \cdot \mathit{val} \\
& = & \mathit{val}-\mathcal{P}^{\sigma^*}(h) \cdot \delta\\
& \leq & \mathit{val}-\delta^2
\end{eqnarray*}
The last inequality holds because $\mathcal{P}^{\sigma^*}(h)\geq \delta$. This contradicts the fact that $\sigma^*$ is optimal.
\end{proof}
\section{The existence of a characteristic subdigraph}
\label{app-subdigraph}
In this section we prove the non-trivial direction of
Theorem~\ref{thm-subdigraph}, i.e., we show that
if $\mathcal{G} = (U,T,\hat{u},E,d)$ is a patrolling problem
with $T = U$, a well formed attack signature $S$,
and a sufficiently connected environment, then $M_S$ is
($d$-preserving isomorphic to) a subdigraph of $(U,E)$.
Let us assume that $E$ is sufficiently connected, and let
$\sigma$ be a defender's strategy for $\mathcal{G}$ such
that $\mathit{val}(\sigma) = \left(\sum_{k \in \mathit{supp}(S)} \frac{S(k)}{k}\right)^{-1}$.
Due to Theorem~\ref{thm-upper}, we obtain that $\sigma$ is \emph{optimal},
i.e., $\mathit{val} = \mathit{val}(\sigma)$, and hence we can apply
Proposition~\ref{prop-abafy}~to~$\sigma$.
We reuse the notation introduced in the proof of
Theorem~\ref{thm-upper}. In particular, for all
$h \in \histories(\sigma)$ and $i \in \mathbb{N}_0$, we use
$\mathit{Node}_{h,i} : \mathcal{R}(h) \rightarrow U$ to denote a function which to every
run $hw \in \mathcal{R}(h)$ assigns the node $w_i$.
Further, we use $\mu_{h,i} \in \Delta(U)$ to denote a distribution
defined by
$\mu_{h,i}(u) = \mathcal{P}^\sigma (\mathit{Node}_{h,i} {=} u)/\mathcal{P}^\sigma(\mathcal{R}(h))$.
We start by realizing the following:
\begin{lemma}
\label{lem-mu-val}
For all $h \in \histories(\sigma)$ and $u \in U$, we have that
$\sum_{i=0}^{d(u)-1} \mu_{h,i}(u) = \mathit{val}$.
\end{lemma}
\begin{proof}
For all $h \in \histories(\sigma)$ and $u \in U$ we have that
\[
\sum_{i=0}^{d(u)-1} \mu_{h,i}(u) \quad \geq \quad
\mathit{val}_{\mathit{last}(h)}(\sigma_h) \quad = \quad \mathit{val}
\]
where the last equality is due to Proposition~\ref{prop-abafy}.
Now suppose that there exist some $h \in \histories(\sigma)$ and $u \in U$
such that $\sum_{i=0}^{d(u)-1} \mu_{h,i}(u) > \mathit{val}$. Let
$\ell = \prod_{k \in \mathit{supp}(S)}\, k$. For every $k \in \mathit{supp}(S)$, we put
\[
\alpha[k] \quad = \quad \sum_{u \in U,\ d(u)=k} \
\sum_{i=0}^{\ell-1}\ \mu_{h,i}(u)\,.
\]
Since $\sum_{u\in U}\mu_{h,i}(u)=1$ for every $i$, we have $\sum_{k \in \mathit{supp}(S)} \alpha[k] = \ell$. Further, for every
$k \in \mathit{supp}(S)$ we have that
$\alpha[k] \geq \mathit{val} \cdot S(k) \cdot \frac{\ell}{k}$, because otherwise
there inevitably exists some $0\leq i < \ell-k$ and $u \in U$ such that
$d(u) = k$ and $\sum_{j=i}^{i+d(u)-1} \mu_{h,j}(u) < \mathit{val}$, which means that
there exists $hh' \in \histories(\sigma)$ such that $|h'| = i$ and
$\sum_{j=0}^{d(u)-1} \mu_{hh',j}(u) < \mathit{val}$. Since
$\mathit{val}_{\mathit{last}(hh')}(\sigma_{hh'}) = \mathit{val}$ by
Proposition~\ref{prop-abafy}, we have a contradiction.
Since $\alpha[k] \geq \mathit{val} \cdot S(k) \cdot \frac{\ell}{k}$ for all
$k \in \mathit{supp}(S)$ and $\sum_{k \in \mathit{supp}(S)} \alpha[k] = \ell$, we obtain
that $\alpha[k] = \mathit{val} \cdot S(k) \cdot \frac{\ell}{k}$ for all
$k \in \mathit{supp}(S)$. Similarly, for every $k \in \mathit{supp}(S)$, every
$u \in U$ where $d(u) = k$, and every $0 \leq i < \ell-k$ we must have
that $\sum_{j=i}^{i+d(u)-1} \mu_{h,j}(u) \geq \mathit{val}$ (otherwise we obtain a
contradiction in the way indicated above), which is possible only if
$\sum_{j=i}^{i+d(u)-1} \mu_{h,j}(u) = \mathit{val}$ for all such $i$ and $u$.
In particular, this holds for $i=0$, and the proof is finished.
\end{proof}
\noindent
Now we present a sequence of observations that reveal a certain
form of periodicity in the structure of~$\sigma$. The next lemma
follows trivially from Lemma~\ref{lem-mu-val}.
\begin{lemma}
\label{lem-upper-tran-prob}
For all $h \in \histories(\sigma)$ and $u \in U$ we have that
$\sigma(h)(u) \leq \mathit{val}(\sigma)$.
\end{lemma}
\begin{lemma}
\label{lem-not-earlier}
Let $h \in \histories(\sigma)$ where $\mathit{last}(h) = u$. Then for every
$hh' \in \histories(\sigma)$ where $|h'| < d(u)$ we have that
$\mathit{last}(h') \neq u$. \end{lemma}
\begin{proof}
Suppose the contrary. Then there exist $hh' \in \histories(\sigma)$
and a node $u \in U$ such that $\mathit{last}(h) = \mathit{last}(h') = u$ and
$|h'| < d(u)$. Due to Proposition~\ref{prop-abafy}, we have that
$\mathit{val}_{u}(\sigma_h) = \mathit{val}$. Further,
$\sum_{i=0}^{d(u)-1} \mu_{h,i}(u) = \mathit{val}$ by Lemma~\ref{lem-mu-val}.
However, due to the existence of $h'$ we obtain that
$\mathit{val}_{u}(\sigma_h) < \sum_{i=0}^{d(u)-1} \mu_{h,i}(u)$, which is a
contradiction.
\end{proof}
\begin{lemma}
\label{lem-pred}
Let $h \in \histories(\sigma)$ where $\mathit{last}(h) = u$.
For all $i \geq 0$ and $hh' \in \histories(\sigma)$ where
$|h'| = i \cdot d(u) + d(u) - 1$
we have that $\sigma(hh')(u) = \mathit{val}(\sigma)$ and $u$ does
not appear among the last $d(u)-1$ nodes of $h'$.
\end{lemma}
\begin{proof}
By induction on $i$. In the base case ($i = 0$), we have that
$u$ does not appear among the last $d(u)-1$ nodes of $h'$
by Lemma~\ref{lem-not-earlier}. Further, by
Lemma~\ref{lem-mu-val} and Lemma~\ref{lem-not-earlier} we obtain
that $\mathit{val}_{u}(\sigma_h) = \mu_{h,d(u)-1}(u)$. Hence,
$\mu_{h,d(u)-1}(u) = \mathit{val} = \mathit{val}(\sigma)$. By
Lemma~\ref{lem-upper-tran-prob}, this is possible only if
for all $hh' \in \histories(\sigma)$ where $|h'| = d(u)-1$ we have
that $\sigma(hh')(u) = \mathit{val}(\sigma)$. For the inductive step,
consider $hh'h'' \in \histories(\sigma)$ where $|h'| = i \cdot d(u) + d(u) - 1$
and $|h''| = d(u)$. By applying the induction hypothesis to $hh'$,
we obtain that $\sigma(hh')(u) = \mathit{val}(\sigma)$. If $u$ was revisited
in the last $d(u)-1$ nodes of $h''$, we would have
$\sum_{j=0}^{d(u)-1} \mu_{hh',j} (u) > \mathit{val}$, which contradicts
Lemma~\ref{lem-mu-val}. If $\sigma(hh'h'')(u) < \mathit{val}(\sigma)$,
we obtain $\sum_{j=0}^{d(u)-1} \mu_{hh'u',j} (u) < \mathit{val}$, where $u'$
is the first node of $h''$, which again contradicts Lemma~\ref{lem-mu-val}.
\end{proof}
\begin{lemma}
\label{lem-sub-revisit}
Let $hh' \in \histories(\sigma)$ where $\mathit{last}(h) = \mathit{last}(h') = u$.
Then $d(u)$ divides $|h'|$.
\end{lemma}
\begin{proof}
Directly from Lemma~\ref{lem-pred}.
\end{proof}
\begin{lemma}
\label{lem-sub-revisit-surely}
Let $h \in \histories(\sigma)$ where $\mathit{last}(h) = u$. For every $i \in \mathbb{N}$,
there exist $hh' \in \histories(\sigma)$ such that $|h'| = i \cdot d(u)$ and
$\mathit{last}(h') =u$.
\end{lemma}
\begin{proof}
Immediate from Lemma~\ref{lem-pred}: every $hh''\in \histories(\sigma)$ with $|h''| = i \cdot d(u) - 1$ satisfies $\sigma(hh'')(u) = \mathit{val}(\sigma) > 0$, hence there is some $hh'\in\histories(\sigma)$ with $|h'| = i \cdot d(u)$ and $\mathit{last}(h')=u$.
\end{proof}
\noindent
For the rest of this section, let us fix a history
$h = u_0 \cdots u_m \in \histories(\sigma)$ such that
every node of $U$ appears in~$h$ (such an $h$ must exist).
For every $u \in U$, let us fix some $j \leq m$ such that $u_j = u$, and let
$\mathit{offset}(u) = j - \left\lfloor \frac{j}{d(u)} \right\rfloor \cdot d(u)$.
Note that due to Lemma~\ref{lem-sub-revisit}, the definition of $\mathit{offset}(u)$
is independent of the concrete choice of~$j$. For every $k \in \mathit{supp}(S)$
and every $i \in \{0,\ldots,k-1\}$, let $V_k[i]$ be the set of all
nodes $u \in U$ such that $d(u) = k$ and $\mathit{offset}(u) = i$.
\begin{lemma}
\label{lem-sub-edges}
Let $k,k' \in \mathit{supp}(S)$, $i \in \{0,\ldots, k{-}1\}$,
$i' \in \{0,\ldots, k'{-}1\}$, and $0 \leq \ell < k \cdot k'$ where
$i = \ell \,\mathit{mod}\, k$ and $i' = (\ell {+} 1) \,\mathit{mod}\, k'$.
Then for all $u \in V_k[i]$ and $u' \in V_{k'}[i']$ we have that
$(u,u') \in E$.
\end{lemma}
\begin{proof}
Due to Lemma~\ref{lem-sub-revisit-surely}, there exist
$h h' \in \histories(\sigma)$ and $hh''\in \histories(\sigma)$ such that
$|h'| = \ell$, $|h''| = \ell + 1$, $\mathit{last}(h') = u$, and $\mathit{last}(h'') = u'$.
By Lemma~\ref{lem-pred}, we obtain $\sigma(hh')(u') = \mathit{val}$, which
means $(u,u') \in E$.
\end{proof}
\begin{lemma}
\label{lem-sub-size}
For all $k \in \mathit{supp}(S)$ and $i \in \{0,\ldots, k-1\}$, the set
$V_k[i]$ contains exactly $S(k)/k$ nodes.
\end{lemma}
\begin{proof}
By applying Lemma~\ref{lem-mu-val}.
\end{proof}
\noindent
Due to Lemma~\ref{lem-sub-size}, we have that for all $k \in \mathit{supp}(S)$ and
$i \in \{0,\ldots, k-1\}$, the set $V_k[i]$ has exactly $S(k)/k$
elements, which we denote by $v_k[i,1],\ldots,v_k[i,S(k)/k]$.
Due to Lemma~\ref{lem-sub-edges}, for every pair of nodes
$v_k[i,j]$ and $v_{k'}[i',j']$, such that $i = \ell \,\mathit{mod}\, k$ and
$i' = (\ell {+} 1) \,\mathit{mod}\, k'$ for some
$0 \leq \ell < k\cdot k'$ we have that $(v_k[i,j],v_{k'}[i',j']) \in E$.
Hence, $(U,E)$ contains a subdigraph which is $d$-preserving isomorphic to $M_S$.
\section{A bound on the Stackelberg value}
\label{app-val-bound}
Using the arguments of the proof of Proposition~\ref{prop:optimal}, the
following proposition can be shown.
\begin{proposition}
\label{prop-abafy}
Let $\mathcal{G} = (U,T,\hat{u},E,d)$ be a patrolling problem. Further, let $\sigma$ be an optimal defender's
strategy and $h \in \histories(\sigma)$. Then $\mathit{val}_{\mathit{last}(h)}(\sigma_h) =
\mathit{val}(\sigma) = \mathit{val}$.
\end{proposition}
\noindent
Note that Proposition~\ref{prop-abafy} cannot be generalized to non-optimal
strategies, i.e., for a given non-optimal $\sigma$ and $h \in \histories(\sigma)$
we do \emph{not} necessarily have that $\mathit{val}_{\mathit{last}(h)}(\sigma_h) =
\mathit{val}(\sigma)$ (a counterexample is easy to find).
\begin{reftheorem}{thm-upper}
For every patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$
such that $T = U$,
we have that $\mathit{val} \leq \left(\sum_{k \in \mathit{supp}(S)} \frac{S(k)}{k}\right)^{-1}$
where $S$ is the attack signature of $\mathcal{G}$.
\end{reftheorem}
\begin{proof}
Let $\sigma$ be an optimal defender's strategy. For all
$h \in \histories(\sigma)$ and $i \in \mathbb{N}_0$,
let $\mathit{Node}_{h,i} : \mathcal{R}(h) \rightarrow U$ be a function which to every
run $hw \in \mathcal{R}(h)$ assigns the node $w_i$.
Further, let $\mu_{h,i} \in \Delta(U)$ be a distribution defined by
$\mu_{h,i}(u) = \mathcal{P}^\sigma (\mathit{Node}_{h,i} {=} u)/\mathcal{P}^\sigma(\mathcal{R}(h))$.
First, we show that for all $u \in U$ and $i \in \mathbb{N}_0$ we have
that $\sum_{j = i}^{i+d(u)-1} \mu_{\hat{u},j}(u) \geq \mathit{val}$. Let us fix some
$u \in U$ and $i \in \mathbb{N}_0$, and let $\mathcal{H}_{i}(\sigma)$ be the
set of all $h \in \histories(\sigma)$ such that $|h| = i$.
For every $h \in \mathcal{H}_{i}(\sigma)$, consider an attacker's
strategy $\pi$ such that $\pi(h) = u$. Due to
Proposition~\ref{prop-abafy}, we have that
$\mathit{val}_{\mathit{last}(h)}(\sigma_h) = \mathit{val}$, which means that
$\mathcal{P}^\sigma_{\mathit{last}(h)}(\mathcal{D}_{\mathit{last}(h)}[\pi_h])$ is at least $\mathit{val}$.
Obviously,
$\mathcal{P}^\sigma_{\mathit{last}(h)}(\mathcal{D}_{\mathit{last}(h)}[\pi_h]) \ \leq \
\sum_{j = 0}^{d(u)-1} \mu_{h,j}(u)$. Thus, we obtain
$\sum_{j = 0}^{d(u)-1} \mu_{h,j}(u) \geq \mathit{val}$. Now it suffices to
realize
\[
\sum_{j = i}^{i+d(u)-1} \mu_{\hat{u},j}(u) \quad = \quad
\sum_{h \in \mathcal{H}_{i}(\sigma)} \left (
\mathcal{P}^{\sigma}(\mathcal{R}(h)) \cdot \sum_{j = 0}^{d(u)-1} \mu_{h,j}(u) \right )
\quad \geq \quad
\mathit{val} \cdot \sum_{h \in \mathcal{H}_{i}(\sigma)} \mathcal{P}^{\sigma}(\mathcal{R}(h))
\quad = \quad \mathit{val} \,.
\]
Now we can continue with the main proof. Let $\ell = \prod_{k \in \mathit{supp}(S)}\, k$.
Since $\sum_{j = i}^{i+d(u)-1} \mu_{\hat{u},j}(u) \geq \mathit{val}$ for all $u \in U$
and $i \in \mathbb{N}_0$ (see above), we immediately obtain
$\sum_{j = 0}^{\ell-1} \mu_{\hat{u},j}(u) \geq \mathit{val} \cdot \frac{\ell}{d(u)}$.
Hence,
\[
\ell \quad = \quad \sum_{j = 0}^{\ell-1} \sum_{u \in U} \mu_{\hat{u},j}(u)
\quad = \quad
\sum_{u \in U} \sum_{j = 0}^{\ell-1} \mu_{\hat{u},j}(u)
\quad \geq \quad \sum_{u \in U} \mathit{val} \cdot \frac{\ell}{d(u)}
\quad = \quad \mathit{val} \cdot \sum_{k \in \mathit{supp}(S)} \frac{S(k) \cdot \ell}{k}
\]
Thus, we get
$\mathit{val} \leq \left(\sum_{k \in \mathit{supp}(S)} \frac{S(k)}{k}\right)^{-1}$
as desired.
\end{proof}
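For a concrete illustration of the bound (on a small hypothetical instance, not one taken from the text): if the attack signature satisfies $S(2)=2$ and $S(3)=3$, then Theorem~\ref{thm-upper} yields
\[
\mathit{val} \;\leq\; \left(\frac{S(2)}{2}+\frac{S(3)}{3}\right)^{-1}
\;=\; \left(\frac{2}{2}+\frac{3}{3}\right)^{-1} \;=\; \frac{1}{2}\,,
\]
i.e., no defender's strategy can guarantee detecting an attack with probability greater than $1/2$ on such an instance.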
\section{Detailed definitions for appendices}
\label{app-defs}
We use $\mathbb{N}$ and $\mathbb{N}_0$ to denote the sets of positive and
non-negative integers, respectively.
The sets of all finite and infinite words over a given alphabet $\Gamma$
are denoted by $\Gamma^*$ and $\Gamma^{\omega}$, respectively. We write
$\varepsilon$ for the empty word. The length of a given $w\in \Gamma^*\cup
\Gamma^{\omega}$ is denoted by $|w|$, where the length of an infinite word
is $\infty$. We denote by $\Gamma^{\leq k}$ the set of all words $w\in \Gamma^*$
satisfying $|w|\leq k$. The last letter of
a finite non-empty word $w$ is denoted by $\mathit{last}(w)$. Given a
(finite or infinite)
word $w$ over $\Gamma$, the individual
letters of $w$ are denoted by $w_0 w_1\cdots$. Given two words
$w,w'\in \Gamma^*\cup \Gamma^{\omega}$ we write $w\preceq w'$ whenever $w$ is a
prefix of $w'$, i.e., whenever there exists a word $w''\in \Gamma^*\cup
\Gamma^{\omega}$ such that $w'=w w''$. Further, we write $w\prec w'$ whenever
$w\preceq w'$ and $w\not = w'$.
Given a finite or countably infinite set $A$, a \emph{probability
distribution} over $A$ is a function
\mbox{$\delta : A \rightarrow [0,1]$} such that
$\sum_{a\in \mathit{supp}(\delta)} \delta(a)=1$. The \emph{support} of $\delta$
is the set $\mathit{supp}(\delta) = \{a \in A \mid \delta(a) \neq 0\}$.
We use $\Delta(A)$ to denote the set of all distributions over~$A$.
A distribution $\delta \in \Delta(A)$ is \emph{positive} if
$\delta(a) > 0$ for every $a \in A$, and \emph{rational} if $\delta(a)$
is rational for every $a \in A$.
\begin{definition}
A \emph{patrolling problem} is a tuple $\mathcal{G} = (U,T,\hat{u},E,d)$ where
$U$ is a finite set of \emph{nodes}, $T \subseteq U$
is a set of \emph{targets}, $\hat{u} \in T$ is the
\emph{initial target}, $E \subseteq U \times U$ is
an \emph{environment}, and $d : T \rightarrow \mathbb{N}$ assigns to
each target the associated \emph{attack length}.
The \emph{attack signature} of $\mathcal{G}$ is a
function $\mathit{S} : \mathbb{N} \rightarrow \mathbb{N}_0$ where
$\mathit{S}(k)$ is the cardinality of $\{u \in U \mid d(u) = k\}$.
We use $\mathit{supp}(S)$ to denote the set $\{k \in \mathbb{N} \mid S(k) \neq 0\}$.
We say that $S$ is \emph{well formed} if $k$ divides $S(k)$ for every
$k \in \mathbb{N}$. By $\hat{d}$ we denote $\max_{u\in T} d(u)$.
\end{definition}
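As a small illustration of these notions (a hypothetical instance, not one from the text): suppose $T=U=\{u_1,\dots,u_5\}$ with $d(u_1)=d(u_2)=2$ and $d(u_3)=d(u_4)=d(u_5)=3$. Then
\[
S(2)=2,\qquad S(3)=3,\qquad \mathit{supp}(S)=\{2,3\},\qquad \hat{d}=3,
\]
and $S$ is well formed, since $2$ divides $S(2)$ and $3$ divides $S(3)$.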
\noindent
Let $\mathcal{G} = (U,T,\hat{u},E, d)$ be a patrolling problem. We say that $E$ is \emph{fully connected} if $E = U \times U$. Given a node $u \in U$, we denote by $\mathit{succ}(u)$ the set $\{u'\in
U \mid (u,u')\in E\}$ of all successors of $u$.
A \emph{path} is a finite or infinite word $w\in U^*\cup U^{\omega}$
such that $(w_i,w_{i+1})\in E$ for every $0\leq i< |w|$. A
\emph{history} is a finite non-empty path, and a \emph{run} is an infinite
path. The sets of all histories and runs are denoted by $\mathcal{H}$
and $\mathcal{R}$, respectively. Given a set of histories
$H \subseteq \mathcal{H}$, we use $\mathcal{R}(H)$ to denote the set of all runs
$\omega$ such that $w\preceq \omega$ for some $w\in H$ (when $H = \{h\}$,
we write $\mathcal{R}(h)$ instead of $\mathcal{R}(\{h\})$).
\begin{definition}
A \emph{defender's strategy} is a function
$\sigma : \mathcal{H} \rightarrow \Delta(U)$
such that $\mathit{supp}(\sigma(h))\subseteq \mathit{succ}(\mathit{last}(h))$ for every
$h\in \mathcal{H}$. The set of
all defender's strategies is denoted by $\Sigma$.
An \emph{attacker's strategy} is a function
$\pi : \mathcal{H} \rightarrow T \cup \{\bot\}$ such that
whenever $\pi(h) \neq {\bot}$, then for all $h'\prec
h$ we have that $\pi(h')={\bot}$. We denote by $\Pi$ the set of all
attacker's strategies.
\end{definition}
\noindent
Intuitively, given a history $h$, the defender chooses the next node
randomly according to the distribution $\sigma(h)$, and the attacker
either attacks a node $u\in T$ ($\pi(h)=u$), or waits ($\pi(h)=\bot$).
Note that the attacker can choose to attack only once during a play,
and also note that he cannot randomize. This is because randomization
does not help the attacker to decrease the Stackelberg value,
and hence we can safely adopt this restriction from the very
beginning.
For a given strategy $\sigma \in \Sigma$, we define the set
$\histories(\sigma) \subseteq \mathcal{H}$ of \emph{relevant} histories,
consisting of all $h \in \mathcal{H}$ such that
for all $h' \in \mathcal{H}$ and $u \in U$ where $h'u \preceq h$ we
have that $\sigma(h')(u) > 0$. Note that a defender's
strategy $\sigma$ determines a unique probability space over all
infinite paths initiated in a given $u \in U$ in the standard way
(see, e.g., \cite{Chung:book}), and we use $\mathcal{P}^\sigma_u$ to
denote the associated probability measure.
Given an attacker's strategy $\pi$, we say that a run $w$
\emph{contains a successful attack} if there
exist a finite prefix $h$ of $w$ and a node $u \in T$ such that
$\pi(h) = u$ and $u$ is not among the first $d(u)$ nodes visited by $w$
after the prefix $h$. For every node $u \in U$, we use $\mathcal{D}_u
[\pi]$ to denote the set of all \emph{defended} runs initiated in $u$
that do not contain a successful attack. Hence,
$\mathcal{P}^\sigma_u(\mathcal{D}_u[\pi])$ is the probability of all runs
initiated in~$u$ that are defended when the defender uses the strategy
$\sigma$ and the attacker uses the strategy~$\pi$. We omit the
subscript $u$ in $\mathcal{P}^\sigma_u$ and $\mathcal{D}_u[\pi]$ when $u = \hat{u}$.
\begin{definition}
For all $u \in U$ and $\sigma\in \Sigma$, we denote by
$\mathit{val}_u(\sigma)$ the \emph{value
of $\sigma$} defined by $ \mathit{val}_u(\sigma)=\inf_{\pi \in \Pi}
\mathcal{P}^{\sigma}_u(\mathcal{D}_u[\pi])$. The \emph{Stackelberg value}
of $u$ is defined as $\mathit{val}_u = \sup_{\sigma \in \Sigma}
\mathit{val}_u(\sigma)$. A defender's strategy $\sigma^*$ is
\emph{optimal in $u$} if $\mathit{val}_u(\sigma^*) = \mathit{val}_u$.
The value of $\hat{u}$ is denoted by $\mathit{val}$, and a strategy
which is optimal in $\hat{u}$ is called just \emph{optimal}.
\end{definition}
At some places, we consider strategies obtained by ``forgetting''
some initial prefix of the history. Formally, for all $h \in \mathcal{H}$
and a strategy $\theta$ of the defender/attacker, we define a strategy $\theta_h$
by $\theta_h(u h') = \theta(h h')$ for every $u\in U$ and $h'\in
\mathcal{H}\cup\{\varepsilon\}$. Note that $\sigma_h$ behaves identically for all initial
nodes. We are typically interested in its behavior starting in $\mathit{last}(h)$,
which corresponds to the behavior of $\sigma$ when started at
$h$.
In what follows, we also use the notion of an \emph{immediate attack value}.
Given a defender's strategy $\sigma$, a history $h\in \mathcal{H}(\sigma)$, and a node $u\in U$, we define $att\textrm{-}\val_h(\sigma,u)$ to be the probability of reaching $u$ from $\mathit{last}(h)$ in at least one and at most $d(u)$ steps using the strategy $\sigma_h$. Intuitively, $att\textrm{-}\val_h(\sigma,u)$ is the probability of defending $u$ assuming that the attack on $u$ starts after the history $h$, i.e., $\pi(h)=u$. It is easy to see that
\[
\mathcal{P}^{\sigma}(\mathcal{D}[\pi]) = \sum_{\substack{h\in \mathcal{H}(\sigma)\\ \pi(h)\not = \bot}} \mathcal{P}^{\sigma}(h)\cdot att\textrm{-}\val_h(\sigma,\pi(h))
\]
\section{Introduction}
\label{sec-intro}
A central problem in security and operational research is how to deploy limited
security resources (such as police patrols, security guards, etc.)
to maximize their effectiveness. Clearly, police patrols cannot be everywhere
all the time, security guards cannot check every door every minute, etc., which
raises the crucial question of how best to utilize them.
Game-theoretic approaches to operational security problems based on the Stackelberg model have received much attention
in recent years (see, e.g., \cite{tambe2011}). Informally,
the problem is to find the best possible strategy for a \emph{defender}
who is supervising potentially vulnerable targets (such as airports,
banks, or petrol stations) and aims at detecting possible
\emph{intrusions}. The time needed to complete an intrusion at each
target is finite, and the aim of the defender is to maximize the
probability of discovering an intrusion before it is completed.
Intensive research in this area has led to numerous successful applications (see, e.g., \cite{patrol-aplik-LAX,patrol-aplik-transit_system}).
Due to high demand for practically usable solutions, the main emphasis has been put on inventing methods that can produce working solutions for large-scale instances quickly. In most cases, the problem is
simplified (for example, by restricting the set of defender's strategies
to some manageable subclass), and various tricks are used to avoid non-linear constraints and/or objectives. This approach enables efficient synthesis of strategies that are ``good enough'' for practical purposes (thus, the main engineering goal is achieved), but does not allow for
synthesizing optimal or \mbox{$\varepsilon$-strategies} (for a given $\varepsilon > 0$) in general. Further, the
size of the resulting mathematical program is usually proportional
to the number of targets, which influences the scalability of these
methods. Since developing the basic theory of the underlying game model
has received less attention than designing practically usable
solutions, many fundamental questions (such as the computability of the Stackelberg value, the existence and computability of an \mbox{optimal/$\varepsilon$-optimal} defender's strategy, etc.) are open or have even been answered
incorrectly. In this paper, we provide a solution for some of these problems. As an unexpected payoff of our study, we also obtain a completely new approach to synthesizing defender's strategies in security games with fully connected environment based on compositional reasoning, which avoids
the use of mathematical programming and can be applied to exponentially larger
instances than the currently available methods. A detailed explanation of the achieved results is given below.
In this paper, we consider the adversarial variant of patrolling, where the attacker is assumed to
be quite powerful---he can observe the defender's moves, and he even knows the defender's strategy. However,
he cannot predict how the defender's randomized choices are resolved. Formally,
a \emph{patrolling problem} $\mathcal{G}$ is specified by a finite set $U$
of nodes (possible defender's positions), a set $T \subseteq U$ of targets, an initial node $\hat{u} \in T$ (the initial
position of the defender), an environment $E \subseteq U \times U$ (admissible moves of the defender)
and a function $d : T \rightarrow \mathbb{N}$ which to every target associates the corresponding attack length.
The defender starts at $\hat{u}$ and then moves from node to node consistently with $E$. We assume that
traversing every edge takes precisely one unit of time (longer moves can be modeled by inserting
intermediate nodes).
The defender may choose the next node randomly and independently of
her previous choices. Formally, a \emph{defender's strategy} is a function $\sigma : \mathcal{H} \rightarrow \Delta(U)$ where $\mathcal{H}$ is the set of all finite
non-empty sequences of nodes and $\Delta(U)$ is the set of all
probability distributions over~$U$. We require that $\sigma$ is consistent
with $E$, i.e., the support of $\sigma(h)$ is a subset of nodes that
are immediate successors of the last node of~$h$. Note that each $\sigma$
determines a unique probability space over all \emph{runs}
(infinite paths in $(U,E)$) initiated in $\hat{u}$ in the standard way, and we use
$\mathcal{P}^\sigma$ to denote the associated probability measure.
Depending on the observed walk of the defender, the attacker may choose to attack some
target or wait (we assume that the attacker may attack at most once
during a play). More precisely, an \emph{attacker's strategy} is a function
$\pi : \mathcal{H} \rightarrow T \cup \{\bot\}$ such that whenever $\pi(h)
\neq {\bot}$, then for all proper prefixes $h'$ of $h$ we have that
$\pi(h')={\bot}$. Since the attacker has a complete knowledge about
the current position of the defender, he would \emph{never} attack a
target currently visited by the defender. Still, he may attack this
target immediately after the defender's departure, i.e., long before
the defender arrives at the next node (think of a UAV patrolling
military bases). This assumption is reflected in the definition
of a discovered attack---if the current location of the
defender is $u$ and the attacker attacks a target $v$, the defender
has to visit the node $v$ within the next $d(v)$ time units to discover
this attack, even if $u = v$. The aim of the defender is to maximize
the probability of successfully detected (or not initiated) attacks,
while the attacker aims at the opposite. Given a strategy $\sigma$ of
the defender and a strategy $\pi$ of the attacker, we use
$\mathcal{P}^{\sigma}(\mathcal{D}[\pi])$ to denote the probability of all
infinite paths $w$ initiated in $\hat{u}$ such that either $\pi(h) = {\bot}$ for
every prefix $h$ of $w$ (i.e., no attack is encountered along $w$), or
$\pi(h) = v \in T$ for some prefix $h$ of $w$ and $v$ is among the nodes visited
after $h$ in $w$ in at most $d(v)$ transitions (i.e., $w$ contains a successfully
defended attack). The \emph{value of $\sigma$} is defined by
$\mathit{val}(\sigma) = \inf_{\pi} \, \mathcal{P}^{\sigma}(\mathcal{D}[\pi])$, where
$\pi$ ranges over all strategies of the attacker.
The \emph{Stackelberg value} of $\mathcal{G}$ is defined by
$\mathit{val} \ = \ \sup_{\sigma} \, \mathit{val}(\sigma)$, where $\sigma$ ranges over all strategies of the defender.
A defender's strategy $\sigma^*$ is \emph{$\varepsilon$-optimal} (where $\varepsilon \geq 0$) if
$\mathit{val}(\sigma^*) \geq \mathit{val} - \varepsilon$. A \mbox{$0$-optimal} strategy is called \emph{optimal}.
\begin{remark}
In our definition of the patrolling problem, we assume that all targets are equally important to
the defender (and the attacker). The results \contrref{A} and \contrref{B} presented below remain valid even if we extend
the model by assigning numerical \emph{weights} to nodes and modify the game objective so that
the defender/attacker aims at maximizing/minimizing the expected weight of a discovered attack.
If the weight (importance) of nodes is \emph{different} for each player, the game is no longer zero-sum,
and the solution concept becomes somewhat different (consequently, our results do not apply in this case).
\end{remark}
\begin{figure}
\begin{tikzpicture}[x=2.2cm,y=2.2cm,font=\scriptsize]
\node (v) at (0,-.13) [ran] {$u_2$};
\node (u) at (-.5,-1) [ran] {$u_0$};
\node (s) at (.5,-1) [ran] {$u_1$};
\draw [tran,<->] (v) -- (u);
\draw [tran,<->] (v) -- (s);
\draw [tran,<->] (u) -- (s);
\path (u) edge[loop left] node[left=1pt] {} (u);
\path (s) edge[loop right] node[left=1pt] {} (s);
\path (v) edge[loop above] node[left=1pt] {} (v);
\node[text width=4cm,align=left] at (1.9,-.4)
{ $\hat{u} = u_0$\\
$d(u_0) = d(u_1) = d(u_2) = 2$\\[1ex]
$\sigma^*(h) = \mu_\ell$, \ $\ell = |h| \,\mathit{mod}\, 2$\\[1ex]
$\mu_0(u_0) = \kappa$,\\
$\mu_0(u_1) = 0$,\\
$\mu_0(u_2) = 1- \kappa$\\[1ex]
$\mu_1(u_0) = 0$,\\
$\mu_1(u_1) = \kappa$,\\
$\mu_1(u_2) = 1- \kappa$\\[1ex]
$\kappa = (\!\sqrt{5} - 1)/2$
};
\path (3.8,-.5) coordinate (origin);
\path (162-0:.7) ++(origin) coordinate (P0);
\path (162-1*72:.7) ++(origin) coordinate (P1);
\path (162-2*72:.7) ++(origin) coordinate (P2);
\path (162-3*72:.7) ++(origin) coordinate (P3);
\path (162-4*72:.7) ++(origin) coordinate (P4);
\node (V0) at (P0) [ran] {$v_0$};
\node (V1) at (P1) [ran] {$v_2$};
\node (V2) at (P2) [ran] {$v_1$};
\node (T0) at (P3) [ran] {$t_1$};
\node (T1) at (P4) [ran] {$t_0$};
\draw [tran,<->] (V0) -- (V1);
\draw [tran,<->] (V0) -- (V2);
\draw [tran,<->] (V0) -- (T0);
\draw [tran,<->] (V0) -- (T1);
\draw [tran,<->] (V1) -- (V2);
\draw [tran,<->] (V1) -- (T0);
\draw [tran,<->] (V1) -- (T1);
\draw [tran,<->] (V2) -- (T0);
\draw [tran,<->] (V2) -- (T1);
\draw [tran,<->] (T0) -- (T1);
\path (V0) edge[loop left] node[left=1pt] {} (V0);
\path (V1) edge[loop above] node[left=1pt] {} (V1);
\path (V2) edge[loop right] node[left=1pt] {} (V2);
\path (T0) edge[loop right] node[left=1pt] {} (T0);
\path (T1) edge[loop left] node[left=1pt] {} (T1);
\node[text width=4cm,align=left] at (5.9,-.4)
{ $\hat{u} = v_0$\\
$d(v_0) = d(v_1) = d(v_2) = 3$\\
$d(t_0) = d(t_1) = 2$\\[1ex]
$\sigma^*(h) = \mu_{\ell,\ell'}$\\
$\ell = |h| \,\mathit{mod}\, 3$, $\ell' = |h| \,\mathit{mod}\, 2$\\[1ex]
$\mu_{i,j}$ selects uniformly \\
~~~~ between $v_{i}$ and $t_{j}$
};
\end{tikzpicture}
\caption{Two examples of patrolling problems and the corresponding
optimal defender's strategies.}
\label{fig-example}
\end{figure}
\noindent
\textbf{Two simple examples.} To get some intuition about the patrolling
problem, we start with two simple examples that will also be
used to demonstrate some of our results. Let us first
consider the patrolling problem of Fig.~\ref{fig-example}~(left).
Here, we need to patrol three nodes with the same attack length~$2$ (i.e., $T = U$),
where $u_0$ is the initial node, in a fully connected environment.
Let us try to determine the Stackelberg value and
an optimal strategy of the defender. A naive idea is to pick
a strategy $\sigma$ which always selects each of the
three immediate successors with probability $1/3$.
Consider a strategy
$\pi$ of the attacker such that $\pi(u_0) = u_2$. We have that
$\mathcal{P}^{\sigma}(\mathcal{D}[\pi]) = 1/3 + 2/3 \cdot 1/3 = 5/9$, and one
can easily verify that for \emph{every} attacker's strategy $\pi'$
we have that $\mathcal{P}^{\sigma}(\mathcal{D}[\pi']) \geq 5/9$. Hence,
$\mathit{val} \geq \mathit{val}(\sigma) = 5/9$. However, the defender can do
better. Consider
the strategy $\sigma^*$ defined in Fig.~\ref{fig-example}~(left).
Observe that $\sigma^*$ is \emph{independent} of the currently
visited node; the only relevant information about the history
of a play is whether its \emph{length} is even or odd. If it is
even (odd), then $\sigma^*$ randomly selects between $u_0$ and $u_2$
(or between $u_1$ and $u_2$) where the ratio between the two
probabilities is the \emph{golden ratio}.
One can check that for every attacker's strategy $\pi$ we have that
$\mathcal{P}^{\sigma^*}(\mathcal{D}[\pi]) \geq (\!\sqrt{5}-1)/2$. Hence,
$\mathit{val} \geq (\!\sqrt{5}-1)/2 > 5/9$. In fact, the
strategy $\sigma^*$ is \emph{optimal}, i.e., $\mathit{val} = (\!\sqrt{5}-1)/2$, which is perhaps
unexpected (see also the paragraph ``\emph{Comments on \contrref{D}}'' below).
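These numbers can be checked mechanically. The following Python sketch (our own illustration; the names are not from the paper) encodes a strategy that depends only on the history length modulo its period as a list of per-step distributions over targets, and computes the worst-case detection probability over all attacked targets and attack times:

```python
# Detection probabilities for the triangle example (three targets, d = 2).
# Choices made at different steps are independent, which is exactly the case
# for strategies that depend only on the history length.

KAPPA = (5 ** 0.5 - 1) / 2          # golden-ratio probability from the figure

def detection_prob(strategy, target, start, d):
    """P(target visited within the next d steps, for an attack after a
       history whose length is `start` modulo the strategy's period)."""
    miss = 1.0
    for step in range(start, start + d):
        miss *= 1.0 - strategy[step % len(strategy)].get(target, 0.0)
    return 1.0 - miss

def worst_case(strategy, targets, d):
    """Value of the strategy: the attacker picks the worst target/time."""
    return min(detection_prob(strategy, t, s, d)
               for t in targets for s in range(len(strategy)))

uniform = [{'u0': 1/3, 'u1': 1/3, 'u2': 1/3}]
sigma_star = [{'u0': KAPPA, 'u2': 1 - KAPPA},   # even history lengths
              {'u1': KAPPA, 'u2': 1 - KAPPA}]   # odd history lengths

print(worst_case(uniform, ['u0', 'u1', 'u2'], 2))      # 5/9 ~ 0.5556
print(worst_case(sigma_star, ['u0', 'u1', 'u2'], 2))   # (sqrt(5)-1)/2 ~ 0.6180
```

The first call reproduces the value $5/9$ of the uniform strategy, the second the golden-ratio value of $\sigma^*$ (e.g., an attack on $u_2$ is missed twice with probability $\kappa^2 = 1 - \kappa$).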
Now consider the patrolling problem of Fig.~\ref{fig-example}~(right).
Here we need to patrol five nodes ($T=U$); two of them have the attack
length~$2$ and three of them have the attack length~$3$.
Again, we assume a fully connected environment. If we examine a naive strategy $\sigma$ which
always selects the next node uniformly among all immediate successors, we
obtain that $\mathit{val}(\sigma)= 9/25$. A better strategy $\sigma^*$ for the
defender is shown in Fig.~\ref{fig-example}~(right). The strategy
$\sigma^*$ depends only on the length of the history modulo~$6$, and it
always chooses uniformly between exactly two nodes. It directly follows from
our subsequent contributions (namely \contrref{C}) that $\mathit{val} =
\mathit{val}(\sigma^*)= 1/2$, i.e., $\sigma^*$ is optimal and the Stackelberg value
is equal to $1/2$.
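The same style of computation (again a sketch of ours, restricted to strategies that depend only on the history length) confirms both values for the five-node example:

```python
# Five targets: t0, t1 with attack length 2 and v0, v1, v2 with attack length 3.

def detection_prob(strategy, target, start, d):
    """P(target visited within d steps after an attack at time `start`)."""
    miss = 1.0
    for step in range(start, start + d):
        miss *= 1.0 - strategy[step % len(strategy)].get(target, 0.0)
    return 1.0 - miss

def value(strategy, deadlines):
    """Worst case over all targets (with their attack lengths) and times."""
    return min(detection_prob(strategy, t, s, d)
               for t, d in deadlines.items() for s in range(len(strategy)))

deadlines = {'t0': 2, 't1': 2, 'v0': 3, 'v1': 3, 'v2': 3}
uniform = [{t: 1/5 for t in deadlines}]
# sigma* from the figure: at history length l, pick uniformly between
# v_{l mod 3} and t_{l mod 2}; the period is 6.
sigma_star = [{'v%d' % (l % 3): 0.5, 't%d' % (l % 2): 0.5} for l in range(6)]

print(value(uniform, deadlines))      # 9/25 = 0.36
print(value(sigma_star, deadlines))   # 0.5
```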
\smallskip
\noindent
\textbf{Our contribution.} We start by proving the following results about the general
patrolling problem:
\begin{itemize}
\item[\textbf{A.}] For an arbitrary patrolling problem, there exists an optimal strategy for the defender.
\item[\textbf{B.}] Given a patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$ and
a rational $\varepsilon > 0$, there is a \mbox{finite-memory} \mbox{$\varepsilon$-optimal}
strategy $\sigma$ for the defender computable in time exponential in $\size{\mathcal{G}}$ and polynomial in
$\varepsilon^{-1}$ (here, $\size{\mathcal{G}}$ is the encoding size of $\mathcal{G}$, where
the attack lengths are encoded in unary). Further, $\mathit{val}(\sigma)$ is rational and
can also be computed in exponential time, i.e., we can also approximate $\mathit{val}$ up to a given $\varepsilon > 0$
in exponential time. We also observe that $\mathit{val}$ cannot be approximated up to an error smaller than
$|U|^{-1}$ in polynomial time unless $\textbf{P} = \textbf{NP}$.
\end{itemize}
\noindent
\emph{Comments on \contrref{A}.} The existence of optimal strategies for
patrolling problems (and their variants) has been claimed in previous works
(see, e.g., \cite{Basilico2009,Basilico2012}) by arguing in
the following way. For each
$j \in \mathbb{N}$, let $\Sigma^j$ be the class of all defender's strategies
$\sigma$ such that $\sigma(h)$ depends only on the last~$j$ nodes of~$h$.
If we restrict the range of $\sigma$ to the strategies of $\Sigma^j$ in
the definition of Stackelberg value, we obtain an
approximated value, denoted by $\mathit{val}^j$.
Obviously, $\mathit{val}^{j+1} \geq \mathit{val}^j$ for every $j \in \mathbb{N}$.
By adapting the results of \cite{conitzer2006}, it has been shown
in \cite{Basilico2012} that for every $j \in \mathbb{N}$ one can compute a
strategy $\sigma \in \Sigma^j$ which achieves the outcome $\mathit{val}^j$
or better against every attacker's strategy. In
\cite{Basilico2009,Basilico2012}, it has also been claimed that
$\mathit{val} = \mathit{val}^j$ for some sufficiently large~$j$ (without providing any upper bound).
The argument is
based on applying general results about strategic-form games, but
a full proof is omitted. Using the techniques of Section~\ref{sec-well-formed},
we prove that this claim is \emph{incorrect}, even for the
simple patrolling problem of Fig.~\ref{fig-example}~(right) where
the defender has \emph{no} optimal strategy in
$\bigcup_{j=1}^\infty \Sigma^j$.
In our proof of~\contrref{A}, we take
an infinite sequence of strategies $\sigma_1,\sigma_2,\ldots$
such that $\lim_{n \rightarrow \infty} \mathit{val}(\sigma_n) = \mathit{val}$
and ``extract'' an optimal strategy out of it.
\smallskip
\noindent
\emph{Comments on \contrref{B}.} Our exponential-time algorithm for constructing
an \mbox{$\varepsilon$-optimal} strategy is based on combining two main ideas.
First, we show that the Stackelberg value of a given game stays the same
when the initial node is changed. This implies that small perturbations in
probability distributions employed by an optimal strategy cause only
a small change in the strategy value. Hence, we can compute a suitable discretization
scale and safely restrict the range of considered strategies to the discretized
probability distributions. Let $\hat{d} = \max_{u\in U} \{d(u)\}$. The next important observation
is that the $\hat{d}$-step behaviour of every strategy (after some finite history) can be fully
characterized by a real-valued vector with exponentially many components, where each component corresponds to a probability of visiting some vertex in at most $k \leq \hat{d}$ transitions. Due to the previous discretization
step, we can safely restrict the range of these vectors to finitely (exponentially)
many values. It follows that if there is \emph{some} \mbox{$\varepsilon$-optimal} strategy,
then there is also an \mbox{$\varepsilon$-optimal} strategy whose $\hat{d}$-step behaviour (after every
finite history) can be characterized by one of these exponentially many vectors, and we show how to
check the existence of such a strategy in exponential time (this is perhaps the most difficult part
of the argument).
The lower complexity bound is trivial. Given a patrolling problem with $d(u)=|U|=k$ for all $u\in U$, we have that $\mathit{val}=1$ iff the environment contains a directed cycle
through all the nodes (i.e., it is a \emph{Hamiltonian digraph}),
which is \textbf{NP}-hard to decide.
If the game is a negative instance, then for every strategy of the defender, the attacker can clearly launch an
attack at the very beginning of a play with probability of success at least~$1/k$. From this we immediately obtain the second part of~\contrref{B}.
Although it has recently been shown in~\cite{DBLP:conf/fossacs/HoO15} that the problem whether $\mathit{val} =1$
for a given patrolling problem is \textbf{PSPACE}-complete,
the construction of~\cite{DBLP:conf/fossacs/HoO15} rules out (for fundamental reasons) only,
unless $\textbf{P} = \textbf{PSPACE}$,
the existence of an $\varepsilon$-optimal strategy for the defender
with $\varepsilon\leq c\cdot\exp(-|U|)$ for some~$c>0$.
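On tiny instances, the Hamiltonicity condition behind this lower bound can be tested by brute force. The following sketch (ours, exponential in $|U|$, for illustration only) checks whether a digraph has a directed cycle through all nodes:

```python
from itertools import permutations

def has_directed_hamiltonian_cycle(nodes, edges):
    """Exponential brute force: is there a directed cycle visiting every
       node exactly once?  (For d(u) = |U| at every node, val = 1 iff
       such a cycle exists.)"""
    nodes = list(nodes)
    first = nodes[0]
    for perm in permutations(nodes[1:]):
        cycle = [first, *perm, first]
        if all((a, b) in edges for a, b in zip(cycle, cycle[1:])):
            return True
    return False

triangle = {(0, 1), (1, 2), (2, 0)}
no_cycle = {(0, 1), (1, 0), (0, 2), (2, 0)}   # node 0 separates 1 and 2
print(has_directed_hamiltonian_cycle([0, 1, 2], triangle))  # True
print(has_directed_hamiltonian_cycle([0, 1, 2], no_cycle))  # False
```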
\smallskip
Since solving general patrolling problems is computationally hard, we continue our study by restricting ourselves to \emph{fully connected} environments, where $E = U \times U$. Observe that the defender has no reason to visit non-target nodes in fully connected environments, and hence we can further safely assume that $T = U$. For example, think of a surveillance system equipped with several cameras installed in front of various doors, where the footage of the cameras is shown in turns on a single screen (for some small constant amount of time) watched by a human guard. The time needed to break (open and close) different doors can be different. Then, the nodes/targets of the associated
patrolling problem correspond to the cameras, the environment is fully connected (assuming one can switch between the cameras freely), and the transition time between two nodes is the same (and it can be normalized to $1$).
Under these assumptions, a patrolling problem is fully specified by its \emph{signature}, i.e., a function
$S : \mathbb{N} \rightarrow \mathbb{N}_0$ which for a given $k \in \mathbb{N}$ returns the number of all $u\in T$
with $d(u) = k$. An important subclass of signatures are \emph{well-formed} signatures, where $k$ divides
$S(k)$ for all $k \in \mathbb{N}$. For example, the signature of the patrolling problem of Fig.~\ref{fig-example}~(right) is well-formed, while the
signature of the patrolling problem of Fig.~\ref{fig-example}~(left)
is not. We assume that signatures are represented using \emph{binary}
numbers, i.e., the encoding size of $S$, denoted by $\size{S}$, can be \emph{exponentially smaller} than the number of nodes.
Before formulating our results about the patrolling problem in a fully connected environment, we need to explain one important \emph{conceptual} contribution of this paper, which is the notion of a \emph{modular} strategy and the associated \emph{compositionality} principle. A defender's strategy $\sigma$ is \emph{modular} if
$\sigma(h)$ depends only on the length of $h$ modulo some constant~$c$
(in particular, note that the current defender's position is irrelevant). For example, the two strategies of Fig.~\ref{fig-example} are modular (the constant $c$ is equal
to $2$ and $6$ for the strategy on the left and on the right, respectively). Let $\mathcal{G}$ be a patrolling problem with a set of nodes~$U$. For every
$U' \subseteq U$, let $\mathcal{G}[U']$ be the patrolling problem obtained
from $\mathcal{G}$ by restricting the set of nodes to $U'$ and the set of transitions to $E \cap U' {\times} U'$ (note that this
makes sense even if the environment of $\mathcal{G}$ is not fully connected). Let $U_1,\ldots, U_k \subseteq U$, and let
$\sigma_1,\ldots,\sigma_k$ be modular defender's strategies in
$\mathcal{G}[U_1],\ldots,\mathcal{G}[U_k]$, respectively. For every probability
distribution $\nu$ over $\{1,\ldots,k\}$, we can construct the
\emph{$\nu$-composition} of $\sigma_1,\ldots,\sigma_k$, which is a modular
defender's strategy $\sigma$ in $\mathcal{G}[U_1\cup \cdots \cup U_k]$
defined by $\sigma(h) = \nu_1\cdot\sigma_1(h) + \cdots + \nu_k\cdot\sigma_k(h)$.
Note that $\sigma$ is a correctly defined defender's strategy for
$\mathcal{G}[U_1\cup \cdots \cup U_k]$ only if the environment
of $\mathcal{G}$ contains all of the required transitions between the nodes
of $U_1,\ldots,U_k$ (if the environment of $\mathcal{G}$ is fully connected,
this is not an issue). It follows immediately that
$\mathit{val}(\sigma) \geq \min \{\nu_i\cdot \mathit{val}(\sigma_i) \mid 1 \leq i \leq k \}$ (as we shall see, this inequality can be \emph{strict}).
Thus, one can construct a defender's strategy for a given
patrolling problem $\mathcal{G}$ by splitting the set of nodes into two or
more subsets (not necessarily disjoint), solving the smaller instances recursively, and then
computing a suitable convex combination of the solutions. As we shall see
momentarily, this approach leads to an efficient algorithm capable
of computing optimal (or suboptimal) strategies for \emph{very} large patrolling problems in a couple of seconds.
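For modular strategies represented as lists of per-step distributions, the $\nu$-composition can be sketched directly (our encoding, not the paper's notation; the weights are assumed to sum to one):

```python
from math import lcm  # Python 3.9+

def compose(strategies, weights):
    """nu-composition of modular strategies:  sigma(h) = sum_i nu_i * sigma_i(h).
       Each strategy is a list of per-step distributions (dicts); the composed
       strategy has period lcm of the component periods."""
    period = lcm(*(len(s) for s in strategies))
    out = []
    for step in range(period):
        mix = {}
        for s, w in zip(strategies, weights):
            for node, p in s[step % len(s)].items():
                mix[node] = mix.get(node, 0.0) + w * p
        out.append(mix)
    return out

# A 2-loop over {t0, t1} and a 3-loop over {v0, v1, v2}, combined with
# nu = (1/2, 1/2):
loop_t = [{'t0': 1.0}, {'t1': 1.0}]
loop_v = [{'v0': 1.0}, {'v1': 1.0}, {'v2': 1.0}]
sigma = compose([loop_t, loop_v], [0.5, 0.5])
print(len(sigma))   # period 6
print(sigma[0])     # {'t0': 0.5, 'v0': 0.5}
```

For the right-hand example of Fig.~\ref{fig-example}, this composition of the two simple loops yields exactly the strategy $\sigma^*$ shown there.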
Now we can explain our main results about the patrolling problem in a fully connected environment.
\begin{itemize}
\item[\textbf{C.}] Given a patrolling problem $\mathcal{G}$ where $T=U$, we have that \mbox{$\mathit{val} \leq \left(\sum_{k \in \mathit{supp}(S)} \frac{S(k)}{k}\right)^{-1}$} where $S$ is the signature of $\mathcal{G}$ and $\mathit{supp}(S)$ is the set of all $k \in \mathbb{N}$ such that
$S(k) > 0$. This bound is valid for an arbitrary environment~$E$.
\item[\textbf{D.}] There is an algorithm which inputs a signature $S$ of a patrolling problem $\mathcal{G}$ with a fully connected environment (where $T=U$) and outputs a pair $(\theta,V)$ such that the following conditions are satisfied:
\begin{itemize}
\item The running time of the algorithm is \emph{polynomial in $\size{S}$}.
\item $\theta$ is a symbolic representation of a modular strategy for $\mathcal{G}$, and $V$ is a symbolic representation
of $\mathit{val}(\theta)$. Both $\theta$ and $V$ are parameterized by variables $\{p_1,\ldots,p_k\}$, where $k$ is bounded by a polynomial in $\size{S}$. The values of $\{p_1,\ldots,p_k\}$ correspond to the unique solution (in $[0,1]^k$)
of a recursive system of polynomial equations that is also constructed by the algorithm. The number of variables $k$
actually depends on the ``Euclid complexity'' of $S$ and can be constant (or even zero) for arbitrarily large~$S$.
\item If the signature $S$ is well-formed, then $k=0$ and the strategy $\theta$ is optimal.
Since $k=0$, no extra computational time is needed to calculate/approximate the parameters, and hence $\theta$ is ``fully synthesized'' in time polynomial in $\size{S}$.
\item If the signature $S$ is not well-formed, then the
strategy $\theta$ is a $\nu$-composition of simpler modular strategies and the variables defined via the
system of polynomial equations correspond to the weights used to combine these simpler strategies together.
Further, we have that $\mathit{val}_d < \mathit{val}(\theta) < \mathit{val}_u$, where
$\mathit{val}_d$ and $\mathit{val}_u$ are the Stackelberg values of the patrolling
problems with signatures $S\!_d$ and $S\!_u$ defined by
$S\!_d(k) = k \cdot \lfloor \frac{S(k)}{k} \rfloor$ and
$S\!_u(k) = k \cdot \lceil \frac{S(k)}{k} \rceil$, respectively (note that both $S\!_d$ and $S\!_u$ are well-formed).
\end{itemize}
\item[\textbf{E.}] Given a patrolling problem $\mathcal{G}$
with $T = U$ and a well-formed
attack signature $S$, we say that the environment
$E$ of $\mathcal{G}$ is \emph{sufficiently connected} if $\mathit{val}$ is equal to the value
of $\mathcal{G}$ in the \emph{fully connected} environment. The problem whether $E$ is sufficiently connected is
\mbox{\textbf{NP}-complete}. Further,
this problem is \mbox{\textbf{NP}-complete} even for
a subclass of patrolling problems such that $\mathit{supp}(S) = \{k\}$, where
$k \geq 3$ is a fixed constant. For a subclass of patrolling
problems where $\mathit{supp}(S) = \{2\}$, the problem is solvable in
polynomial time.
\end{itemize}
\noindent
\emph{Comments on \contrref{C}.} Note that the presented upper bound on
$\mathit{val}$ does not depend on~$E$. An obvious question is whether this bound is
\emph{tight}. That is, given a function $S : \mathbb{N} \rightarrow \mathbb{N}_0$ such
that $\mathit{supp}(S)$ is finite, we ask whether there exists a patrolling problem
$\mathcal{G}$ with $T = U$ such that the signature
of $\mathcal{G}$ is $S$ and
\mbox{$\mathit{val} = 1/(\sum_{k \in \mathit{supp}(S)} S(k)/k)$}. It follows from our results that
the answer to this question is \emph{yes} if $S$ is well-formed. This means that the bound can potentially be lowered
(only) for those $S$ that are not well-formed. As an example, consider the
patrolling problem of Fig.~\ref{fig-example}~(left). Here $\mathit{supp}(S) = \{2\}$
and $S(2) = 3$, and hence we obtain $\mathit{val} \leq 2/3$. Since $\mathit{val} =
(\!\sqrt{5}-1)/2 < 2/3$, the bound is not tight. For the patrolling problem
of Fig.~\ref{fig-example}~(right) we have that $\mathit{supp}(S) = \{2,3\}$, $S(2) =
2$, and $S(3) = 3$, which gives an upper bound $(2/2 + 3/3)^{-1} =
1/2$. Since $\mathit{val} = 1/2$, this bound is tight. \smallskip
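The bound itself is a one-line computation; as a sanity check (our code), it reproduces both numbers above from the signatures:

```python
def upper_bound(signature):
    """Upper bound on val from (C): 1 / sum_k S(k)/k, with the signature
       given as a dict {attack length k: number of targets S(k)}."""
    return 1.0 / sum(count / k for k, count in signature.items())

print(upper_bound({2: 3}))        # 2/3  (left example; actual val ~ 0.618)
print(upper_bound({2: 2, 3: 3}))  # 1/2  (right example; the bound is tight)
```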
\noindent
\emph{Comments on \contrref{D}.}
The strategy $\theta$ is obtained by applying the ``decomposition''
technique described earlier. Since we intend to produce a strategy synthesis algorithm whose
running time is polynomial in $\size{S}$, we also need to design a special
language allowing for compact representation of modular strategies in space polynomial
in $\size{S}$ (see Section~\ref{sec-modular-strategies}). First, we split the nodes of $\mathcal{G}$ into
disjoint subsets according to their attack length. Then, we show how to compute a modular strategy for a set of $n$ nodes with the same attack
length $d$. Here, we use a decomposition technique which resembles Euclid's gcd algorithm. First we check whether $d$ divides $n$. If so, we split the $n$ nodes into pairwise disjoint sets $U_0,\ldots,U_{d-1}$ so that $|U_i| = n/d$ for every
$0\leq i < d$, and define a modular strategy $\sigma$ such that
$\sigma(h)$ selects uniformly among the elements of $U_i$, where
$i = |h| \,\mathit{mod}\, d$. Observe that
$\mathit{val}(\sigma) = d/n$, which is optimal by \contrref{C}. If $d$ does not
divide $n$ and $n = k\cdot d + c$ where $1 \leq c < d$, then we split
the $n$ nodes into two disjoint subsets $U_1$ and $U_2$, where
$U_1$ contains $k \cdot d$ nodes and $U_2$ contains $c$ nodes.
A strategy $\sigma_1$ for $U_1$ is constructed as above, and we need
to process the set $U_2$. If $c$ divides $d$, the strategy $\sigma_2$
for $U_2$ is a simple loop over the nodes of $U_2$.
A closer look reveals that an appropriate distribution
$\nu = (\nu_1,\nu_2)$ for combining $\sigma_1$ and $\sigma_2$ should
satisfy the equation
$\nu_1 \cdot \mathit{val}(\sigma_1) = 1 - \nu_1^{d/c}$ which says that the nodes of
$U_1$ and $U_2$ are defended equally well. If
$c$ does not divide $d$ and $d = j \cdot c + t$, where $1 \leq t < c$,
then the strategy $\sigma_2$ for $U_2$ spends the first $j\cdot c$ steps by
performing the simple loop over the nodes of $U_2$, and the next $t$ steps by behaving exactly as
the strategy constructed for $|U_2|$ nodes with attack length $t$ (which is constructed
recursively). Then, $\sigma_2$ just keeps repeating its first $d$~steps. Again, we can set up an equation that must be satisfied by an appropriate distribution which combines $\sigma_1$ and $\sigma_2$ so that all targets are protected
equally well. This procedure eventually produces a modular strategy for defending $n$ nodes with
the same attack length~$d$. If $d$ divides $n$, then this strategy is provably optimal. In fact, we conjecture
that the constructed strategy is \emph{always} optimal, but we leave this hypothesis open (recently,
it has been shown by Lamser \cite{Lamser:BCthesis} that the algorithm produces an optimal strategy for all odd $n$ and
$d=2$). Further, let us note that the number of variables/equations in the constructed system of polynomial equations
is bounded by a polynomial in $\size{S}$, but the size of $S$
is \emph{not} a good measure for identifying hard instances.
What really matters is the number of ``swaps'' in Euclid's algorithm applied to $n$ and $d$; see
Section~\ref{sec-modular-strategies} for further comments.
After processing all subsets of nodes with the same attack length, we
combine the resulting strategies using an appropriate distribution.
The details are given in Section~\ref{sec-modular-strategies}.
As an example, consider the patrolling problems of Fig.~\ref{fig-example}.
In the first case, we have $3$~nodes with the same attack length~$2$.
Since $2$ does not divide $3$, we split the set of nodes into
$U_1 = \{u_0,u_1\}$ and $U_2=\{u_2\}$. The strategy $\sigma_1$ for $U_1$
selects the node $u_1$ or $u_0$ with probability $1$, depending on whether
the length of the history is odd or even, respectively. Note that
$\mathit{val}(\sigma_1) = 1$. For the set $U_2$, we have that $|U_2|$ divides
$2$, and so the strategy $\sigma_2$ is a self-loop on $u_2$. The appropriate
distribution $\nu = (\nu_1,\nu_2)$ for combining $\sigma_1$ and $\sigma_2$ should satisfy the equation $\nu_1 = 1 - \nu_1^{2}$. Thus, we obtain that
$\nu_1 = \kappa = (\!\sqrt{5} -1)/2$, which yields the strategy of Fig.~\ref{fig-example}~(left). The strategy of Fig.~\ref{fig-example}~(right)
is obtained by first splitting the set of nodes into $U_1 = \{t_0,t_1\}$ and
$U_2 = \{v_0,v_1,v_2\}$ according to their attack length, solving these
subproblems (note that the solution for $U_i$ is a strategy which loops
over the vertices of $U_i$), and then combining them with $\nu = (0.5, 0.5)$.
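The first two levels of this recursion can be sketched as follows (our code, not the paper's synthesis algorithm; the deeper case, where $c$ does not divide $d$, is omitted, and the unique weight $\nu_1 \in (0,1)$ is found by bisection, since the defining function is strictly increasing):

```python
def _solve(f, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection for the unique root of an increasing function on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return (lo + hi) / 2

def group_value(n, d):
    """Value of the Euclid-style strategy for n nodes of attack length d;
       only the first two cases (d | n, and (n mod d) | d) are sketched."""
    if n <= d:
        return 1.0                  # a simple loop revisits every node in time
    if n % d == 0:
        return d / n                # optimal by the upper bound (C)
    k, c = divmod(n, d)
    if d % c:
        raise NotImplementedError("deeper recursion of the paper's algorithm")
    v1 = 1.0 / k                    # value d/(k*d) of sigma_1 on the k*d nodes
    # nu_1 solves  nu_1 * v1 = 1 - nu_1**(d/c): both parts equally defended
    nu1 = _solve(lambda x: x * v1 - (1.0 - x ** (d // c)))
    return nu1 * v1

print(group_value(3, 2))   # (sqrt(5)-1)/2 ~ 0.618  (left example)
print(group_value(4, 2))   # 0.5
```

Groups with different attack lengths are then combined by equalizing the products $\nu_i \cdot \mathit{val}(\sigma_i)$; e.g., in the right-hand example both groups have value $1$, so $\nu = (1/2, 1/2)$ gives the overall value $1/2$.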
\smallskip
\noindent
\emph{Comments on \contrref{E}.} We show that for every patrolling
problem $\mathcal{G} = (U,T,\hat{u},E,d)$ with $T=U$ and a well-formed signature~$S$, there
exists a \emph{characteristic digraph} $M_S$ depending only on $S$
and computable in polynomial time, such that $E$ is sufficiently connected \emph{if, and only if,}
$(U,E)$ contains a subdigraph isomorphic (respecting the attack lengths) to $M_S$. From this we immediately obtain that the problem whether a given $E$ is sufficiently connected
is in~\textbf{NP}, and we also provide the matching lower bound.
Note that the characteristic digraph can be used to \emph{synthesize}
a minimal sufficiently connected environment for solving a given
patrolling problem.
\smallskip
\noindent
\textbf{Related work.}
Two player zero-sum stochastic games with both perfect and imperfect
information have been studied very intensively in recent years
(see, e.g., \cite{ChatterjeeH12,HansenMZ13,HansenKLMT11}),
also for games with infinite state-space
\cite{BBKO:BPA-games-reachability-IC,EY:RSCG,EtessamiWY08,AbdullaCMS13}.
Patrolling games have so far been considered mainly in the context of
operations research. Here, the emphasis is usually put on finding methods
for synthesizing a sufficiently good defender's strategy, and
the basic theoretical questions related to the underlying formal model
are usually not studied in greater detail. The problem of finding
locally optimal strategies for robotic patrolling units has been studied
either in restricted environments (e.g., on circles in
\cite{AgmonKK08,AgmonKK08-2}), or fully-connected environments with weighted
preference on the targets \cite{Basilico2009,Basilico2009-2}.
Some novel aspects of the problem, such as variants with moving
targets \cite{bosansky2011aamas,Fang2013}, multiple patrolling
units \cite{Basilico2010}, or movement
of the attacker on the graph \cite{Basilico2009-2} and reaction
to alarms \cite{MunozdeCote2013} have also been considered in recent
works.
\iffalse
******
There are a few exceptions -- Agmon
et al. \cite{AgmonKK08} considers the strategies that take direction
of the patrolling unit on a circle into consideration (i.e., which way
the robot is faced), and Basilico et al. \cite{Basilico2009} discuss
the generic fixed-memory strategies, however, only memoryless
strategies are used for the experimental evaluation. Finally, Bosansky
et al. \cite{bosansky2012-aaaiss} try to unify the formalisms
providing a non-linear mathematical program for solving the patrolling
games on arbitrary weighted graphs with arbitrary states.
Let us note that the technical
setting of \cite{ddd} is slightly different from ours (in particular,
if the attacker attacks a node currently visited by the defender,
then the defender is \emph{not} obliged to revisit this node in the
next $d$ moves). Still, the underlying ideas and the results
themselves are valid also in our case.
Our translation to safety games (see~A.~above) is based on the
following idea: we let the defender to choose his ``plan'' for the
next $d$ steps, and then we let the attacker to decide whether or
not he wants to attack. If he decides not to attack, we perform
one move of previously fixed plan and let the defender to ``prolong''
his plan by one step. If the attacker decides to attack some
node, we enter a special node with only two successors
$\mathit{succ}$ and $\mathit{fail}$ where the game ends. The node $\mathit{succ}$
is selected randomly with the probability that the attack is
successful w.r.t.{} the fixed plan of the defender, and the node
$\mathit{fail}$ in entered with the remaining probability. The goal of
the defender is to maximize the probability of avoiding the visit
to the $\mathit{succ}$ node. The constructed game has uncountably many
nodes, because there are uncountably many ``plans'' for the
defender. Still, we can show that the defender has an optimal
strategy in this safety game and this strategy can be transferred back
to the original patrolling game. This construction does \emph{not}
give any upper bound on the~$k$ such that $\mathit{val}^\infty(v) = \mathit{val}^k(v)$,
and it does not even imply that such a $k$~exists. The underlying
idea is rather generic and can be applied to various modifications
and extensions of the considered model.
At the core of our results, there is an important technical observation
about the structure of optimal strategies, which can be intuitively
formulated as follows: Suppose that $\sigma$ is an optimal strategy of
the defender. For every
*************************************************************
The patrolling problem can be modeled as a two-player partially
observable zero-sum stochastic game between the defender and the
attacker. The topology of the environment is given as a finite
directed graph $G = (V,E,T)$ where the set of nodes $V$ corresponds
to possible locations of the defender, the edges of $E$ define
possible moves of the defender (we require that every node has
at least one outgoing edge), and $T \subseteq V$ are the
targets. We assume that traversing each edge takes one unit of time
(note that a ``long distance'' between two nodes can be modeled
by splitting the corresponding edge into several edges and inserting
auxiliary non-target nodes).
A \emph{defender's strategy} is a function
$\sigma : \mathcal{H} \rightarrow \Delta(V)$ where $\mathcal{H}$
are the \emph{histories} (i.e., finite paths in $G$) and $\Delta(V)$
are the probability distributions over~$V$. Hence, the defender may
choose the next node randomly and independently of his previous
choices.
Modelling the attacker's behaviour is more
subtle. In this paper, we consider the \emph{adversarial}
case when the attacker knows the strategy of the defender and can observe
his moves along~$G$ (consider, e.g., a UAV patrolling military bases;
its moves are observable and the implemented controller
might have been reverse-engineered or betrayed). Depending on the
observed walk of the defender, the attacker may choose to
attack some target or wait, i.e., his strategy is a function
$\pi : \mathcal{H} \rightarrow T \cup \{\mathit{wait}\}$. We assume that
the attacker may attack at most once, i.e., if $\pi(h) \neq \mathit{wait}$,
then $\pi(h') = \mathit{wait}$ for all proper prefixes $h'$ of $h$. Note that
the attacker cannot randomize. This is because
randomization does not help the attacker to decrease the
Stackelberg equilibrium (see below), and hence we can safely
adopt this restriction from the very beginning.
For the sake of clarity, we formulate and prove our results in
the simplified setting where all targets are equally important to both players
and the attacker needs $d \geq 1$ time units to complete his attack
at every target. However, our techniques can easily be extended
to more general models, which is explained at appropriate places
in the main body of this paper.
Since the attacker has a complete knowledge about the current position
of the defender, he would \emph{never} attack a target currently visited
by the defender. Still, he may attack this target \emph{immediately after}
the defender's departure. Hence, if $\pi(h) = u$ and the history $h$
ends in $u$, the defender needs to \emph{revisit} the node $u$ in
at most~$d$ moves to discover this attack. The aim of the defender
is to maximize the probability of successfully detected
(or not initiated) attacks, while the attacker aims at the opposite.
Given an attacker's strategy $\pi$, we say that an infinite path
$w$ in $G$ \emph{contains a successful attack} if there exist a finite prefix
$h$ of $w$ and a target $u$ such that $\pi(h) = u$ and the node
$u$ is not among the first $d$ nodes visited by $w$ after the prefix
$h$ (i.e., if $h\hat{h}$ is the prefix of $w$ such that the length of $\hat{h}$
is $d$, then $u$ does not appear in $\hat{h}$). Further,
we define a function $\mathit{payoff}^{\pi}$ which to every
infinite path $w$ in $G$ returns either $0$ or $1$, depending
on whether $w$ contains a successful attack or not, respectively.
Note that a defender's strategy $\sigma$ determines a unique probability
space over all infinite paths initiated in a given initial node
$v \in V$ (see, e.g., \cite{KS:book}), and the function $\mathit{payoff}^{\pi}$
then becomes a random variable over this probability space.
Hence, for each pair of strategies $(\sigma,\pi)$
and $v \in V$ there is the \emph{expected value} of $\mathit{payoff}^{\pi}$,
denoted by $\mathbb{E}^{\sigma,\pi}_v [\mathit{payoff}]$, which is
actually equal to the \emph{probability} of all infinite paths initiated
in~$v$ not containing a successful attack (later we shall also discuss
more general payoff functions which depend on the attacked target;
thus, we can model targets with different importance).
The \emph{Stackelberg value} (or \emph{equilibrium}) of a given node
$v \in V$, denoted by $\mathit{val}(v)$, is defined by
\[
\mathit{val}(v) \ = \ \sup_{\sigma} \, \inf_{\pi} \, \mathbb{E}^{\sigma,\pi}_v [\mathit{payoff}] \,.
\]
A defender's strategy $\hat{\sigma}$ is \emph{$\varepsilon$-optimal}, where
$\varepsilon \geq 0$, if
$\inf_{\pi} \mathbb{E}^{\hat{\sigma},\pi}_v [\mathit{payoff}] \geq \mathit{val}(v) - \varepsilon$
for every $v \in V$. $0$-optimal strategies are called \emph{optimal}.
Before explaining the main results of this paper, let us demonstrate
the introduced notions on a simple example, which also reveals some
subtle differences from the previous works. Consider the
graph of Fig.~\ref{fig:triangle}~(left), where $d = 2$ and
$v$ is the initial node. Further, let $\pi$ be a strategy
such that $\pi(v) = v$, i.e., the attacker tries to attack $v$
immediately after the departure of the defender from his starting
location. Hence, the defender needs to come back to $v$ in at most two
moves to discover this attack.
Some other models
(see, e.g., \cite{aaa}) assume that such an attack is discovered
immediately, which is appropriate in somewhat different situations;
the example of Fig.~\ref{fig:triangle} (where all nodes are targets and
$d=2$) also illustrates the difference between the two approaches.
For a given initial node $v$, a defender's strategy $\sigma$
can be drawn as an infinite tree $T_v^{\sigma}$ rooted by $v$ whose
nodes are the histories initiated in $v$, and the edges (decorated with
the associated probabilities) are determined in the natural way.
An attacker's strategy $\pi$ can be depicted by decorating
the nodes in this tree by the elements of $T \cup \{\mathit{wait}\}$ that
correspond to the attacker's decisions. Thus, we obtain the tree
$T_v^{\sigma,\pi}$. As an example, consider the simple
graph with three nodes of Fig.~\ref{fig-triangle}~(left).
Every infinite path $w$ in $T_v^{\sigma,\pi}$ is assigned its payoff as defined above.
Many game-theoretic models have recently been successfully applied in
practice; the most distinctive practical applications include a number
of examples from the security domain \cite{tambe2011}, auctions and
mechanism design \cite{XXX}, and challenging games such as Poker
\cite{XXX}. However, it is often the case that, in order to overcome
the computational complexity and to achieve scalable algorithms that
solve reasonably large scenarios, a number of assumptions and
simplifications are made. Moreover, the theoretical impact of these
simplifications on the quality of the solutions obtained from the simplified
games is often unknown, and only experimental performance is provided.
This paper aims to analyze simplifications made for a class of
patrolling games \cite{XXX_hrozne_moc_citacii_z_patrollingu} that
model a number of scenarios from the security domain corresponding to the
problem of protecting an area (or targets placed in an environment)
against an intrusion (or an attack on a target). This
problem can be modeled as a two-player zero-sum partially
observable stochastic game between \emph{the defender} (or \emph{the
patroller}) and \emph{the attacker}, who aims to attack some of the
targets or to enter the protected area. The game is played on a graph
and evolves in turns. In each turn, both players act simultaneously:
the defender moves according to her capabilities to an adjacent node
in the graph, and the attacker can either choose to attack some target
(the attack takes a given period of time and cannot be suspended by
the attacker) or wait. The goal of the defender is to capture
the attacker during the attack; the attacker aims at successfully
finishing the attack. The solution of a patrolling game is a
description of the movement of the patrolling unit(s) in the area
that minimizes the probability that some target is left unvisited
for longer than a given period of time, assuming
that the attacker knows the strategy of the defender and can observe
the defender's current position on the graph.
\subsection{Related Work}
Patrolling games have been widely studied in recent years. The focus was
primarily on finding good strategies for robotic patrolling units,
either on restricted graphs (e.g., on circles in
\cite{AgmonKK08,AgmonKK08-2}) or on arbitrary graphs with weighted
preference on the targets \cite{Basilico2009,Basilico2009-2}, or
on novel aspects of the problem, such as variants with
moving targets \cite{bosansky2011aamas,Fang2013}, multiple patrolling
units \cite{Basilico2010}, or movement of the attacker on the graph
\cite{XXX}. In most of the existing literature it is assumed that the defender
uses a memoryless (Markov) strategy that depends solely on the
current position of the defender. There are a few exceptions: Agmon
et al. \cite{AgmonKK08} consider strategies that take the direction
of the patrolling unit on the circle into account (i.e., which way
the robot is facing), and Basilico et al. \cite{Basilico2009} discuss
generic fixed-memory strategies; however, only memoryless
strategies are used for their experimental evaluation. Finally, Bosansky
et al. \cite{bosansky2012-aaaiss} try to unify the formalisms,
providing a non-linear mathematical program for solving patrolling
games on arbitrary weighted graphs with arbitrary states.
However, none of the existing work tackles the problem of overall optimality. More specifically, what is the value of the game, and what are the optimal strategies, if we allow the defender to use strategies with unlimited memory? This paper aims to answer this question following several theoretical steps that constitute the main contributions: (1) we provide a formal proof that a patrolling game with unlimited-memory strategies can be transformed into a perfect-information stochastic game, (2) we show how to construct this stochastic game and prove that an optimal strategy exists in this game, and (3) we show that there exists a finite-state stochastic game whose solution yields an $\epsilon$-optimal solution of the patrolling game with unlimited-memory strategies. Finally, (4) we demonstrate our approach by comparing the value of the patrolling game with unlimited-memory strategies to state-of-the-art techniques that use memoryless strategies and show that ... XXX
\fi
\subsection{The existence of an optimal defender's strategy}
We start by proving that there exists an optimal strategy for
the defender. This is a generalization of similar results
recently achieved in \cite{ABRBK} for a special
type of patrolling games where all nodes share the same attack length
(i.e., $\mathit{supp}(S)$ is a singleton). The proof technique is completely different.
\begin{theorem}
\label{thm:optimal}
For every patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$, there exists an optimal defender's strategy.
\end{theorem}
\begin{proof}[Proof Sketch]
We construct an optimal strategy $\sigma^*$ as a point-wise limit of a sequence $\sigma^1,\sigma^2,\ldots$ of strategies where each $\sigma^k$ is $1/k$-optimal. More precisely, we select $\sigma^1,\sigma^2,\ldots$ in such a way that for each history $h$, the sequence of distributions $\sigma^1(h),\sigma^2(h),\ldots$ converges to a probability distribution, and we define $\sigma^*(h)$ to be its limit (we obtain $\sigma^1,\sigma^2,\ldots$ by starting with an arbitrary sequence of $1/k$-optimal strategies and successively filtering subsequences that are convergent on individual histories). It is relatively straightforward to show that if $\mathit{val}(\sigma^*)\leq\mathit{val}-\delta$ for some $\delta>0$, then for all $k$'s large enough we have $\mathit{val}(\sigma^k)\leq \mathit{val}-\delta/2$, which contradicts the fact that each $\sigma^k$ is $1/k$-optimal. For details see Appendix~\ref{app-optimal}.
\end{proof}
\noindent
\subsection{Computing finite-memory $\varepsilon$-optimal strategies}
In this subsection we describe a generic algorithm which for a given patrolling problem computes a finite representation of an $\varepsilon$-optimal strategy. Let us start with the definition of a finite-memory strategy.
\begin{definition}
A {\em finite-memory defender's strategy} is a tuple $(M,N,m_0,\xi)$ where $M$ is a finite set of memory elements, $N:M\times U\rightarrow M$ assigns to every memory element $m\in M$ and a node $u\in U$ a next memory element $N(m,u)$, $m_0$ is an initial memory element, and $\xi:M\times U\rightarrow \Delta(U)$ is a function which to every memory element $m\in M$ and a node $u\in U$ assigns a distribution $\xi(m,u)$ on $U$ such that $\mathit{supp}(\xi(m,u))\subseteq \mathit{succ}(u)$.
A finite-memory defender's strategy $(M,N,m_0,\xi)$ induces a defender's
strategy $\sigma$ as follows: We extend $N$ to the ``empty'' history $\varepsilon$ by $N(m_0,\varepsilon)=m_0$, and to all histories $hv\in \mathcal{H}$,
where $v\in U$, inductively by $N(m_0,hv)=N(N(m_0,h),v)$. Then for $hu\in \mathcal{H}$ (where $u\in U$) we have that $\sigma(hu)=\xi(N(m_0,h),u)$.
\end{definition}
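Executing a finite-memory strategy amounts to threading the memory through the history exactly as in the inductive extension of $N$ above. A minimal sketch, in our own encoding with $N$ and $\xi$ as dictionaries keyed by (memory, node) pairs:

```python
def induced_strategy(M, N, m0, xi):
    """Induced strategy sigma of a finite-memory strategy (M, N, m0, xi).
    N  : dict (memory, node) -> next memory element
    xi : dict (memory, node) -> distribution over successor nodes
    A history is a non-empty tuple of nodes; sigma(hu) = xi(N(m0, h), u)."""
    def sigma(history):
        m = m0
        # compute N(m0, h) for h = history without its last node
        for v in history[:-1]:
            m = N[(m, v)]
        return xi[(m, history[-1])]
    return sigma
```

For instance, two memory elements that flip on every move reproduce a deterministic strategy alternating between two nodes.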
\noindent
\begin{theorem}\label{thm:eps-opt}
Let $\varepsilon>0$ and assume that $\hat{u}\in T$. There is an $\varepsilon$-optimal finite-memory defender's strategy computable in time
\[
\left(\frac{\hat{d}\cdot |U|}{\varepsilon}\right)^{\mathcal{O}(\hat{d}^2\cdot |U|^2)}.
\]
\end{theorem}
\noindent
We construct our strategy using the so-called \emph{characteristics} (some intuition is given below).
\begin{definition}
A {\em characteristic} $c$ is a triple $(\mathbf{r},\mathbf{s},\mathbf{c})$ where $\mathbf{r}\in U$, $\mathbf{s}$ is a probability distribution on $U$, and $\mathbf{c}:\{2,\ldots,\hat{d}\}\times
T\rightarrow [0,1]$. Denote by $\mathbf{Char}$ the
set of all characteristics. Given $c=(\mathbf{r},\mathbf{s},\mathbf{c})\in \mathbf{Char}$, we denote by $\mathit{val}(c)$
the value $\min_{u\in T} \mathbf{c}(d(u),u)$ of $c$.
\end{definition}
\noindent
Given a characteristic $c$, we use $c_{\mathbf{r}}, c_{\mathbf{s}}, c_{\mathbf{c}}$ to denote the three components of $c=(\mathbf{r},\mathbf{s},\mathbf{c})$, respectively.
Intuitively, we interpret a given characteristic $c$ as a ``local'' plan of defence for the next $\hat{d}$ steps, where
\begin{itemize}
\item $c_{\mathbf{r}}$ is the current node,
\item $c_{\mathbf{s}}$ is the current assignment of probabilities to the successors
of $c_{\mathbf{r}}$, and
\item for every $2\leq k\leq \hat{d}$ and every $u\in T$, we interpret $c_{\mathbf{c}}(k,u)$ as the probability of visiting $u$ in at least one, and at most $k$ steps from
$c_{\mathbf{r}}$.~\footnote{Note that many characteristics are not
``consistent'' (if e.g. $c_{\mathbf{s}}(u)=1/2$ and $c_{\mathbf{c}}(1,u)=1/4$). But later we make
sure that only consistent characteristics are used.}
\end{itemize}
To simplify our notation, we denote by $c_{\mathbf{c}}(1,u)$ the probability $c_{\mathbf{s}}(u)$ for every $u\in T$.
Now assume that the current plan is formalized by a characteristic $c$, and suppose that the defender makes one step to a next vertex $v$ chosen randomly with probability $c_{\mathbf{s}}(v)$. Now the defender declares a new plan $c^v\in \mathbf{Char}$ where $c^v_{\mathbf{r}}=v$. However, the crucial observation is that the new plans $(c^v)_{v\in U}$ must be consistent with the original plan $c$ in the following sense, for all $2\leq k\leq \hat{d}$ and all $u\in T$:
\[
c_{\mathbf{c}}(k,u) = c_{\mathbf{s}}(u)+\sum_{v\not = u} c_{\mathbf{s}}(v)\cdot c^v_{\mathbf{c}}(k-1,u)
\]
We say that such a~vector $(c^v)_{v\in U}\in \mathbf{Char}^U$ of characteristics is a {\em successor} of $c$.
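The consistency condition is easy to check mechanically. In the sketch below, a characteristic is encoded (our own convention) as a dict with keys `'r'`, `'s'`, `'c'`, and $c_{\mathbf{c}}(1,u)$ abbreviates $c_{\mathbf{s}}(u)$ as in the text:

```python
def is_successor(c, succ, d_hat, targets, tol=1e-9):
    """Check that the vector (succ[v])_v is a successor of characteristic c.
    A characteristic is a dict: 'r' -> node, 's' -> {node: prob},
    'c' -> {(k, u): prob}; by convention c_c(1, u) = c_s(u)."""
    def cc(ch, k, u):
        return ch['s'].get(u, 0.0) if k == 1 else ch['c'].get((k, u), 0.0)

    for k in range(2, d_hat + 1):
        for u in targets:
            # c_c(k, u) = c_s(u) + sum_{v != u} c_s(v) * c^v_c(k-1, u)
            rhs = c['s'].get(u, 0.0) + sum(
                p * cc(succ[v], k - 1, u)
                for v, p in c['s'].items() if v != u)
            if abs(cc(c, k, u) - rhs) > tol:
                return False
    return True
```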
Now let $C$ be a finite set of characteristics such that every $c\in C$ has
a successor $(c^v)_{v\in U}\in C^U$ (i.e., $c^v\in C$ for all $v\in U$), and
there is at least one $\hat{c}\in C$ such that $\hat{c}_{\mathbf{r}}=\hat{u}$. We say that such $C$ is {\em closed}. We construct a~finite-memory strategy $(M,N,m_0,\xi)$ where $M=C$, $N(c,v)=c^v$, $m_0=\hat{c}$, and $\xi(c)=c_{\mathbf{s}}$. Intuitively, the strategy follows the plans in $C$ and always proceeds to the next plan according to a fixed successor in $C^U$. We prove that this strategy works consistently with the characteristics of $C$, i.e., whenever the current history is $h$ and the current memory element is $c$, then, subsequently, the probability of reaching $u$ in at least one, and at most $k$ steps is equal to $c_{\mathbf{c}}(k,u)$. Thus the value of the finite-memory strategy cannot be worse than $\min_{c\in C} \mathit{val}(c)$.
So, the computation of a finite-memory strategy reduces to a computation of a
finite closed set of characteristics. We show that one such set can be
extracted from a carefully selected $\varepsilon$-optimal strategy. Given a
defender's strategy $\sigma$, we denote by $\mathcal{H}(\sigma)$ the set of
all histories that $\sigma$ may follow with a positive probability. Given a
strategy $\sigma$ and a history $h\in \mathcal{H}(\sigma)$, we define a
characteristic $c[\sigma,h]$ (written $c[h]$ when $\sigma$ is clear from the context) such that $c[h]_{\mathbf{r}}$ is the last node of
$h$, $c[h]_{\mathbf{s}}=\sigma(h)$, and each $c[h]_{\mathbf{c}}(k,u)$ is the
probability of reaching $u$ in at least one, and at most $k$ steps starting
with the history $h$ using $\sigma$. Now let $\sigma^*$ be an optimal
strategy. The crucial observation (see also Proposition~\ref{cor:eps-opt-discr} in
Appendix~\ref{app-eps-opt}) is that for every $h\in \mathcal{H}(\sigma^*)$ it
holds that $\mathit{val}(c[\sigma^*,h])\geq \mathit{val}$. By appropriately rounding probabilities in $\sigma^*$, we obtain an $\varepsilon$-optimal strategy $\sigma^{\varepsilon}$ such that for every history $h$ and every $u\in U$:
\[
\sigma^{\varepsilon}(h)(u)=k\cdot \lceil \hat{d}\cdot|U|/\varepsilon\rceil^{-1}\text{ for a suitable }k\in \mathbb{N}
\]
and $\mathit{val}(c[\sigma^{\varepsilon},h])\geq \mathit{val}-\varepsilon$ for all $h\in \mathcal{H}(\sigma^{\varepsilon})$.
Now it is rather straightforward to show that for each $h$, the vector $(c[hv])_{v\in U}$ is a successor of $c[h]$. Thus the set $\mathbf{Char}[\sigma^{\varepsilon}]$ of all $c[h]$, where $h\in \mathcal{H}(\sigma^{\varepsilon})$, is a closed set. It is also finite, of size bounded by $\left(\hat{d}\cdot |U|/\varepsilon\right)^{\mathcal{O}(\hat{d}^2\cdot |U|)}$, and every $c\in \mathbf{Char}[\sigma^{\varepsilon}]$ satisfies $\mathit{val}(c)\geq \mathit{val}-\varepsilon$. This shows that there always exists an $\varepsilon$-optimal finite-memory strategy of size bounded by $\left(\hat{d}\cdot |U| /\varepsilon\right)^{\mathcal{O}(\hat{d}^2\cdot |U|)}$.
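The rounding step that discretizes $\sigma^*$ puts every probability on the grid of multiples of $\lceil \hat{d}\cdot|U|/\varepsilon\rceil^{-1}$. One way to do this while preserving total mass is sketched below; the largest-remainder tie-breaking is our own choice, not prescribed by the text.

```python
import math

def round_to_grid(dist, d_hat, n_nodes, eps):
    """Round a probability distribution so every value is an integer
    multiple of 1/ceil(d_hat * n_nodes / eps), keeping the total mass 1."""
    g = math.ceil(d_hat * n_nodes / eps)           # grid resolution
    scaled = {u: p * g for u, p in dist.items()}
    floored = {u: math.floor(s) for u, s in scaled.items()}
    leftover = g - sum(floored.values())           # grid units still to place
    # hand the remaining units to the entries with the largest remainders
    by_rem = sorted(dist, key=lambda u: scaled[u] - floored[u], reverse=True)
    for u in by_rem[:leftover]:
        floored[u] += 1
    return {u: f / g for u, f in floored.items()}
```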
Our algorithm computes a closed subset $C$ of a (finite) set of appropriately rounded characteristics that maximizes $\min_{c\in C}\mathit{val}(c)$. This is done by a simple iterative procedure which maintains a growing pool of characteristics (in order of decreasing value) and tries to find its closed subset. For details see Appendix~\ref{app-eps-opt}.
\section{The results}
\label{sec-results}
\input{resultsAB-short}
\subsection{A bound on the Stackelberg value}
Now we establish an upper bound on $\mathit{val}$ which depends only
on the attack signature $\mathit{S}$ of $\mathcal{G}$. The simplicity of the argument
is due to Proposition~\ref{prop-abafy}.
\begin{theorem}
\label{thm-upper}
For every patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$
such that $T = U$,
we have that \mbox{$\mathit{val} \leq \left(\sum_{k \in \mathit{supp}(S)} \frac{S(k)}{k}\right)^{-1}$}
where $S$ is the attack signature of $\mathcal{G}$.
\end{theorem}
\begin{proof}[Proof Sketch] Intuitively, every node $u$ has to be visited by the
defender with probability at least $\mathit{val}$ during each $d(u)$ consecutive
steps. Hence, summing the probabilities of visiting $u$ in each of the
steps from $1$ to $\ell = \Pi_{k \in \mathit{supp}(S)}\, k$ we need to reach a
value greater than or equal to $\mathit{val} \cdot\ell/d(u)$. Summing these values
for all nodes we have at least $\sum_{u\in U}\mathit{val} \cdot \ell/d(u)$. Note
that in each step we visit some node with probability one and so,
the sum for all nodes and $\ell$ steps is just $\ell$. This implies the
theorem due to $\ell\geq\sum_{u\in U}\mathit{val}\cdot\ell/d(u)=\ell \cdot \mathit{val}
\cdot \sum_{k \in \mathit{supp}(S)} S(k)/k$. For more details see
Appendix~\ref{app-val-bound}.
\end{proof}
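The bound of Theorem~\ref{thm-upper} is cheap to evaluate. A sketch using exact rational arithmetic, with the signature encoded (our convention) as a map from attack lengths $k$ to $S(k)$:

```python
from fractions import Fraction

def stackelberg_upper_bound(signature):
    """val <= ( sum_{k in supp(S)} S(k)/k )^(-1), per Theorem thm-upper;
    signature maps each attack length k to S(k), the number of nodes
    with attack length k."""
    return 1 / sum(Fraction(s, k) for k, s in signature.items())
```

For a uniform signature $S(d)=n$ this reproduces the bound $d/n$ used later (e.g., $S(3)=6$ gives $1/2$).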
\subsection{Solving patrolling problems with a fully connected environment}
\label{sec-modular-strategies}
Let $\mathcal{G} = (U,T,\hat{u},E, d)$ be a patrolling problem where
$T = U$ and $E = U \times U$, and let $S$ be the signature of $\mathcal{G}$. Recall the notion of
modular strategy and the associated decomposition principle introduced in
Section~\ref{sec-intro}. In particular, recall that a $d$-modular strategy
$\sigma$ for $\mathcal{G}$ is fully represented by probability distributions
$\mu_0,\ldots,\mu_{d-1}$ over $U$ such that $\sigma(h) = \mu_i$ where
$i = |h|~\mathrm{mod}~d$.
We start by considering the case when $\mathcal{G}$ has $n$ nodes with the same
attack length~$d$. Since we aim at developing a strategy synthesis
algorithm \emph{polynomial in $\size{S}$}, we need to invent a compact representation of modular strategies which is sufficiently
expressive for our purposes.
We assume that the nodes of $U$ are
indexed by numbers from $1$ to $|U|$, and we use $\U{i,N}$ to denote
the subset of $U$ consisting of $N$ subsequent nodes starting from $i$,
i.e., all $u_{\ell}$ where $i \leq \ell < i+N$ and $1 \leq i \leq i+N-1 \leq |U|$.
Let us consider the
class of expressions determined by the following abstract syntax equation:
\[
\theta ~~::=~~ \mathit{Circle}(\U{i,N},M,L) ~~\mid~~ \theta_1;\theta_2 ~~\mid~~ \nu_p[\theta_1,\theta_2]
\]
Here, $M,L \in \mathbb{N}$ such that $M$ divides $N$, and $p$ ranges over a countable set of variables $\mathit{Var}$. Assuming some valuation $\alpha: \mathit{Var} \rightarrow [0,1]$, every expression $\theta$ determines a modular strategy for $U$ defined inductively as follows:
$\mathit{Circle}(\U{i,N},M,L)$ is a modular strategy which splits
$\U{i,N}$ into pairwise disjoint subsets of size~$M$
and then ``walks around'' these sets $L$ times,
$\theta_1;\theta_2$ is a modular strategy which ``sequentially alternates'' between $\theta_1$ and $\theta_2$, and $\nu_p[\theta_1,\theta_2]$ is a strategy
which ``composes'' $\theta_1$ and $\theta_2$
using the distribution $(1-\alpha(p),\alpha(p))$.
A detailed description of the semantics is given in
Appendix~\ref{app-algorithm}.
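The expressions $\theta$ have an immediate representation as an abstract syntax tree. The sketch below (our own encoding; the semantics itself lives in the appendix) only records the structure and collects the $\nu$-variables, which are the unknowns of the equation system produced by the synthesis algorithm:

```python
from dataclasses import dataclass
from typing import Union

# Direct transcription of the abstract syntax
#   theta ::= Circle(U[i,N], M, L) | theta1 ; theta2 | nu_p[theta1, theta2]

@dataclass
class Circle:
    i: int      # first node index of U[i,N]
    N: int      # number of consecutive nodes
    M: int      # block size; must divide N
    L: int      # number of rounds around the blocks

@dataclass
class Seq:          # theta1 ; theta2
    left: "Expr"
    right: "Expr"

@dataclass
class Nu:           # nu_p[theta1, theta2]
    p: str          # variable name, valued by the equation system
    left: "Expr"
    right: "Expr"

Expr = Union[Circle, Seq, Nu]

def variables(theta):
    """Collect the nu-variables occurring in an expression."""
    if isinstance(theta, Circle):
        return set()
    if isinstance(theta, Seq):
        return variables(theta.left) | variables(theta.right)
    return {theta.p} | variables(theta.left) | variables(theta.right)
```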
Our strategy synthesis algorithm is a recursive procedure
$\textsc{Defend}$ which inputs a triple $(\U{i,N},D,e)$, where $\U{i,N}$ is the set of nodes to be defended,
$D$ is the number of steps available for defending $\U{i,N}$, and $e$ is an expression
which represents the ``weight'' of the constructed defending strategy
in the final distribution $\nu$. The procedure outputs a pair $(\theta,V)$ where
$\theta$ is an expression specifying a $D$-modular strategy for $\U{i,N}$, and $V$ is
an arithmetic expression representing the guaranteed ``coverage'' of the targets in $\U{i,N}$ when using $\theta$ with
the weight~$e$.
As a side effect, the function $\textsc{Defend}$ may produce
equations for the variables that are employed in
symbolic strategy compositions of the form $\nu_p[\theta_1,\theta_2]$. The algorithm is invoked by
$\textsc{Defend}(\U{1,|U|},d,1)$, and the system of equations is initially empty. The recursion is stopped when $D$ divides
$N$ or $N$ divides $D$, and in these cases $\textsc{Defend}$ provably
produces strategies that achieve the best coverage
for every value of~$e$. In the other cases, $\textsc{Defend}$ proceeds
recursively by splitting either the set of
nodes or the number of steps available to protect the nodes. In both cases, $\textsc{Defend}$ tries to exploit
the available resources in the best possible way.
A full description is given in Appendix~\ref{app-algorithm}.
At the very end, we obtain a $d$-modular strategy $\sigma$ for $\mathcal{G}$
specified by an expression $\theta$ whose size is polynomial in $\size{S}$,
an expression $V$ which represents $\mathit{val}(\sigma)$, and we also obtain a system of polynomial equations for the variables
which parameterize $\theta$ and $V$. The system has a unique solution in $[0,1]^k$ (where $k$ is the number of variables) that
corresponds to the intended valuation. The number $k$ of variables can, for given $n > d$, be computed as follows: we put $n_0 = n$ and $d_0 = d$,
and then $n_{i+1} = n_i~\mathrm{mod}~d_i$ and $d_{i+1} = d_i~\mathrm{mod}~n_{i+1}$. The number
of variables for $n$ and $d$ is equal to the least index $j$ such that $d_j$ divides $n_j$.
In particular, if $d$ divides $n$, there is no variable at all, and our algorithm immediately
produces a strategy which achieves the value $d/n$, which is optimal by Theorem~\ref{thm-upper}.
As an example of a ``hard'' instance, consider $n = 709793170386861531$ and $d = 37248973638339152$, which requires $30$
variables and equations. The solution (producing $\mathit{val}(\sigma) = 0.05247471678$) can be computed by Maple
in fractions of a second.
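The recursion for the number of variables can be transcribed directly. The only subtlety is the stopping convention when an update reaches zero, which the text leaves implicit; the guard below is our reading of it, so we do not rely on reproducing the exact count of $30$ for the instance quoted above.

```python
def num_variables(n, d):
    """Number of nu-variables for n > d nodes with common attack length d:
    n_{i+1} = n_i mod d_i, d_{i+1} = d_i mod n_{i+1}; the answer is the
    least index j such that d_j divides n_j."""
    j = 0
    while n % d != 0:      # d_j does not divide n_j
        n = n % d          # n_{j+1}, nonzero here
        if d % n == 0:     # d_{j+1} would be 0 (n_{j+1} divides d_j);
            return j + 1   # stopping here is our guard for this case
        d = d % n          # d_{j+1}
        j += 1
    return j
```

If $d$ divides $n$ the answer is $0$, in which case the algorithm's value $d/n$ is optimal, as stated above.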
It has been recently proved by Lamser \cite{Lamser:BCthesis} that our algorithm produces an optimal
strategy also when $d=2$ (for arbitrary $n$), which includes the example of
Fig.~\ref{fig-example}~(left). Since the algorithm seems to exploit
the available resources optimally, we conjecture that it actually outputs an optimal strategy for all parameters.
To solve a patrolling problem with a general signature $S$, we simply split the nodes into
disjoint subsets according to their attack lengths, solve these subproblems by the above algorithm,
and then compose the modular strategies so that all nodes are defended equally well. One can easily check
that if $S$ is well formed, this leads to a strategy whose value matches the bound of Theorem~\ref{thm-upper}.
Thus, we obtain the following:
\begin{theorem}
\label{thm:well-formed-optimal}
Let $\mathcal{G}$ be a patrolling problem with $T=U$, a fully connected environment, and a well formed
signature~$S$. Then there is an optimal modular strategy $\sigma$ computable in time polynomial
in $\size{S}$.
\end{theorem}
\subsection{A characterization of sufficiently connected environments}
\label{sec-well-formed}
\noindent
For the rest of this subsection, we fix a patrolling problem
$\mathcal{G} = (U,T,\hat{u},E,d)$ with $T =U$ and a well-formed signature~$S$.
We classify the conditions under which
$E$ is sufficiently connected (recall that
$E$ is sufficiently connected iff the value for $\mathcal{G}$ is the same as the value for $\mathcal{G}$ with $E$
replaced by the fully connected environment
$U \times U$). Let $M_S$ be a digraph with vertex labelling $d$ constructed as follows:
\begin{itemize}
\item For all $k \in \mathit{supp}(S)$, $i \in \{0,\ldots,k{-}1\}$, and
$j \in \{1,\ldots,S(k)/k\}$, we add a fresh vertex $v_k[i,j]$
and set $d(v_k[i,j]):=k$. Hence,
$M_S$ has exactly $\sum_{k \in \mathit{supp}(S)} S(k)$ vertices.
\item For every pair of vertices $v_k[i,j]$ and $v_{k'}[i',j']$,
there is an arc from $v_k[i,j]$ to $v_{k'}[i',j']$ in $M_S$ iff
there is some $0 \leq \ell < k\cdot k'$ such that
$i = \ell \,\mathit{mod}\, k$ and
$i' = (\ell {+} 1) \,\mathit{mod}\, k'$.
\end{itemize}
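The construction of $M_S$ is a direct double loop over the vertex set. A sketch, with vertices encoded (our convention) as triples $(k,i,j)$:

```python
def build_MS(signature):
    """Build the digraph M_S for a well-formed signature (a dict k -> S(k),
    where k divides S(k)).  Returns (vertices, labelling d, arcs)."""
    vertices, d = [], {}
    for k, Sk in signature.items():
        for i in range(k):
            for j in range(1, Sk // k + 1):
                vertices.append((k, i, j))
                d[(k, i, j)] = k
    arcs = set()
    for (k, i, j) in vertices:
        for (k2, i2, j2) in vertices:
            # arc iff some 0 <= l < k*k2 has i = l mod k and i2 = (l+1) mod k2
            if any(l % k == i and (l + 1) % k2 == i2 for l in range(k * k2)):
                arcs.add(((k, i, j), (k2, i2, j2)))
    return vertices, d, arcs
```

For $\mathit{supp}(S)=\{2\}$ with $S(2)=2$ this yields a $2$-cycle, matching the vertex count $\sum_{k} S(k)$.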
Note that $M_S$ is computable in polynomial time. We prove the following:
\begin{theorem}
\label{thm-subdigraph}
Let $\mathcal{G} = (U,T,\hat{u},E,d)$ be a patrolling problem
such that $T=U$ and the signature~$S$ of $\mathcal{G}$ is
well formed. Then $E$ is sufficiently connected iff
$(U,E)$ contains a subdigraph $H$ which is $d$-preserving isomorphic to~$M_S$
(i.e., if $x$ of $H$ is mapped to $y$ of $M_S$ then $d(x)=d(y)$).
\end{theorem}
\noindent
The ``if'' part of Theorem~\ref{thm-subdigraph} is trivial, because if
$(U,E)$ contains a subdigraph isomorphic to $M_S$, then we can implement the optimal
modular strategy constructed by the algorithm of
Subsection~\ref{sec-modular-strategies}. The ``only if'' part is more
challenging. The crucial observation is that the defender is not allowed to visit any target $u$ twice within $d(u)$ steps whenever she is
aiming to reach the bound of Theorem~\ref{thm-upper}.
The underlying observation also reveals that \emph{every} optimal
strategy $\sigma$ starts to
behave like the strategy $\sigma^*$ after every history which visits all
nodes. Hence, the strategy $\sigma^*$ does \emph{not} belong to
$\bigcup_{j=1}^\infty \Sigma^j$, except for some trivial cases (see
Section~\ref{sec-intro}). A proof of Theorem~\ref{thm-subdigraph} is given
in Appendix~\ref{app-subdigraph}.
An immediate consequence of Theorem~\ref{thm-subdigraph} is that
the problem whether an environment $E$ is sufficiently
connected is in \textbf{NP}. We complement this by a matching
lower bound in the following theorem
with a full proof in Appendix~\ref{app-HAM}.
\begin{theorem}
\label{thm-connected}
The problem whether the environment of a given
patrolling problem $\mathcal{G} = (U,T,\hat{u},E,d)$, such that
$T =U$ and the signature $S$ of $\mathcal{G}$ is well formed,
is sufficiently connected, is \mbox{\textbf{NP}-complete}. Further,
this problem is \mbox{\textbf{NP}-complete} even for
a subclass of patrolling problems such that $\mathit{supp}(S) = \{k\}$, where
$k \geq 3$ is a fixed constant. For a subclass of patrolling
problems where $\mathit{supp}(S) = \{2\}$, the problem is solvable in
polynomial time.
\end{theorem}
\section{Open problems}
\noindent
Our proof of the existence of an optimal defender's strategy
(Theorem~\ref{thm:optimal}) does not allow us to conclude
anything about the \emph{structure} of optimal strategies.
One is tempted to expect that optimal strategies are in some sense ``regular'' and require only finite memory,
but our present understanding does not allow us to prove
this conjecture. Another challenge is to lift the
presented compositional technique to a more general class of patrolling games (such results would have a considerable
practical impact). Finally, the question whether the algorithm of Section~\ref{sec-modular-strategies} produces an optimal strategy for all inputs is also interesting
but left open.
% hep-ph/9703336
\section{Introduction}
One of the ways $CP$ violation can manifest itself is in the
particle-antiparticle imbalance in mass eigenstates of
neutral meson systems.
In the case of the neutral kaon, it can be measured as the asymmetry between
$K_L\to \pi^- \ell^+ \nu$ and $K_L\to \pi^+ \ell^- \nu$ \cite{PDG,Kaon}:
\begin{equation}
\delta_K\equiv
{Br(K_L\to \pi^- \ell^+ \nu) - Br(K_L\to \pi^+ \ell^- \nu) \over
Br(K_L\to \pi^- \ell^+ \nu) + Br(K_L\to \pi^+ \ell^- \nu)}
= (3.27\pm0.12)\times10^{-3}\;,
\end{equation}
which indicates that $K_L$ contains more $K^0$ than $\overline K^0$
(assuming $\Delta S = \Delta Q$). If
$CPT$ is conserved, $K_S$ has the same asymmetry $\delta_K$
with the same sign; thus there is no need to specify which of the
two mass eigenstates is being considered.
For the neutral $B$ meson
system, one can similarly define the asymmetry $\delta$ as
\begin{equation}
\delta\equiv
{|\langle B^0|B_{a,b}\rangle|^2 - |\langle\overline B^0|B_{a,b}\rangle|^2\over
|\langle B^0|B_{a,b}\rangle|^2 + |\langle\overline B^0|B_{a,b}\rangle|^2}\; ,
\label{eq:deltadef}
\end{equation}
where $B_a$ and $B_b$ are the two mass eigenstates which, as in the case
of the kaon, have the same asymmetry (assuming $CPT$). In practice, however,
it is experimentally difficult to isolate $B_a$ or $B_b$. The traditional
method is to measure the same-sign di-lepton asymmetry in
$\Upsilon(4S)\to B^0 \overline B^0$ \cite{Okun+,AliAydin}:
\begin{equation}
A_{\ell\ell}\equiv
{ N(\ell^+\ell^+) - N(\ell^-\ell^-) \over
N(\ell^+\ell^+) + N(\ell^-\ell^-) }
\sim 2\delta\; .
\end{equation}
There is in principle no dilution due to $\Upsilon(4S)\to B^+B^-$ since
the same-sign di-lepton events are caused by mixing of the neutral $B$ mesons
(assuming $\Delta B = \Delta Q$).
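The approximation $A_{\ell\ell}\sim 2\delta$ can be checked numerically: with the normalization $|p|^2+|q|^2=1$, the like-sign rates scale as $N(\ell^+\ell^+)\propto|p/q|^2$ and $N(\ell^-\ell^-)\propto|q/p|^2$, so $A_{\ell\ell}=(|p|^4-|q|^4)/(|p|^4+|q|^4)$ while $\delta=|p|^2-|q|^2$. A quick check with our own helper:

```python
def dilepton_asymmetry(p_sq):
    """Same-sign di-lepton asymmetry from |p|^2 (with |q|^2 = 1 - |p|^2):
    A_ll = (|p|^4 - |q|^4) / (|p|^4 + |q|^4) ~ 2*delta,
    where delta = |p|^2 - |q|^2."""
    q_sq = 1.0 - p_sq
    return (p_sq**2 - q_sq**2) / (p_sq**2 + q_sq**2)
```

For $|p|^2=0.501$ one gets $\delta=0.002$ and $A_{\ell\ell}$ agreeing with $2\delta$ to better than one part in $10^5$.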
Within the framework of the standard model, the short distance calculation
gives \cite{Ma+,Barger+,Hagelin}
\begin{equation}
A_{\ell\ell}\sim -\,4\pi{m_c^2\over m_t^2}\,
\Im\left({V_{cb}V_{cd}^*\over V_{tb}V_{td}^*}\right),
\label{Ashort}
\end{equation}
which is around $10^{-3}$. Including long distance effects, Altomari {\it et al.}
estimated $A_{\ell\ell}$ to be in the range $10^{-3}$ to $10^{-2}$
\cite{Altomari+}. The uncertainty is primarily due to
hadronic intermediate states,
and even the sign cannot be reliably predicted.
As a consequence, a measurement of $CP$ violation in the semileptonic
asymmetry does not lead to the determination of basic $CP$ violating
parameters in the standard model.
Outside of the standard model, however, the asymmetry can
be larger, and thus an experimental value of
$|\delta|$ above $\sim10^{-2}$ would signal
new physics \cite{Asl-Newphys,Newphys}. The current experimental number is
not very recent or precise:
$A_{\ell\ell} = 0.031\pm0.096\pm0.032$ \cite{CLEO-A}.
For $B_s$, the short distance prediction of $A_{\ell\ell}$ is obtained by
replacing $d$ by $s$ in (\ref{Ashort}), and it is
even smaller than for $B^0$.
The $CP$ asymmetry in the single lepton sample
had been suggested as a possible observable to search for $CP$ violation
in the case where the mixing is small \cite{Hagelin,Buras+}.
The logic was that if the mixing is
small, then the statistics of the di-lepton events will decrease,
making the di-lepton method impractical.
After the observation of substantial mixing in the neutral
$B$ meson system \cite{Bmixobs}, however, the single lepton method
has not received much attention.
In this note, we point out that the advantage of the single lepton method
over the di-lepton method actually increases for large mixings, and that,
on the $\Upsilon(4S)$, the single lepton method has a sensitivity comparable
to or better than that of the di-lepton method. This is so in spite of the fact
that in the single lepton measurement, one usually cannot distinguish charged
and neutral $B$ mesons.
We begin by briefly reviewing the phenomenological
background, and then move on to estimate experimental sensitivities.
In the appendix, we present a general rule that relates the inclusive
decay time distribution on $\Upsilon(4S)$ to those of $B^0$ and $\overline B^0$
tagged at $t=0$, as well as decay rate formulas without assuming $CPT$
invariance.
\section{Phenomenology}
The mass
eigenstates can be written in terms of $B^0$ and $\overline B^0$ as
\begin{equation}
\left\{
\begin{array}{rcl@{\quad}l}
B_a &=& p B^0 + q \overline B^0
&(\hbox{mass: }m_a,\hbox{ decay rate: }\gamma_a)\\
B_b &=& p' B^0 - q' \overline B^0
&(\hbox{mass: }m_b,\hbox{ decay rate: }\gamma_b)\\
\end{array}, \right.
\label{eq:Babdef}
\end{equation}
where the normalization is
\begin{equation}
|p|^2 + |q|^2 = 1\; ,\qquad |p'|^2 + |q'|^2 = 1\; .
\end{equation}
If $CPT$ is a good symmetry, we have
\begin{equation}
p' = p\; , \quad q' = q \quad (CPT)\; . \label{eq:CPTpq}
\end{equation}
In the following we assume $CPT$ invariance. $CPT$ symmetry
also effectively allows us to take \cite{CPTamp}
\begin{equation}
|Amp(B^0\to\ell^+)| = |Amp(\overline B^0\to\ell^-)|\equiv A_0\quad (CPT)\; .
\label{eq:CPTamp}
\end{equation}
Furthermore, we will assume $\Delta B = \Delta Q$ \cite{dBdQ}.
The probability that a pure $B^0$
or $\overline B^0$ at $t=0$ decays to $\ell^\pm$ at time $t$ is then given by
(see Appendix)
\begin{eqnarray}
\Gamma_{B^0\to\ell^+}(t) &=& \Gamma_{\overline B^0\to\ell^-}(t) =
{A_0^2\over2} e^{-\gamma_+ t}
\left[\cosh\gamma_- t + \cos\delta m t\right]\; , \nonumber \\
\Gamma_{\overline B^0\to\ell^+}(t) &=& {|p|^2\over |q|^2}
{A_0^2\over2} e^{-\gamma_+ t}
\left[\cosh\gamma_- t - \cos\delta m t\right]\; , \label{eq:BBlepdist}\\
\Gamma_{ B^0\to\ell^-}(t) &=& {|q|^2\over |p|^2}
{A_0^2\over2} e^{-\gamma_+ t}
\left[\cosh\gamma_- t - \cos\delta m t\right]\; , \nonumber
\end{eqnarray}
where
\begin{equation}
\delta m\equiv m_a - m_b\; ,\quad
\gamma_\pm \equiv {\gamma_a\pm\gamma_b\over 2}\; .
\end{equation}
The short distance calculation predicts
$\gamma_a\sim\gamma_b$ \cite{Buras+}.
There is, however, no stringent experimental limit, and we will allow for the
possibility that the difference is sizable. Also, the following expressions
are applicable to $B_s$ mesons, which are expected to have a sizable
decay rate difference.
The fraction of $B^0$
or $\overline B^0$ at $t=0$ eventually decaying to $\ell^\pm$, which we denote
as $Br(B^0 (\overline B^0) \to\ell^\pm)$, is obtained by integrating
the above expressions:
\begin{eqnarray}
Br(B^0\to\ell^+) &=& Br(\overline B^0\to\ell^-) =
{b_{sl}\over 1-y^2}\, (1-\chi)\; , \nonumber \\
Br(\overline B^0\to\ell^+) &=&
{b_{sl}\over 1-y^2}{|p|^2\over |q|^2}\,\chi\; , \label{eq:BBlep}\\
Br(B^0\to\ell^-) &=&
{b_{sl}\over 1-y^2}{|q|^2\over |p|^2}\,\chi\; , \nonumber
\end{eqnarray}
where $\chi$ is the standard mixing parameter \cite{PaisTreiman}
defined by
\begin{equation}
\chi \equiv {1\over2}\,{x^2 + y^2\over 1 + x^2}
= 0.175\pm0.016\;\cite{PDG}\;,
\end{equation}
\begin{equation}
x \equiv {\delta m\over\gamma_+}\;,\quad
y \equiv {\gamma_-\over\gamma_+}\;,
\end{equation}
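As an illustrative numerical aside (not part of the original analysis): neglecting $y$, the definition of $\chi$ inverts to $x=\sqrt{2\chi/(1-2\chi)}$, so the measured $\chi$ fixes $x$:

```python
import math

chi = 0.175  # measured mixing parameter quoted above

# assuming y ~ 0, chi = x^2 / (2 (1 + x^2)) inverts to:
x = math.sqrt(2 * chi / (1 - 2 * chi))   # ~0.73

# round trip: recover chi from x
assert abs(0.5 * x**2 / (1 + x**2) - chi) < 1e-12
```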
and $b_{sl}$ is the `normalized' semileptonic branching fraction
which reduces to the experimental semileptonic branching fraction
in the limit of $\gamma_a = \gamma_b$ and $CP$ symmetry:
\begin{equation}
b_{sl}\equiv {\displaystyle {A_0^2\over\gamma_+}} \;
\stackrel{\gamma_a=\gamma_b, CP}{\longrightarrow} \;
2 Br(B\to X\ell\nu) = 2 \times (0.1043\pm0.0024)\;\cite{PDG}\;,
\label{eq:SLamp}
\end{equation}
where the factor 2 comes from the fact that there are electron and muon modes.
From the first line of (\ref{eq:BBlep}), we see that there is no asymmetry
in the `right-sign' lepton branching
fractions.
The particle-antiparticle imbalance in $B_{a,b}$ (namely,
$|p|^2 \not= |q|^2$) shows up only in the `wrong-sign' decays:
\begin{equation}
{Br(\overline B^0\to\ell^+) - Br(B^0\to\ell^-) \over
Br(\overline B^0\to\ell^+) + Br(B^0\to\ell^-)}
= {|p|^4 - |q|^4\over |p|^4 + |q|^4} = {2\delta \over 1 + \delta^2}\;,
\label{eq:Aflvtag}
\end{equation}
with
\begin{equation}
\delta = {|p|^2 - |q|^2\over |p|^2 + |q|^2}\; ,
\end{equation}
which follows from definition (\ref{eq:deltadef}).
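The last equality in (\ref{eq:Aflvtag}) is elementary: writing $u\equiv|p|^2$
and $v\equiv|q|^2$, so that $\delta=(u-v)/(u+v)$, one has
\begin{displaymath}
{u^2 - v^2 \over u^2 + v^2}
= {2\,(u-v)(u+v) \over (u+v)^2 + (u-v)^2}
= {2\delta \over 1 + \delta^2}\;,
\end{displaymath}
using $(u+v)^2 + (u-v)^2 = 2\,(u^2+v^2)$.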
Experimentally, measurement of this asymmetry requires flavor tagging
at $t=0$.
In the decay $\Upsilon(4S)\to B^0\overline B^0$, one needs to take into account
the quantum correlations arising from the fact that the two mesons are in
a coherent $L=1$ state.
We consider the case when one side decays to $\ell^+$ and the other side
decays to $\ell^-$ (and all other charge combinations).
The decay time variable accessible in an asymmetric $B$-factory is the
time difference of decays:
\begin{equation}
t_- \equiv t_1 - t_2\; .
\end{equation}
In terms of $t_-$, the decay time distributions are (see Appendix)
\begin{eqnarray}
\Gamma_{\Upsilon(4S)\to\ell^+\ell^-}(t_-) &=&
\Gamma_{\Upsilon(4S)\to\ell^-\ell^+}(t_-) \nonumber \\
&=& {A_0^4\over8\gamma_+} e^{-\gamma_+ |t_-|}
\left[ \cosh\gamma_- t_- + \cos \delta m t_- \right] \;,
\nonumber \\
\Gamma_{\Upsilon(4S)\to\ell^+\ell^+}(t_-) &=&
{|p|^2\over|q|^2}{A_0^4\over8\gamma_+} e^{-\gamma_+ |t_-|}
\left[ \cosh\gamma_- t_- - \cos \delta m t_- \right]\;,
\label{eq:Upstmi}\\
\Gamma_{\Upsilon(4S)\to\ell^-\ell^-}(t_-) &=&
{|q|^2\over|p|^2}{A_0^4\over8\gamma_+} e^{-\gamma_+ |t_-|}
\left[ \cosh\gamma_- t_- - \cos \delta m t_- \right]\; .
\nonumber
\end{eqnarray}
Note that the $\ell^+\ell^+$ and $\ell^-\ell^-$ distributions
have exactly the
same $t_-$ dependence, and thus the asymmetry between them does not
depend on $t_-$.
We thus integrate (\ref{eq:Upstmi}) to obtain
the fraction of
$\Upsilon(4S)\to B^0\overline B^0$ decays that eventually result in
a lepton pair $\ell^\pm\ell^\pm$, which we denote by
$Br(\Upsilon(4S)\to \ell^\pm\ell^\pm)$:
\begin{eqnarray}
Br(\Upsilon(4S)\to\ell^+\ell^-) &=& Br(\Upsilon(4S)\to\ell^-\ell^+)
= {1\over2}{b_{sl}^2\over 1-y^2}\, (1-\chi)\;, \nonumber \\
Br(\Upsilon(4S)\to\ell^+\ell^+) &=&
{1\over2}{b_{sl}^2\over 1-y^2}{|p|^2\over |q|^2}\,\chi\;,
\label{eq:Upslep}\\
Br(\Upsilon(4S)\to\ell^-\ell^-) &=&
{1\over2}{b_{sl}^2\over 1-y^2}{|q|^2\over |p|^2}\,\chi\;.
\nonumber
\end{eqnarray}
The common factor $b_{sl}^2/(1-y^2)$ can be written as
\begin{equation}
{b_{sl}^2\over 1-y^2} =
Br(B_a\to\ell^\pm)\, Br(B_b\to\ell^\pm)\;.
\end{equation}
Then, in the absence of $CP$ violation (namely, $|p|^2=|q|^2$),
the total yield of di-lepton events
becomes $Br(B_a\to\ell^\pm) Br(B_b\to\ell^\pm)$, as expected from the
simple picture $\Upsilon(4S)\to B_a B_b$.
The asymmetry between $(\ell^+\ell^+)$ and $(\ell^-\ell^-)$ is the same
as that of the `wrong-sign' leptons discussed above:
\begin{equation}
A_{\ell\ell} \equiv
{Br(\Upsilon(4S)\to\ell^+\ell^+) - Br(\Upsilon(4S)\to\ell^-\ell^-) \over
Br(\Upsilon(4S)\to\ell^+\ell^+) + Br(\Upsilon(4S)\to\ell^-\ell^-) }
= {2\delta \over 1 + \delta^2}\;.
\label{eq:Asymll}
\end{equation}
No flavor tagging is required for this measurement.
In order to study the single lepton asymmetry on $\Upsilon(4S)$, one
has to take into account the cases where one side decays semileptonically
and the other side decays to anything. To do so, we use the
following general rule (see Appendix):
\begin{equation}
\Gamma_{\Upsilon(4S)\to f}(t) =
2 \sum_{f_1}\int_0^\infty
\Gamma_{\Upsilon(4S)\to f_1f}(t_1,t)\, dt_1 =
\Gamma_{B^0\to f}(t) + \Gamma_{\overline B^0\to f}(t)\;,
\label{eq:4Sinc}
\end{equation}
where $\Gamma_{\Upsilon(4S)\to f}(t)$ is the probability density
that one finds a given final state $f$ decaying at time $t$ in the
process $\Upsilon(4S)\to B^0\overline B^0$,
$\Gamma_{\Upsilon(4S)\to f_1f_2}(t_1,t_2)$
is the probability density that
one side of $\Upsilon(4S)\to B^0\overline B^0$
decays to final state $f_1$ at time $t_1$
and the other side decays
to $f_2$ at time $t_2$, and $\Gamma_{B^0(\overline B^0)\to f}(t)$
is the probability density that a pure $B^0(\overline B^0)$ at $t=0$
decays to final state $f$ at time $t$. The factor 2 in
(\ref{eq:4Sinc}) accounts for the fact that the final state $f$
can come from either side of the $\Upsilon(4S)$ decay.
These functions are
related to the branching fractions discussed above by
\begin{equation}
Br(\Upsilon(4S)\to f_1f_2) =
\int_0^\infty \int_0^\infty
\Gamma_{\Upsilon(4S)\to f_1f_2}(t_1,t_2)\, dt_1dt_2\; ,
\end{equation}
\begin{equation}
Br(B^0(\overline B^0)\to f) =
\int_0^\infty \Gamma_{B^0(\overline B^0)\to f}(t)\,dt\;.
\label{eq:BBgamma}
\end{equation}
The relation (\ref{eq:4Sinc}) is a consequence of quantum mechanics
and conservation of probability, and is valid even when $CPT$ is violated.
The same relation holds for $e^+e^-\to V \to K^0\overline K^0$,
$D^0\overline D^0$, and $B_s \overline B_s$, where $V$ is a vector state or
a virtual photon.
Let us define the inclusive quantity on $\Upsilon(4S)$ by
\begin{equation}
N(\Upsilon(4S)\to f) \equiv
\int_0^\infty\Gamma_{\Upsilon(4S)\to f}(t)\, dt\; ,
\label{eq:4SBr}
\end{equation}
which is the expected number of occurrences of final state $f$ per
$\Upsilon(4S)\to B^0\overline B^0$ decay.
The normalizations are given by (see Appendix)
\begin{eqnarray}
\sum_{f_1,f_2} Br(\Upsilon(4S)\to f_1f_2) &=& 1\;, \label{eq:Upsnorm}\\
\sum_f Br(B^0(\overline B^0)\to f) &=& 1\;, \label{eq:BBnorm} \\
\sum_f N(\Upsilon(4S)\to f) &=& 2 \;. \label{eq:Ups1norm}
\end{eqnarray}
The last normalization reflects the fact that there are two
$B$ mesons per $\Upsilon(4S)$ decay.
From (\ref{eq:BBlep}),(\ref{eq:4Sinc}),(\ref{eq:BBgamma}), and
(\ref{eq:4SBr}), one then obtains the inclusive lepton yields on
$\Upsilon(4S)$:
\begin{eqnarray}
N(\Upsilon(4S)\to \ell^+) &=& {b_{sl}\over 1-y^2}
\left[ 1 + \left( {|p|^2\over|q|^2}-1\right)\chi\right] \;,
\nonumber \\
N(\Upsilon(4S)\to \ell^-) &=& {b_{sl}\over 1-y^2}
\left[ 1 + \left( {|q|^2\over|p|^2}-1\right)\chi\right]\; .
\label{eq:Ups1lep}
\end{eqnarray}
One sees that when $|p|^2\not= |q|^2$ there is an asymmetry. In practice,
however, leptons from $B^\pm$ are difficult to reject, and the resulting
dilution needs to be taken into account.
\section{Experimental Sensitivities}
We will now estimate the sensitivities to $\delta$ of single and
di-lepton asymmetry measurements. We assume that the lepton detection
efficiency $\epsilon_\ell$ for each lepton
is the same in the single and di-lepton cases,
and that they are uncorrelated in the latter.
Also we assume $\delta\ll 1$ for the expressions of
asymmetries below. In estimating statistics, we further assume
$\gamma_a\sim\gamma_b$ (or equivalently
$y\ll 1$).
If we have $N_0$ $\Upsilon(4S)\to B^0\overline B^0$ decays, then from
(\ref{eq:Upslep}) the total number of same sign di-lepton events
detected is $N_0\, b_{sl}^2\,\chi\,\epsilon_\ell^2$.
Using (\ref{eq:Asymll}), the error in
$\delta$ is then
\begin{equation}
\sigma_{\delta}(\ell\ell) = {1\over2}
{1\over\sqrt{N_0\, b_{sl}^2\,\chi\,\epsilon_\ell^2}}\;.
\end{equation}
The single lepton asymmetry on $\Upsilon(4S)$
can be obtained from (\ref{eq:Ups1lep}):
\begin{equation}
A_\ell(\Upsilon(4S)) \equiv D\;
{N(\Upsilon(4S)\to\ell^+) - N(\Upsilon(4S)\to\ell^-) \over
N(\Upsilon(4S)\to\ell^+) + N(\Upsilon(4S)\to\ell^-) }
= 2 D\,\chi\,\delta\;,
\end{equation}
where $D$ is the dilution factor due to charged
$B$ mesons, and is equal to the fraction of leptons coming from neutral $B$
mesons. Other dilution effects such as those due to misidentified
leptons or leptons from charmed hadrons could also be absorbed into $D$.
Assuming that there are as many leptons from
charged $B$'s as from neutral $B$'s, we take $D=1/2$.
The total number of single lepton events detected is
$N_0\, 4 b_{sl}\,\epsilon_\ell$; thus the sensitivity to $\delta$
of the single lepton measurement is
\begin{equation}
\sigma_{\delta}(\ell) = {1\over\chi}
{1\over\sqrt{N_0\, 4 b_{sl}\,\epsilon_\ell}}\;.
\end{equation}
The ratio of sensitivities of single to di-lepton measurements is then
\begin{equation}
{\sigma_{\delta}(\ell)\over\sigma_{\delta}(\ell\ell)} =
\sqrt{{b_{sl}\,\epsilon_\ell\over\chi}}\; .
\end{equation}
We see that the larger the mixing, the more advantageous the single
lepton method becomes. This may be counter-intuitive, but can be
understood as follows: as $\chi$ increases, the
statistics goes up linearly for
the di-lepton sample while its asymmetry stays the same.
For the single lepton sample,
the statistics stays the same while the asymmetry goes
up linearly, which is equivalent to statistics increasing quadratically for
a fixed asymmetry.
A typical value for $\epsilon_\ell$ is 0.5. Together with the experimental
values for $b_{sl}$ and $\chi$, the ratio above is 0.78. Namely, the single
lepton measurement has a sensitivity comparable to or better than
that of the di-lepton
measurement. Note also that the
single and di-lepton datasets are largely statistically independent
(only about 10\%\ of the single lepton events are also in the di-lepton
dataset). The two measurements can thus be combined to improve
overall sensitivity. For example, the current CLEO data corresponds
to $N_0\sim 2\times10^6$. This
gives $\sigma_{\delta}(\ell) = 0.6\%$ and
$\sigma_{\delta}(\ell\ell) = 0.8\%$, with a combined sensitivity of
0.5\%, which is already in the range relevant to
standard model predictions.
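The numbers quoted in this section follow from a few lines of arithmetic; here is a quick numerical sketch using the central values given in the text (the combination assumes the two measurements are independent):

```python
import math

b_sl = 2 * 0.1043   # normalized semileptonic branching fraction
chi = 0.175         # mixing parameter
eps_l = 0.5         # typical lepton detection efficiency
N0 = 2e6            # Upsilon(4S) -> B0 B0bar decays (CLEO-scale sample)

ratio = math.sqrt(b_sl * eps_l / chi)                     # sigma(l)/sigma(ll), ~0.77
sig_ll = 0.5 / math.sqrt(N0 * b_sl**2 * chi * eps_l**2)   # ~0.008
sig_l = (1 / chi) / math.sqrt(N0 * 4 * b_sl * eps_l)      # ~0.006
sig_comb = sig_l * sig_ll / math.sqrt(sig_l**2 + sig_ll**2)  # ~0.005
```

With these central values the ratio comes out near 0.77; the 0.78 quoted above presumably reflects slightly different input values.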
When $B^0$'s and $\overline B^0$'s are generated incoherently (e.g. on
$Z^0$ or in $p\bar p$ collisions), one cannot perform the correlated
di-lepton analysis. However, one
can still perform single lepton asymmetry measurements.
The discussion below applies also to $B_s$ mesons.
When an equal number of $B^0$ and $\overline B^0$ are generated, the decay
time distribution of $\ell^+$ can be compared to that of $\ell^-$
without tagging the flavor of the parent $B$ meson. Using
(\ref{eq:BBlepdist}),
the time dependent single lepton asymmetry is then
\begin{equation}
A_\ell(t)\equiv D\;{\Gamma_{B^0,\overline B^0\to\ell^+}(t) -
\Gamma_{B^0,\overline B^0\to\ell^-}(t) \over
\Gamma_{B^0,\overline B^0\to\ell^+}(t) +
\Gamma_{B^0,\overline B^0\to\ell^-}(t)} =
D\, \delta \left(1 - {\cos\delta mt \over\cosh\gamma_- t}\right)\;,
\label{eq:ABBleptim}
\end{equation}
where $D$ is the dilution factor due to $B^\pm$'s as before.
In this case, $D$ could in principle
be a function of decay time (e.g., if the lifetimes of neutral and
charged $B$ mesons are different). Because of the relation (\ref{eq:4Sinc}),
the time dependent asymmetry for the single lepton measurement on
$\Upsilon(4S)$ is also given by (\ref{eq:ABBleptim}).
We see that the asymmetry starts out as zero at $t=0$ and reaches the
first maximum at around $\delta m t\sim\pi$ (about 4 times the $b$
lifetime). If we simply count the number of leptons without
measuring the decay time, then the asymmetry
becomes the same as that on $\Upsilon(4S)$:
\begin{equation}
A_\ell \equiv D\;
{Br(B^0,\overline B^0\to\ell^+) - Br(B^0,\overline B^0\to\ell^-) \over
Br(B^0,\overline B^0\to\ell^+) + Br(B^0,\overline B^0\to\ell^-)} =
2 D\, \chi\,\delta\; ,
\end{equation}
where we have used (\ref{eq:BBlep}). The factor $D$ again includes
the dilution due to $B^\pm$'s.
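As a quick numerical sketch (illustrative, with the overall factor $D\,\delta$ scaled out), the shape of the time dependent asymmetry (\ref{eq:ABBleptim}) can be checked directly:

```python
import math

def a_shape(dmt, gmt=0.0):
    """A_l(t)/(D*delta): 1 - cos(dm*t)/cosh(gamma_-*t), cf. eq. (ABBleptim)."""
    return 1.0 - math.cos(dmt) / math.cosh(gmt)

# asymmetry vanishes at t = 0
assert a_shape(0.0) == 0.0
# first maximum near dm*t ~ pi (for gamma_- ~ 0), where the shape reaches 2
assert abs(a_shape(math.pi) - 2.0) < 1e-12
```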
This observable (with $D=1$) was first proposed by Hagelin \cite{Hagelin} as a
measure of observability of $CP$ violation in $B^0-\overline B^0$ mixing, since it
contains both the mixing parameter $\chi$ and the $CP$ violation parameter
$\delta$.
The currently available statistics on $Z^0$ is
smaller than that of CLEO; the $p\bar p$
collider at Fermilab, however, may be able to obtain a better sensitivity.
In actual data analysis,
the decay time is often required to
be larger than a given threshold in order to
reject non-$B$ background. Such a requirement, however, should not
sacrifice sensitivity significantly since the asymmetry
at short decay time is small as seen in (\ref{eq:ABBleptim}).
Also, there is a possibility that vertexing allows
separation of neutral $B$'s from charged $B$'s by counting the total
charge emerging from a given vertex, which would substantially
improve sensitivity.
In addition, flavor tagging by leptons, jet charge,
or associated pion production
\cite{CDFtag} may allow for measurement of the flavor-tagged
asymmetry (\ref{eq:Aflvtag}).
The vertexing technique may become useful also at
asymmetric $B$-factories.
In conclusion, we have studied the sensitivity of single lepton $CP$
asymmetry relative to that of the traditional di-lepton asymmetry
on $\Upsilon(4S)$. We find that the single lepton sensitivity is
comparable to or better than that of the di-lepton analysis.
The achievable sensitivities on $\Upsilon(4S)$ and in $p\bar p$ collisions
with currently available datasets
are already close to the predictions of the standard model.
The single lepton method also holds promise for
measurement of the leptonic $CP$ asymmetry of $B_s$.
In the near future (namely, at the $B$-factories, HERA-$B$, and
the upgraded $p\bar p$ collider), it is quite
possible that
$CP$ violation will be observed in the semileptonic modes.
\vspace{1cm}
\noindent {\Large\bf Acknowledgements}
\vspace{0.5cm}
\noindent I would like to thank Sheldon Glashow for pointing out the
asymmetries in the single lepton counting, and
Isi Dunietz for useful discussions.
This work was supported by the Department of Energy
Grant DE-FG02-91ER40654.
\vspace{1cm}
\noindent {\Large\bf Appendix} \eqnapp%
\vspace{0.5cm}
\noindent This section is based on
quantum mechanics, conservation of probability, and the
Weisskopf-Wigner formalism \cite{Wigner+}. We will not assume
$CPT$ invariance unless otherwise stated. No further approximations
are made.
Solving (\ref{eq:Babdef}) for $B^0$ and $\overline B^0$, we obtain
\begin{equation}
\left\{
\begin{array}{rcl}
B^0 &=& c\, ( q' B_a + q B_b ) \\
\overline B^0 &=& c\, ( p' B_a - p B_b ) \\
\end{array}
\right. ,
\end{equation}
with
\begin{equation}
c \equiv {1\over p'q + pq'}\;.
\end{equation}
The time evolution of the mass eigenstates is given by
\begin{equation}
B_a \to e^{-(\gamma_a/2 + i m_a)t} B_a\; , \qquad
B_b \to e^{-(\gamma_b/2 + i m_b)t} B_b\; .
\end{equation}
If we have a pure $B^0$ or $\overline B^0$ at $t=0$,
the decay time distributions to a final state $f$ are then
\begin{eqnarray}
\Gamma_{B^0\to f}(t) &=& |c|^2
\left[ |q'a_f|^2 e^{-\gamma_a t} +
|q b_f|^2 e^{-\gamma_b t} +
2 \Re((q'a_f)^*(q b_f) e^{-(\gamma_+ - i\delta m)t}) \right] ,
\nonumber \\
\Gamma_{\overline B^0\to f}(t) &=& |c|^2
\left[ |p'a_f|^2 e^{-\gamma_a t} +
|p b_f|^2 e^{-\gamma_b t} -
2 \Re((p'a_f)^*(p b_f) e^{-(\gamma_+ - i\delta m)t}) \right] ,
\label{eq:BBdist}
\end{eqnarray}
where
\begin{equation}
a_f \equiv Amp(B_a\to f)\;,\quad b_f \equiv Amp(B_b\to f)\;.
\end{equation}
The parameters $\gamma_\pm$ and $\delta m$ are defined in the main text.
Note that we would have $|c|^2 = 1$ if $CPT$ and $CP$ were conserved.
The normalization of the decay amplitudes is such that
\begin{equation}
\sum_f {|a_f|^2\over\gamma_a} = 1\;, \quad
\sum_f {|b_f|^2\over\gamma_b} = 1\; .
\label{eq:ampnorm}
\end{equation}
Integrating (\ref{eq:BBdist}) over time gives the
fraction of a pure $B^0$ or $\overline B^0$
at $t=0$ eventually decaying to a final state $f$:
\begin{eqnarray}
Br(B^0\to f) &=& |c|^2\left[ |q'|^2{|a_f|^2\over\gamma_a} +
|q |^2{|b_f|^2\over\gamma_b} +
2\Re\left(q'^*q{a_f^*b_f\,\over\gamma_+ - i\delta m}\right)\right] ,
\nonumber \\
Br(\overline B^0\to f) &=& |c|^2\left[ |p'|^2{|a_f|^2\over\gamma_a} +
|p |^2{|b_f|^2\over\gamma_b} -
2\Re\left(p'^*p{a_f^*b_f\,\over\gamma_+ - i\delta m}\right)\right] .
\end{eqnarray}
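The time integrals behind these branching fractions are elementary; for instance the interference term involves $\int_0^\infty e^{-\gamma_+ t}\cos\delta m\,t\;dt = \gamma_+/(\gamma_+^2+\delta m^2)$, the real part of $1/(\gamma_+ - i\delta m)$. A quick symbolic check (a sketch using SymPy):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
gp, dm = sp.symbols('gamma_p delta_m', positive=True)

# real part of the interference integral int_0^oo exp(-(gp - i*dm) t) dt
res = sp.integrate(sp.exp(-gp * t) * sp.cos(dm * t), (t, 0, sp.oo))
assert sp.simplify(res - gp / (gp**2 + dm**2)) == 0
```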
The normalization (\ref{eq:BBnorm})
can be obtained by summing the above equations over $f$ and using
the Bell-Steinberger
relation\cite{Bell-Stein}
\begin{equation}
{\sum_f a_f^*b_f\over\gamma_+ - i\delta m} =
\langle B_a | B_b \rangle\;\; ( = p'p^* - q'q^* )\; ,
\label{eq:Bell-Stein}
\end{equation}
which expresses the conservation of probability.
On $\Upsilon(4S)$, the $B^0\overline B^0$ pair is created in the coherent
$L=1$ state
\begin{equation}
\begin{array}{l}
{\displaystyle {1\over\sqrt2}}
\left( |\overline B^0 (1)\rangle |B^0 (2)\rangle -
|B^0 (1)\rangle |\overline B^0 (2)\rangle \right) \\
\qquad\qquad = {\displaystyle {c\over\sqrt2}}
\left( |B_a (1)\rangle |B_b (2)\rangle -
|B_b (1)\rangle |B_a (2)\rangle \right)\; ,
\end{array}
\end{equation}
where the numbers 1 and 2 distinguish the sides; namely, they
may be distinguished by the direction of the $B$ meson in the $\Upsilon(4S)$
C.M. system: $\hat k$ or $-\hat k$. Then the probability density
that the side 1 decays to final state $f_1$ at time $t_1$ and
the side 2 decays to $f_2$ at time $t_2$ is given by
\begin{eqnarray}
\Gamma_{\Upsilon(4S)\to f_1f_2}(t_1,t_2) &=& {|c|^2\over2}
\left[ e^{-\gamma_a t_1 -\gamma_b t_2} |a_{f_1}b_{f_2}|^2 +
e^{-\gamma_b t_1 -\gamma_a t_2} |b_{f_1}a_{f_2}|^2
\right.\nonumber\\
&&\left. \; -
2 \Re\left( e^{-(\gamma_+ - i \delta m) t_1}
e^{-(\gamma_+ + i \delta m) t_2}
(a_{f_1} b_{f_2})^* (b_{f_1}a_{f_2}) \right)\right],
\label{eq:Upsgen}
\end{eqnarray}
or equivalently,
\begin{eqnarray}
\Gamma_{\Upsilon(4S)\to f_1f_2}(t_+,t_-) &=& \label{eq:Upsgenpm} \\
&& \hspace{-3.5cm} {|c|^2\over4}e^{-\gamma_+ t_+}
\left[ e^{-\gamma_- t_-} |a_{f_1}b_{f_2}|^2 +
e^{ \gamma_- t_-} |b_{f_1}a_{f_2}|^2 -
2 \Re\left(
(a_{f_1} b_{f_2})^* (b_{f_1}a_{f_2}) e^{i \delta m t_-}
\right)\right], \nonumber
\end{eqnarray}
with
\begin{equation}
t_\pm \equiv t_1 \pm t_2 \;,
\end{equation}
and we have used the relation $ 2 dt_1 dt_2 = dt_+ dt_- $.
Integrating (\ref{eq:Upsgen}) over $t_2$ and summing over all
possible final states $f_2$, we obtain
\begin{equation}
\begin{array}{l} {\displaystyle
\Gamma_{\Upsilon(4S)\to f_1}(t_1) \equiv
2 \sum_{f_2} \int_0^\infty \Gamma_{\Upsilon(4S)\to f_1f_2}(t_1,t_2)dt_2} \\
{\displaystyle
= |c|^2 \left[ |a_{f_1}|^2 e^{-\gamma_a t_1} +
|b_{f_1}|^2 e^{-\gamma_b t_1} -
2\Re\left( e^{-(\gamma_+ - i \delta m) t_1} a_{f_1}^* b_{f_1}
{\sum_{f_2}b_{f_2}^* a_{f_2}\over \gamma_+ + i \delta m}
\right)\right] ,}
\end{array}
\end{equation}
where we have used (\ref{eq:ampnorm}), and the factor 2 arises from the fact
that the given final state can come from either side.
This together with (\ref{eq:BBdist}) and the Bell-Steinberger relation
(\ref{eq:Bell-Stein}) establishes the general rule (\ref{eq:4Sinc}).
The normalizations (\ref{eq:Upsnorm}) and (\ref{eq:Ups1norm}) then
follow from (\ref{eq:4Sinc}) and (\ref{eq:BBnorm}).
Expressions for semileptonic decays are obtained by the substitutions
\begin{equation}
\begin{array}{ll}
a_{\ell^+} = p a_0\;, & b_{\ell^+} = p' a_0\; , \\
a_{\ell^-} = q \bar a_0\;, & b_{\ell^-} = -q' \bar a_0\; ,
\end{array}
\label{eq:lepamps}
\end{equation}
where we have used the assumption $\Delta B = \Delta Q$, and
\begin{equation}
a_0 \equiv Amp(B^0\to\ell^+)\;,\quad
\bar a_0 \equiv Amp(\bar B^0\to\ell^-)\; .
\end{equation}
Namely, (\ref{eq:BBdist}) gives
\begin{eqnarray}
\Gamma_{B^0\to\ell^+}(t) &=& |c|^2 |a_0|^2 e^{-\gamma_+ t}
\left[ |pq'|^2 e^{-\gamma_- t} + |p'q|^2 e^{\gamma_- t}
+ 2\Re\left((pq')^*(p'q) e^{i\delta m t}\right)\right],
\nonumber \\
\Gamma_{\overline B^0\to\ell^-}(t) &=& |c|^2 |\bar a_0|^2 e^{-\gamma_+ t}
\left[ |p'q|^2 e^{-\gamma_- t} + |pq'|^2 e^{\gamma_- t}
+ 2\Re\left((p'q)^*(pq') e^{i\delta m t}\right)\right],
\nonumber \\
\Gamma_{\overline B^0\to\ell^+}(t) &=& 2|c|^2 |a_0|^2 |pp'|^2 e^{-\gamma_+ t}
\left[ \cosh\gamma_- t - \cos \delta m t \right],
\nonumber \\
\Gamma_{B^0\to\ell^-}(t) &=& 2|c|^2 |\bar a_0|^2 |qq'|^2 e^{-\gamma_+ t}
\left[ \cosh\gamma_- t - \cos \delta m t \right] .
\end{eqnarray}
The $CPT$ relations (\ref{eq:CPTpq}) and (\ref{eq:CPTamp}) then lead to
(\ref{eq:BBlepdist}).
On $\Upsilon(4S)$, (\ref{eq:Upsgenpm}) and (\ref{eq:lepamps}) give the
di-lepton decay distributions:
\begin{eqnarray}
\Gamma_{\Upsilon(4S)\to\ell^+\ell^-}(t_+,t_-) &=& \nonumber \\
&&\hspace{-3cm} {|c|^2\over4} |\bar a_0 a_0|^2 e^{-\gamma_+ t_+}
\left[ |pq'|^2 e^{-\gamma_- t_-} + |p'q|^2 e^{\gamma_- t_-}
+ 2\Re\left((pq')^*(p'q) e^{i\delta m t_-}\right)\right],
\nonumber \\
\Gamma_{\Upsilon(4S)\to\ell^-\ell^+}(t_+,t_-) &=& \nonumber \\
&&\hspace{-3cm} {|c|^2\over4} |\bar a_0 a_0|^2 e^{-\gamma_+ t_+}
\left[ |p'q|^2 e^{-\gamma_- t_-} + |pq'|^2 e^{\gamma_- t_-}
+ 2\Re\left((p'q)^*(pq') e^{i\delta m t_-}\right)\right],
\nonumber \\
\Gamma_{\Upsilon(4S)\to\ell^+\ell^+}(t_+,t_-) &=&
{|c|^2\over2} |a_0|^4 |pp'|^2 e^{-\gamma_+ t_+}
\left[ \cosh\gamma_- t_- - \cos \delta m t_- \right],
\nonumber \\
\Gamma_{\Upsilon(4S)\to\ell^-\ell^-}(t_+,t_-) &=&
{|c|^2\over2} |\bar a_0|^4 |qq'|^2 e^{-\gamma_+ t_+}
\left[ \cosh\gamma_- t_- - \cos \delta m t_- \right].
\end{eqnarray}
Note that the opposite-sign lepton rates satisfy the relation
\begin{equation}
\Gamma_{\Upsilon(4S)\to\ell^+\ell^-}(t_+,t_-) =
\Gamma_{\Upsilon(4S)\to\ell^-\ell^+}(t_+,-t_-) \;,
\end{equation}
which corresponds to re-labeling the final states.
Under $CPT$ symmetry this simplifies to
\begin{eqnarray}
\Gamma_{\Upsilon(4S)\to\ell^+\ell^-}(t_+,t_-) &=&
\Gamma_{\Upsilon(4S)\to\ell^-\ell^+}(t_+,t_-) \nonumber \\
&=& {A_0^4\over8} e^{-\gamma_+ t_+}
\left[ \cosh\gamma_- t_- + \cos \delta m t_- \right],
\nonumber \\
\Gamma_{\Upsilon(4S)\to\ell^+\ell^+}(t_+,t_-) &=&
{|p|^2\over|q|^2}{A_0^4\over8} e^{-\gamma_+ t_+}
\left[ \cosh\gamma_- t_- - \cos \delta m t_- \right],
\\
\Gamma_{\Upsilon(4S)\to\ell^-\ell^-}(t_+,t_-) &=&
{|q|^2\over|p|^2}{A_0^4\over8} e^{-\gamma_+ t_+}
\left[ \cosh\gamma_- t_- - \cos \delta m t_- \right] .
\nonumber
\end{eqnarray}
Integrating over $t_+$ from $|t_-|$ to $\infty$ gives (\ref{eq:Upstmi}).
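This integration over $t_+$ is the elementary step $\int_{|t_-|}^\infty e^{-\gamma_+ t_+}\,dt_+ = e^{-\gamma_+|t_-|}/\gamma_+$, which supplies the factor $1/\gamma_+$ in (\ref{eq:Upstmi}); a quick symbolic check (a sketch, with `tm` standing for $|t_-|$):

```python
import sympy as sp

tp = sp.symbols('t_plus', positive=True)
tm, gp = sp.symbols('t_minus_abs gamma_p', positive=True)  # tm stands for |t_-|

res = sp.integrate(sp.exp(-gp * tp), (tp, tm, sp.oo))
assert sp.simplify(res - sp.exp(-gp * tm) / gp) == 0
```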
Note that $CPT$ invariance ensures that the decay distributions are
symmetric under sign change of $t_-$. Thus, such an asymmetry (for example,
if the $\ell^+$ side tends to decay earlier than the $\ell^-$ side
in the $\ell^+\ell^-$ sample) is a
signature of $CPT$ violation \cite{CPT-B}.
This is in contrast to the case of
lepton tagged $CP$ eigenstates (e.g., $\Psi K_s$ \cite{PsiKs,NirQuinn})
where the asymmetry
with respect to $t_-$ is possible under $CPT$ invariance and
signals $CP$ violation.
\vspace{1cm}

% arXiv:cond-mat/9703035
\section{Introduction}
Physical properties of the vortex structure in high temperature
superconductors are strongly dependent on the values of some external
parameters, such as magnetic field, anisotropy, and quenched disorder.\cite
{blatter} Some of these properties, such as the existence of a first
order melting transition in clean samples,\cite{1orden} originate in
the interactions between vortices. Some others, however, are rather
independent of the details of the interaction, and are related to the
topological configuration of the vortex structure. It is interesting then to
analyze in the simplest way the origin of those properties that depend only
on the geometrical configuration of vortices, and not on their interaction.
In this paper I concentrate on the behavior of the linear resistivity $\rho$
($\rho=\lim_{I\rightarrow 0} V/I$) as a
function of temperature, which has been widely used experimentally to
unravel the properties of the vortex structure,\cite{blatter} both
perpendicular and parallel to the applied magnetic field, in a model that
disregards non-local interactions between vortices. The results of numerical
simulations on this model compare well with
experimental results obtained in Y$_1$Ba$_2$Cu$_3$O$_7$ (YBCO, rather low
anisotropy) and Bi$_2$Sr$_2$Ca$_1$Cu$_2$O$_8$ (BSCCO, high anisotropy)
samples, as long as we consider zones of the phase diagram of the materials
in which a first order transition is not observed. As stated above, the
first order transition is generated by the interaction of vortices, and
cannot be expected to occur in a model with only local interactions.
I describe the model in the next section, and the results in section III.
The case of very high anisotropies deserves special attention and the
limit of two-dimensional systems is discussed in section IV. The relevance
of these results to real materials (in which vortices interact at finite
-usually large- distances) is discussed in section V. Finally, in section
VI, I summarize and conclude.
\section{Model}
Vortices are modeled at different levels of detail when performing numerical
simulations. A quite precise description is the Ginzburg-Landau theory,
formulated in terms of the superconducting order parameter $\Psi \equiv
\left| \Psi \right| \exp \left( i\theta \right) $.\cite{gl} In this context,
when an external magnetic field or thermal fluctuations are introduced,
vortices appear in the system as line-singularities around which $\oint
\theta \left( {\bf r}\right) d{\bf r}=2\pi $. A usual simplification which
is appropriate in high temperature superconductors is to consider the
modulus $\left| \Psi \right| $ of the order parameter as a constant, and
keep only the phases $\theta \left( {\bf r}\right) $ as the dynamic
variables. This leads to the study of the uniformly frustrated $XY$ model,%
\cite{xydegl} which has been extensively studied, both because of its
intrinsic properties and because of its applications to superconducting
systems. As first shown by Villain,\cite{villain} the most important degrees
of freedom of this model can be identified with the positions of the vortex
in the system and an alternative description with these positions as the
fundamental dynamical variables can be obtained. The original structure of
the Ginzburg-Landau free energy is reflected at this level in the particular
form of the interactions between vortices,\cite{carneiro,korshunov} which are cut off
at distances of the order of the penetration lengths $\lambda _{ab}$, $%
\lambda _c$ (the subscripts refer to the crystalline directions). These
distances are in general large compared with the intervortex distance, and
so the energy of the system has contributions coming from interaction
between vortices that are far away from each other. I will consider here the
case in which $\lambda $ is very small or, in other words, when the
non-local terms of the interaction are dropped. In this case, the energy of
a given configuration is simply proportional to the total length of vortices
in the system. The investigation of the properties of the model in this case
is important since, as we shall see, it gives insight into the behavior of the
system with the full interaction, and shows that the origin
of many properties of the vortex lattice lies in the topological structure
of vortex lines, and not in the exact nature of the interactions.
I will consider vortex segments lying on the bonds of a cubic mesh with
periodic boundary conditions. Formally, the Hamiltonian of the system is
\begin{equation}
H_0=\sum_{i,\mu }{}^{\prime }\varepsilon _{i,\mu }\left( n_{i,\mu }\right)
^2, \label{hamil}
\end{equation}
where $n_{i,\mu }$ are integer variables defined on the nodes $i$ of the
lattice, with direction $\mu $ ($\mu =$ {\it a, b, c}). The prime in the sum
symbol indicates that only those configurations with zero divergence of the
vector field $n$ have to be considered. The constants $\varepsilon _{i,\mu }$
are the energies of vortex segments at the positions $i,\mu $. We allow for
the existence of anisotropy, defining a parameter $\eta $ as $\eta \equiv
\left\langle \varepsilon _{i,c}\right\rangle /\left\langle \varepsilon
_{i,ab}\right\rangle ,$ with $\left\langle ...\right\rangle $ indicating
averaged values throughout the lattice. Disorder is introduced by allowing
the values of $\varepsilon _{i,\mu }$ to be different in different points of
the sample, fluctuating around the mean value. A disorder parameter (the
same for the three spatial directions) is defined as $D=\left( \varepsilon
_{i,\mu }^{\max }-\varepsilon _{i,\mu }^{\min }\right) /\left( \varepsilon
_{i,\mu }^{\max }+\varepsilon _{i,\mu }^{\min }\right) ,$ with $\varepsilon
_{i,\mu }^{\max }$ and $\varepsilon _{i,\mu }^{\min }$ being the maximum and
minimum value of the energy of a vortex segment. The distribution between $%
\varepsilon _{i,\mu }^{\min }$ and $\varepsilon _{i,\mu }^{\max }$ is taken
flat.
As the initial configuration of the system, a set of straight vortex lines
directed along the {\it c} direction and uniformly distributed on the {\it ab%
} plane is considered. The number of vortices divided by the number of elementary
plaquettes of the system perpendicular to the
{\it c} direction defines the dimensionless magnetic field $H$.
The Monte Carlo process for updating the configuration
consists in sequentially proposing the creation of elementary square loops
in all plaquettes of the lattice and along the three possible directions. The
acceptance of the new configuration is decided using a standard
Metropolis algorithm. The initial configuration and the Monte Carlo procedure
guarantee that at any moment the vortex configuration has zero divergence.
In order to calculate resistivities, both parallel and perpendicular to the
applied field ($\rho _c$ and $\rho _{ab}$, respectively), we have to include
a small external current $I$. This is done by adding a term to the
Hamiltonian (\ref{hamil}) that changes the energy of loops oriented
perpendicularly to the current. One orientation increases its energy by $+I,$
and the other one decreases it by the same amount. The value of the
external current $I$ is chosen in such a way that the imbalance between
right- and left-handed loops is never higher than 1/100 of the energy of the
loop. In this regime of low currents the response is linear in the applied
current, and resistivities do not depend on the exact value of $I$. The
numerical results for different values of the anisotropy follow.
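The elementary-loop update with the current bias can be sketched as follows (illustrative Python for a single plaquette; the data layout and function name are my own assumptions, not taken from the original code). Adding a whole loop changes the four bonds of a plaquette at once, which is what keeps the vortex field divergence-free:

```python
import math
import random

def metropolis_loop_step(n, eps, T, I, plaquette, sign, rng):
    """Propose adding an elementary loop with handedness `sign` (+1 or -1).

    `plaquette` lists the four bonds of one plaquette, each with the
    direction (+1/-1) in which the loop traverses it; adding the whole
    loop keeps the vortex field divergence-free at every node. The
    external current I biases the two handednesses by -/+ I."""
    dE = -sign * I  # one loop orientation is favored, the other penalized
    for bond, direction in plaquette:
        old = n[bond]
        new = old + sign * direction
        dE += eps[bond] * (new * new - old * old)
    if dE <= 0.0 or rng.random() < math.exp(-dE / T):
        for bond, direction in plaquette:
            n[bond] += sign * direction
        return True   # accepted
    return False      # rejected

# One square plaquette: right/top bonds traversed forward, left/bottom backward.
plaq = [("r", +1), ("t", +1), ("l", -1), ("b", -1)]
n = {"r": -1, "t": -1, "l": 1, "b": 1}   # a configuration that a +1 loop annihilates
eps = {b: 1.0 for b in n}
rng = random.Random(0)
accepted = metropolis_loop_step(n, eps, 0.5, 0.0, plaq, +1, rng)
```

Here the proposed loop lowers the energy by $4\varepsilon _0$, so it is always accepted; at very low $T$ a loop created on an empty plaquette ($\Delta E=+4\varepsilon _0$) is essentially always rejected.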
\section{Results for three dimensional samples}
I will describe the results of the numerical simulations performed in
systems with progressively higher anisotropies. Three different regions are
distinguishable.
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f1.eps}}
\medskip
\caption{(a) Circles: resistivities along the {\it ab} and {\it c}
directions using the Hamiltonian (\ref{hamil}) for an isotropic lattice of
size $40\times 40\times 8$ with $D=0$ and $H=1/4$. The dotted line is a
fitting to a thermally activated movement of single vortices of the form $%
\rho _{ab}\sim \frac 1T\exp \left( -2\varepsilon _0/k_BT\right) $. Stars:
percolation probability as defined in the text; note that this magnitude is
(different from) zero only if $\rho _c$ is (different from) zero. (b) The
resistivity values plotted to show thermally activated behaviors.}
\label{rabrcdet}
\end{figure}
For low anisotropies, typical results for $\rho _c$ and $\rho _{ab}$ as a
function of temperature are shown in Fig. \ref{rabrcdet} for a sample of
size $L_a\times L_b\times L_c=40\times 40\times 8$ with $D=0$ (all $%
\varepsilon _{i,\mu }$ equal to $\varepsilon _0$) and $H=1/4$. As we see the
{\it ab} dissipation has a thermally activated behavior (dotted line
fitting) as long as the {\it c} axis dissipation is zero ($T<T_P$). The
activation energy of this process is $2\varepsilon _0,$ and corresponds to
the energy necessary to create a double kink on a straight vortex, which is
the first step in the process of movement. So in this region the dynamics of
the system is that of a set of individual vortices thermally wandering within
the sample. At a certain temperature $T_P,$ the {\it c} axis dissipation $%
\rho _c$ becomes finite. The origin of this dissipation is related to a
percolation transition of vortices occurring in the system, as studied
before in the three dimensional Josephson junction array model.\cite{perco}
For $T<T_P$ the vortices are practically independent, and a {\it c} axis
directed current exerts no net force on them, thus generating no dissipation
(in the linear regime). For $T>T_P$ the vortex lines are so heavily
interconnected that vortex paths running (on average) in the {\it ab}
direction appear. An external current applied along the {\it c} direction,
being perpendicular to these paths, exerts a net force on them and generates
dissipation. In Fig. \ref{rabrcdet}(a) we also see the value of the
percolation probability $P$, defined as the fraction of time in which at
least one percolation path is found in the system. It is clearly observed
that the {\it c} axis dissipation is zero if $P$ is zero. The transition in
the $P\left( T\right) $ curve between 0 and 1 becomes sharp in the limit $%
L_{ab}\rightarrow \infty $.\cite{perco}
An interesting effect of the percolation transition on the values of $\rho
_{ab}$ is observed. When $\rho _c$ starts to be different from zero, the
values of $\rho _{ab}$ deviate from the prediction of an activated behavior
of individual vortices, and become smaller. This is an indication that for
temperatures greater than $T_P$ the percolation of the vortex structure and
the existence of many thermal excitations interfere with the thermally
activated behavior of single vortices. In fact, for $T>T_P$ the concept of
an isolated vortex in the system loses its meaning, because we have a
strongly entangled configuration of vortex lines. This effect of the
percolation transition on the $\rho _{ab}\left( T\right) $ curve causes it
to develop a typical shoulder that has been experimentally observed in
measurements on YBCO samples,\cite{hombroexp,fm} although its origin had not
been clearly established.
In the case $D=0,$ when increasing anisotropy, the {\it ab} plane activation
energy goes to zero as $1/\eta $ because this is mainly determined by the
energy necessary to create a double kink, which is precisely $2/\eta $. Also
the {\it c} axis transition is governed by an energy scale of the order of $%
\sim 1/\eta ,$ because this temperature is mainly determined by the typical
energy of the interlayer excitations (which goes as $\sim 1/\eta $)$.$ On
heating, and from a practical point of view, the {\it c} axis transition
occurs at a temperature at which $\rho _{ab}$ is clearly different from zero
for all anisotropies.
The physics is richer in the case of samples with defects ($D\neq 0$). In
this case, when anisotropy is increased, the activation energy for the {\it %
ab} dissipation tends to a value of the order of $D^{1/2},$ because the
energy of a vortex segment piercing the {\it ab} plane is not constant in
this case but has a dispersion of the order of $D^{1/2},$ and this is the
typical energy barrier that has to be overcome when a vortex segment wanders
within the {\it ab} planes. Anisotropy decreases the percolation temperature
$T_P$ by a factor $\sim 1/\eta $, but has a minor effect on the thermal
activation in the planes as long as $D\neq 0$. Thus for high enough
anisotropies, the {\it c} axis dissipation is expected to occur even at
lower temperatures than the {\it ab} plane dissipation. However, when this
range is approached a particular transition occurs in the system. The low
temperature configuration of the vortex structure passes from a
disentangled, and rather ordered configuration of vortex lines for low
anisotropies, to an entangled configuration of vortex lines, i.e., we can
say that the percolation temperature of the system drops abruptly to zero.
The origin of this particular transition is the following. If anisotropy is
low the system will prefer to remain ordered, with vortices almost straight
in order to minimize their line energy. If anisotropy is increased,
entangled configurations (in which vortices use the strongest pinning sites
on the {\it ab} planes) diminish their energy and become locally stable, and
from some value of the anisotropy, one entangled configuration becomes
globally stable. This is the critical anisotropy $\eta _1$. This transition
is related to the Bragg glass to vortex glass transition\cite{bragglass}
proposed to occur in very anisotropic samples when increasing the magnetic
field, because -as it has been discussed elsewhere-\cite{jb5} an
increase in the magnetic field is equivalent to an increase in the effective
anisotropy and disorder of the system.
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f2.eps}}
\medskip
\caption{Typical results for $\rho _c$ and $\rho _{ab}$ as a function of
temperature for two different values of anisotropy, in a sample of size $%
40\times 40\times 8$, with $D=0.5$ and $H=1/4$.}
\label{rabrcamedio}
\end{figure}
Numerical simulations indicate that in the $\eta >\eta _1$ regime {\em both}
{\it ab} plane and {\it c} axis dissipation have a thermally activated
behavior as shown in Fig. \ref{rabrcamedio}. The results for activation
energies of $\rho _c$ and $\rho _{ab}$ as a function of anisotropy for a
system of $40\times 40\times 8,$ with $D=0.5$ are shown in Fig. \ref{eact}
(results for other values of $H$ are similar, with only a rescaling of the
anisotropy axis). From a numerical (or experimental) point of view, it must
be kept in mind
that a thermally activated behavior with a given activation energy can be
checked only for values of $k_BT$ greater than some fraction (which depends
on sensitivity) of the activation energy. In particular, no statements can
be made about the possible existence of a true critical temperature much
lower than that corresponding to the activation energy. In Fig. \ref{eact}
and for $\eta <\eta _1\sim 4$ the {\it c} axis activation energy is not
defined, because the transition is a percolation process and not a thermally
activated process, as discussed before.
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f3.eps}}
\medskip
\caption{Activation energies for the dissipation along {\it c} and {\it ab}
direction as a function of anisotropy (size $40\times 40\times 8$, $D=0.5$, $%
H=1/4$). For $\eta <\eta _1$ the {\it c} axis dissipation is not thermally
activated but has a critical behavior instead. For $\eta >\eta _2$ the {\it c%
} axis activation energy decays as $1/\eta $.}
\label{eact}
\end{figure}
The most direct and important conclusion from this figure is that there are
two sub-regions in the range $\eta >\eta _1$. For $\eta _1<\eta <\eta _2$
activation energies for transport parallel and perpendicular to the field are
rather similar, whereas for $\eta >\eta _2$ the {\it c} axis activation
energy is lower than that corresponding to the {\it ab} plane. In the whole
range $\eta >\eta _1,$ the {\it ab} plane activation energy is rather anisotropy
independent, indicating that the {\it ab} plane dissipation is governed by
the thermal activation of vertical segments of vortex lines pinned to the
{\it ab} planes. For $\eta <\eta _2$ both parallel and perpendicular
dissipation have the same activation energy indicating that they both
originate in the same physical process.\cite{koshelev} In fact, {\it ab}
plane dissipation is caused by the thermal depinning of vortex lines that
cross the sample in the {\it c} direction. In turn, the {\it c }axis
dissipation is caused by the thermal depinning of vortex lines that cross
the sample in the {\it ab} direction (which exist because the vortex
configuration is heavily entangled). However, as the formation of these
paths is mediated by the externally generated vortices they are also pinned
to the {\it ab} planes, and the activation energy for both processes is the
same. Despite this, a global factor in the resistivity ratio $\rho
_c/\rho _{ab}$ depending upon the anisotropy and the geometrical
configuration of vortex lines (especially on the relation between the number of
paths running along the {\it c} axis and the {\it ab} direction) is expected.
For $\eta >\eta _2$ the decrease of the activation energy for {\it c} axis
dissipation as $1/\eta $ indicates that a new dissipation mechanism that
depends only on the excitation of horizontal loops between planes is taking
place. Being the vertical segments of vortex lines still frozen in the range
of temperatures in which the {\it c} axis dissipation starts to be
appreciable, this mechanism has to be related to processes occurring between
consecutive {\it ab} planes, {\it i.e.}, in the zone $\eta >\eta _2$ a
complete {\it decoupling} of the planes takes place.
\section{Perpendicular dissipation in two dimensional systems}
As I mentioned in the previous section, the activation energy of the {\it c}
axis dissipation at high anisotropies ($\eta >\eta _2$) goes to zero as $%
1/\eta ,$ indicating that processes involving only the horizontal segments
between consecutive planes are important. In order to understand clearly
this kind of processes I will analyze the dissipation in the case of a
unique horizontal plane.
Consider the model studied previously (Eq. \ref{hamil}) but now on a two
dimensional geometry. It is useful to point out that this model (in the case
without disorder) has been named in another context the {\it roughening model} of
surfaces.\cite{rough,ng,rough0} The mapping is made between vortex segments
in the original model and height differences of a growing surface in the
roughening model, the divergence free condition in the vortex model being
the key property that makes possible the mapping. In the language of the
roughening problem, the surface is smooth at large scales in the low
temperature phase, and is rough in the high temperature phase. The long
extended terraces that make the surface rough are nothing but the
infinite-length vortex paths of the vortex model, and the growth rate of the surface
maps onto the perpendicular resistivity of the vortex model. In the
homogeneous case (without disorder), the roughening model is known to have
an inverted Kosterlitz-Thouless transition at a temperature $T_{KT}\simeq $ $%
0.36\epsilon $ where $\epsilon $ is the energy of an elemental growing step
(an elemental vortex loop in the vortex model, $\epsilon =4\varepsilon _0$).%
\cite{rough1} This temperature can be viewed again as a percolation
temperature of vortex lines in the vortex system: below the critical
temperature $T_{KT}$ there are no infinite-length paths, whereas for $T>T_{KT}
$ these paths exist. In Fig. \ref{2dd0} the percolation probability for
systems of different sizes is shown, and the transition is clearly
distinguishable. As in the three dimensional case the transition in the
variable $P\left( T\right) $ becomes sharp in the limit $L_{ab}\rightarrow
\infty $. From known results on the roughening model\cite{ng} we can
directly conclude that for an infinite system the perpendicular
resistivity ($\rho _c$) jumps from zero to a finite value at the temperature
$T_{KT}$.
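As a concrete illustration of how the percolation probability can be measured, the sketch below (illustrative Python; the bond-occupation representation is my own assumption) checks whether occupied bonds form a path crossing the system. $P$ is then the fraction of sampled configurations for which such a path exists:

```python
from collections import deque

def spans_horizontally(occupied, L):
    """True if occupied bonds connect column x = 0 to column x = L - 1.

    `occupied` maps (x, y, mu) -> bool for the bond leaving site (x, y)
    in direction mu ('a' along x, 'b' along y); boundaries are open in x
    and periodic in y."""
    seen = {(0, y) for y in range(L)}    # seed the whole left column
    queue = deque(seen)
    while queue:
        x, y = queue.popleft()
        if x == L - 1:
            return True
        steps = [((x, y, "a"), (x + 1, y)),
                 ((x - 1, y, "a"), (x - 1, y)),
                 ((x, y, "b"), (x, (y + 1) % L)),
                 ((x, (y - 1) % L, "b"), (x, (y - 1) % L))]
        for bond, site in steps:          # move only through occupied bonds
            if occupied.get(bond) and site not in seen:
                seen.add(site)
                queue.append(site)
    return False

L = 5
straight = {(x, 0, "a"): True for x in range(L - 1)}  # one straight path across
```

A full measurement would generate bond configurations with the Monte Carlo dynamics and average this indicator over time.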
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f4.eps}}
\medskip
\caption{Percolation probability in a two-dimensional system without
disorder for different system sizes. A phase transition in the limit of very
large systems is clearly observed.}
\label{2dd0}
\end{figure}
The disorder in the two-dimensional case may have two origins: one is the
disorder introduced directly in the Hamiltonian, through the dependence of $%
\varepsilon _{i,\mu }$ on coordinates (Eq. (\ref{hamil})). The other comes
from the existence in the 2D system of some quenched horizontal segments
that come from the horizontal parts of the 3D vortices. These segments within
the 2D system do not satisfy the condition $\nabla n=0,$ so their
inclusion in the dynamics makes a nontrivial contribution. We will
concentrate on the second type of disorder for two reasons: numerical
simulations show that the qualitative changes produced by the two types of
disorder are similar, and secondly because this type of disorder is
dynamical, and may change when changing anisotropy.
So the problem may be stated as that of a Hamiltonian
\begin{equation}
H=\sum_{i,\mu }{}^{\prime }\varepsilon _0\left( n_{i,\mu }-b_{i,\mu }\right)
^2, \label{h2}
\end{equation}
where now $b_{i,\mu }$ represent the horizontal segments induced by the 3D
vortices, and $\mu =a,b$. Note that the variables $n_{i,\mu }$ still satisfy
the condition $\nabla n=0$, and the bare energy of the segments lying on the
links has been taken to be $\varepsilon _0$ in all sites. It is interesting
to consider the behavior of the system when the number of horizontal
segments is increased. We can characterize this value as the fraction
of links in which $b_{i,\mu }=\pm 1$ (which we denote by $D$). A necessary
condition for a finite dissipation (perpendicular to the plane) is still
that the vector field $n_{i,\mu }$ generates a path running all across the
(two-dimensional) system. The probability of existence of such paths is
shown for different sample sizes and different values of the disorder in
Fig. \ref{pcded}(a). For $D < D_{cr} \simeq 0.3$ a well defined transition when the
system size increases is detected. For finite disorder the transition
temperature $T_P$ decreases with respect to the zero disorder value $T_P^{(D=0)}=T_{KT}\simeq
1.47$. If disorder is too high, however, there is no intersection of lines
corresponding to different sizes, indicating that the percolation transition
moves down to
zero temperature. The behavior of the percolation temperature as a
function of $D$ is plotted in Fig. \ref{pcded}(b).
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f5.eps}}
\medskip
\caption{(a) Percolation probability in a two-dimensional system for
different values of the disorder for system sizes $20\times 20$, $40\times 40
$, and $80\times 80$ (from bottom to top, at the right of the curves).
The results are the averaged values over 3
configurations of disorder for $D=0.125,$ $0.25,$ and over 8 configurations
for $D=0.375,$ $0.5$. For $D<D_{cr}\simeq 0.3$ a percolation phase
transition at finite temperature is clearly detectable. For higher values of
disorder percolation temperature moves down to $T=0$. (b) Temperature of the
percolation transition as a function of disorder as obtained from the
results in panel (a). Points are the results of numerical simulations.
Dotted line is a guide to the eye only.}
\label{pcded}
\end{figure}
Perpendicular resistivity simulations in this model (Fig. \ref{r2d}) give
values that clearly go to zero in the case in which $T_P$ is finite ($D<0.3$)\cite{otranota},
whereas in the case in which the resistivity remains finite at any
temperature they give results that can be well fitted by a thermal activation
expression. However, the
possibility that the system has a real critical temperature at a temperature
where the resistivity becomes undetectable in the simulations cannot be
ruled out (see the note on the mapping to the roughening problem with
disorder in the next paragraph).
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f6.eps}}
\medskip
\caption{Perpendicular resistivity of a two dimensional system ($80\times 80$%
) as a function of temperature for different values of the concentration of
quenched segments $D$. For $D<D_{cr}\simeq 0.3$ the approximate temperature at which
the resistivity vanishes is indicated by arrows. For $D>D_{cr}$ the curves show activated
low temperature tails.}
\label{r2d}
\end{figure}
The behavior of the percolation probability and resistivity for different
values of $D$ of model (\ref{h2}) can be qualitatively explained using an
argument of the type of the celebrated original Kosterlitz-Thouless argument
for the phase transition in the two-dimensional $XY$ model. It goes as
follows. The energy of a path running across a system of size $\sim L$ is
the sum of $\sim L$ variables. According to (\ref{h2}), each of these variables may have the value $%
\varepsilon _0$, $-\varepsilon _0$, or $3\varepsilon _0$ depending on whether for
that particular site the variable $b$ is $0$ or $\pm 1$. The energy of the path
becomes a Gaussian variable with mean value $\sim L$ and dispersion $\sqrt{%
LD\varepsilon _0}$. The total number of paths of length $\sim L$ running
across the system is exponential in $L$, i.e., about $\sim e^L$. Taking into
account these estimations, and considering the calculated number of paths
with different energies as a density of states for non-interacting
``particles'', it is possible to make the statistical mechanics of the
system and look for the thermodynamic free energy of the equilibrium
configuration. I only show the results, and not the details of the
calculations, which are straightforward. Two clearly different regions
appear (see Fig. \ref{ktargumento}): there is a zone (labelled I) of high
disorder or high temperature in which some percolation paths have negative
free energy, and so their existence is thermodynamically stable. In the other
zone (labelled II) with low temperature and disorder, all percolation paths
have positive free energy and thus the configuration without any path is the
stable one. In zone II the system does not contain any percolation paths,
and its resistivity is zero. When passing to zone I by increasing the
temperature, many percolation paths become immediately accessible at the
transition, and perpendicular resistivity jumps to a finite value. In the
case $D>D_{cr}$ there are available percolation paths down to zero
temperature, and it is likely that the resistivity remains finite (with a
thermally activated behavior) up to zero temperature, although a critical
behavior of the resistivity at some finite temperature cannot be completely
ruled out from these qualitative arguments.\cite{notita} These estimates
compare well with the numerical results on Hamiltonian (\ref{h2}) (Figures (%
\ref{pcded}) and (\ref{r2d})).
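For completeness, the balance can be summarized by a rough extreme-value version of the argument (the prefactors here are illustrative only and are not expected to reproduce $D_{cr}\simeq 0.3$ quantitatively). Among the $\sim e^L$ paths, the lowest-energy one lies about $\sigma \sqrt{2\ln e^L}$ below the mean, with $\sigma \sim \sqrt{LD}\,\varepsilon _0$, and the entropy of the path ensemble is $S(L)\sim k_BL$:

```latex
$$
E_{\min }(L)\sim L\varepsilon _0-\sigma \sqrt{2\ln e^L}
=L\varepsilon _0\left( 1-\sqrt{2D}\right) ,
$$
$$
F(L)\sim E_{\min }(L)-TS(L)\sim L\left[ \varepsilon _0\left( 1-\sqrt{2D}
\right) -k_BT\right] .
$$
```

Percolation paths proliferate when the bracket becomes negative: at fixed $D$ this happens above a temperature that decreases with disorder, and beyond a critical disorder of order unity the bracket is negative down to $T=0$.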
\begin{figure}
\narrowtext
\epsfxsize=3.3truein
\vbox{\hskip 0.05truein
\epsffile{f7.eps}}
\medskip
\caption{Qualitative phase diagram from the estimation of free energy of
percolation paths, as discussed in the text. Zone I is a region with finite
linear dissipation. Zone II has zero linear dissipation. The labels Ia and
Ib and the dashed region are linked to the discussion presented in [18].}
\label{ktargumento}
\end{figure}
The mechanism discussed in this section for the perpendicular dissipation of
a two-dimensional system is responsible for the {\it c} axis dissipation in the
three dimensional case for $\eta >\eta _2$ discussed in the previous
section. As indicated before, this mechanism relies on the movement of
vortex segments between consecutive {\it ab} planes, contrary to the dissipation
occurring for $\eta < \eta_2$ which involves paths that occupy a finite thickness
along the {\it c} direction.
\section{Relevance to real materials. Effect of non local interactions}
In spite of its simplicity, the previously studied model reproduces many
characteristics of real transport measurements in high-$T_c$
superconductors, indicating that some of these properties are rather
independent of the exact form of the interaction, and they are only
consequences of geometrical properties. In particular, the case $\eta <\eta
_1$ (Fig. \ref{eact}) gives results which are comparable to experimental results in YBCO.\cite
{fm,exper1} In this case the {\it c} axis resistivity has been observed to
appear only at higher temperatures than the {\it ab }dissipation, and a
shoulder in $\rho _{ab}$ near $T_P$ similar to that of Fig. \ref{rabrcdet} has been
experimentally observed. Numerical simulations using the complete
vortex-vortex interaction give also the same qualitative behavior.\cite
{numer1,perco} The results of the previous sections for the activation energy in
the range $\eta _1<\eta <\eta _2$, namely the coincidence of activation
energies for $\rho _{ab}$ and $\rho _c,$ have been experimentally observed in
BSCCO samples\cite{exper2} and theoretically interpreted as an indication of
the same origin for both dissipations in this regime.\cite{koshelev}
The only region that gives a qualitatively new result is the range $\eta
>\eta _2$, where a finite value for $\rho _c$ was obtained even at
temperatures at which $\rho _{ab}$ is still negligible. We have to determine
if this region can occur when the complete vortex-vortex interaction is
taken into account or if it is a consequence of the local form of the
interaction used in the model. The complete Hamiltonian for interacting
vortex segments $n_\mu $ can be written in the generic form
\begin{equation}
H=\sum_{i,j}\sum_{\mu =a,b,c}G_\mu \left( {\bf r}_i-{\bf r}_j\right) n_\mu
\left( {\bf r}_i\right) n_\mu \left( {\bf r}_j\right) , \label{htotal}
\end{equation}
where $G_\mu \left( {\bf r}\right) $ is the non-local interaction function
between segments (in previous sections I used this type of Hamiltonian with
a local function $G$). The function $G_{ab}$ ($G_c$) is globally
proportional to an energy scale that we can identify with our $\varepsilon
_{ab}$ ($\varepsilon _c$) of equation (\ref{hamil}), and the naive
conclusion would be that in the highly anisotropic case ($\varepsilon
_{ab}\ll \varepsilon _c$) the system of interplane vortices has a
transition temperature $T_1$ of the order $k_BT_1$ $\simeq \varepsilon _{ab}
$, whereas the system of vortex-antivortex pairs within the planes has a
transition temperature $T_2$ of the order $k_BT_2$ $\simeq \varepsilon _c\gg
k_BT_1$. However, the structure of the function $G$ for segments parallel to
the layers
$$
G_{ab}\left( {\bf k}\right) =
$$
\begin{equation}
=\frac{\varepsilon _{ab}}{\left[ 4-2\cos\left%
(k_x\right)-2\cos\left(k_y\right)\right] +\frac{\varepsilon _{ab}%
}{\varepsilon _c}\left[ 2-2\cos\left(k_z\right) \right] +\frac{\varepsilon
_{ab}}\Delta } \label{g12}
\end{equation}
(where $\sqrt{\Delta /\varepsilon _{ab}}\equiv \lambda _c$ is the magnetic
penetration depth along the {\it c} direction in units of the lattice
spacing) shows that the range of the interaction of parallel-to-plane
segments increases with anisotropy as $\sim \left( \min \left[ \varepsilon
_c,\Delta \right] /\varepsilon _{ab}\right) ^{1/2},$ and this cannot be
neglected.\cite{korshunov,hetzel} In fact, in the language of the
renormalization group and in the case of very large $\lambda _c$,
interacting horizontal segments renormalize to non-interacting segments but
with an energy $\simeq \varepsilon _{ab}\left\{ \left( \varepsilon
_c/\varepsilon _{ab}\right) ^{1/2}\right\} ^2=\varepsilon _c$, so the
transition temperature $T_1$ is of the order of $k_BT_1\simeq \varepsilon _c.
$ A more detailed calculation\cite{korshunov} gives $T_1\sim 4\pi \min
\left[ \varepsilon _c,2\Delta \right] $, $T_2\sim \frac \pi 2\min \left[
\varepsilon _c,2\Delta \right] .$ To have finite $\rho _c$ at temperatures
where $\rho _{ab}$ is still zero, it is necessary at least that $T_1$ be lower
than $T_2,$ and this is not the case.
These estimates seem to rule out the practical possibility of a zone like
that for $\eta >\eta _2$ in Fig. \ref{eact}. However, the analysis made
corresponds to the case of zero external field and disorder, and we know
that including them decreases the transition temperature of the
interplane system of loops. So the question of whether, in a finite external field
and for high anisotropies, we can have a {\it c} axis transition temperature
lower than that corresponding to the {\it ab} plane still has to be answered.
The point cannot be fully discussed in all situations, but there is a
limiting case in which it can be addressed. In fact, the case with a
divergent anisotropy, in which the horizontal segments interact through a
really long range potential, can surprisingly be discussed much more easily than
the case in which the interaction is local. Let us consider a finite value
of $\varepsilon _{ab}$, and take $\varepsilon _c,\Delta \rightarrow \infty ,
$ (so $T_2\rightarrow \infty $ also), and analyze the problem of the system
of horizontal vortex segments between two consecutive planes. To be able to
handle some indeterminacies that will appear, the system size will be
considered finite, with a value $L\times L$, and the
limit $L\rightarrow \infty $ will be taken at the end. In this limiting case
the function $G_{ab}\left( {\bf k}\right) $ in terms of the two dimensional
vector ${\bf k\equiv }\left( k_x,k_y\right) ,$ reduces to $G_{ab}\left(
{\bf k}\right) =\varepsilon _{ab}/\left[ 4-2\cos \left( k_x\right) -2\cos
\left( k_y\right) \right] .$ Introducing (integer) loop variables $l({\bf r})$ through the
substitution $\partial _\mu l\left( {\bf r}\right) =n_\mu \left( {\bf r}%
\right) ,$ and integrating by parts twice in the Hamiltonian, an equivalent
model is obtained:
\begin{equation}
H=\varepsilon _{ab}\sum_{{\bf r}}\left( l\left( {\bf r}\right) -\bar{l}%
\right) ^2+I\sum_{{\bf r}}l\left( {\bf r}\right) , \label{hb}
\end{equation}
where $\bar{l}=\sum_{{\bf r}}l\left( {\bf r}\right) /L^2$ is the mean value
of the loop variables over the $L^2$ plaquettes of the system. The time
derivative of $\bar{l}$ is proportional to the voltage generated by the
external current $I$. This expression is much easier to handle than the
original formulation, and it is a sort of mean field Hamiltonian in which
variables $l$ interact only through the $\bar{l}$ term. I stress however
that no approximations other than the ones mentioned were done on passing
from (\ref{htotal}) to (\ref{hb}). When $L$ is large, the partition function
corresponding to (\ref{hb}) can be calculated using a Lagrange multiplier for $%
\bar{l}$. The energy becomes a periodic function
of $\bar{l}$ (except for the current term), and in the limit $T\gg
\varepsilon _{ab}$ it reads
\begin{equation}
E=L^2\left( -\frac{T^2}{\varepsilon _{ab}}e^{-\pi ^2T/\varepsilon
_{ab}}\cos \left( 2\pi \bar{l}\right) +I\bar{l}\right) . \label{edeb}
\end{equation}
When $L\rightarrow \infty ,$ and for any value of temperature there is a
critical value of $I,$ namely $I_{cr}=\frac{T^2}{\varepsilon _{ab}}e^{-\pi
^2T/\varepsilon _{ab}}$. This critical current is nonzero for any
temperature and thus indicates that the critical temperature of the system
is infinite. This is consistent with the exact results since we took $%
\varepsilon _c\rightarrow \infty $. Notice, however, that the extremely
small value of this critical current when $T$ is much larger than $%
\varepsilon _{ab}$ makes it difficult to verify these results in numerical
simulations.
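To make the smallness of the critical current explicit, one can evaluate $I_{cr}=\frac{T^2}{\varepsilon _{ab}}e^{-\pi ^2T/\varepsilon _{ab}}$ numerically (illustrative Python, in units where $\varepsilon _{ab}=1$):

```python
import math

def critical_current(T, eps_ab=1.0):
    """I_cr(T) = (T^2 / eps_ab) * exp(-pi^2 T / eps_ab), valid for T >> eps_ab."""
    return (T * T / eps_ab) * math.exp(-math.pi ** 2 * T / eps_ab)

vals = {T: critical_current(T) for T in (1.0, 2.0, 5.0, 10.0)}
```

Already at $T=10\,\varepsilon _{ab}$ the critical current is below $10^{-40}$ in these units, which illustrates why the result is hard to see in a simulation.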
When quenched disorder due to horizontal segments of externally generated
vortices is included, the previous picture changes in the following way.
The potential due to these quenched segments is also long ranged, and on the
$l$ variables they produce a potential that goes as $\sim \varepsilon _{ab}/r$. Summing up
on each site the contributions from a random distribution of disorder on a
sample of size $L$ gives a potential of typical amplitude $V\sim \varepsilon _{ab}D\left[
\int^L\left( 1/r\right) ^2d^2r\right] ^{1/2}\sim \varepsilon _{ab}D\sqrt{2\pi \ln L}$, where $%
D$ is the concentration of quenched segments. The Hamiltonian now reads
$$
H^D=\varepsilon _{ab}\sum_{{\bf r}}\left( l\left( {\bf r}\right) -\bar{l}%
\right) ^2+I\sum_{{\bf r}}l\left( {\bf r}\right) +
$$
\begin{equation}
+\varepsilon _{ab}D\sqrt{2\pi \ln L}\sum_{%
{\bf r}}\chi _{{\bf r}}l\left( {\bf r}\right) . \label{hd}
\end{equation}
The $\chi _{{\bf r}}$ are random variables with typical value $1$ and
satisfying $\sum_{{\bf r}}\chi _{{\bf r}}=0$.\cite{correl} In the same way
as before the energy of the system may be written in the form
\begin{equation}
E^D=L^2\left( -\frac{T^2}{\varepsilon _{ab}}e^{-\pi ^2T/\varepsilon
_{ab}}e^{-\pi ^3D^2\ln L/2}\cos \left( 2\pi \bar{l}%
\right) +I\bar{l}\right), \label{edebd}
\end{equation}
which can be rewritten as
\begin{equation}
E^D=-\frac{T^2}{\varepsilon _{ab}}e^{-\pi ^2T/\varepsilon _{ab}}L^{2-\pi
^3D^2/2}\cos \left( 2\pi \bar{l}\right) +L^2I\bar{l}.
\end{equation}
When $L$ goes to infinity the first term will give an infinite activation
energy -and thus a zero resistivity- if $D<2/\pi
^{3/2}.$ On the other hand if $D>2/\pi ^{3/2}$ activation
energy goes to zero and the resistivity is finite. $D_{cr}=2
/\pi ^{3/2}$ is the critical value of the disorder in the system. As
we see, in this case the {\it c }axis transition occurs at zero temperature,
whereas the {\it ab} transition temperature is $\sim \varepsilon
_c\rightarrow \infty $.
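The sign change of the exponent of $L$ can be checked directly (illustrative Python):

```python
import math

def barrier_exponent(D):
    """Exponent of L in the activation term of E^D: 2 - pi^3 D^2 / 2.

    The activation energy scales as L**(2 - pi^3 D^2 / 2): it diverges
    with system size (zero resistivity) while the exponent is positive,
    and vanishes (finite resistivity) once it turns negative."""
    return 2.0 - math.pi ** 3 * D * D / 2.0

D_cr = 2.0 / math.pi ** 1.5   # zero of the exponent, about 0.36
```

This gives $D_{cr}=2/\pi ^{3/2}\simeq 0.36$, of the same order as the $D_{cr}\simeq 0.3$ found numerically for the short-range two-dimensional model.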
Thus at least in the case of infinite ranged interactions between vortex
loops, it can be analytically shown that the {\it c} axis transition occurs
at lower temperature than the {\it ab }transition, if disorder is greater
than a critical value. This suggests that even in cases with finite $%
\varepsilon _c$, disorder can make the longitudinal resistivity appear at
lower temperatures than the transversal dissipation. The
conclusion is that with the inclusion of the full interactions between
vortices and for finite values of $\varepsilon _{ab}$ and $\varepsilon _c$ (%
$\varepsilon _c\gg \varepsilon _{ab}$), a phase diagram as the one sketched
in Fig. \ref{ktargumento} still holds, but with the zero disorder
temperature transition being proportional to $\varepsilon _c$ rather than to $%
\varepsilon _{ab}$. This indicates that a regime where $\rho_c\neq 0$ and
$\rho_{ab}=0$ may be experimentally accessible in highly anisotropic
and disordered samples.
\section{Summary and conclusion}
In this paper I have presented results from numerical simulations of vortex
lattices in the presence of external magnetic field and quenched disorder,
in which the interactions between vortices are neglected. In spite of the
simplicity of the model the results reproduce qualitatively well many
characteristics of the $\rho _c\left( T\right) $ and $\rho _{ab}\left(
T\right) $ functions observed in experiments, both for YBCO samples, in
which $\rho _c$ becomes different from zero when the value of $\rho _{ab}$
is already appreciable, and for BSCCO where experiments indicate that $\rho
_c\left( T\right) $ and $\rho _{ab}\left( T\right) $ have thermally
activated behaviors with the same activation energy. In addition, a new
regime in which the activation energy for $\rho _c\left( T\right) $ is lower
than that for $\rho _{ab}\left( T\right) $ was found in the simulations,
corresponding to the case where the coherence length along the {\it c} direction
reduces to the distance between consecutive superconducting planes.
Although in principle this regime
may be an artifact of having disregarded interactions, I have shown that it may
occur in real samples in the presence of disorder and an external magnetic
field, and for high enough anisotropy.
\section{Acknowledgments}
I want to thank C. A. Balseiro and M. Goffman for discussions, and K. Hallberg
for critical reading of the manuscript. This work was
financially supported by Consejo Nacional de Investigaciones
Cient\'{\i}ficas y T\'{e}cnicas (CONICET), Argentina.
\section*{Appendix}
This appendix is split into four sections. The first one introduces the notation and repeats the solution of the fermionic quantum marginal problem. In the second section we explain how to simplify the pinning analysis by truncating the spectrum. This amounts to the proof of statement (13), a relation connecting the polytope distances of the correct and the truncated marginal setting. The third section introduces a selection rule, which explains how the structure of an $N-$fermion state simplifies if its natural occupation numbers are exactly pinned to some Pauli facet, and applies it to the Borland-Dennis setting. In the last section we present a modification of this selection rule for the case of only approximate pinning. This then justifies our Hartree-Fock generalization.
\subsection{Notation and Fermionic Quantum Marginal Problem.---}
\label{sec:notation}
The problem of determining all spectra
\begin{equation}\label{specordered}
\lambda=(\lambda_i)_{i=1}^{d'}\qquad, \,1\geq \lambda_1\geq \lambda_2\geq \ldots \geq \lambda_{d'}\geq 0 \qquad
\end{equation}
of $1-$particle reduced density operators ($1-$RDO) $\rho_1$ arising from some pure $N-$fermion state $|\Psi\rangle \in \wedge^N[\mathcal{H}_{d'}]$,
\begin{equation}
\rho_1 \equiv N\,\mbox{tr}_{N-1}[|\Psi\rangle \langle \Psi|]
\end{equation}
by tracing out $N-1$ particles, is known as the fermionic quantum marginal problem of the setting $\wedge^N[\mathcal{H}_{d'}]$. Here $d' \in \mathbb{N}\cup \{\infty\}$, $\mathcal{H}_{d'}$ is the $d'-$dimensional separable $1-$particle Hilbert space and we use the trace normalization convention,
\begin{equation}\label{norm}
\mbox{tr}[\rho_1] = \lambda_1+\ldots+\lambda_{d'} = N ,
\end{equation}
common in quantum chemistry.
For $d'$ finite, the family of possible spectra (we call them compatible w.r.t.\ $\wedge^N[\mathcal{H}_{d'}]$),
is described by finitely many independent conditions $\{C_i\}$, the generalized Pauli constraints. Each of them has the form
\begin{equation}\label{margcon}
C_i\,:\,\,\,D_i(\lambda) = \kappa_0 + \kappa_1 \lambda_1+\ldots+\kappa_{d'} \lambda_{d'} \geq 0 ,
\end{equation}
$\kappa_0,\ldots,\kappa_{d'} \in \mathbb{Z}$ and describes a half-space $V_i$ of $\mathbb{R}^{d'}$. These constraints together with the trivial conditions (\ref{specordered}) and (\ref{norm}) define the polytope $\mathcal{P}_{N,d'} \subset \mathbb{R}^{d'}$
of possible spectra. In that sense every constraint (\ref{margcon}) gives rise to a facet $F_i$ of this polytope,
\begin{equation}
F_i = \{\lambda \in \mathcal{P}_{N,d'}\,|\,D_i(\lambda)= 0\}.
\end{equation}
Note that besides these Pauli facets there are also further facets, those corresponding to the trivial constraints (\ref{specordered}), but they will not be of interest in our work. Moreover, the quantity $D_i(\cdot)$ is only defined up to a positive factor; after fixing this factor (i.e. the parameters $\kappa_i$) it defines a measure for the distance of spectra to the corresponding facet $F_i$. For the case of $d'$ finite it coincides up to normalization with the Euclidean distance, $\mbox{dist}_2(\mu,F_i)=\frac{D_i(\mu)}{\|\kappa\|_2}$, $\kappa= (\kappa_1,\ldots,\kappa_{d'})$.
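The stated relation between $D_i(\cdot)$ and the Euclidean distance can be checked numerically. The following Python sketch (constraint and spectrum values hypothetical) projects a point onto the bounding hyperplane of the half-space $\kappa_0+\kappa\cdot\lambda\geq 0$:

```python
import numpy as np

kappa0 = 2.0
kappa = np.array([-1.0, -1.0, 0.0, -1.0])   # e.g. D(lambda) = 2 - (l1 + l2 + l4)

def D(mu):
    return kappa0 + kappa @ mu

mu = np.array([0.95, 0.85, 0.80, 0.15])     # hypothetical (truncated) spectrum
# orthogonal projection of mu onto the hyperplane {lambda : D(lambda) = 0}
mu_proj = mu - D(mu) * kappa / (kappa @ kappa)
dist = abs(D(mu)) / np.linalg.norm(kappa)   # claimed Euclidean distance to the facet
assert abs(D(mu_proj)) < 1e-12
assert abs(np.linalg.norm(mu - mu_proj) - dist) < 1e-12
```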
For the case $d'=\infty$ the set $\mathcal{P}_{N,\infty}$ of compatible spectra is not explicitly known yet. Nevertheless, for our work we assume that it is also defined by a family of linear inequalities
\begin{equation}
D_j^{(\infty)}(\lambda) = \kappa_0+\kappa_1 \lambda_1+ \kappa_2 \lambda_2+ \ldots \geq 0\,.
\end{equation}
The results on the truncation of the spectrum and on the relation between the polytopes $\mathcal{P}_{N,d}$ and $\mathcal{P}_{N,d'}$, $d<d'$ finite, presented in Appendix \ref{sec:truncation} strongly suggest that this assumption is justified. Moreover, the involved fact that the $l^1-$closure $\overline{\mathcal{P}}_{N,d}$ is convex also supports this assumption.
Finally, we still make some comments on the meaning of natural orbitals $\{|k\rangle\}$, the eigenvectors of the $1-$RDO,
\begin{equation}
\rho_1 = \sum_{k=1}^{d'} \, \lambda_k \, |k\rangle \langle k| ,
\end{equation}
and their utility for applications.
These natural orbitals induced by a fixed state $|\Psi\rangle \in \wedge^N[\mathcal{H}_{d'}]$, $d' \in \mathbb{N}\cup \{\infty\}$, define a basis $\mathcal{B}_1:=\{|k\rangle\}_{k=1}^{d'}$ for the $1-$particle Hilbert space $\mathcal{H}_{d'}$. For ease of notation we skip the argument $\Psi$ of $|i(\Psi)\rangle$. Basis $\mathcal{B}_1$ then induces the basis $\mathcal{B}_N$ for $\wedge^{N}[\mathcal{H}_{d'}]$ of corresponding Slater determinants ($1\leq i_1<\ldots<i_N\leq d'$)
\begin{equation}
|\mathbf{i}\rangle \equiv |i_1,\ldots,i_N\rangle \equiv \mathcal{A}_N [|i_1\rangle \otimes \ldots \otimes |i_N\rangle] ,
\end{equation}
where $\mathcal{A}_N$ is the anti-symmetrizing operator on the $N-$particle Hilbert space $\mathcal{H}_{d'}^{\,\,\,\otimes^N}$.
By expanding $|\Psi\rangle$ w.r.t. $\mathcal{B}_N$,
\begin{equation}\label{expansion}
|\Psi\rangle = \sum_{\mathbf{i}}\,c_{\mathbf{i}}\,|\mathbf{i}\rangle
\end{equation}
the natural occupation numbers are given by
\begin{equation}\label{noncoef}
\lambda_k = \sum_{\mathbf{i},\,k \in \mathbf{i}}\,|c_{\mathbf{i}}|^2 .
\end{equation}
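Equation (\ref{noncoef}) is straightforward to evaluate. The following Python sketch (coefficients hypothetical) computes the natural occupation numbers of a toy $3-$fermion state and checks the trace normalization (\ref{norm}):

```python
# hypothetical 3-fermion state in a 6-dimensional one-particle space:
# |Psi> = sqrt(0.90)|1,2,3> + sqrt(0.06)|1,4,5> + sqrt(0.04)|2,4,6>
coeffs = {(1, 2, 3): 0.90 ** 0.5, (1, 4, 5): 0.06 ** 0.5, (2, 4, 6): 0.04 ** 0.5}
assert abs(sum(abs(v) ** 2 for v in coeffs.values()) - 1.0) < 1e-12  # normalized

# lambda_k = sum of |c_i|^2 over all index triples i containing k
lam = {k: sum(abs(v) ** 2 for i, v in coeffs.items() if k in i) for k in range(1, 7)}
assert abs(sum(lam.values()) - 3.0) < 1e-12   # tr[rho_1] = N = 3
```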
To compare marginal settings of different dimensions, $d, d'$ with $d < d'\leq \infty$ we imbed $\mathcal{H}_{d}$ into $\mathcal{H}_{d'}$,
\begin{equation}
\mbox{span}\{|i\rangle\}_{i=1}^{d}\equiv\mathcal{H}_{d} \leq \mathcal{H}_{d'} \equiv \overline{\mbox{span}\{|i\rangle\}_{i=1}^{d'}} ,
\end{equation}
where the closure is only relevant for the case $d'$ infinite.
In the same way,
\begin{equation}
\wedge^N[\mathcal{H}_{d}] \leq \wedge^N[\mathcal{H}_{d'}] .
\end{equation}
Indeed, according to (\ref{expansion}), we find that every state
\begin{equation}\label{expansionsmall}
|\Psi\rangle = \sum_{1\leq i_1 <\ldots<i_N \leq d} c_{\mathbf{i}} \,|\mathbf{i}\rangle \,\,\,\,\in \wedge^N[\mathcal{H}_{d}]
\end{equation}
can be imbedded into $\wedge^N[\mathcal{H}_{d'}]$ by
\begin{equation}\label{expansionlarge}
|\Psi'\rangle = \sum_{1\leq i_1 <\ldots<i_N \leq d} c_{\mathbf{i}} \,|\mathbf{i}\rangle \,\,\,\,\in \wedge^N[\mathcal{H}_{d'}]\,,
\end{equation}
and all the other coefficients $c_{\mathbf{i}}$ in (\ref{expansionlarge}), those with $i_N>d$, vanish. We used here different symbols for the states $|\Psi\rangle$ and $|\Psi'\rangle$ to distinguish between the two different spaces $\wedge^N[\mathcal{H}_{d}]$ and $\wedge^N[\mathcal{H}_{d'}]$ to which they belong. This subtle difference is becoming relevant if we determine the natural occupation numbers $\lambda'$ of $|\Psi'\rangle$ (recall (\ref{noncoef})), \begin{equation}\label{nonimbedded}
\lambda' = (\lambda_1,\ldots,\lambda_d,\underbrace{0,\ldots,0}_{d'-d})
\end{equation}
differing from $\lambda=(\lambda_1,\ldots,\lambda_d)$ by additional zeros. In the following, to simplify the notation, we will use the same symbols for mathematical objects and their imbeddings into larger spaces.
\subsection{Truncation of the Spectrum.---}\label{sec:truncation}
In our work we have determined the ``trajectory'' of spectra
\begin{equation}
\lambda(\delta) = (\lambda_i(\delta))_{i=1}^{\infty} \in \mathcal{P}_{3,\infty} ,
\end{equation}
of the $1-$RDO corresponding to the ground state of a $3-$fermion model with relative fermion-fermion interaction strength $\delta$. The goal was then to show that for $\delta$ not too large, $\lambda(\delta)$ is almost but not exactly saturating some of the generalized Pauli constraints of its setting $\wedge^3[\mathcal{H}_{\infty}]$. Geometrically this means that the vector $\lambda(\delta)$ is very close to some Pauli facet $F_i$ of $\mathcal{P}_{3,\infty}$. In that case we say that the spectrum is quasi-pinned to the facet $F_i$.
Since $\mathcal{P}_{3,\infty}$ is not explicitly known and quite involved (it is described by infinitely many constraints on infinitely many eigenvalues), we have truncated the spectrum and simplified the pinning analysis by considering only the largest $d$ eigenvalues,
\begin{equation}\label{spectrunc}
\lambda^{\mathrm{tr}} = (\lambda_1,\ldots,\lambda_d) ,
\end{equation}
and analyzed the saturation of the constraints corresponding to the setting $\wedge^3[\mathcal{H}_{d}]$. The following fact justifies this approach: For $d<d'$ every Pauli facet $F$ of $\mathcal{P}_{N,d}$ is contained in some Pauli facet $F'$ of $\mathcal{P}_{N,d'}$, i.e. $F$ is the intersection of $F'$ with the hyperplane of spectra with only $d$ non-zero eigenvalues.
Then, for small $\lambda_{d+1},\lambda_{d+2},\ldots$, small distance of $\lambda^{\mathrm{tr}}$ to $F$ translates to small distances of $\lambda$ to $F'$ modulo an error of order of the largest neglected eigenvalue, $\lambda_{d+1}$.
To illustrate this, we present the example $\wedge^3[\mathcal{H}_6]$, which is one of the two settings studied in our work. There one generalized Pauli constraint reads \cite{Borl1972, Rus1, Rus2, Kly3}
\begin{equation}
D^{(6)}(\lambda):=2-(\lambda_1+\lambda_2+\lambda_4) \geq 0 \label{d=6g}\,\,.
\end{equation}
For the setting $\wedge^3[\mathcal{H}_{\infty}]$ the known constraint \cite{Kly3}
\begin{equation}\label{margconinf}
D^{(\infty)}(\lambda)=2-(\lambda_1+\lambda_2+\lambda_4+\lambda_7+\lambda_{11}+\lambda_{16} +\ldots )\geq 0\,,
\end{equation}
coincides with constraint (\ref{d=6g}) up to a linear combination of the eigenvalues $\lambda_7,\lambda_{11},\lambda_{16},\ldots$, which were neglected in the truncated setting.
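This relation between the truncated and the infinite-setting constraint can be made concrete numerically. In the Python sketch below (spectrum values hypothetical), the difference $D^{(6)}(\lambda^{\mathrm{tr}})-D^{(\infty)}(\lambda)$ is exactly the neglected tail $\lambda_7+\lambda_{11}+\lambda_{16}$:

```python
# hypothetical, rapidly decaying spectrum (first six values, then a small tail)
lam = [0.97, 0.95, 0.93, 0.07, 0.05, 0.03] + [1e-4 / 2 ** k for k in range(10)]
indices = [1, 2, 4, 7, 11, 16]              # 1-based positions entering D^(inf)
D_inf = 2.0 - sum(lam[i - 1] for i in indices)
D_6 = 2.0 - (lam[0] + lam[1] + lam[3])      # truncated constraint D^(6)
tail = lam[6] + lam[10] + lam[15]           # lambda_7 + lambda_11 + lambda_16
assert abs((D_6 - D_inf) - tail) < 1e-12
```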
A first important step in proving the universality of this relation between polytope distances of correct and truncated setting is the next lemma:
\begin{lemma}\label{lemzeros}
Consider the quantum marginal problems of the two settings $\wedge^N[\mathcal{H}_d]$ and $\wedge^N[\mathcal{H}_{d'}]$, $d < d' \in \mathbb{N}\cup \{\infty\}$ and let $\lambda = (\lambda_1,\ldots,\lambda_d)$ be a spectrum. Then,
\begin{eqnarray}
(\lambda_1,\ldots,\lambda_d)\,\, \mbox{compatible w.r.t.} \,\,\wedge^N[\mathcal{H}_d] &&\nonumber \\
\Leftrightarrow \qquad \qquad\qquad \nonumber &&\\
(\lambda_1,\ldots,\lambda_d,\underbrace{0,\ldots,0}_{d'-d})\,\, \mbox{compatible w.r.t.}\,\, \wedge^N[\mathcal{H}_{d'}] &&\,\,.
\end{eqnarray}
For the corresponding polytopes this means
\begin{equation}
\mathcal{P}_{N,d} = \mathcal{P}_{N,d'}|_{\lambda_{d+1},\lambda_{d+2},\ldots=0} ,
\end{equation}
the polytope $\mathcal{P}_{N,d'}$ intersected with the hyperplane
given by $\lambda_{d+1},\lambda_{d+2},\ldots=0$ coincides with
$\mathcal{P}_{N,d}$.
\end{lemma}
\begin{proof}
The direction ``$\Rightarrow$'' was already explained at the end of Section~\ref{sec:notation}.
To prove ``$\Leftarrow$'' we show that a state $|\Psi'\rangle$ expanded according to (\ref{expansion}),
\begin{equation}
|\Psi'\rangle = \sum_{1\leq i_1 <\ldots<i_N \leq d'} c_{\mathbf{i}} \,|\mathbf{i}\rangle \,\,\,\,\in \wedge^N[\mathcal{H}_{d'}]\,,
\end{equation}
with natural occupation numbers $(\lambda_1,\ldots,\lambda_d,\underbrace{0,\ldots,0}_{d'-d})$ contains only Slater determinants $|\mathbf{i}\rangle$ with $i_1,\ldots,i_N\leq d$. But this is clear
due to (\ref{noncoef}), which then yields
\begin{equation}
\forall \,k\,>d\,:\,\,\, 0 \stackrel{!}{=}\lambda_k = \sum_{\mathbf{i},\,k \in \mathbf{i}}\,|c_{\mathbf{i}}|^2 .
\end{equation}
Hence $c_{\mathbf{i}} = 0$ if $i_N>d$.
\end{proof}
What does Lemma \ref{lemzeros} imply for the relation between the families of generalized Pauli constraints of two settings?
Let us consider two settings with $d,d'$ finite, $d<d'$. Every constraint $D_j'$ for the setting $\wedge^N[\mathcal{H}_{d'}]$
is linear and hence its restriction
\begin{equation}\label{constrestricted}
\hat{D}_j'(\lambda_1,\ldots,\lambda_d) \equiv D_j'(\lambda_1,\ldots,\lambda_d,0,\ldots) \geq 0
\end{equation}
to the hyperplane defined by $0=\lambda_{d+1},\lambda_{d+2},\ldots$ is also a linear constraint in the remaining coordinates $\lambda_1,\ldots,\lambda_d$. How is the half space $V_j\subset \mathbb{R}^d$ corresponding to (\ref{constrestricted}) related to the polytope $\mathcal{P}_{N,d}$? Lemma \ref{lemzeros} states that
\begin{equation}
\mathcal{P}_{N,d} \subset V_j
\end{equation}
and
\begin{equation}
\mathcal{P}_{N,d} = \cap_j V_j|_{\ast} ,
\end{equation}
where the star $\ast$ denotes here the restriction to spectra, i.e. ordered and normalized vectors.
There are two possible relations between $V_j$ (or $V_j|_{\ast}$) and $\mathcal{P}_{N,d}$. They are illustrated in Figure \ref{facetproj} in form of a simplified $2-$dimensional picture:
\begin{figure}[!h]
\includegraphics[width=8cm]{facetsprojected}
\centering
\caption{Polytope $\mathcal{P}_{N,d}$ and two restricted generalized Pauli constraints $\hat{D}_1, \hat{D}_2 \geq 0$ with boundaries $S_1, S_2$, arising from two constraints $D_1,D_2\geq 0$ belonging to a higher dimensional marginal setting $\wedge^N[\mathcal{H}_{d'}]$.}
\label{facetproj}
\end{figure}
There, we consider two half spaces $V_1$ and $V_2$ corresponding to the ``restricted'' constraints $\hat{D}'_1\geq 0$ and $\hat{D}'_2\geq 0$ with boundaries $S_1$ and $S_2$ and orientation indicated by stripes. Such a hyperplane can either contain a facet of $\mathcal{P}_{N,d}$ of maximal (example $S_1$) or of lower dimension, or it lies outside of $\mathcal{P}_{N,d}$ (example $S_2$). The third case of a proper intersection is not possible due to Lemma \ref{lemzeros}. Every constraint $D'$ whose restriction $\hat{D}'$ has a boundary $S$ lying outside of $\mathcal{P}_{N,d}$ is irrelevant
for the pinning analysis since it has the form
\begin{equation}
D'(\lambda) = c + \tilde{D}(\lambda^{\mathrm{tr}}) + O(\lambda_{d+1})\,,
\end{equation}
where $\tilde{D}(\lambda^{\mathrm{tr}})\geq 0$ is a constraint of the setting $\wedge^N[\mathcal{H}_d]$ with a boundary shown in Figure \ref{facetproj} as hyperplane $\tilde{S}_2$ and $c >0$ is some offset.
Hence if the spectrum decays sufficiently fast, constraint $D'$ is not saturated at all due to the offset $c$ and thus irrelevant.
Moreover, for every Pauli facet of $\mathcal{P}_{N,d}$ corresponding to some constraint $D\geq 0$, Lemma \ref{lemzeros} guarantees the existence of a constraint $D'\geq 0$ in the larger setting whose restriction $\hat{D}'$ coincides with $D$.
We summarize these insights by stating
\begin{lemma}\label{lemdistancemodif}
Given two marginal settings $\wedge^N[\mathcal{H}_d]$ and $\wedge^N[\mathcal{H}_{d'}]$ with $d < d' \in \mathbb{N}$. Every generalized Pauli constraint $D'\geq 0$ of the setting $\wedge^N[\mathcal{H}_{d'}]$ relevant for the pinning analysis is given by a linear modification of some generalized Pauli constraint $D\geq 0$ of the setting $\wedge^N[\mathcal{H}_{d}]$,
\begin{equation}\label{distancemodif}
D'(\lambda) = D(\lambda^{\mathrm{tr}}) + O(\lambda_{d+1}) .
\end{equation}
\end{lemma}
Finally, we remark that for the important case of $d'$ infinite effectively the same result holds, but one has to deal with one subtlety. Since $\mathcal{P}_{N,\infty}$ is described by infinitely many constraints, Lemma \ref{lemzeros} guarantees for every constraint $D\geq 0$ of the setting $\wedge^N[\mathcal{H}_d]$ only the existence of a sequence of constraints $D'_j\geq 0$ whose restrictions $\hat{D}'_j\geq 0$ converge to the constraint $D\geq 0$. This means that condition (\ref{distancemodif}) in Lemma \ref{lemdistancemodif} holds up to a small error $\varepsilon$,
\begin{equation}
D_{\varepsilon}'(\lambda) = \varepsilon + D(\lambda^{\mathrm{tr}}) + O(\lambda_{d+1}) ,
\end{equation}
which can be made arbitrarily small by choosing appropriate constraints $D'_{\varepsilon}$. Hence, to minimize the technical effort we assume in our work that Lemma \ref{lemdistancemodif} holds in its original form also for the case $d'$ infinite.
\subsection{\label{sec:selectionrule} Selection Rule.---}
In this section we state a selection rule which explains how the structure of the $N-$fermion state
$|\Psi\rangle \in \wedge^N[\mathcal{H}_d]$ simplifies if the spectrum of the corresponding $1-$RDO is pinned to
some Pauli facet of $\mathcal{P}_{N,d}$. Moreover, we apply it for the setting $\wedge^3[\mathcal{H}_6]$.
Let us consider a state $|\Psi\rangle$ with natural occupation numbers $\lambda = (\lambda_i)_{i=1}^d$ saturating some generalized Pauli constraint
\begin{equation}\label{margcon2}
D(\lambda) = \kappa_0 + \kappa_1 \lambda_1+\ldots+\kappa_{d} \lambda_{d} \geq 0 .
\end{equation}
In \cite{Kly1}, by introducing the creation and annihilation operators $a^{\dagger}_k, a_k$ of a fermion in the natural orbital $|k\rangle$ and the particle number operators $N_k \equiv a^{\dagger}_k a_k$,
an important condition is stated, which $|\Psi\rangle$ satisfies in that case:
\begin{equation}\label{pinningeigen}
\hat{D}|\Psi\rangle \equiv \left(\kappa_0 \mathrm{Id} + \kappa_1 N_1+\ldots+\kappa_{d} N_{d}\right) |\Psi\rangle = 0 .
\end{equation}
Applying this condition to the expansion of $|\Psi\rangle$ in Slater determinants induced by the natural orbitals,
\begin{equation}\label{Psiansatz}
|\Psi\rangle = \sum_{\mathbf{i}} c_{\mathbf{i}} \,|\mathbf{i}\rangle \qquad
\end{equation}
it implies \emph{Klyachko's selection rule}, which states that whenever
\begin{equation}
\hat{D}|\mathbf{i}\rangle \neq 0 ,
\end{equation}
the corresponding coefficient $c_{\mathbf{i}}$ vanishes.
To show the strength of this selection rule we study states in the Borland-Dennis setting. The corresponding Hilbert space
$\wedge^3[\mathcal{H}_6]$ has dimension $\binom{6}{3}=20$ and the generalized Pauli constraints read \cite{Borl1972, Rus1, Rus2, Kly3}
\begin{eqnarray}
&&\lambda_1+\lambda_6, \,\lambda_2+\lambda_5, \,\lambda_3+\lambda_4 \leq 1 \label{d=6c} \qquad\\
&&D^{(6)} := 2-(\lambda_1 +\lambda_2+\lambda_4) \geq 0 \label{d=6d} .
\end{eqnarray}
The normalization together with the non-negativity of the eigenvalues leads to
\begin{equation}
\lambda_1+\lambda_6 =\lambda_2+\lambda_5 = \lambda_3+\lambda_4 = 1 \label{d=6e} .
\end{equation}
Hence the constraints in (\ref{d=6c}) are always saturated and this implies according to (\ref{pinningeigen})
\begin{eqnarray}\label{d=6f}
\left(\mathrm{Id}-N_1-N_6\right)|\Psi\rangle &=& 0 \nonumber \\
\left(\mathrm{Id}-N_2-N_5\right)|\Psi\rangle &=& 0 \nonumber \\
\left(\mathrm{Id}-N_3-N_4\right)|\Psi\rangle &=& 0 .
\end{eqnarray}
Klyachko's selection rule applied to (\ref{d=6f}) implies that every Slater determinant
showing up in the ansatz (\ref{Psiansatz}) for $|\Psi\rangle$ is built up by natural orbitals with one index from each set $\{1,6\}, \{2,5\}$ and $\{3,4\}$.
Those are the $8$ states $|1,2,3\rangle$, $|1,2,4\rangle$, $|1,3,5\rangle$, $|1,4,5\rangle$, $|2,3,6\rangle$,
$|2,4,6\rangle$, $|3,5,6\rangle$ and $|4,5,6\rangle$. If the constraint (\ref{d=6d}) is also saturated the selection rule
restricts this family of Slater determinants to the three states $|1,2,3\rangle$, $|1,4,5\rangle$ and $|2,4,6\rangle$ and in
that case we find
\begin{equation}\label{HFextstate}
|\Psi_3\rangle = \alpha |1,2,3\rangle + \beta |1,4,5\rangle + \gamma |2,4,6\rangle .
\end{equation}
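One can verify directly that any normalized state of the form (\ref{HFextstate}) exactly saturates both (\ref{d=6e}) and (\ref{d=6d}). A short Python sketch (coefficient values hypothetical):

```python
# squared moduli |alpha|^2, |beta|^2, |gamma|^2 of a normalized pinned state
a2, b2, g2 = 0.80, 0.15, 0.05
weights = {(1, 2, 3): a2, (1, 4, 5): b2, (2, 4, 6): g2}
# occupation numbers: lambda_k = sum of weights of determinants containing k
lam = {k: sum(w for i, w in weights.items() if k in i) for k in range(1, 7)}
# always-saturated constraints: lambda_1+lambda_6 = lambda_2+lambda_5 = lambda_3+lambda_4 = 1
for i, j in [(1, 6), (2, 5), (3, 4)]:
    assert abs(lam[i] + lam[j] - 1.0) < 1e-12
# exact pinning: D^(6) = 2 - (lambda_1 + lambda_2 + lambda_4) = 0
assert abs(2.0 - (lam[1] + lam[2] + lam[4])) < 1e-12
```

The saturation of $D^{(6)}$ holds for any choice of $|\alpha|^2+|\beta|^2+|\gamma|^2=1$, since $\lambda_1+\lambda_2+\lambda_4=2(|\alpha|^2+|\beta|^2+|\gamma|^2)$.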
\subsection{Quasi-Pinning and modified Selection Rule.---}\label{sec:quasiSelectionRule}
In this section we show for the Borland-Dennis setting that any state $|\Psi\rangle \in \wedge^3[\mathcal{H}_6]$ whose natural
occupation numbers approximately saturate the corresponding generalized Pauli constraint (\ref{d=6d}) also approximately fulfills
condition (\ref{pinningeigen}). We also quantify this relation. This result then guarantees that our Hartree-Fock extension will work
for systems exhibiting strong pinning.
As a warm-up, and since we will need the result, we first study a simpler question.
It is a basic fact that the spectrum $\lambda_{\mathrm{Sl}}=(1, \dots, 1, 0, \dots ,0)$ can \emph{only}
arise from the Slater determinant $|\Psi\rangle=|1,\ldots,N\rangle$. Is this statement stable under small
deviations, i.e. does $\lambda\approx \lambda_{\mathrm{Sl}}$ imply $|\Psi\rangle \approx |1,\ldots,N\rangle$? Yes, it does, according to
\begin{lemma}\label{Slaterstable}
Consider a state $|\Psi\rangle \in \wedge^N[\mathcal{H}_d]$, let $\{|k\rangle\}_{k=1}^d$ be its natural orbitals
and denote the projection operator onto the space spanned by $|1,\ldots,N\rangle$ by $P_{\mathrm{Sl}}$.
Then,
\begin{equation}
1-\delta \leq \|P_{\mathrm{Sl}} \Psi \|_{L^2}^2 \leq 1- \frac{1}{N} \delta ,
\end{equation}
where
\begin{equation}
0\leq N-(\lambda_1+\ldots+\lambda_N) =:\delta .
\end{equation}
\end{lemma}
\begin{proof}
We expand the state $|\Psi\rangle$ in Slater determinants induced by natural orbitals (recall Section \ref{sec:notation}),
\begin{equation}\label{expansionstate1}
|\Psi\rangle = \sum_{\textbf{i}}\,c_{\textbf{i}}\,|\textbf{i}\rangle .
\end{equation}
We define the operator
\begin{equation}
\hat{S} = N\, \mathrm{Id}-\left(a_1^{\dagger} a_1+\ldots + a_N^{\dagger} a_N\right) .
\end{equation}
Since all the operators $a_i^{\dagger} a_i$, $i=1,\ldots, d$, commute, it is clear that $\hat{S}$ has the spectrum $\{0,1,\ldots,N\}$ with eigenstates $|\textbf{i}\rangle$.
The eigenvalue corresponding to $|\textbf{i}\rangle$ is the number of indices $k \in \textbf{i}$ not belonging to the set $\{1,\ldots,N\}$. We denote the set of indices leading to the eigenvalue $k$ by $J_k$ and find
\begin{eqnarray}
\delta &\equiv& N- (\lambda_1+\ldots+\lambda_N ) \nonumber \\
&=& \langle \Psi|N \, \mathrm{Id} -\left( a_1^{\dagger}a_1+\ldots+ a_N^{\dagger}a_N\right) \,|\Psi\rangle \nonumber \\
&\equiv& \langle \Psi|\hat{S}|\Psi\rangle \nonumber \\
&=& \sum_{\textbf{i},\textbf{j}\in J_0\cup \ldots \cup J_N } c_{\textbf{j}}^{\ast}\,c_{\textbf{i}}\,\langle \textbf{j} |\hat{S} |\textbf{i}\rangle \nonumber \\
&=& \sum_{\textbf{i}\in J_0\cup \ldots \cup J_N } |c_{\textbf{i}}|^2\,\langle \textbf{i} |\hat{S} |\textbf{i}\rangle .
\end{eqnarray}
Since for $\textbf{i} \in J_k$,
\begin{equation}
\langle \textbf{i} |\hat{S} |\textbf{i}\rangle = k
\end{equation}
we find
\begin{equation}
\delta = \sum_{k=0}^N \sum_{\textbf{i}\in J_k} |c_{\textbf{i}}|^2\, k \geq \sum_{k=1}^N \sum_{\textbf{i}\in J_k} |c_{\textbf{i}}|^2
\end{equation}
and alternatively also
\begin{equation}
\delta \leq N\, \sum_{k=1}^N \sum_{\textbf{i}\in J_k} |c_{\textbf{i}}|^2 .
\end{equation}
The normalization of $|\Psi\rangle$, $\sum_{\textbf{i}} |c_{\textbf{i}}|^2 =1$ yields ($J_0 =\{(1,2,\ldots,N)\}$ contains only one element)
\begin{equation}
\frac{\delta}{N} \leq 1-|c_{(1,\ldots,N)}|^2 \leq \delta
\end{equation}
and thus
\begin{equation}
1-\delta \leq |c_{(1,\ldots,N)}|^2 \leq 1-\frac{1}{N}\delta .
\end{equation}
\end{proof}
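The chain of inequalities established in the proof can be tested numerically. The Python sketch below draws random states in $\wedge^3[\mathcal{H}_6]$, computes the occupations of the first $N$ orbitals via (\ref{noncoef}) in a fixed basis (the argument only uses expectation values of the number operators, so diagonality of the $1-$RDO is not needed for this chain) and checks $\delta/N\leq 1-|c_{(1,\ldots,N)}|^2\leq\delta$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N, d = 3, 6
basis = list(combinations(range(1, d + 1), N))   # the 20 Slater determinants

for _ in range(200):
    c = rng.normal(size=len(basis)) + 1j * rng.normal(size=len(basis))
    c[basis.index((1, 2, 3))] += 5.0             # bias toward |1,...,N>
    c /= np.linalg.norm(c)
    # occupations lambda_k of the first N orbitals, computed as in (noncoef)
    lam = [sum(abs(ci) ** 2 for bi, ci in zip(basis, c) if k in bi)
           for k in range(1, N + 1)]
    delta = N - sum(lam)
    p = abs(c[basis.index((1, 2, 3))]) ** 2      # weight of |1,...,N>
    assert delta / N - 1e-12 <= 1 - p <= delta + 1e-12
```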
Now, we come back to the original question.
We first state the mathematical result and present the proof afterwards.
\begin{theorem}\label{HFworks}
Given a state $|\Psi\rangle \in \wedge^3[\mathcal{H}_6]$ with natural occupation numbers $(\lambda_k)_{k=1}^6$.
Let $P$ be the projection operator onto the subspace spanned by the states $|1,2,3\rangle, |1,4,5\rangle, |2,4,6\rangle$, which corresponds to exact pinning
of $D^{(6)}=\lambda_5+\lambda_6-\lambda_4 \geq 0$ (recall (\ref{HFextstate})). Then as long as
\begin{equation}
\delta\equiv 3-\lambda_1-\lambda_2-\lambda_3 \leq \frac{1}{4}
\end{equation}
(which simply means that $\lambda$ is not too far from the spectrum $\lambda_{\mathrm{Sl}}=(1,1,1,0,0,0)$ of a single Slater determinant) we find
\begin{equation}
1- \chi_{\delta} D^{(6)}\leq \|P \Psi\|_2^2 \leq 1- \frac{1}{2} D^{(6)} ,
\end{equation}
with
\begin{equation}
\chi_{\delta} \equiv \frac{1+2\delta}{1 - 4\delta} .
\end{equation}
\end{theorem}
\begin{proof}
In Section \ref{sec:selectionrule} we concluded that $|\Psi\rangle$ has the form
\begin{eqnarray}\label{ansatz}
|\Psi\rangle &=& \alpha |1,2,3\rangle+ \beta |1,2,4\rangle+ \gamma |1,3,5\rangle \nonumber \\
&&+ \,\delta |2,3,6\rangle +\nu |1,4,5\rangle +\mu |2,4,6\rangle \nonumber \\
&&+ \,\xi |3,5,6\rangle+\zeta |4,5,6\rangle \,,
\end{eqnarray}
with natural orbitals $\{|k\rangle\}_{k=1}^6$.
Since the corresponding $1-$RDO is diagonal w.r.t. $\{|k\rangle\}_{k=1}^6$,
\begin{equation}\label{diagonal}
\langle k |\rho_1|l\rangle = \delta_{k l} \,\lambda_k ,
\end{equation}
we find (recall (\ref{noncoef}))
\begin{eqnarray}
\lambda_4 &=& |\beta|^2+|\nu|^2+|\mu|^2+|\zeta|^2 , \\
\lambda_5 &=& |\gamma|^2+|\nu|^2+|\xi|^2+|\zeta|^2 , \\
\lambda_6 &=& |\delta|^2+|\mu|^2+|\xi|^2+|\zeta|^2 .
\end{eqnarray}
The goal is now to show that the coefficients $\beta, \gamma, \delta,\xi$ and $\zeta$ are small, i.e.
\begin{eqnarray}
\|P \Psi\|_{L^2}^2 &=& |\alpha|^2+ |\mu|^2+|\nu|^2 \nonumber \\
&=& 1-\left( |\beta|^2+ |\gamma|^2+|\delta|^2+|\xi|^2+ |\zeta|^2\right)
\end{eqnarray}
is close to $1$, whenever (\ref{d=6d}), which here reads
\begin{equation}\label{distancegreek}
D^{(6)} = -|\beta|^2+|\gamma|^2+|\delta|^2+ 2|\xi|^2+|\zeta|^2 ,
\end{equation}
is approximately saturated.
First we observe
\begin{eqnarray}\label{overlapgreek}
\|P \Psi\|_{L^2}^2 &\leq& 1- \frac{1}{2}\left( |\beta|^2+ |\gamma|^2+|\delta|^2+ 2|\xi|^2+ |\zeta|^2\right)\nonumber\\
&\leq& 1- \frac{1}{2}\left( - |\beta|^2+ |\gamma|^2+|\delta|^2+ 2|\xi|^2+ |\zeta|^2\right)\nonumber\\
&=& 1-\frac{1}{2}\,D^{(6)} ,
\end{eqnarray}
which is the upper bound for $\|P \Psi\|_{L^2}^2$ in Theorem \ref{HFworks}.
To derive the lower bound note the essential difference between (\ref{distancegreek}) and (\ref{overlapgreek}), namely the sign of the term $|\beta|^2$.
To deal with it we write $|\beta|^2 = - \chi\, |\beta|^2 + (1+ \chi) |\beta|^2$, $\chi>0$, and estimate $(1+\chi)|\beta|^2$ in terms of $|\gamma|^2, |\delta|^2, |\xi|^2, |\zeta|^2$.
For this observe that (\ref{diagonal}) in particular implies
\begin{eqnarray}
0 = \langle 4|\rho_1|3\rangle = \overline{\alpha} \beta + \overline{\gamma} \nu + \overline{\delta} \mu + \overline{\xi} \zeta\,,
\end{eqnarray}
which leads, by the triangle inequality, the estimate $(A+B+C)^2 \leq 3 \,(A^2+B^2 + C^2)$ and $|\mu|^2, |\nu|^2,|\xi|^2,|\zeta|^2 \leq 1-|\alpha|^2$, to
\begin{eqnarray}\label{betaestimate}
|\beta|^2 &=& \left|\frac{1}{\overline{\alpha}}\,(\overline{\gamma} \nu + \overline{\delta} \mu + \overline{\xi} \zeta)\right|^2 \nonumber \\
&\leq& \frac{1}{|\alpha|^2}\,\left(|\gamma| \,|\nu| + |\delta| \,|\mu| +|\xi| \,|\zeta| \right)^2 \nonumber \\
&\leq& \frac{3}{|\alpha|^2}\,\left(|\gamma|^2 \,|\nu|^2 + |\delta|^2 \,|\mu|^2 +|\xi|^2 \,|\zeta|^2 \right)\nonumber \\
&\leq& \frac{3(1-|\alpha|^2)}{|\alpha|^2}\,\left(|\gamma|^2 + |\delta|^2 + \frac{1}{3} (2|\xi|^2 +|\zeta|^2) \right)\,.
\end{eqnarray}
Now, for all $s,r \geq0$ we find by using (\ref{betaestimate})
\begin{eqnarray}\label{lowerboundest}
\lefteqn{|\beta|^2+ |\gamma|^2+|\delta|^2+|\xi|^2+ |\zeta|^2}&&\nonumber \\
&\leq& (1-r)|\beta|^2+ |\gamma|^2+|\delta|^2+ (1+s)(2 |\xi|^2+ |\zeta|^2) + r |\beta|^2 \nonumber \\
&\leq& (1-r)|\beta|^2+ |\gamma|^2+|\delta|^2+(1+s)(2 |\xi|^2+ |\zeta|^2) \nonumber \\
&& + \frac{3 r (1-|\alpha|^2)}{|\alpha|^2}\,\left(|\gamma|^2 + |\delta|^2 + \frac{1}{3} (2|\xi|^2 +|\zeta|^2) \right) \nonumber \\
&=& (1-r)|\beta|^2 + \left(1+ \frac{3r(1-|\alpha|^2)}{|\alpha|^2}\right)\,\left(|\gamma|^2 + |\delta|^2\right) \nonumber \\
&& + \left( 1+ s+\frac{r(1-|\alpha|^2)}{|\alpha|^2}\right) \left(2 |\xi|^2+ |\zeta|^2\right) .
\end{eqnarray}
By choosing
\begin{eqnarray}
r&=& \frac{2 |\alpha|^2}{4 |\alpha|^2-3}\\
s&=& \frac{4 (1-|\alpha|^2)}{4|\alpha|^2-3}
\end{eqnarray}
the last expression in (\ref{lowerboundest}) coincides with $D^{(6)}$ up to a global factor $\chi$.
Both parameters $r,s$ are non-negative as long as $|\alpha|^2\geq \frac{3}{4}$.
Finally, this leads to
\begin{eqnarray}
\|P \Psi\|_{L^2}^2 &=& 1-(|\beta|^2+ |\gamma|^2+|\delta|^2+|\xi|^2+ |\zeta|^2) \nonumber \\
&\geq& 1-(r-1)D^{(6)} \nonumber \\
&\equiv& 1-\chi_{1-|\alpha|^2} D^{(6)} ,
\end{eqnarray}
with
\begin{eqnarray}
\chi_{1-|\alpha|^2}\equiv r-1 &=& \frac{3-2 |\alpha|^2}{4 |\alpha|^2-3} \nonumber \\
&=& \frac{1 + 2(1- |\alpha|^2)}{1-4(1-|\alpha|^2)} .
\end{eqnarray}
Lemma \ref{Slaterstable} states $|\alpha|^2 \geq 1-\delta$ and since $\chi$ is monotonically increasing, $\chi_{1-|\alpha|^2}\leq \chi_{\delta}$,
which finishes the proof.
\end{proof}
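The bounds of Theorem \ref{HFworks} can also be verified numerically on random states close to a single Slater determinant. The Python sketch below (a test of the statement, not part of the proof) builds the antisymmetric coefficient tensor of a random $|\Psi\rangle\in\wedge^3[\mathcal{H}_6]$, diagonalizes the $1-$RDO, re-expands the state in its natural-orbital Slater basis and checks $1-\chi_\delta D^{(6)}\leq\|P\Psi\|^2\leq 1-\tfrac{1}{2}D^{(6)}$ whenever $\delta<\tfrac{1}{4}$:

```python
import numpy as np
from itertools import combinations, permutations

rng = np.random.default_rng(2)
basis = list(combinations(range(6), 3))          # 0-based index triples

def perm_sign(p):
    # parity of the permutation bringing the triple p into increasing order
    inv = sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))
    return -1 if inv % 2 else 1

for _ in range(50):
    c = rng.normal(size=20) + 1j * rng.normal(size=20)
    c[basis.index((0, 1, 2))] += 25.0            # stay close to |1,2,3>
    c /= np.linalg.norm(c)
    # antisymmetric coefficient tensor: T[p] = sgn(p) * c_(i<j<k) / sqrt(6)
    T = np.zeros((6, 6, 6), complex)
    for b, v in zip(basis, c):
        for p in permutations(b):
            T[p] = perm_sign(p) * v / np.sqrt(6)
    rho1 = 3 * np.einsum('iab,jab->ij', T, T.conj())   # 1-RDO, trace 3
    w, U = np.linalg.eigh(rho1)
    order = np.argsort(w)[::-1]                  # occupations in decreasing order
    w, U = w[order], U[:, order]
    # re-expand the state in its natural-orbital basis
    Tn = np.einsum('ijk,ia,jb,kc->abc', T, U.conj(), U.conj(), U.conj())
    overlap = 6 * sum(abs(Tn[i]) ** 2 for i in [(0, 1, 2), (0, 3, 4), (1, 3, 5)])
    D6 = 2 - (w[0] + w[1] + w[3])
    delta = 3 - (w[0] + w[1] + w[2])
    if delta < 0.25:                             # hypothesis of the theorem
        chi = (1 + 2 * delta) / (1 - 4 * delta)
        assert 1 - chi * D6 - 1e-9 <= overlap <= 1 - D6 / 2 + 1e-9
```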
\section*{Introduction}
Operads are mathematical structures which have been intensively studied in the context of
topology and algebra \cite{LV12}, but also in combinatorics~\cite{CSLC} ---see for
example~\cite{Men15, Gir18} for general references on symmetric and non-symmetric operads,
set-operads through species, {\em etc.} In recent decades, several interesting operads on
trees have been defined. Amongst these tree operads, maybe the most studied are the pre-Lie
operad $\mathbf{PLie}$~\cite{CL01} and the nonassociative permutative operad $\mathbf{NAP}$~\cite{Liv06}.
However, it seems to us that a natural question is which operads can be
defined on graphs and what their properties are. The need for defining appropriate graph
operads comes from combinatorics, where graphs are, just like trees, natural objects to
study. It comes also from physics, where it was recently proposed to use graph operads in order
to encode the combinatorics of the renormalization of Feynman graphs in quantum field
theory~\cite{Kreimer:2000ja}.
Other graph operads have been defined for example in~\cite{Kon99,Wil05,Men15,Gir17,MV19}.
In this paper, we go further in this direction and we define, using the combinatorial
species setting~\cite{BLL98}, new graph operads. Moreover, we investigate several properties
of these operads: we describe an explicit link with the pre-Lie tree operad mentioned above,
and we study interesting (finitely generated) suboperads.
This paper is organized as follows. In Section~\ref{sec:preliminaries} we give the definitions
of species, operads and graphs as well as classical results on these objects. Moreover, we
introduce here the different notations used throughout this paper. In Section~\ref{sec:constructions}
we propose new ways of constructing species and operads. We use these new constructions
in Section~\ref{sec:graph_operads} to define and study the main operads of interest of this paper.
Section~\ref{sec:suboperads} is devoted to the study of some particularly interesting finitely generated suboperads.
\section{Definitions and reminders} \label{sec:preliminaries}
Most definitions, results and proofs of this section can be found with more details in \cite{Men15}.
We refer the reader to~\cite{BLL98}
for the theory of species and to~\cite{LV12} for the theory of operads.
In all the following, $\mathbb{K}$ is a field of characteristic zero. For any positive integer $n$, $[n]$
stands for the set $\{1,\dots,n\}$. For $V$ a vector space and $A$ a non empty finite set, we
denote by $V\times A$ the vector space $\bigoplus_{a\in A}V$.
We denote by $(v,a)$ the elements of $V\times A$; we thus have $(k_1 v_1+k_2v_2,a) =k_1(v_1,a) + k_2(v_2,a)$.
\subsection{Species}
\begin{definition}
A \textit{(positive) set species} $S$ consists of the following data.
\begin{itemize}
\item For each finite set $V$, a set $S[V]$, such that $S[\emptyset] =\emptyset$.
\item For each bijection of finite sets $\sigma: V\rightarrow V'$, a map $S[\sigma]:S[V]\rightarrow S[V']$.
These maps should be such that $S[\sigma\circ\tau] = S[\sigma]\circ S[\tau]$ and $S[\id] = \id$.
\end{itemize}
A \textit{morphism of set species} $f: R\rightarrow S$ is a collection of maps
$f_V : R[V] \rightarrow S[V]$ such that for each bijection $\sigma: V\rightarrow V'$,
$f_{V'}\circ R[\sigma] = S[\sigma]\circ f_V$.
A set species $S$ is \textit{connected} if $|S[\{v\}]| = 1$ for any singleton $\{v\}$.
\end{definition}
Switching in the previous definitions from sets to vector spaces, from maps to linear
maps and from cardinality to dimension, we obtain the definitions of \textit{(positive)
linear species, morphisms of linear species} and \textit{connected linear species}.
We denote by $\mathcal{L}$ the functor from set species to linear species defined by
$\mathcal{L}(S)[V] = \mathbb{K} S[V]$, where $\mathbb{K} S[V]$ is the free $\mathbb{K}$-vector space on $S[V]$,
and by $\mathcal{L}(f)_V$ the linear extension of $f_V$. We also write $\mathbb{K} S$ for $\mathcal{L}(S)$.
For any set species $S$ and any $w=\sum_{x\in S[V]} a_x x\in\mathbb{K} S[V]$, we call \textit{support
of $w$} the set of $x\in S[V]$ such that $a_x\not = 0$.
\begin{example}
\label{notpol}
\begin{itemize}
\item We denote by $X$ the set species defined by $X[V] = \{v\}$ if $V=\{v\}$ and $X[V] = \emptyset$ else.
\item For $V$ a non empty finite set, let $\text{Pol}[V]$ be the set (and not the module)
of polynomials on $\mathbb{Z}$ with variables in $V$.
Then $\text{Pol}$ is the set species of polynomials over $\mathbb{Z}$. When considering $\mathbb{K} \text{Pol}$,
one has to distinguish the addition of polynomials from the addition of vectors.
We will thus denote by $\oplus$ the former and keep $+$ for the latter and we will denote by
$0_V\in\text{Pol}[V]$ the constant polynomial equal to 0, keeping the notation $0$ for the null vector.
For example, $ab\oplus c$ is an element of $\text{Pol}[\{a,b,c\}]$, but $a\oplus b+c$
is a vector in $\mathbb{K} \text{Pol}[\{a,b,c\}]$ with support $\{a\oplus b, c\}$.
\item For any linear species $S$, we denote by $S^{\vee}$ the linear species defined by
$S^{\vee}[V] = S[V]^*$ and $S^{\vee}[\sigma]x = \text{sign}(\sigma)x\circ S[\sigma^{-1}]$.
If $S$ is a set species such that $S[V]=\{b_1,\dots,b_n\}$, we denote by $b_i^{\vee}$,
for $1\leq i\leq n$, the element of $\mathbb{K} S^{\vee}[V]$ defined by $b_i^{\vee}(b_j) = 1$ if
$i=j$ and $b_i^{\vee}(b_j) = 0$ else.
\end{itemize}
\end{example}
In all the following $V$ denotes a non empty finite set.
\begin{definition}
Let $R$ and $S$ be two species. We can then construct new species which are defined as follows:
$$Sum\quad (R+S)[V] = R[V]\oplus S[V],\qquad Product\quad R\cdot S[V] = \bigoplus_{V_1\sqcup V_2 = V} R[V_1]\otimes S[V_2],$$
$$Hadamard\, product \quad (R\times S)[V] = R[V]\otimes S[V],\qquad Derivative\quad R'[V] = R[V+\{\ast\}] \text{ where $\ast\not\in V$},$$
$$\text{\textit{$n$-th derivative}}\quad R^{(n)}[V] = R[V+\{\ast_1,\dots,\ast_n\}] \text{ where $\ast_1,\dots,\ast_n\not \in V$},\qquad Pointing\quad R^{\bullet}[V] = R[V]\times V,$$
$$Assembly\quad E(R)[V]= \bigoplus_{\cong}\bigotimes_{W\in V/\cong} R[W]\text{ where $\cong$ runs over the set of equivalence relations on $V$}.$$
\\We have the same definitions on set species by replacing direct sums by disjoint unions and
tensor products by Cartesian products.
\end{definition}
Note that these definitions are compatible with $\mathcal{L}$, i.e.\ $\mathcal{L}(R+S) =
\mathcal{L}(R)+\mathcal{L}(S)$, $\mathcal{L}(R\cdot S) = \mathcal{L}(R)\cdot \mathcal{L}(S)$ etc.
\subsection{Operads}
\begin{definition}
\label{op}
A \textit{(symmetric) set} (resp.\ \textit{linear}) \textit{operad} is a set
(resp.\ linear) species $\op$ together with a \textit{unit} $e: X\rightarrow \op$
(resp.\ $e: \mathbb{K} X\rightarrow \op$) and a set (resp.\ linear) species morphism
$\circ_{\ast}:\op'\cdot\op\rightarrow \op$ called
\textit{partial composition} such that the following diagrams commute
\begin{center}
\begin{tikzcd}
\op''\cdot\op^2 \arrow[r, "\circ_{\ast_1}"] \arrow[d, "\circ_{\ast_2}\circ\id\cdot\tau"] & \op'\cdot\op \arrow[d, "\circ_{\ast_2}"] & &
\op'\cdot \op'\cdot\op \arrow[r, "\circ_{\ast_1}\cdot\id"] \arrow[d, "\id\cdot\circ_{\ast_2}"] & \op'\cdot\op \arrow[d, "\circ_{\ast_2}"] \\
\op'\cdot \op \arrow[r, "\circ_{\ast_1}"] & \op & &
\op'\cdot \op \arrow[r, "\circ_{\ast_1}"] & \op
\end{tikzcd}
\end{center}
\begin{center}
\begin{tikzcd}
\op'\cdot \mathbb{K} X \arrow[r, "\op'\cdot e"] \arrow[rd, "p"] & \op'\cdot\op \arrow[d, "\circ_{\ast}"] & \mathbb{K} X'\cdot \op \arrow[l, "e'\cdot \op" swap]\arrow[dl, "\cong" swap]\\
& \op
\end{tikzcd}
\end{center}
where $\tau_V:x\otimes y\in \op^2[V] \mapsto y\otimes x\in \op^2[V]$ and
$p_V: x\otimes v \mapsto \op[\ast\mapsto v](x)$ with $\ast\mapsto v$ the
bijection that sends $\ast$ on $v$ and is the identity on $V\setminus\{v\}$.
An \textit{operad morphism} is a species morphism compatible with unities and
partial compositions.
\end{definition}
Note also that if $(S,e,\circ_{\ast})$ is a set operad, then extending $e$ and
$\circ_{\ast}$ linearly turns $(\mathbb{K} S,e,\circ_{\ast})$ into a linear operad. In all
the following, $e$ will often be trivial and we will not mention it.
From now on we use species and operad for linear species
and linear operad and only specify when we work with their set
counterparts.
\begin{example}
\label{exop}
\begin{itemize}
\item The {\em singleton set species} $E$ defined by $E[V] = \{V\}$ naturally has
a set operad structure given by $\{V_1+\{\ast\}\}\circ_{\ast}\{V_2\} = \{V_1+V_2\}$.
\item The {\em identity set species} $Id$ given by $Id[V] = V$ has a set operad
structure given by $v\circ_{\ast} w = v|_{\ast \leftarrow w}$ which is equal
to $v$ if $v\not = \ast$ and equal to $w$ else.
\item Let us recall the following operad structure on rooted trees: the units are the one
vertex trees and for a rooted tree $t_1$ with vertex set $V_1+\{\ast\}$ and a
rooted tree $t_2$ with vertex set $V_2$ the partial composition $t_1\circ_{\ast} t_2$
is the sum over all trees obtained as follows.
\begin{enumerate}
\item Consider the forest obtained by removing $\ast$ from $t_1$ and take
the union with $t_2$.
\item Add an edge between the parent of $\ast$ in $t_1$ and the root of $t_2$.
\item For each child of $\ast$ in $t_1$, add an edge between this vertex
and any vertex of $t_2$.
\end{enumerate}
This operad is called the pre-Lie operad~\cite{CL01} and we will denote it by $\mathbf{PLie}$.
\item The set species of polynomials $\text{Pol}$ has a natural partial
composition given by the composition of polynomials: for $V_1=\{v_1,\dots,v_k\}$
and $V_2=\{v_1',\dots,v_l'\}$ disjoint sets and $p_1(v_1,\dots,v_k,\ast)\in \text{Pol}'[V_1]$
and $p_2(v_1',\dots,v_l')\in \text{Pol}[V_2]$ define
\begin{equation}
(p_1\circ_{\ast} p_2)(v_1,\dots,v_k,v_1',\dots,v_l') = p_1|_{\ast \leftarrow p_2}
= p_1(v_1,\dots, v_{k},p_2(v_1',\dots,v_l'))\in \text{Pol}[V_1+V_2].
\end{equation}
One can directly check that this partial composition satisfies the commutative diagrams of
Definition \ref{op}. This turns $\text{Pol}$ into a set operad where the units are
the singleton polynomials $v\in\text{Pol}[\{v\}]$. As mentioned previously,
the linear extension of $\circ_{\ast}$ then turns $\mathbb{K}\text{Pol}$ into a linear operad.
Both the set operads $E$ and $Id$ can be seen as set sub-operads of $\text{Pol}$
respectively by the monomorphisms $\{V\}\mapsto \oplus_{v\in V}v$ and $v\mapsto v$.
The operad $\mathbb{K} E$ can also be seen as a sub-operad of $\mathbb{K}\text{Pol}$ by the monomorphism
$\{V\}\mapsto \sum_{v\in V}v$ (which is not the linear extension of the previous monomorphism).
\end{itemize}
\end{example}
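Since partial composition in $\text{Pol}$ is plain substitution of a variable by a polynomial, the commuting diagrams of Definition~\ref{op} can be checked mechanically on small instances. The following Python sketch (an illustration only; representations and names are ours) encodes a polynomial as a dictionary mapping monomials, i.e.\ sorted tuples of (variable, exponent) pairs, to integer coefficients:

```python
from itertools import product

def pmul(p, q):
    """Product of two polynomials in dict-of-monomials form."""
    r = {}
    for (m1, c1), (m2, c2) in product(p.items(), q.items()):
        e = dict(m1)
        for v, k in m2:
            e[v] = e.get(v, 0) + k
        key = tuple(sorted(e.items()))
        r[key] = r.get(key, 0) + c1 * c2
    return {m: c for m, c in r.items() if c != 0}

def ppow(p, k):
    r = {(): 1}            # the constant polynomial 1
    for _ in range(k):
        r = pmul(r, p)
    return r

def compose(p1, star, p2):
    """Partial composition p1 o_star p2: substitute p2 for the variable star."""
    r = {}
    for m, c in p1.items():
        k = dict(m).get(star, 0)
        rest = {v: e for v, e in m if v != star}
        for m2, c2 in ppow(p2, k).items():
            e = dict(rest)
            for v, kk in m2:
                e[v] = e.get(v, 0) + kk
            key = tuple(sorted(e.items()))
            r[key] = r.get(key, 0) + c * c2
    return {m: c for m, c in r.items() if c != 0}

# p1 = a^2*s1 + a, p2 = b*s2, p3 = c + c^2, on pairwise disjoint variable sets
p1 = {(('a', 2), ('s1', 1)): 1, (('a', 1),): 1}
p2 = {(('b', 1), ('s2', 1)): 1}
p3 = {(('c', 1),): 1, (('c', 2),): 1}

# sequential axiom: (p1 o_s1 p2) o_s2 p3 == p1 o_s1 (p2 o_s2 p3)
assert compose(compose(p1, 's1', p2), 's2', p3) == compose(p1, 's1', compose(p2, 's2', p3))

# parallel axiom: composing at two distinct variables commutes
q = {(('s1', 1), ('s2', 1)): 1}      # q = s1 * s2
pb = {(('b', 2),): 1}                # pb = b^2
assert compose(compose(q, 's1', pb), 's2', p3) == compose(compose(q, 's2', p3), 's1', pb)
```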
An \textit{ideal} of an operad $\mathcal{O}$ is a subspecies $S$ such that the image of the
products $\mathcal{O}'\cdot S$ and $S'\cdot\mathcal{O}$ by the partial composition maps are in $S$.
The \textit{quotient species} $\mathcal{O}/ S$ defined by $(\mathcal{O}/S)[V] = \mathcal{O}[V]/S[V]$ is
then an operad with the natural partial composition and unit.
We now need to recall the notion of free operad~\cite{Men15}. For $S$ a set species define the
{\em free set operad} $\mathbf{Free}_S$ over $S$ by $\mathbf{Free}_S[V]$ being the set of
trees on $V$ enriched with elements in $S$. Such a tree $\mathcal{T}\in \mathbf{Free}_S[V]$
is defined as follows.
\begin{itemize}
\item The leaves of $\mathcal{T}$ are the elements of $V$.
\item Each internal vertex $u$ of $\mathcal{T}$ is labelled with the set $B_u$
of leaves that are descendants of $u$ in $\mathcal{T}$.
\item There is an element of $S[\pi_u]$ attached to each fiber (set of children) of each
internal vertex $u$.
\end{itemize}
\noindent The set $\pi_u$ in the third item is defined as follows. To each leaf $v$ we associate
the set $B_v=\{v\}$. Then $\pi_u$ is the set $\{B_w\,,\,w\in c(u)\}$, where $c(u)$ is the set
of children of $u$.
The partial composition of $\mathbf{Free}_S$, which we denote by $\circ^{\xi}_{\ast}$ in order
not to confuse it with an already existing operad structure on $S$,
is the grafting of trees: for any disjoint sets $V_1$ and $V_2$ with $\ast\in V_1$,
and $\mathcal{T}_1 \in\mathbf{Free}_S[V_1]$ and $\mathcal{T}_2 \in \mathbf{Free}_S[V_2]$,
$\mathcal{T}_1\circ^{\xi}_{\ast} \mathcal{T}_2$ is the tree obtained by grafting
$\mathcal{T}_2$ on the leaf $\ast$ of $\mathcal{T}_1$ and updating the labels
accordingly, i.e.\ for each vertex $u$ of $\mathcal{T}_1$ with $\ast$ as descendant,
update $B_u$ to $B_u-\{\ast\}+V_2$.
In the linear case, for $S$ a species, define the {\em free operad} $\mathbf{Free}_S$ over $S$ by $\mathbf{Free}_S[V]$
being the linear span of the set of trees on $V$ enriched with elements in $S$. Such trees
are defined in the same way as in the set species case and the partial composition
$\circ^{\xi}_{\ast}$ is also the grafting of trees.
Remark that for $S$ a set species we have that $\mathbb{K}\mathbf{Free}_{S} = \mathbf{Free}_{\mathbb{K} S}$.
For any $k \geq 0$, we denote by $\mathbf{Free}_S^{(k)}$ the subspecies of $\mathbf{Free}_S$ of trees
with exactly $k$ internal nodes.
If $R$ is a subspecies of $\mathbf{Free}_S$, we denote by $(R)$ the smallest ideal
containing $R$ and write that $(R)$ is \textit{generated by $R$}.
\begin{definition}
Let $G$ be a species and $R$ be a subspecies of $\mathbf{Free}_G$. Let
$\mathrm{Ope}(G,R) = \mathbf{Free}_G/(R)$. The operad $\mathrm{Ope}(G,R)$ is \textit{binary} if the species $G$
of generators is concentrated in cardinality $2$ ({\em i.e.}, for all $n \ne 2$, $G[[n]] =
\{0\}$). This operad is \textit{quadratic} if $R$ is a subspecies of
$\mathbf{Free}_G^{(2)}$.
\end{definition}
\begin{definition}
Let $\mathcal{O} =\mathrm{Ope}(G,R)$ be a binary quadratic operad. Define
the pairing $\langle -, -\rangle$ on $\mathbf{Free}_{G^{\vee}}^{(2)}\times\mathbf{Free}_{G}^{(2)}$ by
\begin{equation}
\langle f_1\circ_{\ast} f_2, x_1\circ_{\ast} x_2 \rangle = f_1(x_1)f_2(x_2).
\end{equation}
The \textit{Koszul dual} of $\mathcal{O}$ is then the operad $\mathcal{O}^!=\mathrm{Ope}(G^{\vee},R^{\bot})$
where $R^{\bot}$ is the orthogonal of $R$ for $\langle -,-\rangle$.
\end{definition}
When $\mathcal{O}$ is quadratic and its Koszul complex is acyclic, $\mathcal{O}$ is
a Koszul operad~\cite{LV12}. In this case, the Hilbert series of $\mathcal{O}$ and of its Koszul
dual are related by the identity
\begin{equation} \label{hdual}
\mathcal{H}_{\mathcal{O}}(-\mathcal{H}_{\mathcal{O}^!}(-t)) = t.
\end{equation}
\subsection{Graphs and hypergraphs}
\label{graph}
In this subsection we present a formalism to define graphs and hypergraphs and
their ``multi'' variants.
A \textit{multiset} $m$ over $V$ is a set of couples $\{(v,m(v))\,|\, v\in V\}$ in
$V\times\mathbb{N}^*$. We denote by $D(m)=V$ the {\em domain} of $m$. We say that
$v$ is in $m$, and write $v\in m$, if $v\in D(m)$. For any element $v$ not in
the domain of $m$, we have $m(v) = 0$.
We denote by $\m(V)$ the set of multisets with domain in $\mathcal{P}(V)$, $\m_k(V)$ the set
of elements of $\m(V)$ of cardinality $k$ (the cardinality of a multiset $m$ over
$V$ being $\sum_{v\in V} m(v)$) and $\m(V)^*$ the set of multisets with domain in
$\mathcal{P}(V)^* = \mathcal{P}(V)\setminus\{\emptyset\}$. We identify non-empty sets with multisets
whose multiplicities are all equal to 1.
For $m$ a multiset and $V$ a set, we set $m\cap V = m\cap (V\times\mathbb{N}^*)$.
If $m'$ is another multiset, we call the union of $m$ and $m'$ the multiset
$\{(v,m(v)+m'(v))\,|\, v\in D(m)\cup D(m')\}$.
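For illustration (and not as part of the formalism), multisets and the multiplicity-adding union above can be modelled by Python's `collections.Counter`, whose addition is exactly this union:

```python
from collections import Counter

# A multiset m1 over {'a','b'} with m1(a) = 2, m1(b) = 1.
m1 = Counter({'a': 2, 'b': 1})
m2 = Counter({'b': 3, 'c': 1})

union = m1 + m2                        # union adds multiplicities
assert union == Counter({'a': 2, 'b': 4, 'c': 1})

assert set(union) == {'a', 'b', 'c'}   # the domain D(m)
assert sum(union.values()) == 7        # the cardinality of the multiset

# the restriction m ∩ V keeps only the elements lying in V
restriction = Counter({v: k for v, k in m1.items() if v in {'a', 'c'}})
assert restriction == Counter({'a': 2})
```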
\begin{definition}
Let $V$ be a set. A \textit{multi-hypergraph} over $V$ is a multiset with domain
in $\m(V)^*$. In this context the elements of $V$ are called \textit{vertices},
the elements of a multi-hypergraph are called \textit{edges} and the elements of
an edge are called its \textit{ends}. A vertex contained in the domain of no edge
is called an \textit{isolated vertex}. We denote by $\mathbf{MHG}$ the set species of multi-hypergraphs.
A \textit{hypergraph} is a multi-hypergraph whose edges are sets. A \textit{multigraph}
is a multi-hypergraph whose edges have cardinality 2. A \textit{graph} is a
multi-hypergraph which is both a hypergraph and a multigraph, and which is itself
a set (no repeated edges). Denote by $\mathbf{HG}$, $\mathbf{MG}$ and $\mathbf{G}$ the set species corresponding to these structures.
We also denote by $\mathbf{F}$ the species of \textit{forests}, which is the subspecies
of $\mathbf{G}$ such that for every $f\in \mathbf{F}[V]$ there are no sequences $e_1,\dots, e_k$
of distinct edges such that $e_i\cap e_{i+1} \not = \emptyset$ for $1\leq i< k$
and $e_k\cap e_1 \not = \emptyset$.
Finally, for a subspecies $S$ of $\mathbf{MHG}$ we denote by $S_c$ its sub-species
of connected elements, that is to say elements such that for every pair of
vertices $v,v'$, there is a sequence of edges $e_1,\dots, e_k$ such that
$v\in e_1$, $v'\in e_k$ and $e_i\cap e_{i+1}\not = \emptyset$. We denote
by $\mathbf{T}=\mathbf{F}_c$ the species of \textit{trees}.
\end{definition}
Note that for any sub-species $S$ of $\mathbf{MHG}$ we have that $E(S_c) = S$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.5]{fig/he1}
\hspace{2cm}
{\includegraphics[scale=1.5]{fig/he2}}
\hspace{2cm}
\includegraphics[scale=1.5]{fig/he3}
\caption{Three edges of cardinality 3.}
\label{e3}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.5]{fig/mhg1}
\caption{A multi-hypergraph over $\{a,b,c,d,e,f,g\}$.}
\label{mhg}
\end{center}
\end{figure}
\begin{example}
We represent the three edges $\{(1,1),(2,1),(3,1)\}$, $\{(1,2),(2,1)\}$
and $\{(1,3)\}$ in Figure \ref{e3} and the multi-hypergraph
\begin{equation}
\{(\{(a,2),(b,1),(d,1)\},1),(\{(b,1),(c,1),(e,1)\},1),(\{(e,4)\},1),(\{(e,1),(f,1)\},1),(\{(d,1),(f,1)\},2)\}
\end{equation}
over $\{a,b,c,d,e,f,g\}$ in Figure \ref{mhg}.
\end{example}
\begin{remark} \label{graphpol}
The set species $\mathbf{MHG}$ is isomorphic to the sub-species of $\text{Pol}$
of polynomials with constant term equal to 0. This isomorphism is defined
as follows. For $V$ a finite set:
\begin{itemize}
\item the empty graph $\emptyset_V\in \mathbf{MHG}[V]$ is sent on the null polynomial $0_V$,
\item an edge $e$ is sent on the monomial $\prod_{v\in e} v^{e(v)}$,
\item an element $h\in \mathbf{MHG}[V]$ is sent on the polynomial $\bigoplus_{e\in h} h(e)\,e$ (each edge counted with its multiplicity).
\end{itemize}
We often identify $\mathbf{MHG}$ with this sub-species. This identification
is very useful to do computations since it is easier
to formally write operations on polynomials than on graphs. With this
identification, hypergraphs can be seen as polynomials where each variable
appears at most once in each monomial and multigraphs as homogeneous polynomials of degree 2.
\end{remark}
\begin{example}
With this identification, the multi-hypergraph of Figure \ref{mhg} is written
$a^2bd\oplus bce\oplus e^4\oplus ef\oplus df\oplus df$.
\end{example}
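The identification of Remark~\ref{graphpol} is easy to mechanize. The following Python sketch (helper names are ours, for illustration only) encodes each edge as the word of its monomial and recovers the polynomial of Figure~\ref{mhg} as a multiset of monomials:

```python
from collections import Counter

def monomial(edge):
    """The monomial attached to an edge: an end v with multiplicity k gives v^k."""
    return ''.join(v * k for v, k in sorted(edge.items()))

# The multi-hypergraph of Figure mhg, as a multiset of (edge, multiplicity) pairs.
edges = [
    (Counter({'a': 2, 'b': 1, 'd': 1}), 1),
    (Counter({'b': 1, 'c': 1, 'e': 1}), 1),
    (Counter({'e': 4}), 1),
    (Counter({'e': 1, 'f': 1}), 1),
    (Counter({'d': 1, 'f': 1}), 2),   # a double edge
]

h = Counter()
for edge, mult in edges:
    h[monomial(edge)] += mult

# matches a^2bd + bce + e^4 + ef + df + df
assert h == Counter({'aabd': 1, 'bce': 1, 'eeee': 1, 'ef': 1, 'df': 2})
```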
\section{Species and operad construction}\label{sec:constructions}
The goal of this section is to define new constructions of species and operads
from already existing structures.
\begin{definition}
Let $A$ be a set and $S$ be a (resp set) species. An \textit{$A$-augmentation}
of $S$ is a (resp set) species $A\text{-}S$ such that $A\text{-}S[V] \cong S[V\times A]$ for every finite set $V$.
\end{definition}
\begin{example}\label{defor}
Let $A$ be a set. Instead of considering an $A$-augmented multi-hypergraph on
$V$ as a multi-hypergraph on $V\times A$, we consider it as a multi-hypergraph on
$V$ where the ends of the edges are labelled with elements of $A$. This is
illustrated in Figure \ref{augm}. In particular, the set species of oriented
multigraphs $\mathbf{MG}_{or}$ is the set species $\{\_,>\}\text{-}\mathbf{MG}$
of multigraphs where each end of each edge is either a non-labelled end (i.e.\ labelled by
$\_$) or labelled with an arrow head $>$.
Instead of seeing the variables of a polynomial in $A\text{-Pol}[V]$ as couples
$(v,a)\in V\times A$, we consider them as elements of $V$ indexed by elements of $A$: $v_a$.
Note that the identification presented in subsection~\ref{graph}
also holds between augmented multi-hypergraphs and augmented polynomials.
\end{example}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=1.5]{fig/augm}
\caption{An $A$-augmented multi-hypergraph, drawn as a multi-hypergraph whose edge ends are labelled by elements of $A$.}
\label{augm}
\end{center}
\end{figure}
\begin{proposition}
\label{semiprod}
Let $S$ be a (resp.\ set) species and $\op$ a set operad. Let $\varphi$ be a
collection of linear maps (resp.\ maps) $\varphi_{V_1+\{\ast\},V_2}:
(S[V_1+\{\ast\}]\otimes S[V_2]) \times \op[V_2] \rightarrow S[V_1+V_2]$ (resp.\
$(S[V_1+\{\ast\}]\times S[V_2])\times \op[V_2] \rightarrow S[V_1+V_2]$),
where $V_1$ and $V_2$ are disjoint, such that:
\begin{itemize}
\item for $x\in S[V_1+\{\ast_1\}]$, $(y,f)\in S\times\op[V_2+\{\ast_2\}]$
and $(z,g)\in S\times\op[V_3]$:
\begin{equation}
\varphi_{V_1+\{\ast_1\},V_2+V_3}(x,\varphi_{V_2+\{\ast_2\},V_3}(y,z,g),f\circ_{\ast_2}g) =
\varphi_{V_1+V_2+\{\ast_2\},V_3}(\varphi_{V_1+\{\ast_1\},V_2+\{\ast_2\}}(x,y,f),z,g)
\end{equation}
\item for $x\in S[V_1+\{\ast_1,\ast_2\}]$, $(y,f)\in S\times\op[V_2]$
and $(z,g)\in S\times\op[V_3]$:
\begin{equation}
\varphi_{V_1+V_2+\{\ast_2\},V_3}(\varphi_{V_1+\{\ast_1,\ast_2\},V_2}(x,y,f),z,g) =
\varphi_{V_1+V_3+\{\ast_1\},V_2}(\varphi_{V_1+\{\ast_1,\ast_2\},V_3}(x,z,g),y,f)
\end{equation}
\item there exists a map $e:X\rightarrow S$ such that for $(x,f)\in S\times\op[V]$
and $(y,g)\in S\times\op[V+\{\ast\}]$ we have $\varphi_{\{\ast\},V}(e(\ast),x,f) = x$
and $\varphi_{V+\{\ast\}, \{v\}}(x,e(v),e_{\op}(v)) = S[\tau_{\ast,v}](x)$ where
$e_{\op}$ is the unit of $\op$ and $\tau_{\ast,v}$ is the permutation which switches $\ast$ and $v$.
\end{itemize}
Then the partial composition $\circ^{\varphi}_{\ast}$ defined by
\begin{equation}\begin{split}
\circ^{\varphi}_{\ast}: S\times\op[V_1+\{\ast\}]\otimes S\times\op[V_2] &\rightarrow S\times\op[V_1+V_2] \\
(x,f)\otimes(y,g) &\mapsto (\varphi(x,y,g),f\circ_{\ast}g)
\end{split}\end{equation}
makes $S\times\op$ a (resp.\ set) operad with unit $e$. We call this operad the
\textit{semidirect product of $S$ and $\op$ over $\varphi$} and we denote it by $S\ltimes_{\varphi}\op$.
\end{proposition}
\begin{proof}
This is a rewriting of the axioms of Definition \ref{op}.
\end{proof}
When it is clear from the context we will not mention $\varphi$, and just write
semidirect product of $S$ and $\op$ and denote it by $S\ltimes \op$. The
goal of this construction is to give an operad structure to $S$ using
the already known set operad structure on $\op$.
\begin{example}
Let $C$ be a finite set and denote by $C_{2+}$ the set species defined by
$C_{2+}[V] = C$ if $|V|>1$ and $C_{2+}[V] = \emptyset$ else. The species
$\mathcal{C}=X+C_{2+}$ has a set operad structure with partial composition
defined by, for $x\in \mathcal{C}'[V_1]$ and $y\in \mathcal{C}[V_2]$: $x\circ_{\ast} y = x$
if $V_1\not = \emptyset$ and $x\circ_{\ast} y = y$ else. Let $\mathcal{F}^C = X +
\mathcal{F}_{2+}^C$ be the set species of maps with codomain $C$:
$\mathcal{F}^C[V]=\{f:V\rightarrow C\}$ for $|V| > 1$. Then we have the
semidirect product $\mathbb{K}\mathcal{F}^C\ltimes_{\varphi} \mathcal{C}$ given by,
for $V_1\not = \emptyset$, $|V_2|>1$ and $f\in \mathcal{F}^C[V_1+\{\ast\}]$
and $(g,c)\in \mathcal{F}^C\times \mathcal{C}[V_2]$: $\varphi(f,g,c) = 0$
if $f(\ast) \not = c$ and $\varphi(f,g,c)(v) = \left\{\begin{array}{rl}
f(v) & \text{if $v\in V_1$} \\
g(v) & \text{if $v\in V_2$}\end{array}\right.$ else. When $V_1=\emptyset$
or $|V_2|=1$ the partial composition is implied by the definition of the unit.
We call this operad the \textit{$C$-coloration} operad. Alone, one can see
an element of $(f,c)\mathcal{F}^C\ltimes C[V]$ (with $|V|>1$) as a corolla
on $V$ with its root colored by $c$ and its leaves $v\in V$ colored by $f(v)$.
The partial composition consists then in grafting two corollas if the root and
the leaf on which it must be grafted share the same colors. However this operad is
used more frequently in a Hadamard product with another operad as a way to color it.
\end{example}
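The color-matching rule of this partial composition admits a minimal sketch in Python (an illustration with names of our choosing, the maps $f$ being dictionaries):

```python
def compose(elt1, star, elt2):
    """Partial composition in the C-coloration operad.

    An element is a pair (f, c): f maps each vertex to its leaf color
    and c is the root color.  Returns None to stand for the null vector.
    """
    f1, c1 = elt1
    f2, c2 = elt2
    if f1[star] != c2:           # the leaf and root colors must agree
        return None
    f = {v: col for v, col in f1.items() if v != star}
    f.update(f2)                 # graft the second corolla at the leaf star
    return (f, c1)

x = ({'a': 'red', '*': 'green'}, 'red')
y = ({'b': 'green', 'c': 'red'}, 'green')
assert compose(x, '*', y) == ({'a': 'red', 'b': 'green', 'c': 'red'}, 'red')
assert compose(x, '*', ({'b': 'green'}, 'red')) is None   # mismatched colors
```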
\begin{definition}
Let $A$ be a set and $\op$ be a (resp.\ set) operad with unit $e$.
The set species of \textit{functions from $A$ to $\op$} is defined by
$\mathcal{F}_A^{\op}[V] = \{f:A\rightarrow \op[V]\}$. This set species has
a set operad structure with the elements $f:A \rightarrow \{e(v)\}$ in
$\mathcal{F}_A^{\op}[\{v\}]$ as units and partial composition defined by
$f_1\circ_{\ast} f_2(a) = f_1(a)\circ_{\ast} f_2(a)$.
\end{definition}
Note that if $A$ is a singleton then $\mathcal{F}_A^{\op} \cong \op$.
Let $A,B,C,D$ be four sets such that $A$ and $B$ are disjoint and $f:A\rightarrow C$
and $g:B\rightarrow D$ two maps. We denote by $f\uplus g$ the map from $A\sqcup B$
to $C\cup D$ defined by $f\uplus g(a) = f(a)$ for $a\in A$ and $f\uplus g(b) = g(b)$
for $b\in B$.
\begin{proposition}
Let $A$ and $B$ be two disjoint sets and $\op_1$ and $\op_2$ be two operads.
Then the set species $\mathcal{F}_A^{\op_1}\uplus\mathcal{F}_B^{\op_2}$ defined by
$\mathcal{F}_A^{\op_1}\uplus\mathcal{F}_B^{\op_2}[V] = \{f\uplus g\,|\, f\in\mathcal{F}_A^{\op_1}, g\in\mathcal{F}_B^{\op_2}\}$
is a sub-operad of $\mathcal{F}_{A\sqcup B}^{\op_1+\op_2}$.
\end{proposition}
\begin{proof}
Since $A$ and $B$ are disjoint, the partial composition is well defined and
stable on $\mathcal{F}_A^{\op_1}\uplus\mathcal{F}_B^{\op_2}$.
\end{proof}
\section{Graph operads}\label{sec:graph_operads}
In this section we use the construction of the previous section to define operad
structures on $\mathbb{K} \mathbf{MHG}$ and its sub-species.
\subsection{Graph insertion operads}
Recall from Example \ref{notpol} that we denote by $\oplus$ the addition of
polynomials and $0_{V}$ the zero polynomials in order to distinguish them from
the addition of vectors and the null vector. As announced in
Remark \ref{graphpol}, we identify the elements of $\mathbf{MHG}$ with polynomials
with null constant term. We also identify $A$-augmented elements with
polynomials with variables indexed by $A$.
We now consider that the addition and multiplication of polynomials
distribute over the addition of vectors.
\begin{theorem}
\label{propins}
Let $A$ be a set. Define the collection of maps $\varphi=\{\varphi_{V_1+\{\ast\},V_2}:
(\mathbb{K} A\text{-}\mathbf{MHG}[V_1+\{\ast\}]\otimes\mathbb{K} A\text{-}\mathbf{MHG}[V_2])\times \mathcal{F}_A^{\mathbb{K} \mathbf{MHG}}[V_2]
\rightarrow \mathbb{K} A\text{-}\mathbf{MHG}[V_1+V_2]\}_{V_1\cap V_2=\emptyset}$ by
\begin{equation}
\varphi(h_1,h_2,f) = h_1|_{\{\ast_a\leftarrow f(a)_a\}}\oplus h_2,
\end{equation}
where for a sum of polynomials $\sum P$, $(\sum P)_a = \sum P_a$ is the same sum
of polynomials but with all the variables indexed by $a$.
We can then do the semidirect product of $\mathbb{K} A\text{-}\mathbf{MHG}$ and $\mathcal{F}_A^{\mathbb{K} \mathbf{MHG}}$ over $\varphi$.
\end{theorem}
We call any operad isomorphic to a sub-operad of $\mathbb{K} A\text{-}\mathbf{MHG}\ltimes\mathcal{F}_A^{\mathbb{K}\mathbf{MHG}}$
a {\em graph insertion} operad. The idea is to give a general construction of
operads on (multi-)(hyper)graphs where the partial composition of two elements
$g_1\circ_{\ast}g_2$ is given by:
\begin{enumerate}
\item take the disjoint union of $g_1$ and $g_2$,
\item remove the vertex $\ast$ from $g_1$,
\item connect \underline{independently} each loose end of $g_1$ to $g_2$ in a certain way.
\end{enumerate}
\noindent What we mean by independently is that the way of connecting one end does not depend on
how we connect the other ends. Note that the ``certain way'' in which an end can be
connected may include duplication of edges and augmentation of the number of vertices of edges.
Examples are given after the proof of Theorem \ref{propins}.
\begin{proof}
The linearity of $\varphi$ follows from the fact that the addition and multiplication of
polynomials distribute over the addition of vectors. We need to verify that $\varphi$
satisfies the three items of Proposition \ref{semiprod}. The first two items are direct
computations over polynomials:
\begin{equation}\begin{split}
\varphi_{V_1+\{\ast_1\},V_2+V_3}(h_1,\varphi_{V_2+\{\ast_2\},V_3}(h_2,h_3,g),f\circ_{\ast_2}g)
&= h_1|_{\{\ast_{1a}\leftarrow f\circ_{\ast_2} g(a)_a\}}\oplus h_2|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_3 \\
&= h_1|_{\{\ast_{1a}\leftarrow f(a)_a\circ_{\ast_2} g(a)_a\}}\oplus h_2|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_3 \\
&= h_1|_{\{\ast_{1a}\leftarrow f(a)_a\}}|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_2|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_3 \\
&= (h_1|_{\{\ast_{1a}\leftarrow f(a)_a\}}\oplus h_2)|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_3 \\
&=\varphi_{V_1+V_2+\{\ast_2\},V_3}(\varphi_{V_1+\{\ast_1\},V_2+\{\ast_2\}}(h_1,h_2,f),h_3,g)
\end{split}\end{equation}
\begin{equation}\begin{split}
\varphi_{V_1+V_2+\{\ast_2\},V_3}(\varphi_{V_1+\{\ast_1,\ast_2\},V_2}(h_1,h_2,f),h_3,g)
&= (h_1|_{\{\ast_{1a}\leftarrow f(a)_a\}}\oplus h_2)|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_3 \\
&= h_1|_{\{\ast_{1a}\leftarrow f(a)_a\}}|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_2\oplus h_3 \\
&= h_1|_{\{\ast_{2a}\leftarrow g(a)_a\}}|_{\{\ast_{1a}\leftarrow f(a)_a\}}\oplus h_3\oplus h_2 \\
&= (h_1|_{\{\ast_{2a}\leftarrow g(a)_a\}}\oplus h_3)|_{\{\ast_{1a}\leftarrow f(a)_a\}}\oplus h_2 \\
&= \varphi_{V_1+V_3+\{\ast_1\},V_2}(\varphi_{V_1+\{\ast_1,\ast_2\},V_3}(h_1,h_3,g),h_2,f).
\end{split}\end{equation}
For the last item, let $e: X\rightarrow A\text{-}\mathbf{MHG}$ be defined by $e(v) = \emptyset_{\{v\}}$.
We then have, with $e_{\mathcal{F}}$ the unit of $\mathcal{F}_A^{\mathbf{MHG}}$:
\begin{equation}
\varphi_{\{\ast\},V}(e(\ast),h,f) = \emptyset_{\{\ast\}}|_{\{\ast_a \leftarrow f(a)_a\}}\oplus h = h.
\end{equation}
Moreover, we have:
\begin{equation}\begin{split}
\varphi_{V+\{\ast\}, \{v\}}(h,e(v),e_{\mathcal{F}}(v))
&= h|_{\{\ast_a \leftarrow e_{\mathcal{F}}(v)(a)_a\}}\oplus\emptyset_{\{v\}} \\
&= h|_{\{\ast_a \leftarrow v_a\}} = A\text{-}\mathbf{MHG}[\tau_{(\ast,v)}](h).
\end{split}\end{equation}
This concludes the proof.
\end{proof}
In all the following, when considering a semidirect product of a sub-species
of $\mathbb{K} A\text{-}\mathbf{MHG}$ and a sub-operad of $\mathcal{F}_A^{\mathbf{MHG}}$, this product
is over the map $\varphi$ defined in Theorem \ref{propins}. We
will hence omit the index $\varphi$.
From now on we denote by $\sum V$ the sum $\sum_{v\in V} v$ in order
to slightly lighten the notations.
Recall from Example \ref{exop} that we have natural embeddings of
$E$ and $Id$ in $\text{Pol}$ and a natural embedding of $\mathbb{K} E$ in $\mathbb{K}\text{Pol}$.
Since the images of these embeddings have null constant term, they land in $\mathbf{MHG}$.
\begin{example}
$\mathbf{G}^{\bullet}$ has a natural set operad structure given by
$\mathbf{G}^{\bullet} \cong \mathbf{G}\times Id \cong \{0\}\text{-}\mathbf{G}\ltimes\mathcal{F}_{\{0\}}^{Id}$.
For $(g_1,v_1)$ and $(g_2,v_2)$ two pointed graphs the partial composition
$(g_1,v_1)\circ_{\ast} (g_2,v_2)$ is then equal to $(g_3,v_1|_{\ast\leftarrow v_2})$
where $g_3$ is the graph obtained by connecting all the ends attached to $\ast$ to $v_2$. More formally:
\begin{equation}\begin{split}
(g_1,v_1)\circ_{\ast} (g_2,v_2) &= (g_1|_{\ast\leftarrow v_2}\oplus g_2, v_1|_{\ast\leftarrow v_2}) \\
&= (\mathbf{G}[\tau_{\ast,v_2}](g_1)\oplus g_2,v_1|_{\ast\leftarrow v_2}).
\end{split}\end{equation}
For instance, one has:
\begin{equation}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](s)at(0,0){$\ast$};
\node[RootGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\draw[EdgeGraph](a)--(s);
\draw[EdgeGraph](s)--(b);
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=1]
\node[RootGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(1,0){$d$};
\draw[EdgeGraph](c)--(d);
\end{tikzpicture}
\enspace = \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[RootGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](c)--(b);
\draw[EdgeGraph](c)--(d);
\end{tikzpicture}.
\end{equation}
Remark that the set operad $\mathbf{NAP}$~\cite{Liv06} is a set sub-operad of the operad above
and hence is a graph insertion set operad.
\end{example}
\begin{example}
$\mathbf{G}$ has a natural set operad structure given by
$\mathbf{G}\cong \mathbf{G}\times E\cong \{0\}\text{-}\mathbf{G}\ltimes\mathcal{F}_{\{0\}}^{E}$.
For $g_1$ and $g_2$ two graphs, the partial composition $g_1\circ_{\ast} g_2$
is then the graph obtained by adding an edge between each neighbour of
$\ast$ and each vertex of $g_2$. More formally, for $g_1\in \mathbf{G}'[V_1]$ and $g_2\in \mathbf{G}[V_2]$:
\begin{equation}\begin{split}
g_1\circ_{\ast} g_2 &= g_1|_{\ast \leftarrow \oplus_{v\in V_2} v}\oplus g_2\\
&= g_1\cap V_1^2\oplus\bigoplus_{v\in n(\ast)}v(\bigoplus_{v'\in V_2} v')\oplus g_2 \\
&= g_1\cap V_1^2\oplus\bigoplus_{v\in n(\ast)}\bigoplus_{v'\in V_2}vv'\oplus g_2,
\end{split}\end{equation}
where $n(\ast)$ is the set of neighbours of $\ast$. Note that we also
consider $g_1$ as a set of edges in order to write $g_1\cap V_1^2$
for the set of edges of $g_1$ not containing $\ast$. For instance,
one has:
\begin{equation}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](s)at(0,0){$\ast$};
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\draw[EdgeGraph](a)--(s);
\draw[EdgeGraph](s)--(b);
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=1]
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(1,0){$d$};
\draw[EdgeGraph](c)--(d);
\end{tikzpicture}
\enspace = \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](c)--(b);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](b)--(d);
\end{tikzpicture}.
\end{equation}
\end{example}
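For simple graphs, this partial composition is short to implement. The following Python sketch (an illustration with names of our choosing, edges as 2-element frozensets) reproduces the displayed instance:

```python
def compose(g1, star, g2, V2):
    """g1 o_star g2: drop star, then join every neighbour of star in g1
    to every vertex of g2, and keep the edges of g2."""
    neighbours = {v for e in g1 if star in e for v in e if v != star}
    edges = {frozenset(e) for e in g1 if star not in e}
    edges |= {frozenset((u, v)) for u in neighbours for v in V2}
    edges |= {frozenset(e) for e in g2}
    return edges

g1 = [('a', '*'), ('*', 'b')]   # the path a - * - b
g2 = [('c', 'd')]               # a single edge c - d
result = compose(g1, '*', g2, {'c', 'd'})

# the five edges of the picture: a-c, a-d, b-c, b-d and c-d
expected = {frozenset(e) for e in [('a','c'), ('a','d'), ('b','c'), ('b','d'), ('c','d')]}
assert result == expected
```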
Let $V_1$ and $V_2$ be two disjoint sets.
For any multigraphs $g_1 \in \mathbf{MG}'[V_1]$ and $g_2 \in \mathbf{MG}[V_2]$, define
a partial composition of $g_1$ and $g_2$ as the sum of all the multigraphs
of $\mathbf{MG}[V_1 \sqcup V_2]$ obtained by the following:
\begin{enumerate}
\item Take the disjoint union of $g_1$ and $g_2$;
\item Remove the vertex $\ast$. We then have some edges with one (or
two if $\ast$ has loops) loose end(s);
\item Connect each loose end to any vertex in $V_2$.
\end{enumerate}
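The three steps above can be sketched in Python (an illustration only; a multigraph is a sorted tuple of sorted edges, and a sum of multigraphs is a `Counter` of integer coefficients):

```python
from collections import Counter
from itertools import product

def compose(g1, star, g2, V2):
    """Sum over all ways to independently reconnect to V2 each loose end
    left by removing the vertex star from g1."""
    template = [list(e) for e in g1]
    # each occurrence of star in an edge is one loose end
    slots = [(i, j) for i, e in enumerate(template)
                    for j, v in enumerate(e) if v == star]
    result = Counter()
    for choice in product(sorted(V2), repeat=len(slots)):
        edges = [list(e) for e in template]
        for (i, j), v in zip(slots, choice):
            edges[i][j] = v
        graph = tuple(sorted(tuple(sorted(e)) for e in edges + [list(e) for e in g2]))
        result[graph] += 1
    return result

g1 = [('a', '*'), ('a', '*'), ('*', '*')]   # double edge a-* and a loop at *
g2 = [('b', 'c'), ('c', 'c')]               # edge b-c and a loop at c
r = compose(g1, '*', g2, {'b', 'c'})

assert sum(r.values()) == 2 ** 4            # four loose ends, two choices each
# the all-b reconnection appears with coefficient 1
all_b = tuple(sorted([('a','b'), ('a','b'), ('b','b'), ('b','c'), ('c','c')]))
assert r[all_b] == 1
```

Multigraphs obtained by several distinct reconnections are collected with the corresponding integer coefficients, as in the sum displayed below.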
For instance, one has:
\begin{equation}\begin{split}
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](s)at(1,0){$\ast$};
\draw[EdgeGraph](a)edge[bend left=40](s);
\draw[EdgeGraph](a)edge[bend right=40](s);
\draw[EdgeGraph](s)edge[loop above](s);
\draw[EdgeGraph,draw=ColWhite](s)edge[loop below](s);
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](c)at(1,0){$c$};
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\draw[EdgeGraph,draw=ColWhite](c)edge[loop below](c);
\end{tikzpicture}
& \enspace = \enspace
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)edge[bend left=40](b);
\draw[EdgeGraph](a)edge[bend right=40](b);
\draw[EdgeGraph](b)edge[loop below](b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)edge[bend left=40](b);
\draw[EdgeGraph](a)edge[bend right=40](b);
\draw[EdgeGraph](c)edge[loop below](c);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\enspace + \enspace
2\,
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)edge[bend left=40](b);
\draw[EdgeGraph](a)edge[bend right=40](b);
\draw[EdgeGraph](b)edge[bend left=40](c);
\draw[EdgeGraph](b)edge[bend right=40](c);
\draw[EdgeGraph](c)edge[loop above](c);
\draw[EdgeGraph,draw=ColWhite](c)edge[loop below](c);
\end{tikzpicture}
\\
& \quad + \enspace
2\,
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)edge[bend right=40](c);
\draw[EdgeGraph](b)edge[loop above](b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\enspace + \enspace
2\,
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)edge[bend right=40](c);
\draw[EdgeGraph](c)edge[loop below](c);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\enspace + \enspace
4\,
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)edge[bend right=40](c);
\draw[EdgeGraph](b)edge[bend right=40](c);
\draw[EdgeGraph](b)edge[bend left=40](c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\\
& \quad + \enspace
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)edge[bend right=40](c);
\draw[EdgeGraph](a)edge[bend left=40](c);
\draw[EdgeGraph](b)edge[loop left](b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)edge[bend right=40](c);
\draw[EdgeGraph](a)edge[bend left=40](c);
\draw[EdgeGraph](c)edge[loop below](c);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}
\enspace + \enspace
2\,
\begin{tikzpicture}[Centering,scale=1]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)edge[bend right=40](c);
\draw[EdgeGraph](a)edge[bend left=40](c);
\draw[EdgeGraph](b)edge[bend left=20](c);
\draw[EdgeGraph](b)edge[bend right=20](c);
\draw[EdgeGraph](c)edge[loop above](c);
\end{tikzpicture}.
\end{split}\end{equation}
Let us now state the main results of this subsection:
\begin{theorem} \label{excano}
The species $\mathbb{K} \mathbf{MG}$, endowed with the preceding partial composition, is
an operad.
\end{theorem}
\begin{proof}
This is the operad structure on $\mathbb{K} \mathbf{MG}$ implied by the isomorphism of species
$\mathbb{K}\mathbf{MG}\to \{0\}\text{-}\mathbf{MG}\ltimes \mathcal{F}_{\{0\}}^{\mathbb{K} E}$.
\end{proof}
One notes that the species $\mathbb{K} \mathbf{G}$ and $\mathbb{K} \mathbf{MG}_c$ are suboperads of $\mathbb{K} \mathbf{MG}$, that $\mathbb{K} \mathbf{G}_c$ is
a suboperad of $\mathbb{K} \mathbf{G}$, and that $\mathbb{K} \mathbf{T}$ is a suboperad of $\mathbb{K} \mathbf{G}_c$. In particular, this
structure on $\mathbb{K} \mathbf{G}$ is known as the Kontsevich-Willwacher operad~\cite{MV19}. This partial
composition can be formally written as follows. For any $g_1 \in \mathbf{MG}[V_1]$ and $g_2 \in \mathbf{MG}[V_2]$
such that $V_1$ and $V_2$ are two disjoint sets and $\ast \in V_1$,
\begin{equation}\begin{split}
g_1\circ_{\ast} g_2 &= g_1|_{\ast \leftarrow \sum V_2}\oplus g_2\\
&= g_1\cap V_1^2 \oplus\bigoplus_{v\in n(\ast)}v(\sum V_2)\oplus((\sum V_2)^2)^{\oplus g_1(\ast\ast)}\oplus g_2 \\
&= \sum_{f:n(\ast)\to V_2}\sum_{l:[g_1(\ast\ast)]\to V_2V_2} g_1\cap V_1^2\oplus\bigoplus_{v\in n(\ast)}vf(v)\oplus\bigoplus_{i=1}^{g_1(\ast\ast)}l(i) \oplus g_2,
\end{split}\end{equation}
where $n(\ast)$ is the multiset of neighbours of $\ast$ in $g_1$ and $g_1(\ast\ast)$ is
the number of loops on $\ast$ in $g_1$. This partial composition reformulates in
a simpler way on $\mathbb{K}\mathbf{G}$. For any $g_1 \in \mathbf{G}[V_1]$ and $g_2 \in \mathbf{G}[V_2]$ such
that $V_1$ and $V_2$ are two disjoint sets and $\ast \in V_1$,
\begin{equation}\begin{split}
g_1\circ_{\ast} g_2 &= g_1|_{\ast \leftarrow \sum V_2}\oplus g_2\\
&= g_1\cap V_1^2 \oplus\bigoplus_{v\in n(\ast)}v(\sum V_2)\oplus g_2 \\
&= \sum_{f:n(\ast)\rightarrow V_2} g_1\cap V_1^2\oplus\bigoplus_{v\in n(\ast)}vf(v)\oplus g_2,
\end{split}\end{equation}
where $n(\ast)$ is now the set of neighbours of $\ast$ in $g_1$. For instance, one has:
\begin{equation}
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](s)at(0,0){$\ast$};
\node[NodeGraph](b)at(1,-1){$b$};
\draw[EdgeGraph](a)--(s);
\draw[EdgeGraph](s)--(b);
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(1,0){$d$};
\draw[EdgeGraph](c)--(d);
\end{tikzpicture}
\enspace = \enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[EdgeGraph](c)--(a);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](c)--(b);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](d)--(a);
\draw[EdgeGraph](c)--(b);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](c)--(a);
\draw[EdgeGraph](d)--(b);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](d)--(a);
\draw[EdgeGraph](d)--(b);
\end{tikzpicture}.
\end{equation}
We observe that all the graphs in the support of $g_1 \circ_\ast g_2$ have
coefficient $1$.
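For simple graphs, the composition is thus the sum over the maps $f \colon n(\ast) \to V_2$ of the formula above. A minimal Python sketch (ours; the encoding of a simple graph as a set of sorted edge pairs is an assumption):

```python
from collections import Counter
from itertools import product

def compose_simple(g1, g2, v2, star="*"):
    """g1 o_* g2 for simple graphs: one term for each map f sending the
    neighbours of * in g1 to vertices of g2."""
    kept = [tuple(sorted(e)) for e in g1 if star not in e]
    nbrs = sorted(v for e in g1 if star in e for v in e if v != star)
    g2edges = [tuple(sorted(e)) for e in g2]
    total = Counter()
    for f in product(sorted(v2), repeat=len(nbrs)):
        edges = kept + [tuple(sorted((v, w))) for v, w in zip(nbrs, f)] + g2edges
        total[frozenset(edges)] += 1
    return total
```

On the example above, this returns four distinct graphs, each with coefficient $1$.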
Let us turn to the oriented case (cf.\ Example~\ref{defor}). Let $V_1$ and $V_2$ be two disjoint
sets such that $\ast \in V_1$. For any rooted oriented multigraphs $(g_1, v_1) \in \mathbf{MG}_{or}^\bullet[V_1]$
and $(g_2, v_2) \in \mathbf{MG}_{or}^\bullet[V_2]$, define a partial composition of $(g_1, v_1)$ and $(g_2, v_2)$ as
the sum of all the rooted oriented multigraphs of $\mathbf{MG}_{or}^\bullet[V_1 \setminus \{\ast\} \sqcup V_2]$ obtained
as follows:
\begin{enumerate}
\item Take the disjoint union of $g_1$ and $g_2$;
\item Remove the vertex $\ast$. We then have some edges with a loose end;
\item Connect each non labelled loose end to $v_2$;
\item Connect each labelled loose end to any vertex in $V_2$;
\item The new root is $v_1$ if $v_1 \ne \ast$ and is $v_2$ otherwise.
\end{enumerate}
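These five steps can be sketched in Python as well. In this illustration (entirely ours), an arc $(u, v)$ stands for the oriented edge $u \to v$; consistently with the example that follows, the loose end of an arc pointing into $\ast$ is taken to be labelled, the loose end of an arc leaving $\ast$ to be non labelled, and loops on $\ast$ are not handled:

```python
from collections import Counter
from itertools import product

def compose_rooted(g1, r1, g2, r2, v2, star="*"):
    """(g1, r1) o_* (g2, r2) for rooted oriented graphs without loops
    on *.  An arc (u, v) is the oriented edge u -> v."""
    kept = [a for a in g1 if star not in a]
    # labelled loose ends (arcs u -> *) may reattach to any vertex of v2
    labelled = sorted(u for (u, v) in g1 if v == star)
    # non labelled loose ends (arcs * -> v) reattach to the root r2
    fixed = [(r2, v) for (u, v) in g1 if u == star]
    root = r2 if r1 == star else r1
    out = Counter()
    for ends in product(sorted(v2), repeat=len(labelled)):
        arcs = kept + fixed + list(zip(labelled, ends)) + list(g2)
        out[(frozenset(Counter(arcs).items()), root)] += 1
    return out
```

On the example below, it returns the two displayed rooted graphs, each with coefficient $1$.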
For instance, by depicting by squares the roots of the multigraphs, one has:
\begin{equation}
\label{canoor}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](s)at(0,0){$\ast$};
\node[RootGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\draw[ArcGraph](a)--(s);
\draw[ArcGraph](s)--(b);
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=1]
\node[RootGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(1,0){$d$};
\draw[ArcGraph](c)--(d);
\end{tikzpicture}
\enspace = \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[RootGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[ArcGraph](a)--(c);
\draw[ArcGraph](c)--(b);
\draw[ArcGraph](c)--(d);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[RootGraph](a)at(1,1){$a$};
\node[NodeGraph](b)at(1,-1){$b$};
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](d)at(2,0){$d$};
\draw[ArcGraph](a)--(d);
\draw[ArcGraph](c)--(d);
\draw[ArcGraph](c)--(b);
\end{tikzpicture}.
\end{equation}
\begin{theorem}
The species $\mathbb{K} \mathbf{MG}_{orc}^\bullet$, endowed with the preceding partial
composition, is an operad.
\end{theorem}
\begin{proof}
This is the operad structure on $\mathbb{K} \mathbf{MG}_{or}^{\bullet}$ implied by the monomorphism
$\mathbb{K} \mathbf{G}_{or}^{\bullet} \hookrightarrow \{\_,>\} \text{-}\mathbf{G}\ltimes\mathcal{F}_{\_}^{\mathbb{K} Id}\uplus\mathcal{F}_{>}^{\mathbb{K} E}$
defined by:
\begin{equation}\begin{split}
\mathbb{K} \mathbf{G}_{or}^{\bullet}[V] &\hookrightarrow \{\_,>\}\text{-}\mathbf{G}\ltimes\mathcal{F}_{\_}^{\mathbb{K} Id}\uplus\mathcal{F}_{>}^{\mathbb{K} E}[V] \\
(g,r) &\mapsto (g,f:\left\{\begin{array}{l}
\_ \mapsto r_{\_} \\
> \mapsto \left(\sum V\right)_{>}\end{array}\right.).
\end{split}\end{equation}
This concludes the proof.
\end{proof}
It is straightforward to check that the subspecies of connected multigraphs $\mathbb{K} \mathbf{MG}^{\bullet}_{orc}$
and the species $\mathbb{K} \mathbf{G}^{\bullet}_{or}$ are suboperads of $\mathbb{K} \mathbf{MG}^{\bullet}_{or}$ and that $\mathbb{K} \mathbf{G}^{\bullet}_{orc}$ is a
suboperad of $\mathbb{K} \mathbf{G}^{\bullet}_{or}$.
In a rooted tree, each edge has a parent end and a child end. Given a rooted tree $t$ with
root $r$, denote by $t_r$ the oriented tree where each parent end of $t$ is labelled and
each child end is non labelled. Then, the monomorphism $\mathbf{T}^{\bullet}\hookrightarrow
\mathbf{G}_{orc}^{\bullet}$ which sends each ordered pair $(t,r)$, where $t$ is a tree and $r$ is
its root, to $(t_r,r)$ induces an operad structure on the species of rooted trees which is
exactly the operad $\mathbf{PLie}$. Hence $\mathbf{PLie}$ is a graph insertion operad.
For the sake of completeness, let us end this section by mentioning
that the notion of graph insertion operad introduced here differs
from the one mentioned in \cite{Kreimer:2000ja}, in the context of Feynman
graph insertions in quantum field theory.
\subsection{Canonical graph operad}
We study here in more detail the operad structure on $\mathbb{K} \mathbf{G}$ implied by the one
on $\mathbb{K} \mathbf{MG}$ given in Theorem \ref{excano}.
We will see that while $\mathbb{K} \mathbf{G}$ itself has an involved operad structure, it admits
many interesting suboperads. Let us first introduce some notation.
Let $S$ be a species, $I$ be a set, $\{V_i\}_{i\in I}$ be a family of finite sets, and
$x_i\in S[V_i]$ for all $i\in I$. We call {\em subspecies of $S$ generated by $\{x_i\}_{i\in I}$}
the smallest subspecies of $S$ containing the family $\{x_i\}_{i\in I}$.
If $S$ is furthermore an operad, we call {\em suboperad of $S$ generated by $\{x_i\}_{i\in I}$}
the smallest suboperad of $S$ containing the family $\{x_i\}_{i\in I}$. We write that {\em $x$ is
generated by $\{x_i\}_{i\in I}$} if $x$ is in the suboperad generated by $\{x_i\}_{i\in I}$.
With these definitions in place, it is natural to search for a smallest family of generators of
$\mathbb{K} \mathbf{G}$. Finding such a family is computationally hard. Using computer algebra,
we obtain a family of generators of $\mathbb{K} \mathbf{G}$ of arity at most $5$:
\begin{equation}\begin{split} \label{equ:generators_G}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,0){};
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,0){};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture},
\qquad
\begin{tikzpicture}[Centering,scale=.4]
\node[NodeGraph](a)at(-1,-1){};
\node[NodeGraph](b)at(1,-1){};
\node[NodeGraph](c)at(0,.707){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](b)--(c);
\end{tikzpicture},
\qquad
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,0){};
\node[NodeGraph](c)at(2,0){};
\node[NodeGraph](d)at(3,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)--(d);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(-1,-1){};
\node[NodeGraph](b)at(-1,1){};
\node[NodeGraph](c)at(0,0){};
\node[NodeGraph](d)at(-2,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](b)--(d);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(-1,-1){};
\node[NodeGraph](b)at(-1,1){};
\node[NodeGraph](c)at(0,0){};
\node[NodeGraph](d)at(-2,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(-1,-1){};
\node[NodeGraph](b)at(-1,1){};
\node[NodeGraph](c)at(0,0){};
\node[NodeGraph](d)at(-2,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](c)--(d);
\end{tikzpicture},
\\
\quad
\begin{tikzpicture}[Centering,scale=.35]
\node[NodeGraph](a)at(-2,0){};
\node[NodeGraph](b)at(0,0){};
\node[NodeGraph](c)at(-2,2){};
\node[NodeGraph](d)at(0,2){};
\node[NodeGraph](e)at(-1,-2){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.35]
\node[NodeGraph](a)at(-2,0){};
\node[NodeGraph](b)at(0,-2){};
\node[NodeGraph](c)at(-2,2){};
\node[NodeGraph](d)at(0,0){};
\node[NodeGraph](e)at(-2,-2){};
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(-1,-1){};
\node[NodeGraph](b)at(-1.25,.75){};
\node[NodeGraph](c)at(0,2){};
\node[NodeGraph](d)at(1.25,.75){};
\node[NodeGraph](e)at(1,-1){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](d)--(e);
\draw[EdgeGraph](e)--(a);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.35]
\node[NodeGraph](a)at(-2,0){};
\node[NodeGraph](b)at(0,0){};
\node[NodeGraph](c)at(-2,2){};
\node[NodeGraph](d)at(0,2){};
\node[NodeGraph](e)at(-1,-2){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,-1){};
\node[NodeGraph](c)at(-1,1){};
\node[NodeGraph](d)at(1,1){};
\node[NodeGraph](e)at(-1,-1){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,-1){};
\node[NodeGraph](c)at(-1,1){};
\node[NodeGraph](d)at(1,1){};
\node[NodeGraph](e)at(-1,-1){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,-1){};
\node[NodeGraph](c)at(-1,1){};
\node[NodeGraph](d)at(1,1){};
\node[NodeGraph](e)at(-1,-1){};
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\\
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(0,-1){};
\node[NodeGraph](b)at(3,-1){};
\node[NodeGraph](c)at(1.5,2){};
\node[NodeGraph](d)at(1.5,.5){};
\node[NodeGraph](e)at(1.5,-2.5){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(0,-1.5){};
\node[NodeGraph](b)at(3,-1){};
\node[NodeGraph](c)at(3,-2){};
\node[NodeGraph](d)at(1.5,0){};
\node[NodeGraph](e)at(1.5,-3){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,1){};
\node[NodeGraph](c)at(.5,-1){};
\node[NodeGraph](d)at(0,1){};
\node[NodeGraph](e)at(1,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,1){};
\node[NodeGraph](c)at(2,-1){};
\node[NodeGraph](d)at(0,1){};
\node[NodeGraph](e)at(1,0){};
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,1){};
\node[NodeGraph](c)at(2,-1){};
\node[NodeGraph](d)at(0,1){};
\node[NodeGraph](e)at(1,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.7,rotate=90]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,1){};
\node[NodeGraph](c)at(-1,-1){};
\node[NodeGraph](d)at(0,1){};
\node[NodeGraph](e)at(1,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](c)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.7,rotate=90]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,1){};
\node[NodeGraph](c)at(-1,-1){};
\node[NodeGraph](d)at(0,1){};
\node[NodeGraph](e)at(1,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](c)--(e);
\draw[EdgeGraph](d)--(e);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.5]
\node[NodeGraph](a)at(-1,-1){};
\node[NodeGraph](b)at(-1.25,.75){};
\node[NodeGraph](c)at(0,2){};
\node[NodeGraph](d)at(1.25,.75){};
\node[NodeGraph](e)at(1,-1){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](b)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](c)--(e);
\draw[EdgeGraph](d)--(e);
\end{tikzpicture}.
\end{split}\end{equation}
Due to the symmetric group action on $\mathbb{K} \mathbf{G}$, only the shapes of the graphs
are significant. While~\eqref{equ:generators_G} does not provide any particular
insight into a possible characterisation of the generators, it does suggest that any graph
with ``enough'' edges must be a generator. This is confirmed by the following lemma.
\begin{lemma} \label{lem:infinite_number_generators_G}
Let $\{V_i\}_{i\in I}$ be a family of nonempty finite sets, $\{g_i\}_{i \in I}$ be a family
of graphs such that $g_i \in\mathbf{G}[V_i]$, and let $g$ be a graph in $\mathbf{G}[V]$ with at least
$\binom{n-1}{2} +1$ edges, where $n = |V|$. Then $g$ is generated by $\{g_i\}_{i\in I}$ if
and only if $g=g_i$ for some $i\in I$.
\end{lemma}
\begin{proof}
Suppose that $g\not\in\{g_i\}_{i\in I}$. It is sufficient to show that $g$ cannot appear
in the support of any vector of the form $g_1\circ_{\ast} g_2$ for any $g_1$ and $g_2$
different from $g$. Hence let $V_1$ and $V_2$ be two disjoint finite sets such that
$V_1\sqcup V_2 = V$, $g_1\in \mathbf{G}'[V_1]$ and $g_2\in \mathbf{G}[V_2]$, and denote by $e_1$ the number
of edges of $g_1$ and by $e_2$ the number of edges of $g_2$. Then the graphs in the support
of $g_1\circ_{\ast} g_2$ have $e_1+e_2$ edges. This is maximal when $g_1$ and $g_2$ are
both complete graphs and is then equal to $\binom{x}{2}+\binom{n-x}{2} = x^2-nx+\binom{n}{2}$
where $0\leq x=|V_1|\leq n-1$.
If $x = 0$ then necessarily $g_1 =\emptyset_{\ast}$ and $g\in \text{Supp}(g_1\circ_{\ast}g_2)
= \text{Supp}(g_2)$ if and only if $g=g_2$. This is impossible, hence $x\not = 0$. The expression
$x^2-nx+\binom{n}{2}$ is then maximal for $x=1$ or $x=n-1$ and is equal in both cases to
$\binom{n-1}{2}<\binom{n-1}{2} +1$. This implies that $g$ cannot be in the support of $g_1\circ_{\ast} g_2$.
This concludes the proof.
\end{proof}
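The counting argument in the proof can be checked numerically: the quantity $x^2 - nx + \binom{n}{2}$, maximised over the admissible sizes $1 \leq x \leq n-1$, always equals $\binom{n-1}{2}$ and hence stays below the threshold $\binom{n-1}{2}+1$ of the lemma. A quick sketch (ours):

```python
from math import comb

def max_support_edges(n):
    """Largest value of x^2 - n*x + comb(n, 2), the maximal number of
    edges of a graph in the support of g1 o_* g2, over 1 <= x <= n - 1."""
    return max(x * x - n * x + comb(n, 2) for x in range(1, n))

# the maximum is attained at the endpoints x = 1 and x = n - 1 and
# always stays below the threshold comb(n - 1, 2) + 1 of the lemma
for n in range(2, 12):
    assert max_support_edges(n) == comb(n - 1, 2)
```

So any graph on $n$ vertices with at least $\binom{n-1}{2}+1$ edges lies outside every such support, as claimed.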
\begin{proposition} \label{gc}
The operad $\mathbb{K} \mathbf{G}$ is not free and has an infinite number of generators.
\end{proposition}
\begin{proof}
The fact that $\mathbb{K} \mathbf{G}$ has an infinite number of generators is a direct consequence of
Lemma~\ref{lem:infinite_number_generators_G}. Moreover, the relation
\begin{equation}\begin{split} \label{nf}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](s)at(1,0){$\ast$};
\draw[EdgeGraph](a)--(s);
\end{tikzpicture}
&
\circ_\ast
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](c)at(1,0){$c$};
\draw[EdgeGraph](b)--(c);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](s)at(1,0){$\ast$};
\draw[EdgeGraph](c)--(s);
\end{tikzpicture}
\circ_\ast
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](a)at(1,0){$a$};
\draw[EdgeGraph](b)--(a);
\end{tikzpicture}
\enspace - \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](s)at(1,0){$\ast$};
\draw[EdgeGraph](b)--(s);
\end{tikzpicture}
\circ_\ast
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](c)at(1,0){$c$};
\draw[EdgeGraph](a)--(c);
\end{tikzpicture}
\enspace
- 2 \,
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\end{tikzpicture}
\\[.5em]
& =
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](c)at(1,0){$c$};
\node[NodeGraph](a)at(2,0){$a$};
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)--(a);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](c)at(0,0){$c$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](a)at(2,0){$a$};
\draw[EdgeGraph](c)--(b);
\draw[EdgeGraph](b)--(a);
\end{tikzpicture}
\enspace + \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](a)at(1,0){$a$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](b)--(a);
\draw[EdgeGraph](a)--(c);
\end{tikzpicture}
\\[.5em]
& \qquad - \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](a)at(1,0){$a$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](b)--(a);
\draw[EdgeGraph](a)--(c);
\end{tikzpicture}
\enspace - \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](c)at(1,0){$c$};
\node[NodeGraph](b)at(2,0){$b$};
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](c)--(b);
\end{tikzpicture}
- 2 \,
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\end{tikzpicture}
\\[.5em]
& = 0
\end{split}\end{equation}
shows that $\mathbb{K} \mathbf{G}$ is not free.
\end{proof}
As a consequence of Proposition~\ref{gc}, it seems particularly involved to investigate
the structure of $\mathbb{K} \mathbf{G}$ any further. Let us then restrict to its suboperad $\mathbb{K} \mathbf{T}$ of
trees. A family of generators of $\mathbb{K} \mathbf{T}$ of arity at most $6$ is:
\begin{equation}\begin{split}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,0){};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture},
\qquad
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,0){};
\node[NodeGraph](c)at(2,0){};
\node[NodeGraph](d)at(3,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)--(d);
\end{tikzpicture},
\qquad
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(-1,0){};
\node[NodeGraph](c)at(0,1){};
\node[NodeGraph](d)at(1,0){};
\node[NodeGraph](e)at(-2,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](b)--(e);
\end{tikzpicture},
\qquad
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(-1,0){};
\node[NodeGraph](c)at(0,1){};
\node[NodeGraph](d)at(1,0){};
\node[NodeGraph](e)at(0,-1){};
\node[NodeGraph](f)at(-2,0){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](a)--(e);
\draw[EdgeGraph](b)--(f);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(-1,0){};
\node[NodeGraph](c)at(1,-1){};
\node[NodeGraph](d)at(1,1){};
\node[NodeGraph](e)at(-2,-1){};
\node[NodeGraph](f)at(-2,1){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](a)--(c);
\draw[EdgeGraph](a)--(d);
\draw[EdgeGraph](b)--(e);
\draw[EdgeGraph](b)--(f);
\end{tikzpicture},
\enspace
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){};
\node[NodeGraph](b)at(1,0){};
\node[NodeGraph](c)at(2,0){};
\node[NodeGraph](d)at(3,0){};
\node[NodeGraph](e)at(4,0){};
\node[NodeGraph](f)at(2,1){};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)--(c);
\draw[EdgeGraph](c)--(d);
\draw[EdgeGraph](d)--(e);
\draw[EdgeGraph](c)--(f);
\end{tikzpicture}.
\end{split}\end{equation}
This operad $\mathbb{K} \mathbf{T}$ has a nontrivial link with the pre-Lie operad $\mathbf{PLie}$~\cite{CL01},
given by the following result.
Recall that $\mathbf{PLie}$ can be seen as an operad structure on $\mathbb{K} \mathbf{T}^{\bullet}$.
\begin{proposition} \label{prelie}
The morphism of species $\psi : \mathbb{K} \mathbf{T} \to \mathbb{K} \mathbf{T}^{\bullet}$ defined, for any tree
$t \in \mathbf{T}[V]$, by
\begin{equation}
\psi(t) = \sum_{r \in V} (t, r),
\end{equation}
is a monomorphism of operads from $\mathbb{K} \mathbf{T}$ to $\mathbf{PLie}$.
\end{proposition}
\begin{proof}
Let $t\in \mathbf{T}[V]$ be a tree and $r,v\in V$. Denote by $n_t(v)$ the set of neighbours of
$v$ in $t$ and by $c_{t,r}(v)$ the set of children of $v$ when $t$ is rooted
at $r$, i.e.\ $c_{t,r}(v)= n_{>}(v)$ in $t_r$. If $r\neq v$, further denote by
$p_{t,r}(v)$ the parent of $v$ when $t$ is rooted at $r$, i.e.\
$\{p_{t,r}(v)\} = n_{\_}(v)$ in $t_r$.
Let $V_1$ and $V_2$ be two disjoint sets, and let $t_1\in \mathbf{T}'[V_1]$ and $t_2\in \mathbf{T}[V_2]$. We have:
\begin{equation}\begin{split}
\psi_{V_1}(t_1)\circ_{\ast} \psi_{V_2}(t_2)
&= \sum_{r_1\in V_1+\{\ast\}} (t_1,r_1) \circ_{\ast} \sum_{r_2\in V_2}(t_2,r_2) \\
&= \sum_{r_1\in V_1+\{\ast\}}\sum_{r_2\in V_2} (t_1,r_1)\circ_{\ast} (t_2,r_2) \\
&= \sum_{r_1\in V_1}\sum_{r_2\in V_2} \left(t_1\cap V_1^2\oplus p_{t_1,r_1}(\ast)r_2
\oplus t_2 \oplus \bigoplus_{v\in c_{t_1,r_1}(\ast)} \left(\sum V_2\right)v, r_1\right) \\
&\quad + \sum_{r_2\in V_2}\left(t_1\cap V_1^2\oplus t_2 \oplus \bigoplus_{v\in c_{t_1,\ast}(\ast)}
\left(\sum V_2\right)v,r_2\right) \\
&= \sum_{r_1\in V_1} \left(t_1\cap V_1^2\oplus p_{t_1,r_1}(\ast)\left(\sum V_2\right)
\oplus t_2 \oplus \bigoplus_{v\in c_{t_1,r_1}(\ast)} \left(\sum V_2\right)v, r_1\right) \\
&\quad+ \sum_{r_2\in V_2} \left(t_1\cap V_1^2\oplus t_2 \oplus \bigoplus_{v\in c_{t_1,\ast}(\ast)}
\left(\sum V_2\right)v,r_2\right) \\
&= \sum_{r\in V_1+V_2}\left(t_1\cap V_1^2\oplus\bigoplus_{v\in n_{t_1}(\ast)}\left(\sum V_2\right)v
\oplus t_2,r\right) \\
&= \sum_{r\in V_1+V_2} \left(t_1|_{\ast \leftarrow \sum V_2}\oplus t_2,r\right) \\
&= \psi_{V_1+V_2}(t_1\circ_{\ast} t_2).
\end{split}\end{equation}
\end{proof}
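Proposition~\ref{prelie} can be verified by brute force on small examples. In the sketch below (entirely ours: the edge encoding, the helper `parent_map`, and both composition routines are illustrative assumptions), `compose_plie` implements the $\mathbf{PLie}$-style composition appearing in the proof, in which the edge towards the parent of $\ast$ is reattached to the root $r_2$ while the edges towards its children are reattached to any vertex of $V_2$:

```python
from collections import Counter
from itertools import product

def edge(u, v):
    return tuple(sorted((u, v)))

def parent_map(tree, root):
    """Orient a tree (list of undirected edges) away from its root."""
    adj = {}
    for u, v in tree:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    par, todo = {root: None}, [root]
    while todo:
        u = todo.pop()
        for w in adj[u]:
            if w not in par:
                par[w] = u
                todo.append(w)
    return par

def compose_trees(t1, t2, v2, star="*"):
    """Composition in K T: every edge at * reattaches to any vertex of v2."""
    kept = [edge(*e) for e in t1 if star not in e]
    nbrs = sorted(v for e in t1 if star in e for v in e if v != star)
    out = Counter()
    for f in product(sorted(v2), repeat=len(nbrs)):
        out[frozenset(kept + [edge(v, w) for v, w in zip(nbrs, f)]
                      + [edge(*e) for e in t2])] += 1
    return out

def compose_plie(t1, r1, t2, r2, v2, star="*"):
    """PLie composition of rooted trees: the edge towards the parent of *
    goes to the root r2, the edges towards its children go anywhere in v2."""
    par = parent_map(t1, r1)
    kept = [edge(*e) for e in t1 if star not in e]
    children = sorted(v for v in par if par[v] == star)
    fixed = [] if par[star] is None else [edge(par[star], r2)]
    root = r2 if r1 == star else r1
    out = Counter()
    for f in product(sorted(v2), repeat=len(children)):
        t = frozenset(kept + fixed + [edge(c, w) for c, w in zip(children, f)]
                      + [edge(*e) for e in t2])
        out[(t, root)] += 1
    return out

def psi(t):
    """psi(t): the formal sum of all rootings of the tree t."""
    verts = sorted({v for e in t for v in e})
    return Counter({(frozenset(edge(*e) for e in t), r): 1 for r in verts})
```

Comparing $\psi(t_1 \circ_\ast t_2)$ with $\psi(t_1) \circ_\ast \psi(t_2)$ term by term then confirms the morphism property on such examples.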
A natural question is how to extend this morphism to $\mathbb{K} \mathbf{G}_c$ and $\mathbb{K} \mathbf{MG}_c$. Let us
introduce some notation in order to answer this question. For $g\in \mathbf{MG}_c[V]$, $r\in V$,
and $t\in \mathbf{T}[V]$ a spanning tree of $g$, let $\overrightarrow{g}^{(t,r)}\in \mathbf{MG}_{orc}$ be
the oriented multigraph obtained by labelling the edges of $g$ in $t$ in the same way as the
edges of $t_r$, and by labelling both ends of the edges of $g$ not in $t$. More formally, we
have $\overrightarrow{g}^{(t,r)}=t_{r}\oplus\iota_{\mathbf{G}}(g\setminus t)$, where $\iota: \mathbb{K}
\mathbf{MG}\rightarrow \mathbb{K} \mathbf{MG}_{or}$ sends a multigraph to the oriented multigraph obtained by
labelling all the edge ends.
Define three subspecies $\mathbb{K} \mathcal{O}_2\subset \mathbb{K}\mathcal{O}_1\subset\mathbb{K} \mathbf{ST}$ of $\mathbb{K}
\mathbf{MG}_{orc}^{\bullet}$ by
\begin{equation}
\mathbf{ST}[V]=\left\{
(\overrightarrow{g}^{(t,r)},r) : g\in \mathbf{MG}_c[V], r\in V \text{ and $t$ a
spanning tree of $g$}\right\},
\end{equation}
\begin{equation}
\mathcal{O}_1[V] = \left\{\sum_{r\in V} (\overrightarrow{g}^{(t(r),r)},r) : g\in
\mathbf{MG}_c[V]\text{ and for each $r$, $t(r)$ a spanning tree of $g$}\right\},
\end{equation}
\begin{multline}
\mathcal{O}_2[V]=
\left\{(\overrightarrow{g}^{(t_1,r)},r)-(\overrightarrow{g}^{(t_2,r)},r) : g\in
\mathbf{MG}_c[V],r\in V,
\right. \\ \left.
\text{ and $t_1$ and $t_2$ two spanning trees of $g$}\right\}.
\end{multline}
\begin{lemma} \label{lemmfond}
The following properties hold:
\begin{enumerate}[label=(\roman*)]
\item $\mathbb{K} \mathbf{ST}$ is a suboperad of $\mathbb{K} \mathbf{MG}_{orc}^{\bullet}$ isomorphic to $\mathbb{K}\mathbf{MG}\times\mathbf{PLie}$,
\item $\mathbb{K} \mathcal{O}_1$ is a suboperad of $\mathbb{K} \mathbf{ST}$,
\item $\mathbb{K}\mathcal{O}_2$ is an ideal of $\mathbb{K}\mathcal{O}_1$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\em Proof of i.} The species morphism $\mathbb{K} \mathbf{MG}\times \mathbf{PLie} \hookrightarrow \mathbb{K} \mathbf{MG}^{\bullet}_{orc}$ given by
$(g,(t,r)) \mapsto (\overrightarrow{g}^{(t,r)}, r)$ is an operad morphism and hence its image $\mathbb{K}\mathbf{ST}$
is a suboperad of $\mathbb{K} \mathbf{MG}^{\bullet}_{orc}$.
In order to prove the next two items we first give two equalities. Let $U:\mathbb{K} \mathbf{MG}_{or} \rightarrow \mathbb{K} \mathbf{MG}$
be the forgetful functor which sends an oriented multigraph to the multigraph obtained by forgetting the
orientation (i.e.\ the labels). Let $V_1$ and $V_2$ be two disjoint sets, $g_1\in \mathbf{MG}_c'[V_1]$
and $g_2\in \mathbf{MG}_c[V_2]$ be two connected multigraphs, $t$ a spanning tree of $g_1$ and
for each $v\in V_2$, $t(v)$ a spanning tree of $g_2$. Then, for $r\in V_2$
\begin{equation} \begin{split}\label{eq1}
U\times\id&\left((\overrightarrow{g_1}^{(t,\ast)},\ast)\circ_{\ast}(\overrightarrow{g_2}^{(t(r),r)},r)\right) \\
&= \left(g_1\cap V_1 \oplus\bigoplus_{v\in n(\ast)}v(\sum V_2)\oplus((\sum V_2)^2)^{\oplus g_1(\ast\ast)}\oplus g_2, r\right) \\
&= (g_1\circ_{\ast}g_2, r).
\end{split} \end{equation}
Let now $r$ be a vertex in $V_1$. Denote by $p$ the parent of $\ast$ in $t_r$, by $c(\ast)$
the children of $\ast$ in $t_r$, by $n_{g_1\setminus t}(\ast)$ the multiset of neighbours
of $\ast$ in $g_1\setminus t$ and by $n(\ast)$ the multiset of neighbours of $\ast$ in
$g_1$, so that $n(\ast)=n_{g_1\setminus t}(\ast)\cup c(\ast)\cup \{p\}$. Then
\begin{equation} \begin{split}\label{eq2}
U\times\id&\left((\overrightarrow{g_1}^{(t,r)},r)\circ_{\ast}\sum_{v\in V_2}(\overrightarrow{g_2}^{(t(v),v)},v) \right) \\
&= \sum_{v\in V_2} U\times\id\left((\overrightarrow{g_1}^{(t,r)},r)\circ_{\ast}(\overrightarrow{g_2}^{(t(v),v)},v) \right) \\
&= \sum_{v\in V_2}\left(g_1\cap V_1^2\oplus pv \oplus \bigoplus_{v'\in c(\ast)} v'\left(\sum V_2\right)
\oplus\bigoplus_{v'\in n_{g_1\setminus t}(\ast)}v'\left(\sum V_2\right) \oplus ((\sum V_2)^2)^{\oplus g_1(\ast\ast)}\oplus g_2, r\right)\\
&=\left(g_1\cap V_1^2\oplus p\left(\sum V_2\right) \oplus \bigoplus_{v'\in c(\ast)} v'\left(\sum V_2\right)
\oplus\bigoplus_{v'\in n_{g_1\setminus t}(\ast)}v'\left(\sum V_2\right) \oplus ((\sum V_2)^2)^{\oplus g_1(\ast\ast)}\oplus g_2, r\right) \\
&= \left(g_1\cap V_1^2\oplus \bigoplus_{v\in n(\ast)} v\left(\sum V_2\right)\oplus ((\sum V_2)^2)^{\oplus g_1(\ast\ast)}\oplus g_2, r\right) \\
&= (g_1\circ_{\ast} g_2, r).
\end{split} \end{equation}
{\em Proof of ii.} Let $V_1$ and $V_2$ be two disjoint sets, $g_1\in \mathbf{MG}'_c[V_1]$ and $g_2\in \mathbf{MG}_c[V_2]$
be two connected multigraphs and for each $v\in V_1+\{\ast\}$, $t(v)$ a spanning tree of $g_1$
and for each $v\in V_2$, $t(v)$ a spanning tree of $g_2$. We have
\begin{equation}\begin{split}
\sum_{r_1\in V_1+\{\ast\}} \overrightarrow{g_1}^{(t(r_1),r_1)} &\circ_{\ast} \sum_{r_2\in V_2} \overrightarrow{g_2}^{(t(r_2),r_2)}
= \sum_{r_1\in V_1+\{\ast\}}\sum_{r_2\in V_2} \overrightarrow{g_1}^{(t(r_1),r_1)}\circ_{\ast} \overrightarrow{g_2}^{(t(r_2),r_2)} \\
&= \sum_{r_1\in V_1+\{\ast\}}\sum_{r_2\in V_2} \left(\overrightarrow{g_1}^{(t(r_1),r_1)}|_{\ast_{\_}\leftarrow r_{2\_},\, \ast_{>}\leftarrow(\sum V_2)_{>}}
\oplus \overrightarrow{g_2}^{(t(r_2),r_2)}, r_1|_{\ast\leftarrow r_2}\right).
\end{split}\end{equation}
Then from \ref{eq1} and \ref{eq2} we know that applying $U\times\id$ to the preceding sum gives us:
\begin{equation}
\sum_{r\in V_1+V_2} (g_1\circ_{\ast}g_2,r).
\end{equation}
To conclude, remark that $\mathbb{K}\mathcal{O}_1[V]$ can be defined as the preimage of
$\mathbb{K}\{\sum_{v\in V}(g,v)\,|\,g\in \mathbf{MG}_c[V]\}$ under $U\times\id: \mathbb{K} \mathbf{ST}\rightarrow \mathbb{K} \mathbf{MG}_c^{\bullet}$.
{\em Proof of iii.} It is easy to see that $\mathbb{K}\mathcal{O}_2$ is a left ideal of $\mathbb{K} \mathbf{ST}$ and hence of $\mathbb{K}\mathcal{O}_1$.
Let $V_1$ and $V_2$ be two disjoint finite sets, $g_1\in \mathbf{MG}'_c[V_1]$ and
$g_2\in \mathbf{MG}_c[V_2]$, $r\in V_1$, $t$ a spanning tree of $g_1$ and for every $v\in V_2$, $t(v)$
a spanning tree of $g_2$. Then from \ref{eq1} and \ref{eq2} we know that
$U\times\id\left(\overrightarrow{g_1}^{(t,r)}\circ_{\ast}\sum_{v\in V_2} \overrightarrow{g_2}^{(t(v),v)}\right)$
is of the form $(g_1\circ_{\ast} g_2, r)$ if $r\not = \ast$, and of the form
$\sum_{v\in V_2} (g_1\circ_{\ast} g_2, v)$ otherwise. In both cases it does not depend on $t$.
This concludes the proof since $\mathbb{K} \mathcal{O}_2[V]$ is the kernel of $(U\times\id)_V: \mathbb{K} \mathbf{ST}[V]\rightarrow \mathbb{K} \mathbf{MG}_c^{\bullet}[V]$.
\end{proof}
We can see $\mathbf{PLie}$ as a suboperad of $\mathbf{ST}$ by the monomorphism
$(t,r)\mapsto (t_r,r)$. The image of the operad morphism $\psi$ of Proposition~\ref{prelie} is
then $\mathbb{K}\mathcal{O}_1\cap \mathbf{PLie}$ and we have that $\mathbb{K}\mathcal{O}_2\cap \mathbf{PLie} = \{0\}$, and hence
$(\mathbb{K}\mathcal{O}_1\cap \mathbf{PLie})/(\mathbb{K}\mathcal{O}_2\cap \mathbf{PLie}) = \mathbb{K}\mathcal{O}_1\cap \mathbf{PLie}$.
\begin{proposition}
The operad isomorphism $\psi: \mathbb{K} \mathbf{T} \to \mathbf{PLie}\cap\mathbb{K}\mathcal{O}_1$
extends into an operad isomorphism $\psi: \mathbb{K} \mathbf{MG}_c \to \mathbb{K}\mathcal{O}_1/\mathbb{K}\mathcal{O}_2$ satisfying,
for any $g \in \mathbf{MG}_c[V]$,
\begin{equation}
\psi(g) = \sum_{r\in V}\overrightarrow{g}^{(t(r),r)},
\end{equation}
where for each $r\in V$, $t(r)$ is a spanning tree of $g$. Furthermore, this isomorphism
restricts to an isomorphism $\mathbb{K}\mathbf{G}_c \to (\mathbb{K}\mathcal{O}_1\cap\mathbb{K}\mathbf{G}_{orc}^{\bullet})/(\mathbb{K}\mathcal{O}_2\cap\mathbb{K}\mathbf{G}_{orc}^{\bullet})$.
\end{proposition}
\begin{proof}
This statement is a direct consequence of Lemma \ref{lemmfond} and its proof.
\end{proof}
The last results are summarized in the following commutative diagram of operad morphisms.
\begin{equation}
\begin{tikzcd}
\mathbb{K} \mathbf{T} \arrow[r, "\sim"] \arrow[d, hook]
& \mathbf{PLie}\cap\mathbb{K}\mathcal{O}_1/\mathbb{K}\mathcal{O}_2 \arrow[r, equal] \arrow[d, hook]
& \mathbf{PLie}\cap\mathbb{K}\mathcal{O}_1 \arrow[d,hook] \arrow[r, hook]
& \mathbf{PLie} \arrow[d,hook]\\
\mathbb{K} \mathbf{G}_c \arrow[r, "\sim"] \arrow[d, hook]
& \mathbb{K}\mathcal{O}_1\cap\mathbb{K}\mathbf{G}_{orc}^{\bullet}/\mathbb{K}\mathcal{O}_2\cap\mathbb{K}\mathbf{G}_{orc}^{\bullet} \arrow[d, hook]
& \mathbb{K}\mathbf{G}_{orc}^{\bullet}\cap\mathbb{K}\mathcal{O}_1 \arrow[l, two heads] \arrow[r, hook] \arrow[d, hook]
& \mathbb{K}\mathbf{G}_{orc}^{\bullet}\cap\mathbb{K} \mathbf{ST} \arrow[d, hook] \\
\mathbb{K}\mathbf{MG}_c \arrow[r, "\sim"]
& \mathbb{K}\mathcal{O}_1/\mathbb{K}\mathcal{O}_2
& \mathbb{K}\mathcal{O}_1 \arrow[l, two heads] \arrow[r, hook]
& \mathbb{K}\mathbf{MG}\times\mathbf{PLie}
\end{tikzcd}
\end{equation}
\section{Finitely generated suboperads} \label{sec:suboperads}
Let us now focus on finitely generated suboperads of $\mathbb{K} \mathbf{MG}$. In particular we will study
the operads generated by:
\begin{enumerate}
\item
\begin{math}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture}
\right\}
\end{math} which is isomorphic to $\mathbf{Com}$,
\item \begin{math}\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\right\}
\end{math} which is isomorphic to $\mathbf{ComMag}$,
\item
\begin{math}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture},
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\right\}
\end{math} which we will denote by $\mathbf{SP}$,
\item \begin{math}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\draw[EdgeGraph](a)edge[loop](a);
\end{tikzpicture},
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture}
\right\}
\end{math} which we will denote by $\mathbf{LP}$.
\end{enumerate}
First remark that the
suboperad of $\mathbb{K} \mathbf{G}$ generated by
\begin{math}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture}
\right\}
\end{math}
is isomorphic to the commutative operad $\mathbf{Com}$. Indeed,
\begin{equation}
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](s)at(1,0){$\ast$};
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](b)at(0,0){$b$};
\node[NodeGraph](c)at(1,0){$c$};
\end{tikzpicture}
\enspace = \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\end{tikzpicture}
\enspace = \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](s)at(0,0){$\ast$};
\node[NodeGraph](c)at(1,0){$c$};
\end{tikzpicture}
\enspace \circ_\ast \enspace
\begin{tikzpicture}[Centering,scale=.7]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture}.
\end{equation}
Now recall that the set operad $\mathbf{ComMag}$ is the free operad generated by one binary
and symmetric element~\cite{BL11}. More formally, $\mathbf{ComMag}[V]$ is spanned by nonplanar binary trees
with set of leaves equal to $V$. Let $s$ be the connected species defined by $\dim(s[V]) = 1$ if
$|V|=2$, $\dim(s[V])=0$ otherwise. The action of transposition on the sole element of
$s[\{a,b\}]$ is trivial. Then $\mathbf{ComMag} = \mathbf{Free}_{s}$.
\begin{proposition} \label{commag}
The suboperad of $\mathbb{K} \mathbf{G}$ generated by
\begin{math}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\right\}
\end{math}
is isomorphic to $\mathbf{ComMag}$.
\end{proposition}
\begin{proof}
We know from Proposition~\ref{prelie} that the operad of the statement
is isomorphic to the
suboperad of $\mathbf{PLie}$ generated by
\begin{equation}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[RootGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(0,-1){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
+
\begin{tikzpicture}[Centering,scale=.6]
\node[RootGraph](a)at(0,0){$b$};
\node[NodeGraph](b)at(0,-1){$a$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\right\}
\end{equation}
Then~\cite{BL11} gives us that this suboperad is isomorphic to $\mathbf{ComMag}$. This concludes
the proof.
\end{proof}
The fact that we can see both $\mathbf{Com}$ and $\mathbf{ComMag}$ as disjoint suboperads of $\mathbb{K} \mathbf{G}$ gives us a
natural way to define the smallest operad containing these two as disjoint suboperads. Denote by $G$
the subspecies of $\mathbb{K}\mathbf{G}$ generated by
\begin{math}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture},
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\right\}
\end{math}
and $\mathbf{SP}$ the suboperad generated by $G$. This operad has some nice properties.
\begin{proposition}
The operad $\mathbf{SP}$ is isomorphic to the operad $\mathrm{Ope}(G,R)$ where $R$
is the subspecies of $\mathbf{Free}_{G}$ generated by
\begin{subequations}
\begin{equation} \label{equ:rel_1}
\Points{c}{\ast} \circ^{\xi}_\ast \Points{a}{b}
\enspace - \enspace
\Points{a}{\ast} \circ^{\xi}_\ast \Points{b}{c},
\end{equation}
\begin{center}
and
\end{center}
\begin{equation} \label{equ:rel_2}
\Segment{a}{\ast} \circ^{\xi}_\ast \Points{b}{c}
\enspace - \enspace
\Points{c}{\ast} \circ^{\xi}_\ast \Segment{a}{b}
\enspace - \enspace
\Points{b}{\ast} \circ^{\xi}_\ast \Segment{a}{c}.
\end{equation}
\end{subequations}
Therefore, $\mathbf{SP}$ is binary and quadratic.
\end{proposition}
\begin{proof}
There is a natural epimorphism $\phi$ from $\mathbf{Free}_{G}$ to $\mathbf{SP}$ which is the identity
on
\begin{math}
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture}
\end{math}
and
\begin{math}
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\end{math}
and which sends a partial composition $g_1\circ^{\xi}_{\ast}g_2$ to the partial composition
$g_1\circ_{\ast} g_2$. The fact that $(R)$ is included in the kernel of $\phi$
is straightforward. Now let $w\in \mathbf{Free}_G/(R)[V]$.
A representative of $w$ is of the form $\sum_{i=1}^{l}a_iw_i$ where, for each $1\leq i\leq l$,
$a_i\in \mathbb{K}$ and there is a partition $V=V_{i,1}\sqcup\dots\sqcup V_{i,k_i}$ such that
$w_i=(\dots(\mu_i\circ_{\ast_{i,1}} t_{i,1})\dots)\circ_{\ast_{i,k_i}} t_{i,k_i}$,
where $\mu_i$ is the sole element of $\mathbf{Com}[\{\ast_{i,1},\dots,\ast_{i,k_i}\}]$ and
$t_{i,j}$ is in the basis of $\mathbf{ComMag}[V_{i,j}]$. Here we use the identification of $\mathbf{ComMag}$ and
$\mathbf{Com}$ as suboperads of $\mathbb{K}\mathbf{G}$ made previously. Without loss of generality, we can suppose
that all the $w_i$ use the same partition of $V$, i.e.\ $V=V_1\sqcup\dots\sqcup V_k$ and,
for all $i,j$, $k_i = k$ and $V_{i,j} = V_j$.
With these notations we now have
\begin{equation}
\phi(w) = \sum_{i=1}^l a_i \bigoplus_{j=1}^{k}\phi(t_{i,j}).
\end{equation}
Let $\mathbf{G}[V_1,\dots,V_k] = \{g_1\oplus\dots\oplus g_k : g_i\in\mathbf{G}[V_i]\}$. Then there is an isomorphism
from $\mathbb{K}\mathbf{G}[V_1,\dots,V_k]$ to $\mathbb{K}\mathbf{G}[V_1]\otimes\dots\otimes\mathbb{K}\mathbf{G}[V_k]$ defined by $g_1\oplus\dots\oplus g_k
\mapsto g_1\otimes\dots\otimes g_k$. This isomorphism sends $\phi(w)$ to
$\sum_{i=1}^l a_i \bigotimes_{j=1}^{k}\phi(t_{i,j})$. Since for all $1\leq j\leq k$ the basis of
$\mathbf{ComMag}[V_j]$ is a free family, the family $\{v_1\otimes\dots \otimes v_k : \text{$v_j$ is in the
basis of $\mathbf{ComMag}[V_j]$}\}$ is also free, and hence $\phi(w) = 0$
implies $a_i = 0$ for all $1\leq i\leq l$. This shows that the epimorphism $\phi$ is also a monomorphism
and hence an isomorphism, which concludes the proof.
\end{proof}
\begin{proposition}
The operad $\mathbf{SP}$ admits as Koszul dual the operad $\mathbf{SP}^!$ which is isomorphic to the
operad $\mathrm{Ope}(G^{\vee}, R')$ where
$R'$ is the subspecies of $\mathbf{Free}_{G^\vee}$ generated by
\begin{subequations}
\begin{equation} \label{equ:rel_dual_1}
\Segment{a}{\ast}^{\vee} \circ^{\xi}_\ast \Segment{b}{c}^{\vee},
\end{equation}
\begin{equation} \label{equ:rel_dual_2}
\Points{a}{\ast}^{\vee} \circ^{\xi}_\ast \Segment{b}{c}^{\vee}
\enspace + \enspace
\Segment{c}{\ast}^{\vee} \circ^{\xi}_\ast \Points{a}{b}^{\vee}
\enspace + \enspace
\Segment{b}{\ast}^{\vee} \circ^{\xi}_\ast \Points{a}{c}^{\vee},
\end{equation}
\begin{equation} \label{equ:rel_dual_3}
\Points{a}{\ast}^{\vee} \circ^{\xi}_\ast \Points{b}{c}^{\vee}
+
\Points{c}{\ast}^{\vee} \circ^{\xi}_\ast \Points{a}{b}^{\vee}
+
\Points{b}{\ast}^{\vee} \circ^{\xi}_\ast \Points{c}{a}^{\vee}.
\end{equation}
\end{subequations}
\end{proposition}
\begin{proof}
Denote by $r_1$ and $r_2$ the vectors \eqref{equ:rel_1} and \eqref{equ:rel_2}, and by $r'_1$, $r'_2$,
and $r'_3$ the vectors \eqref{equ:rel_dual_1}, \eqref{equ:rel_dual_2},
and~\eqref{equ:rel_dual_3}. Denote by $I$ the operad ideal generated by~$r_1$ and~$r_2$.
Then, as a vector space, $I[\{a,b,c\}]$ is the linear span of the set
\begin{equation}
\{r_1, r_1 \cdot (ab), r_2,r_2 \cdot (abc), r_2 \cdot (acb)\},
\end{equation}
where $\cdot$ is the action of the symmetric group, e.g., $r_1\cdot (ab) = \mathbf{Free}_{G}[(ab)](r_1)$.
This space is a subspace of dimension $5$ of $\mathbf{Free}_G[\{a,b,c\}]$, which is
of dimension $12$ (three choices for the pair of grouped leaves and two choices of generator at
each of the two internal vertices). Hence, since as a vector space
\begin{equation}
\mathbf{Free}_{G^\vee}[\{a,b,c\}]
\cong \mathbf{Free}_{G^*}[\{a,b,c\}]\cong
\mathbf{Free}_{G}[\{a,b,c\}],
\end{equation}
$I^{\bot}[\{a,b,c\}]$ must be of dimension 7.
Denote by $J$ the ideal generated by $r_1'$, $r_2'$ and $r_3'$. Then as a vector space
$J[\{a,b,c\}]$ is the linear span of the set
\begin{equation}
\{ r_1', r_1'\cdot (ab), r_1'\cdot (ac),r_2', r_2'\cdot (abc),r_2'\cdot (acb), r_3'\}.
\end{equation}
This space is of dimension 7. To conclude, we need to show that for any $f\in J[\{a,b,c\}]$
and $x\in I[\{a,b,c\}]$ we have $\langle f,x\rangle=0$. Write $p_{a,b} = \Points{a}{b}$ and $s_{a,b} = \Segment{a}{b}$.
Among the 21 cases to check, we have for example:
\begin{equation}\begin{split}
\langle r_1',r_1\rangle &=
\langle s_{a,\ast}^{\vee}\circ^{\xi}_{\ast} s_{b,c}^{\vee}\,,\, p_{\ast,c}\circ^{\xi}_{\ast}p_{a,b} -
p_{a,\ast}\circ^{\xi}_{\ast} p_{b,c}\rangle \\
&= \langle s_{a,\ast}^{\vee}\circ^{\xi}_{\ast} s_{b,c}^{\vee}\,,\, p_{\ast,c}\circ^{\xi}_{\ast}p_{a,b}\rangle - \langle s_{a,\ast}^{\vee}\circ^{\xi}_{\ast} s_{b,c}^{\vee}\,,\,p_{a,\ast}\circ^{\xi}_{\ast} p_{b,c}\rangle \\
&= s_{a,\ast}^{\vee}(p_{\ast,c})s_{b,c}^{\vee}(p_{a,b}) -
s_{a,\ast}^{\vee}(p_{a,\ast})s_{b,c}^{\vee}(p_{b,c}) = 0,
\end{split}\end{equation}
and
\begin{equation}\begin{split}
\langle r_2'\cdot(abc)&,r_2\rangle = \\
&\langle p_{b,\ast}^{\vee}\circ^{\xi}_{\ast}s_{c,a}^{\vee}+s_{a,\ast}^{\vee}\circ^{\xi}_{\ast}p_{b,c}^{\vee}+s_{c,\ast}^{\vee}\circ^{\xi}_{\ast}p_{a,b}^{\vee}\,,\,
s_{a,\ast}\circ^{\xi}_{\ast}p_{b,c}-p_{c,\ast}\circ^{\xi}_{\ast}s_{a,b}-p_{b,\ast}\circ^{\xi}_{\ast}s_{c,a}\rangle \\
&=p_{b,\ast}^{\vee}(s_{a,\ast})s_{c,a}^{\vee}(p_{b,c}) -p_{b,\ast}^{\vee}(p_{c,\ast})s_{c,a}^{\vee}(s_{a,b}) -p_{b,\ast}^{\vee}(p_{b,\ast})s_{c,a}^{\vee}(s_{c,a}) \\
&+ s_{a,\ast}^{\vee}(s_{a,\ast})p_{b,c}^{\vee}(p_{b,c}) - s_{a,\ast}^{\vee}(p_{c,\ast})p_{b,c}^{\vee}(s_{a,b}) - s_{a,\ast}^{\vee}(p_{b,\ast})p_{b,c}^{\vee}(s_{c,a}) \\
&+ s_{c,\ast}^{\vee}(s_{a,\ast})p_{a,b}^{\vee}(p_{b,c}) - s_{c,\ast}^{\vee}(p_{c,\ast})p_{a,b}^{\vee}(s_{a,b}) - s_{c,\ast}^{\vee}(p_{b,\ast})p_{a,b}^{\vee}(s_{c,a}) \\
&= -1 +1 = 0.
\end{split}\end{equation}
We leave the verification of the 19 remaining cases to the interested reader.
\end{proof}
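For readers who prefer a machine check, the remaining cases can be verified by a small linear-algebra computation. The following Python sketch (an addition for illustration, not part of the original text) encodes the $12$ basis trees of $\mathbf{Free}_G[\{a,b,c\}]$ as coordinate vectors, spans $I[\{a,b,c\}]$ and $J[\{a,b,c\}]$ by the symmetric-group orbits of the relations, and checks the dimensions $5$ and $7$ together with the orthogonality, assuming the pairing acts as the natural basis pairing (dual tree against tree), as in the two computations above.

```python
import numpy as np
from itertools import permutations

# Basis of Free_G[{a,b,c}]: a tree (outer, inner, (x, y)) encodes the
# composition outer_{z,*} o_* inner_{x,y} with {x, y, z} = {a, b, c};
# 'p' is the two-point graph and 's' the segment.  2 * 2 * 3 = 12 trees.
PAIRS = (('a', 'b'), ('a', 'c'), ('b', 'c'))
BASIS = [(o, i, pr) for o in 'ps' for i in 'ps' for pr in PAIRS]
INDEX = {t: k for k, t in enumerate(BASIS)}

def tree(outer, inner, x, y):
    v = np.zeros(len(BASIS))
    v[INDEX[(outer, inner, tuple(sorted((x, y))))]] = 1.0
    return v

def act(sigma, v):
    # action of a permutation (given as a dict on letters) on the leaf labels
    w = np.zeros_like(v)
    for (o, i, (x, y)), k in INDEX.items():
        w[INDEX[(o, i, tuple(sorted((sigma[x], sigma[y]))))]] += v[k]
    return w

# r_1 = p_{c,*} o p_{a,b} - p_{a,*} o p_{b,c}
r1 = tree('p', 'p', 'a', 'b') - tree('p', 'p', 'b', 'c')
# r_2 = s_{a,*} o p_{b,c} - p_{c,*} o s_{a,b} - p_{b,*} o s_{a,c}
r2 = tree('s', 'p', 'b', 'c') - tree('p', 's', 'a', 'b') - tree('p', 's', 'a', 'c')
# dual relations r'_1, r'_2, r'_3
r1d = tree('s', 's', 'b', 'c')
r2d = tree('p', 's', 'b', 'c') + tree('s', 'p', 'a', 'b') + tree('s', 'p', 'a', 'c')
r3d = tree('p', 'p', 'b', 'c') + tree('p', 'p', 'a', 'b') + tree('p', 'p', 'a', 'c')

sigmas = [dict(zip('abc', pm)) for pm in permutations('abc')]
I = np.array([act(s, r) for r in (r1, r2) for s in sigmas])
J = np.array([act(s, r) for r in (r1d, r2d, r3d) for s in sigmas])

print(np.linalg.matrix_rank(I), np.linalg.matrix_rank(J))  # 5 7
print(np.allclose(I @ J.T, 0))                             # True: J = I^perp
```

The ranks confirm the dimension count $5 + 7 = 12$, and the vanishing of all products covers the $21$ pairings at once.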
In order to compute the Hilbert series of $\mathbf{SP}^!$ we need to use identity (\ref{hdual})
and hence to prove that $\mathbf{SP}$ is Koszul. Providing the necessary background to define
Koszul operads and their properties is beyond the scope of this article and the interested
reader is referred to \cite{LV12} and \cite{Men15}. We use here the characterisation
given in \cite{Hof10}.
\begin{proposition}
The operad $\mathbf{SP}$ is Koszul.
\end{proposition}
\begin{proof}
The rooted trees in $\mathbf{Free}_G[n]$ are in bijection with planar trees following the process
described in \cite{Hof10}. Consider the trees of the form
\begin{equation}
\includegraphics[scale=0.7]{fig/pbw},
\end{equation}
where $p$ is the graph with two vertices and no edge, $t$ the graph with two vertices and one
edge and, for $1\leq i\leq k$, $t_i$ is a tree with internal vertices labelled by $t$ and set of
leaves $V_i\subseteq [n]$ such that $\bigsqcup V_i = [n]$.
Then choosing $t<p$ and an order on planar trees similar to the suitable order presented in
section 3.4 of \cite{Hof10} makes the considered trees
a PBW basis of the operad (over an $\mathbb{S}$-module) $\bigoplus_{n\geq 0} \mathbf{SP}[n]$.
This concludes the proof.
\end{proof}
\begin{proposition} \label{hilbert}
The Hilbert series of $\mathbf{SP}^{!}$ is
\begin{equation}
\mathcal{H}_{\mathbf{SP}^!}(x) = \dfrac{(1-\log(1-x))^2-1}{2}.
\end{equation}
\end{proposition}
\begin{proof}
The Hilbert series of $\mathbf{ComMag}$ is $\mathcal{H}_{\mathbf{ComMag}}(x) = 1-\sqrt{1-2x}$
hence the Hilbert series of $\mathbf{SP}\cong E(\mathbf{ComMag})$ is $\mathcal{H}_{\mathbf{SP}}(x)=e^{1-\sqrt{1-2x}}-1$,
where the $-1$ comes from the fact that we consider positive species.
We deduce the Hilbert series of $\mathbf{SP}^!$ from $\mathcal{H}_{\mathbf{SP}}$ and the identity (\ref{hdual}).
\end{proof}
The first dimensions $\dim \mathbf{SP}^![n]$ for $n\geq 1$ are
\begin{equation}
1, 2, 5, 17, 74, 394, 2484, 18108, 149904.
\end{equation}
This is sequence~\OEIS{A000774} of~\cite{Slo}. This sequence is in particular linked to some
pattern-avoiding signed permutations and mesh patterns.
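Both the Hilbert series of Proposition~\ref{hilbert} and these dimensions can be double-checked with a few lines of exact arithmetic. The Python sketch below (an addition for illustration, not part of the original text) expands $\mathcal{H}_{\mathbf{SP}^!}(x) = -\log(1-x) + \log(1-x)^2/2$ and multiplies the $n$-th Taylor coefficient by $n!$:

```python
from fractions import Fraction
from math import factorial

N = 9  # number of dimensions to check

# Coefficients of -log(1-x) = sum_{n>=1} x^n / n (index = exponent).
minus_log = [Fraction(0)] + [Fraction(1, n) for n in range(1, N + 1)]

# Coefficients of log(1-x)^2 = (-log(1-x))^2 via a Cauchy product.
log_sq = [Fraction(0)] * (N + 1)
for i in range(1, N + 1):
    for j in range(1, N + 1 - i):
        log_sq[i + j] += minus_log[i] * minus_log[j]

# H_{SP^!}(x) = ((1 - log(1-x))^2 - 1) / 2 = -log(1-x) + log(1-x)^2 / 2
h = [minus_log[n] + log_sq[n] / 2 for n in range(N + 1)]

# dim SP^![n] is n! times the n-th Taylor coefficient of the Hilbert series.
dims = [int(h[n] * factorial(n)) for n in range(1, N + 1)]
print(dims)  # [1, 2, 5, 17, 74, 394, 2484, 18108, 149904]
```

The output reproduces the nine dimensions listed above.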
Before ending this section let us mention the suboperad $\mathbf{LP}$ of $\mathbb{K} \mathbf{MG}$ generated by
\begin{equation}
\left\{
\begin{tikzpicture}[Centering,scale=.6]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\draw[EdgeGraph](a)edge[loop](a);
\end{tikzpicture},
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\end{tikzpicture}
\right\}.
\end{equation}
This operad is of particular interest since its two generators can be considered as minimal
elements in the sense that a partial composition with the two isolated vertices adds exactly
one vertex and no edge, while a partial composition with the loop adds exactly one edge and
no vertex. A natural question to ask at this point concerns the description of the
multigraphs generated by these two minimal elements.
\begin{proposition}
The following properties hold:
\begin{itemize}
\item the operad $\mathbf{SP}$ is a suboperad of $\mathbf{LP}$;
\item the operad $\mathbf{LP}$ is a strict suboperad of $\mathbb{K} \mathbf{MG}$. In particular, the multigraph
\begin{equation}
\begin{tikzpicture}[Centering,scale=.8]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\node[NodeGraph](c)at(2,0){$c$};
\draw[EdgeGraph](a)--(b);
\draw[EdgeGraph](b)edge[bend left=40](c);
\draw[EdgeGraph](b)edge[bend right=40](c);
\end{tikzpicture}
\end{equation}
is in $\mathbb{K} \mathbf{MG}$ but is not in $\mathbf{LP}$.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item The following shows that
\begin{math}
\begin{tikzpicture}[Centering,scale=.6]
\node[NodeGraph](a)at(0,0){$a$};
\node[NodeGraph](b)at(1,0){$b$};
\draw[EdgeGraph](a)--(b);
\end{tikzpicture}
\end{math}
is in $\mathbf{LP}[\{a,b\}]$ and hence that $\mathbf{SP}$ is a suboperad of $\mathbf{LP}$:
\begin{equation}
\begin{tikzpicture}[Centering,scale=.6]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$\ast$};
\draw[EdgeGraph](a)edge[loop](a);
\end{tikzpicture}
\circ_{\ast}
\Points{a}{b}
-
\begin{tikzpicture}[Centering,scale=.6]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$a$};
\draw[EdgeGraph](a)edge[loop](a);
\end{tikzpicture}
-
\begin{tikzpicture}[Centering,scale=.6]
\tikzset{every loop/.style={}}
\node[NodeGraph](a)at(0,0){$b$};
\draw[EdgeGraph](a)edge[loop](a);
\end{tikzpicture}
\enspace = \enspace
2 \Segment{a}{b}.
\end{equation}
\item Using computer algebra, we generated all vectors in $\mathbf{LP}[\{a,b,c\}]$ with three edges
and showed that the multigraph above is not a linear combination of these.
\end{itemize}
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Many oscillatory systems enter stable limit cycles as their dynamic steady state. If such systems are coupled, they often interact only through their positions along their periodic orbit, their phases. The simplest prototypical model to describe such coupled phase oscillators is the celebrated Kuramoto model \cite{Kura75,Stro00}. It characterizes the collective dynamics of a variety of phase oscillator systems ranging from chemical reactions \cite{Kura84} and neural networks \cite{Somp90,kirst16_dynamic} to coupled Josephson junctions \cite{Wies96}, laser arrays \cite{Vlad03}, optomechanical systems \cite{Hein11} and mean-field quantum systems \cite{14Witthaut,17Witthaut}.
Studies of the Kuramoto model and more general phase oscillator networks typically focus on the onset of synchronization between the individual oscillators \cite{Kura75,Stro00,Kura84,Aceb05,Doerfler14}. Starting from the analytical results for the mean-field behavior of the all-to-all coupled Kuramoto model, which correctly predict the emergence of partial phase locking, extensions to various network topologies were developed \cite{Timm06,06Bocaletti,07Gomez,08Arenas}. These extensions often use a similar methodology and define an adapted order parameter to analyze the transition to synchrony. Interestingly, none of these order parameters captures all transitions from the incoherent to the completely synchronized state for arbitrary, finite networks.
Depending on the application different states of phase ordering are relevant and a different order parameter is appropriate. Commonly, the onset of partial phase locking has received most interest \cite{Kura75,Stro00,Kura84}. For example, partial phase locking indicates the growth of number fluctuations in quantum mean-field models \cite{14Witthaut,17Witthaut}. In contrast, in technical systems such as power grids, a fully phase locked state is required for stable operation \cite{12powergrid,Dorf13,Mott13,16critical}.
We propose a universal order parameter that accurately reflects the phase coherence of phase oscillators in any network, describing the initial growth of partially phase locked clusters as well as the convergence to full synchrony. This order parameter is particularly suited to study the fully phase locked state as it directly reflects the dynamic stability of this steady state. It increases monotonically with the coupling strength, in contrast to previously defined mean field order parameters.
\begin{figure*}
\centering
\includegraphics[width = 0.9\textwidth]{FIG1_kuramoto_example_newSim.pdf}
\caption{\textbf{Synchronization in the Kuramoto model.}
Dynamics of the Kuramoto model for $N=10$ oscillators with a random interaction network. The phase coherence between neighboring oscillators increases with the coupling strength, eventually leading to full synchrony of all oscillators. (a) Topology of the interaction network, the numbers denote $\omega_i$ of the respective oscillator. (b) For small coupling the oscillators move (almost) independently with their individual frequencies (slope). (c,d) As the coupling strength increases beyond $K_{c1} = 0.1$ some oscillators enter a partially phase locked state and their phases evolve with the same time-averaged frequency. (e) If the coupling strength becomes larger than $K_{c2} = 1$, all nodes are phase locked and move with the same constant frequency $\frac{d \theta_i}{dt} = 0$. (f,g) Further increasing the coupling strength reduces the phase differences of the oscillators until complete synchrony $\theta_i - \theta_j = 0$ is achieved for $K \rightarrow \infty$.}
\label{fig:kuramoto_example}
\end{figure*}
\section{Phase oscillators and the Kuramoto model}
Limit cycles are ubiquitous as dynamically stable states in a wide range of systems. When such systems are coupled, interactions can typically be approximated as interactions between their phases $\theta_i$. The Kuramoto model
\begin{equation}
\frac{d \theta_i}{dt} = \omega_i +
K \sum_{j=1}^N A_{i,j} \sin(\theta_j - \theta_i)
\label{eqn:kuramoto-intro}
\end{equation}
is one of the simplest models for such coupled phase oscillators. It describes the dynamics of $N$ oscillators with natural frequencies $\omega_i$ and sinusoidal coupling. The parameter $K$ denotes the coupling strength of the interactions and $A_{i,j} \in \left\{0,1\right\}$ is the adjacency matrix of the interaction network, describing which nodes interact with which other nodes. The results easily extend to inhomogeneous coupling strengths with $A_{i,j} \in \mathbb{R}$. In many applications, interactions between individual oscillators are reciprocal and in the following we assume an undirected network, i.e., a symmetric adjacency matrix $A_{i,j} = A_{j,i}$. Similarly, we can without loss of generality consider a co-rotating frame such that the natural frequencies of the oscillators are centered around $0$ and we have $\sum_i \omega_i = 0$, where the sum runs from $1$ to $N$. In the following we only consider connected networks, as otherwise we can treat the connected sub-systems individually.
The dynamics of coupled Kuramoto oscillators depends strongly on the strength $K$ of the interactions. For small coupling $K$ all oscillators rotate (almost) independently with their natural frequencies $\omega_j$. In this state the phases are \emph{incoherent}. Above some critical coupling strength $K \ge K_{c1}$, a subset of the oscillators starts to synchronize such that their time averaged frequencies $\left< \frac{d \theta_i}{dt} \right>_t$ become identical. The phases of these oscillators then move together in a \emph{partially phase locked} state and their phase differences $\theta_i - \theta_j$ are bounded. When the coupling becomes even stronger, $K \ge K_{c2}$, a \emph{fully phase locked} state appears in a saddle-node bifurcation \cite{14bifurcation}. All oscillators synchronize to a common frequency $\frac{d \theta_i}{dt} = \mathrm{const.} = 0$ and the phase differences between all nodes become constant $\theta_i - \theta_j = \mathrm{const}$. Further increasing the coupling reduces the phase differences until \emph{complete synchronization} of the oscillators, defined by $\theta_i - \theta_j = 0$, is achieved as $K \rightarrow \infty$. This behavior is illustrated in Fig.~\ref{fig:kuramoto_example} showing the dynamics of a small random network of oscillators for various coupling strengths.
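The scenario above is straightforward to reproduce numerically. The following Python sketch (an illustrative addition; the all-to-all network, random frequencies, seed and explicit Euler integrator are arbitrary choices, not taken from the paper) integrates Eq.~\eqref{eqn:kuramoto-intro} for weak and strong coupling and compares the resulting frequencies:

```python
import numpy as np

def kuramoto_rhs(theta, omega, A, K):
    # d(theta_i)/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
    diff = theta[None, :] - theta[:, None]          # diff[i, j] = theta_j - theta_i
    return omega + K * np.sum(A * np.sin(diff), axis=1)

def simulate(theta0, omega, A, K, dt=0.01, steps=20000):
    theta = theta0.copy()
    for _ in range(steps):                          # explicit Euler integration
        theta = theta + dt * kuramoto_rhs(theta, omega, A, K)
    return theta

rng = np.random.default_rng(1)
N = 10
omega = rng.normal(size=N)
omega -= omega.mean()                               # co-rotating frame: sum_i omega_i = 0
A = np.ones((N, N)) - np.eye(N)                     # all-to-all network for illustration
theta0 = rng.uniform(0, 2 * np.pi, N)

# effective frequencies after a long transient, for weak and strong coupling
weak = kuramoto_rhs(simulate(theta0, omega, A, K=0.02), omega, A, K=0.02)
strong = kuramoto_rhs(simulate(theta0, omega, A, K=5.0), omega, A, K=5.0)
print(np.ptp(weak), np.ptp(strong))  # spread out for weak K, ~0 once fully phase locked
```

For weak coupling the oscillators keep distinct frequencies, while for strong coupling all $\frac{d\theta_i}{dt}$ collapse to the common value $0$, as expected for the fully phase locked state.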
Most studies focus on the transition from incoherent oscillators moving at their individual frequencies to a partially phase locked state \cite{Kura75,Stro00,Kura84,Aceb05,Doerfler14}. In a variety of technical systems, however, partial phase coherence is not sufficient for stable function. For instance, Kuramoto-like dynamics appear in a second order model describing the frequency dynamics of power grids \cite{12powergrid,12braess,13nonlocal,14bifurcation,schafer2015decentral,16critical,Dorf13,Mott13}:
\begin{equation}
M_i \frac{\mathrm{d}^2\theta_i}{\mathrm{d}t^2} + D_i \frac{\mathrm{d}\theta_i}{\mathrm{d}t}
= P_i + \sum_{j = 1}^{N} K A_{i,j} \sin(\theta_j - \theta_i).
\label{eq:powergrid}
\end{equation}
Here, $M_i$ is the inertia, $D_i$ the damping coefficient and $P_i$ the power injection at node $i$. The phases $\theta_i(t)$ describe the state of rotating machines (generators or motors) and the coupling their interactions via power transmission lines. In the steady state $\frac{\mathrm{d}\theta_i}{\mathrm{d}t} = 0$, required for stable operation of the power grid, all machines work at the same frequency. This state is characterized by the same equations that describe a fully phase locked state in the Kuramoto model. The stability of this state and how the phase cohesiveness in the network scales with the coupling strength is an important question \cite{Doerfler10}.
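At such a steady state the inertia and damping terms drop out of Eq.~\eqref{eq:powergrid}, and each node must satisfy the power balance $P_i + K\sum_j A_{i,j}\sin(\theta_j-\theta_i) = 0$. A minimal numerical sketch of this condition (a hypothetical two-machine example added for illustration, not taken from the paper):

```python
import numpy as np

# Two-machine toy system: a generator with P_1 = +P feeding a motor with P_2 = -P.
# A steady state (d theta_i / dt = 0) must satisfy, at each node i,
#     P_i + K * sum_j A_ij * sin(theta_j - theta_i) = 0,
# which here reduces to sin(theta_1 - theta_2) = P / K.
P, K = 1.0, 2.0
delta = np.arcsin(P / K)              # phase difference of the phase locked state
residual_1 = P + K * np.sin(-delta)   # power balance at the generator
residual_2 = -P + K * np.sin(delta)   # power balance at the motor
print(residual_1, residual_2)         # both ~0; no such state exists when K < P
```

The fixed point exists only for $K \geq P$, a simple instance of the saddle-node bifurcation of the fully phase locked state mentioned above.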
Ideally, a universal order parameter would be able to characterize both the transition to partial as well as to full phase locking and the properties of a phase locked state in arbitrary, especially finite networks.
\section{Kuramoto order parameters}
To quantitatively study the transitions from an incoherent to a fully synchronous state one typically introduces an order parameter to measure the phase coherence. For the original all-to-all coupling model, Kuramoto introduced the complex order parameter \cite{Kura84,Stro00}
\begin{equation}
r(t)e^{\text{i}\psi(t)} = \frac{1}{N} \sum_{i=1}^N e^{\text{i}\theta_i} \,,
\label{eqn:def-order-r}
\end{equation}
where $\psi(t)$ describes the average phase of all oscillators and $r(t)$ the degree of phase coherence.
A single scalar measure for the phase ordering is given by the long-time average of the absolute value of the order parameter
\begin{eqnarray}
r^2_\mathrm{Kuramoto} &=& \left< \left|r(t)e^{\text{i}\psi(t)}\right|^2 \right>_t = \left< r(t)^2 e^{\text{i}\psi(t)} e^{-\text{i}\psi(t)}\right>_t \nonumber\\
&=& \left< \frac{1}{N^2} \sum_{i,j = 1}^N e^{\text{i}(\theta_i - \theta_j)} \right>_t \nonumber\\
&=& \left< \frac{1}{N^2} \sum_{i,j = 1}^N \cos(\theta_i - \theta_j) \right>_t\,.
\label{eqn:def-order-r2}
\end{eqnarray}
This order parameter measures the average of the phase differences of all pairs of oscillators. If the oscillators are incoherent, the time average vanishes and the order parameter is $0$. When a fraction of the oscillators are partially phase locked the cosine of their phase differences becomes positive and does not disappear in the time average; the order parameter becomes positive.
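In code, the order parameter of Eq.~(\ref{eqn:def-order-r}) and its time average Eq.~(\ref{eqn:def-order-r2}) amount to the following minimal NumPy sketch:

```python
import numpy as np

def kuramoto_r(theta):
    """Magnitude r and mean phase psi of the complex Kuramoto order parameter
    for a single snapshot of N phases."""
    z = np.exp(1j * theta).mean()
    return np.abs(z), np.angle(z)

def r2_time_avg(theta_t):
    """<|r(t)|^2>_t for a (T, N) array of phase snapshots; equals the
    time-averaged mean of cos(theta_i - theta_j) over all ordered pairs."""
    z_t = np.exp(1j * theta_t).mean(axis=1)
    return (np.abs(z_t) ** 2).mean()
```

Fully aligned phases give $r = 1$, while phases spread uniformly on the circle give $r = 0$.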
In the original case for $N$ all-to-all coupled oscillators with natural frequencies $\omega_i$ following a distribution $g(\omega)$, mean-field theory correctly predicts the transition to partial phase coherence at the critical coupling $K_{c1} = 2/\left[\pi g(0)\right]$ if the frequency distribution $g$ is unimodal and symmetric around zero. For larger coupling strengths $K>K_{c1}$ the order parameter then grows continuously as $r(K) \propto \sqrt{1-K_{c1}/K}$ \cite{Stro00}. As such, this order parameter characterizes the transition from an \emph{incoherent} to a \emph{partially} phase locked state.
This original order parameter is clearly unsuited for studying more general interaction networks: one would compare the phases of two oscillators that interact only indirectly via a (possibly very long) chain of intermediate oscillators. As such, several adaptations of the order parameter have been introduced to study the effect of the network topology on the synchronization of Kuramoto oscillators:
The first definition used by Restrepo et al. \cite{05Restrepo,06Restrepo,08Arenas} considers an intuitively defined local order parameter
\begin{equation}
r_i = \left|\sum_{j=1}^N A_{i,j} \left<e^{\text{i}\theta_j}\right>_t\right|
\end{equation}
for oscillator $i$, measuring the phase coherence of all neighboring oscillators. A global order parameter is then obtained by normalizing the sum of the local order parameters by the total degree,
\begin{equation}
r_\mathrm{net} = \frac{\sum_{i=1}^N r_i}{\sum_{i=1}^N k_i},
\end{equation}
where $k_i$ is the degree of node $i$.
A second definition \cite{04Ichinomiya,06Bocaletti} adapts the original order parameter Eq.~(\ref{eqn:def-order-r}), weighting each node by its degree
\begin{equation}
r_\mathrm{mf} = \left<\left|\frac{\sum_{i=1}^N k_i e^{\text{i}\theta_i}}{\sum_{i=1}^N k_i}\right|\right>_t .
\end{equation}
This order parameter ignores the specific network topology in favor of a mean-field view of network ensembles to simplify analytical calculations.
Finally, a definition of an order parameter to study local synchronization used in \cite{07Gomez} derives from the original order parameter Eq.~(\ref{eqn:def-order-r2}), restricting it to the network topology and only averaging over the phase differences between directly connected nodes
\begin{equation}
r_\mathrm{link} = \frac{1}{\sum_{i=1}^N k_i} \sum_{i,j = 1}^N A_{i,j} \left| \left< e^{\text{i} \left(\theta_i - \theta_j\right)} \right> _t \right| \,.
\end{equation}
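For reference, the three network order parameters above can be computed from an adjacency matrix and a set of phase snapshots as follows (a sketch; the time averages are taken over the supplied snapshots):

```python
import numpy as np

def order_params(A, theta_t):
    """r_net, r_mf and r_link for adjacency A and phase snapshots theta_t (T, N)."""
    k = A.sum(axis=1)                      # node degrees
    z_t = np.exp(1j * theta_t)             # (T, N)
    # r_net: coherence of each node's neighbourhood, normalised by total degree
    r_i = np.abs(A @ z_t.mean(axis=0))
    r_net = r_i.sum() / k.sum()
    # r_mf: degree-weighted global mean field
    r_mf = np.abs((z_t * k).sum(axis=1) / k.sum()).mean()
    # r_link: per-link phase coherence |<e^{i(theta_i - theta_j)}>_t|
    corr = np.abs((z_t[:, :, None] * z_t.conj()[:, None, :]).mean(axis=0))
    r_link = (A * corr).sum() / k.sum()
    return r_net, r_mf, r_link
```

In a fully phase-locked state with all phases identical, all three evaluate to $1$; they differ in how they weight partial locking and the network topology.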
The above order parameters work well for their respective uses, for example to study synchronization analytically in mean-field network models. However, none of them accurately captures the whole transition to synchronization, especially in smaller networks. We illustrate this in Fig.~\ref{fig:network_order_params} for a small random network: While $r_\mathrm{net}$ clearly captures the transition to full phase locking at $K_{c2}=1$, it is effectively $0$ before full phase locking becomes stable and does not indicate where individual nodes enter the partially phase locked state for $K < 1$. Conversely, $r_\mathrm{link}$ describes these transitions but cannot cover the convergence to full synchrony, since $r_\mathrm{link} = 1$ in the fully phase locked state regardless of the network topology. Finally, $r_\mathrm{mf}$ works well to describe the behavior for a large ensemble of networks, but is clearly unsuited for use with specific, particularly small, networks as it ignores the specific network structure and is large already for weak coupling. It is easy to construct further examples where, for instance, the mean-field order parameter $r_\mathrm{mf}$ is non-monotonic with respect to the coupling strength $K$, even in the fully phase locked state.
\begin{figure*}
\centering
\includegraphics[width = 0.7\textwidth]{FIG2_order_parameters_newSim.pdf}
\caption{\textbf{Order parameters to measure phase coherence in networks.}
Different order parameters measuring the phase coherence in complex networks of Kuramoto oscillators, describing the transition from a completely incoherent state [$K=0$, cf. Fig.~\ref{fig:kuramoto_example}(b)] to full synchrony [$K\rightarrow \infty$, cf. Fig.~\ref{fig:kuramoto_example}(g)]. None of the order parameters used in the literature $r_\mathrm{net}$, $r_\mathrm{mf}$ and $r_\mathrm{link}$ captures all transitions. (a) Topology of the interaction network, cf. Fig.~\ref{fig:kuramoto_example}(a). (b) $r_\mathrm{net}$ is almost $0$ until the fully phase locked state becomes stable at $K=1$. It fails to capture transitions in the partially phase locked regime. (c) In contrast, $r_\mathrm{link}$ captures the transitions in the partially phase locked regime well. However, $r_\mathrm{link} = 1$ in the fully phase locked state for $K \ge 1$ and does not capture the convergence to complete synchrony. (d) $r_\mathrm{mf}$ measures globally averaged phase coherence. It fails to accurately represent the incoherent and partially phase locked state with respect to the actual network topology, especially for small networks. (e) Our universal order parameter $r_\mathrm{uni}$ accurately reflects the degree of phase coherence in all stages of synchronization. All results show the long time limit of the order parameter starting from identical initial conditions $\theta_i = 0$, the black dashed lines mark transitions where single nodes enter a (partially) phase locked state.}
\label{fig:network_order_params}
\end{figure*}
\section{A universal order parameter for complex networks}
In order to obtain an order parameter that is both practically applicable and describes the whole evolution from an incoherent state to complete synchronization, we propose a universal network order parameter:
\begin{definition}
Given a network of coupled Kuramoto oscillators Eq.~(\ref{eqn:kuramoto-intro}), phase ordering is measured by
\begin{eqnarray}
r_\mathrm{uni} &=& \frac{1}{\sum_{i=1}^N k_i} \sum_{i,j=1}^N A_{i,j} \left< \Re\left( e^{\text{i}\left(\theta_i - \theta_j\right)} \right)\right>_t \nonumber\\
&=& \frac{1}{\sum_{i=1}^N k_i} \sum_{i,j=1}^N A_{i,j} \left< \cos\left(\theta_i - \theta_j\right) \right>_t . \label{eqn:order_rho}
\end{eqnarray}
\end{definition}
Like $r_\mathrm{link}$, this definition respects the topology of the interaction network and considers only phase differences between neighboring nodes. In contrast to $r_\mathrm{link}$, the definition of $r_{\rm uni}$ reduces to the original Kuramoto order parameter Eq.~(\ref{eqn:def-order-r}) for a completely connected network, as desired. Figure~\ref{fig:network_order_params}(e) illustrates the behavior in comparison to the other network order parameters, showing that it accurately captures the transitions in all stages of phase locking (cf. Fig.~\ref{fig:network_order_table}).
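Computationally, $r_\mathrm{uni}$ is as simple as the alternatives (a sketch; for a single snapshot pass a `(1, N)` array):

```python
import numpy as np

def r_uni(A, theta_t):
    """Universal order parameter: time-averaged cosine of phase differences
    over directly connected pairs, normalised by the total degree."""
    diffs = theta_t[:, :, None] - theta_t[:, None, :]      # (T, N, N)
    return (A * np.cos(diffs).mean(axis=0)).sum() / A.sum()
```

Unlike $r_\mathrm{link}$, no absolute value is taken, so anti-aligned neighbours ($|\theta_i-\theta_j| > \pi/2$) contribute negatively.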
\subsection{Synchronization and stability}
The order parameter $r_{\rm uni}$ gives a full account of the emergence of synchrony. It accurately follows both the transitions to partially and fully phase locked states as well as the convergence to complete synchrony.
We illustrate this central result in Fig.~\ref{fig:network_order_params} for a small random network. Whenever one of the nodes enters a partially phase locked state we observe a strong kink in $r_{\rm uni}(K)$. Hence, we can directly track the growth of phase locked clusters. In fact, the slope $\mathrm{d} r_{\rm uni}/\mathrm{d} K$ diverges when approaching these transition points from the right. We rigorously prove this result for the transition to full phase locking below (cf. Theorem 1).
The universal order parameter has further advantages compared to the alternatives discussed above. First, $r_{\rm uni}$ quantifies the dynamical stability of a phase-locked steady state (cf. Theorem 2).
This becomes most apparent in a ring of $N$ oscillators with identical natural frequencies, $\omega_i = 0$ for all $i \in \left\{1,2,\dots,N\right\}$, where all interactions have identical coupling strength $K = 1$. Clearly, in a fully phase locked state all phase differences between neighboring nodes need to be identical while the cumulative phase difference around the ring must be a multiple of $2 \pi$ \cite{17Manik,delabays2016multistability}. Under these conditions we can characterize the phase locked states by a mode $m$ describing the total phase change around the ring $2 \pi m$. The individual phases are then given by
\begin{equation}
\theta_i^* = \frac{2 \pi i m}{N} \,
\end{equation}
with $m \in \left\{ -N/2, -N/2 + 1, \dots,N/2 \right\}$, illustrated for $m \ge 0$ in Fig.~\ref{fig:order_stability}. Here and in the following we use an asterisk $*$ to denote a phase locked steady state $\theta_i^*$ of the Kuramoto model Eq.~(\ref{eqn:kuramoto-intro}).
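For these ring states the neighbouring phase difference is $2\pi m/N$ everywhere, so the universal order parameter evaluates to $r_\mathrm{uni} = \cos(2\pi m / N)$, which the following sketch confirms:

```python
import numpy as np

N = 10
A = np.zeros((N, N))
for i in range(N):                               # ring topology
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0

def r_uni_state(m):
    """r_uni for the winding-number-m steady state theta_i = 2*pi*i*m/N."""
    theta = 2 * np.pi * np.arange(N) * m / N
    return (A * np.cos(theta[:, None] - theta[None, :])).sum() / A.sum()
```

The most stable state $m=0$ attains $r_\mathrm{uni}=1$; larger winding numbers give smaller values, mirroring the loss of stability.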
The phase locked states with $\left|\theta_i^* - \theta_{i-1}^* \right| < \pi / 2$, that means $m\in (-N/4,N/4)$, are linearly stable; the remaining states are unstable. Our order parameter $r_{\rm uni}$ reflects the linear stability of these different steady states: the state with perfectly aligned phases ($m=0$) is most stable and has $r_{\rm uni}= 1$. All other states have larger phase differences, which impede dynamical stability, and consequently lower values of $r_{\rm uni}$. This information is completely lost for the alternatives $r_{\rm link}$ and $r_{\rm mf}$, the first being identically one for all phase-locked states and the second being one for the fully aligned state and zero otherwise.
The classification of stability is due to the fact that $r_\mathrm{uni}$ Eq.~(\ref{eqn:order_rho}) counts only the phase differences in the stable region as positive contributions, i.e., when \mbox{$\left| \theta_i^* - \theta_j^* \right| < \pi/2$}. As the stability of a phase locked state is directly related to these phase differences, with phase differences close to $0$ corresponding to more stable states, the order parameter directly reflects the system's stability in any phase locked state, which is relevant, for example, for applications to power grids.
A further advantage of $r_{\rm uni}$ for the analysis of phase-locked states is monotonicity (cf. Theorem 2). Intuitively we expect that an increase of the coupling $K$ leads to a stronger alignment of the phases and thus to an increase of the order parameter. This expectation can be violated for the mean-field order parameter $r_{\rm mf}$, as it measures global alignment, but an increase of the coupling acts only locally on the links. In contrast, we rigorously prove below that the order parameter $r_{\rm uni}$ is monotonic in the coupling strength $K$ for a phase-locked steady state.
\begin{figure}
\centering
\includegraphics[width = 0.45\textwidth]{FIG3_order_parameter_comparison_table.pdf}
\caption{\textbf{A universal order parameter.}
None of the order parameters used in the literature $r_\mathrm{net}$, $r_\mathrm{mf}$ and $r_\mathrm{link}$ capture all transitions. Following the observations in Fig~\ref{fig:network_order_params}, $r_\mathrm{net}$ fails to capture transitions in the partially phase locked regime. It also fails to describe phase coherence for some small networks, most easily seen for just two connected oscillators. $r_\mathrm{link}$ does not capture the transition to complete synchrony and, since $r_\mathrm{link} = 1$ in the fully phase locked state, it does not classify stability. $r_\mathrm{mf}$ does not reflect the phase ordering in networks for partially or fully phase locked states, since it measures global phase coherence. As such it does not represent stability of the phase locked steady states which depends on local phase differences and is not suited for small networks. The order parameter $r_\mathrm{uni}$ accurately reflects the transitions for all stages of synchronization and correctly classifies stability of different phase locked states in arbitrary, even small networks.}
\label{fig:network_order_table}
\end{figure}
\subsection{Analytical results}
To formalize these observations, first consider the linear stability of a phase locked state $\vec \theta^*$ for $K \ge K_{c2}$:
A small perturbation $\vec \xi$ around the steady state, $\theta_j = \theta_j^* + \xi_j$, evolves as
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} \vec \xi = \vec J \vec \xi + \mathcal{O}(\vec \xi^2), \label{eqn:linear_stability}
\end{equation}
where we make use of vector notation $\vec \xi = (\xi_1,\ldots,\xi_N)^T$. The Jacobian matrix $\vec J$ quantifies the linear stability of a phase-locked steady state. It always has one trivial eigenvalue $\lambda_1 = 0$ with eigenvector $\vec v_1 = \left(1,1,\ldots,1\right)^\mathrm{T}$, representing a global uniform shift of all phases which does not affect the phase-locking of the nodes. In a stable phase locked state all other eigenvalues are negative $0 > \lambda_2 \ge \lambda_3 \ge \cdots \ge \lambda_N$. We denote the associated eigenvectors as $\vec v_{2}, \ldots, \vec v_N$.
We can then formalize the above observations about $r_{\rm uni}$ in the following theorems:
\begin{theorem}
Given a network of coupled Kuramoto oscillators Eq.~(\ref{eqn:kuramoto-intro}) with $\sum_i \omega_i = 0$ and $\vec \omega \cdot \vec v_2 \neq 0$, the derivative of the order parameter $r_\mathrm{uni}$ Eq.~(\ref{eqn:order_rho}) diverges when the fully phase locked state becomes unstable at the critical coupling $K_{c2}$
\begin{equation}
\mathrm{d}r_\mathrm{uni}/\mathrm{d}K \rightarrow \infty \quad\mathrm{for} \quad K \rightarrow K_{c2}^+\,. \nonumber
\end{equation}
\end{theorem}
\begin{theorem}
Given a network of coupled Kuramoto oscillators Eq.~(\ref{eqn:kuramoto-intro}) with $\sum_i \omega_i = 0$, in a fully phase locked regime $K > K_{c2}$ the order parameter $r_\mathrm{uni}$ Eq.~(\ref{eqn:order_rho}) is strictly larger than zero for every stable phase locked state and increases monotonically with increasing $K$.
\end{theorem}
\begin{figure*}
\centering
\includegraphics[width = 0.7\textwidth]{FIG4_cycle_graph_stability.pdf}
\caption{\textbf{Order parameters and stability.}
Steady states in a ring network with $N=10$ nodes and the corresponding values of the different order parameters (shifted horizontally for better visibility). The state $m=0$ is the most stable as the phase differences between neighboring nodes are smallest. The phase locked states become more unstable with increasing $m$. $r_\mathrm{mf}$ and $r_\mathrm{link}$ do not provide information about the stability of the steady state, being either zero for most of the states or identical to one for all phase locked states, respectively. Our universal order parameter $r_\mathrm{uni}$ accurately reflects the stability of the different states.}
\label{fig:order_stability}
\end{figure*}
In the remainder of this section we provide the proof for these theorems with the help of two lemmas, relating the order parameter to the eigenvalues of the Jacobian:
\begin{lemma}
Given a network of coupled Kuramoto oscillators Eq.~(\ref{eqn:kuramoto-intro}) with $\sum_i \omega_i = 0$ and $K \ge K_{c2}$ in the stable phase locked state, the order parameter $r_\mathrm{uni}$ Eq.~(\ref{eqn:order_rho}) is given by the negative trace of the Jacobian $\vec J$,
\begin{eqnarray}
r_{\rm uni} &=& - \frac{1}{K \sum_{i=1}^N k_i} \, {\rm tr}({\vec J}) \nonumber \\
&=& - \frac{1}{K \sum_{i=1}^N k_i} \sum_{j=2}^N \lambda_j.
\end{eqnarray}
\end{lemma}
\begin{proof}
Explicit calculation of the Jacobian matrix $\vec J$ in Eq.~(\ref{eqn:linear_stability}) yields
\begin{align}
J_{i,j} &= K A_{i,j} \cos(\theta_i^* - \theta_j^*) \qquad \mbox{for} \; i \neq j, \nonumber \\
J_{i,i} &= - K \sum_{j=1}^N A_{i,j} \cos(\theta_i^* - \theta_j^*).
\label{eqn:def-jacobian}
\end{align}
The lemma then follows directly by calculating the trace. The second equality follows from the fact that the largest eigenvalue of $\vec J$ is $\lambda_1 = 0$.
\end{proof}
Given that the eigenvalues of the Jacobian $\lambda_2, \dots, \lambda_N < 0$ are all negative for a stable phase locked state, $K > K_{c2}$, it immediately follows that the order parameter $r_\mathrm{uni}$ must be positive.
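The trace identity of Lemma 1 is easy to check numerically; in fact it holds for an arbitrary phase configuration, since it only uses the explicit form of the Jacobian. The random graph and phases below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 1.3
A = np.triu((rng.random((N, N)) < 0.5).astype(float), 1)
A = A + A.T                                       # symmetric adjacency, no self-loops
theta = rng.uniform(0, 2 * np.pi, N)

# Jacobian of Eq. (def-jacobian) evaluated at theta
C = K * A * np.cos(theta[:, None] - theta[None, :])
J = C - np.diag(C.sum(axis=1))

k_sum = A.sum()
r_uni = (A * np.cos(theta[:, None] - theta[None, :])).sum() / k_sum
# r_uni equals -trace(J) / (K * sum_i k_i), the statement of Lemma 1
```

The rows of $\vec J$ sum to zero by construction, which is exactly the statement that $\vec v_1 = (1,\dots,1)^T$ lies in the kernel.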
To finish proving the theorems above, we now also relate the derivative $\mathrm{d}r_\mathrm{uni}/\mathrm{d}K$ to the eigenvalues $\lambda_1,\dots,\lambda_N$ of the Jacobian matrix and their corresponding eigenvectors $\vec v_1,\ldots,\vec v_N$:
\begin{lemma}
Given a network of coupled Kuramoto oscillators Eq.~(\ref{eqn:kuramoto-intro}) with $\sum_i \omega_i = 0$ and $K \ge K_{c2}$ the derivative of the order parameter with respect to the coupling strength
is given by
\begin{eqnarray}
\frac{\mathrm{d} r_{\rm uni}}{\mathrm{d} K} =
\frac{2}{K^2 \sum_{i=1}^N k_i} \sum_{n=2}^N \frac{1}{-\lambda _n}
(\vec v_n \cdot \vec \omega )^2 \ge 0.
\label{eq:slope-r}
\end{eqnarray}
\end{lemma}
\begin{proof}
Consider a global change of the coupling strength $K' = K + \kappa$. This perturbation induces a small change of the steady state phases of the network, $\theta_m^* \rightarrow \theta'_m = \theta_m^* + \xi_m$.
Expanding the steady state condition
\begin{eqnarray}
0 = \omega_i + (K+\kappa) \sum_{m = 1}^{N} A_{i,m} \sin(\theta_m^* + \xi_m - \theta_i^* - \xi_i) \nonumber
\end{eqnarray}
to leading order in $\kappa$ and the $\xi_m$ yields
\begin{eqnarray}
&& 0 = \kappa \sum_{m = 1}^N A_{i,m} \sin(\theta_m^* - \theta_i^*)
+ \sum_{m =1}^N J_{i,m} \xi_m \nonumber \\
\Rightarrow \, && \sum_{m =1}^N J_{i,m} \xi_m = - \frac{\kappa}{2}
\sum_{\ell,m=1}^N A_{\ell,m} \sin(\theta^*_m - \theta^*_\ell) (\delta_{i,\ell} - \delta_{i,m}) \nonumber
\label{eqn:steady2}
\end{eqnarray}
for all $i \in \left\{ 1,\ldots,N \right\}$ using the definition of the Jacobian Eq.~(\ref{eqn:def-jacobian}) and the Kronecker $\delta$ symbol. In vectorial notation this set of equations can be written as
\begin{equation}
\vec J \vec \xi = - \frac{\kappa}{2} \sum_{\ell,m=1}^N A_{\ell,m} \sin(\theta^*_m - \theta^*_\ell)
\vec q_{(\ell,m)},
\label{eq:Jxi}
\end{equation}
where we define the vector $\vec q_{(\ell,m)}$, whose $i$th component is given by $\vec q_{(\ell,m),i} = \delta_{i,\ell} - \delta_{i,m}$. The matrix $\vec J$ is singular, but the vectors $\vec q_{(\ell,m)}$ are orthogonal to its kernel [$\vec v_1 = \left(1,1,\dots,1\right)^T$] such that we can solve equation (\ref{eq:Jxi}) using the Moore-Penrose pseudo-inverse $\vec J^+$. Decomposing $\vec J$ into eigenvalues and eigenstates, we thus obtain
\begin{eqnarray}
\vec \xi &=& - \frac{\kappa}{2} \sum_{\ell,m=1}^N A_{\ell,m} \sin(\theta^*_m - \theta^*_\ell)
\vec J^+ \vec q_{(\ell,m)} \nonumber \\
&=& - \frac{\kappa}{2} \sum_{\ell,m=1}^N \sum_{n=2}^N A_{\ell,m} \sin(\theta^*_m - \theta^*_\ell)
\frac{1}{\lambda_n} (\vec v_n \cdot \vec q_{(\ell,m)}) \vec v_n. \nonumber
\end{eqnarray}
We then find for the change of the phases
\begin{eqnarray}
\frac{\mathrm{d}(\theta_j - \theta_i)}{\mathrm{d}K}
&=& \vec q_{(j,i)} \cdot \lim_{\kappa \rightarrow 0}
\underbrace{\frac{\vec \theta(K+\kappa)-\vec \theta(K)}{\kappa}}_{= \vec \xi/\kappa } \nonumber \\
&=& -\frac{1}{2} \sum_{\ell,m=1}^N A_{\ell,m} \sin(\theta^*_m - \theta^*_\ell) \nonumber \\
&& \qquad \qquad \times \sum_{n=2}^N \frac{1}{\lambda_n} (\vec q_{(\ell,m)} \cdot \vec v_n) (\vec q_{(j,i)} \cdot \vec v_n). \nonumber
\end{eqnarray}
Hence, the derivative of the order parameter is given by
\begin{eqnarray}
&& \frac{\mathrm{d} r_{\rm uni}}{\mathrm{d} K} =
\frac{1}{\sum_{i=1}^N k_i} \sum_{i,j = 1}^N A_{i,j}
\frac{d\cos(\theta_i^* - \theta_j^*)}{dK} \nonumber \\
&& \; = \frac{1}{\sum_{i=1}^N k_i} \sum_{i,j = 1}^N A_{i,j}
\sin(\theta_i^* - \theta_j^*) \frac{d(\theta_j - \theta_i)}{dK} \nonumber \\
&& \; = \frac{1}{2\sum_{i=1}^N k_i} \sum_{n=2}^N \frac{1}{-\lambda _n}
\left[ \sum_{i,j = 1}^N A_{i,j} \sin(\theta_i^* - \theta_j^*)
(\vec q_{(j,i)} \cdot \vec v_n) \right]^2 . \nonumber
\end{eqnarray}
Now we use the steady state condition to simplify this expression.
We write $\vec q_{(j,i)} \cdot \vec v_n = \vec v_{n,j} - \vec v_{n,i}$, where $\vec v_{n,j}$ denotes the $j$th component of the vector $\vec v_n$ and we obtain
\begin{eqnarray}
&& \sum_{i,j = 1}^N A_{i,j} \sin(\theta_i^* - \theta_j^*)
(\vec q_{(j,i)} \cdot \vec v_n) \nonumber \\
&& \qquad = \sum_{j = 1}^N \vec v_{n,j}
\underbrace{ \sum_{i=1}^N A_{i,j} \sin(\theta_i^* - \theta_j^*) }_{= -\omega_j/K} \nonumber \\
&& \qquad \qquad \qquad - \sum_{i = 1}^N \vec v_{n,i}
\underbrace{ \sum_{j=1}^N A_{i,j} \sin(\theta_i^* - \theta_j^*) }_{= +\omega_i/K} \nonumber \\
&& \qquad = -\frac{2}{K} \, \vec v_n \cdot \vec \omega \, .
\end{eqnarray}
The derivative of the order parameter then becomes
\begin{eqnarray}
\frac{\mathrm{d} r_{\rm uni}}{\mathrm{d} K} =
\frac{2}{K^2\sum_{i=1}^N k_i} \sum_{n=2}^N \frac{1}{-\lambda _n}
(\vec v_n \cdot \vec \omega )^2 , \nonumber
\end{eqnarray}
finishing the proof of Lemma 2.
\end{proof}
For any stable steady state we have $\lambda_n < 0$ for all
$n \in \left\{2,\ldots, N \right\}$ such that the slope is non-negative. It can become
zero only if $\vec v_n \cdot \vec \omega = 0$ for all $n \in \left\{2,\ldots, N\right\}$.
As the eigenvectors form an orthonormal basis this would imply that
$\vec \omega$ is parallel to $\vec v_1$. As we assume $\sum_j \omega_j = 0$
this is only possible if $\vec \omega = \vec 0$ and we have $\mathrm{d} r_{\rm uni} / \mathrm{d} K > 0$ for $K > K_{c2}$.
Finally, as $K \rightarrow K_{c2}^+$ the phase locked state becomes unstable with $\lambda_2 \rightarrow 0$. With the assumption \mbox{$\vec \omega \cdot \vec v_2 \neq 0$} it follows that the derivative diverges, concluding the proofs for both theorems.
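Both lemmas can be verified numerically by relaxing the first-order flow to a phase-locked state and comparing the spectral expression Eq.~(\ref{eq:slope-r}) with a finite-difference derivative of $r_\mathrm{uni}$ with respect to $K$ (the complete graph and parameters below are illustrative):

```python
import numpy as np

def locked_state(omega, A, K, steps=50000, dt=2e-3):
    """Relax d theta_i/dt = omega_i + K sum_j A_ij sin(theta_j - theta_i)."""
    theta = np.zeros(len(omega))
    for _ in range(steps):
        theta = theta + dt * (omega + K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1))
    return theta

rng = np.random.default_rng(1)
N = 5
A = np.ones((N, N)) - np.eye(N)          # complete graph: locking guaranteed for large K
omega = rng.normal(size=N)
omega -= omega.mean()                    # enforce sum_i omega_i = 0
K, k_sum = 4.0, A.sum()

def r_of(K_):
    th = locked_state(omega, A, K_)
    return (A * np.cos(th[:, None] - th[None, :])).sum() / k_sum

theta = locked_state(omega, A, K)
C = K * A * np.cos(theta[:, None] - theta[None, :])
J = C - np.diag(C.sum(axis=1))
lam, V = np.linalg.eigh(J)               # ascending order; lam[-1] is the trivial zero mode
slope = 2 / (K**2 * k_sum) * sum((V[:, n] @ omega) ** 2 / (-lam[n]) for n in range(N - 1))
fd = (r_of(K + 1e-3) - r_of(K - 1e-3)) / 2e-3   # finite-difference d r_uni / dK
```

The spectral slope matches the finite-difference derivative and is strictly positive, as Theorem 2 requires for $\vec\omega \neq \vec 0$.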
\section{Conclusion}
Kuramoto oscillators are the prototypical systems used to study the synchronization behavior of limit cycle oscillators. The order parameters introduced to study this synchronization capture different aspects of the transition to synchrony. None of the order parameters previously suggested for Kuramoto oscillators on complex networks describes all transitions to partial and full phase locking as well as the convergence to full synchrony in arbitrary networks.\\
Here we have proposed a universal order parameter accurately describing the phase coherence in networks of phase oscillators. This order parameter recovers the original Kuramoto order parameter for a fully connected network of oscillators. We have analytically shown that the slope of the order parameter diverges when the fully phase locked state becomes stable, accurately marking this transition even in small networks. For larger coupling strengths a monotonic increase reflects the slow convergence to complete synchrony and directly relates to the stability of the phase locked state, important, for example, for applications to power grid models where a fully phase locked state is required for stable operation.\\
\acknowledgments
We gratefully acknowledge support from the G\"ottingen Graduate School for Neurosciences and Molecular Biosciences (DFG Grant GSC 226/2 to M.S.),
the Helmholtz Association (grant no.~VH-NG-1025 to D.W.),
the German Federal Ministry of Education and Research
(BMBF grant no.~03SF0472B and 03SF0472E to M.T. and D.W.),
and the Max Planck Society to M.T.
\section{Introduction}\label{sec:intro}
With the growing complexity of machine learning models, especially models in recommender systems, how to deal with abundant input features effectively and efficiently becomes a crucial problem.
For online recommenders in industrial settings, models are often trained on billion-scale binarized sparse features with one-hot encoding~\cite{cheng2016wide,zhou2018din}. Each feature can also be seen as a unique ID, which is first mapped to a low-dimensional embedding and then fed into the model.
A simple way to deal with the large-scale input is to treat each feature as independent. Under this assumption, there are no connections between features, so a generalized linear model can be directly trained to estimate the click-through rate based on the combination (e.g., concatenation) of features.
However, features like ``recommended item'' and ``user click history'' in recommender systems are highly relevant~\cite{zhou2018din,zhou2019dien}, i.e., there exist collective feature effects toward the final prediction target, such as the click-through rate, namely feature \textbf{co-action}. For example, a female user that has ``bathing suit'' in her click history is likely to click a recommended ``goggle'' due to the co-action of ``bathing suit'' and ``goggle''. Feature co-action can be considered as modeling a sub-graph over a set of raw features. If the sub-graph consists of only two features, then modeling feature co-action is equivalent to modeling the edge between two IDs. The effect of the co-action explains how a set of features correlates with the optimization target.
As shown in Fig.\ref{fig:coaction}, feature co-action explicitly bridges the feature pair $[A, B]$ to the target label.
Several research efforts have been devoted to modeling feature co-action in recent years. These methods can be divided into three categories. Aggregation-based methods~\cite{zhou2018din, zhou2019dien, Li2019MIND, Pi2019MIMN, FengLSWSZY19DSIN} focus on learning how to aggregate a user's historical behavior sequence to obtain a discriminative representation for CTR prediction. These methods use feature co-action to model the weight of each user action in the historical behavior sequence. The weighted user behavior sequence is then sum-pooled to represent user interest. Graph-based methods~\cite{Gori2005GNN,KipfW17GCN,ShiHZY2019HINEmbedding} regard features as nodes connected in a directed or undirected graph. In this case, feature co-action serves as an edge weight for information propagation along edges. Different from aggregation- and graph-based methods, in which feature co-action is modeled as a weight, combinatorial embedding methods~\cite{rendle2010factorization,qu2016product,WangFFW2017DCN} model feature co-action by explicitly combining feature embeddings.
Although previous methods have led to improvements on CTR prediction in different ways, they still have some downsides. Aggregation-based and graph-based methods model feature co-action only through edge weights; the edges are used for information aggregation but not for information augmentation.
Combinatorial embedding methods, on the other hand, combine the embeddings of two features to model feature co-action. For example, PNN~\cite{qu2016product} performs an inner or outer product of two features to augment the input. One major downside of combinatorial embedding methods is that the embeddings take on the responsibility of both representation learning and co-action modeling. The two objectives may conflict with each other and thus bound the performance.
In this paper, we stress the importance of feature co-action modeling and argue that state-of-the-art methods seriously underestimate it. These methods fail to capture the feature co-action due to their limited expressive power.
Capturing feature co-action to augment the input matters because it reduces the difficulty for the model of learning the co-action.
Suppose there exists an optimal function $F_*(A,B)$ that models the co-action between features A and B; the learning difficulty can be substantially alleviated by explicitly providing $F_*(A,B)$ at the input stage.
\begin{figure}[t]
\centering
\includegraphics[width=.8\linewidth]{co-action.pdf}
\caption{A schematic illustration of feature co-action.}
\label{fig:coaction}
\end{figure}
To validate our hypothesis that current approaches fail to fully capture the feature co-action, we revisit state-of-the-art methods and design experiments to show that a simple way of exploring the potential of feature co-action can boost the performance.
For example, if features $A$ and $B$ are selected, then the co-occurrence of $A$ and $B$ is treated as a new feature and fed to the model. We refer to this baseline as the \textbf{cartesian product} model.
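Concretely, the cartesian product baseline can be sketched as follows (the vocabulary sizes and embedding dimension here are hypothetical):

```python
import numpy as np

N_A, N_B, D = 100, 100, 8                  # hypothetical vocabularies and embedding dim
rng = np.random.default_rng(0)
# one dedicated embedding row per (A, B) pair: O(N_A * N_B * D) parameters
pair_table = rng.normal(0.0, 0.01, (N_A * N_B, D))

def cartesian_embedding(a_id, b_id):
    """The pair (a_id, b_id) is treated as a brand-new feature ID;
    different pairs share no parameters, even if they share a feature."""
    return pair_table[a_id * N_B + b_id]
```

This is why the baseline is both powerful (each pair gets its own free parameters) and problematic: the table grows quadratically, and a rare pair's row receives very few gradient updates.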
Although the cartesian product is the most direct way to model co-action, it has serious defects, such as a huge number of parameters and completely independent embeddings that suffer from low feature frequencies during learning. It is therefore surprising that, according to the preliminary experiments in this paper, most state-of-the-art combinatorial embedding methods are clearly outperformed by the cartesian product. We speculate that this is due to the poor expressiveness of these methods and their inability to learn embeddings that balance representation learning and co-action modeling.
To this end, we propose the feature \textbf{C}o-\textbf{A}ction \textbf{N}etwork (\textbf{CAN}), which captures the feature co-action at the input stage and utilizes the mutual and common information of different feature pairs effectively. Instead of directly parameterizing the cartesian product embeddings, CAN parameterizes an embedding generation network. This re-parameterization reduces the additional parameters from $O(N^2 \times D)$ to $O(N\times T)$ ($N$ is the number of features and $D$, $T$ are the dimensions of the parameters, with $D,T \ll N$ and $D < T$) while achieving better performance. Specifically, CAN differentiates the embedding space for representation learning from that for co-action modeling, where the embedding generation network is derived from the co-action embedding space.
In this manner, CAN enriches its expressive power and alleviates the conflict between representation learning and co-action learning. Compared with cartesian product model, CAN reduces the storage and computation consumption significantly thanks to the improved utilization of parameters.
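The re-parameterization idea can be illustrated schematically (this is not the paper's exact architecture, and all sizes are hypothetical): one feature's co-action embedding is reshaped into the weights of a tiny network applied to the other feature's embedding, so pair interactions are generated from $O(N \times T)$ parameters instead of an $O(N^2 \times D)$ pair table.

```python
import numpy as np

N, D, H = 1000, 8, 4               # hypothetical vocab size, input dim, hidden dim
T = D * H + H                      # per-feature parameter count of the induced network
rng = np.random.default_rng(0)
repr_emb = rng.normal(0, 0.01, (N, D))   # embeddings for representation learning
coact_emb = rng.normal(0, 0.01, (N, T))  # separate space for co-action modeling

def co_action(a_id, b_id):
    """Co-action of features a and b: a's co-action embedding parameterizes
    a one-layer network evaluated on b's representation embedding."""
    p = coact_emb[a_id]
    W, bias = p[: D * H].reshape(D, H), p[D * H:]
    return np.tanh(repr_emb[b_id] @ W + bias)    # (H,)-dimensional co-action feature
```

With these sizes the two tables hold $N(T + D) = 44{,}000$ parameters, versus $N^2 D = 8{,}000{,}000$ for a full pair table, while pairs sharing a feature also share parameters.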
The main contributions of this work are summarized as follows:
\begin{itemize}
\item We stress the importance of feature co-action modeling, which is seriously underestimated by state-of-the-art methods. Specifically, we revisit existing methods that model feature co-action. The empirical results show that these methods cannot match the performance of the cartesian product baseline. This reveals that current CTR models do not fully explore the potential of raw feature co-action.
\item Motivated by our observation, we propose a lightweight model, the Co-Action Network (CAN), to model the co-action among raw features. The proposed model can efficiently and effectively capture the feature co-action, improving the model performance while reducing the storage and computation consumption.
\item We conduct extensive experiments on both public datasets and an industrial environment. The consistent superiority validates the efficacy of CAN. CAN has been deployed in the Alibaba display advertisement system, bringing an average 12\% CTR lift and 8\% RPM lift.
\item We present techniques for deploying CAN in an industrial environment. The idea behind CAN of exploiting feature co-action, together with the lessons we learned, generalizes to other setups and is thus of interest to both researchers and industrial practitioners.
\end{itemize}
\input{related}
\input{revisit}
\input{method}
\input{experiments}
\section{Conclusion}
In this paper, we stress the importance of feature co-action modeling, which is underestimated by previous works. Inspired by the cartesian product model, we propose a new feature-cross paradigm using a specially designed network, the Co-Action Network (CAN). CAN disentangles representation learning and co-action modeling via a flexible module, the co-action unit. Moreover, multi-order enhancement and multi-level independence are introduced in the co-action unit to further promote the ability of feature co-action modeling. The experiments show that CAN outperforms previous works and has better generalization ability to new feature combinations. CAN is now deployed in the Alibaba display advertisement system and serves the main traffic.
\bibliographystyle{ACM-Reference-Format}
\balance
\section{Experiments}
In this section, we present the experiments in detail. In Sec.~\ref{sec:setting}, we first introduce the datasets used, including the Amazon, Taobao and Avazu datasets, followed by the baseline methods and the implementation details. The results and discussion are elaborated in Sec.~\ref{sec:res}. Sec.~\ref{sec:ablation} elaborates the ablation studies. Model universality and generalization are presented in Sec.~\ref{sec:general}. Experimental results on industrial data and deployment optimizations are shown in Sec.~\ref{sec:industrial}. Both the public datasets and the experimental code are made available\footnote{https://github.com/CAN-Paper/Co-Action-Network}.
\subsection{Experimental Settings}\label{sec:setting}
\textbf{Dataset.} We experiment with three publicly accessible datasets for the CTR prediction task: Amazon, Taobao and Avazu. The characteristics of these datasets are as follows:
\begin{itemize}
\item \textbf{Amazon dataset\footnote{http://jmcauley.ucsd.edu/data/amazon/}} contains product reviews and metadata from Amazon. Among the 24 product categories, we select the Books subset, which contains 75053 users, 358367 items and 1583 categories. As the Amazon dataset is not originally a CTR prediction dataset, negative samples are not provided. Following previous works \cite{zhou2018din,zhou2019dien,Pi2019MIMN}, we randomly select products not rated by a specific user as negative samples for this user and create the corresponding user behavior sequences (click and non-click). The maximum sequence length is limited to 100.
\item \textbf{Taobao dataset\footnote{https://tianchi.aliyun.com/dataset/dataDetail?dataId=649}} is a collection of user behaviors from Taobao's recommender system. The dataset contains about 1 million users with behaviors including clicking, purchasing, adding items to the shopping cart and favoring items. The click behaviors of each user are taken and sorted by timestamp to construct the user behavior sequence. The maximum sequence length is limited to 200.
\item \textbf{Avazu dataset\footnote{https://www.kaggle.com/c/avazu-ctr-prediction}} is a mobile ad dataset including 11 days (10 days for training and 1 day for testing) of real industrial data provided by Avazu. Whereas for the Amazon and Taobao datasets we model feature co-action based on user behavior sequences, for the Avazu dataset we model feature co-action using discrete features, since Avazu contains various data fields and is thus suitable for verifying the effect of sequential versus non-sequential features on feature co-action modeling. During training, the 10th day is used as the validation set.
\end{itemize}
The dataset statistics are summarized in Tab.~\ref{tab:dataset}.
\begin{table}[h]
\caption{The datasets used in this paper.}
\label{tab:dataset}
\begin{tabular}{l|l|l|l}
\toprule
Dataset & Training & Validation & Feature size \\
\midrule
Amazon (book) &135040 &14976 & 450000 \\
Taobao &691456 &296192 & 5159463 \\
Avazu &36387240 &403793 & 6763060 \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Baselines.}
In this paper, we use DIEN as the base model of CAN. Note that any other model could be used, since the co-action unit is a pluggable module. To verify the effectiveness of our approach, we compare CAN with current approaches focusing on feature interaction. For a fair comparison, DIEN is used as the basis of these approaches.
\begin{itemize}
\item \textbf{DIEN}~\cite{zhou2019dien} designs an interest extractor layer to capture user interests from user behavior sequence. An interest evolving layer is further used to model interest evolving process.
\item \textbf{Cartesian product} forms the set of all ordered feature pairs from two feature sets, with the first element of each pair taken from the first set and the second element from the second set; each combined ID is assigned its own embedding.
\item \textbf{PNN}~\cite{qu2016product} uses a product layer followed by a fully connected layers to explore high-order feature interactions.
\item \textbf{NCF}~\cite{he2017neural} presents a neural network architecture to learn latent features of user and item, which are used to model collaborative filtering using neural networks.
\item \textbf{DeepFM}~\cite{guo2017deepfm} is a neural network architecture that adopts a product layer to combine the power of factorization machines and deep learning for recommendation.
\end{itemize}
\textbf{Implementation details.} We implement CAN using Tensorflow \cite{tensorflow2015-whitepaper}. For $P_{item}$, an eight-layer MLP is used with the weight dimension set to $4 \times 4$, which results in a dimension of $(4 \times 4+4)\times 8 =160$ (bias included). The order of $P_{user}$ is set to 2. The model is trained from scratch and the model parameters are initialized from a Gaussian distribution (with a mean of 0 and a standard deviation of 0.01). We use Adam to optimize the training, with the batch size set to 128 and the learning rate set to 0.001. A three-layer MLP of size $200 \times 100 \times 2$ is used for the final CTR prediction. The commonly used AUC metric is used to evaluate model performance.
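The $P_{item}$ size quoted above can be checked with a few lines of Python; this is only an arithmetic sketch of how the packed embedding dimension is derived, not part of the released implementation:

```python
# Illustrative check of the P_item embedding size quoted above:
# an 8-layer MLP_can whose layers each hold a 4x4 weight matrix
# and a 4-dim bias needs (4*4 + 4) * 8 = 160 parameters per item ID.
def can_param_size(num_layers: int, dim_in: int, dim_out: int) -> int:
    """Total parameters (weights + biases) packed into one P_item vector."""
    per_layer = dim_in * dim_out + dim_out  # weight matrix + bias vector
    return per_layer * num_layers

size = can_param_size(num_layers=8, dim_in=4, dim_out=4)
print(size)  # 160
```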
\begin{table}[t]
\caption{Comparison with other approaches on Amazon book and Taobao datasets.}
\label{tab:result}
\begin{tabular}{l|l|l}
\toprule
Model & Amazon (mean $\pm$ std) & Taobao (mean $\pm$ std) \\
\midrule
DIEN &0.7518 $\pm$ 0.0004 &0.9028 $\pm$ 0.0016 \\
DIEN+Cartesian &0.7608 $\pm$ 0.0005 &0.9091 $\pm$ 0.0012 \\
PNN &0.7589 $\pm$ 0.0002 &0.9072 $\pm$ 0.0014 \\
NCF &0.7536 $\pm$ 0.0005 &0.9064 $\pm$ 0.0023 \\
DeepFM &0.7549 $\pm$ 0.0007 &0.9049 $\pm$ 0.0011 \\
\midrule
CAN &$\bm{0.7690 \pm 0.0011}$ &$\bm{0.9095 \pm 0.0017}$ \\
CAN+Cartesian &$\bm{0.7692 \pm 0.0008}$ &$\bm{0.9163 \pm 0.0013}$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Ablation studies on Amazon book dataset.}
\label{tab:ablation}
\begin{tabular}{l|l}
\toprule
Model & AUC (mean $\pm$ std) \\
\midrule
MLP layers=2, order=1 &0.7656 $\pm$ 0.0008 \\
MLP layers=2, order=2 &0.7666 $\pm$ 0.0012 \\
MLP layers=2, order=3 &0.7669 $\pm$ 0.0020 \\
MLP layers=2, order=4 &0.7647 $\pm$ 0.0014 \\
\midrule
order=2, MLP layers=1 &0.7645 $\pm$ 0.0007 \\
order=2, MLP layers=2 &0.7666 $\pm$ 0.0012 \\
order=2, MLP layers=4 &0.7688 $\pm$ 0.0013 \\
order=2, MLP layers=8 &0.7690 $\pm$ 0.0011 \\
\midrule
CAN w/o activation &0.7649 $\pm$ 0.0008 \\
CAN w/ SeLU &0.7652 $\pm$ 0.0007 \\
CAN w/ Tanh &$\bm{0.7690 \pm 0.0011}$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Results}\label{sec:res}
Tab.~\ref{tab:result} shows the experimental results on the Amazon and Taobao datasets. As can be seen, CAN outperforms the other state-of-the-art approaches on both datasets. Compared to the base model DIEN, CAN improves AUC by 1.7\% and 2.1\%, respectively. Meanwhile, CAN surpasses the other co-action approaches by a large margin, which proves the effectiveness of our approach for co-action modeling.
It is worth noting that, as a pure representation learning method, the cartesian product achieves better performance than combining-embedding methods such as PNN, NCF, and DeepFM. This indicates that although these combining-embedding methods can extract some information from co-action features, they cannot really learn embeddings with both excellent representation and co-action. In contrast, CAN achieves much better results than both the cartesian product and the combining-embedding methods, which means that the network-based mechanism of CAN can learn co-action representations with both representational and collaborative ability.
\subsection{Ablation Study}\label{sec:ablation}
To investigate the effect of each component, we conduct several ablation studies, the results of which are shown in Tab.~\ref{tab:ablation}.
\textbf{Multi-order enhancement.} First, we evaluate the influence of multiple orders. On the basis of the 1st-order term, the 2nd-, 3rd- and 4th-order terms are added gradually. From 1st order to 2nd order, AUC improves substantially. Afterwards, as the order grows, the gain shrinks and can even become negative. Higher orders thus have only a marginal effect on performance, so 2 or 3 order terms are appropriate in practical applications.
\textbf{MLP depth.} Second, we show the influence of the $MLP_{can}$ architecture on co-action modeling. Specifically, we train models with different numbers of MLP layers: 1, 2, 4 and 8. The input and output dimensions of the MLP layers are the same. In general, a deeper MLP leads to higher performance. However, when the number of layers exceeds 4, there is no obvious AUC gain, i.e., the 8-layer MLP only increases AUC by 0.02\%. The main reason is that the training samples are not enough for such a deep architecture.
\textbf{Activation functions.} Third, we compare the influence of different activation functions. As can be seen from the table, non-linearity improves AUC by 0.03--0.41\%. Under the order=2 setting, Tanh performs notably better than SeLU, since Tanh acts as a normalizer that avoids numerical issues at high orders.
\subsection{Model Universality and Generalization}\label{sec:general}
To validate the universality and generalization of CAN, we compare CAN and other approaches from two aspects: validating feature co-action with non-sequential components, and predicting samples whose co-action features are unseen during training.
\textbf{Universality.} Although CAN is designed mainly for real industrial data that contain many behaviour sequences, it also handles non-sequential input. The Avazu dataset contains 24 data fields, among which we select 9 fields to construct 16 kinds of feature combinations. As shown in Tab.~\ref{tab:general1}, CAN outperforms most approaches and is comparable to the cartesian product.
\begin{table}[t]
\caption{Results of different approaches using 16 kinds of feature combinations (DNN excluded) on the Avazu dataset. DNN is used as the base model since the Avazu dataset does not contain sequential features.}
\label{tab:general1}
\begin{tabular}{l|l}
\toprule
Model & AUC (mean $\pm$ std) \\
\midrule
DNN & 0.7854 $\pm$ 0.0008 \\
Cartesian product & 0.8041 $\pm$ 0.0016 \\
PNN & 0.7871 $\pm$ 0.0011 \\
NCF & 0.7865 $\pm$ 0.0015 \\
DeepFM & 0.7862 $\pm$ 0.0014 \\
CAN & $\bm{0.8037 \pm 0.0017}$ \\
CAN+Cartesian & $\bm{0.8120 \pm 0.0016}$ \\
\bottomrule
\end{tabular}
\end{table}
\textbf{Generalization.} In real commercial scenarios, countless feature combinations arise every day, which requires a quick response from CTR models. Generalization is therefore quite important for practical applications. To this end, we remove from the test set all samples containing feature combinations seen during training. In this way, we obtain a new test set whose feature combinations are brand new to a well-trained model. Note that we only require the feature combinations to be zero-shot, not all features. From Tab.~\ref{tab:general2}, the cartesian product is ineffective under this setting because it relies on well-trained co-action embeddings, which are not available here. In contrast, CAN still works well, showing excellent generalization to new feature combinations compared to the other approaches. In real industrial environments, feature combinations are so sparse that it is much easier to handle new feature combinations using CAN, as long as $P_{item}$ and $P_{user}$ are well trained.
\begin{table}[t]
\caption{Results of different approaches handling new feature combinations on the Amazon dataset.}
\label{tab:general2}
\begin{tabular}{l|l}
\toprule
Method & AUC (mean $\pm$ std) \\
\midrule
DIEN & 0.7028 $\pm$ 0.0013 \\
DIEN+Cartesian & 0.7040 $\pm$ 0.0013 \\
NCF & 0.7066 $\pm$ 0.0019 \\
DeepFM & 0.7073 $\pm$ 0.0012 \\
CAN & $\bm{0.7132 \pm 0.0017}$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Results on Industrial Data}\label{sec:industrial}
\textbf{Online Serving and Challenges.} At the very beginning, we deployed the cartesian product model on our system, which caused many problems. On one hand, the model size expanded at an extremely fast rate even with ID frequency filtering. On the other hand, the additional $M \times N$ IDs brought an unacceptable number of embedding look-up operations as well as system response latency. By contrast, CAN is much friendlier in this respect. In order to deploy CAN in our advertisement system, we select 21 features, including 6 ad features and 15 user features, to generate feature combinations, so that 21 extra embedding spaces are allocated due to co-action independence. The significantly increased embedding space still puts heavy pressure on online serving. As the user features are mostly behaviour sequences with lengths of more than 100, additional memory access is required, which causes the response latency to rise. Moreover, the computational cost of feature co-action grows linearly with the number of feature combinations, which also brings considerable response latency to our system.
\textbf{Solution.} To tackle these problems, much effort is devoted to reducing the response latency. We simplify the model in three respects:
\begin{itemize}
\item Sequence cut-off. The lengths of the 15 user features range from 50 to 200. To reduce the memory access cost, we simply apply sequence cut-off to the user features, e.g., all user behaviour sequences of length 200 are reduced to 50, keeping the most recent behaviours. The sequence cut-off improves QPS (Queries Per Second) by 20\% at the cost of a 0.1\% decrease in AUC, which is acceptable.
\item Combination reduction. The 6 ad features and 15 user features can yield up to 90 feature combinations, which is a heavy burden. Empirically, combinations of an ad feature and a user feature of the same type better model feature co-occurrence. Following this principle, we keep combinations such as (``item\_id'', ``item\_click\_history'') and (``category\_id'', ``category\_click\_history'') and remove less relevant combinations. In this way, the number of combinations is reduced from 90 to 48, which brings a 30\% QPS improvement.
\item Computational kernel optimization. The co-action computation involves a time-consuming large matrix multiplication between $P_{item}$ and $P_{user}$, of shape [batch\_size $\times$ K $\times$ dim\_in $\times$ dim\_out] $\times$ [batch\_size $\times$ K $\times$ seq\_len $\times$ dim\_in], where K, seq\_len, dim\_in and dim\_out denote the number of feature co-actions, the length of the user behavior sequence, and the MLP input and output dimensions, respectively. In our case, dim\_in and dim\_out are not common shapes, so such matrix multiplications are not well optimized by BLAS (Basic Linear Algebra Subprograms). To solve this problem, the internal calculation logic is rewritten, which brings a 60\% QPS lift. In addition, as this matrix multiplication is followed by a sum-pooling over the seq\_len dimension, we further fuse the matrix multiplication and sum-pooling kernels. By doing so, the intermediate GPU memory write of the matrix multiplication output is avoided, which brings another 47\% QPS lift.
\end{itemize}
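The fused matmul-and-sum-pooling step above amounts to a single tensor contraction. The numpy sketch below is only illustrative (shapes are toy values and the production kernel is custom GPU code); it shows that the fusion eliminates the intermediate [batch, K, seq\_len, dim\_out] tensor:

```python
import numpy as np

# P_item acts as per-co-action MLP weights: [batch, K, dim_in, dim_out].
# P_user holds per-behavior inputs:          [batch, K, seq_len, dim_in].
batch, K, seq_len, dim_in, dim_out = 2, 3, 5, 4, 4
rng = np.random.default_rng(0)
weights = rng.normal(size=(batch, K, dim_in, dim_out))
inputs = rng.normal(size=(batch, K, seq_len, dim_in))

# Unfused: per-behavior matmul, then sum-pooling over the sequence axis,
# materializing a [batch, K, seq_len, dim_out] intermediate.
unfused = (inputs @ weights).sum(axis=2)          # [batch, K, dim_out]

# Fused: one contraction over seq_len and dim_in; no intermediate tensor.
fused = np.einsum('bksi,bkio->bko', inputs, weights)

assert np.allclose(unfused, fused)
```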
This series of optimizations makes CAN capable of stable online serving on the main traffic. In our system, the CTR prediction step takes 12 ms with CAN, which can handle nearly 1K QPS per GPU. Tab.~\ref{tab:indus} shows the improvement of CAN on CTR and RPM (Revenue Per Mille) in our online A/B test.
\begin{table}[t]
\caption{The CTR and RPM gains in real online advertising system.}
\label{tab:indus}
\begin{tabular}{l|l|l}
\toprule
& CTR & RPM \\
\midrule
Scene1 & +11.4\% & +8.8\% \\
Scene2 & +12.5\% & +7.5\% \\
\bottomrule
\end{tabular}
\end{table}
\section{Co-Action Network}
In order to utilize feature co-action without being constrained by the limitations of the cartesian product and other previous works, we propose the Co-Action Network (CAN) to efficiently capture inter-field interaction. According to the above analysis, previous works do not fully explore the potential of feature co-action. Motivated by the independent coding of feature combinations in the cartesian product, CAN introduces a pluggable module, the co-action unit.
The co-action unit focuses on expanding the parameter space and effectively applying these parameters to model feature co-action.
Specifically, the co-action unit leverages the parameters of one side to construct a Multi-Layer Perceptron (MLP) that is applied to the other side. This feature-cross paradigm brings more flexibility to the model. On one hand, increasing the parameter dimension means expanding the MLP parameters and layers. On the other hand, compared to the cartesian product, which shares no information between different feature combinations involving the same feature, the co-action unit improves parameter utilization, since the MLPs are derived directly from the feature embeddings.
Moreover, to incorporate high-order information into the model, we introduce multi-order enhancement, which explicitly constructs a polynomial input for the co-action unit. The multi-order information promotes the model's non-linearity and helps to better estimate feature co-action.
Besides, multi-level independence, including embedding independence, combination independence and order independence, is proposed to guarantee the learning independence of co-action by broadening the parameter space.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{framework}
\caption{The overall framework of the Co-Action Network. Given target item and user features, the embedding layer encodes the sparse features into dense embeddings. Meanwhile, some features are selected for co-action modeling. Each item feature corresponds to a Multi Layer Perceptron (MLP) through MLP table look up while user features are regarded as input of these MLPs. The output feature co-actions, together with the common feature embeddings, are used to make final CTRs prediction. This figure is best viewed in color. }
\label{fig:framework}
\end{figure*}
\subsection{Architecture Overview} \label{subsec:emb_as_mlp}
The overall architecture of CAN is shown in Fig.~\ref{fig:framework}. The features of the user and the target item are fed into CAN in two manners. In the first manner, all features of the user $x_{user}$ and the target item $x_{item}$ are encoded as dense vectors by the embedding layer, which are then concatenated as $e_{user}$ and $e_{item}$, respectively. In the second manner, part of the features from $x_{user}$ and $x_{item}$ are selected and mapped into the parameters $P_{user}$ and $P_{item}$ for the co-action unit. The operator of the co-action unit is defined as $H(P_{user}, P_{item})$, which plays the role of an MLP whose parameters are taken from $P_{item}$ and whose input is taken from $P_{user}$.
The detailed implementation of the co-action unit is elaborated in Sec.~\ref{subsec:co-action_unit}.
The final structure of Co-Action Network is formulated as:
\begin{align}
\hat{y}=\mathrm{DNN}(e_{item},e_{user},H(x_{user}, x_{item},\Theta_{CAN}),\Theta_{DNN}),
\end{align}
where $\hat{y}$ is the predicted probability of the click behavior, $\Theta_{CAN}$ is the parameter set of the lookup table for the co-action unit, and $\Theta_{DNN}$ is the parameter set of the DNN. The ground truth is denoted as $y \in \{0,1\}$ and we minimize the cross-entropy loss between $\hat{y}$ and the label $y$:
\begin{align}
\min_{\Theta_{CAN},\Theta_{e},\Theta_{DNN}}-y\mathrm{log}(\hat{y})-(1-y)\mathrm{log}(1-\hat{y}),
\end{align}
where $\Theta_{e}$ is the parameter set of feature embedding.
\subsection{Co-Action Unit}
\label{subsec:co-action_unit}
The detailed structure of the co-action unit is shown in the left part of Fig.~\ref{fig:framework}.
$P_{item} \in \mathbb{R}^{M \times T}$ serves as the weights and biases of each layer in $MLP_{can}$, and $P_{user} \in \mathbb{R}^{M \times D}$ is fed into $MLP_{can}$ to output the co-action $H$, where $M$ denotes the number of unique IDs, and $D$ and $T$ are the vector dimensions, with $D < T$. In fact, $P_{user}$ could also serve as the $MLP_{can}$ parameters and $P_{item}$ as the input. Empirically, however, in the advertisement system the candidate items are a small part of all items, so their number is smaller than the number of items in user click histories. Hence we choose $P_{item}$ as the $MLP_{can}$ parameters. The dimension of $P_{user}$ equals the input dimension of $MLP_{can}$, while $P_{item}$ has a higher dimension since it contains the weights and biases.
In the following sections, we denote $P_{item}$ and $P_{user}$ as the parameters of specific item feature ID and user feature ID for simplicity, where $P_{user} \in \mathbb{R}^{D}$ and $P_{item} \in \mathbb{R}^{T}$.
$P_{item}$ is reshaped and split into the weight matrices and bias vectors of all $MLP_{can}$ layers. This process can be formulated as:
\begin{align}
P_{item} &= \mathrm{concatenate}(\{\mathrm{flatten}(w^{(i)}),b^{(i)}\}_{i=0,\dots,K-1}), \\
|P_{item}| &= \sum_{i=0}^{K-1} \left(|w^{(i)}|+|b^{(i)}|\right),
\end{align}
where $w^{(i)}$ and $b^{(i)}$ denote the weight and bias of the $i$-th layer of $MLP_{can}$, respectively, and $|\cdot|$ denotes the number of elements of a matrix or vector. Next, the feature co-action is calculated via:
\begin{align}
h^{(0)} &= P_{user}, \\
h^{(i)} &= \sigma(w^{(i-1)} \otimes h^{(i-1)} + b^{(i-1)}), \quad i \in \{1,2,\dots,K\}, \\
H(P_{user}, P_{item}) &= h^{(K)},
\end{align}
where $\otimes$ and $\sigma$ denote matrix multiplication and the activation function, respectively, and $H$ is the feature co-action defined previously. For sequential features like the user click history, the co-action unit is applied to each item, followed by a sum-pooling over the sequence.
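The forward pass above can be sketched in a few lines of numpy. This is a minimal illustration, assuming every $MLP_{can}$ layer maps dim $\to$ dim so $P_{item}$ splits evenly; the layer count, dimensions and Tanh activation are toy choices, not the production configuration:

```python
import numpy as np

def co_action(p_user, p_item, dim, num_layers, act=np.tanh):
    """H(P_user, P_item): run P_user through an MLP whose weights and
    biases are unpacked from the flat P_item vector."""
    h = p_user                                   # h^(0) = P_user
    per_layer = dim * dim + dim                  # |w^(i)| + |b^(i)|
    assert p_item.size == per_layer * num_layers
    for i in range(num_layers):
        chunk = p_item[i * per_layer:(i + 1) * per_layer]
        w = chunk[:dim * dim].reshape(dim, dim)  # weight of layer i
        b = chunk[dim * dim:]                    # bias of layer i
        h = act(h @ w + b)                       # h^(i+1) = sigma(w h + b)
    return h                                     # H = h^(K)

rng = np.random.default_rng(0)
dim, K = 4, 2
p_user = rng.normal(size=dim)
p_item = rng.normal(size=(dim * dim + dim) * K)
print(co_action(p_user, p_item, dim, K).shape)  # (4,)
```

For a behavior sequence, this function would be applied to each behavior's $P_{user}$ and the outputs sum-pooled.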
Our proposed co-action unit has at least three advantages over other methods. First, unlike previous works that use the same latent vectors in different types of inter-field interactions, the co-action unit exploits the computational power of DNNs and couples the two component features through dynamic parameters and inputs rather than a fixed model, which provides more capacity, guarantees independent updates of the two field features, and avoids coupled gradients. Second, it requires fewer learnable parameters. The final aim of co-action learning is to learn an excellent representation vector for every co-action feature. However, directly learning an embedding for the cartesian product of the component features requires a very large number of parameters. For instance, consider two features, each with $N$ IDs. If we learn the co-action representation through embeddings of their cartesian product, the parameter scale is $O(N^2 \times D)$, where $D$ is the embedding dimension. With the co-action unit, this scale decreases to $O(N \times T)$, where $T$ is the parameter dimension of the co-action unit and is far smaller than $N \times D$. Fewer parameters are not only conducive to learning, but also effectively reduce the burden on the online system. Third, the co-action unit generalizes better to new feature combinations than previous works: given a new feature combination, the co-action unit still works as long as the embeddings of both sides have been trained before.
\subsection{Multi-order Enhancement}
The aforementioned feature co-action is formed upon first-order features. However, feature interaction can also be estimated over higher orders. Although the co-action unit can implicitly learn high-order feature interactions, the learning process is likely to be lengthy. To this end, we explicitly introduce multi-order information into the co-action unit to obtain a polynomial input. This is achieved by applying $MLP_{can}$ to different orders of $P_{user}$:
\begin{align}
\label{eq:orders}
H(P_{user}, P_{item}) &= \sum_{c=1}^C MLP_{can}((P_{user})^c) \\
&\approx MLP_{can}(\sum_{c=1}^C (P_{user})^c),
\end{align}
where $C$ is the number of orders. Note that SeLU is used as the activation function when $C=1$; otherwise, we use Tanh to avoid the numerical problems caused by high-order terms. The multi-order enhancement effectively promotes the model's non-linear fitting ability for co-action modeling without bringing additional computational or storage cost.
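The approximated form moves the sum over orders inside $MLP_{can}$, so only the polynomial input needs to be built. A minimal sketch, with illustrative values for $C$ and the embedding:

```python
import numpy as np

def multi_order_input(p_user, C):
    """Element-wise polynomial sum_{c=1}^{C} (P_user)^c, which is then
    fed through MLP_can once instead of running MLP_can per order."""
    return sum(p_user ** c for c in range(1, C + 1))

p_user = np.array([0.5, -0.2, 0.1])
x = multi_order_input(p_user, C=2)   # p + p^2, element-wise
assert np.allclose(x, p_user + p_user ** 2)
```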
\subsection{Multi-level Independence}
Learning independence is one of the main concerns for co-action modeling. To ensure it, we propose a three-level strategy, ordered by importance:
First level: parameter independence, which is necessary. As mentioned in Sec.~\ref{subsec:emb_as_mlp}, our approach uses separate parameters for representation learning and co-action modeling. Parameter independence is the basis of CAN.
Second level: combination independence, which is recommended. The number of feature co-actions grows linearly with the number of feature combinations. Empirically, the target item features like ``item\_id'' and ``category\_id'' are selected as the weight-side embeddings, while the user features serve as the input side. Since a weight-side embedding can be combined with several input-side embeddings and vice versa, our approach enlarges their dimensions accordingly. Suppose there are $M$ weight-side and $N$ input-side embeddings; we expand the dimension of the weight-side embeddings $N$ times and that of the input-side embeddings $M$ times:
\begin{align}
\label{eq:emb_dim2}
|P_{item}| &= \left(\sum_{i=0}^{K-1} |w^{(i)}|+|b^{(i)}|\right) \times N \\
|P_{user}| &= |x| \times M,
\end{align}
where $|x|$ is the input dimension of the $MLP_{can}$. In the forward pass, these embeddings are divided into several parts to fulfill the MLP operations.
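The expanded dimensions can be checked concretely. The sizes below are assumed illustrative values ($K$ layers of width 4, with $M=6$ weight-side and $N=15$ input-side features as in our deployment), not prescribed settings:

```python
# Illustrative check of the combination-independence dimensions:
# each weight-side embedding stores N copies of the MLP parameters;
# each input-side embedding stores M copies of the MLP input x.
K, dim, M, N = 2, 4, 6, 15
mlp_params = K * (dim * dim + dim)   # sum over layers of |w^(i)| + |b^(i)|
p_item_dim = mlp_params * N          # weight-side embedding size
p_user_dim = dim * M                 # input-side embedding size, |x| = dim
print(p_item_dim, p_user_dim)  # 600 24
```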
Third level: order independence, which is optional. To further improve the flexibility of co-action modeling with multi-order inputs, our approach uses different weight-side embeddings for different orders. The dimension of the weight-side embeddings correspondingly increases $C$ times, similar to Eq.~\ref{eq:emb_dim2}. Note that since $MLP_{can}$ shares no parameters across the different order terms, the approximation in Eq.~\ref{eq:orders} is no longer applicable.
Co-action independence helps co-action modeling but, at the same time, brings additional memory access and computational costs. There is a trade-off between the independence level and the deployment cost. Empirically, the higher the independence level, the more training data the model needs. In our advertisement system, all three levels of independence are used, whereas only embedding independence is used on the public datasets due to the lack of training samples.
\section{Related Work}
Several research efforts have been devoted to modeling feature co-action for CTR prediction. These methods can be divided into three categories: aggregation-based methods, graph-based methods and combinatorial embedding methods. We give a brief introduction to each in the following subsections.
\subsection{Aggregation Based Methods}
Deep click-through rate prediction models generally follow an Embedding \& MLP paradigm. In these methods, large-scale sparse input features, or IDs, are first mapped into low-dimensional embedding vectors and then aggregated into fixed-length vectors in a group-wise manner. The final concatenated vector is fed as input to a multi-layer perceptron (MLP).
A series of works focus on learning how to aggregate features to obtain discriminative representations for CTR prediction. Different neural architectures such as CNNs, RNNs, Transformers and Capsules have been utilized to aggregate features. DIN~\cite{zhou2018din} is one of the representative works; it employs the attention mechanism for feature aggregation, using attention to activate historical behaviors locally w.r.t. the given target item, and successfully captures the diversity of user interest. DIEN~\cite{zhou2019dien} further proposes an auxiliary loss to capture latent interest from historical behaviors. Additionally, DIEN integrates the attention mechanism with a GRU to model the dynamic evolution of user interest for feature aggregation. MIND~\cite{Li2019MIND} argues that a single vector might be insufficient to capture the complicated patterns of users and items; a capsule network and a dynamic routing mechanism are introduced in MIND to learn multiple representations for aggregating raw features. Moreover, inspired by the success of the self-attention architecture in sequence-to-sequence learning~\cite{VaswaniSPUJGKP2017Transformer}, the Transformer is introduced in~\cite{FengLSWSZY19DSIN} for feature aggregation. MIMN~\cite{Pi2019MIMN} proposes a memory-based architecture to aggregate features and tackle the challenge of long-term user interest modeling.
\subsection{Graph Based Methods}
A graph contains nodes and edges, where ID features can be represented by node embeddings and feature co-action can be modeled along edges.
Graph-based methods such as Graph Neural Networks (GNNs)~\cite{Gori2005GNN} conduct feature propagation for each node, aggregating neighborhood information. Feature co-action is modeled as edge weights, which are used for feature propagation that smooths the node embeddings locally along the edges.
\citet{BrunaZSL13SpectralGraph} first propose a spectral graph-based extension of convolutional networks to graphs for feature propagation.
GCN~\cite{KipfW17GCN} further simplifies graph convolutions by stacking layers of first-order Chebyshev polynomial filters with a redefined propagation matrix.
In GCNs, the edges are predefined and the edge weights are one-dimensional real values used to aggregate neighborhood information for modeling feature co-action. \citet{VelickovicCCRLB18GAT} propose graph attention networks (GAT), which learn to assign different edge weights at each intermediate layer. GAT also models feature co-action by edge weights; however, the weights in GAT are a function of the nodes due to the attention mechanism, which makes GAT more effective at modeling feature co-action. There are also some works~\cite{SunHYYW2011PathSim,ShiHZY2019HINEmbedding,ZhaoYLSL2017HINFusion} that exploit meta-paths between different nodes for embedding learning.
Although graph-based methods achieve great success on graph-structured data, feature co-action is modeled only by a one-dimensional weight indicating the strength of a connection, whose expressive power may be insufficient for modeling feature co-action.
\subsection{Combinatorial Embedding Methods}
Combinatorial embedding methods measure feature co-action in terms of combinatorial embeddings. Factorization Machines (FM)~\cite{rendle2010factorization} is a representative method from the era of shallow models. In FM, feature co-action is modeled as the inner product of the latent vectors of features. However, FM uses the same latent vectors in different types of inter-field interactions, which may cause the \textit{coupled gradient} issue and degrade the model capacity~\cite{QuFZTNGYH19PNN}: two supposedly independent features are updated in the same direction during the gradient update. Besides, the representative power of FM is limited by its shallow nature. Inspired by the success of deep learning, recent CTR prediction models have made the transition from traditional shallow approaches to modern deep approaches. DNNs are powerful at modeling non-linear interactions at the bit-wise level; however, feature co-action is then learned in an implicit fashion. Many works have shown that modeling feature co-action explicitly by combining feature embeddings is beneficial to CTR prediction.
Wide\&Deep~\cite{cheng2016wide} manually designs cartesian product features as the input of the ``wide'' module, a generalized linear model, which is combined with a deep neural network to predict the final CTR score.
DeepFM~\cite{guo2017deepfm} uses a factorization machine as the ``wide'' module of Wide\&Deep, with no need to construct cartesian product features manually. \citet{qu2016product} proposes the Product-based Neural Network (PNN), which introduces a product layer to capture feature co-action between inter-field categories. The output of the product layer is fed as input to the following DNN for the final prediction. Deep \& Cross Network (DCN)~\cite{WangFFW2017DCN} applies feature crossing at each layer. Although these methods achieve remarkable performance gains compared with a plain DNN, they still have some limitations. Specifically, the embedding of each ID is responsible for representation learning and co-action modeling simultaneously. The mutual interference between representation learning and co-action modeling might hurt the performance. Consequently, this restriction means that combinatorial embeddings do not make full use of the power of feature co-action.
\section{Revisiting Feature Co-Action for CTR Prediction}
In this section, we first give a brief introduction to feature co-action in CTR prediction. Then we revisit state-of-the-art methods that model feature co-action.
In an advertisement system, the CTR $\hat{y}$ of a user $u$ clicking on an ad $m$ is calculated via:
\begin{equation*}
\hat{y} = \mathrm{DNN}(E(u_1),\dots,E(u_i),E(m_1),\dots,E(m_j)),
\end{equation*}
where $\{u_1,\dots,u_i\}$ is the set of user features, including browsing history, click history, user profile features, etc., and $\{m_1,\dots,m_j\}$ is the set of item features. Here $E(\cdot)\in \mathbb{R}^d$ denotes the embedding function, which maps sparse IDs into learnable dense vectors that serve as the inputs of the DNN. Besides these unary terms, some works model feature interactions as an additional input of the DNN:
\begin{equation*}
\hat{y} = \mathrm{DNN}(E(u_1),\dots,E(u_i),E(m_1),\dots,E(m_j),\{F(m_i,u_j)\}_{i,j}),
\end{equation*}
where $F(m_i,u_j) \in \mathbb{R}^d$ represents the feature interaction between item feature $m_i$ and user feature $u_j$. The incorporation of feature interactions improves the prediction results, which demonstrates that combinations of features from different groups provide additional information. The intuitive reason is that, in the CTR prediction task, some feature combinations have a stronger relationship with the label than the isolated features themselves. Taking user click behavior as an example, there is a strong relationship between the user's click history and the target item that the user may click on, due to the existence of the user's interests. Therefore, the combination of user click history and target item is an effective co-occurrence feature for CTR prediction. We call this kind of feature interaction, which has a strong relationship with the label, the feature co-action.
Carefully revisiting previous DNN-based methods, it can be seen that some deep neural networks capture the interaction among specific features even though they do not use combination features as input. For instance, DIN and DIEN use an attention mechanism to capture the interaction between user behavior features and items. However, the weakness of these methods lies in that they are limited to feature interactions within the user's interest sequence, and all of them operate on feature embedding vectors that have already been compressed into a low-dimensional space, which often loses much of the original information.
The most direct implementation is to learn an embedding vector for each combination feature directly, e.g., via the cartesian product. However, this has some serious defects. The first is the parameter-explosion issue. For instance, consider the cartesian product of two features with cardinalities $M$ and $N$. The parameter space of the cartesian product set expands from $O(M+N)$ to $O(M\times N)$ compared with the original parameter space, which places a great burden on the online system. In addition, there is no information sharing between two combinations that contain the same feature, which also limits the representation ability of the cartesian product.
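A quick back-of-the-envelope sketch of the parameter explosion (the vocabulary sizes and embedding dimension below are hypothetical, chosen only to illustrate the orders of magnitude):

```python
# Parameter count: separate embeddings vs. cartesian-product embeddings.
# M, N: sizes of two feature vocabularies; d: embedding dimension (hypothetical).
M, N, d = 100_000, 50_000, 16

separate = (M + N) * d      # one embedding per original ID: O(M + N)
cartesian = (M * N) * d     # one embedding per ID combination: O(M * N)

print(separate)             # 2400000  (2.4 million parameters)
print(cartesian)            # 80000000000  (80 billion parameters)
```

A single pair of modest vocabularies already pushes the cartesian-product table more than four orders of magnitude beyond the separate embeddings, before even considering multiple feature pairs.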
Some works try to use special network structures to model feature interactions. However, most of these structures treat the representations of different feature groups identically when they interact~\cite{guo2017deepfm, DefferrardBV16CNNSF}.
\section{Introduction}
\label{Intro.3-CPC}
In the analysis of multivariate data, it is frequently desirable to employ statistical methods which are insensitive to the presence of outliers in the sample, and it is therefore important to develop robust statistical procedures. Most statistical procedures involve explicit or implicit prior assumptions about the distribution of the observations, but often without taking into account the effect of outliers. The purpose of this paper is to present a novel robust version of PCA which has some attractive features.
Principal components analysis (PCA) is considered to be one of the most important techniques in statistics. However, the classical version of PCA depends on either a covariance or a correlation matrix, both of which are very sensitive to outliers. We develop an alternative to classical PCA, which is far more robust, by using a multivariate Cauchy likelihood to construct a robust principal components (PC) procedure. It is an adaptation of the classical method of PCA obtained by replacing the Gaussian log-likelihood function by the Cauchy log-likelihood function, in a sense that will be explained in Section \ref{Lik.Inter.PCA}. Although we do not claim that the interpretation of standard PCA in terms of operations on a Gaussian likelihood is new, see Bolton and Krzanowski, this fact does not appear to have been exploited in the development of a robust PCA procedure, as we do in this paper. One reason for using the multivariate Cauchy likelihood is that it has only one maximum point, but the single most important motivation is that it leads to a robust procedure.
In the next section we briefly review some of the techniques employed for estimating parameters and for carrying out a PCA in ways which are robust against the presence of outliers. We also present robustness preliminaries, including the techniques needed to assess whether a given method is robust. In Section \ref{CPCA} we develop Cauchy PCA and theoretically explore its robustness properties. In Section \ref{Comp.Algo} we present the numerical algorithms for computing Cauchy PCs, and also give the results of a number of very high-dimensional real-data and simulated examples. Our approach is seen to be competitive with, and often gives superior results to, the projection pursuit algorithm of Croux et al. (2007, 2013). Finally, we conclude the paper in Section \ref{concl.}.
\subsection{Literature review on robust PCA} \label{Lit.Review}
It is well known that PCA is an important technique for high-dimensional data reduction. PCA is based on the sample covariance matrix $\hat{{\bf \Sigma}}$ and involves searching for linear combinations $y_{j}= {\bf u}^{T}{\bf x}_{j}$ of the components of ${\bf x}$ that maximize the sample variance of $y$. According to \citet{Mardia&Kent&Bibby:1979}, the solution is given by the eigendecomposition
\[\hat{\bf \Sigma}={\bf U \Lambda U}^{T},\]
where ${\bf \Lambda}= \hbox{diag}\{\lambda_{1}, \ldots, \lambda_{p}\}$ and its diagonal elements $\lambda_{i}$ are the sample variances, while ${\bf U}$ is an orthogonal matrix, i.e. ${\bf U U}^{T} ={\bf U}^{T}{\bf U}={\bf I}_{p}$, whose columns ${\bf u}_{i}$ are the corresponding eigenvectors which represent the linear combinations.
In practice, the principal components are efficiently computed via the singular value decomposition (SVD), for which efficient iterative algorithms, such as Lanczos iteration, are available.
Classical PCA, unfortunately, is non-robust, since it is based on the sample covariance or sample correlation matrix, both of which are very sensitive to outlying observations; see Section \ref{NonRob.SPCA}. However, this problem has been handled by two different approaches, which yield robust versions of PCA by:
\begin{description}
\item[i.] replacing the standard covariance or correlation matrix with a robust estimator; or
\item[ii.] maximising (or minimising) a different objective function to obtain a robust PCA.
\end{description}
Many different proposals have been developed to carry out robust PCA, such as using projection pursuit (PP), $M$-estimators, and so on.
Although maximum likelihood estimation is perhaps the most important statistical inference method, it can lead to poor results when the underlying assumptions are not satisfied, for instance when the data contain outliers or deviate slightly from the assumed model. A generalization of maximum likelihood estimation proposed by \citet{Huber:1964}, called $M$-estimation, aims to produce robust statistics by constructing estimators that are resistant to deviations from the underlying assumptions. $M$-estimators were also defined for the multivariate case by \citet{Maronna:1976}.
\citet{Campbell:1980} provided a procedure for robust PCA by examining estimates of means and covariances which are less affected by outlying observations, and by exploring the observations which have a large effect on the estimates. He replaced the sample covariance matrix by an $M$-estimator. \citet{Hubert:2003} introduced a new approach to robust PCA which combines the advantages of two methods: the first is based on replacing the covariance or correlation matrix by a robust estimator, while the second is based on maximizing the objective function for this robust estimate.
A robust PCA based on the projection pursuit (PP) method was developed by \citet{Li:1985}, using Huber's $M$-estimator of dispersion as the projection index. The objective of PP is to seek projections, of the high-dimensional data set onto low-dimensional subspaces, that optimise a function of ``interestingness''. The function to be optimised is called an index or objective function, and its choice depends on the feature the researcher is concerned about. This property gives the PP technique the flexibility to handle many different statistical problems, ranging from clustering to identifying outliers in a multivariate data set.
\citet{Bolton:1999} characterized the PCs for PP in terms of maximum likelihood under the assumption of normality. PCA can be considered a special case of PP, as can many other methods of multivariate analysis. The sample median was used as a projection index to develop a robust PCA by \citet{Xie:1993}, who observed in their simulation studies that the resulting PCA is resistant to outliers and to deviations from the normal distribution.
\cite{croux2007algorithms,croux2013robust} also suggested a robust PCA using projection pursuit, and we will contrast our methodology against their algorithm.
\section{Preliminaries on standard PCA} \label{NonRob.SPCA}
PCA is an orthogonal linear transformation that projects the data onto a new coordinate system according to the variance in each direction. Given a data matrix ${\bf X}\in\mathbb R^{n\times p}$, with each row corresponding to a sample, the first direction ${\bf u}_1$, which maximizes the variance, is defined through
\begin{equation*}
{\bf u}_1 = \argmax_{||{\bf u}||_2=1} ||({\bf X} - {\bf 1}_n\bar{{\bf x}}^T) {\bf u}||_2^2,
\end{equation*}
where ${\bf 1}_n$ is an $n$-dimensional vector whose elements are all set to 1 while $\bar{{\bf x}}=\frac{1}{n}\sum_{i=1}^n {\bf x}_i$ is the empirical mean.
The process is repeated $k$ times and at each iteration the to-be-estimated principal direction is required to be orthogonal to all previously-computed principal directions. Thus, the $k$-th direction is defined by
\begin{equation*}
{\bf u}_k = \argmax_{||{\bf u}||_2=1} ||({\bf X} - {\bf 1}_n\bar{{\bf x}}^T) {\bf u}||_2^2 \ \ \text{subject to} \ \ {\bf u}_k \perp {\bf u}_j \ \text{with} \ j=1,...,k-1 \ .
\end{equation*}
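The definition above can be checked numerically. The following sketch (with synthetic, illustrative data) verifies that the leading right singular vector of the centered data matrix attains the maximal projected variance $||({\bf X} - {\bf 1}_n\bar{{\bf x}}^T){\bf u}||_2^2$:

```python
import numpy as np

# Synthetic data with unequal scales along the coordinate axes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.1])
Xc = X - X.mean(axis=0)                 # X - 1_n xbar^T (centering)

# Leading right singular vector of Xc = first principal direction u_1.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
u1 = Vt[0]

# No random unit vector achieves a larger ||Xc u||^2 than u_1.
best = max(float(np.linalg.norm(Xc @ (u / np.linalg.norm(u))) ** 2)
           for u in rng.normal(size=(2000, 5)))
assert np.linalg.norm(Xc @ u1) ** 2 >= best - 1e-9
```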
\subsection{Non-robustness of standard PCA}
We will show that the influence function for the largest eigenvalue of the covariance matrix and the respective eigenvector are unbounded with respect to the norm of an outlier sample. Suppose that $\boldsymbol{\Sigma}$ is the covariance matrix of a population with distribution function $F$, i.e.,
\begin{equation}\label{Pop.Cov.}
{\boldsymbol{\Sigma}} = \int_{\mathbb R^p} ({\bf x}-\boldsymbol{\mu})({\bf x}-\boldsymbol{\mu})^{T} dF({\bf x}),
\end{equation}
where $\boldsymbol{\mu}=\int_{\mathbb R^p} {\bf x}\, dF({\bf x})$ corresponds to the mean vector. Assume that the leading eigenvalue of $\boldsymbol{\Sigma}$ has multiplicity 1; we denote it by $\lambda$ and the leading eigenvector by $\hat{{\bf u}}$ (i.e., ${\bf u}_{1}=\hat{{\bf u}}$).
Let $T$ be an arbitrary functional, $F$ a distribution and ${\bf z}\in\mathbb R^p$ an arbitrary point in the relevant sample space. The influence function is defined as
\begin{equation}
IF_T({\bf z};F) = \lim_{\epsilon\to 0+} \frac{T((1-\epsilon) F + \epsilon \Delta_{{\bf z}}) - T(F)}{\epsilon},
\end{equation}
where $\Delta_{{\bf z}}$ is a unit point mass located at ${\bf z}$.
The estimator $T$ is robust if its influence function is bounded with respect to the norm of the outlier ${\bf z}$.
\begin{proposition}
The influence function for the leading eigenvector of $\boldsymbol{\Sigma}$ is given by\footnote{We use ${\bf A}^+$ to denote the Moore-Penrose inverse of a matrix $\bf A$.}
\begin{equation}
IF_{\hat{{\bf u}}} ({\bf z}, F) = - \big( ({\bf z}-\boldsymbol{\mu})^{T}\hat{{\bf u}} \big) (\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} ({\bf z}-\boldsymbol{\mu}).
\end{equation}
Similarly, the IF for the largest eigenvalue of ${\boldsymbol{\Sigma}}$ is
\begin{equation}
IF_\lambda ({\bf z}, F) =
\big( ({\bf z}-\boldsymbol{\mu})^{T}\hat{{\bf u}} \big)^2 - \lambda.
\end{equation}
\end{proposition}
The detailed calculations are presented in Appendix \ref{NonRob:PCA:proof}. The following result shows that outliers with unbounded influence function do exist.
\begin{corollary}
Let ${\bf z}=\boldsymbol{\mu} + \gamma \hat{{\bf u}} + \eta {\bf v}$, where ${\bf v}$ is orthogonal to $\hat{{\bf u}}$, ${\bf v}$ does not belong to the null space of $\boldsymbol{\Sigma}$, and $\gamma,\eta\neq 0$. Then
\begin{equation*}
\lim _{{\bf z}: \, ||{\bf z}||_2 \rightarrow \infty}||IF_{\hat{{\bf u}}} ({\bf z}, F)||_2 = \infty,
\end{equation*}
and similarly for $IF_\lambda ({\bf z}, F)$.
\end{corollary}
\begin{proof}
Direct substitution of ${\bf z}$ into the influence function gives:
\begin{equation*}
IF_{\hat{{\bf u}}} ({\bf z}, F) = -((\gamma \hat{{\bf u}} + \eta {\bf v})^T \hat{{\bf u}}) (\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} (\gamma \hat{{\bf u}} + \eta {\bf v})
= - \gamma \eta (\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} {\bf v}.
\end{equation*}
Since ${\bf v}$ does not belong to the null space of $\boldsymbol{\Sigma}$, it holds that $(\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} {\bf v} \neq {\bf 0}$ thus $||(\boldsymbol{\Sigma}-\lambda {\bf I}_p)^{+} {\bf v}||_2=c\neq0$. Hence,
\begin{equation*}
||IF_{\hat{{\bf u}}} ({\bf z}, F)||_2 = |\gamma| |\eta| c.
\end{equation*}
Given that $||{\bf z}||_2^2 = \gamma^2+\eta^2+||\boldsymbol{\mu}||_2^2+2\gamma \boldsymbol{\mu}^T\hat{{\bf u}}+2\eta \boldsymbol{\mu}^T{\bf v}$, either sending $|\gamma|\to\infty$ or $|\eta|\to\infty$ completes the proof.
Similarly,
\begin{equation*}
IF_\lambda ({\bf z}, F) = \gamma^2-\lambda \rightarrow \infty,
\end{equation*}
as $|\gamma|\to\infty$.
\end{proof}
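The eigenvalue formula of the proposition can be checked by a finite-difference computation on a toy population; the choice of $\boldsymbol{\Sigma}$, ${\bf z}$ and contamination level $\epsilon$ below is arbitrary:

```python
import numpy as np

# Check IF_lambda(z, F) = ((z - mu)^T u_hat)^2 - lambda on a toy population
# with mu = 0 and Sigma = diag(4, 1), so u_hat = e_1 and lambda = 4.
Sigma = np.diag([4.0, 1.0])
z = np.array([3.0, 2.0])
eps = 1e-6

# Covariance of the contaminated distribution (1 - eps) F + eps Delta_z:
# mean is eps * z, second moment is (1 - eps) Sigma + eps z z^T.
m = eps * z
S_eps = (1 - eps) * Sigma + eps * np.outer(z, z) - np.outer(m, m)

lam0 = np.max(np.linalg.eigvalsh(Sigma))
lam_eps = np.max(np.linalg.eigvalsh(S_eps))
if_numeric = (lam_eps - lam0) / eps            # finite-difference IF

if_formula = (z @ np.array([1.0, 0.0])) ** 2 - lam0   # = 9 - 4 = 5
assert np.isclose(if_numeric, if_formula, atol=1e-3)
```

Since $IF_\lambda$ grows like $(({\bf z}-\boldsymbol{\mu})^T\hat{{\bf u}})^2$, scaling ${\bf z}$ makes the numeric value blow up, which is precisely the non-robustness established by the corollary.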
\subsection{Generalizations of standard PCA}
\label{Lik.Inter.PCA}
Standard PCA can be viewed as a special case of a more general optimization problem. We present two such generalizations: the first leads to projection pursuit algorithms while the second leads to a maximum likelihood formulation. Let ${\bf u}$ be a unit vector and define the projection values
\begin{equation*}
c_{i}({\bf u}) = {\bf x}^{T}_{i} {\bf u}, {\hspace{3mm}} i=1, \ldots, n,
\end{equation*}
and a function $\Phi:\mathbb R^n \to \mathbb R$ acting on the projected values. The first generalization of PCA is defined as the maximization of $\Phi$:
\begin{equation*}
{\bf u}_1 = \argmax_{||{\bf u}||_2=1} \Phi(c_1({\bf u}),...,c_n({\bf u})) \ .
\end{equation*}
As in standard PCA, subsequent principal directions are obtained after removing the contribution of the current principal component from the data. When $\Phi$ is the sample variance, we recover standard PCA.
The second generalization interprets the computation of the principal component as a maximum likelihood estimation problem. Letting
\begin{equation}\label{GausLogLik}
l_{G}(\mu, \sigma^{2}| c_{1},\ldots, c_{n})= -\frac{n}{2} \log {\sigma}^{2} - \frac{1}{2{\sigma}^{2}}\sum_{i=1}^{n}(c_{i}-\mu)^{2}
\end{equation}
be the Gaussian log-likelihood (up to an additive constant), the first principal direction can be obtained by solving the minimax problem:
\begin{equation*}
\min_{||{\bf u}||_2=1}\max_{\mu, \sigma^2} \ l_{G}(\mu, \sigma^{2}| c_{1}({\bf u}),\ldots, c_{n}({\bf u})).
\end{equation*}
Indeed, the inner maximization can be solved analytically which leads to the optimal solution
\begin{equation*}
\hat{\mu}({\bf u}) = \frac{1}{n} \sum_{i=1}^n c_i({\bf u}) =: \bar{c}({\bf u})
\end{equation*}
and
\begin{equation*}
{\hat{\sigma}}^{2}({\bf u}) = \frac{1}{n} \sum_{i=1}^{n} (c_{i}({\bf u})- \bar{c}({\bf u}))^{2}.
\end{equation*}
Unsurprisingly, the optimal values are the sample mean and the sample variance. Using the above formulas it is straightforward to show that
\begin{eqnarray}
\argmin_{||{\bf u}||_2=1} \ l_{G}\big(\hat{\mu}({\bf u}), {\hat{\sigma}}^{2}({{\bf u}})| c_{1}({{\bf u}}), \ldots, c_{n}({{\bf u}})\big)
= \argmax_{||{\bf u}||_2=1} \ \hat{\sigma}^{2}({{\bf u}}) \ .
\end{eqnarray}
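This equivalence is easy to verify numerically. The sketch below (synthetic data, random search over unit directions) confirms that the direction minimizing the profiled Gaussian log-likelihood is exactly the one maximizing the sample variance of the projections:

```python
import numpy as np

# Synthetic data with distinct scales per coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4)) * np.array([4.0, 2.0, 1.0, 0.5])

def profiled_loglik(u):
    """Gaussian log-likelihood with (mu, sigma^2) profiled out:
    l_G(mu_hat, sigma2_hat) = -n/2 log(sigma2_hat) - n/2."""
    c = X @ u
    s2 = np.var(c)          # hat{sigma}^2(u), the MLE of the variance
    n = len(c)
    return -0.5 * n * np.log(s2) - 0.5 * n

U = rng.normal(size=(1000, 4))
U /= np.linalg.norm(U, axis=1, keepdims=True)   # random unit directions

i_min = int(np.argmin([profiled_loglik(u) for u in U]))
i_max = int(np.argmax([np.var(X @ u) for u in U]))
assert i_min == i_max       # same direction wins both criteria
```

The assertion holds because the profiled log-likelihood is a strictly decreasing function of $\hat{\sigma}^2({\bf u})$, so minimizing one is maximizing the other.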
Variations of PCA can be derived by changing the likelihood function, and in the next section we analyze the case of the Cauchy distribution.
\section{Cauchy PCA}
\label{CPCA}
The Cauchy log-likelihood function is given by
\begin{equation}\label{Cau.LogLik}
l_{C}({\mu},{\sigma}| {c}_{1}({{\bf u}}), \ldots, {c}_{n}({{\bf u}}))= n \log{\frac{\sigma}{\pi}} - \sum_{i=1}^{n} \log \left\{{\sigma}^{2}+ (c_{i}({\bf u})-{\mu})^{2}\right\},
\end{equation}
where $\mu$ and $\sigma$ are the two parameters of the Cauchy distribution. The first Cauchy principal direction is also obtained by solving the minimax optimization problem:
\begin{equation}\label{cauchy:minimax}
\min_{||{\bf u}||_2=1}\max_{\mu,\sigma} \ l_{C}(\mu, \sigma| c_{1}({\bf u}),\ldots, c_{n}({\bf u})).
\end{equation}
In contrast to the Gaussian case, the inner maximization cannot be performed analytically, so an iterative approach must be used. Here, we apply the Newton-Raphson method, initialized at the median and half the interquartile range for the location and scale parameters, respectively. According to \citet{Copas:1975}, although the mean of the Cauchy distribution does not exist and its variance is infinite, the Cauchy log-likelihood function $l_{C}(\mu, \sigma)$ has a unique maximum likelihood estimate $(\hat{\mu},\hat{\sigma})$.
Fixing $\mu$ and $\sigma$, the outer minimization is also non-analytic and a fixed point iteration is applied to calculate ${\bf u}$. The iteration is given by
\begin{equation}
\hat{{\bf u}} = \frac{\hat{{\bf u}}_{un}}{||\hat{{\bf u}}_{un}||_2},
\label{cauchy:norm:eq}
\end{equation}
where $\hat{{\bf u}}_{un}$ is the unnormalized direction which is obtained from the differentiation of the Lagrangian function with respect to ${\bf u}$ and it is given by
\begin{eqnarray} \label{parallel}
\hat{{\bf u}}_{un} = \sum_{i=1}^{n}\frac{({{\bf x}}_i^T{\hat{{\bf u}}}-\hat{\mu}){{\bf x}}_i} {\hat{\sigma}^2 + \left({{\bf x}}_i^T{\hat{{\bf u}}}-\hat{\mu} \right)^2} \ .
\label{fixed:point:eq}
\end{eqnarray}
Once the first principal direction has been computed, its contribution is removed from the dataset ${\bf X}$ and the same procedure is repeated to estimate the next principal direction. This process is repeated $k$ times. The removal of the contribution makes the principal directions orthogonal to each other.
We summarize the estimation of $k$ Cauchy principal components in the following pseudo-code (Algorithm \ref{1CPC:Algo.}).
\begin{algorithm}[H]
\caption{Cauchy PCA}
\label{1CPC:Algo.}
\begin{algorithmic}
\FOR{$j=1,...,k$}
\STATE $\bullet$ Initialize ${\hat{{\bf u}}_{un}}$ and normalize
$\hat{{\bf u}}= \hat{{\bf u}}_{un} / ||\hat{{\bf u}}_{un}||_2$
\WHILE{not converged}
\STATE $\bullet$ Fix $\hat{{\bf u}}$ and set
$$ c_i(\hat{{\bf u}}) = {\bf x}_i^T\hat{{\bf u}}, \ \ i=1,...,n.$$
\STATE $\bullet$ Via Newton-Raphson algorithm find
$$(\hat{\mu},\hat{\sigma})=\argmax_{\mu, \sigma} \ l_C(\mu, \sigma; c_1(\hat{{\bf u}}), \ldots, c_n(\hat{{\bf u}})).$$
\STATE $\bullet$ Fix $(\hat{\mu}, \hat{\sigma})$ and using fixed point iteration (i.e., (\ref{fixed:point:eq}) \& (\ref{cauchy:norm:eq})) find
$$\hat{{\bf u}} = \argmin_{{\bf u}} \ l_C(\hat{\mu}, \hat{\sigma}| c_1({\bf u}), \ldots, c_n({\bf u})) - \lambda (||{\bf u}||_2^2-1)$$
\ENDWHILE
\STATE $\bullet$ Set the $j$-th Cauchy principal direction
$${\bf u}_{j} = \hat{{\bf u}}.$$
\STATE $\bullet$ Remove the contribution from the dataset
\begin{eqnarray*}
{\bf X} = {\bf X} ({\bf I}_p - {\bf u}_{j}{\bf u}^T_{j})
\end{eqnarray*}
\ENDFOR
\end{algorithmic}
\end{algorithm}
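A minimal sketch of the algorithm above, for the first direction only, is given next. Note one deliberate simplification: the inner Newton-Raphson maximization is replaced by the plug-in values suggested above as initializers (the median and half the interquartile range), so this is an approximation of the full procedure rather than a faithful implementation:

```python
import numpy as np

def first_cauchy_direction(X, n_iter=200, seed=0):
    """Approximate first Cauchy principal direction via the fixed-point
    update, with (mu, sigma) taken as median and half-IQR plug-ins
    instead of the exact Cauchy MLE (a simplification)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=X.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        c = X @ u                                   # projections c_i(u)
        mu = np.median(c)                           # location plug-in
        q75, q25 = np.percentile(c, [75, 25])
        sigma = 0.5 * (q75 - q25)                   # scale plug-in
        r = c - mu
        u_un = X.T @ (r / (sigma ** 2 + r ** 2))    # fixed-point update
        u_new = u_un / np.linalg.norm(u_un)         # normalize
        u = u_new if u_new @ u >= 0 else -u_new     # fix sign indeterminacy
    return u

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 1.0])  # dominant axis e_1
u = first_cauchy_direction(X)
assert np.isclose(np.linalg.norm(u), 1.0)
```

On this well-separated synthetic example the returned direction aligns closely with the dominant axis $e_1$; a complete implementation would replace the plug-in step by Newton-Raphson iterations to the Cauchy MLE, as in the algorithm above.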
\subsection{Robustness of the Leading Cauchy Principal Direction}
Let $\boldsymbol{\theta} = \left(\mu,\sigma\right)^T$ be the parameter vector of the Cauchy distribution and consider the infinite-sample normalized Cauchy log-likelihood function
\begin{equation}
l({\bf u}|\boldsymbol{\theta}) = \int_{{\bf x}\in\mathbb R^p} g(c({\bf u}),\boldsymbol{\theta})\, dF({\bf x}),
\end{equation}
where $g(c,\boldsymbol{\theta}) = \log(\sigma/\pi) - \log( \sigma^2+(c-\mu)^2)$ and $c({\bf u})={\bf x}^T{\bf u}$. We will estimate the influence function for the leading Cauchy principal direction
\begin{equation}
\hat{{\bf u}} = \argmin_{||{\bf u}||_2=1} \ l({\bf u}|\boldsymbol{\theta}_F({\bf u})),
\end{equation}
where $\boldsymbol{\theta}_F({\bf u})=\argmax_{\boldsymbol{\theta}} l({\bf u}|\boldsymbol{\theta})$ denotes the optimal Cauchy parameters for a given direction ${\bf u}$.
Since $\hat{{\bf u}}$ is restricted to be a unit vector, the standard first-order condition for the minimum, i.e., $\left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = {\bf 0}$, is not valid. The proper condition is defined by
\begin{equation}
{\bf P}_{\hat{{\bf u}}} \left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = {\bf 0} ,
\end{equation}
where ${\bf P}_{{\bf u}}$ is the projection matrix given by ${\bf P}_{{\bf u}}={\bf I}_p-{\bf u}\bfu^T$.
\begin{Remark}
An equivalent condition is to satisfy ${\bf h}^T \left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = 0$ for all ${\bf h}$ such that ${\bf h}^T\hat{{\bf u}}=0$ and $||{\bf h}||_2=1$. Both derived conditions are essentially a consequence of the Lagrangian formulation of the constrained optimization problem. Indeed, the Lagrange condition implies that at the minimum the direction of the objective function's derivative should be parallel to the direction of the constraint's derivative, which translates to $\left.\frac{\partial}{\partial{\bf u}} l({\bf u}|\boldsymbol{\theta}_F({\bf u}))\right\vert_{{\bf u}=\hat{{\bf u}}} = \lambda \hat{{\bf u}}$ where $\lambda\neq 0$ is the Lagrange multiplier.
\end{Remark}
Let $\bar{g}({\bf x};{\bf u}) = \left.g({\bf x}^T{\bf u}|\boldsymbol{\theta})\right\vert_{\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})}$ be the likelihood function computed at $\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})$ and denote its partial derivatives by
\[
\bar{g}_c({\bf x};{\bf u}) = \left.\frac{\partial}{\partial c} g({\bf x}^T{\bf u}|\boldsymbol{\theta})\right\vert_{\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})}
\]
and
\[
\bar{g}_{\boldsymbol{\theta}}({\bf x};{\bf u}) = \left.\frac{\partial}{\partial \boldsymbol{\theta}} g({\bf x}^T{\bf u}|\boldsymbol{\theta})\right\vert_{\boldsymbol{\theta}=\boldsymbol{\theta}_F({\bf u})}.
\]
Similarly, $\bar{g}_{cc}$, $\bar{g}_{c\boldsymbol{\theta}}$ and $\bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}$ denote the second-order derivatives.
The following proposition establishes the expression for the influence function of the leading Cauchy principal direction, $\hat{{\bf u}}$.
\begin{proposition}\label{influence:func:cauchy:pca}
Under the assumption of ${{\bf I}}_F(\hat{{\bf u}})$ and ${\bf A}$ being invertible matrices, the influence function of $\hat{{\bf u}}$ is
\begin{equation}
IF_{\hat{{\bf u}}} ({\bf z}, F) = {\bf A}^{-1} {\bf b} ,
\end{equation}
where
$$
\begin{aligned}
{\bf A} &= {\bf I}_p \int_{\mathbb R^p} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) {\bf x}^T\hat{{\bf u}} dF({\bf x})
- {\bf P}_{\hat{{\bf u}}} \int_{\mathbb R^p} \bar{g}_{cc}({\bf x};\hat{{\bf u}}) {\bf x}^T{\bf x} dF({\bf x}) {\bf P}_{\hat{{\bf u}}} \\
&+ {\bf P}_{\hat{{\bf u}}} \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) dF({\bf x}) \, {{\bf I}}_F(\hat{{\bf u}})^{-1} \,
\int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta} c}({\bf x};\hat{{\bf u}}) {\bf x}^T dF({\bf x}) {\bf P}_{\hat{{\bf u}}}
\end{aligned}
$$
and
$$
{\bf b} = {\bf b}({\bf z}) = \bar{g}_c({\bf z}; \hat{{\bf u}}) {\bf z} + \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) dF({\bf x}) \,
{{\bf I}}_F(\hat{{\bf u}})^{-1} \, \bar{g}_{\boldsymbol{\theta}}({\bf z};\hat{{\bf u}}),
$$
while
$$
{{\bf I}}_F(\hat{{\bf u}}) = \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}) dF({\bf x})
$$
is the expected Fisher information matrix under $F$ for the parameters of the Cauchy distribution computed at $\hat{{\bf u}}$.
\end{proposition}
\begin{proof}
The proof consists of several straightforward series expansions and implicit function calculations. The complete proof is given in Appendix \ref{robust:cauchy:proof}.
\end{proof}
The following boundedness result for the influence function states the conditions under which Cauchy PCA is robust.
\begin{corollary} \label{boundness} Let the assumptions of the proposition hold.
If ${\bf z}^T\hat{{\bf u}}\neq 0$, or if ${\bf z}^T\hat{{\bf u}}=0$ but $\mu_F(\hat{{\bf u}})=0$, then the influence function for $\hat{{\bf u}}$ is bounded.
\end{corollary}
\begin{proof}
First, observe that matrix ${\bf A}$ does not depend on ${\bf z}$. It is only ${\bf b}$ that depends on ${\bf z}$ and our goal is to prove that ${\bf b}$ is bounded with respect to ${\bf z}$. Second, we have to compute the partial derivatives $\bar{g}_c({\bf z}; \hat{{\bf u}})$ and $\bar{g}_{\boldsymbol{\theta}}({\bf z}; \hat{{\bf u}})$. Straightforward calculations lead to
$$
\bar{g}_c({\bf z}; \hat{{\bf u}}) = - \frac{2({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))}{\sigma_F^2(\hat{{\bf u}})+({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))^2}
$$
$$
\bar{g}_\mu({\bf z}; \hat{{\bf u}}) = \frac{2({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))}{\sigma_F^2(\hat{{\bf u}})+({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))^2}
$$
and
$$
\bar{g}_\sigma({\bf z}; \hat{{\bf u}}) = \frac{1}{\sigma_F(\hat{{\bf u}})} - \frac{2\sigma_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+({\bf z}^T\hat{{\bf u}}-\mu_F(\hat{{\bf u}}))^2}.
$$
Let us now consider an arbitrary scaling of the outlier, ${\bf z}\rightarrow\alpha{\bf z}$, and prove boundedness of ${\bf b}$ as $\alpha\to\infty$. We consider first the case where ${\bf z}\not\perp\hat{{\bf u}}$. It holds that $\lim_{\alpha\to\infty} \bar{g}_c(\alpha{\bf z}; \hat{{\bf u}})\alpha{\bf z} = -2({\bf z}^T\hat{{\bf u}})^{-1}{\bf z}$, $\lim_{\alpha\to\infty} \bar{g}_\mu(\alpha{\bf z}; \hat{{\bf u}}) = 0$ and $\lim_{\alpha\to\infty} \bar{g}_\sigma(\alpha{\bf z}; \hat{{\bf u}}) = \frac{1}{\sigma_F(\hat{{\bf u}})}$, therefore ${\bf b}$ is bounded with respect to $\alpha$.
For the second case, we have
\[
\lim_{\alpha\to\infty} \bar{g}_c(\alpha{\bf z}; \hat{{\bf u}})\alpha{\bf z} = \lim_{\alpha\to\infty} \frac{2\mu_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+\mu_F(\hat{{\bf u}})^2} \alpha{\bf z} = 0,
\]
\[
\lim_{\alpha\to\infty} \bar{g}_\mu(\alpha{\bf z}; \hat{{\bf u}}) = -\frac{2\mu_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+\mu_F(\hat{{\bf u}})^2} = 0
\]
and
\[
\lim_{\alpha\to\infty} \bar{g}_\sigma(\alpha{\bf z}; \hat{{\bf u}}) =
\frac{1}{\sigma_F(\hat{{\bf u}})} - \frac{2\sigma_F(\hat{{\bf u}})}{\sigma_F^2(\hat{{\bf u}})+\mu_F(\hat{{\bf u}})^2}
= -\frac{1}{\sigma_F(\hat{{\bf u}})}
\]
since $\mu_F(\hat{{\bf u}})=0$
by assumption. Thus ${\bf b}$ is bounded with respect to $\alpha$ for the second case, too.
\end{proof}
The only case not covered by the corollary is when ${\bf z}^T\hat{{\bf u}}=0$ and $\mu_F(\hat{{\bf u}})\neq 0$. Our experiments, presented in the following section, show that outliers orthogonal to the Cauchy principal direction do sometimes influence its estimation, but not significantly.
\subsection{Several Cauchy principal components}
We briefly mention possibilities for estimating several Cauchy principal components. There are two obvious approaches. One, the sequential approach, is to repeat the algorithm described above on the subspace orthogonal to $\hat{{\bf u}}_1$, the first Cauchy principal component, to obtain $\hat{{\bf u}}_2$, the second Cauchy principal component; then to repeat the procedure on the subspace orthogonal to $\hat{{\bf u}}_1$ and $\hat{{\bf u}}_2$ to obtain $\hat{{\bf u}}_3$; and so on. A second approach, the simultaneous approach, is to decide in advance how many principal components we wish to determine, $k$ say, and then use a $k$-dimensional multivariate Cauchy likelihood, which has $k + k(k+1)/2$ free parameters, to obtain $\hat{{\bf u}}_1, \ldots , \hat{{\bf u}}_k$. It turns out that these two approaches lead to equivalent results in classical (Gaussian) PCA, but when a Cauchy likelihood is used they produce different sets of principal components. Our current thinking is this: the sequential approach is easier to implement (essentially the same software can be used at each step) and it is faster. However, the simultaneous approach could potentially be preferable if we know in advance how many principal components we wish to estimate. Further investigation is required.
\section{Numerical Results}
\label{Comp.Algo}
\subsection{Simulation studies}
In this section we empirically validate our proposed methodology via simulation studies. We searched for R packages that offer robust PCA in the $n \ll p$ case and found \textit{FastHCS} \citep{fasthcs2018}, \textit{rrcovHD} \citep{rrcovhd2016}, \textit{rpca} \citep{rpca2017} and \textit{pcaPP} \citep{pcapp2018}. Of these, \textit{pcaPP} (projection pursuit PCA) is the only one that does not require hyper-parameter tuning, e.g.\ selection of the LASSO penalty $\lambda$ or of the percentage of observations used to estimate a robust covariance matrix.
\subsubsection{Setup of the simulations}
Initially, we created a $p \times p$ orthonormal basis $\bf B$ by applying a QR decomposition to randomly generated data. We then generated eigenvalues $\lambda_i \sim Exp(0.4)$, $i=1,\ldots,p$, and hence obtained the covariance matrix $\pmb{\Sigma} = {\bf B}\pmb{\Lambda}{\bf B}^T$, where $\pmb{\Lambda} =\text{diag}(\lambda_i)$. The first column of $\bf B$ served as the first ``clean'' eigenvector and was the benchmark in our comparative evaluations. Following this step, we simulated $n$ random vectors ${\bf X} \sim N_p\left({\bf 0}, \pmb{\Sigma} \right)$ and, in order to check the robustness of the results to the center of the data, all observations were shifted by adding $50$ everywhere. A number of outliers equal to $2\%$ of the sample size was then introduced. These outliers were $\bar{\bf x}+e^{\kappa}{\bf z} \in {\mathbb{R}}^{p}$, where $\bar{\bf x}$ is the sample mean vector, ${\bf z}$ is a unit vector and $e^{\kappa}$ a real number denoting their norm; $\kappa$ varied from $3$ up to $8$ in steps of $1$, and the angle between the outliers ${\bf z}$ and the first ``clean'' eigenvector ranged from $0^{\circ}$ up to $90^{\circ}$. In all cases, we subtracted the spatial median or the column-wise median\footnote{The results are very similar for either type of median; we show the results for the column-wise median.} and scaled the data by the mean absolute deviation.
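The setup above can be sketched as follows (the seed, the outlier direction ${\bf z}$ and the value of $\kappa$ are arbitrary choices for illustration; we read $Exp(0.4)$ as rate $0.4$, i.e.\ mean $2.5$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 500

# Orthonormal basis via QR of random data; eigenvalues ~ Exp(rate 0.4).
B, _ = np.linalg.qr(rng.normal(size=(p, p)))
lam = rng.exponential(scale=1 / 0.4, size=p)
Sigma = B @ np.diag(lam) @ B.T
clean_eigvec = B[:, 0]                       # benchmark "clean" eigenvector

# n Gaussian samples, shifted by +50 everywhere.
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n) + 50

# Add outliers: 2% of n points at xbar + e^kappa * z, with ||z|| = 1.
kappa = 5
z = rng.normal(size=p)
z /= np.linalg.norm(z)
n_out = max(1, int(0.02 * n))
outlier = X.mean(axis=0) + np.exp(kappa) * z
X = np.vstack([X, np.tile(outlier, (n_out, 1))])

# Robust centering (column-wise median) and scaling (mean absolute deviation).
med = np.median(X, axis=0)
Xs = (X - med) / np.mean(np.abs(X - med), axis=0)
assert Xs.shape == (n + n_out, p)
```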
In each case, we computed the first Cauchy-PCA eigenvector and the first PP-PCA eigenvector. The performance metric is the angle (in degrees) between the first robust (Cauchy or PP-PCA) eigenvector and the first ``clean'' eigenvector computed via classical PCA. All experiments were repeated $100$ times and the results were averaged.
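The setup just described can be sketched in NumPy (an illustrative translation, not the authors' code; we additionally sort the generated eigenvalues so that the first column of ${\bf B}$ indeed carries the leading one, and the values of $n$, $p$, $\kappa$ and $\phi$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 200  # illustrative sizes; the paper uses e.g. n=100, p=500

# Orthonormal basis B from the QR decomposition of random data;
# its first column plays the role of the first "clean" eigenvector.
B, _ = np.linalg.qr(rng.standard_normal((p, p)))
lam = np.sort(rng.exponential(scale=1 / 0.4, size=p))[::-1]  # Exp(0.4), leading first
Sigma = (B * lam) @ B.T                                      # Sigma = B diag(lam) B^T
u_clean = B[:, 0]

# n Gaussian observations, shifted by 50 to check location robustness
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n) + 50

# 2% outliers at norm e^kappa along a unit vector z at angle phi to u_clean
kappa, phi = 5, np.deg2rad(60)
v = rng.standard_normal(p)
v -= (v @ u_clean) * u_clean                # component orthogonal to u_clean
v /= np.linalg.norm(v)
z = np.cos(phi) * u_clean + np.sin(phi) * v
n_out = max(1, int(0.02 * n))
X_out = np.vstack([X, X.mean(axis=0) + np.exp(kappa) * z[None, :].repeat(n_out, axis=0)])

# Column-wise median centering and mean-absolute-deviation scaling
Xc = X_out - np.median(X_out, axis=0)
Xc /= np.mean(np.abs(Xc), axis=0)

def angle_deg(a, b):
    """Angle (degrees) between two directions, ignoring sign."""
    c = abs(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```

The performance metric of the experiments is then `angle_deg` evaluated between a robust first eigenvector of `Xc` and `u_clean`.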
\subsubsection{Comparative results}
Tables \ref{tab100_500}-\ref{tab500_1000} present the performance of the first Cauchy-PCA eigenvector and of the first PP-PCA eigenvector for a variety of norms of the outlier, with different angles ($\phi$) between the outlier and the leading true eigenvector, for the $n<p$ case.
The case of $n<p$ was selected because statistical inference is more challenging here than in the $p<n$ case\footnote{In this paper we focus on high-dimensional simulations and real-data examples ($p>n$), but in results not presented here we found that Cauchy PCA is also very competitive and performs strongly in low-dimensional settings ($p<n$).}. Additionally, this case is ordinarily met in the field of bioinformatics, where -omics data comprise tens of thousands of variables (genes, single nucleotide polymorphisms, etc.) but only tens or at most hundreds of observations.
As observed in Tables \ref{tab100_500}-\ref{tab500_1000}, the average angular difference between Cauchy PCA and PP PCA ranges from $20^{\circ}$ up to more than $50^{\circ}$, which is quite substantial and provides evidence that Cauchy PCA performs in a superior manner to the projection pursuit method of Croux et al. (2007, 2013). In particular, the tables demonstrate that Cauchy PCA is less error prone than its competitor and, as seen in Table \ref{tab500_1000}, the error of both methods decreases with increasing sample size. Further, the mean angular difference between the two methods increases as the angle $\phi$ increases: for instance, in Table \ref{tab100_500}, when $k=8$ and $\phi=0^{\circ}$ the difference between the two methods is $20^{\circ}$, whereas when $\phi=90^{\circ}$ it increases to $48^{\circ}$. The error of Cauchy PCA itself is not highly affected by the angle $\phi$ or by the norm of the outliers: in Tables \ref{tab100_1000} and \ref{tab500_1000}, in the special case of $\phi=90^{\circ}$, its error increases by only $2^{\circ}-3^{\circ}$, corroborating the result of Corollary \ref{boundness}. This effect, as in Table \ref{tab100_500}, is noticeable but rather small.
\begin{table}
\caption{Mean angular difference between the robust eigenvectors computed in the contaminated data and the sample eigenvector computed in the clean data when $n=100$ and $p=500$. The norm of the outliers is $e^{k}$ and their angle with the true clean eigenvector is denoted by $\phi$.}
\label{tab100_500}
\begin{tabular}{ll|rrrrrrr}
\hline
Angle & Method & k=-Inf & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline
$\phi=0^{\circ}$ & Cauchy & 31.17 & 29.79 & 29.54 & 28.83 & 28.86 & 29.24 & 28.78 \\
& PP & 82.45 & 49.91 & 48.84 & 48.22 & 49.08 & 49.61 & 48.14 \\ \hline
$\phi=30^{\circ}$ & Cauchy & 31.44 & 29.24 & 29.13 & 28.60 & 28.89 & 29.34 & 29.65 \\
& PP & 82.45 & 65.28 & 65.34 & 63.42 & 62.96 & 66.63 & 65.43 \\ \hline
$\phi=60^{\circ}$ & Cauchy & 31.49 & 29.86 & 29.07 & 29.04 & 29.55 & 29.70 & 29.09 \\
& PP & 82.11 & 81.11 & 82.55 & 82.63 & 82.12 & 82.49 & 82.03 \\ \hline
$\phi=90^{\circ}$ & Cauchy & 32.32 & 31.67 & 33.00 & 33.13 & 32.86 & 33.19 & 33.06 \\
& PP & 82.38 & 82.06 & 81.69 & 82.12 & 81.73 & 81.74 & 81.88 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{Mean angular difference between the robust eigenvectors computed in the contaminated data and the sample eigenvector computed in the clean data when $n=100$ and $p=1000$. The norm of the outliers is $e^{k}$ and their angle with the true clean eigenvector is denoted by $\phi$.}
\label{tab100_1000}
\begin{tabular}{ll|rrrrrrr}
\hline
Angle & Method & k=-Inf & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline
$\phi=0^{\circ}$ & Cauchy & 36.53 & 33.12 & 33.60 & 33.69 & 32.62 & 32.51 & 33.16 \\
& PP & 83.06 & 80.36 & 80.17 & 81.87 & 80.50 & 80.76 & 80.16 \\ \hline
$\phi=30^{\circ}$ & Cauchy & 36.55 & 34.72 & 33.91 & 33.09 & 33.11 & 33.16 & 32.79 \\
& PP & 83.07 & 82.36 & 82.76 & 82.65 & 83.07 & 82.93 & 83.12 \\ \hline
$\phi=60^{\circ}$ & Cauchy & 36.42 & 34.46 & 33.96 & 33.61 & 34.41 & 33.07 & 33.47 \\
& PP & 83.78 & 82.86 & 82.71 & 84.05 & 83.46 & 82.71 & 82.78 \\ \hline
$\phi=90^{\circ}$ & Cauchy & 36.50 & 36.12 & 36.81 & 37.18 & 39.34 & 39.11 & 38.51 \\
& PP & 83.63 & 83.73 & 83.69 & 83.65 & 84.03 & 83.66 & 83.00 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\caption{Mean angular difference between the robust eigenvectors computed in the contaminated data and the sample eigenvector computed in the clean data when $n=500$ and $p=1000$. The norm of the outliers is $e^{k}$ and their angle with the true clean eigenvector is denoted by $\phi$.}
\label{tab500_1000}
\begin{tabular}{ll|rrrrrrr}
\hline
Angle & Method & k=-Inf & k=3 & k=4 & k=5 & k=6 & k=7 & k=8 \\ \hline
$\phi=0^{\circ}$ & Cauchy & 19.95 & 18.60 & 18.46 & 18.35 & 18.24 & 18.20 & 17.93 \\
& PP & 68.76 & 26.08 & 24.93 & 24.91 & 24.83 & 24.73 & 24.72 \\ \hline
$\phi=30^{\circ}$ & Cauchy & 19.43 & 18.30 & 18.39 & 18.22 & 18.16 & 18.01 & 18.13 \\
& PP & 68.98 & 39.72 & 38.88 & 38.44 & 38.20 & 38.15 & 38.14 \\ \hline
$\phi=60^{\circ}$ & Cauchy & 19.76 & 18.60 & 18.12 & 18.20 & 18.40 & 18.19 & 18.01 \\
& PP & 69.10 & 64.10 & 63.12 & 62.89 & 62.91 & 62.82 & 62.77 \\ \hline
$\phi=90^{\circ}$ & Cauchy & 19.49 & 19.84 & 20.16 & 21.87 & 22.41 & 22.87 & 22.84 \\
& PP & 68.99 & 68.62 & 68.59 & 68.70 & 68.45 & 68.73 & 68.43 \\ \hline
\end{tabular}
\end{table}
\subsection{High dimensional real datasets}
Two real gene expression datasets, GSE13159 and GSE31161\footnote{From a biological standpoint, the data have already been uniformly pre-processed, curated and automatically annotated.}, downloaded from the \href{dataome.mensxmachina.org}{Biodataome} platform \citep{lakiotaki2018}, were used in the experiments. The dimensions of the datasets were $2,096 \times 54,630$ and $1,035 \times 54,675$, respectively. We randomly selected $5,000$ variables and detected the outliers using the high-dimensional Minimum Covariance Determinant (MCD) of \cite{ro2015}. In accordance with the simulation studies, we removed the $2\%$ most extreme outliers detected by MCD and computed the first classical PC (the benchmark eigenvector), the first Cauchy-PCA eigenvector and the first PP-PCA eigenvector of the ``clean'' data. We then added those outliers back, with their norm increased by $e^k$ for $k=(0, 3, 4, \ldots, 8)$, and computed the first Cauchy-PCA eigenvector and the first PP-PCA eigenvector. In all cases, we subtracted the spatial median or the column-wise median and scaled the data by the mean absolute deviation. The performance metrics are the angle (in degrees) between the first robust (Cauchy or PP-PCA) eigenvector and the first true ``clean'' eigenvector, and the time required by each method. This procedure was repeated $200$ times and the average results are graphically displayed in Figures \ref{gse}(a)-(d).
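The norm-inflation step applied to the detected outliers can be sketched as follows (a hypothetical helper reflecting our reading of the procedure; the name \texttt{inflate\_outliers} and the use of the column-wise median of the clean rows as the center are our own assumptions, not part of the original pipeline):

```python
import numpy as np

def inflate_outliers(X, outlier_idx, k):
    """Push the flagged rows of X away from the bulk by a factor e^k.

    Hypothetical sketch: the deviation of each flagged observation from
    the column-wise median of the remaining (clean) rows is rescaled so
    that its norm is multiplied by e^k.
    """
    X = np.asarray(X, dtype=float).copy()
    clean = np.delete(X, outlier_idx, axis=0)
    center = np.median(clean, axis=0)
    X[outlier_idx] = center + np.exp(k) * (X[outlier_idx] - center)
    return X
```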
Broadly speaking, the performance of PP PCA does not seem to be affected substantially by the centering method, i.e. subtraction of the spatial or the column-wise median. On the contrary, Cauchy PCA is affected by the type of median employed. Centering with the spatial median yields high error levels for all norms of the outliers, for both datasets, whereas centering with the column-wise median produces much lower error levels. On average, the difference in error between Cauchy PCA and PP PCA is about $30^{\circ}$ for the GSE13159 dataset (Figure \ref{gse}(a)) and about $14^{\circ}$ for the GSE31161 dataset (Figure \ref{gse}(b)). However, the error of Cauchy PCA increases and then stabilizes in the GSE13159 dataset, whereas the error of PP PCA is stable regardless of the norm of the outliers. A different conclusion is drawn for GSE31161, where the error of either method decreases as the norm of the outliers increases, until it reaches a plateau.
With regard to computational efficiency, PP PCA is not affected by either centering method, whereas Cauchy PCA seems to be affected in the GSE13159 dataset but not in the GSE31161 dataset, as seen in Figures \ref{gse}(c) and \ref{gse}(d). Cauchy PCA centered with the column-wise median is, on average, 5 times faster than PP PCA.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
GSE13159 & GSE31161 \\
\includegraphics[scale = 0.4, trim = 30 0 0 0]{gse13159.png} &
\includegraphics[scale = 0.4, trim = 30 0 0 0]{gse31161.png} \\
(a) & (b) \\
\includegraphics[scale = 0.4, trim = 30 0 0 0]{gse13159_time.png} &
\includegraphics[scale = 0.4, trim = 30 0 0 0]{gse31161_time.png} \\
(c) & (d)
\end{tabular}
\caption{The first row presents the angle between the first Cauchy PC of the ``contaminated'' data and the first leading eigenvector of the ``clean'' data, and the corresponding angle for the first Projection Pursuit PC, for increasing norms of the outliers. The second row contains the computation time in seconds.}
\label{gse}
\end{figure}
\section{Conclusion}\label{concl.}
The starting point for this paper is the observation that classical PCA can be formulated purely in terms of operations on a Gaussian likelihood. Although this observation is not new, the specifics of this formulation of classical PCA do not appear to be as widely known as might be expected. The novel idea underlying this paper is to formulate a version of PCA in which a Cauchy likelihood is used instead of a Gaussian likelihood, leading to what we call Cauchy PCA. Study of the resulting influence functions shows that Cauchy PCA has very good robustness properties. Moreover, we have provided an implementation of Cauchy PCA which runs quickly and reliably. Numerous simulation and real-data examples, mainly in high-dimensional settings, show that Cauchy PCA typically outperforms alternative robust versions of PCA whose implementation is in the public domain.
\clearpage
\section*{Appendix}
\setcounter{section}{0}
\renewcommand{\thesubsection}{A\arabic{subsection}}
\subsection{Proof of Proposition 2.1}\label{NonRob:PCA:proof}
\begin{proof}
The perturbed distribution $(1-\epsilon)F({\bf x}) + \epsilon\Delta_{\bf z}({\bf x})$ has perturbed mean value
\begin{equation*}
\boldsymbol{\mu}_\epsilon = \boldsymbol{\mu} + \epsilon ({\bf z}-\boldsymbol{\mu})
\end{equation*}
and perturbed covariance matrix
\begin{equation*}
\boldsymbol{\Sigma}_\epsilon = \boldsymbol{\Sigma} + \epsilon (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}) + \epsilon^2 ({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T
\end{equation*}
Denoting by $\lambda_\epsilon$ the leading eigenvalue of $\boldsymbol{\Sigma}_\epsilon$ and by ${\bf u}_\epsilon$ the corresponding eigenvector, it holds that
\begin{equation}\label{perturbed:eigen:eq}
\boldsymbol{\Sigma}_\epsilon{\bf u}_\epsilon = \lambda_\epsilon {\bf u}_\epsilon \ \ \text{and} \ \ {\bf u}_\epsilon^{T}{\bf u}_\epsilon = 1 \ .
\end{equation}
Next, we expand the perturbed eigenvector and eigenvalue around the unperturbed ones as follows:
\begin{equation*}
{\bf u}_\epsilon = {\bf u}_0 + \epsilon {\bf u}_1 + O(\epsilon^2)
\end{equation*}
and
\begin{equation*}
\lambda_\epsilon = \lambda_0 + \epsilon \lambda_1 + O(\epsilon^2)
\end{equation*}
with
\begin{equation*}
\boldsymbol{\Sigma}{\bf u}_0 = \lambda_0 {\bf u}_0 \ \ \text{and} \ \ {\bf u}_0^{T}{\bf u}_0 = 1 \ .
\end{equation*}
Substituting these expansions into (\ref{perturbed:eigen:eq}), and equating the zeroth- and first-order terms, we get
\begin{equation*}
\boldsymbol{\Sigma}{\bf u}_0 = \lambda_0 {\bf u}_0 \ \ \text{and} \ \ {\bf u}_0^{T}{\bf u}_0 = 1 \ .
\end{equation*}
and
\begin{equation}\label{1st:order:eq}
(({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0 + \boldsymbol{\Sigma}{\bf u}_1 = \lambda_0{\bf u}_1 + \lambda_1{\bf u}_0
\end{equation}
and
\begin{equation*}
{\bf u}_0^{T}{\bf u}_1 = 0 \ .
\end{equation*}
Multiplying (\ref{1st:order:eq}) from the left with ${\bf u}_0^T$, we get
\begin{equation*}
\lambda_1 = {\bf u}_0^T (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0 + {\bf u}_0^T\boldsymbol{\Sigma}{\bf u}_1
= ({\bf u}_0^T ({\bf z}-\boldsymbol{\mu}))^2 - \lambda_0
\end{equation*}
For ${\bf u}_1$, we rearrange (\ref{1st:order:eq}) to
\begin{equation*}
(\boldsymbol{\Sigma}-\lambda_0{\bf I}){\bf u}_1 = \lambda_1{\bf u}_0 - (({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0
\end{equation*}
and then multiply from the left with the pseudo-inverse of $\boldsymbol{\Sigma}-\lambda_0{\bf I}$ to obtain
\begin{equation*}
(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+(\boldsymbol{\Sigma}-\lambda_0{\bf I}){\bf u}_1 =
\lambda_1(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+{\bf u}_0 - (\boldsymbol{\Sigma}-\lambda_0{\bf I})^+(({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T-\boldsymbol{\Sigma}){\bf u}_0
\end{equation*}
Using the properties \citep{Mardia&Kent&Bibby:1979}: $(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+(\boldsymbol{\Sigma}-\lambda_0{\bf I}) = {\bf I} - {\bf u}_0{\bf u}_0^T$ and $(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+{\bf u}_0 = {\bf 0}$, together with ${\bf u}_0^{T}{\bf u}_1 = 0$, we obtain
\begin{equation*}
\begin{aligned}
&{\bf u}_1 - {\bf u}_0{\bf u}_0^T{\bf u}_1 = -(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+({\bf z}-\boldsymbol{\mu})({\bf z}-\boldsymbol{\mu})^T{\bf u}_0 + \lambda_0 (\boldsymbol{\Sigma}-\lambda_0{\bf I})^+ {\bf u}_0 \\
\Rightarrow & {\bf u}_1 = -(({\bf z}-\boldsymbol{\mu})^T{\bf u}_0)(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+({\bf z}-\boldsymbol{\mu})
\end{aligned}
\end{equation*}
and the proof is completed.
\end{proof}
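The first-order formulas can be verified numerically against a finite-difference computation on the $\epsilon$-perturbed covariance matrix (an illustrative NumPy check; the matrix $\boldsymbol{\Sigma}$, the point ${\bf z}$ and the step $\epsilon$ below are arbitrary choices, and the expressions $\lambda_1 = (({\bf z}-\boldsymbol{\mu})^T{\bf u}_0)^2-\lambda_0$ and ${\bf u}_1=-(({\bf z}-\boldsymbol{\mu})^T{\bf u}_0)(\boldsymbol{\Sigma}-\lambda_0{\bf I})^+({\bf z}-\boldsymbol{\mu})$ are those of standard matrix perturbation theory):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 5

# A covariance matrix with well-separated, known eigenvalues
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
lam = np.array([10.0, 6.0, 4.0, 2.0, 1.0])
Sigma = (Q * lam) @ Q.T
mu = rng.standard_normal(p)
z = mu + 3.0 * rng.standard_normal(p)        # arbitrary contamination point

w, U = np.linalg.eigh(Sigma)                 # eigenvalues in ascending order
lam0, u0 = w[-1], U[:, -1]                   # leading eigenpair

# First-order terms predicted by the perturbation expansion
d = z - mu
lam1_pred = (d @ u0) ** 2 - lam0
u1_pred = -(d @ u0) * (np.linalg.pinv(Sigma - lam0 * np.eye(p), rcond=1e-10) @ d)

# Finite-difference check on the eps-perturbed covariance matrix
eps = 1e-7
Sigma_eps = Sigma + eps * (np.outer(d, d) - Sigma) + eps**2 * np.outer(d, d)
w_e, U_e = np.linalg.eigh(Sigma_eps)
u_eps = U_e[:, -1] * np.sign(U_e[:, -1] @ u0)   # resolve the sign ambiguity
lam1_fd = (w_e[-1] - lam0) / eps
u1_fd = (u_eps - u0) / eps
```

Here `lam1_fd` and `u1_fd` agree with `lam1_pred` and `u1_pred` up to the $O(\epsilon)$ truncation error of the expansion.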
\subsection{Proof of Proposition \ref{influence:func:cauchy:pca}}
\label{robust:cauchy:proof}
Let us first make the notation more explicit: denote by $l_F({\bf u}|\boldsymbol{\theta})$ the Cauchy log-likelihood function with respect to the distribution $F$ and by $\hat{{\bf u}}_F$ the corresponding leading Cauchy principal direction. Our goal is then to calculate the limit of
$$
\frac{1}{\epsilon} (\hat{{\bf u}}_{F_{\epsilon,{\bf z}}} - \hat{{\bf u}}_F)
$$
as $\epsilon\to 0$ where $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is the leading Cauchy principal direction for the distribution $F_{\epsilon,{\bf z}}=(1-\epsilon)F+\epsilon \Delta_{\bf z}$. The optimality condition for the leading Cauchy principal direction reads
\begin{equation}
{\bf P}_{\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} \left. \frac{\partial}{\partial{\bf u}} l_{F_{\epsilon,{\bf z}}}\big({\bf u}|\boldsymbol{\theta}_{F_{\epsilon,{\bf z}}}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} = 0
\label{opt:cond:perturbed}
\end{equation}
and
$$
{\bf P}_{\hat{{\bf u}}_{F}} \left. \frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F}} = 0
$$
Moreover, $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is a unit vector which can be represented as
$$
\hat{{\bf u}}_{F_{\epsilon,{\bf z}}} = \cos(\rho) \hat{{\bf u}}_F + \sin(\rho) {\bf h}
$$
where ${\bf h}$ is a unit vector perpendicular to $\hat{{\bf u}}_F$ and $\rho$ is a (small) real number. Under these assumptions, $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}$ is a unit vector since
$$
||\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}||_2^2 = \cos^2(\rho) ||\hat{{\bf u}}_F||_2^2 + \sin^2(\rho) ||{\bf h}||_2^2 = 1
$$
Obviously, $\rho$ depends on $\epsilon$ and ${\bf z}$ (i.e., $\rho=\rho(\epsilon,{\bf z})$) and $\lim_{\epsilon\to 0} \rho = 0$, but we suppress this dependence from the notation because the explicit relationship is not required in the proof. Moreover, a Taylor expansion of the representation leads to
$$
\hat{{\bf u}}_{F_{\epsilon,{\bf z}}} = \hat{{\bf u}}_F + \rho {\bf h} + O(\rho^2)
$$
thus we obtain that
$$
{\bf P}_{\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}} = {\bf P}_{\hat{{\bf u}}_{F}} - \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) + O(\rho^2)
$$
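Assuming ${\bf P}_{\bf v}$ denotes the projector ${\bf I}-{\bf v}{\bf v}^T$ onto the orthogonal complement of a unit vector ${\bf v}$ (consistent with the expansion used here), this first-order identity is easy to verify numerically (an illustrative check with arbitrary $p$, ${\bf u}$ and ${\bf h}$):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
u = rng.standard_normal(p); u /= np.linalg.norm(u)
h = rng.standard_normal(p)
h -= (h @ u) * u; h /= np.linalg.norm(h)     # unit vector perpendicular to u

def proj_complement(v):
    """P_v = I - v v^T, the projector onto the orthogonal complement of v."""
    return np.eye(len(v)) - np.outer(v, v)

rho = 1e-5
u_rho = np.cos(rho) * u + np.sin(rho) * h    # perturbed unit vector
lhs = proj_complement(u_rho)
rhs = proj_complement(u) - rho * (np.outer(u, h) + np.outer(h, u))
# lhs and rhs agree up to an O(rho^2) residual
```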
Next, we compute the partial derivative using the chain rule
$$
\frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) =
\int_{\mathbb R^p} \left[ \frac{\partial}{\partial c} g(c({\bf u}), \boldsymbol{\theta}_{F}({\bf u})) \frac{\partial}{\partial {\bf u}} c({\bf u})
+ \frac{\partial}{\partial \boldsymbol{\theta}} g(c({\bf u}), \boldsymbol{\theta}_{F}({\bf u})) \frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) \right] dF({\bf x})
$$
Therefore,
\begin{equation*}
\begin{aligned}
\left. \frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F}} &=
\int_{\mathbb R^p} \left[ \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x}
+ \bar{g}_{\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F})
\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) \Big|_{{\bf u}=\hat{{\bf u}}_{F}} \right] dF({\bf x}) \\
&= \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
+ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x})
\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) \Big|_{{\bf u}=\hat{{\bf u}}_{F}} \\
&= \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
\end{aligned}
\end{equation*}
The second summand equals zero because $\hat{{\bf u}}_{F}$ maximizes the Cauchy log-likelihood function, so that $\int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x}) = {\bf 0}$. Similarly,
\begin{equation*}
\begin{aligned}
&\left. \frac{\partial}{\partial{\bf u}} l_{F_{\epsilon,{\bf z}}}\big({\bf u}|\boldsymbol{\theta}_{F_{\epsilon,{\bf z}}}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}}
= \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF_{\epsilon,{\bf z}} ({\bf x}) \\
=& (1-\epsilon) \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF({\bf x})
+ \epsilon \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z}
\end{aligned}
\end{equation*}
Next, we further Taylor expand $\bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F_{\epsilon,{\bf z}}})$ using $\hat{{\bf u}}_{F_{\epsilon,{\bf z}}} = \hat{{\bf u}}_F + \rho {\bf h} + O(\rho^2)$:
$$
\bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) = \bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F})
+ \rho {\bf h} \frac{\partial}{\partial{\bf u}} \bar{g}_{c}({\bf x}; {{\bf u}}) \Big|_{{\bf u}=\hat{{\bf u}}_{F}} + O(\rho^2)
$$
Using again the chain rule, we obtain that
$$
\frac{\partial}{\partial{\bf u}} \bar{g}_{c}({\bf x}; {{\bf u}}) =
\bar{g}_{cc}({\bf x}; {{\bf u}}) {\bf x} + \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {{\bf u}}) \frac{\partial}{\partial{\bf u}} \boldsymbol{\theta}_{F}({\bf u})
$$
The computation of the partial derivative $\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u})$ follows. The formula
$\boldsymbol{\theta}_{F}({\bf u}) = \argmax_{\boldsymbol{\theta}} l_F({\bf x}^T{\bf u}|\boldsymbol{\theta})$ implies that
\begin{equation*}
\frac{\partial}{\partial \boldsymbol{\theta}} l_F(c({\bf u})|\boldsymbol{\theta}) \Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{F}({\bf u})} = 0 \ .
\end{equation*}
Differentiating with respect to ${\bf u}$ and using the implicit function theorem, we get
\begin{equation*}
\begin{aligned}
\frac{\partial}{\partial {\bf u}} \boldsymbol{\theta}_{F}({\bf u}) &=
- \frac{\partial}{\partial {\bf u}} \frac{\partial}{\partial \boldsymbol{\theta}} l_F(c({\bf u})|\boldsymbol{\theta}) \Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{F}({\bf u})} \left[ \frac{\partial^2}{\partial \boldsymbol{\theta}^2} l_F(c({\bf u})|\boldsymbol{\theta}) \Big|_{\boldsymbol{\theta}=\boldsymbol{\theta}_{F}({\bf u})} \right]^{-1} \\
&= - \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};{\bf u}) dF({\bf x})
\left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\bftheta}({\bf x};{\bf u})dF({\bf x}) \right]^{-1}
\end{aligned}
\end{equation*}
Thus,
\begin{equation*}
\begin{aligned}
&\bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) = \bar{g}_{c}({\bf x}; \hat{{\bf u}}_{F}) \\
+& \rho {\bf h} \left[\bar{g}_{cc}({\bf x}; \hat{{\bf u}}_F) {\bf x} + \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x})
\left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\bftheta}({\bf x};\hat{{\bf u}}_{F})dF({\bf x}) \right]^{-1} \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {\hat{{\bf u}}_F}) \right] + O(\rho^2)
\end{aligned}
\end{equation*}
Overall, (\ref{opt:cond:perturbed}) becomes
\begin{equation*}
\begin{aligned}
&\left[ {\bf P}_{\hat{{\bf u}}_F} - \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) + O(\rho^2) \right] \cdot
\left[ (1-\epsilon) \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF({\bf x})
+ \epsilon \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right] = 0 \\
\Rightarrow&
{\bf P}_{\hat{{\bf u}}_F} \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf x} dF({\bf x})
- \rho (\hat{{\bf u}}_{F} {\bf h}^T + {\bf h} \hat{{\bf u}}_{F}^T) \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x}) + O(\rho^2) \\
&= \epsilon {\bf P}_{\hat{{\bf u}}_F} \left[\int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
- \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right] + O(\epsilon\rho) \\
\Rightarrow&
\rho {\bf h} \left[ \int_{\mathbb R^p}\bar{g}_{cc}({\bf x}; \hat{{\bf u}}_F) {\bf x}^T {\bf x} dF({\bf x})
+ \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) \hat{{\bf u}}_{F}^T {\bf x} dF({\bf x}) \right. \\
&+ \left. \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x})
\left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\bftheta}({\bf x};\hat{{\bf u}}_{F})dF({\bf x}) \right]^{-1} \int_{\mathbb R^p} \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {\hat{{\bf u}}_F}) {\bf x} dF({\bf x}) \right] + O(\rho^2) \\
&= \epsilon {\bf P}_{\hat{{\bf u}}_F} \left[\int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
- \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right] + O(\epsilon\rho)
\end{aligned}
\end{equation*}
where we use the facts that
$${\bf P}_{\hat{{\bf u}}_F} {\bf h} = {\bf h}$$
and
$${\bf h}^T \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
= {\bf P}_{\hat{{\bf u}}_F} \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
= {\bf P}_{\hat{{\bf u}}_F} \left. \frac{\partial}{\partial{\bf u}} l_{F}\big({\bf u}|\boldsymbol{\theta}_{F}({\bf u})\big) \right|_{{\bf u}=\hat{{\bf u}}_{F}} = 0
$$
Thus, the influence function is
$$
IF_{\hat{{\bf u}}_F} ({\bf z}, F) = \lim_{\epsilon\to0} \frac{\rho{\bf h}}{\epsilon} = {\bf A}^{-1} {\bf b}
$$
where
\begin{equation*}
\begin{aligned}
{\bf A} &= {\bf I}_{p} \left[ \int_{\mathbb R^p}\bar{g}_{cc}({\bf x}; \hat{{\bf u}}_F) {\bf x}^T {\bf x} dF({\bf x})
+ \int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) \hat{{\bf u}}_{F}^T {\bf x} dF({\bf x}) \right] \\
&+ \int_{\mathbb R^p} {\bf x} \bar{g}_{c\boldsymbol{\theta}}({\bf x};\hat{{\bf u}}_{F}) dF({\bf x})
\left[ \int_{\mathbb R^p} \bar{g}_{\boldsymbol{\theta}\bftheta}({\bf x};\hat{{\bf u}}_{F})dF({\bf x}) \right]^{-1} \int_{\mathbb R^p} \bar{g}_{c\boldsymbol{\theta}}({\bf x}; {\hat{{\bf u}}_F}) {\bf x} dF({\bf x})
\end{aligned}
\end{equation*}
and
$$
{\bf b} = {\bf P}_{\hat{{\bf u}}_F} \left[\int_{\mathbb R^p} \bar{g}_c({\bf x};\hat{{\bf u}}_{F}) {\bf x} dF({\bf x})
- \bar{g}_c({\bf z};\hat{{\bf u}}_{F_{\epsilon,{\bf z}}}) {\bf z} \right]
$$
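The limit defining the influence function can always be approximated by a finite $\epsilon$-contamination. A minimal sketch for the simplest functional, the mean, whose influence function is the classical $IF(z,F)=z-T(F)$ (the data, $z$ and $\epsilon$ below are illustrative):

```python
import numpy as np

def contaminated_mean(x, z, eps):
    """T(F_{eps,z}) for T = mean under the mixture (1 - eps) F + eps delta_z."""
    return (1 - eps) * np.mean(x) + eps * z

def empirical_if(x, z, eps=1e-6):
    """Finite-epsilon approximation of IF(z, F) = lim (T(F_{eps,z}) - T(F)) / eps."""
    return (contaminated_mean(x, z, eps) - np.mean(x)) / eps

x = np.array([1.0, 2.0, 3.0, 4.0])    # empirical F
print(empirical_if(x, 10.0))           # ~ z - mean(x) = 7.5
```

For the mean the finite-$\epsilon$ quotient is exactly $z-\bar{x}$; for the projection-pursuit estimator above the same recipe approximates ${\bf A}^{-1}{\bf b}$.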
\begin{comment}
\subsection{Proof of Lemma \ref{d/du(ThetaF(u))}}
To find the influence function for the functional version of the parameter vector ${\boldsymbol {\theta}}_{F}({\bf u})$ at fixed ${\bf u}$, we start from step $1$ in Section (\ref{Pre}). In this step we need the derivative of ${\hat{\boldsymbol {\theta}}}_{F}({\bf u})$ with respect to ${\bf u}$, computed as follows:\\
\begin{eqnarray*}
\frac{\partial}{\partial {\boldsymbol{\theta}}} m({\bf u},{\boldsymbol {\theta}}) &=&
\frac{\partial}{\partial {\boldsymbol{\theta}}} \int_{{\bf x} \in {\mathbb{R}}^{p}} l[{\bf x}; {\bf u}] dF({\bf x}) = \int_{{\bf x} \in {\mathbb{R}}^{p}} l_{; {\boldsymbol {\theta}}}[{\bf x}; {\bf u}] dF({\bf x}).
\end{eqnarray*}
Choose ${\boldsymbol {\theta}}_{F}({\bf u})$ in step $1$ to satisfy the $M$-estimator condition
\begin{equation}\label{1}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] dF({\bf x}) = {\bf 0}.
\end{equation}
Suppose we now rotate ${\bf u} \mapsto {\bf v}$, where ${\bf v}$ is defined in (\ref{v}) such that:
\begin{equation*}
{\bf v}= {\bf u} \cos{\alpha} + {\bf h} \sin{\alpha},
\end{equation*}
Now (\ref{1}) still holds for ${\bf v}$, so
\begin{equation}\label{2}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf v}] dF({\bf x}) = {\bf 0}.
\end{equation}
For small ${\alpha},$ a Taylor expansion gives
\begin{eqnarray}\label{3}
\nonumber l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf v}]&=& l_{;{\boldsymbol {\theta}}}[{\bf x}; ({\bf u} \cos{\alpha}+ {\bf h} \sin{\alpha})]\\
&=& l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]+ l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h} \alpha + l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Delta_{F,{\bf h},\alpha}+ O(\alpha^{2}),
\end{eqnarray}
where $\Delta_{F,{\bf h},\alpha}$ is defined in (\ref{Delta(F,h,alpha)}). Substituting (\ref{3}) into (\ref{2}) yields
\begin{equation}\label{3-a}
\int_{{\bf x} \in \mathbb{R}^{p}} \left[l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] + l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] {\bf x}^{T}{\bf h} \alpha + l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Delta_{F,{\bf h},\alpha}\right]dF({\bf x}) = O(\alpha^{2}).
\end{equation}
Using (\ref{1}) on the first term in the integrand of (\ref{3-a}) and rearranging the equation, we obtain
\begin{equation*}
\left(\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x}) \right) \Delta_{F,{\bf h},\alpha} = - \alpha \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] {\bf x}^{T}{\bf h} dF({\bf x}) + O(\alpha^{2}),
\end{equation*}
which leads to
\begin{equation}\label{3-b}
\frac{\Delta_{F,{\bf h},\alpha}}{\alpha} = - \left(\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x}) \right)^{-1} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}dF({\bf x}) + O(\alpha)
\end{equation}
Taking the limit of (\ref{3-b}) as $\alpha \rightarrow 0,$ we obtain
\begin{equation}\label{4a}
\lim _{\alpha \rightarrow 0} \frac{\Delta_{F,h,\alpha}}{\alpha} = - {\boldsymbol \Xi}^{-1} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}dF({\bf x}).
\end{equation}
where ${\boldsymbol \Xi}$ is defined in (\ref{xi-CPC}).
\hfill$\square$
\subsection{Proof of Lemma \ref{d/dF(ThetaF(u))}}
Choose ${\boldsymbol {\theta}}_{F}({\bf u})$ in step $1$ to satisfy the $M$-estimator condition
\begin{equation}\label{1}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}({\bf x}^{T}{\bf u}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u})) dF_{{\bf z},\epsilon}({\bf x}) = {\bf 0}.
\end{equation}
For fixed ${\bf u}$ and variable $F$:
\begin{equation}\label{1-1CPC}
l_{;{\boldsymbol {\theta}}}({\bf x}^{T}{\bf u}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u}))= l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]+ l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Gamma_{F,{\bf h},\alpha}+ O(\epsilon^{2}),
\end{equation}
where $\Gamma_{F,{\bf h},\alpha}$ is given in (\ref{Gamma(F,h,alpha)}).
Then
\begin{eqnarray*}
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}}({\bf x}^{T}{\bf u}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u})) dF_{{\bf z},\epsilon}({\bf x}) &=&
\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}[{\bf x}; {\bf u}] \Gamma_{F,{\bf h},\alpha} dF_{\bf z,\epsilon}({\bf x}) \nonumber \\
& & + \epsilon {\hspace{1mm}} l_{;{\boldsymbol {\theta}}}[{\bf z}; {\bf u}] + O(\epsilon^{2}),
\end{eqnarray*}
which implies that
\begin{eqnarray}\label{4b}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}{\hspace{1mm}} \Gamma_{F,{\bf h},\alpha} &=& - \left(\int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta}}{\boldsymbol {\theta}}}({\bf x}; {\bf u})dF_{\bf z,\epsilon}({\bf x}) \right)^{-1} l_{;{\boldsymbol {\theta}}} [{\bf z}; {\bf u}] \nonumber \\
&=& - {\boldsymbol \Xi}^{-1} l_{;{\boldsymbol {\theta}}} [{\bf z}; {\bf u}].
\end{eqnarray}
where ${\boldsymbol \Xi}$ is defined in (\ref{xi-CPC})
\hfill$\square$
\subsection{Proof of Lemma \ref{lemma_cauchyIF}}
In the proofs of Lemmas \ref{d/du(ThetaF(u))} and \ref{d/dF(ThetaF(u))} we found the derivatives of ${\boldsymbol {\theta}}_{F}({\bf u})$, satisfying step $1$ in Section \ref{Pre}, in which $m({\bf u},{\boldsymbol {\theta}})$ is maximised. We now find the influence function for ${\hat{\bf u}}$ by satisfying step $2$ in Section \ref{Pre}. Using (\ref{1}), note that
\begin{eqnarray*}
\nonumber \frac{\partial}{\partial{\bf u}} \int_{{\bf x} \in \mathbb{R}^{p}} l[{\bf x}; {\bf u}]dF({\bf x}) &=& \int_{{\bf x} \in \mathbb{R}^{p}}\left(l_{c;}[{\bf x}; {\bf u}]{\bf x} + l_{;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]\frac{\partial{\boldsymbol {\theta}}_{F}}{\partial{\bf u}}\right) dF({\bf x}) \\
&=& \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}]{\bf x}dF({\bf x}),
\end{eqnarray*}
where the second term above is $0$ by definition of ${\boldsymbol {\theta}}_{F}({\bf u}).$
Therefore, a necessary condition for ${\bf u}$ to be a minimum of $m({\bf u}; {\boldsymbol {\theta}}_{F}({\bf u}))$ is
\begin{equation}\label{5}
{\bf P_{u}} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}]{\bf x}dF({\bf x})= {\bf 0},
\end{equation}
where ${\bf P_{u}}= {\bf I}_{p} - {\bf u}{\bf u}^{T}$ (see (\ref{Pu0})) is a projection matrix. It appears because ${\bf u}$ is a unit vector: as explained in Section \ref{P}, we treat ${\bf u}$ as a general vector and then project onto the subspace orthogonal to ${\bf u}$.
Now, consider a mixture distribution which is defined in (\ref{mix.dis.}) where $\epsilon \in {[0,1)}$ is small and $\delta_{\bf z}$ is the distribution which assigns all probability to ${\bf z}$ as in (\ref{delta}). Assume ${\bf v}= {\bf u}\cos{\alpha}+{\bf h} \sin{\alpha}$ where ${\bf h}$ and ${\alpha}$ depend on $\epsilon$ and ${\bf z}$. Then
\begin{equation}\label{6}
{\bf P}_{\bf v} \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}\left({\bf x}^{T}{\bf v}; {\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf v})\right){\bf x} dF_{{\bf z},\epsilon}({\bf x})= {\bf 0}
\end{equation}
For small $\epsilon$ and $\alpha$,
\begin{eqnarray}\label{7}
{\bf P}_{\bf v}={\bf P}_{({\bf u}\cos{\alpha}+{\bf h} \sin{\alpha})} &=& {\bf I} - ({{\bf u}\cos{\alpha}+{\bf h} \sin{\alpha}})({{\bf u}\cos{\alpha}+{\bf h} \sin{\alpha}})^{T} \nonumber\\
&=& {\bf I}-{\bf u}{\bf u}^{T}-\alpha({\bf uh}^{T}+{\bf hu}^{T})+ O(\alpha^{2}) \nonumber\\
&=& {\bf P_{u}}- \alpha({\bf uh}^{T}+{\bf hu}^{T})+ O(\alpha^{2}).
\end{eqnarray}
Therefore, from Lemmas \ref{d/du(ThetaF(u))} and \ref{d/dF(ThetaF(u))}, using the fact that
\begin{eqnarray}
({\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf u})) &=& ({\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf v}))+({\boldsymbol {\theta}}_{F}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf u})) \nonumber\\
&=& ({\boldsymbol {\theta}}_{{\bf z},\epsilon}({\bf u})-{\boldsymbol {\theta}}_{F}({\bf u}))+({\boldsymbol {\theta}}_{F}({\bf v})-{\boldsymbol {\theta}}_{F}({\bf u}))+ O(\epsilon^{2})\nonumber\\
&=& \Gamma_{F, {\bf h}, \alpha} + \Delta_{F, {\bf h}, \alpha} + O(\epsilon^{2}),
\end{eqnarray}
we have
\begin{eqnarray}\label{8}
l_{c;}({\bf x}^{T}{\bf v};{\boldsymbol{\theta}}_{{\bf z},\epsilon}({\bf v}))&=&l_{c;}[{\bf x}; ({\bf u}\cos{\alpha}+{\bf h}\sin{\alpha})]\nonumber\\
&=&l_{c;}[{\bf x}; {\bf u}] + l_{cc;}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}\alpha + l_{c;\boldsymbol{\theta}}[{\bf x}; {\bf u}]\Delta_{F,{\bf h},\alpha} \nonumber \\
& &+ \ l_{c; \boldsymbol{\theta}}[{\bf x}; {\bf u}]\Gamma_{F, {\bf h}, \alpha} + O(\alpha^{2}),
\end{eqnarray}
where $\Delta_{F, {\bf h}, \alpha} $ and $\Gamma_{F, {\bf h}, \alpha}$ are defined in section \ref{P} and \ref{d/du(ThetaF(u))} respectively.
Substituting (\ref{7}) and (\ref{8}) into (\ref{6}), and ignoring $O(\alpha^{2})$ terms, we obtain
\begin{equation}
\left({\bf P_{u}} - \alpha({\bf hu}^{T}+{\bf uh}^{T})\right) \int_{{\bf x} \in \mathbb{R}^{p}} {\bf T}\,{\bf x} \, [(1-\epsilon)dF({\bf x}) + \epsilon {\hspace{1mm}} d \delta_{\bf z}({\bf x})]={\bf 0},
\end{equation}
where
\[{\bf T}= l_{c;}[{\bf x}; {\bf u}] + l_{cc;}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf h}\alpha + l_{c;\boldsymbol{\theta}}[{\bf x};{\bf u}]\Delta_{F,{\bf h},\alpha}+l_{c;\boldsymbol{\theta}}[{\bf x}; {\bf u}]\Gamma_{F,{\bf h},\alpha}.\]
Then collecting the $O(\alpha)$ terms we get:
\begin{eqnarray*}
O(\alpha) &=& -\alpha({\bf hu}^{T}+ {\bf uh}^{T})\int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}] {\bf x} dF({\bf x}) + \epsilon l_{c;}[{\bf z}; {\bf u}]{\bf z} \\
& & + {\bf P_{u}} \int_{{\bf x} \in \mathbb{R}^{p}} \left( l_{cc;}[{\bf x}; {\bf u}]{\bf x}^{T} {\bf h} \alpha + l_{c;{\boldsymbol \theta}}[{\bf x}; {\bf u}]\Delta_{F,h,\alpha}+ l_{c;{\boldsymbol \theta}}[{\bf x};{\bf u}]\Gamma_{F,h,\alpha} \right) {\bf x} dF({\bf x}) \\
&=& {\bf 0}.
\end{eqnarray*}
Now (\ref{1}) implies that
\[{\bf h}^{T}\int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}] {\bf x} dF({\bf x}) ={\bf 0},\]
and using (\ref{4b}),
\begin{equation}\label{Gamma approx}
\Gamma_{F,h,\alpha}\simeq - \epsilon \left[\int_{{\bf x} \in \mathbb{R}^{p}} l_{; {\boldsymbol {\theta \theta}}}[{\bf x}; {\bf u}] dF({\bf x})\right]^{-1} l_{;{\boldsymbol \theta}}[{\bf z}; {\bf u}].
\end{equation}
Consequently,
\[\alpha {\bf A h}= \epsilon {\bf B}+O(\epsilon^{2}),\]
where
\begin{eqnarray}\label{A}
{\bf A} &=& {\bf I}_{p} \displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{c;}[{\bf x}; {\bf u}]{\bf x}^{T}{\bf u} dF({\bf x}) - {\bf P_{u}} \displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{cc;}[{\bf x}; {\bf u}]{\bf x}{\bf x}^{T} dF(\bf x) {\bf P_{u}} \nonumber\\
& & + {\bf P_{u}} \left(\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} {\bf x} l_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]dF({\bf x})\right)\left(\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta \theta}}}[{\bf x}; {\bf u}]dF({\bf x})\right)^{-1}
\nonumber \\
& & \left(\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l^{T}_{c;{\boldsymbol {\theta}}}[{\bf x}; {\bf u}]{\bf x}^{T} dF({\bf x})\right) {\bf P_{u}}.
\end{eqnarray}
and
\begin{eqnarray}\label{B}
{\bf B}=l_{c;}[{\bf z}; {\bf u}]{\bf z}- \displaystyle \int_{{\bf x} \in \mathbb{R}^{p}}{\bf x} l_{c;{\boldsymbol{\theta}}}[{\bf x}; {\bf u}]dF({\bf x})
\left(-\displaystyle \int_{{\bf x} \in \mathbb{R}^{p}} l_{;{\boldsymbol {\theta \theta}}} [{\bf x}; {\bf u}]dF({\bf x}) \right)^{-1} l_{;{\boldsymbol {\theta}}}[{\bf z}; {\bf u}].
\end{eqnarray}
Therefore
\[\frac{\alpha{\bf h}}{\epsilon}={\bf A}^{-1} {\bf B}+ O(\epsilon), \]
and finally, letting $\epsilon \rightarrow 0$
\begin{equation}\label{CauchyIF}
IF_{\hat{\bf u}}({\bf z}; F)= \lim _{\epsilon \rightarrow 0}\left( \frac{{\bf v}-{\bf u}}{\epsilon}\right)=\lim _{\epsilon \rightarrow 0}\frac{{\bf h}\alpha}{\epsilon}={\bf A}^{-1} {\bf B}.
\end{equation}
\end{comment}
\clearpage
\section{Introduction}
\label{sec:intro}
Two lines in an affine space $\mathbb{R}^N$ are called {\em skew}
if they are neither parallel nor intersecting or, equivalently, if
their affine span has dimension $3$. More generally, affine
subspaces $U_1,\ldots, U_l$ of $\mathbb{R}^N$ are called {\em
skew} if their affine span has dimension ${\rm
dim}(U_1)+\cdots +{\rm dim}(U_l)+l-1$; in particular, a pair $U,V$
of affine subspaces of $\mathbb{R}^N$ is skew if and only if any
two lines $p\subset U$ and $q\subset V$ are skew.
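The dimension criterion translates directly into a rank computation: collect one base point plus one point per spanning direction from each subspace and compare the dimension of their joint affine span with $\dim(U_1)+\cdots+\dim(U_l)+l-1$. A small sketch (the sample lines and helper names are illustrative):

```python
import numpy as np

def affine_span_dim(points):
    """Dimension of the affine span of a set of points in R^N."""
    p = np.asarray(points, dtype=float)
    return np.linalg.matrix_rank(p[1:] - p[0])

def are_skew(subspaces):
    """subspaces: list of (base_point, direction_vectors) pairs.
    Skew  <=>  dim of the joint affine span equals sum(dim U_i) + l - 1."""
    pts, total = [], 0
    for base, dirs in subspaces:
        base = np.asarray(base, dtype=float)
        pts.append(base)
        pts.extend(base + np.asarray(d, dtype=float) for d in dirs)
        total += len(dirs)
    return affine_span_dim(pts) == total + len(subspaces) - 1

line1 = ([0, 0, 0], [[1, 0, 0]])      # the x-axis
line2 = ([0, 0, 1], [[0, 1, 0]])      # parallel to the y-axis, lifted to z = 1
line3 = ([0, 1, 0], [[1, 0, 0]])      # parallel to line1
print(are_skew([line1, line2]))       # True
print(are_skew([line1, line3]))       # False
```

The first pair is the standard example of skew lines in $\mathbb{R}^3$; the second pair is parallel, so the joint affine span has dimension $2<3$.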
\medskip
An embedding $f : M^n\rightarrow \mathbb{R}^{N}$ of a smooth
manifold is called {\em totally skew} if for each two distinct
points $x,y\in M^n$ the affine subspaces $df(T_xM)$ and $df(T_yM)$
of\, $\mathbb{R}^N$ are skew. Define $N(M^n)$ as the minimum $N$
such that there exists a totally skew embedding of $M^n$ into
$\mathbb{R}^N$.
\medskip
Ghomi and Tabachnikov began in \cite{Gho-Tab} the study of totally
skew embeddings of mani\-folds and established a surprising
connection of $N(M^n)$ with some classical invariants of geometry
and topology. For example they showed \cite[Theorem~1.4]{Gho-Tab}
that the problem of estimating $N(\mathbb{R}^n)$ is intimately
related to the generalized vector field problem and the immersion
problem for real projective spaces, as exemplified by the
inequality
$$
N(\mathbb{R}^n)\geq r(n) +n
$$
where $r(n)$ is the minimum $r$ such that the Whitney sum
$r\xi_{n-1}$ of $r$ copies of the canonical line bundle over
$\mathbb{R}P^{n-1}$ admits $n+1$ linearly independent continuous
cross-sections.
\medskip
Another example (\cite[Theorem~1.2]{Gho-Tab}) is the inequality
$$
N(S^n)\leq n + m(n) +1
$$
where $m(n)$ is an equally well-known function defined as the
minimum $m$ such that there exists a non-singular, symmetric
bilinear form $B : \mathbb{R}^{n+1}\times
\mathbb{R}^{n+1}\rightarrow \mathbb{R}^m$. As a consequence they
deduced the inequalities $N(S^n)\leq 3n+2$ and $N(S^{2k+1})\leq
3(2k+1)+1$.
\medskip
It appears that very little is known about the exact values of
$N(M)$. Indeed, according to \cite{Gho-Tab}, the only currently
known exact values of this invariant are,
$$N(\mathbb{R}^1) = 3,\quad N(S^1)=4,\quad N(\mathbb{R}^2)=6.$$
Finally for a general $n$-manifold $M^n$ Ghomi and Tabachnikov
established upper and lower bounds
\begin{equation}\label{eqn:lower-upper}
2n+1\leq N(M^n)\leq 4n+1
\end{equation}
and showed that the lower bound can be improved to $2n+2$ if $M^n$
is a closed manifold.
\bigskip
In this paper we are interested in topological obstructions to
totally skew embeddings of manifolds, in particular we address the
problem of finding good lower bounds for $N(M^n)$. We demonstrate
that in many classes of manifolds there are examples where the
upper bound $4n+1$ from (\ref{eqn:lower-upper}) is very close to
the actual value of $N(M^n)$. For example, by
Proposition~\ref{prop:proj-donja-ocena}, if $n=2^k$ is a power of
$2$ then $N(\mathbb{R}{P}^n)$ is one of the numbers
$4n-1, 4n, 4n+1$; in particular
$N(\mathbb{R}P^2)$ is 7, 8, or 9. More generally, if $M^n =
\mathbb{R}P^{n_1}\times\cdots\times \mathbb{R}P^{n_k}$ is a
product of real projective spaces and $n_i=2^{r_i}$ are different
powers of $2$, then (Theorem~\ref{thm:proj-product})
$$
N(M^n) = N(\mathbb{R}{P}^{n_1}\times\cdots\times
\mathbb{R}{P}^{n_k}) \geq 4n - 2\alpha(n) + 1
$$
where $\alpha(n)$ is the number of non-zero digits in the binary
representation of $n$. A similar bound
(Theorem~\ref{thm:compl-proj-product})
$$
N(X)\geq 8n-4\alpha (n)+1 = 4\cdot{\rm dim}_{\mathbb{R}}(X) -
4\alpha(n) +1
$$
is obtained if $X=\mathbb{C}P^{n_1}\times \cdots \times
\mathbb{C}P^{n_k}$ where $n_i=2^{r_i}$ are different powers of $2$
and $n=n_1+\cdots +n_k={\rm dim}_{\mathbb{C}}(X)$.
\medskip In pursuit of other examples of manifolds where $N(M^n)$
gets very close to the upper bound $4n+1$ we continue with the
analysis of Grassmann manifolds $G_k(\mathbb{R}^{n+k})$ and their
oriented counterparts $\tilde{G}_k(\mathbb{R}^{n+k})$. For example
(Theorems~\ref{t1} and \ref{t3}) we prove that
$N\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)\geq 4\cdot
2^{r+1}-3$ and $N(\tilde{G}_2(\mathbb{R}^{2^r+2}))\geq 3\cdot
2^{r+1}+1$. Similar inequalities can be expected for many other
Grassmannians as illustrated by the inequalities
$$
N(G_3(\mathbb{R}^6))\geq 31,\quad N(G_3(\mathbb{R}^7))\geq
43,\quad N(\tilde{G}_3(\mathbb{R}^7))\geq 41, \mbox{ {\rm etc.} }
$$
These results are in sharp contrast with the fact that very little
is known about the exact values of the invariant $N(M^n)$, for
example the exact value of $N(M^2)$ is not known for any closed
surface $M^2$. This and a sample of other open problems and
conjectures can be found in the final section of the paper where
we also offer a brief outlook to future research.
Possibly the most intriguing and attractive is
Conjecture~\ref{conj:najinteresatnije} which, in analogy with the
classical {\em Immersion Conjecture} \cite{Cohen}, predicts that
for $n>1$
$$
N(M^n)\leq 4n - 2\alpha(n)+1.
$$
\section{Vector bundle decomposition}
\label{sec:vector-bundle}
Let $F_2(M):= M^2\setminus\Delta_M$ be the configuration space
(manifold) of all distinct ordered pairs of points in $M$. The
tangent bundle $T(F_2(M))$ admits a splitting
\begin{equation}\label{eqn:splitting}
T(F_2(M))\cong \pi_1^\ast TM\oplus \pi_2^\ast TM
\end{equation}
where $\pi_1, \pi_2 : F_2(M)\rightarrow M$ are the natural
projections. Simplifying the notation let $T_{(x,y)}(F_2(M)) \cong
T_x(M)\oplus T_y(M)$ be the fibre of this bundle at $(x,y)\in
F_2(M)$.
\begin{figure}[hbt]
\centering
\includegraphics[scale=0.45]{skew.eps}
\caption{Fibre of the bundle
$T(F_2(M))\oplus\varepsilon^1$.}\label{fig:prvi-kvadrant}
\end{figure}
\medskip
If $f : M^n\rightarrow \mathbb{R}^{N}$ is an embedding, then there
is a trivial line bundle $L$ over $F_2(M)$ such that for $(x,y)\in
F_2(M)$ the fibre $L_{(x,y)}$ is the line $\mathbb{R}\cdot
(f(y)-f(x))$. If $f : M^n\rightarrow \mathbb{R}^{N}$ is a totally
skew embedding, then there arises a monomorphism of vector bundles
$$
\Phi = \Phi^{(1)}\oplus \Phi^{(2)}: T(F_2(M))\oplus \varepsilon^1
\longrightarrow F_2(M)\times \mathbb{R}^N
$$
where $\Phi^{(1)}_{(x,y)} : T_x(M)\oplus T_y(M) \rightarrow
\mathbb{R}^N$ is the map defined by $\Phi^{(1)}_{(x,y)}(u,v) =
df_x(u) + df_y(v)$ and $\Phi^{(2)}$, defined by
$\Phi^{(2)}(\lambda)=\lambda(f(y)-f(x))$, maps the trivial line
bundle $\varepsilon^1$ to $L$. In this case the trivial
$N$-dimensional bundle $\varepsilon^N$ over $F_2(M)$ splits as
\begin{equation}\label{eqn:splits}
\varepsilon^N \cong T(F_2(M))\oplus \varepsilon^1\oplus \nu
\end{equation}
where $\nu$ is an $(N-2n-1)$-dimensional ``normal'' bundle. As a
consequence (\cite[Section~4]{Mi-Sta}) we obtain the following
proposition.
\begin{prop}\label{prop:dual}
If the dual Stiefel-Whitney class
$$\overline{w}_k(T(F_2(M))) := w_k(\nu)\in
H^k(F_2(M),\mathbb{F}_2)$$ is non-zero, then $2n+k+1\leq N$. In
particular, $N(M)\geq 2n+k+1$.
\end{prop}
\section{Characteristic classes of $T(F_2(M))$}
\label{sec:characteristic-classes}
The cohomology of $F_2(M)= M^2\setminus \Delta_M$ can be
calculated from the long exact sequence of the pair
$(M^2,M^2\setminus \Delta_M)$,
\begin{equation}\label{eqn:long-pair}
\ldots \longrightarrow H^\ast(M^2,M^2\setminus \Delta_M)
\stackrel{\alpha}{\longrightarrow} H^\ast(M^2)
\stackrel{\beta}{\longrightarrow} H^\ast(F_2(M))
\longrightarrow\ldots
\end{equation}
We are interested in the (dual) Stiefel-Whitney classes
(Proposition~\ref{prop:dual}) so we tacitly assume that all
cohomology has coefficients $\mathbb{F}_2$ unless otherwise noted.
By naturality, in order to check non-triviality of
$\overline{w}_k(T(F_2(M)))$, it is sufficient to check that
$\overline{w}_k(M^2)$ is not in the image of the map $\alpha$.
\medskip
The image $A:={\rm Image}(\alpha)$ of $\alpha$ is determined in
\cite[Theorem~11.11]{Mi-Sta} (see also \cite[Chapter VI,
Section~12]{Bredon}). It is generated, as a $H^\ast(M)$-module, by
the ``diagonal cohomology class''
\begin{equation}\label{eqn:generator}
u'' = \sum_{i=1}^r b_i\times b_i^\sharp
\end{equation}
where $\{b_i\}_{i=1}^r$ is an additive basis of $H^\ast(M)$ and
$b_i^\sharp$ the class dual to $b_i$.
\medskip
There are two actions of the ring $H^\ast(M)$ on $H^\ast(M\times
M)$ associated with the projections $\pi_1,\pi_2 : M^2\rightarrow
M$. However in light of \cite[Lemma~11.8]{Mi-Sta}, which says that
if $a\in H^\ast(M)$ then
$$
(a\times 1)\cup u'' = (1\times a)\cup u'',
$$
these two actions have the same effect on $u''$. As a consequence
we obtain the following proposition.
\begin{prop}\label{prop:image}
\begin{equation}
\begin{array}{ccccccc}
A & = & \mbox{\rm Image}(\alpha) & = & H^\ast(M)\cdot u''& = &
\{(1\times a)\cup u'' \mid a\in H^\ast(M)\} \\
&&&&& = & \{(a\times 1)\cup u'' \mid a\in H^\ast(M)\}
\end{array}
\end{equation}
\end{prop}
\medskip
The following proposition provides a simple and efficient
criterion for testing if a class is in the image of the map
$\alpha$. Note that the condition $k\leq n-1$ is essential since
\begin{equation}
H^{2n}(M\times M) \cong H^n(M)\otimes H^n(M)\subset \mbox{\rm
Image}(\alpha).
\end{equation}
\begin{prop}\label{prop:sinisa}
\label{diagonal} Let $M$ be a closed and smooth $n$-dimensional
manifold. Let $k\leq n-1$ and assume that $\theta\in H^k(M)\otimes
H^k(M)\subset H^\ast(M\times M)\cong H^\ast(M)\otimes H^\ast(M)$
is a non-zero class. Then $\theta\notin {\rm Image}(\alpha )$.
More generally, if $\omega\in H^{n+p}(M\times M)$ is a non-zero
class which is in the image of $\alpha$ then it must have as a
component of bidegree $(p,n)$ a non-zero ``edge class'' of the
form $a\times z$ for some $a\in H^p(M)$, where $z\in H^n(M)$ is
the fundamental cohomology class of $M$.
\end{prop}
\medskip
\noindent {\bf Proof:} If $z\in H^n(M)$ is the fundamental
cohomology class of $M$ then the diagonal class $u''$ has the
following form
$$u''=z\times 1 + x_1\times y_{1}+\cdots +x_r\times y_r + 1\times
z$$ where $x_i\times y_i$ is a class of bidegree $(t,n-t)$ for
some $0<t<n$. If $\omega\in H^{n+p}(M\times M)$ is in the image of
$\alpha$ then we deduce from Proposition~\ref{prop:image} that
$$\omega = (a\times 1)u'' = az\times 1 + A + a\times z$$
where $A=ax_1\times y_1 +\cdots$ is a class whose homogeneous
components are of bidegree $(q,n+p-q)$ for some $q>p$, and the
proposition follows. \hfill $\square$
\begin{cor}\label{cor:lepo-1}
If $k:={\rm max}\{i\mid \overline{w}_i(M)\neq 0\}$ then
$\overline{w}_{2k}(T(F_2(M)))=w_{2k}(\nu)\neq 0$.
\end{cor}
\medskip\noindent
{\bf Proof:} It follows from the naturality of Stiefel-Whitney
classes that
$$
w_{2k}(\nu) = \overline{w}_{2k}(T(F_2(M))) =
\beta(\overline{w}_{2k}(M^2)) = \beta(\overline{w}_k(M)\times
\overline{w}_k(M)).
$$
We observe that $k\leq n-1$. Indeed, each $n$-dimensional smooth
manifold can be embedded in $\mathbb{R}^{2n}$ and
$\overline{w}_n(M)=0$ by \cite[Corollary~11.4]{Mi-Sta}.
Since $k\leq n-1$ we are allowed to use
Proposition~\ref{prop:sinisa} which implies that
$\overline{w}_k(M)\times \overline{w}_k(M)\notin {\rm
Image}(\alpha)$. From here and the exactness of the sequence
(\ref{eqn:long-pair}) we finally deduce that $w_{2k}(\nu)\neq 0$.
\hfill $\square$
\section{Real projective spaces}
As a first application let us analyze the case when $M =
\mathbb{R}P^n$ is the $n$-dimensional real projective space.
\medskip
The cohomology algebra $$H^\ast(\mathbb{R}P^n)\cong
\mathbb{F}_2[t]/(t^{n+1}=0)$$ is a truncated polynomial ring with
one generator $t\in H^1(\mathbb{R}P^n)$.
\medskip
The total Stiefel-Whitney class of $T(\mathbb{R}P^n)$ is given
(\cite[Theorem~4.5.]{Mi-Sta}) by the formula
\begin{equation}\label{eqn:total-class-proj}
w(\mathbb{R}P^n) = (1+t)^{n+1}
\end{equation}
and the dual classes are
$$
\overline{w}(\mathbb{R}P^n) = w(\mathbb{R}P^n)^{-1}.
$$
Suppose that $n=2^r$ is a power of $2$. Then
$$
w(\mathbb{R}P^n)= 1 +t + t^n \quad\mbox{\rm and}\quad
\overline{w}(\mathbb{R}P^n) = 1+t+t^2+\cdots + t^{n-1}.
$$
It follows that
$$\overline{w}_{n-1}(\mathbb{R}P^n)=t^{n-1}\neq 0.$$
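The computation of $\overline{w}(\mathbb{R}P^n)$ amounts to inverting $(1+t)^{n+1}$ in the truncated polynomial ring $\mathbb{F}_2[t]/(t^{n+1})$, which is easy to check mechanically. A minimal sketch (the helper names are ours, not the paper's):

```python
def poly_mul_f2(a, b, trunc):
    """Multiply polynomials over F_2 (lists of 0/1 coefficients), truncated below degree trunc."""
    out = [0] * trunc
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < trunc:
                    out[i + j] ^= 1
    return out

def total_sw_class_rpn(n):
    """Coefficients of w(RP^n) = (1+t)^(n+1) in F_2[t]/(t^(n+1))."""
    w = [1] + [0] * n
    for _ in range(n + 1):
        w = poly_mul_f2(w, [1, 1], n + 1)
    return w

def dual_class(w):
    """Invert the total class: solve w * wbar = 1 degree by degree (w[0] = 1)."""
    wbar = [1] + [0] * (len(w) - 1)
    for k in range(1, len(w)):
        acc = 0
        for i in range(1, k + 1):
            acc ^= w[i] * wbar[k - i]
        wbar[k] = acc        # over F_2, -acc = acc
    return wbar

n = 8                         # a power of 2
w = total_sw_class_rpn(n)
wbar = dual_class(w)
print(w)      # 1 + t + t^8:         [1, 1, 0, 0, 0, 0, 0, 0, 1]
print(wbar)   # 1 + t + ... + t^7:   [1, 1, 1, 1, 1, 1, 1, 1, 0]
```

In particular the coefficient of $t^{n-1}$ in the inverse is $1$, confirming $\overline{w}_{n-1}(\mathbb{R}P^n)\neq 0$ for $n=2^r$.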
As a consequence of Corollary~\ref{cor:lepo-1} we obtain that
$\overline{w}_{2n-2}(F_2(\mathbb{R}P^n))\neq 0$ and deduce from
Proposition~\ref{prop:dual} the following result.
\begin{prop}\label{prop:proj-donja-ocena}
If $f : \mathbb{R}P^n\rightarrow \mathbb{R}^N$ is a totally skew
embedding and $n=2^r$ for some $r$ then
$$
N \geq 4n-1.
$$
\end{prop}
\begin{cor}\label{cor:proj} For each integer $n$,
$$
N(\mathbb{R}P^n)\geq 4\cdot 2^{[\log_2 n]}-1.
$$
\end{cor}
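A quick numeric sketch of the corollary's bound, compared with the general lower bound $2n+2$ for closed manifolds from (\ref{eqn:lower-upper}) (the helper name is ours):

```python
from math import floor, log2

def rp_lower_bound(n):
    """Corollary's bound 4 * 2^floor(log2 n) - 1 for N(RP^n)."""
    return 4 * 2 ** floor(log2(n)) - 1

for n in (2, 3, 4, 5, 8):
    best = max(rp_lower_bound(n), 2 * n + 2)   # 2n + 2: closed-manifold bound
    print(n, rp_lower_bound(n), best)
# n = 2 gives 7; for n = 3 the corollary gives 7 but 2n+2 = 8 is better; n = 5 gives 15
```

The comparison shows the corollary dominates the generic bound except just above a power of $2$.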
It follows from Proposition~\ref{prop:proj-donja-ocena} and the
inequalities (\ref{eqn:lower-upper}) that if $n=2^r$ is a power of
$2$ then $N(\mathbb{R}P^n)$ is $4n-1, 4n$ or $4n+1$, in particular
$N(\mathbb{R}P^2)$ is $7,8$ or $9$.
\section{Products of real projective spaces}
\label{sec:prod-proj-spaces}
Suppose that $X=\mathbb{R}P^{n_1}\times \cdots \times
\mathbb{R}P^{n_k}$ is a product of real projective spaces where
each $n_i=2^{r_i}$ is a power of $2$. Let $n={\rm dim}(X) =
n_1+\cdots +n_k$.
\medskip The cohomology $H^*(X)\cong
\mathbb{F}_2[u_1,...,u_k]/(u_1^{n_1+1}=\ldots =u_k^{n_k+1}=0)$ of
$X$ is a truncated polynomial algebra with $k$ generators
$u_1,...,u_k\in H^1(X)$. Since $T(X)=T(\mathbb{R}P^{n_1})\times
\cdots \times T(\mathbb{R}P^{n_k}),$ the total Stiefel-Whitney
class of $T(X)$ is given by the formula
$$w(X)=(1+u_1)^{n_1+1}\cdots (1+u_k)^{n_k+1},$$
\noindent and its dual total class is $\overline{w}(X)=w(X)^{-1}$.
By assumption all integers $n_i$ are powers of $2$, hence
$$w(X)=(1+u_1+u_1^{n_1})\cdots (1+u_k+u_k^{n_k}),$$
\noindent and the dual class has the form
$$\overline{w}(X)=(1+u_1+\cdots +u_1^{n_1-1})
\cdots (1+u_k+\cdots +u_k^{n_k-1}).$$
From here we deduce that $\overline{w}_{n-k}=u_1^{n_1-1}\cdots
u_k^{n_k-1}$ is non-zero and observe, by a reference to
Proposition~\ref{prop:sinisa} and Corollary~\ref{cor:lepo-1}, that
$\overline{w}_{2n-2k}(F_2(X)) \neq 0$. This fact allows us to use
Proposition~\ref{prop:dual} which in turn implies the following
theorem.
\begin{theo}\label{thm:proj-product} Suppose that
$X=\mathbb{R}P^{n_1}\times \cdots \times \mathbb{R}P^{n_k}$ where
$n_i=2^{r_i}$ are powers of $2$. Let $n={\rm dim}(X)=n_1+\cdots
+n_k$. If there exists a totally skew embedding of $X$ in
$\mathbb{R}^N$ then $N\geq 4n-2k+1$. In particular if $n_i\neq
n_j$ for $i\neq j$,
$$
N(X)\geq 4n-2\alpha (n)+1
$$
where $\alpha(n)$ is the number of non-zero digits in the binary
representation of $n$.
\end{theo}
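For concrete products the bound of the theorem is a one-line computation; the sketch below (helper names are ours) also checks that $k=\alpha(n)$ when the exponents are distinct powers of $2$:

```python
def alpha(n):
    """Number of non-zero digits in the binary representation of n."""
    return bin(n).count("1")

def skew_lower_bound(powers):
    """Bound 4n - 2k + 1 for X = RP^{n_1} x ... x RP^{n_k}, each n_i = 2^{r_i}."""
    n, k = sum(powers), len(powers)
    return 4 * n - 2 * k + 1

dims = [1, 2, 4]                            # distinct powers of 2, n = 7
assert len(dims) == alpha(sum(dims))        # k = alpha(n) for distinct powers
print(skew_lower_bound([2]))                # RP^2: 4*2 - 2 + 1 = 7
print(skew_lower_bound(dims))               # 4*7 - 2*3 + 1 = 23
```

With distinct exponents the dyadic expansion of $n$ is exactly $n_1+\cdots+n_k$, which is why $k=\alpha(n)$.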
\section{Complex manifolds}
In some cases, for example if $M$ is a complex manifold, it may
be convenient to use Pontryagin classes for estimating the
invariant $N(M)$. However, the inequalities obtained by the use of
Pontryagin classes are in general not as sharp as the inequalities
obtained with the aid of Stiefel-Whitney classes so we focus on
the latter method.
\subsection{Complex projective spaces}
\label{sec:compl-proj}
The cohomology of the complex projective space with $\mathbb{Z}$
coefficients is a truncated polynomial algebra so by the Universal
Coefficient Theorem we have $H^\ast(\mathbb{C}P^n;
\mathbb{F}_2)\cong \mathbb{F}_2[t]/(t^{n+1}=0)$ where $\mbox{\rm
deg}(t)=2$.
Since the second Stiefel-Whitney class $w_2$ of any oriented
$2$-plane bundle is the mod 2 reduction of the Euler class, we
observe that $t = w_2(\xi_{\mathbb{R}}) =
w_2(\xi^\ast_{\mathbb{R}})$ where $\xi$ is the canonical complex
line bundle over $\mathbb{C}{P}^n$ and $\xi_{\mathbb{R}}$ the
underlying real 2-plane bundle.
\medskip
The complex tangent bundle of the projective space $\mathbb{C}P^n$
is
$$
T(\mathbb{C}P^n)\cong {\rm Hom}(\xi,\xi^\perp)
$$
where $\xi^\perp$ is the complex $n$-plane bundle, complementary
to the tautological complex line bundle $\xi$. Since ${\rm
Hom}(\xi,\xi)\cong \varepsilon^1_{\mathbb{C}}$ is a trivial
complex line bundle, we conclude that
\begin{equation}\label{eqn:complex-proj-tang-bundle}
T(\mathbb{C}P^n)\oplus\varepsilon^1_{\mathbb{C}} \cong
(\xi^\ast)^{\oplus (n+1)}
\end{equation}
where $\xi^\ast$ is the line bundle dual to $\xi$. By forgetting
the complex structure (realification) we obtain the isomorphism of
real bundles
\begin{equation}\label{eqn:complex-proj-real-bundle}
T(\mathbb{C}P^n)_{\mathbb{R}}\oplus\varepsilon^2_{\mathbb{R}}
\cong (\xi^\ast_{\mathbb{R}})^{\oplus (n+1)}.
\end{equation}
It follows that the total Stiefel-Whitney class of
$T(\mathbb{C}P^n)_{\mathbb{R}}$ is
$$
w = w(T(\mathbb{C}P^n)_{\mathbb{R}}) = (1+t)^{n+1} = (1+w_2)^{n+1}
$$
where $w_2 = w_2(\xi_{\mathbb{R}}) =
w_2(\xi^\ast_{\mathbb{R}})=t$ is the second Stiefel-Whitney class
of the realification of the canonical bundle $\xi$.
\medskip Consequently, the dual Stiefel-Whitney class is
\begin{equation}\label{eqn:dual-complex}
\overline{w} = (1+ w_2)^{-n-1}=\sum_{j=0}^n {{n+j}\choose{j}}
w_2^j.
\end{equation}
We observe that the top class $\overline{w}_{2n}$ is always zero,
which is an instance of a much more general result of Massey
(Theorem~\ref{thm:Massey}). Following Corollary~\ref{cor:lepo-1}
we search for the largest value of $j$ such that
$\overline{w}_{2j}={{n+j}\choose{j}}w_2^j\neq 0$. We observe that
$\overline{w}_{2n-2}\neq 0$ precisely when $n=2^r$ is a power of
$2$.
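This parity observation is an instance of Lucas' theorem: $\overline{w}_{2n-2}={{2n-1}\choose{n-1}}w_2^{n-1}$, and ${{2n-1}\choose{n-1}}$ is odd exactly when $n$ is a power of $2$. It is easy to machine-check for small $n$; the short sketch below (the helper name \verb|dual_coeff| is ours, not from the text) verifies both that the top class $\overline{w}_{2n}$ always vanishes and that $\overline{w}_{2n-2}$ survives precisely for $n=2^r$.

```python
from math import comb

def dual_coeff(n, j):
    # coefficient of w2^j in (1 + w2)^{-(n+1)} over F_2 is C(n+j, j) mod 2
    return comb(n + j, j) % 2

for n in range(1, 257):
    is_power_of_two = n & (n - 1) == 0
    # the top class wbar_{2n} = C(2n, n) w2^n always vanishes mod 2 ...
    assert dual_coeff(n, n) == 0
    # ... while wbar_{2n-2} = C(2n-1, n-1) w2^{n-1} is non-zero iff n = 2^r
    assert (dual_coeff(n, n - 1) == 1) == is_power_of_two
```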
Again, by invoking Corollary~\ref{cor:lepo-1}, we conclude that
$\overline{w}_{4n-4}(\nu)\neq 0$ and finally by
Proposition~\ref{prop:dual} we obtain the following result.
\begin{theo}\label{thm:compl-proj}
Suppose that $n=2^r$ for some $r\geq 0$. Then
$$
N(\mathbb{C}P^n) \geq 4n + (4n-4) + 1 = 4\cdot\mbox{\rm
dim}_{\mathbb{R}}(\mathbb{C}P^n) - 3.
$$
\end{theo}
\subsection{Products of complex projective spaces}
\label{sec:prod-compl-proj}
Suppose that $X = \mathbb{C}P^{n_1}\times\cdots\times
\mathbb{C}P^{n_k}$. As in Section~\ref{sec:prod-proj-spaces} we
focus on the case when $n_i=2^{r_i}$ for some $r_i$. As before $n
= (1/2){\rm dim}(X)=n_1+\cdots +n_k$. The cohomology ring of the
space $X$ with $\mathbb{F}_2$ coefficients is
$$H^*(X)\cong \mathbb{F}_2[u_1,...,u_k]/
(u_1^{n_1+1}=\ldots =u_k^{n_k+1}=0)$$ where ${\rm deg}(u_1)=\ldots
= {\rm deg}(u_k)=2$.
\medskip
We have already observed in Section~\ref{sec:compl-proj} that if
$n=2^r$ then
\begin{equation}\label{eqn:dual-complex-jos}
\overline{w}(T(\mathbb{C}P^n)) = \sum_{j=0}^n {{n+j}\choose{j}}
t^j = 1 + t + \cdots + t^{n-1}.
\end{equation}
It follows, as in Section~\ref{sec:prod-proj-spaces}, that the
total dual Stiefel-Whitney class of $T(X)$ has the form
$$\overline{w}(X)=(1+u_1+\cdots +u_1^{n_1-1})\cdots
(1+u_k+\cdots +u_k^{n_k-1}).$$
We conclude that $\overline{w}_{2n - 2k}= u_1^{n_1-1}\cdots
u_k^{n_k-1}$ is non-trivial, and as a consequence of
Proposition~\ref{prop:dual} and Corollary~\ref{cor:lepo-1} obtain
the following estimate.
\begin{theo}\label{thm:compl-proj-product}
Suppose that $X=\mathbb{C}P^{n_1}\times \cdots \times
\mathbb{C}P^{n_k}$ where $n_i=2^{r_i}$ are powers of $2$ and let
$n={\rm dim}_{\mathbb{C}}(X) = (1/2){\rm
dim}_{\mathbb{R}}(X)=n_1+\cdots +n_k$. Then $N(X)\geq 8n-4k+1$. In
particular if all integers $n_i$ are distinct,
$$
N(X)\geq 8n-4\alpha (n)+1 = 4\cdot{\rm dim}_{\mathbb{R}}(X) -
4\alpha(n) +1
$$
where $\alpha(n)$ is the number of non-zero digits in the binary
representation of $n$.
\end{theo}
\section{Grassmannians}\label{sec:grassmannians}
We illustrate our method also for some cases of the Grassmann
manifold $G_k(\mathbb{R}^{n+k})$ of $k$-dimensional subspaces of
$\mathbb{R}^{n+k}$, and some cases of the oriented Grassmann
manifold $\tilde{G}_k(\mathbb{R}^{n+k})$ of oriented
$k$-dimensional subspaces of $\mathbb{R}^{n+k}$.
\medskip
Let $\gamma_k$ be the canonical vector bundle over
$X=G_k(\mathbb{R}^{n+k})$, and $\tau$ the tangent bundle. Then
from the relation $\tau \oplus {\rm Hom}(\gamma_k,\gamma_k)\simeq
(n+k)\gamma_k$ we obtain
\begin{equation}\label{eqn:1}
w(X)\cdot w(\gamma_k\otimes \gamma_k^*)=w(\gamma_k)^{n+k},
\end{equation}
where $w(\gamma_k\otimes \gamma_k^*)=p_k(w_1,...,w_k)$, $p_k$ is
the polynomial over $\mathbb{F}_2$ defined by
\begin{equation}\label{eqn:2}
p_k(\sigma_1,...,\sigma_k)=\prod_{i=1}^k\prod_{j=1}^k(1+x_i+x_j),
\end{equation}
and $\sigma_1,...,\sigma_k$ are the elementary symmetric
polynomials in variables $x_1,...,x_k$, see \cite[Problem
7C]{Mi-Sta}.
In the special cases $k=2$ and $k=3$, by a direct computation
we check that $w(\gamma_2 \otimes \gamma_2^*)=1+w_1^2$ and
$w(\gamma_3\otimes \gamma_3^*)= 1+w_1^4+w_2^2+w_1^2w_2^2+w_3^2$.
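This direct computation can be reproduced mechanically. The sketch below encodes $\mathbb{F}_2$-polynomials as sets of exponent tuples (set symmetric difference models addition mod $2$; this encoding is our own device, not from the text) and verifies both stated reductions of $p_2$ and $p_3$ in terms of the elementary symmetric polynomials.

```python
from itertools import combinations

def pmul(a, b):
    # product of GF(2) polynomials, each given as a set of exponent tuples
    out = set()
    for ma in a:
        for mb in b:
            out ^= {tuple(x + y for x, y in zip(ma, mb))}
    return out

def sigma(k, m):
    # m-th elementary symmetric polynomial in k variables, as a monomial set
    s = set()
    for c in combinations(range(k), m):
        s ^= {tuple(1 if i in c else 0 for i in range(k))}
    return s

def p_k(k):
    # prod_{i,j} (1 + x_i + x_j) over F_2; diagonal factors reduce to 1
    one = (0,) * k
    var = lambda i: tuple(1 if t == i else 0 for t in range(k))
    out = {one}
    for i in range(k):
        for j in range(k):
            out = pmul(out, {one} ^ {var(i)} ^ {var(j)})
    return out

def power(a, e, k):
    out = {(0,) * k}
    for _ in range(e):
        out = pmul(out, a)
    return out

# k = 2:  p_2 = 1 + sigma_1^2
assert p_k(2) == {(0, 0)} ^ power(sigma(2, 1), 2, 2)

# k = 3:  p_3 = 1 + sigma_1^4 + sigma_2^2 + sigma_1^2 sigma_2^2 + sigma_3^2
s1, s2, s3 = sigma(3, 1), sigma(3, 2), sigma(3, 3)
rhs = ({(0, 0, 0)} ^ power(s1, 4, 3) ^ power(s2, 2, 3)
       ^ pmul(power(s1, 2, 3), power(s2, 2, 3)) ^ power(s3, 2, 3))
assert p_k(3) == rhs
```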
Since $w(\gamma_k)=1+w_1+w_2+\cdots +w_k$, it follows that the
total Stiefel-Whitney class of the complementary bundle to the
tangent bundle $\tau$ equals
\begin{equation}\label{eqn:3}
\overline{w}(X)=w(\gamma_k\otimes \gamma_k^*)(1+w_1+w_2+\cdots
+w_k)^{-(n+k)}.
\end{equation}
Completely analogous formulae are true in the case of the oriented
Grassmann manifold $\tilde{X}=\tilde{G}_k(\mathbb{R}^{n+k})$, the
only difference being the vanishing of the first Stiefel-Whitney
class, $w_1(\tilde{\gamma}_k)=0$.
\subsection{$G_k(\mathbb{R}^{n+k})$}
First we treat the case $k=2$ and $n=2^r$, that is the case of the
Grassmann manifold $G_2\left(\mathbb{R}^{2^r+2}\right)$. We shall
need the following lemma.
\begin{lema}
\label{l1} The class $w_1^2w_2^{2^r-2}\in
H^{2^{r+1}-2}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)$ is
non-trivial.
\end{lema}
\medskip
\noindent {\bf Proof:} Let us assume, to the contrary, that
$w_1^2w_2^{2^r-2}=0$. Then $w_1^2w_2^{2^r-1}=0$. Since the map
\begin{equation}\label{eqn:4}
H^{2^{r+1}-1}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)
\stackrel{\cup w_1} {\longrightarrow}
H^{2^{r+1}}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)
\end{equation}
is an isomorphism by Poincar\'e duality, it follows that
$w_1w_2^{2^r-1}=0$.
Let us show that $w_1^{2^{r+1}-2}$ and $w_2^{2^r-1}$ are
non-trivial classes in
$H^{2^{r+1}-2}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)$.
The first observation is a consequence of a result of Stong
\cite{s} about the height of $w_1$, which is in this case
$\mbox{{\rm ht}}(w_1)=2^{r+1}-2$. The second observation follows
from the well-known fact that $w_k^n$ is a non-trivial element in
$H^{kn}(G_k(\mathbb{R}^{k+n}))$. Let us show that these two
classes are different. We have
\begin{equation}\label{eqn:5}
Sq^2\left(w_1^{2^{r+1}-2}\right)={2^{r+1}-2 \choose
2}w_1^{2^{r+1}}=0,
\end{equation}
again by the same result of Stong. Since by the Wu formula
$Sq^1(w_2)=w_1w_2$ (see \cite[Problem 8A]{Mi-Sta}), we have
\begin{equation}\label{eqn:6}
Sq^2\left(w_2^{2^r-1}\right)=(2^r-1)w_2^{2^r}+{2^r-1 \choose 2}
w_1^2w_2^{2^r-1}=w_2^{2^r}\neq 0.
\end{equation}
So, $H^{2^{r+1}-2}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)
\cong \mathbb{Z}/2\oplus \mathbb{Z}/2$ is generated by
$w_1^{2^{r+1}-2}$ and $w_2^{2^r-1}$.
The map $\phi :
H^{2^{r+1}-2}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)
\stackrel{\cup w_1} {\longrightarrow}
H^{2^{r+1}-1}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)$
satisfies the relations $\phi (w_1^{2^{r+1}-2})=w_1^{2^{r+1}-1}=0$
and $\phi (w_2^{2^r-1})=w_1w_2^{2^r-1}=0$, as we proved in the
beginning of the proof. So $\phi =0$. This is a contradiction:
since $2^{r+1}-1$ is odd,
$H^{2^{r+1}-1}\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)
\cong \mathbb{Z}/2$ can only be generated by an element of the
form $w_1^{2s-1}w_2^t$, which is a multiple of $w_1$ and therefore
lies in the image of $\phi$. So our assumption is false, and we have
$w_1^2w_2^{2^r-2}\neq 0$. \hfill $\square$
\bigskip
\begin{theo}
\label{t1} $N\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)\geq
4\cdot 2^{r+1}-3.$
\end{theo}
\medskip
\noindent {\bf Proof:} The total Stiefel-Whitney class of the
normal bundle of $X=G_2\left(\mathbb{R}^{2^r+2}\right)$ equals, by
the equation (\ref{eqn:3}),
\begin{eqnarray*}
\overline{w}(X) & = & (1+w_1^2)(1+w_1+w_2)^{-(2^r+2)}\\
& = & (1+w_1^2)(1+w_1+w_2)^{-2^{r+1}}(1+w_1+w_2)^{2^r-2}\\
& = & (1+w_1^2)(1+w_1^{2^{r+1}})(1+w_1+w_2)^{2^r-2}\\
& = & (1+w_1^2)(1+w_1+w_2)^{2^r-2}.
\end{eqnarray*}
It follows that $\overline{w}_{2^{r+1}-2}=w_1^2w_2^{2^r-2}\neq 0$,
which by Corollary~\ref{cor:lepo-1} and
Proposition~\ref{prop:dual} implies the inequality
$$N\left(G_2\left(\mathbb{R}^{2^r+2}\right)\right)\geq 4\cdot 2^{r+1}-3=4\cdot \dim
G_2\left(\mathbb{R}^{2^r+2}\right)-3.$$ \hfill $\square$
\bigskip
As an illustration of our methods in the case $k>2$, we outline
the computations in the particular case of the Grassmann manifold
$G_3(\mathbb{R}^7)$.
\begin{theo}
\label{t2} $N(G_3(\mathbb{R}^7))\geq 43.$
\end{theo}
\medskip
\noindent {\bf Proof:} The cohomology of $X=G_3(\mathbb {R}^7)$ is
generated by the Stiefel-Whitney classes $w_1,w_2,w_3$ subject to
the relation
$(1+w_1+w_2+w_3)(1+\overline{w}_1+\overline{w}_2+\overline{w}_3
+\overline{w}_4)=1$. It follows that $\overline{w}_1=w_1$,
$\overline{w}_2=w_1^2+w_2$, $\overline{w}_3=w_1^3 +w_3$, and
$\overline{w}_4=w_1^4+w_1^2w_2+w_2^2$. Moreover, $R_1:=
w_1^5+w_1w_2^2+w_1^2w_3=0$, $R_2:= w_1^4w_2+w_1^2w_2
^2+w_2^3+w_1^3w_3+w_3^2=0$, and $R_3:=
w_1^4w_3+w_1^2w_2w_3+w_2^2w_3=0$.
It can be shown that a further consequence of these relations is
the relation
$$0=(w_1^3+w_1w_2+w_3)R_1+w_1^2R_2+w_1R_3=w_1^8.$$
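This consequence of the relations can be confirmed symbolically. In the sketch below (the monomial-set encoding of $\mathbb{F}_2$-polynomials is ours) the stated combination of $R_1$, $R_2$, $R_3$ indeed collapses to $w_1^8$.

```python
def pmul(a, b):
    # product of GF(2) polynomials in (w1, w2, w3), as sets of exponent triples
    out = set()
    for ma in a:
        for mb in b:
            out ^= {tuple(x + y for x, y in zip(ma, mb))}
    return out

# relations in H^*(G_3(R^7)); the triple (a, b, c) stands for w1^a w2^b w3^c
R1 = {(5, 0, 0), (1, 2, 0), (2, 0, 1)}
R2 = {(4, 1, 0), (2, 2, 0), (0, 3, 0), (3, 0, 1), (0, 0, 2)}
R3 = {(4, 0, 1), (2, 1, 1), (0, 2, 1)}

combo = (pmul({(3, 0, 0), (1, 1, 0), (0, 0, 1)}, R1)   # (w1^3 + w1 w2 + w3) R1
         ^ pmul({(2, 0, 0)}, R2)                        # + w1^2 R2
         ^ pmul({(1, 0, 0)}, R3))                       # + w1 R3
assert combo == {(8, 0, 0)}                             # = w1^8
```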
It requires a few more steps to show that the class
$w_1^2w_2^2w_3+w_3^3=w_1^5w_2^2$ is non-trivial. It is actually
one of the additive generators of the cohomology group
$H^9(G_3(\mathbb{R}^7))$.
The total Stiefel-Whitney class of the bundle complementary to the
tangent bundle equals
\begin{eqnarray*}
\overline{w}(X) & = &
(1+w_1+w_2+w_3)^{-7}\cdot (1+w_1^4+w_2^2+w_1^2w_2^2+w_3^2)\\
& = & (1+w_1+w_2+w_3)(1+w_1+w_2+w_3)^{-8}(1+w_1^4+w_2^2+w_1^2w_2^2+w_3^2)\\
& = & (1+w_1+w_2+w_3)(1+w_1^4+w_2^2+w_1^2w_2^2+w_3^2).
\end{eqnarray*}
We already noticed that $\overline{w}_9(X)=w_1^2w_2^2w_3+w_3^3$ is
a non-trivial class, and it is the top-dimensional one.
Altogether, we conclude that $N(G_3(\mathbb{R}^7)) \geq
24+1+18=43.$ \hfill $\square$
\bigskip
Let us add that in a similar way but more easily one obtains by
the same method also: $N(G_2(\mathbb{R}^5))\geq 21$,
$N(G_2(\mathbb{R}^7))\geq 29$, $N(G_3(\mathbb{R}^6))\geq 31$, and
$N(G_3(\mathbb{R}^8))\geq 43$.
\subsection{$\tilde{G}_k(\mathbb{R}^{n+k})$}
For comparison we include an analysis of some cases where the
manifold $M$ is the Grassmannian of all oriented $k$-dimensional
subspaces in $\mathbb{R}^{n+k}$.
\medskip Let us denote by $p :
\tilde{G}_k(\mathbb{R}^{n+k})\rightarrow G_k(\mathbb{R}^{n+k})$
the two-fold covering. Then, $\tilde{w}_i=
w_i(\tilde{G}_k(\mathbb{R}^{n+k}))=p^*(w_i(G_k(\mathbb{R}^{n+k})))$,
and we know that $\tilde{w}_1=0$. Since $\tilde{w}_i=p^*(w_i)\neq
0$ implies $w_i\neq 0$, the estimates obtained in this way for the
oriented Grassmann manifold $\tilde{G}_k(\mathbb{R}^{n+k})$ cannot
be better than those for $G_k(\mathbb{R}^{n+k})$.
However, the cohomology ring of the oriented Grassmann manifold
$H^*(\tilde{G}_k(\mathbb{R}^{n+k}))$ is more complicated, and it
is more difficult to determine which Stiefel-Whitney classes are
non-trivial in this case. Aside from triviality of $\tilde{w}_1$,
we know that $H^*(\tilde{G}_k(\mathbb{R}^{n+k}))$ has some
additional generators and some additional relations.
\medskip
Let $B_n^k = (w_1)$ be the principal ideal in
$H^\ast(G_k(\mathbb{R}^{n+k}))$ generated by $w_1$. In order to
determine which Stiefel-Whitney classes are non-trivial in the
oriented case, we use the calculations in
$H^*(G_k(\mathbb{R}^{n+k}))$ and the Gysin exact sequence in
cohomology (cf.\ \cite[Theorem 12.4]{Mi-Sta}),
$$\cdots \rightarrow H^{i-1}(G_k(\mathbb{R}^{n+k}))\stackrel{\cup w_1}
{\longrightarrow} H^{i}(G_k(\mathbb{R}^{n+k})) \stackrel{p^*}
{\longrightarrow} H^{i}(\tilde{G}_k(\mathbb{R}^{n+k})) \rightarrow
\cdots .
$$
From the exactness of this sequence we deduce that for a given
Stiefel-Whitney class $w_{i_1}^{j_1}\cdots w_{i_r}^{j_r}\in
H^*(G_k(\mathbb{R}^{n+k}))$, $p^*(w_{i_1}^{j_1}\cdots
w_{i_r}^{j_r})=0$ if and only if $w_{i_1}^{j_1}\cdots
w_{i_r}^{j_r}\in B_n^k = (w_1)$.
Also, we easily check that in this case the polynomials $p_2$ and
$p_3$ from the equation (\ref{eqn:2}) reduce to the following,
$\tilde{p}_2(\tilde{w}_2)=1$ and
$\tilde{p}_3(\tilde{w}_2,\tilde{w}_3)
=1+\tilde{w}_2^2+\tilde{w}_3^2$.
\medskip
We now turn to the case $k=2$. Let us determine the height of the
class $\tilde{w}_2$ in $H^*(\tilde{G}_2(\mathbb{R}^{n+2}))$,
$\textrm{ht}(\tilde{w}_2)=\max \{ m\in \mathbb{N} \mid
\tilde{w}_2^m\neq 0\}$.
In $H^{*}(G_2(\mathbb{R}^{n+2}))$ we have $(1+w_1+w_2)(1+
\overline{w}_1+...+\overline{w}_n)=1$, and so
\begin{equation}\label{eqn:jos-malo}
\overline{w}_r=w_1\overline{w}_{r-1}+w_2\overline{w}_{r-2}, \quad
3\leq r\leq n.
\end{equation}
If as before $B_n^2=(w_1)$ is the principal ideal in
$H^{*}(G_2(\mathbb{R}^{n+2}))$ generated by $w_1$, then
inductively, using the relations (\ref{eqn:jos-malo}), we show
that
$$\overline{w}_{2k-1}\in B_n^2, \quad 2k-1\leq n$$ and
$$\overline{w}_{2k}\equiv w_2^k \quad (\textrm{mod }B_n^2), \quad 2k\leq n.$$
Note that in dimensions $\leq n$ there are no polynomial relations
among $w_1$ and $w_2$.
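For small $n$ the two congruences can be machine-checked by iterating the recursion (\ref{eqn:jos-malo}); the monomial-set encoding of $\mathbb{F}_2$-polynomials below is our own.

```python
def pmul(a, b):
    # product of GF(2) polynomials in (w1, w2), as sets of exponent pairs
    out = set()
    for (i, j) in a:
        for (p, q) in b:
            out ^= {(i + p, j + q)}
    return out

n = 12
wbar = {0: {(0, 0)}, 1: {(1, 0)}, 2: {(2, 0), (0, 1)}}
for r in range(3, n + 1):
    # recursion (jos-malo): wbar_r = w1 * wbar_{r-1} + w2 * wbar_{r-2}
    wbar[r] = pmul({(1, 0)}, wbar[r - 1]) ^ pmul({(0, 1)}, wbar[r - 2])

for m in range(1, n + 1):
    no_w1 = {t for t in wbar[m] if t[0] == 0}    # reduction mod the ideal (w1)
    if m % 2 == 0:
        assert no_w1 == {(0, m // 2)}            # wbar_{2k} = w2^k  mod B_n^2
    else:
        assert no_w1 == set()                    # wbar_{2k-1} lies in B_n^2
```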
\begin{lema}
\label{l2} $\mbox{{\rm ht}}(\tilde{w}_{2})=[\frac{n}{2}]$.
\end{lema}
\medskip
\noindent {\bf Proof:} It is well known that $\ker p^{*}=B_{n}^2$.
The dimension of $w_2^{[\frac{n}{2}]}$ is $2\cdot
[\frac{n}{2}]\leq 2\cdot \frac{n}{2}=n$, hence this class cannot
be written as a multiple of $w_1$ (for in dimensions $\leq n$
there are no relations among $w_1$ and $w_2$). Thus
$w_2^{[\frac{n}{2}]} \notin \ker p^*$ and so
$\tilde{w}_2^{[\frac{n}{2}]} = p^*(w_2^{[\frac{n}{2}]}) \neq 0$.
\smallskip
In order to show that $\tilde{w}_2^{[\frac{n}{2}]+1}=0$ we
distinguish two cases.
If $n=2l$, in dimension $2l+2$ we have the relation
$w_2\overline{w}_{2l}=0$. But $\overline{w}_{2l}=w_2^l+w_1\cdot u$
for some class $u$, so $w_2^{l+1}+w_1w_2u=0$ and $w_2^{l+1}\in
B_n^2$. So we obtain
$$\tilde{w}_2^{[\frac{n}{2}]+1}=\tilde{w}_2^{l+1}=p^*(w_2^{l+1})=0.$$
If $n=2l+1$, in dimension $2l+2$ we have the relation
$w_1\overline{w}_{2l+1}+w_2\overline{w}_{2l}=0$. The first
summand belongs to $B_{n}^2$, so as in the first case we deduce
that $w_2^{l+1}\in B_n^2$. Since here also
$l=[\frac{n}{2}]$, the Lemma follows. \hfill $\square$
\medskip
Let us also notice that by the equation (\ref{eqn:3}) and the fact
that $\tilde{p}_2=1$, the total Stiefel-Whitney class of the
complementary normal bundle to the tangent bundle of the space
$X=\tilde{G}_2(\mathbb{R}^{n+2})$ equals
$$\overline{w}(X)=(1+\tilde{w}_2)^{-(n+2)}=((1+\tilde{w}_2)^{-1})^{n+2}.$$
In the light of Lemma \ref{l2}, we have
$$(1+\tilde{w}_2)\left(1+\tilde{w}_2+\tilde{w}_2^2+\cdots +\tilde{w}_2^{[\frac{n}{2}]}\right)=
1+\tilde{w}_2^{[\frac{n}{2}]+1}=1,$$
\noindent and so
$(1+\tilde{w}_2)^{-1}=1+\tilde{w}_2+\tilde{w}_2^2+\cdots
+\tilde{w}_2^{[\frac{n}{2}]}$.
Finally, we obtain
$$\overline{w}(\tilde{G}_2(\mathbb{R}^{n+2}))=
\left(1+\tilde{w}_2+\tilde{w}_2^2+\cdots
+\tilde{w}_2^{[\frac{n}{2}]}\right)^{n+2}.$$
\begin{theo}
\label{t3} $N(\tilde{G}_2(\mathbb{R}^{2^r+2}))\geq 3\cdot
2^{r+1}+1= 3\cdot \dim \tilde{G}_2(\mathbb{R}^{2^r+2})+1.$
\end{theo}
\medskip
\noindent {\bf Proof:} Substituting $n=2^r$ in the above
considerations and using Lemma \ref{l2}, we have
\begin{eqnarray*}
\overline{w}(\tilde{G}_2(\mathbb{R}^{2^r+2})) & = &
(1+\tilde{w}_2+\tilde{w}_2^2+\cdots
+\tilde{w}_2^{2^{r-1}})^{2^r+2}\\
& = & (1+\tilde{w}_2+\tilde{w}_2^2+\cdots
+\tilde{w}_2^{2^{r-1}})^2\\
& = & 1+\tilde{w}_2^2+\tilde{w}_2^4+\cdots +\tilde{w}_2^{2^{r-1}}.
\end{eqnarray*}
By Lemma \ref{l2}, $\tilde{w}_2^{2^{r-1}}\neq 0$, and by
Corollary~\ref{cor:lepo-1},
$N(\tilde{G}_2(\mathbb{R}^{2^r+2}))\geq 1+2\cdot 2^{r+1}+2\cdot
2^r=3\cdot 2^{r+1}+1=3\cdot \dim
\tilde{G}_2(\mathbb{R}^{2^r+2})+1. $ \hfill $\square$
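The collapse of the $(2^r+2)$-nd power in the proof above can be machine-checked by modelling the subring generated by $\tilde{w}_2$ as the truncated polynomial ring $\mathbb{F}_2[x]/(x^{h+1})$ with $h=2^{r-1}$, which Lemma~\ref{l2} justifies; the sketch below is a sanity check only.

```python
def pmul(a, b, trunc):
    # product of GF(2) polynomials in one variable, as sets of exponents,
    # truncated above degree `trunc`
    out = set()
    for i in a:
        for j in b:
            if i + j <= trunc:
                out ^= {i + j}
    return out

for r in range(2, 7):
    h = 2 ** (r - 1)            # height of w2~ over G~_2(R^{2^r + 2})
    inv = set(range(h + 1))     # (1 + w2~)^{-1} = 1 + w2~ + ... + w2~^h
    acc = {0}
    for _ in range(2 ** r + 2):
        acc = pmul(acc, inv, h)
    # the (2^r + 2)-nd power collapses to the square:
    # 1 + w2~^2 + w2~^4 + ... + w2~^{2^{r-1}}
    assert acc == set(range(0, h + 1, 2))
```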
\medskip
Let us add that by the same methods one easily obtains
$N(\tilde{G}_2(\mathbb{R}^{2^r+1}))\geq 3\cdot 2^{r+1}-7=3\cdot
\dim \tilde{G}_2(\mathbb{R}^{2^r+1})-1$,
$N(\tilde{G}_2(\mathbb{R}^{2^r+3}))\geq 3\cdot 2^{r+1}+5=3\cdot
\dim \tilde{G}_2(\mathbb{R}^{2^r+3})-1$, and
$N(\tilde{G}_2(\mathbb{R}^{2^r+4}))\geq 3\cdot 2^{r+1}+9=3\cdot
\dim \tilde{G}_2(\mathbb{R}^{2^r+4})-3$. It is also seen from the
proofs that our method cannot give better lower bounds in any of
these cases.
Let us now prove the result corresponding to Theorem \ref{t2} in
the case of the oriented Grassmannian.
\begin{theo}
\label{t4} $N(\tilde{G}_3(\mathbb{R}^7))\geq 41.$
\end{theo}
\medskip
\noindent {\bf Proof:} In the cohomology of the oriented
Grassmannian $\tilde{X}=\tilde{G}_3(\mathbb{R}^7)$ we have
\begin{eqnarray*}
\overline{w}(\tilde{X})=p^*(\overline{w}(X)) & = &
(1+\tilde{w}_2+\tilde{w}_3)^{-7}(1+\tilde{w}_2^2+\tilde{w}_3^2)\\
& = & (1+\tilde{w}_2+\tilde{w}_3)(1+\tilde{w}_2+\tilde{w}_3)^{-8}
(1+\tilde{w}_2^2+\tilde{w}_3^2)\\
& = & (1+\tilde{w}_2+\tilde{w}_3)(1+\tilde{w}_2^2+\tilde{w}_3^2).
\end{eqnarray*}
In the previous subsection we showed that the class
$\overline{w}_9(X)=w_1^2w_2^2w_3+w_3^3=w_1^5w_2^2$ is non-trivial,
but it is trivial in the cohomology of the oriented Grassmannian.
However, it can be shown (using the computations in $H^*(G_3
(\mathbb{R}^7))$) that the class
$\overline{w}_8(X)=w_1^2w_2^3+w_2w_3^2$ cannot be written as a
product of $w_1$ with some other class. So,
$$
\overline{w}_8(\tilde{X})=\tilde{w}_2\tilde{w}_3^2\neq 0.
$$
As a consequence, by Corollary~\ref{cor:lepo-1} we have
$N(\tilde{G}_3(\mathbb{R}^7))\geq 1+24+16=41$. \hfill $\square$
\bigskip
We end this section by presenting a slightly different method of
calculation applied to the Grassmannian
$Y=\tilde{G}_3(\mathbb{R}^{13})$. In this case $\dim (Y)=30$ and
\begin{eqnarray*}
\overline{w}(Y) & = &
(1+\tilde{w}_2^2+\tilde{w}_3^2)(1+\tilde{w}_2+\tilde{w}_3)^{-13}\\
& = & (1+\tilde{w}_2^2+\tilde{w}_3^2)(1+\tilde{w}_2+\tilde{w}_3)^3
(1+\tilde{w}_2+\tilde{w}_3)^{-16}\\
& = &
(1+\tilde{w}_2^2+\tilde{w}_3^2)(1+\tilde{w}_2+\tilde{w}_3)^3\\
& = & \tilde{w}_3^5+\tilde{w}_2\tilde{w}_3^4+\cdots ,
\end{eqnarray*}
\noindent where dots represent some lower-dimensional classes. In
order to check whether some of the classes
$\overline{w}_{15}(Y)=\tilde{w}_3^5$ and
$\overline{w}_{14}(Y)=\tilde{w}_2\tilde{w}_3^4$ are non-trivial we
use a criterion from \cite{Kor06}. It says that
$w_2^{i_2}w_3^{i_3}\in H^*(G_3(\mathbb{R}^{13}))$ cannot be
expressed as a multiple of the class $w_1$ if it does not belong
to the ideal $J_{n,3}$ of $\mathbb{Z}_2[w_2,w_3]$ generated by the
homogeneous components of
$$
\frac 1{1+w_2+w_3}=(1+w_2+w_3)^{2^{r+3}-1}=
\sum_{i=0}^{2^{r+3}-1}\sum_{j=0}^i{i \choose j}w_2^{i-j}w_3^j
$$
\noindent in dimensions $n-2,n-1,n$, which we respectively denote
by $g_{n-2},g_{n-1},g_n$. The integer $r$ satisfies the
inequalities $2^r<n\leq 2^{r+1}$, which for dimensional reasons
leads to the desired relation $(1+w_2+w_3)^{2^{r+3}}=1$. Now it is
not difficult to see that
$$
g_k=\sum_{k/3\leq i\leq k/2}{i \choose 3i-k}w_2^{3i-k}w_3^{k-2i}.
$$
By an easy computation, $g_{13}=0, g_{12}=w_3^4+w_2^6$ and
$g_{11}=w_2^4w_3$. It turns out that
$w_3^5=w_3g_{12}+w_2^2g_{11}\in J_{n,3}$, but $w_2w_3^4\notin
J_{n,3}$, since it does not belong to the span of
$w_2g_{12}=w_2w_3^4+w_2^7$ and $w_3g_{11}=w_2^4w_3^2$.
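These easy computations are quick to reproduce; the sketch below (our own monomial-set encoding, with the pair $(a,b)$ standing for $w_2^aw_3^b$) checks $g_{13}$, $g_{12}$, $g_{11}$ and the decomposition of $w_3^5$.

```python
from math import comb

def g(k):
    # dimension-k homogeneous component of 1/(1 + w2 + w3) over F_2,
    # encoded as a set of pairs (a, b) standing for w2^a w3^b
    s = set()
    for i in range((k + 2) // 3, k // 2 + 1):   # k/3 <= i <= k/2
        if comb(i, 3 * i - k) % 2:
            s ^= {(3 * i - k, k - 2 * i)}
    return s

assert g(13) == set()
assert g(12) == {(0, 4), (6, 0)}        # w3^4 + w2^6
assert g(11) == {(4, 1)}                # w2^4 w3

# w3^5 = w3 * g_12 + w2^2 * g_11 (mod 2), so w3^5 lies in J_{n,3}
lhs = ({(a, b + 1) for (a, b) in g(12)}
       ^ {(a + 2, b) for (a, b) in g(11)})
assert lhs == {(0, 5)}
```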
Consequently, $\overline{w}_{14}(Y)=\tilde{w}_2\tilde{w}_3^4\neq
0$, and by Corollary~\ref{cor:lepo-1},
$N(\tilde{G}_3(\mathbb{R}^{13}))\geq 1+2\cdot 30+2\cdot 14=89$.
\section{Concluding remarks}
\subsection{Embeddings with multiple regularity}
The problem of estimating the invariant $N(M^n)$ was in
\cite{GS-1} (see also \cite{GS-thesis}) incorporated in a more
general question of studying $(k,l)$-regular embeddings. By
definition a smooth map $f : M^n \rightarrow \mathbb{R}^N$ is
$(k, l)$-regular if for every collection of $k+l$ distinct points
$x_1,\ldots, x_k, y_1,\ldots, y_l$ in $M^n$ and $l$ tangent lines
$L_i \subset T_{y_i}(M^n), \, i = 1,\ldots,l$, the set of points
and lines
$$ f(x_1),\ldots, f(x_k),\, df(L_1),\ldots, df(L_l)$$
is affinely independent.
\medskip
When $l = 0$, the notion of $(k, l)$-regularity reduces to affine
$(k-1)$-regularity in the sense of Handel and Segal
\cite{Han-Seg}. On the other hand, a smooth map is $(0,2)$-regular
if and only if it is totally skew.
\medskip
The existence of a $(k,l)$-regular embedding $f : M^n\rightarrow
\mathbb{R}^N$ implies, essentially by the arguments of
Section~\ref{sec:vector-bundle}, a splitting
$$\varepsilon^N \cong \pi^\ast T(F_{l}(M))\oplus \varepsilon^{k+l-1}\oplus \nu$$
of the trivial $N$-dimensional bundle over the configuration space
$F_{k+l}(M)$ of all ordered collections of $k+l$ distinct points
in $M^n$. By definition $\pi^\ast T(F_l(M))$ is the pull-back of
the tangent bundle $T(F_l(M))$ along the projection map $\pi :
F_{k+l}(M)\rightarrow F_l(M)$ and $\nu$ is a bundle of dimension
$N-(n+1)l-k+1$.
This is a clear indication that the problem of studying
$(k,l)$-regular embeddings is amenable to the methods of
Section~\ref{sec:characteristic-classes}, and we hope to return to
this question in a subsequent publication.
\subsection{Open problems}
In this section we collect some open problems pointing to some of
the most interesting questions about totally skew embeddings of
manifolds.
\begin{prob}{\em Determine the exact values of $N(S^2)$ and
$N(\mathbb{R}P^2)$. More generally what is the exact value of
$N(M^2)$ if $M^2$ is a closed or open surface? According to
\cite{Gho-Tab} the only known result is $N(M)=6$ where $D^2\subset
M \subset \mathbb{R}^2$.}
\end{prob}
\medskip An immersion $\phi : M^n \looparrowright \mathbb{R}^N$ is
called {\em totally skew} if whenever $\phi(x)\neq \phi(y)$ the
affine subspaces $d\phi(T_x(M^n))$ and $d\phi(T_y(M^n))$ are skew.
If $f : M\rightarrow \mathbb{R}^N$ is a totally skew embedding and
if $g: \widetilde{{M}}\rightarrow M$ is a covering map then $\phi
= f\circ g$ is clearly a totally skew immersion. The following
conjecture reflects our impression that in this case a totally
skew immersion can be perturbed to a genuine totally skew
embedding.
\begin{conj}{\em If $M$ is a closed, smooth manifold and $\Gamma$ a
finite group acting freely on $M$ then
$$
N(M)\leq N(M/\Gamma),
$$
in particular $N(S^n)\leq N(\mathbb{R}P^n)$.}
\end{conj}
\medskip
It follows from the splitting (\ref{eqn:splits}) that the
geometric dimension $g$-dim$(\nu)$ of the normal bundle $\nu_1 =
\nu(T(F_2(M)))$ satisfies the inequality
$$
N(M^n)-1 \geq 2n + g\mbox{\rm -dim}(\nu_1).
$$
Similar inequalities hold for manifolds $X, Y$ and $X\times Y$ and
their comparison suggests the possibility of the following general
result.
\begin{conj}{\em For two manifolds $X$ and $Y$, $N(X\times Y)\geq
N(X)+N(Y)-1$.}
\end{conj}
This conjecture, if true, would together with the bound
$N(\mathbb{R}^n) \geq 3n$ for $n$ a power of $2$ (obtained in
\cite{Gho-Tab}), imply the lower bound $N(\mathbb{R}^n)\geq
3n-\alpha (n)+1$.
\medskip
The well-known {\em Immersion Conjecture}, resolved positively by
R.~Cohen \cite{Cohen} in 1985, predicted that any compact smooth
$n$-manifold for $n>1$ can be immersed in
$\mathbb{R}^{2n-\alpha(n)}$, where $\alpha(n)$ is the number of
non-zero digits in the binary representation of $n$. The following
result of Massey, which preceded Cohen's theorem by 15 years,
played an important role by providing strong evidence in favor of
the conjecture.
\begin{theo}{\rm (W.S.~Massey, \cite{Massey})}\label{thm:Massey}
Let $M^n$ be a smooth, compact $n$-dimensional manifold $(n
> 1)$. Then $\overline{w}_j(M) = 0$
for $j > n -\alpha(n)$.
\end{theo}
Theorem~\ref{thm:Massey} together with our
Corollary~\ref{cor:lepo-1} provides interesting initial evidence
for the following bold conjecture.
\begin{conj}\label{conj:najinteresatnije}
For every $n$-dimensional, compact smooth manifold $M^n$ $(n>1)$,
$$N(M^n)\leq 4n-2\alpha (n)+1.$$
\end{conj}
If correct, Conjecture~\ref{conj:najinteresatnije} would, together
with Proposition~\ref{prop:proj-donja-ocena} and
Theorem~\ref{thm:proj-product}, yield some exact computations of
the invariant $N(M^n)$. For example it would imply
$$N(\mathbb{R}P^2)=7$$
and more generally the following result.
\begin{conj}
\label{wild} Suppose that $n_i=2^{r_i}\; (i=1,...,k)$ and assume
that $r_i\neq r_j$ for $i\neq j$. Let $n=n_1+\cdots +n_k\geq 2$.
Then,
$$N(\mathbb{R}P^{n_1}\times \cdots \times \mathbb{R}P^{n_k})=4n-2\alpha (n)+1.$$
\end{conj}
\section{Introduction}
\label{sec:intro}
The study of heavy flavour production at HERA provides an important
test of perturbative Quantum Chromodynamics (pQCD) and also
valuable information for the measurements to be made at the LHC.
I will discuss a small selection of the many measurements
of heavy flavour production that have been made at HERA,
concentrating on the more recent results.
HERA running started in 1992 and the accelerator stopped running
at the end of June 2007. HERA collided \unit[27.5]{GeV} electrons or
positrons\footnote{Hereafter unless explicitly stated both electrons
and positrons are referred to as electrons.} with \unit[920]{GeV}
(\unit[820]{GeV} until the end of 1997) protons, giving a
centre-of-mass energy of \unit[318]{GeV}. A long shutdown in 2000 and
2001 was used to upgrade the machine and the detectors, with the aim
of increasing the luminosity by about a factor of 4. The period up to
2000 is usually called HERA~I and after 2001 HERA~II. By the end of
the running both of the colliding beam experiments, H1 and ZEUS, had
collected about \unit[0.5]{fb$^{-1}$} of data.
Several kinematic variables are used to characterise the $ep$
scattering process:
\begin{itemize}\setlength{\itemsep}{0pt}
\item \ensuremath{Q^{2}}, the negative squared four-momentum exchanged at the
electron or positron vertex;
\item $x$, the Bjorken scaling variable;
\item $y$, the inelasticity;
\item $W$, the invariant mass of the hadronic final state.
\end{itemize}
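All four variables follow directly from the beam and scattered-electron four-momenta via the standard invariant definitions $Q^2=-q^2$, $x=Q^2/(2P\cdot q)$, $y=(P\cdot q)/(P\cdot k)$ and $W^2=(P+q)^2$. The sketch below (the sample scattered-electron kinematics are hypothetical) also checks the massless-beam identity $Q^2=sxy$.

```python
import math

def dot(a, b):
    # Minkowski product, metric (+,-,-,-); four-vectors are (E, px, py, pz)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# HERA beams with masses neglected; proton along +z
Ee, Ep = 27.5, 920.0
k_in = (Ee, 0.0, 0.0, -Ee)          # incoming electron
P = (Ep, 0.0, 0.0, Ep)              # incoming proton
cms = tuple(a + b for a, b in zip(k_in, P))
s = dot(cms, cms)                   # squared centre-of-mass energy
assert abs(math.sqrt(s) - 318.1) < 0.1

def invariants(k_out):
    q = tuple(a - b for a, b in zip(k_in, k_out))   # exchanged boson
    Q2 = -dot(q, q)                                 # virtuality
    y = dot(P, q) / dot(P, k_in)                    # inelasticity
    x = Q2 / (2.0 * dot(P, q))                      # Bjorken x
    W2 = dot(P, P) + 2.0 * dot(P, q) - Q2           # hadronic mass squared
    return Q2, x, y, W2

# hypothetical scattered electron: 20 GeV at 150 degrees to the proton axis
E, th = 20.0, math.radians(150.0)
Q2, x, y, W2 = invariants((E, E * math.sin(th), 0.0, E * math.cos(th)))
assert abs(Q2 - s * x * y) < 1e-6 * Q2              # Q^2 = s x y (massless beams)
```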
The measurements are usually separated into the deep inelastic
scattering (DIS), $\ensuremath{Q^{2}} \gtrsim \unit[1]{GeV^{2}}$, or
photoproduction, $\ensuremath{Q^{2}} \lesssim \unit[1]{GeV^{2}}$, regimes, depending
on whether or not the scattered electron is detected in the main
calorimeter.
The main production mechanism for heavy quarks is the
so-called Boson Gluon Fusion (BGF) process which is illustrated in
Figure~\ref{fig:bgf}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\columnwidth]{bgf_direct1}
\caption{Feynman diagram of the boson gluon fusion process.}
\label{fig:bgf}
\end{figure}
In practice higher order contributions also have to be taken into
account. The photon can fluctuate into a \ensuremath{q\bar{q}}{} pair and one of the
quarks then participates in the hard interaction (resolved
photoproduction); or, in so-called excitation, the hard scattering
can involve intrinsic charm or beauty inside the proton or photon.
Monte Carlo models usually include these processes in
addition to the direct boson-gluon fusion. These higher order
processes are also sometimes referred to collectively as non-direct.
Next-to-leading order (NLO) QCD calculations exist for heavy quark
production at HERA. These are implemented for photoproduction in the
FMNR programme \cite{np:b412:225,asdhep:15:609} and for DIS in the
HVQDIS programme \cite{pr:d57:2806}. The programmes include simple
independent fragmentation of the $b$ or $c$ quark, but on their own
are not able to give predictions for correlations between final-state
particles. The FMNR programme has been interfaced \cite{Geiser:2007py}
to the \textsc{Pythia}{} Monte Carlo and its predictions have been used for
studies of dimuon final states.
Most heavy flavour measurements rely on the central tracking detectors
and are helped significantly by the presence of a microvertex
detector. H1 had such a detector for some of the HERA~I running, while
both ZEUS and H1 had such detectors for the HERA~II running period.
Several different methods have been used by the collaborations to
identify the production of heavy quarks. Each of them has its
advantages and disadvantages, and they often cover different kinematic
ranges. Traditionally the identification of \ensuremath{D^{*}}{} mesons and of
semileptonic decays has been used most often. With the advent of
microvertex detectors lifetime information is being used more and more
often. This works best for events with high energy jets, while tagging
with semileptonic decays to electrons or double tags enables one to go
to lower transverse momenta.
\section{\ensuremath{D^{*}}{} Production}
\label{sec:dstar}
The H1 collaboration has made a series of measurements of \ensuremath{D^{*}}{}
production both in the photoproduction \cite{H1:DIS08:DstarPhP}
(\unit[96]{pb$^{-1}$} of data taken in 2006 and 2007) and the
DIS \cite{h1:DIS08:DstarDIS} (\unit[347]{pb$^{-1}$} from the HERA~II running
period) regimes. For these measurements they make use of their new
Fast Track Trigger, which enables events with \ensuremath{D^{*}}{} candidates to
be selected early in the trigger chain. In both photoproduction and
DIS clear signals are seen as illustrated in
Figure~\ref{fig:dstar-peak}. The kinematic cuts are indicated in the
figures.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{H1prelim-08-073-fig1}\\
\includegraphics[width=0.9\columnwidth]{H1prelim-08-072-fig2}
\caption{$\Delta M = m(K\pi\pi) - m(K\pi)$ distributions for the
photoproduction (top) and DIS (bottom) samples. The points show
the data, the curves in the lower figure, the results of a fit to
the distribution. The dotted line shows the background shape. The
kinematic cuts are indicated.}
\label{fig:dstar-peak}
\end{figure}
Differential cross-sections as a function of a wide range of variables
have been determined. The cross-section as a function of \ensuremath{Q^{2}}{} is
shown in Figure~\ref{fig:dstar-qsq}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\columnwidth]{H1prelim-08-072-fig7}
\caption{\ensuremath{D^{*}}{} production cross-section as a function of
\ensuremath{Q^{2}}. The data (points) are compared to NLO QCD predictions with
two different PDFs. The lower plot shows the NLO QCD prediction
scaled by the ratio of the data to NLO QCD visible
cross-sections. The range of variation of the quark mass and
factorisation and renormalisation scales are indicated in the
figure.}
\label{fig:dstar-qsq}
\end{figure}
The measurements are compared to the predictions of the HVQDIS
programme using two different Parton Distribution Functions (PDF). The
different PDFs show a very similar behaviour as a function of \ensuremath{Q^{2}}.
In contrast the cross-section as a function of the pseudorapidity,
$\eta$, shows a clear dependence (see Figure~\ref{fig:dstar-eta-dis}),
demonstrating the sensitivity of charm production to the gluon
structure function.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\columnwidth]{H1prelim-08-072-fig9}
\caption{\ensuremath{D^{*}}{} production cross-section in DIS as a function of
$\eta(\ensuremath{D^{*}})$ compared to NLO QCD predictions. For further
details see the caption of Figure~\protect\ref{fig:dstar-qsq}.
}
\label{fig:dstar-eta-dis}
\end{figure}
Making the same comparison for photoproduction one sees that the
NLO QCD prediction underestimates the data in the forward direction
(see Figure~\ref{fig:dstar-eta-php}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\columnwidth]{H1prelim-08-073-fig6}
\caption{\ensuremath{D^{*}}{} production cross-section in photoproduction as a
function of $\eta(\ensuremath{D^{*}})$ compared to the NLO QCD prediction. The
band shows the uncertainty in the prediction. The range of variation
of the quark mass and factorisation and renormalisation scales
are indicated in the figure.}
\label{fig:dstar-eta-php}
\end{figure}
Comparing the data to Monte Carlo predictions, the \textsc{Cascade}{}
generator, which is based upon \ensuremath{k_{T}}-factorisation, is able to provide a
better description of the data in DIS than RAPGAP with two different
PDFs (see Figure~\ref{fig:dstar-eta-phpmc}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\columnwidth]{H1prelim-08-072-fig8}
\caption{\ensuremath{D^{*}}{} production cross-section in DIS as a
function of $\eta(\ensuremath{D^{*}})$. The data are compared to the
different Monte Carlo generators as indicated in the figure.}
\label{fig:dstar-eta-phpmc}
\end{figure}
However, in photoproduction \textsc{Pythia}, using the massless scheme for
the generation of the heavy quarks, provides the best description of
the data, while \textsc{Cascade}{} and \textsc{Pythia}{} in the massive mode both
undershoot the data in the forward direction.
\section{Beauty in Photoproduction}
\label{sec:bsl}
The ZEUS collaboration recently reported two new measurements of
$b$-quark production in photoproduction.
The first measurement uses semi\-leptonic decays to
muons \cite{ZEUS:EPS07:Btomu} to identify heavy quark decays, while the
second one uses electrons \cite{Chekanov:2008aa}. Dijet events are selected
by requiring at least two jets with $|\eta| < 2.5$ and a transverse
momentum (muon measurement) or energy (electron measurement) greater
than \unit[7]{GeV} for the highest transverse energy and
\unit[6]{GeV} for the second-highest transverse-energy jet.
The first measurement uses \unit[124]{pb$^{-1}$} of data collected in 2005
and requires a well identified muon with $\ensuremath{p_{T}} >
\unit[2.5]{GeV}$ and $-1.6 < \eta < 2.3$. The
microvertex detector is used to measure the impact parameter
($\delta$) of identified muons. This is combined with the transverse
momentum of the muon with respect to the jet axis (\ensuremath{p_{T}^{\mathrm{rel}}}) to separate
$b$-quark events from background. The cross-sections as a function of
the transverse momentum and pseudorapidity of the muon are shown in
Figures~\ref{fig:btomu-xsect-ptmu} and \ref{fig:btomu-xsect-etamu}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\columnwidth]{ZEUS-btomu-diff_pt}
\caption{$b$-quark production cross-section in dijet photoproduction as a
function of the transverse momentum of the
muon. The measurement is compared to an earlier ZEUS
publication \protect\cite{pr:d70:012008} and the NLO QCD
prediction. The uncertainty on the prediction is indicated by the
band.}
\label{fig:btomu-xsect-ptmu}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\columnwidth]{ZEUS-btomu-diff_eta}
\caption{$b$-quark production cross-section in dijet photoproduction as a
function of the pseudorapidity of the muon. The measurement is
compared to an earlier ZEUS
publication \protect\cite{pr:d70:012008} and the NLO QCD
prediction. The uncertainty on the prediction is indicated by the
band.}
\label{fig:btomu-xsect-etamu}
\end{figure}
Both the charm and beauty contributions are left free in the fit.
The figures also show the results of an analysis using the
HERA~I data \cite{pr:d70:012008}, which only used \ensuremath{p_{T}^{\mathrm{rel}}}{} for
separation. For this earlier analysis the charm content was
fixed. Good agreement between the analyses is seen. The NLO QCD
prediction is also in good agreement with the data.
The second measurement identifies electrons from the semi\-leptonic
decays of heavy quarks using an integrated luminosity of
\unit[120]{pb$^{-1}$} collected during the HERA~I running period. Electron
identification uses the measurement of the specific energy loss,
\ensuremath{dE/dx}, in the Central Tracking Detector, CTD, the fraction of the
energy deposited in the electromagnetic calorimeter as well as the
ratio of the energy deposited in the calorimeter to the momentum
measured in the tracking detectors. Semileptonic decays are separated
from background using \ensuremath{p_{T}^{\mathrm{rel}}}{} and the azimuthal angle between the
electron direction and the missing transverse momentum vector,
$\ensuremath{\Delta\phi}$. As illustrated in Figure~\ref{fig:btoe-vars}, \ensuremath{p_{T}^{\mathrm{rel}}}{} can
separate $b$-quark from $c$-quark decays, while \ensuremath{\Delta\phi}{} separates
semileptonic decays from background.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.60\columnwidth]{ZEUS-btoe-DESY-08-056_03}
\caption{Variables used to separate semileptonic heavy quark decays
from background. The solid line shows the distribution for
electrons from semileptonic $b$-quark decays, the dashed line for
$c$-quark decays and the dotted line the background (Bkg).}
\label{fig:btoe-vars}
\end{figure}
The variables are combined using a likelihood ratio method, which is
optimised for the identification of electrons from semileptonic
$b$-quark decays. The distribution of the likelihood ratio is shown in
Figure~\ref{fig:btoe-like}. The distribution is fit using the expected
distributions for beauty, charm and background to determine the
fractions of events from each source. The fit provides a very good
description of the data.
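The likelihood-ratio combination can be illustrated with a minimal sketch (an illustrative toy, not the ZEUS implementation: the per-variable densities and the equal priors stand in for the Monte Carlo templates used in the analysis):

```python
import math

def likelihood_ratio(x, pdfs_b, pdfs_c, pdfs_bkg, priors=(1/3, 1/3, 1/3)):
    """Combine independent discriminating variables into a b-likelihood.

    x       : tuple of observed variable values
    pdfs_*  : one probability density per variable, for each hypothesis
    Returns (T, -2 ln T); the text fits the region -2 ln T < 10.
    """
    def joint(pdfs):
        p = 1.0
        for value, pdf in zip(x, pdfs):
            p *= pdf(value)
        return p

    pb = priors[0] * joint(pdfs_b)     # semileptonic b decays
    pc = priors[1] * joint(pdfs_c)     # semileptonic c decays
    pbg = priors[2] * joint(pdfs_bkg)  # background
    T = pb / (pb + pc + pbg)
    return T, -2.0 * math.log(T)
```

A candidate whose variables look $b$-like yields $T$ close to 1, pushing $-2\ln T$ toward 0, which is why the fit region is a cut on small $-2\ln T$.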
\begin{figure}[htbp]
\centering
\includegraphics[width=0.70\columnwidth]{ZEUS-btoe-DESY-08-056_4}
\caption{The distribution of the likelihood ratio for electron
candidates, \ensuremath{N_{\mathrm{cand}}}, in data compared to the Monte Carlo
expectation after scaling the predictions to best fit the
data. The arrow indicates the region included in the fit ($-2 \ln
T < 10$). The shaded areas show the fitted contributions from $b$
quarks, $c$ quarks and background as denoted in the figure.}
\label{fig:btoe-like}
\end{figure}
The visible $ep$ cross sections (at hadron level) for $b$-quark
and $c$-quark production and the subsequent semileptonic decay to an
electron with $\ensuremath{p_{T}^{e}} > \unit[0.9]{GeV}$ in the range $|\ensuremath{\eta^{e}}| < 1.5$
in photoproduction events with $\ensuremath{Q^{2}} < \unit[1]{GeV^{2}}$ and $0.2 <
y < 0.8$ and at least two jets with $\ensuremath{E_{T}} > \unit[7 (6)]{GeV}$,
$|\eta| < 2.5$ were determined separately for
$\sqrt{s}=\unit[300]{GeV}$ and $\sqrt{s}=\unit[318]{GeV}$. For the
complete data set ($96 \hbox{$\,\textrm{--}\,$} 00$) the cross-sections evaluated at
$\sqrt{s}=\unit[318]{GeV}$ are
\begin{align*}
\ensuremath{\sigma_{b}^{\textrm{vis}}} & =
\left( 125 \pm 11 (\ensuremath{\textrm{stat.}}) ^{+10}_{-11} (\ensuremath{\textrm{syst.}}) \right)\mathrm{pb},\\
\ensuremath{\sigma_{c}^{\textrm{vis}}} & =
\left( 278 \pm 33 (\ensuremath{\textrm{stat.}}) ^{+48}_{-24} (\ensuremath{\textrm{syst.}}) \right)\mathrm{pb}.
\end{align*}
These cross-sections are in agreement with the corresponding NLO QCD
predictions:
\begin{align*}
\ensuremath{\sigma_{b}^{\textrm{NLO}}} & =
\left( 88 ^{+22}_{-13} \right)\mathrm{pb},\\
\ensuremath{\sigma_{c}^{\textrm{NLO}}} & =
\left( 380 ^{+48}_{-24} \right)\mathrm{pb}.
\end{align*}
The cross-sections as a function of the transverse momentum and
pseudorapidity of the electron are shown in
Figure~\ref{fig:btoe-xsect}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{ZEUS-btoe-DESY-08-056_10}
\caption{Differential cross sections as a function of
a), c) the transverse momentum
and b), d) the pseudorapidity of the electrons. Plots a) and b) are for
$b$-quark production while c) and d) are for $c$-quark
production.
The measurements are shown as points. The inner error bar shows the
statistical uncertainty and the outer error bar shows the statistical and
systematic uncertainties added in quadrature.
The solid line shows the NLO QCD prediction after hadronisation corrections,
with the theoretical uncertainties indicated by the band; the
dashed line shows the scaled prediction from \textsc{Pythia}.
}
\label{fig:btoe-xsect}
\end{figure}
Good agreement with the NLO QCD prediction is seen. The data also agree
well with the \textsc{Pythia}{} prediction scaled by a factor of 1.75 for
$b$-quark production and 1.28 for $c$-quark production.
\section{Beauty Correlations}
\label{sec:bcorr}
The identification of both heavy-quark decays in an event has several
advantages: the background is reduced substantially and the kinematic
range accessible is larger. The disadvantage is a significant
reduction of statistics. If both $b$-quark jets can be identified,
dijet correlations can be directly measured which probe
next-to-leading order effects. The ZEUS collaboration used the HERA~I
data sample to select events with $\ensuremath{E_{T}} > \unit[8]{GeV}$ and two
identified muons \cite{thesis:bloch:2005}. Separating the samples
according to the relative charge of the muons as well as their
invariant mass allows much of the background to be evaluated directly
from the data.
The extracted cross-section is shown in Figure~\ref{fig:bcorr-dphi2}.
It is compared with the NLO QCD prediction as well as with a leading
order Monte Carlo. Given the limited statistics it is not possible to
say whether the NLO calculation provides a better description of the
data.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.90\columnwidth]{ZEUS-bto2mu-muphicross08}
\caption{The dimuon cross-section as a function of the azimuthal
angle between the muons in dijet events. The data (points) are
compared to the NLO QCD prediction calculated using the
FMNRxPYTHIA interface as well as to the RAPGAP prediction scaled
by a factor of 1.84. The inner error bar shows the statistical
uncertainty and the outer error bar shows the statistical and
systematic uncertainties added in quadrature.
}
\label{fig:bcorr-dphi2}
\end{figure}
\section{\ensuremath{F_{2}^{b\bar{b}}}{} and \ensuremath{F_{2}^{c\bar{c}}}}
\label{sec:f2bc}
The H1 collaboration have used the impact parameter significance to
determine \ensuremath{F_{2}^{b\bar{b}}}{} and \ensuremath{F_{2}^{c\bar{c}}}{} for $ \unit[12]{GeV^{2}} < \ensuremath{Q^{2}} <
\unit[650]{GeV^{2}}$ and $0.0002 < x < 0.032$ using \unit[56]{pb$^{-1}$}
of data taken in 2006 \cite{H1:LP07:F2bc}. The distribution of the
impact parameter in the transverse plane is shown in
Figure~\ref{fig:f2bc-ip}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\columnwidth]{H1prelim-07-171-fig1}
\caption{Distribution of the signed impact parameter for the data
(points) and the contributions from $b$, $c$ and light
quarks, evaluated using Monte Carlo events.}
\label{fig:f2bc-ip}
\end{figure}
In order to separate beauty and charm from background the two tracks
with the most significant impact parameters ($S_{1}$ and $S_{2}$
respectively) are selected, rejecting events where the signs of the
impact parameters differ. The significance is defined as $\delta /
\sigma(\delta)$, where $\sigma(\delta)$ is the error on
$\delta$. Events with one good track are used to make the $S_{1}$
distribution; all other events are used for the $S_{2}$ distribution.
The contents of the negative significance bins are subtracted from the
corresponding positive significance bins. This yields the
distribution of $S_{2}$ shown in Figure~\ref{fig:f2bc-sig}.
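The subtraction step can be sketched in a few lines (a minimal illustration; it rests on the premise that light-quark tracks populate the negative and positive significance bins symmetrically):

```python
def subtract_negative_bins(pos_bins, neg_bins):
    """Mirror-subtract the negative-significance bins from the positive ones.

    pos_bins[i]: counts with significance in bin i on the positive side
    neg_bins[i]: counts in the mirrored bin on the negative side
    Resolution smearing of prompt (light-quark) tracks is symmetric
    about zero, so the difference isolates the lifetime excess from
    heavy-quark decays.
    """
    if len(pos_bins) != len(neg_bins):
        raise ValueError("binning must match on both sides")
    return [p - n for p, n in zip(pos_bins, neg_bins)]
```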
\begin{figure}[htbp]
\centering
\includegraphics[width=0.80\columnwidth]{H1prelim-07-171-fig3b}
\caption{Distribution of the second highest significance and the
contributions from $b$, $c$ and light quarks, evaluated using
Monte Carlo events.}
\label{fig:f2bc-sig}
\end{figure}
The charm contribution dominates the distribution. At high
significance the beauty contribution becomes larger. The data are
split into bins in $\ensuremath{Q^{2}}$ and $x$ and the contributions of beauty and
charm are determined separately in each bin using a least squares
simultaneous fit to the $S_{1}$ and $S_{2}$ distributions. The overall
normalisation is determined by also including in the fit the total
number of inclusive events without any cut on the impact parameter
significance. The results of the fit in each $x - \ensuremath{Q^{2}}$ bin are
converted to a ``reduced cross-section'' using
\begin{equation*}
\tilde{\sigma}^{\ensuremath{c\bar{c}}}(x, \ensuremath{Q^{2}}) =
\frac{d^{2}\sigma^{\ensuremath{c\bar{c}}}}{dx\,d\ensuremath{Q^{2}}}
\frac{x Q^{4}}{2 \pi \alpha^{2} (1 + (1-y)^{2})}\,.
\end{equation*}
The reduced cross section for \ensuremath{c\bar{c}}{}
production as a function of $x$ in different \ensuremath{Q^{2}}{} bins is shown in
Figure~\ref{fig:f2bc-xsectr}.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\columnwidth]{H1prelim-07-171-fig4}
\caption{The reduced cross-section for \ensuremath{c\bar{c}}{} production in
different \ensuremath{Q^{2}}{} bins as a function of $x$. The inner error bars
show the statistical uncertainty, the outer error bars represent
the statistical and systematic uncertainties added in
quadrature. The measurements are compared with those from HERA~I
and the averaged data are also shown. The measurements are
compared to the MRST04 prediction.}
\label{fig:f2bc-xsectr}
\end{figure}
Measurements made with the HERA~I data in the same kinematic range are
also shown in the figure. The results are in good agreement with each
other and can be combined, taking into account the correlated
systematic errors. The prediction using the Variable Flavour Number
Scheme (VFNS) in the MRST04 PDF agrees well with the data.
From the reduced cross-section \ensuremath{F_{2}^{c\bar{c}}}{} can be extracted:
\begin{equation*}
\tilde{\sigma}^{\ensuremath{c\bar{c}}}(x, \ensuremath{Q^{2}}) = \ensuremath{F_{2}^{c\bar{c}}} -
\frac{y^{2}}{(1 + (1-y)^{2})} \ensuremath{F_{L}^{c\bar{c}}}\, .
\end{equation*}
The correction due to the longitudinal structure function, \ensuremath{F_{L}^{c\bar{c}}}, is
small.
The same formulae with $c$ replaced by $b$
can be used to extract $\tilde{\sigma}^{\ensuremath{b\bar{b}}}$ and \ensuremath{F_{2}^{b\bar{b}}}.
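Both relations can be sketched numerically (an illustrative toy: the constant `ALPHA` and the kinematic values are stand-ins, the inelasticity $y$ is taken as an input, and setting $F_L^{c\bar c}=0$ corresponds to neglecting the small longitudinal correction):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant (illustrative value)

def reduced_xsec(d2sigma, x, q2, y):
    """sigma~ = d2sigma/dx dQ2 * x Q^4 / (2 pi alpha^2 Y+), with Y+ = 1 + (1-y)^2."""
    yplus = 1.0 + (1.0 - y) ** 2
    return d2sigma * x * q2 ** 2 / (2.0 * math.pi * ALPHA ** 2 * yplus)

def extract_f2(sigma_r, y, fl=0.0):
    """Invert sigma~ = F2 - (y^2 / Y+) FL for F2; fl=0 neglects the FL correction."""
    yplus = 1.0 + (1.0 - y) ** 2
    return sigma_r + (y ** 2 / yplus) * fl
```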
The measurements of \ensuremath{F_{2}^{c\bar{c}}}{} and \ensuremath{F_{2}^{b\bar{b}}}{} are shown in
Figures~\ref{fig:f2bc-f2c} and \ref{fig:f2bc-f2b}, respectively.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{H1prelim-07-171-fig9}
\caption{\ensuremath{F_{2}^{c\bar{c}}}{} as a function of \ensuremath{Q^{2}}{} for different $x$
ranges. Also shown are measurements from the H1 and ZEUS collaborations
using \ensuremath{D^{*}}{} mesons to identify the charm quarks. The inner error bars
show the statistical uncertainty, the outer error bars represent
the statistical and systematic uncertainties added in
quadrature.
}
\label{fig:f2bc-f2c}
\end{figure}
The rapid increase in \ensuremath{F_{2}^{c\bar{c}}}{} as a function of \ensuremath{Q^{2}}{} at low $x$ is
clearly visible. The \ensuremath{F_{2}^{b\bar{b}}}{} measurements are consistent with the same
trend, but are less precise.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{H1prelim-07-171-fig11}
\caption{\ensuremath{F_{2}^{b\bar{b}}}{} as a function of \ensuremath{Q^{2}}{} for different $x$
ranges. Also shown is a preliminary measurement from the ZEUS
collaboration using the data from 2004. The inner error bars
show the statistical uncertainty, the outer error bars represent
the statistical and systematic uncertainties added in
quadrature.
}
\label{fig:f2bc-f2b}
\end{figure}
\section{Conclusions}
\label{sec:conc}
The many HERA measurements of beauty production in photoproduction are
compared in Figure~\ref{fig:ptb}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\columnwidth]{plotptb_apr08}
\caption{Differential cross section for $b$-quark production as a
function of transverse momentum, \ensuremath{p_{T}^{b}}, compared to the results of
previous ZEUS measurements as indicated in the figure. The
measurements are shown as points. The inner error bar shows the
statistical uncertainty and the outer error bar shows the
statistical and systematic uncertainties added in quadrature. The
solid line shows the NLO QCD prediction from the FMNR program with
the theoretical uncertainty shown as the shaded band. }
\label{fig:ptb}
\end{figure}
The measurements presented here agree
well with the previous values, giving a consistent picture of
$b$-quark production in $ep$ collisions in the photoproduction regime,
and are well reproduced by the NLO QCD calculations.
For all the measurements leading order Monte Carlo predictions also
describe well the shapes of the distributions.
There is a tendency for the charm measurements to overshoot the predictions
in the forward direction. Comparing charm quark predictions using
different PDFs shows that the cross-section is, as expected, sensitive
to the gluon PDF.
In the near future a number of final HERA~I measurements will be
published. With the HERA~II data the kinematic range of the
measurements can be extended and a combination of different tagging
methods should increase the precision of the measurements. The
improved HERA~II forward tracking will allow much improved studies of
heavy quark production in the forward direction.
\section*{Acknowledgements}
\label{sec:thanks}
It is a pleasure to thank the organisers for making this an
informative and enjoyable conference. I would like to thank Cristi
Diaconu, Achim Geiser, Markus Jüngst, Monica Turcato, André Schöning
and Matthew Wing for their help in preparing this talk.
{
\bibliographystyle{./elsarticle-num}
\raggedright\small
\section{Introduction}
Statistical analysis of the energy distribution is the basis of black-body
radiation theory \cite{Planck1} and of Einstein's theory of light emission and absorption
\cite{Einstein_Q}. The success of Einstein's photon hypothesis, de Broglie's wave concept
of particles \cite{dB} and Schr\"odinger's equation for the hydrogen atom \cite{Sch1} paved
the way to the corpuscular-wave duality of matter.
This conceptual line was brought to its logical conclusion by Dirac in his
method of second quantization \cite{Dirac1}. This approach fits perfectly
many-body weakly interacting quantum systems, and it was assumed that
``corpuscular-wave duality" is universal. However, the application of this method to
single ``elementary" quantum particles destroys this harmony. Physically it is clear
why: a quantum particle is a self-interacting system, and this interaction is at least of the
order of its rest mass. Since the nature of mass is an open problem, we still do not know
the energy distribution within quantum particles. Here I try to show a possible
approach to this problem, in the framework of a simple model, in a purely deductive manner.
For a long time it was assumed
that such a dynamical model might be found in the framework of string theory, but the
epitaph to string theory \cite{Schroer} clearly shows the deep crisis of particle
physics in its present form. Notice that Einstein \cite{EPR} and
Schr\"odinger \cite{Schr} treated the statistical foundation of quantum theory as
perishable and temporary. Quantum theory solved a lot of fundamental problems, but
(as happens with any fundamental theory) it posed a number of deeper questions.
Even the first steps in the wave picture of quantum particles brought sudden surprises.
First of all, there was a big discrepancy between Schr\"odinger's intuitive picture
and the real properties of ``corpuscular waves".
Initially Schr\"odinger thought it possible to build a stable wave packet
from de Broglie plane waves that could be treated as the wave model of a localized
electron; he soon understood that this is impossible. Only in the special
case of the quantized harmonic oscillator could he build such a stable wave packet, moving
(on average) like a material point under Hooke's elastic force \cite{Sch2}.
Historically, the impossibility of obtaining a wave description of a localizable particle
led to the probabilistic interpretation of the wave function. In fact, this is the fork point
that changed the whole character of fundamental physics: the state vector is treated as the
amplitude of the probability for a particle to be in some particular state. This paradigm is the
source of all the fundamental unsolved problems mentioned above: the measurement problem,
localization, divergences, etc. However, the practical applications of quantum theory are so
convincing and prolific that any attempt to find answers to ``childish questions"
is frequently treated as a pathology. Nevertheless, we should analyze these problems
again and again until the whole situation is absolutely clear, without any reference to
the beauty of quantum mysticism \cite{Feynman1}. Two fundamental problems
should be solved: how to derive (from first principles) non-linear quantum field equations
with localizable solutions, and how to formulate objective quantum measurement of
dynamical variables?
Dirac's equations for an electron in the Coulomb
potential of a nucleus are realistic in first approximation, but higher approximations
suffer from divergences. Expressions for the self-energy, electric charge and
magnetic moment of the self-interacting electron demonstrate divergences too.
The renormalization procedure is in fact an attempt to correct this construction.
The same equations, applied to a ``free electron", have plane
wave solutions that were never observed and that may rather be related to a co-vector
generated by a periodic lattice \cite{G}. The physical interpretation of a plane wave
solution applied to a single free particle requires considerable effort. Furthermore, the
price of this effort is unacceptably high: the probabilistic interpretation,
collapse of the wave function, parallel worlds, the Multiverse, etc., are invoked to explain a
formal, unobservable and artificial solution.
Plane waves and the $\delta$-function are examples of improper states that may be formally
incorporated into Hilbert manifolds on an equal footing with square-integrable states
in the framework of ``functional relativity" \cite{Kryukov1,Kryukov2}.
Thereby the classical idealized notion of pointwise interaction is deemed legitimate.
I think, however, that such an approach is acceptable only as an effective temporary
method, as long as the discussion of the nature of quantum interaction is postponed.
Generally, improper states like the $\delta$-function and plane waves are only artifacts
arising from the assumption that our fundamental linear equations apply
to a ``free" quantum particle. In fact, the principles of Hamilton and
Lagrange cannot fundamentally determine the quantum dynamics,
because the concept of quantum amplitudes
shows that these principles (being formulated as the principle
of least action) are merely a (sometimes mathematically ill-defined) \emph{approximation}.
One should find some more general quantum principle of
self-organization (morphogenesis) and dynamics of quantum matter.
A few years ago I derived quasi-linear field PDEs as a consequence
of the conservation law of the local Hamiltonian on the quantum phase space $CP(N-1)$
\cite{Le6,Le7}. This conservation law was expressed as affine parallel transport of the
Hamiltonian vector field in $CP(N-1)$, in the connection agreeing with the Fubini-Study
metric \cite{KN,Le2,Le6,Le7}. These quasi-linear PDEs have soliton-like solutions
whose physical status is unknown. New investigations in the so-called ``unparticle"
area \cite{Georgi} gave me a hint at a possible interpretation of the resulting equations.
I would like to discuss here the morphogenesis of a quantum particle in the spirit of the
reaction $e^-\to \mathcal{U} \to e^-$. In other words, I propose to study the particle/unparticle
sectors of matter \cite{Georgi} over a wide range of energy in order to solve the localization
problem in the foundations of quantum physics. The concept of \emph{scale invariance}
\cite{Georgi,Yuan} will be replaced by the principle of super-relativity \cite{Le1,Le2}.
I should note that Blochintzev, about 60 years ago, discussed the unparticle
sector in the framework of the universality
of wave-particle ``duality" for interacting quantum fields \cite{Bl1,Bl2}.
Namely, attempt to represent two interacting boson fields as the set of free quantum
oscillators leads to two types of oscillators: quantized and non-quantized. The second
one arises under simple relation $g > \frac{m_1 m_2c^2}{h^2}$ between coupling constant $g$
and masses $m_1$ and $m_2$ of two scalar fields. For such intensity of coupling we obtain a
field with excitation states in two sectors: particle and ``unparticle". Furthermore,
the excitations in ``unparticle" sector has an imaginary mass and they propagate with group
velocity larger than $c$. For self-interacting scalar field
of mass $m$ the intensity of self-interaction $g$ leads to breakdown of the universality
of the wave - particle ``duality" if it is larger than the inverse square of the Compton
wavelength: $g > \frac{m^2c^2}{h^2}=\frac{1}{\lambda^2_C}$.
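Numerically the threshold is easy to evaluate (a sketch with SI values, taking the electron mass as an illustrative choice of $m$ and reading $h$ as the Planck constant, as written in the inequality above):

```python
# Blochintzev's breakdown condition for a self-interacting scalar field:
# wave-particle "duality" loses universality for g > 1/lambda_C^2.
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
M_E = 9.109e-31  # electron mass, kg (illustrative choice of m)

lambda_C = H / (M_E * C)          # Compton wavelength, about 2.43e-12 m
g_critical = 1.0 / lambda_C ** 2  # threshold coupling, m^-2 (about 1.7e23)
```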
\section{The Action State Space}
Blochintzev's examples were oversimplified for the sake of clarity.
What we need is a dynamical description of the morphogenesis of the quantum
particle/unparticle sectors.
One may even think that interacting observable particles are
immersed in a sea of ``unparticle" excitations, somehow related
to ``dark matter". This leads to the necessity of modifying
the second quantization method. Besides Blochintzev's arguments,
there are at least two reasons for such a modification.
{\it First.} In the second quantization method one has formally
given particles whose properties are defined by some commutation
relations between creation-annihilation operators. Note that the
commutation relations are only the simplest consequence of the
curvature of the dynamical group manifold in the vicinity of the
group's unit (in the algebra). Dynamical processes require, however,
finite group transformations and, hence, the global group structure.
My main technical idea is to use vector fields over a group
manifold instead of Dirac's abstract q-numbers. This scheme
therefore seeks the dynamical nature of the creation and
annihilation processes of quantum particles.
{\it Second.} Quantum particles (energy bundles) should
gravitate. Hence, strictly speaking, their behavior cannot be
described as a linear superposition. Therefore the ordinary second
quantization method (creation-annihilation of free particles) is
merely a good approximate scheme, owing to the weakness of gravity.
Thereby the creation and annihilation of particles are time-consuming,
dynamical, non-linear processes. So linear operators of
creation and annihilation (in Dirac's sense) exist only as approximate
quantities.
POSTULATE 1.
\noindent {\it There are elementary quantum states $|\hbar a>,
a=0,1,...$ belonging to the Fock space of an abstract Planck
oscillator whose states correspond to quantum motions with a given
number of Planck action quanta}.
One may imagine such an {\it ``elementary quantum state"
(EQS) $|\hbar a>$ as a quantum motion with an integer number $a$ of
action quanta}. The numbers $a,b,c,...$ take the place of the ``principal
quantum number", serving as discrete indices $0 \leq a,b,c... <
\infty$. Since it is not the action by itself that creates gravity, but only the velocity
of action variation, i.e. energy/matter, it is
possible to create the linear superposition of $|\hbar
a>=(a!)^{-1/2} ({\hat \eta^+})^a|\hbar 0>$ constituting an $SU(\infty)$
multiplet of the Planck action quanta operator $\hat{S}=\hbar
{\hat \eta^+} {\hat \eta}$ with the spectrum $S_a=\hbar a$ in the
separable Hilbert space $\cal{H}$. Therefore, we shall primarily
quantize the action, not the energy. The relative (local) vacuum
of some problem is not necessarily the state with minimal energy; it
is a state with an extremal of some action functional.
The space-time representation of these states and of their coherent
superpositions is postponed to the dynamical stage, as described
below. We shall construct non-linear field equations describing
the energy (frequency) distribution between the EQS's $|\hbar a>$, whose
soliton-like solutions provide the quantization of the dynamical
variables. Presumably, stationary processes are represented by
stable particles and quasi-stationary processes by
unstable resonances or unparticle stuff.
Generally the coherent superposition
\begin{eqnarray}
|F>=\sum_{a=0}^{\infty} f^a| \hbar a>,
\end{eqnarray}
may represent a ground state or ``vacuum" of some quantum
system with the action operator
\begin{eqnarray}
\hat{S}=\hbar A({\hat \eta^+} {\hat \eta}).
\end{eqnarray}
Then one can define the action functional
\begin{eqnarray}
S[|F>]=\frac{<F|\hat{S}|F>}{<F|F>},
\end{eqnarray}
which has the eigenvalue $S[|\hbar a>]=\hbar a$ on the eigenvector
$|\hbar a>$ of the operator $\hbar A({\hat \eta^+} {\hat
\eta})=\hbar {\hat \eta^+} {\hat \eta}$, and which in general deviates
from this value on superposed states $|F>$ and, of course, under a
different choice of $\hat{S}=\hbar A({\hat \eta^+} {\hat \eta}) \neq
\hbar {\hat \eta^+} {\hat \eta}$. In order to study the variation of
the action functional on superposed states one needs more details on the
geometry of their superposition.
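The behaviour of the action functional on superposed states can be checked in a small numerical sketch (units with $\hbar=1$; assumes the simplest choice $\hat{S}=\hbar {\hat \eta^+} {\hat \eta}$, so that $\hat{S}|\hbar a> = \hbar a |\hbar a>$):

```python
def action_functional(f, hbar=1.0):
    """S[|F>] = <F|S|F> / <F|F> for |F> = sum_a f[a] |hbar a>,
    with S|hbar a> = hbar * a * |hbar a>; f need not be normalised."""
    norm = sum(abs(fa) ** 2 for fa in f)
    if norm == 0.0:
        raise ValueError("null state")
    return sum(hbar * a * abs(fa) ** 2 for a, fa in enumerate(f)) / norm
```

On an eigenstate the functional returns $\hbar a$ exactly, while a superposition gives the weighted mean of its action quanta, illustrating the deviation on superposed states.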
In fact only a finite number, say $N$, of elementary quantum states (EQS's)
($|\hbar 0>, |\hbar 1>,...,|\hbar (N-1)>$) may be involved in the
coherent superposition $|F>$. Then $\cal{H}=C^N$ and the ray space
$CP(\infty)$ is reduced to the finite-dimensional $CP(N-1)$.
Hereafter we will use the indices as follows: $0\leq a,b \leq N-1$,
and $1\leq i,k,m,n,s \leq N-1$. This superposition physically
corresponds to the complete amplitude of quantum motion in a setup $S$.
The GCS corresponding to this amplitude is then controlled by the $SU(N)$
dynamical group. One may assume that the following postulate holds:
POSTULATE 2.
\noindent {\it The matter (energy) distribution is determined by the velocities of GCS variations
under LDV's such as the local Hamiltonian}.
Realization of this assumption will be discussed below.
\section{From flexible setup to quantum reference frame in super-relativity}
Let me show how the ordinary quantum formalism hints at how to formulate
a functionally invariant quantum dynamics.
\subsection{Flexible setup in the action state space}
The ordinary quantum formalism of operations with amplitudes was brilliantly
demonstrated by Feynman in his popular lectures \cite{Feynman1}. This formalism shows
that in general two setups $S_1$ and $S_2$ lead to
different amplitudes $|\Psi_1>, |\Psi_2>$ of an outcome event. There is an infinite number
of different setups, differing not only in their space-time position but
in the parameters of the fields, the devices used, etc. Symmetries
relative to space-time transformations of the whole setup have been studied in the ordinary
quantum approach. Such an approach reflects, say, the
\emph{first order of relativity}:
the physics is the same if the \emph{complete setup} is subjected to (kinematical, not
dynamical!) shifts, rotations and boosts as a whole in Minkowski space-time.
The next step, leading to a new type of relativity, may be formulated as the
invariance of the physical properties of quantum particles
lurking behind the two amplitudes $|\Psi_1>, |\Psi_2>$.
A similar idea in the framework of ``functional relativity" was formulated by A. Kryukov
\cite{Kryukov1,Kryukov2} as the requirement that before and after an interaction the wave
function of the electron should have a functionally invariant form. I will treat this
requirement
as ``global functional relativity", since the process of transition from the ``in"-state to
the ``out"-state is left out of consideration. It is shown by a clear example of ``interaction"
with a spectrometer. In this ad hoc measurement all the root problems are hidden in
two assumptions:
1. the classical motion of a pointwise electron in the spectrometer, and
2. the pointwise absorption of the electron by the screen.
These simplifications gave the possibility of treating the inverse Fourier transform
as the spectrometer action and of using the Gaussian kernel
$k_{\tilde{H}}(y,v)=e^{-\frac{1}{2}(y-v)^2}$, playing the role of a metric in the Hilbert space
of improper states. The question, however, is: what happens in a more general kind of
interaction in which the electron participates? Is it possible to build a mathematical
model of interaction in the relativistic case, say, for high-energy reactions like
$e^- e^+ \to \gamma + \gamma \to \tau^- \tau^+$? Definitely, this problem cannot be
formulated in the spirit of ``global functional relativity". In order to describe
smooth quantum evolution, let me ask: what happens if I slightly vary some device
in the setup, say, rotate a filter or, better, change the magnetic field around a dense flint
\cite{Le4} in the complete setup? In other words, I will use
``local functional relativity" or ``super-relativity", by declaring that an infinitesimal
variation of the setup by small variations of its parameters leads to small variations of the
output state. Now it is not the space-time coordinates that
play the essential role, but internal parameters such as the strength of a field used in the
given setup. But how should we formalize
``physics" and its invariance mathematically? Our model should be maximally simple,
since we would like to study very basic properties of quantum physics.
There is a fine technical question about the parametrization of the output state
as a function of the fields in the devices, adjustments, etc. It would be a mistake
to start our description from ``given" particles in space-time and the fields of the setup,
since neither particles nor space-time are well enough defined at this stage.
Any real physical setup is a much more
complicated system, and its classical parametrization is
unacceptable for our aim, since it returns us to Bohr's tenet
that all quantum relations should be expressed ``in classical language".
We would then be drawn into the routine round of the quantum measurement problem.
The key step towards an
invariant description of the quantum state $|S>=\sum_{a=0}^{N-1}S^a|\hbar a>$ is the transition
to the local functional coordinates $\pi^i_{(j)}=\frac{S^i}{S^j}$ of its GCS in $CP(N-1)$,
which carries a representation of the dynamical group $SU(N)$ \cite{Le1,Le2,Le8}. A local
quantum reference frame parameterized by the local functional coordinates $\pi^i$
of the GCS should now be used. Here arises a \emph{second order of relativity, which I
call super-relativity: the physics of a quantum object corresponding to the GCS
of $|S>$ is the same in any setup}.
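As a simple illustration of this transition (a numerical sketch of my own, not part of the formalism; the function name \texttt{local\_coords} is mine), the local coordinates depend only on the ray of $|S>$ and not on its overall complex scale:

```python
import numpy as np

def local_coords(S, j):
    """Local functional coordinates pi^i_(j) = S^i / S^j in the chart
    where S^j != 0; the component i = j itself is skipped."""
    S = np.asarray(S, dtype=complex)
    if S[j] == 0:
        raise ValueError("this chart requires S^j != 0")
    return np.array([S[i] / S[j] for i in range(len(S)) if i != j])

# The coordinates are insensitive to the overall scale of the amplitudes:
# S and c*S mark the same point of CP(N-1).
S = np.array([1.0 + 0.5j, 0.2 - 0.3j, 0.7j])
print(np.allclose(local_coords(S, 0), local_coords((2 - 1j) * S, 0)))  # True
```

For $N=3$ this gives the two coordinates $\pi^1, \pi^2$ used below for the $SU(3)$ generators.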
\subsection{Super-relativity}
The principle of super-relativity arose as a development of
Fock's idea of ``relativity to the measuring device" \cite{Fock}. This idea may be
treated, with some reservations and specifications, as a generalization of the relativity
principle from space-time to
``functional relativity" in the state space \cite{Kryukov1,Kryukov2}.
However, the power of Fock's program is
limited in comparison with the power of Einstein's concepts of special and general relativity.
The main reason is that the notion of the ``measuring device" cannot be
correctly formulated within the framework of standard quantum theory itself: some
additional and, in fact, alien classical ingredients must be involved.
The same argument applies to ``global functional relativity", since only
in particular cases is it possible to find a theoretically analyzable model
of a quantum setup comprising classical improper states (like plane waves and
the $\delta$-function), as was discussed above.
In order to overcome this problem we should clarify the relations between the
state vector and the dynamical variables of a quantum system.
It is very strange to think that the state vector, treated as the basic element
of the \emph{full} description of a quantum system, does not influence the dynamical
variables of that system. Ordinary quantum dynamical variables are represented by
Hermitian operators in a Hilbert space carrying a representation of a symmetry group,
say, the Lorentz group. The whole formal apparatus of quantum theory is based on the
assumption that the operators of position, momentum, etc., depend only upon the parameters of
the Lorentz group.
Let us now assume that we would like to investigate the general behavior of a quantum state
vector $|S>$ in the Hilbert space $\mathcal{H}=C^N$ subject to control
by the unitary group $SU(N)$ \cite{Le1,Le2,Le3} through the group parameters
$\Omega^{\alpha}: \quad 1 \leq \alpha \leq N^2-1$. This state may be represented as
$|S>=\sum_{a=0}^{N-1}S^a|\hbar a>$, where space-time coordinates are not even mentioned;
its dynamics and ``morphogenesis" are somehow
related to space-time coordinates, as I will discuss later.
I argue that this approach:
1. does not require any classical model;
2. due to its generality, represents $SU(N)$ in dynamical
space-time (an inverse representation) through the ``morphogenesis" of the
``field shell";
3. yields a ``field shell" of the GCS that obeys quasi-linear PDEs in dynamical space-time,
which may be solved in some reasonable approximation.
The theory, purely local in the quantum state space $CP(N-1)$, uses the local geometry of
the $SU(N)$ group. The group parameters take the place of non-Abelian gauge fields
surrounding a quantum object (particle/unparticle) whose properties are a priori unknown.
Small variations of these fields lead to small variations
of the GCS, which may be associated with some state-dependent LDV modeling a flexible setup
or quantum reference frame (QRF).
In order to keep the invariant
properties of some quantum particle (probably better called a ``quantum process", since
a particle may or may not arise during this process) involved in this manifold of
setups, one should know the \emph{difference in amplitudes arising due to the setup variation}.
\emph{The most technically important approach is the comparison of dynamical variables at
infinitesimally close quantum states arising in slightly different setups.}
Since stationary states may be represented by rays in Hilbert space, I will work
in the projective Hilbert state space $CP(N-1)$. This approach leads to the concept
of the LDV, which takes the place of the quantum reference frame, and to the principle of
super-relativity \cite{Le1,Le2}, which may be expressed as follows:
POSTULATE 3.
\noindent {\it Unitary transformations
$U(\tau)=\exp(i\Omega^{\alpha}\hat{\Lambda}_{\alpha}\tau)$ of the action amplitudes
may be identified with physical fields. The field functions $\Omega^{\alpha}$ are in the
adjoint representation of $SU(N)$, $\hat{\Lambda}_{\alpha} \in AlgSU(N)$, and $\tau$
is an evolution parameter. The coset transformations $G/H=SU(N)/S[U(1)\times U(N-1)]=CP(N-1)$
are the quantum analog of a classical force; their action is equivalent to a physically
distinguishable deformation of the GCS in $CP(N-1)$, and the isotropy group $H=U(1)\times U(N-1)$
takes the place of the gauge group}.
Since any state $|S>$ has the isotropy group
$H=U(1)\times U(N-1)$, only the coset transformations $G/H=SU(N)/S[U(1)
\times U(N-1)]=CP(N-1)$ act effectively in $C^N$. One should remember,
however, that the Cartan decomposition of the unitary group has
physical sense only with respect to the initially chosen state vector. Therefore the
parametrization of this decomposition is state-dependent:
$[h_{|S>},h_{|S>}] \subseteq h_{|S>}, [b_{|S>},b_{|S>}]
\subseteq h_{|S>}, [b_{|S>},h_{|S>}] \subseteq b_{|S>}$ \cite{Le1,Le2,Le3}.
It means that what is physically interesting is not the abstract unitary group relations
but the realization of the unitary group transformations resulting in the motion of pure
quantum states represented by rays in the projective Hilbert space. Therefore the
ray representation of $SU(N)$ in $C^N$, in particular the embedding
of $H$ and $G/H$ in $G$, is a state-dependent parametrization.
This is the key point of the whole construction,
bringing to life the concept of the LDV expressed by tangent
vector fields to $CP(N-1)$. Technically it means that the local $SU(N)$ unitary
classification of the quantum motions of the GCS, and the distinction between particles
and unparticles, require
the transition from the Pauli matrices $\hat{\sigma}_{\alpha},(\alpha=1,...,3)$, the
Gell-Mann matrices $\hat{\lambda}_{\alpha},(\alpha=1,...,8)$, and in general the $N \times N$ matrices
$\hat{\Lambda}_{\alpha}(N),(\alpha=1,...,N^2-1)$ of $AlgSU(N)$, to the tangent vector
fields to $CP(N-1)$ in local coordinates \cite{Le1}.
Hence, there is a diffeomorphism between the space of the rays
marked by the local coordinates
\begin{equation}\label{coor}
\pi^i_{(j)}=\cases{\frac{S^i}{S^j},&if $ 1 \leq i < j$ \cr
\frac{S^{i+1}}{S^j}&if $j \leq i < N-1$}
\end{equation}
in the map
$U_j:\{|S>,|S^j| \neq 0 \}, j\geq 0$
and the group manifold of the coset transformations
$G/H=SU(N)/S[U(1) \times U(N-1)]=CP(N-1)$.
This diffeomorphism is provided by the coefficient functions
\begin{equation}\label{17}
\Phi_{\sigma}^i = \lim_{\epsilon \to 0} \epsilon^{-1}
\biggl\{\frac{[\exp(i\epsilon \hat{\Lambda}_{\sigma})]_m^i S^m}{[\exp(i
\epsilon \hat{\Lambda}_{\sigma})]_m^j S^m }-\frac{S^i}{S^j} \biggr\}=
\lim_{\epsilon \to 0} \epsilon^{-1} \{ \pi^i(\epsilon
\hat{\Lambda}_{\sigma}) -\pi^i \}
\end{equation}
of the local generators
\begin{equation}\label{18}
\overrightarrow{D}_{\sigma}=\Phi_{\sigma}^i \frac{\partial}{\partial \pi^i} + c.c.
\end{equation}
which comprise a non-holonomic overloaded basis of $CP(N-1)$ \cite{Le1}.
Here $\epsilon$ stands for one of the $SU(N)$ parameters $\Omega^{\sigma}$.
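For $N=2$ the limit in Eq. (\ref{17}) can be checked directly: in the chart $\pi = S^1/S^0$, the Pauli generator $\hat{\sigma}_1$ yields the closed form $\Phi_1 = i(1-\pi^2)$. A minimal finite-difference sketch (the helper names are mine, not part of the formalism):

```python
import numpy as np
from scipy.linalg import expm

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)

def pi_of(S):
    # chart U_0 coordinate of CP(1): pi = S^1 / S^0
    return S[1] / S[0]

def Phi(Lam, S, eps=1e-6):
    # central finite difference approximating the limit in Eq. (17)
    Sp = expm(1j * eps * Lam) @ S
    Sm = expm(-1j * eps * Lam) @ S
    return (pi_of(Sp) - pi_of(Sm)) / (2 * eps)

pi = 0.3 + 0.2j
S = np.array([1.0, pi], dtype=complex)
print(abs(Phi(sigma1, S) - 1j * (1 - pi**2)))  # ~0
```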
Now one may introduce the local Hamiltonian as a tangent vector field
\begin{equation}\label{19}
\overrightarrow{H}=\hbar \sum_{\sigma = 1}^{N^2 -1}\Omega^{\sigma}(\tau)
\overrightarrow{D}_{\sigma}=\hbar
\sum_{\sigma = 1}^{N^2-1}\Omega^{\sigma}(\tau)\Phi_{\sigma}^i \frac{\partial}{\partial
\pi^i} + c.c.
\end{equation}
whose coefficient functions $\Omega^{\sigma}(\tau)$ may be found from the condition of
self-conservation, expressed as the affine parallel transport of the Hamiltonian vector field
$H^i=\Omega^{\sigma}(\tau)\Phi_{\sigma}^i$ agreeing with the Fubini-Study metric.
The problem of finding $\Omega^{\sigma}(\tau)$, treated in the
context of gauge fields as the fields surrounding a quantum lump, was discussed
in \cite{Le5,Le6}.
The ``visualization" of this gauge field requires the attachment
of a co-moving ``Lorentz frame" in DST. I use the analogy with the clock's arm
showing the Abelian phase of the wave function (see Figure 1) in Feynman's simplified
explanation of quantum electrodynamics \cite{Feynman1}. Feynman discussed the
\emph{amplitude of an event in a stationary situation}, since operations with
amplitudes refer to a fixed setup.
\begin{figure}[h]
\includegraphics[width=1in]{Graphic3.eps}\\
\caption{Fixed setup: summation and multiplication of state vectors
of ``being" particles. Feynman's summation of amplitudes corresponds
to the times of light propagation from internal points of a glass plate
to the detector. The equivalent amplitude arises as the sum of ``forward" and ``backward"
reflections from the boundary surfaces.}\label{fig.1}
\end{figure}
A dynamical GCS, moving due to the variation of $\Omega^{\sigma}(\tau)$, requires operating
with the velocities of
state deformation. This variable setup is described by an LDV wrapped into a ``field shell" that
should dynamically conserve the local Hamiltonian vector field \cite{Le2,Le3,Le4}.
I attach a qubit spinor and, further, a ``Lorentz frame" that defines the ``4-velocity" of
some imaging point (belonging to the DST) of the quantum dynamics. This imaging point
is the analog of the clock's arrow mentioned above, but now in the $4D$ DST; see Figure 2.
Quasi-linear partial differential equations, arising as a consequence of the conservation
law of the local Hamiltonian of the evolving quantum system, define the morphogenesis of a non-Abelian
(phase) gauge soliton-like ``field shell" \cite{Le5,Le6,Le8}. So we have a concentrated
``lump" associated with a becoming quantum particle.
\begin{figure}[h]
\includegraphics[width=1in]{Graphic4.eps}\\
\caption{Dynamical setup for the becoming lump: operations with the LDV.
In order to get the effective sum of the non-Abelian phases of the $SU(N)$ transformations
shaping the lump, one should integrate quasi-linear partial differential
equations. The ``4-velocity" $V$ of the imaging point in DST is parameterized by
the boosts and angular velocities of a co-moving ``Lorentz reference frame" attached
to the trajectory in $CP(N-1)$.}\label{fig.2}
\end{figure}
Such a particle may be represented as a dynamical process due to the morphogenesis
of the ``field shell" of the generalized coherent state of an N-level system.
\section{Local dynamical variables}
The action state space ${\cal H}=C^N$
contains ``initial" and ``final" stationary states with finite action quanta.
Quantum dynamics is described by {\it the
velocities of the GCS variation} representing some ``elementary
excitations'' (quantum particles or unparticles). Their dynamics is specified by
the Hamiltonian, which gives the velocities of variation of the action
quantum numbers in different directions of the tangent Hilbert space
$T_{(\pi^1,...,\pi^{N-1})} CP(N-1)$, which takes the place of the
ordinary linear quantum scheme, as will be explained below. The rate
of the action variation gives the energy of the excitations in accordance
with POSTULATE 2.
The local dynamical variables correspond to internal symmetries of
the GCS and their evolution should be expressed now in terms of the
local coordinates $\pi^k$. The Fubini-Study metric
\begin{equation}
G_{ik^*} = [(1+ \sum |\pi^s|^2) \delta_{ik}- \pi^{i^*} \pi^k](1+
\sum |\pi^s|^2)^{-2} \label{FS}
\end{equation}
and the affine connection
\begin{eqnarray}
\Gamma^i_{mn} = \frac{1}{2}G^{ip^*} (\frac{\partial
G_{mp^*}}{\partial \pi^n} + \frac{\partial G_{p^*n}}{\partial
\pi^m}) = - \frac{\delta^i_m \pi^{n^*} + \delta^i_n \pi^{m^*}}{1+
\sum |\pi^s|^2} \label{Gamma}
\end{eqnarray}
in these coordinates will be used. Hence the internal dynamical
variables and their norms should be state-dependent, i.e. local in
the state space \cite{Le1,Le2}. These local dynamical variables
realize a non-linear representation of the unitary global $SU(N)$
group in the Hilbert state space $C^N$. Namely, $N^2-1$ generators
of $G = SU(N)$ may be divided in accordance with the Cartan
decomposition. There are
$(N-1)^2$ generators
\begin{eqnarray}
\Phi_h^i \frac{\partial}{\partial \pi^i}+c.c. \in H,\quad 1 \le h
\le (N-1)^2
\end{eqnarray}
of the isotropy group $H = U(1)\times U(N-1)$ of the ray (Cartan
sub-algebra) and $2(N-1)$ generators
\begin{eqnarray}
\Phi_b^i \frac{\partial}{\partial \pi^i} + c.c. \in B, \quad 1 \le b
\le 2(N-1)
\end{eqnarray}
are the coset $G/H = SU(N)/S[U(1) \times U(N-1)]$ generators
realizing the breakdown of the $G = SU(N)$ symmetry of the GCS.
Furthermore, the $(N-1)^2$ generators of the Cartan sub-algebra may
be divided into two sets of operators: $1 \le c \le N-1$ ($N-1$
is the rank of $Alg SU(N)$) Abelian operators, and $1 \le q \le
(N-1)(N-2)$ non-Abelian operators corresponding to the
non-commutative part of the Cartan sub-algebra of the isotropy
(gauge) group. Here $\Phi^i_{\sigma}, \quad 1 \le \sigma \le N^2-1 $
are the coefficient functions of the generators of the non-linear
$SU(N)$ realization. They give the infinitesimal shift of the
$i$-component of the coherent state driven by the $\sigma$-component
of the unitary multipole field $\Omega^{\sigma}$ rotating the
generators of $Alg SU(N)$; they are defined by (5)
\cite{Le1,Le2}. Then the sum of the $N^2-1$ energies associated with the
intensity of deformations of the GCS is represented by the local
Hamiltonian vector field $\vec{H}$ which is linear in the partial
derivatives $\frac{\partial }{\partial \pi^i} = \frac{1}{2}
(\frac{\partial }{\partial \Re{\pi^i}} - i \frac{\partial }{\partial
\Im{\pi^i}})$ and $\frac{\partial }{\partial \pi^{*i}} = \frac{1}{2}
(\frac{\partial }{\partial \Re{\pi^i}} + i \frac{\partial }{\partial
\Im{\pi^i}})$. In other words it is the tangent vector to $CP(N-1)$
\begin{eqnarray}
\vec{H}=\hbar \Omega^c \Phi_c^i \frac{\partial }{\partial
\pi^i} + \hbar \Omega^q \Phi_q^i \frac{\partial }{\partial \pi^i} +
\hbar \Omega^b \Phi_b^i \frac{\partial }{\partial \pi^i} + c.c.
\label{field}
\end{eqnarray}
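For $N=2$ the metric (\ref{FS}) reduces to $G=(1+|\pi|^2)^{-2}$ and the connection (\ref{Gamma}) to $\Gamma=-2\pi^*/(1+|\pi|^2)$; the defining relation $\Gamma = G^{-1}\,\partial G/\partial \pi$, with $\pi$ and $\pi^*$ treated as independent variables, can be verified numerically. A sketch with my own variable names:

```python
import numpy as np

def G(pi, pibar):
    # CP(1) Fubini-Study metric component, with pi and pi* independent
    return (1 + pi * pibar) ** (-2)

pi, pibar = 0.4 + 0.1j, 0.4 - 0.1j
h = 1e-6
dG = (G(pi + h, pibar) - G(pi - h, pibar)) / (2 * h)   # holomorphic derivative
gamma_num = dG / G(pi, pibar)                          # Gamma = G^{-1} dG/dpi
gamma_closed = -2 * pibar / (1 + pi * pibar)           # closed form, N = 2
print(abs(gamma_num - gamma_closed))  # ~0
```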
Thereby, in the framework of the local state-dependent
approach, one can formulate a quantum
scheme with the help of a more flexible mathematical structure
than the matrix formalism. Matrix elements of
transitions between {\it two arbitrarily far states} are
in fact associated with bi-local dynamical
variables, which
bring a lot of technical problems in quantum field theory.
However, the local dynamical
variables related to infinitesimal deformations of quantum
states are well defined in the projective Hilbert
space, as are the quantum states themselves.
They are local tangent vector fields to the
projective Hilbert space $CP(N-1)$, namely
$SU(N)$ generators (first-order differential
operators) \cite{Le1,Le2,Le3}.
In the local coordinates $\pi^i_{(j)} = \frac{S^i}{S^j}$
one can build the infinitesimal generators of the
Lie algebra $AlgSU(N)$,
using the explicit form of
$\Phi^i_\sigma$ for the $N^2-1$ infinitesimal generators.
For example, for a three-level
system the algebra $AlgSU(3)$ has 8 infinitesimal generators, which
are given by the vector fields:
\begin{eqnarray}
\vec{D}_1&=&i \frac{\hbar}{2}[[1-(\pi^1)^2]\frac{\partial}{\partial \pi^1}
-\pi^1 \pi^2 \frac{\partial}{\partial \pi^2}
-[1-(\pi^{1*})^2]\frac{\partial}{\partial \pi^{1*}}
+\pi^{1*} \pi^{2*} \frac{\partial}{\partial \pi^{2*}}] , \cr
\vec{D}_2&=&- \frac{\hbar}{2}[[1+(\pi^1)^2]\frac{\partial}{\partial \pi^1}
+\pi^1 \pi^2 \frac{\partial}{\partial \pi^2}
+[1+(\pi^{1*})^2]\frac{\partial}{\partial \pi^{1*}}
+\pi^{1*} \pi^{2*} \frac{\partial}{\partial \pi^{2*}}] , \cr
\vec{D}_3&=&-i \hbar[\pi^1 \frac{\partial}{\partial \pi^1}+\frac{1}{2}\pi^2
\frac{\partial}{\partial \pi^2}
+\pi^{1*} \frac{\partial}{\partial \pi^{1*}}+\frac{1}{2}\pi^{2*} \frac{\partial}{\partial
\pi^{2*}}], \cr
\vec{D}_4&=& i\frac{\hbar}{2}[[1-(\pi^2)^2]\frac{\partial}{\partial \pi^2}
-\pi^1 \pi^2 \frac{\partial}{\partial \pi^1}
-[1-(\pi^{2*})^2]\frac{\partial}{\partial\pi^{2*}}
+\pi^{1*} \pi^{2*} \frac{\partial}{\partial \pi^{1*}}] , \cr
\vec{D}_5&=& -\frac{\hbar}{2}[[1+(\pi^2)^2]\frac{\partial}{\partial \pi^2}
+\pi^1 \pi^2 \frac{\partial}{\partial \pi^1}
+[1+(\pi^{2*})^2]\frac{\partial}{\partial \pi^{2*}}
+\pi^{1*} \pi^{2*} \frac{\partial}{\partial \pi^{1*}}], \cr
\vec{D}_6&=&i\frac{\hbar}{2}[\pi^2 \frac{\partial}{\partial \pi^1}
+\pi^1 \frac{\partial}{\partial \pi^2}
-\pi^{2*}\frac{\partial}{\partial \pi^{1*}}
-\pi^{1*} \frac{\partial}{\partial \pi^{2*}}] , \cr
\vec{D}_7&=&\frac{\hbar}{2}[\pi^2 \frac{\partial}{\partial \pi^1}
-\pi^1 \frac{\partial}{\partial \pi^2}
+\pi^{2*}\frac{\partial}{\partial \pi^{1*}}
-\pi^{1*} \frac{\partial}{\partial \pi^{2*}}] , \cr
\vec{D}_8&=&-\frac{3^{1/2}}{2}i\hbar[\pi^2 \frac{\partial}{\partial \pi^2}
-\pi^{2*} \frac{\partial}{\partial \pi^{2*}}].
\end{eqnarray}
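These fields can be cross-checked against the finite-difference form of the coefficient functions $\Phi^i_{\sigma}$: acting with $\exp(i\epsilon\hat{\lambda}_1)$ on $S=(1,\pi^1,\pi^2)$ reproduces the holomorphic coefficients of $\vec{D}_1$ up to the overall factor $\hbar/2$. A numerical sketch, with my own names:

```python
import numpy as np
from scipy.linalg import expm

# Gell-Mann matrix lambda_1
lam1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)

def coords(S):
    # chart U_0 of CP(2): pi^1 = S^1/S^0, pi^2 = S^2/S^0
    return np.array([S[1] / S[0], S[2] / S[0]])

def Phi(Lam, S, eps=1e-6):
    # central finite difference for the coefficient functions
    Sp, Sm = expm(1j * eps * Lam) @ S, expm(-1j * eps * Lam) @ S
    return (coords(Sp) - coords(Sm)) / (2 * eps)

p1, p2 = 0.3 - 0.1j, 0.2 + 0.4j
S = np.array([1.0, p1, p2], dtype=complex)
# holomorphic coefficients of D_1, the hbar/2 factor set aside
expected = np.array([1j * (1 - p1**2), -1j * p1 * p2])
print(np.max(np.abs(Phi(lam1, S) - expected)))  # ~0
```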
Let me assume that $|G>=\sum_{a=0}^{N-1} g^a|\hbar a>$ is a ``ground
state" of some least action problem.
Then the velocity of the ground state evolution relative to the ``world
time" $\tau$ is given by the formula
\begin{eqnarray}
|\Psi> \equiv |T> =\frac{d|G>}{d\tau}=\frac{\partial g^a}{\partial
\pi^i}\frac{d\pi^i}{d\tau}|\hbar a>+\frac{\partial g^a}{\partial
\pi^{*i}}\frac{d\pi^{*i}}{d\tau}|\hbar a> \cr
=|T_i>\frac{d\pi^i}{d\tau}+|T_{*i}>\frac{d\pi^{*i}}{d\tau}=H^i|T_i>+H^{*i}|T_{*i}>,
\end{eqnarray}
which is the tangent vector to the evolution curve $\pi^i=\pi^i(\tau)$,
where
\begin{eqnarray}
|T_i> = \frac{\partial g^a}{\partial \pi^i}|\hbar a>=T^a_i|\hbar a>,
\quad |T_{*i}> = \frac{\partial g^a}{\partial
\pi^{*i}}|\hbar a>=T^a_{*i}|\hbar a>.
\end{eqnarray}
Then the variation velocity of $|\Psi>$ is given by the equation
\begin{eqnarray}\label{43}
|A> &=&\frac{d|\Psi>}{d\tau} \cr &=&
(B_{ik}H^i\frac{d\pi^k}{d\tau}+B_{ik^*}H^i\frac{d\pi^{k*}}{d\tau}
+B_{i^*k}H^{i^*}\frac{d\pi^k}{d\tau} +B_{i^*
k^*}H^{i^*}\frac{d\pi^{k*}}{d\tau})|N>\cr &+&
(\frac{dH^s}{d\tau}+\Gamma_{ik}^s
H^i\frac{d\pi^k}{d\tau})|T_s>+(\frac{dH^{s*}}{d\tau}+\Gamma_{i^*k^*}^{s*}
H^{i*}\frac{d\pi^{k*}}{d\tau})|T_{s*}>,
\end{eqnarray}
where I introduce the matrix $\tilde{B}$ of the second quadratic
form, whose components are defined by the following equations
\begin{eqnarray}\label{45}
B_{ik}|N> =\frac{\partial |T_i>}{\partial \pi^k}-\Gamma_{ik}^s|T_s>,
\quad B_{ik^*}|N> = \frac{\partial |T_i>}{\partial \pi^{k*}} \cr
B_{i^*k}|N> =\frac{\partial |T_{i*}>}{\partial \pi^k}, \quad B_{i^*
k^*}|N> = \frac{\partial |T_{i*}>}{\partial
\pi^{k*}}-\Gamma_{i^*k^*}^{s*}|T_{s*}>
\end{eqnarray}
through the state $|N>$ normal to the ``hypersurface'' of the ground
states. I should emphasize that the ``world time" is the time of evolution from
one GCS to another, physically distinguishable, one.
Thereby the unitary evolution of the action amplitudes
leads in general to a non-unitary evolution of the tangent
vector to $CP(N-1)$ associated with the ``state vector" $|\Psi>$.
Assuming that the ``acceleration'' $|A>$ is obtained by the
action of some linear ``Hamiltonian" $\hat{L}$ describing the
evolution (or a measurement), one has the ``Schr\"odinger equation
of evolution"
\begin{eqnarray}\label{56}
\frac{d|\Psi>}{d\tau}&=&-i\hat{L}|\Psi> \cr
&=&(B_{ik}H^i\frac{d\pi^k}{d\tau}+B_{ik^*}H^i\frac{d\pi^{k*}}{d\tau}
+B_{i^*k}H^{i^*}\frac{d\pi^k}{d\tau} +B_{i^*
k^*}H^{i^*}\frac{d\pi^{k*}}{d\tau})|N> \cr &+&
(\frac{dH^s}{d\tau}+\Gamma_{ik}^s
H^i\frac{d\pi^k}{d\tau})|T_s>+(\frac{dH^{s*}}{d\tau}+\Gamma_{i^*k^*}^{s*}
H^{i*}\frac{d\pi^{k*}}{d\tau})|T_{s*}>.
\end{eqnarray}
This ``Hamiltonian" $\hat{L}$ is non-Hermitian and its expectation
values are as follows:
\begin{eqnarray}\label{57}
<N|\hat{L}|\Psi>&=&
i(B_{ik}H^i\frac{d\pi^k}{d\tau}+B_{ik^*}H^i\frac{d\pi^{k*}}{d\tau}
+B_{i^*k}H^{i^*}\frac{d\pi^k}{d\tau} +B_{i^*
k^*}H^{i^*}\frac{d\pi^{k*}}{d\tau}),\cr <\Psi|\hat{L}|\Psi>&=&
iG_{p^*s}(\frac{dH^s}{d\tau}+\Gamma_{ik}^s
H^i\frac{d\pi^k}{d\tau})H^{p*}+iG_{ps^*}(\frac{dH^{s*}}{d\tau}+\Gamma_{i^*
k^*}^{s*} H^{i^*}\frac{d\pi^{k*}}{d\tau})H^p\cr
&=&i<\Psi|\frac{d}{d\tau}|\Psi>.
\end{eqnarray}
The minimization of $|A>$ under the transition from the point $\tau$
to $\tau+d\tau$ may be achieved by the annihilation of the
tangential component
\begin{equation}
\frac{dH^s}{d\tau}+\Gamma_{ik}^s H^i\frac{d\pi^k}{d\tau}=0, \quad
\frac{dH^{s*}}{d\tau}+\Gamma_{i^* k^*}^{s*}
H^{i^*}\frac{d\pi^{k*}}{d\tau}=0
\end{equation}
i.e., under the condition of the affine parallel transport of the
Hamiltonian vector field. The last equations in (26) show that the
affine parallel transport of $H^i$, agreeing with the Fubini-Study metric,
leads to Berry's ``parallel transport" of $|\Psi>$.
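In $CP(1)$ the parallel transport condition can be integrated directly, and it preserves the Fubini-Study norm $G\,|H|^2$ along the curve, consistent with the transport being metric-compatible. A minimal Euler-stepping sketch (the names and the chosen curve are mine):

```python
import numpy as np

def gamma(pi):
    # CP(1) affine connection: Gamma = -2 pi* / (1 + |pi|^2)
    return -2 * np.conj(pi) / (1 + abs(pi) ** 2)

def fs_norm(pi, H):
    # Fubini-Study norm G |H|^2 with G = (1 + |pi|^2)^(-2)
    return abs(H) ** 2 / (1 + abs(pi) ** 2) ** 2

v = 0.5 + 0.3j                  # dpi/dtau along a straight curve in the chart
pi, H = 0.0 + 0.0j, 1.0 + 0.0j
n0 = fs_norm(pi, H)
n = 20000
dtau = 1.0 / n
for _ in range(n):
    H -= gamma(pi) * H * v * dtau   # dH/dtau + Gamma H dpi/dtau = 0 (Euler)
    pi += v * dtau
print(abs(fs_norm(pi, H) - n0))     # small: the FS norm is conserved
```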
\section{Dynamical space-time as ``objective observer"}
I have assumed that the quantum measurement of an LDV, encoded with the help of
infinitesimal Lorentz transformations of a qubit spinor, leads to the emergence of
the dynamical space-time, which takes the place of an objective ``quantum
measurement machine" formalizing the process of numerically encoding the results
of comparisons of LDV's. These two procedures are described below.
\subsection{LDV's comparison}
The local representation of the unitary group $SU(N)$ is a reliable geometric
tool for the classification of the GCS motions in $CP(N-1)$ during
quantum dynamics driven by interaction or self-interaction. This
evolution of the GCS may be used in an objective measuring process. Two essential
components of any measurement are identification and comparison. Cartan's idea
of reference to the previous, infinitesimally close GCS has been used, so one
can avoid the necessity of a ``second body" serving as a reference frame. Thereby, the LDV
is now a new important element of quantum dynamics \cite{Le4}. We should be able
to compare some LDV at two infinitesimally close GCS's represented by points of $CP(N-1)$.
Since LDV's are vector fields on $CP(N-1)$, the most natural means of comparing
LDV's is the affine parallel transport agreeing with the Fubini-Study metric \cite{Le1}.
This parallel transport expresses the conservation law of the local Hamiltonian
\begin{equation}\label{20}
\frac{\delta H^i}{\delta \tau}=
\frac{\delta (\Omega^{\sigma}(\tau)\Phi_{\sigma}^i) }{\delta \tau}= 0,
\end{equation}
reflecting the objective identification of the evolving quantum process. It gives a natural
means for the comparison of the LDV at different GCS's. The field equations will be discussed
in Section 5.
\subsection{Encoding the results of comparison}
The results of the comparison of LDV's should be formalized by a numerical encoding;
only then may one say that the ``LDV has been measured". The invariant encoding is based on
the geometry of $CP(N-1)$ and on the LDV dynamics, say, the dynamics of the local Hamiltonian field.
Its affine parallel transport expresses the self-conservation of the quantum object
associated with a ``particle" or ``unparticle". In order to build the qubit spinor
$\eta$ of the quantum question
$\hat{Q}$ \cite{Le5}, the two orthogonal vectors $\{|N>,|\Psi>\}$ have been used.
Here $|N>$ is the complex normal and $|\Psi>$ the tangent vector to $CP(N-1)$.
I will use the following qubit spinor
\begin{eqnarray}\label{24}
\eta=\left(
\begin{array}{cc}
\eta^0_{(\pi^1,...,\pi^{N-1})} \\
\eta^1_{(\pi^1,...,\pi^{N-1})} \\
\end{array}
\right) = \left(
\begin{array}{cc}
\frac{<N|\hat{L}|\Psi>}{<N|N>} \\
\frac{<\Psi|\hat{L}|\Psi>}{<\Psi|\Psi>} \\
\end{array}
\right)
\end{eqnarray}
for the measurement of the Hamiltonian $\hat{H}$ at the corresponding GCS.
\subsection{Quantum boosts and angle velocities}
Any two infinitesimally close spinors $\eta$ and $\eta+\delta
\eta$ may be formally connected by the infinitesimal ``Lorentz spin transformation
matrix'' \cite{G}
\begin{eqnarray}\label{31}
\hat{L}=\left( \begin {array}{cc} 1-\frac{i}{2}\delta \tau ( \omega_3+ia_3 )
&-\frac{i}{2}\delta \tau ( \omega_1+ia_1 -i ( \omega_2+ia_2)) \cr
-\frac{i}{2}\delta \tau
( \omega_1+ia_1+i ( \omega_2+ia_2))
&1-\frac{i}{2}\delta \tau( -\omega_3-ia_3)
\end {array} \right).
\end{eqnarray}
I have assumed that there is not only a formal but also a dynamical reason for such a transition,
when the Lorentz reference frame ``follows" the GCS.
Then the ``quantum accelerations" $a_1,a_2,a_3$ and ``quantum angular velocities" $\omega_1,
\omega_2, \omega_3$ may be found in the linear approximation from
the equation $\delta \eta = \hat{L} \eta-\eta$ or, strictly speaking, from
its consequence: the equations for the velocities $\xi$ of the $\eta$-spinor variations
\begin{eqnarray}
\hat{R}\left(
\begin{array}{cc}
\eta^0 \cr
\eta^1
\end{array}
\right) =
\frac{\hat{L}-\hat{1}}{\delta \tau}\left(
\begin{array}{cc}
\eta^0 \cr
\eta^1
\end{array}
\right) = \left(
\begin{array}{cc}
\xi^0 \cr
\xi^1
\end{array}
\right).
\end{eqnarray}
One should take into account that in the linear approximation
the normal component of the qubit spinor does not change, i.e. $\xi^0=0$, but the tangent
component $\eta^1$ is subjected to the affine parallel transport back to the initial GCS:
$\xi^1=\frac{\delta \eta^1}{\delta \tau}=-\Gamma \eta^1 \frac{\delta \pi}{\delta \tau}$.
If one puts $\pi=e^{-i\phi} \tan(\theta/2)$, then $\frac{\delta \pi}{\delta \tau}=
\frac{\partial \pi}{\partial \theta}\frac{\delta \theta}{\delta \tau}+
\frac{\partial \pi}{\partial \phi}\frac{\delta \phi}{\delta \tau}$, where
\begin{eqnarray}
\frac{\delta \theta}{\delta \tau}=-\omega_3\sin(\theta)-((a_2+\omega_1)\cos(\phi)+
(a_1-\omega_2)\sin(\phi))\sin(\theta/2)^2 \cr
-((a_2-\omega_1)\cos(\phi)+
(a_1+\omega_2)\sin(\phi))\cos(\theta/2)^2; \cr
\frac{\delta \phi}{\delta \tau}=a_3+(1/2)(((a_1-\omega_2)\cos(\phi)-
(a_2+\omega_1)\sin(\phi))\tan(\theta/2) \cr
-((a_1+\omega_2)\cos(\phi)-
(a_2-\omega_1)\sin(\phi))\cot(\theta/2)),
\end{eqnarray}
then one has a linear non-homogeneous system of 6 real equations
\begin{eqnarray}
\Re(\hat{R}_{00}\eta^0+\hat{R}_{01}\eta^1)&=&0, \cr
\Im(\hat{R}_{00}\eta^0+\hat{R}_{01}\eta^1)&=&0, \cr
\Re(\hat{R}_{10}\eta^0+\hat{R}_{11}\eta^1+\Gamma \eta^1 \frac{\delta \pi}{\delta \tau})&=&0,
\cr
\Im(\hat{R}_{10}\eta^0+\hat{R}_{11}\eta^1+\Gamma \eta^1 \frac{\delta \pi}{\delta \tau})&=&0,
\cr
\frac{\delta \theta}{\delta \tau}&=&F_1, \cr
\quad \frac{\delta \phi}{\delta \tau}&=&F_2,
\end{eqnarray}
giving $\vec{a},\vec{\omega}$ as functions of the local coordinates of the GCS and of the 2 real
perturbation frequencies $F_1, F_2$ of the coset deformation acting along some geodesic in $CP(N-1)$.
Since $CP(N-1)$ is a totally geodesic manifold \cite{KN}, each geodesic belongs to some
$CP(1)$ parameterized by the single coordinate $\pi$ used above.
The quantum lump takes the place of an extended ``pointer".
This extended pointer may be mapped onto the dynamical space-time if one assumes
that the transition from one GCS to another is accompanied by a dynamical
transition from one Lorentz frame to another; see Figure 2.
Thereby, infinitesimal Lorentz transformations define small
variations of the ``dynamical space-time'' coordinates. It is convenient to take
the Lorentz transformations in the following form
\begin{eqnarray}
ct'&=&ct+(\vec{x} \vec{a}) \delta \tau \cr
\vec{x'}&=&\vec{x}+ct\vec{a} \delta \tau
+(\vec{\omega} \times \vec{x}) \delta \tau
\end{eqnarray}
where I put
$\vec{a}=(a_1/c,a_2/c,a_3/c), \quad
\vec{\omega}=(\omega_1,\omega_2,\omega_3)$ \cite{G} in order to have
for $\tau$ the physical dimension of time. The expression for the
``4-velocity" $ V^{\mu}$ is as follows
\begin{equation}\label{29}
V^{\mu}=\frac{\delta x^{\mu}}{\delta \tau} = (\vec{x} \vec{a},
ct\vec{a} +\vec{\omega} \times \vec{x}) .
\end{equation}
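A quick numerical check (a sketch of my own) that the transformations above preserve the interval to first order in $\delta\tau$: the rotation term drops out of $|\vec{x}'|^2$ because $\vec{x}\cdot(\vec{\omega}\times\vec{x})=0$, and the boost contributions to $c^2t'^2$ and $|\vec{x}'|^2$ cancel:

```python
import numpy as np

rng = np.random.default_rng(0)
c, t = 1.0, 0.7
x = rng.normal(size=3)
a = rng.normal(size=3)        # "quantum accelerations" (units absorbed)
w = rng.normal(size=3)        # "quantum angular velocities"
dtau = 1e-6

ct_new = c * t + np.dot(x, a) * dtau
x_new = x + c * t * a * dtau + np.cross(w, x) * dtau

s2 = (c * t) ** 2 - np.dot(x, x)
s2_new = ct_new ** 2 - np.dot(x_new, x_new)
print(abs(s2_new - s2))       # O(dtau^2): the interval is preserved to first order
```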
The coordinates $x^\mu$ of the imaging point in the dynamical space-time serve here merely for
the parametrization of the energy distribution in the ``field
shell'', which arises under the ``morphogenesis" described by the quasi-linear field
equations \cite{Le2,Le6,Le7}.
\section{Morphogenesis of the lump and unparticle sectors}
The conservation law of local Hamiltonian is expressed by the
affine parallel transport (22) in $CP(N-1)$. This parallel transport
provides the ``self-conservation" of extended object, i.e.
the affine gauge fields couple the soliton-like system \cite{Le2,Le3}.
The field equations for the $SU(N)$ parameters $\Omega^{\alpha}$
dictated by the affine parallel transport of the Hamiltonian vector field
$H^i=\hbar \Omega^{\alpha}\Phi^i_{\alpha}$ (5)
read as quasi-linear PDE together with ``riccator" describing evolution of GCS
\begin{equation}\label{40}
\frac{\delta \Omega^{\alpha}}{\delta \tau} = V^{\mu} \frac{\partial
\Omega^{\alpha}}{\partial x^{\mu} } = -
(\Gamma^m_{mn} \Phi_{\beta}^n+\frac{\partial
\Phi_{\beta}^n}{\partial \pi^n}) \Omega^{\alpha}\Omega^{\beta},
\quad \frac{d\pi^k}{d\tau}= \Phi_{\beta}^k \Omega^{\beta},
\end{equation}
comprising a self-consistent system. It is of course impossible to solve this
self-consistent problem
analytically even in the simplest case of a two-state system, but
it is reasonable to develop a numerical approximation in the
vicinity of the following exact solution.
Let me initially discuss only the quasi-linear PDE obtained as a consequence of the
parallel transport of the local Hamiltonian
\begin{equation}
(\vec{x} \vec{a},
ct\vec{a} +\vec{\omega} \times \vec{x}) \frac{\partial
\Omega^{\alpha}}{\partial x^{\mu} } = -
(\Gamma^m_{mn} \Phi_{\beta}^n+\frac{\partial
\Phi_{\beta}^n}{\partial \pi^n}) \Omega^{\alpha}\Omega^{\beta}
\end{equation}
for a two-level system living in $CP(1)$ \cite{Le2,Le6,Le7}. In this simplest case of
GCS dynamics with the coordinate $\pi=u+iv$ the indices are as follows:
$1\leq \alpha,\beta \leq3,\quad i,k,n=1$, and the field components
$\Omega^1=(\omega+i\gamma) \sin \Theta \cos \Phi$,
$\Omega^2=(\omega+i\gamma) \sin \Theta \sin \Phi$,
$\Omega^3=(\omega+i\gamma) \cos \Theta $ are to be determined.
In the case of spherical symmetry this system, split into
real and imaginary parts, takes the form
\begin{eqnarray}
\matrix{ (r/c)\omega_t+ct\omega_r=-2\omega \gamma F(u,v), \cr
(r/c)\gamma_t+ct\gamma_r=(\omega^2 - \gamma^2) F(u,v), \cr u_t=\kappa
U(u,v,\omega,\gamma), \cr v_t=\kappa V(u,v,\omega,\gamma), }
\label{self_sys}
\end{eqnarray}
where
$\kappa$ is a coefficient and $U(u,v,\omega,\gamma), V(u,v,\omega,\gamma)$ are functions
which I do not write out explicitly here.
Let me put $\omega=\rho
\cos \psi, \quad \gamma=\rho \sin \psi$; then, assuming for
simplicity that $\omega^2+\gamma^2=\rho^2=constant$, the first two
PDE's may be rewritten as follows:
\begin{equation}
\frac{r}{c}\psi_t+ct\psi_r=F(u,v) \rho \cos \psi.
\end{equation}
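This PDE can be handled by the method of characteristics: along $dt/ds=r/c$, $dr/ds=ct$, $d\psi/ds=F\rho\cos\psi$ the combination $r^2-c^2t^2$ is conserved, which is why the arbitrary function $f(r^2-c^2t^2)$ appears in the solutions that follow: it labels the characteristics. A numerical sketch with $F\rho$ held constant as $M$ (my notation, for illustration only):

```python
import numpy as np

# characteristics of (r/c) psi_t + c t psi_r = M cos(psi), M = F*rho constant:
#   dt/ds = r/c,   dr/ds = c t,   dpsi/ds = M cos(psi)
c, M = 1.0, 0.7
t, r, psi = 0.2, 1.0, 0.1
inv0 = r ** 2 - c ** 2 * t ** 2      # interval labelling the characteristic
ds = 1e-4
for _ in range(5000):
    # midpoint (RK2) step of the characteristic system
    tm = t + 0.5 * ds * r / c
    rm = r + 0.5 * ds * c * t
    pm = psi + 0.5 * ds * M * np.cos(psi)
    t += ds * rm / c
    r += ds * c * tm
    psi += ds * M * np.cos(pm)
print(abs(r ** 2 - c ** 2 * t ** 2 - inv0))   # stays ~0 along the characteristic
```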
Two exact solutions of this quasi-linear PDE are as follows:
\begin{eqnarray}
\psi_{1}(t,r)= \arctan \frac{\exp(2c\rho F(u,v)
f(r^2-c^2t^2))(ct+r)^{2\rho F(u,v)}-1}{\exp(2c\rho F(u,v)
f(r^2-c^2t^2))(ct+r)^{2\rho F(u,v)}+1},
\end{eqnarray}
and
\begin{eqnarray}
\psi_{2}(t,r)= \arctan \frac{2 \exp(c\rho F(u,v)
f(r^2-c^2t^2))(ct+r)^{\rho F(u,v)}}{\exp(2c\rho F(u,v)
f(r^2-c^2t^2))(ct+r)^{2\rho F(u,v)}+1},
\end{eqnarray}
where $f(r^2-c^2t^2)$ is an arbitrary function of the interval.
What physical interpretation of these solutions may be given?
The non-monotonic distribution of the force field
$\psi_{1}(t,r)$ describes a ``lump" \cite{Le1,Le2,Le6,Le7} that looks like a bubble
in the dynamical space-time. These field
equations describe the energy distribution in the lump, which does not exist
a priori but is becoming during the self-interaction; see Figure 3.
\begin{figure}[h]
\includegraphics[width=2in]{lump}\\
\caption{The non-monotonic distribution of the force field
in the lump looks like a bubble in the dynamical space-time.}\label{fig.3}
\end{figure}
It should be noted that attempts to treat the field dynamics literally in the
spirit of a ``particle in a potential" are almost hopeless, since we have self-consistent
dynamics. The monotonic solution $\psi_{2}(t,r)$ looks like an unparticle entity
corresponding to an imaginary field mass $i\omega(r,t)$.
In order to clarify the physical interpretation of these equations I will
find the stationary solution of (32). Let me put $\xi=r-ct$. Then
one gets the ordinary differential equation
\begin{equation}
\frac{d\Psi(\xi)}{d \xi} = -F(u,v) \rho \frac{\cos \Psi(\xi)}{\xi}.
\end{equation}
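This ODE is separable; the following short derivation (our own sketch, with the integration constant written as $e^{-CM}$ to match the notation of the solutions quoted next) shows where the two branches come from:

```latex
% Separation of variables for d\Psi/d\xi = -M \cos\Psi/\xi, with M = F(u,v)\rho:
\[
\int \frac{d\Psi}{\cos \Psi} = -M \int \frac{d\xi}{\xi}
\quad \Longrightarrow \quad
\ln \left| \tan\Bigl( \frac{\Psi}{2} + \frac{\pi}{4} \Bigr) \right|
= -M \ln \xi + \mathrm{const}.
\]
% Writing the constant as e^{-CM} and setting T = \xi^{-M} e^{-CM}:
\[
\sin \Psi = \frac{T^2 - 1}{T^2 + 1}, \qquad
\cos \Psi = \frac{2T}{T^2 + 1},
\]
```

which is precisely the structure of the two components appearing in the solutions below.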
Two solutions,
\begin{equation}
\Psi(\xi) = \arctan\left(\frac{\xi^{-2M} e^{-2CM}-1}{\xi^{-2M} e^{-2CM}+1},\;
\frac{2\xi^{-M} e^{-CM}}{\xi^{-2M} e^{-2CM}+1}\right),
\end{equation}
where $M=F(u,v) \rho$, are concentrated in the vicinity of the
light cone and look like solitary waves, see Fig.4.
\begin{figure}[h]
\includegraphics[width=4in]{station_waves2.eps}\\
\caption{Two solutions of (35) in the light-cone vicinity.}\label{fig.4}
\end{figure}
The problem of the physical status of these field equations could be solved
if one could point out a transformation from the equations found here to a well-known
relativistic equation, say, the Dirac equation. I am almost sure that it is impossible
to obtain this transition as a perturbation in a small parameter.
Here I will give only some hints for such a transition.
The standard Dirac equation
\begin{equation}
\hat{\gamma}^{\mu} \frac{\partial \psi}{\partial x^{\mu}}+\frac{imc}{\hbar}
\hat{\gamma}^5\psi=0
\end{equation}
is linear. Dirac assumed that the matrices $\hat{\gamma}^{\mu}$ should be
coordinate independent, since the empty Minkowski space-time is homogeneous and isotropic.
However, the Dirac equation in curved space-time should have coordinate-dependent
matrices $\hat{\gamma}^{\mu}(x) \equiv b^{\mu}_a(x) \hat{\gamma}^a$,
where $b^{\mu}_a(x)$ is the vierbein defined by
$g^{\mu\nu}=b^{\mu}_a(x)b^{\nu}_b(x)\eta^{ab}$ \cite{Parker}.
It is known that the matrices $\hat{\gamma}^{\mu}$ have the sense of instant
velocities with modulus $c$.
The quasi-linear equation (30) has a similar structure, but the ``4-velocity"
$V^{\mu}=\frac{\delta x^{\mu}}{\delta \tau} = (\vec{x} \vec{a},
ct\vec{a} +\vec{\omega} \times \vec{x})$ of the imaging point evidently depends on the
coordinates in DST. These velocities serve as parameters of the field distribution in the lump
and of the unparticle energy distribution. It may be possible to establish some relations
between $V^{\mu}(x)$ and $\hat{\gamma}^{\mu}(x)$, but presently this connection is
unclear.
\section{Conclusion}
1. Action states serve as ``initial" and ``final" conditions in a fixed setup.
Manipulations with quantum amplitudes show that two setups $S_1$ and $S_2$
generally generate
two different amplitudes $|S_1>$ and $|S_2>$ of outcome events.
2. It is reasonable to look for the physical invariance lurking behind $|S_1>$ and $|S_2>$.
``Relativity to the measuring device"
by Fock and ``functional relativity" by A. Kryukov express this invariance
in a global manner. Since there is no strict definition of the
measuring device in terms of standard quantum theory, the special ad hoc example
of manipulation with improper states like the plane wave and the $\delta$-function has been used
in order to show the functional invariance of the state equation before and after measurement.
3. I use a flexible setup for the transition to a local quantum reference frame in super-relativity.
The amplitude of the outcome event $|S>$, its GCS, and the dynamical group $SU(N)$ with $N^2-1$
non-Abelian field parameters $\Omega^{\alpha}$ are the main ingredients of objective quantum
measurement. The desired quantum localization is realized in functional space:
an infinitesimal variation of the field parameters $\Omega^{\alpha}$ defines local dynamical
variables (LDV) expressed in the local coordinates $\pi^1,...,\pi^{N-1}$. The non-linear
realization of the $SU(N)$ generators by tangent
vector fields to $CP(N-1)$ serves for an invariant classification of quantum motions
and of particle/unparticle excitations, instead of a classification of ``elementary"
quantum particles.
4. Objective quantum measurement of an LDV creates dynamical space-time due to:
a. comparison of LDVs in infinitesimally close GCS, provided by the non-Abelian affine gauge
field agreed with the Fubini-Study metric;
b. qubit spinor encoding of the result of this comparison, whose components are
parameterized by quantum boosts and quantum rotations that define the dynamics of the attached
local Lorentz frame in DST.
5. The identification of quantum objects (processes) and their conservation law is expressed
by the parallel transport of the local Hamiltonian. The quasi-linear PDE is a consequence of
this conservation law; it generates the dynamics of the GCS and the morphogenesis of the ``field shell".
The particle and unparticle sectors of these excitations should be classified by comparison with
known quantum field equations.
\section{Discussion}
1. An intrinsically geometric scheme of the quantum
measurement of a local dynamical variable has been proposed.
The self-interaction supporting a localizable ``lump" configuration,
arising due to the breakdown of the global $G=SU(N)$ symmetry, is
used for such measurement, and it is represented by the affine gauge
``field shell" propagating in the dynamical, state-dependent
space-time.
2. The concept of ``super-relativity" \cite{Le1,Le2} is in fact a
different kind of attempt at the ``hybridization" of internal and
space-time symmetries. In distinction from SUSY, where the extended
space-time - ``super-space" - exists a priori, in my approach the
dynamical space-time arises under a ``yes/no" quantum measurement of
$SU(N)$ local dynamical variables.
3. The locality in the quantum phase space $CP(N-1)$ leads to
extended quantum particles - ``field shell" that obey the
quasi-linear PDE \cite{Le2}.
4. The main technical problem
is to find the non-Abelian gauge field arising from the conservation law of the local Hamiltonian
vector field. The latter may be expressed as the parallel transport of the local
Hamiltonian in the projective Hilbert space $CP(N-1)$. A co-moving local ``Lorentz frame"
attached to the GCS is used for qubit encoding of the result of the comparison of the
parallel-transported local Hamiltonian at infinitesimally close points. This
leads to quasi-linear relativistic field equations with soliton-like solutions
for the ``field shell" in the emergent DST. The terms ``comparison" and ``encoding" resemble
human procedures, but here they have objective content realized in invariant
quantum dynamics.
There is a possibility of generalizing the scalar (in DST) ``field shell"
$\Omega^{\sigma}\Phi_{\sigma}^i$ to vector $\Omega_{\mu}^{\sigma}\Phi_{\sigma}^i$ and
tensor fields $\Omega_{\mu \nu}^{\sigma}\Phi_{\sigma}^i$, assuming invariant contraction
in the iso-index $\sigma$. More complicated field equations will then arise, with an essential
dependence on the global space-time structure, since one needs to know the metric
connection $\Gamma^{\lambda}_{\mu \nu}$ for the covariant derivatives.
5. One needs to find the connection between the quasi-linear field equations and known field
equations (like the Dirac equation). Probably it is possible to use some
analogy with Skyrmion field quantization \cite{Aitchison}, although there is of
course an essential difference between lump and monopole solutions.
6. DST forms the granular structure of global space-time and paves the way to building
quantum gravity ``from inside".
\vskip 0.2cm
\section{Introduction}
In spite of being an attractive material with excellent electronic properties \cite{ahcn09}, practical applications of graphene as in conventional semiconductor devices are still questionable due to its gapless nature. In particular, the ON/OFF current ratio is low while the saturation of current is poor in pristine graphene transistors \cite{schw10}. Many efforts of bandgap engineering in graphene \cite{yhan07,khar11,lher13,jbai10,zhan09} have been made to solve these issues. The pioneering technique \cite{yhan07} is to cut 2D graphene sheets into 1D narrow nanoribbons. In 2D graphene sheets, some options such as Bernal-stacking of graphene on a hexagonal boron nitride substrate \cite{khar11}, nitrogen-doped graphene \cite{lher13}, graphene nanomesh lattices \cite{jbai10,berr13} and Bernal-stacked bilayer graphene \cite{zhan09} have been explored. However, the possibility to open a sizable bandgap in graphene as large as those of standard semiconductors is still very unlikely. In particular, it requires a very good control of lattice geometry and edge disorder in narrow graphene nanoribbons (GNRs) \cite{quer08} and in graphene nanomesh lattices \cite{hung13}, while the bandgap opening in bilayer graphene by a perpendicular electric field may not be large enough for realistic applications \cite{fior09}. Other methods should be further verified by experiments.
\begin{figure}[!t]
\centering
\includegraphics[width=2.8in]{Fig01.pdf}
\caption{Schematic of unstrained/strained graphene junctions investigated in this work.}
\label{fig_sim1}
\end{figure}
On the other hand, graphene was experimentally demonstrated to be able to sustain a much larger strain than conventional semiconductors, making it a promising candidate for flexible electronics (see in a recent review \cite{shar13}). Indeed, strain engineering has been suggested to be an alternative approach to modulating efficiently the electronic properties of graphene nanomaterials. In particular, the bandgap has periodic oscillations in the armchair GNRs \cite{ylu210} while the spin polarization at the ribbon edges (and also the bandgap) can be modulated by the strain in the zigzag cases. In 2D graphene sheets, a finite gap can open under large strains, otherwise, it may remain close to zero but the Dirac points are displaced \cite{cocc10,per209,pere09,huan10}. Many interesting electrical, optical, and magnetic properties induced by strain in graphene have been also explored, e.g. see in \cite{bunc07,pere09,kuma12,per010,pell10,guin10,tlow10,zhai11}.
Besides, local strain is a good option to improve the electrical performance of graphene devices \cite{pere09,ylu010,fuji10,juan11,baha13}. For instance, it has been shown to enhance the ON current in a GNR tunneling FET \cite{ylu010} and to strengthen the transport gap in GNR strained junctions \cite{baha13}. In a recent work \cite{hung14}, we have investigated the effects of uniaxial strain on the transport in 2D unstrained/strained graphene junctions and found that, due to the strain-induced shift of Dirac points, a significant conduction gap of a few hundred meV can open with a small strain of a few percent. This type of strained junction was then demonstrated to be an excellent candidate to improve the electronic operation of graphene transistors. This motivates us to further investigate the properties of this conduction gap so as to optimize the performance of graphene devices. On the one hand, the effects of strain should be, in principle, dependent on its applied direction. On the other hand, because the appearance of the conduction gap is a consequence of the shift of Dirac points along the $k_y$-axis, it is predicted that this gap should also depend on the transport direction. Note that here, the Oy (Ox) axis is assumed to be perpendicular (parallel) to the transport direction. The effects of both strain and transport directions will be clarified systematically in the current work.
\section{Model and calculations}
In this work, the $\pi$-orbital tight binding model constructed in \cite{per209} is used to investigate the electronic transport through the graphene strained junctions schematized in Fig. 1. The Hamiltonian is ${H_{tb}} = \sum\nolimits_{nm} {{t_{nm}}c_n^\dag {c_m}}$ where $t_{nm}$ is the hopping energy between nearest neighbor \emph{n}th and \emph{m}th atoms. The application of a uniaxial strain of angle $\theta$ causes the following changes in the $C-C$ bond vectors:
\begin{eqnarray}
{{\vec r}_{nm}}\left( \sigma \right) &=& \left\{ {1 + {M_s}\left( \sigma, \theta \right)} \right\}{{\vec r}_{nm}}\left( 0 \right) \\
{M_s}\left( \sigma, \theta \right) &=& \sigma \left[ {\begin{array}{*{20}{c}}
{{{\cos }^2}\theta - \gamma {{\sin }^2}\theta }&{\left( {1 + \gamma } \right)\sin \theta \cos \theta }\\
{\left( {1 + \gamma } \right)\sin \theta \cos \theta }&{{{\sin }^2}\theta - \gamma {{\cos }^2}\theta }
\end{array}} \right] \nonumber
\end{eqnarray}
where $\sigma$ represents the strain and $\gamma \simeq 0.165$ is the Poisson ratio \cite{blak70}. The hopping parameters are defined as $t_{nm} \left( \sigma \right) = t_0 \exp\left[-3.37\left(r_{nm} \left( \sigma \right) /r_0 - 1\right)\right]$, where the hopping energy $t_0 = -2.7$ $eV$ and the bond length $r_{nm} \left( 0 \right) \equiv r_0 = 0.142$ $nm$ in the unstrained case. Therefore, there are three different hopping parameters $t_{1,2,3}$ corresponding to the three bond vectors ${\vec r}_{1,2,3}$, respectively, in the strained graphene part of the structure (see Fig. 1). Here, we assume a 1D profile of applied strain, i.e., the strain tensor is a function of position along the transport direction Ox while it is constant along the Oy-axis. The transport direction, $\phi$, and strain direction, $\theta$, are determined as schematized in Fig. 1. Based on this tight binding model, two methods described below can be used to investigate the conduction gap of the considered strained junctions.
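As a concrete illustration, the strained hopping parameters can be evaluated numerically. The sketch below is our own; it assumes a bond-vector convention with $\vec r_3$ oriented along the Ox axis (consistent with the role of $t_3$ in Eq. (8)) and $\theta$ measured from Ox, which may differ in detail from the convention of Fig. 1.

```python
import numpy as np

T0 = -2.7      # unstrained hopping energy t0 (eV)
R0 = 0.142     # unstrained C-C bond length r0 (nm)
POISSON = 0.165

# Assumed bond-vector convention: r3 along the Ox axis.
BONDS = R0 * np.array([[-0.5,  np.sqrt(3) / 2],
                       [-0.5, -np.sqrt(3) / 2],
                       [ 1.0,  0.0]])

def strained_hoppings(sigma, theta):
    """Hopping energies (t1, t2, t3) under a uniaxial strain sigma
    applied along the direction theta (radians), following Eq. (1)."""
    c, s = np.cos(theta), np.sin(theta)
    ms = sigma * np.array([[c * c - POISSON * s * s, (1 + POISSON) * s * c],
                           [(1 + POISSON) * s * c, s * s - POISSON * c * c]])
    r = BONDS @ (np.eye(2) + ms).T                  # deformed bond vectors
    lengths = np.linalg.norm(r, axis=1)
    return T0 * np.exp(-3.37 * (lengths / R0 - 1.0))
```

For example, a tensile strain of 4% along $\theta = 0$ stretches the bond $\vec r_3$ and weakens $|t_3|$ while leaving $t_1 = t_2$ by symmetry.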
\textbf{Green's function calculations.} First, we split the graphene sheet into the smallest possible unit cells periodically repeated along the Ox/Oy directions with the indices $p/q$, respectively (similarly, see the details in \cite{hung12}). The tight-binding Hamiltonian can therefore be expressed in the following form:
\begin{eqnarray}
{H_{tb}} = \sum\limits_{p,q} {\left( {{H_{p,q}} + \sum\limits_{{p_1},{q_1}} {{H_{p,q \to p_1,q_1}}} } \right)}
\end{eqnarray}
where $H_{p,q}$ is the Hamiltonian of cell $\{p,q\}$, and $H_{p,q \to p_1,q_1}$ denotes the coupling of cell $\{p,q\}$ to its nearest neighbor cell $\{p_1,q_1\}$. We then Fourier transform the operators in Eq. (2) as follows:
\begin{eqnarray}
{c_{p,q}} = \frac{1}{{\sqrt {{M_{cell}}} }}\sum\limits_{{\kappa_y}} {{e^{i{q\kappa_y}}}} {{\hat c}_{p,{\kappa_y}}},
\end{eqnarray}
where $M_{cell}$ is the number of unit cells and $\kappa_y \equiv k_y L_y$ with the size $L_y$ of unit cells along the Oy direction. The Hamiltonian (2) is finally rewritten as a sum of $\kappa_y$-dependent 1D-components:
\begin{eqnarray}
{H_{tb}} &=& \sum\limits_{{\kappa_y}} {\hat H\left( {{\kappa_y}} \right)} \\
\hat H\left( {{\kappa_y}} \right) &=& \sum\limits_p {{{\hat H}_{p \to p - 1}}\left( {{\kappa_y}} \right) + {{\hat H}_p}\left( {{\kappa_y}} \right) + {{\hat H}_{p \to p + 1}}}\left( {{\kappa_y}} \right) \nonumber
\end{eqnarray}
With this Hamiltonian form, the Green's function formalism can be easily applied to compute transport quantities in the graphene strained junction with different transport directions. In particular, the conductance at zero temperature is determined as:
\begin{eqnarray}
\mathcal{G} \left( \epsilon \right) = \frac{{e^2 W}}{{\pi h L_y}}\int\limits_{BZ} {d{\kappa_y} \mathcal{T}\left( {\epsilon, {\kappa_y}} \right)}
\end{eqnarray}
where $\mathcal{T}\left( {\epsilon,{\kappa_y}} \right)$ is the transmission probability computed from the Green's functions. The integration over $\kappa_y$ is performed over the whole first Brillouin zone. As in ref. \cite{hung13}, the gap of conductance (conduction gap) is then extracted from the computed conductance data.
\textbf{Bandstructure analyses.} To determine the conduction gap of strained junctions, another simple approach based on the analysis of graphene bandstructures can be used efficiently. It is described as follows. Since the conductance is computed from Eq. (5), the appearance of the conduction gap is essentially governed by the gaps of the transmission probability, which are determined by the energy gaps in the unstrained and strained graphene sections. These energy gaps can be obtained directly from the graphene bandstructures. Therefore, our calculation has two steps, similar to that in \cite{hung14}. From the graphene bandstructures obtained using the tight-binding Hamiltonian above, we first look for the energy gaps $E_{unstrain}^{gap}\left( {{\kappa_y}} \right)$ and $E_{strain}^{gap}\left( {{\kappa_y}} \right)$ at a given $\kappa_y$ in the two graphene sections. The maximum of these energy gaps determines the gap $E_{junc}^{gap}\left( {{\kappa_y}} \right)$ of the transmission probability through the junction. Finally, the conduction gap $E_{cond.gap}$ is obtained by looking for the minimum value of $E_{junc}^{gap}\left( {{\kappa_y}} \right)$ when varying $\kappa_y$ in the whole Brillouin zone.
In particular, the energy bands of strained graphene are given by
\begin{eqnarray}
E\left( {\vec k} \right) = \pm \left| {{t_1}{e^{i\vec k{{\vec a}_1}}} + {t_2}{e^{i\vec k{{\vec a}_2}}} + {t_3}} \right|
\end{eqnarray}
where the plus/minus sign corresponds to the conduction/valence bands, respectively. For a given direction $\phi$ of transport, in principle, the vectors $\vec L_{x,y}$ defining the sizes of unit cell along the Ox and Oy directions, respectively, can be always expressed as ${\vec L_x} = {n_1}{\vec a_1} + {n_2}{\vec a_2}$ and ${\vec L_y} = {m_1}{\vec a_1} + {m_2}{\vec a_2}$ with $\cos \phi = \frac{{{{\vec L}_x}\vec L_x^0}}{{{L_x}L_x^0}}$ and $\sin \phi = \frac{{{{\vec L}_x}\vec L_y^0}}{{{L_x}L_y^0}}$ while $\vec L_{x,y}^0 = {\vec a_1} \pm {\vec a_2}$. Note that $n_{1,2}$ and $m_{1,2}$ are integers while $\frac{{{m_1}}}{{{m_2}}} = - \frac{{{n_1} + 2{n_2}}}{{{n_2} + 2{n_1}}}$, i.e., ${\vec L_{x}} {\vec L_{y}} = 0$. In other words, we have the following expressions
\begin{eqnarray}
{{{\vec a}_1} = \frac{{ - {m_2}{{\vec L}_x} + {n_2}{{\vec L}_y}}}{{{n_2}{m_1} - {n_1}{m_2}}};\,\,\,{{\vec a}_2} = \frac{{{m_1}{{\vec L}_x} - {n_1}{{\vec L}_y}}}{{{n_2}{m_1} - {n_1}{m_2}}}}
\end{eqnarray}
On this basis, the energy bands can be rewritten in terms of $\kappa_{x, y} = \vec k \vec L_{x,y} \left( { \equiv {k_{x,y}}{L_{x,y}}} \right)$ by substituting Eqs. (7) into Eq. (6). This new form of energy bands is finally used to compute the conduction gap of strained junctions.
As a simple example, in the case of $\phi = 0$ (armchair direction), we calculate the conduction gap as follows. First, Eq. (6) is rewritten in the form
\begin{eqnarray}
E_{\phi = 0}\left( {\vec \kappa} \right) = \pm \left| {{t_1}{e^{i\kappa_y/2}} + {t_2}{e^{ - i\kappa_y/2}} + {t_3}{e^{ - i\kappa_x/2}}} \right|
\end{eqnarray}
with the vectors $\vec L_{x,y} \equiv \vec L_{x,y}^0$. Using this new form, the energy gap of strained graphene for a given $\kappa_y$ is determined as
\begin{equation}
{E_{strain}^{gap}}\left( {{\kappa_y}} \right) = 2 \left| {\sqrt {{{\left( {{t_1} - {t_2}} \right)}^2} + 4{t_1}{t_2}{{\cos }^2}\frac{{{\kappa_y}}}{2}} + {t_3}} \right|
\end{equation}
while ${E_{unstrain}^{gap}}\left( {{\kappa_y}} \right)$ is given by the same formula with $t_1$ = $t_2$ = $t_3$ $\equiv$ $t_0$. The gap of transmission probability through the junction is then determined as ${E_{junc}^{gap}}\left( {{\kappa_y}} \right) = \max \left[ {E_{unstrain}^{gap}\left( {{\kappa_y}} \right),E_{strain}^{gap}\left( {{\kappa_y}} \right)} \right]$ and, finally, the conduction gap is given by ${E_{cond.gap}} = \min \left[ {E_{junc}^{gap}\left( {{\kappa_y}} \right)} \right]$ for $\kappa_y$ in the whole Brillouin zone.
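This two-step procedure can be checked numerically for the armchair case. The following sketch (our own illustration, using the same assumed bond-vector convention with $\vec r_3$ along Ox) evaluates Eq. (9) for the two sections on a $\kappa_y$ grid and takes $\min_{\kappa_y}\max[\cdot]$; for $\sigma = 4\%$ and $\theta = \phi = 0$ it yields a conduction gap of roughly 0.3 eV, in line with Fig. 3.

```python
import numpy as np

T0, R0, POISSON = -2.7, 0.142, 0.165
BONDS = R0 * np.array([[-0.5, np.sqrt(3) / 2],
                       [-0.5, -np.sqrt(3) / 2],
                       [1.0, 0.0]])                 # r3 assumed along Ox

def hoppings(sigma, theta=0.0):
    c, s = np.cos(theta), np.sin(theta)
    ms = sigma * np.array([[c * c - POISSON * s * s, (1 + POISSON) * s * c],
                           [(1 + POISSON) * s * c, s * s - POISSON * c * c]])
    lengths = np.linalg.norm(BONDS @ (np.eye(2) + ms).T, axis=1)
    return T0 * np.exp(-3.37 * (lengths / R0 - 1.0))

def energy_gap(kappa_y, t):
    """Eq. (9): energy gap at fixed kappa_y, transport along armchair."""
    t1, t2, t3 = t
    root = np.sqrt((t1 - t2) ** 2 + 4 * t1 * t2 * np.cos(kappa_y / 2) ** 2)
    return 2.0 * np.abs(root + t3)

kappa_y = np.linspace(-np.pi, np.pi, 200001)
# E_junc(kappa_y) = max of the two section gaps; conduction gap = its minimum.
gap_junction = np.maximum(energy_gap(kappa_y, hoppings(0.0)),
                          energy_gap(kappa_y, hoppings(0.04)))
e_cond_gap = gap_junction.min()    # ~0.30 eV for sigma = 4%, theta = 0
```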
We would like to note that the Green's function calculations and the bandstructure analyses give the same conduction-gap results in junctions where the transition region between unstrained and strained graphene sections is long enough, i.e., larger than about 5 to 6 nm. In the case of a short length, as discussed in \cite{baha13,hung14}, this transition zone can have significant effects on the transmission between propagating states beyond the energy gaps and hence can slightly enlarge the gap of conductance, compared to the results obtained from the bandstructure calculations.
\section{Results and discussion}
\begin{figure}[!t]
\centering
\includegraphics[width=3.0in]{Fig02.pdf}
\caption{Dependence of the graphene bandgap (in units of eV) on the applied strain and its direction: tensile (a) and compressive (b). The radius from the central point indicates the strain strength, ranging from 0 (center) to 30 $\%$ (edge of the maps), while the graphene lattice is superimposed to visualize the strain direction. The orange circle corresponds to a strain of $\sigma = 23 \%$.}
\label{fig_sim2}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{Fig03.pdf}
\caption{Conductance ($G_0 = e^2W/hL_y$) as a function of energy in graphene strained junctions for $\sigma = 4 \%$ with different strain directions. The transport along the armchair direction ($\phi = 0$) is considered. The data obtained in a uniformly strained graphene is displayed for the comparison.}
\label{fig_sim6}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=5.8in]{Fig04.pdf}
\caption{Local density of states (left panels) and corresponding transmission coefficient (right panels) for three different wave-vectors $k_y$ obtained in an unstrained/strained graphene junction of $\sigma = 4 \%$, and $\theta \equiv \phi = 0$. On the top is a schematic of graphene bandedges illustrating the strain-induced shift of Dirac points along the $k_y$-direction.}
\label{fig_sim4}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=5.6in]{Fig05.pdf}
\caption{Maps of conduction gap in unstrained/strained graphene junctions: tensile (a,c) and compressive cases (b,d). The transport is along the armchair $\phi = 0$ (a,b) and zigzag $\phi = 30^\circ$ directions (c,d). The strain strength ranges from 0 (center) to 6 $\%$ (edge of maps) in all cases.}
\label{fig_sim4}
\end{figure*}
First, we re-examine the formation of the bandgap of graphene under a uniaxial strain. From Eq. (9), a strain-induced finite bandgap appears only if ${E_{strain}^{gap}}\left( {{\kappa_y}} \right) > 0$ for all $k_y$ in the first Brillouin zone, i.e., ${k _y} \in \left[ { - \frac{\pi}{L_y}, \frac{\pi}{L_y}} \right]$; otherwise, the bandgap remains zero. Hence, the condition for the bandgap to be finite is either
\begin{equation*}
\left| {{t_1} - {t_2}} \right| > \left| {{t_3}} \right|\,\,\,\,\,{\rm{OR}}\,\,\,\,\,\left| {{t_3}} \right| > \left| {{t_1} + {t_2}} \right|
\end{equation*}
and the corresponding values of bandgap are
\begin{equation*}
{E_{gap}} = 2\left( {\left| {{t_1} - {t_2}} \right| - \left| {{t_3}} \right|} \right)\,\,\,\,\,{\rm{OR}}\,\,\,\,\,2\left( {\left| {{t_3}} \right| - \left| {{t_1} + {t_2}} \right|} \right)
\end{equation*}
This result was actually reported in \cite{per209,hase06}. We recall, as displayed in Fig. 2(a), that a finite bandgap opens only for strains larger than $\sim 23 \%$ and that the zigzag (not the armchair) direction is the preferred one for bandgap opening under a tensile strain \cite{per209}. We extend our investigation to the case of compressive strain and find (see Fig. 2(b)) that (i) the same gap threshold of $\sigma \simeq 23 \%$ is observed but (ii) the preferred direction to open the gap under a compressive strain is the armchair one, not the zigzag as in the case of tensile strain. This implies that the properties of the graphene bandstructure at low energy should be qualitatively the same when applying strains of $\left\{ {\sigma ,\theta } \right\}$ and of $\left\{ {-\sigma ,\theta + 90^\circ} \right\}$. This feature can be understood by considering, for example, strains of $\left\{ {\sigma , \theta = 0} \right\}$ and of $\left\{ {-\sigma , \theta = 90^\circ} \right\}$. Indeed, these strains result in the same qualitative changes in the bond lengths, i.e., an increased bond length $r_3$ and reduced bond lengths $r_{1,2}$. However, for the same strain strength, because of the exponential dependence of the hopping energies on the bond lengths, the compressive strain generally induces a larger bandgap than the tensile one, as can be seen when comparing the data displayed in Figs. 2(a) and 2(b). To conclude, we would like to emphasize that a large strain is necessary to open a bandgap in graphene. This could be an issue for practical applications, compared to the use of the graphene strained junctions explored in \cite{hung14}.
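The $\sim$23$\%$ threshold can be recovered directly from the finite-gap condition above. The sketch below (our own illustration, with the same assumed bond-vector convention, $\vec r_3$ along Ox) scans tensile strains applied along the zigzag direction $\theta = 90^\circ$:

```python
import numpy as np

T0, R0, POISSON = -2.7, 0.142, 0.165
BONDS = R0 * np.array([[-0.5, np.sqrt(3) / 2],
                       [-0.5, -np.sqrt(3) / 2],
                       [1.0, 0.0]])                 # r3 assumed along Ox

def hoppings(sigma, theta):
    c, s = np.cos(theta), np.sin(theta)
    ms = sigma * np.array([[c * c - POISSON * s * s, (1 + POISSON) * s * c],
                           [(1 + POISSON) * s * c, s * s - POISSON * c * c]])
    lengths = np.linalg.norm(BONDS @ (np.eye(2) + ms).T, axis=1)
    return T0 * np.exp(-3.37 * (lengths / R0 - 1.0))

def has_bandgap(sigma, theta):
    """Finite-gap condition: |t1 - t2| > |t3| or |t3| > |t1 + t2|."""
    t1, t2, t3 = np.abs(hoppings(sigma, theta))
    return abs(t1 - t2) > t3 or t3 > t1 + t2

# Smallest tensile strain along theta = 90 deg that opens a bandgap.
sigma_c = next(s for s in np.arange(0.0, 0.35, 0.001)
               if has_bandgap(s, np.pi / 2))       # comes out near 0.23
```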
We now explore the properties of the conduction gap in the graphene strained junctions. In Fig. 3, we display the conductance as a function of energy computed from Eq. (5) using the Green's function technique. As discussed above, a small strain of a few percent (e.g., 4 $\%$ here) cannot change the gapless character of graphene, i.e., there is no gap of conductance in the case of uniformly strained graphene. However, similar to that reported in \cite{hung14}, a significant conduction gap of a few hundred meV can open in the unstrained/strained graphene junctions. The appearance of this conduction gap, as mentioned previously, is due to the strain-induced shift of Dirac points and is explained as follows. The strain causes a lattice deformation and can hence result in a deformation of the graphene bandstructure. Therefore, the bandedges as a function of the wave-vector $k_y$ in unstrained and strained graphene can be illustrated schematically as in the top panel of Fig. 4. As one can see, the shift of Dirac points leads to the situation where there is no value of $\kappa_y$ for which the energy gaps $E_{unstrain}^{gap}\left( {{\kappa_y}} \right)$ and $E_{strain}^{gap}\left( {{\kappa_y}} \right)$ are simultaneously equal to zero. This means that the transmission probability always shows a finite gap for any $\kappa_y$. For instance, the energy gap is zero (or small) in the unstrained (resp. strained) graphene section but finite in the strained (resp. unstrained) one in the vicinity of the Dirac point $k_y = K_{unstrain}$ (resp. $K_{strain}$). Accordingly, as illustrated in the pictures of LDOS in the left panels of Fig. 4 and confirmed by the corresponding transmissions in the right panels, clear gaps of transmission are still obtained.
Far from these values of $k_y$, $E_{unstrain}^{gap}\left( {{\kappa_y}} \right)$ and $E_{strain}^{gap}\left( {{\kappa_y}} \right)$ are both finite (e.g., see the LDOS plotted for $k_y = K_{gap}$) and hence a finite gap of transmission also occurs. On this basis, a finite gap of conductance is achieved. More importantly, Fig. 3 shows that besides the strength of strain, the strain effect is also strongly dependent on the applied direction. For instance, the conduction gap takes values of $\sim$ 295, 172 and 323 meV for $\theta = 0$, $30^\circ$ and $90^\circ$, respectively.
Below, we will discuss the properties of the conduction gap with respect to the strain, its applied direction, and the direction of transport. Note that due to the lattice symmetry, the transport directions $\phi$ and $\phi + 60^\circ$ are equivalent while the applied strain of angle $\theta$ is identical to that of $\theta + 180^\circ$. Hence, the data obtained for $\phi$ ranging from $-30^\circ$ to $30^\circ$ and $\theta \in \left[ {0^\circ ,180^\circ } \right]$ covers the properties of conduction gap in all possible cases.
In Fig. 5, we present the maps of conduction gap with respect to the strain and its applied direction in two particular cases: the transport is either along the armchair ($\phi = 0$) or the zigzag ($\phi = 30^\circ$) direction. Both tensile and compressive strains are considered. Let us first discuss the results obtained in the armchair case. Figs. 5(a,b) show that (i) a large conduction gap of up to about 500 meV can open with a strain of 6 $\%$ and (ii) again the conduction gap is strongly $\theta$-dependent; in particular, its peaks occur at $\theta = 0$ or $90^\circ$ while the gap is zero at $\theta \approx 47^\circ$ and $133^\circ$ for tensile strain and at $\theta \approx 43^\circ$ and $137^\circ$ for compressive strain. In principle, the conduction gap is larger if the shift of Dirac points along the $\kappa_y$-axis is larger, as discussed above for Figs. 3-4. We notice that the strain-induced shifts can be different for the six Dirac points of graphene \cite{kitt12} and that the gap is zero whenever a Dirac point is observed at the same $\kappa_y$ in both graphene sections. From Eq. (9), we find that the Dirac points are determined by the following equations:
\begin{eqnarray*}
{\cos}\frac{\kappa_y}{2} &=& \pm \frac{1}{2}\sqrt{\frac{{t_3^2 - {{\left( {{t_1} - {t_2}} \right)}^2}}}{{{t_1}{t_2}}}}, \\
\cos \frac{{\kappa_x}}{2} &=& \frac{{{t_1} + {t_2}}}{{\left| {{t_3}} \right|}}\cos \frac{{\kappa_y}}{2},\,\,\,\sin \frac{{\kappa_x}}{2} = \frac{{{t_2} - {t_1}}}{{\left| {{t_3}} \right|}}\sin \frac{{\kappa_y}}{2},
\end{eqnarray*}
which simplify into ${\cos}\frac{\kappa_y}{2} = \pm \frac{1}{2}$ and, respectively, $\cos \left( {\frac{{{\kappa _x}}}{2}} \right) = \mp 1$ in the unstrained case. Hence, the zero conduction gap is obtained if
\begin{equation*}
\frac{{t_3^2 - {{\left( {{t_1} - {t_2}} \right)}^2}}}{{4{t_1}{t_2}}} = \frac{1}{4}
\end{equation*}
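This zero-gap condition can be solved numerically for the strain direction. In the sketch below (our own illustration, same assumed bond-vector convention with $\vec r_3$ along Ox), we bisect $f(\theta) = [t_3^2 - (t_1 - t_2)^2]/4t_1t_2 - 1/4$ for a tensile strain $\sigma = 4\%$; the root lies near $\theta \approx 47^\circ$, as quoted above for $\phi = 0$.

```python
import numpy as np

T0, R0, POISSON = -2.7, 0.142, 0.165
BONDS = R0 * np.array([[-0.5, np.sqrt(3) / 2],
                       [-0.5, -np.sqrt(3) / 2],
                       [1.0, 0.0]])                 # r3 assumed along Ox

def hoppings(sigma, theta):
    c, s = np.cos(theta), np.sin(theta)
    ms = sigma * np.array([[c * c - POISSON * s * s, (1 + POISSON) * s * c],
                           [(1 + POISSON) * s * c, s * s - POISSON * c * c]])
    lengths = np.linalg.norm(BONDS @ (np.eye(2) + ms).T, axis=1)
    return T0 * np.exp(-3.37 * (lengths / R0 - 1.0))

def f(theta, sigma=0.04):
    """Vanishes when strained and unstrained Dirac points share kappa_y."""
    t1, t2, t3 = hoppings(sigma, theta)
    return (t3 ** 2 - (t1 - t2) ** 2) / (4 * t1 * t2) - 0.25

# Bisection on [30 deg, 60 deg], where f changes sign for sigma = +4%.
lo, hi = np.radians(30.0), np.radians(60.0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) > 0:
        lo = mid
    else:
        hi = mid
theta_zero = np.degrees(0.5 * (lo + hi))   # near 47 degrees
```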
Additionally, it is observed that the effects of a strain $\{\sigma,\theta\}$ are qualitatively similar to those of a strain $\{-\sigma,\theta+90^\circ\}$, i.e., the peaks and zero values of the conduction gap are obtained at the same $\theta$ in these two situations. To understand this, we analyze the strain matrix $M_s \left(\sigma,\theta\right)$ and find that, in the case of the small strains studied here, there is an approximate relationship between the bond lengths under these two strains, given by \[{r \left( \sigma, \theta \right)} - {r \left( -\sigma, \theta + 90^\circ\right)} \simeq \sigma \left( {1 - \gamma } \right) r_0,\] which is $\theta$-independent for all \emph{C-C} bond vectors. It implies that there is a fixed ratio between the hopping energies $t_i \left( \sigma, \theta \right)$ and $t_i \left( -\sigma, \theta + 90^\circ\right)$ and hence a similar shift of the Dirac points in these two cases.
\begin{figure}[!t]
\centering
\includegraphics[width=3.4in]{Fig06.pdf}
\caption{Map showing the dependence of conduction gap on the directions ($\theta,\phi$) for $\sigma = 4 \%$. The top is a diagram illustrating the rotation of Dirac points in the \emph{k}-space with the change in the transport direction $\phi$.}
\label{fig_sim6}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=5.5in]{Fig07.pdf}
\caption{Maps of conduction gap obtained in tensile/compressive strained junctions. The transport along the armchair/zigzag directions is considered in (a,b)/(c,d), respectively. The strains $\sigma_c = -2 \%$ and $\sigma_t = 2 \%$ are applied in (a,c) while $\sigma_c = -1 \%$ and $\sigma_t = 3 \%$ in (b,d).}
\label{fig_sim4}
\end{figure*}
We now analyze the properties of the conduction gap shown in Figs. 5(c,d), where the transport is along the zigzag direction $\phi = 30^\circ$. In fact, the conduction gap in this case can reach a value as high as that of the case of $\phi = 0$ but has a different $\theta$-dependence. In particular, the conduction gap has peaks at $\theta \approx 47^\circ$ and $133^\circ$ for tensile strain and at $\theta \approx 43^\circ$ and $137^\circ$ for compressive strain, where it is zero in the case of $\phi = 0$. It is also equal to zero at $\theta = 0$ and $\theta = 90^\circ$, where the peaks of the conduction gap occur in the case of $\phi = 0$. The relationship between these two transport directions can be explained as follows. On the one hand, based on the analyses above for $\phi = 0$, we find that for a given strength of strain, a maximum shift of Dirac points along the $k_y$-axis corresponds to a minimum along the $k_x$-axis and vice versa when varying the strain direction $\theta$. On the other hand, as schematized in the top of Fig. 6 below, the change in the transport direction results in a rotation of the first Brillouin zone, i.e., the $k_x$ (resp. $k_y$) axis in the case of $\phi = 30^\circ$ is identical to the $k_y$ (resp. $k_x$) axis in the case of $\phi = 0$. These two features essentially explain the opposite $\theta$-dependence of the conduction gap for $\phi = 30^\circ$, compared to the case of $\phi = 0$ as mentioned. Again, we find the same qualitative behavior of the conduction gap when applying the strains of $\{\sigma,\theta\}$ and $\{-\sigma,\theta+90^\circ\}$.
Next, we investigate the conduction gap with respect to different transport directions $\phi$. We display a ($\theta,\phi$)-map of the conduction gap for $\sigma = 4 \%$ in Fig. 6 and, on the top, an additional diagram illustrating the rotation of Dirac points in the $k$-space with the change in the transport direction. It is clearly shown that (i) a similar scale of conduction gap is obtained for all transport directions, (ii) there is a smooth and continuous shift of the $E_{cond.gap}-\theta$ behavior when varying $\phi$, and (iii) the same behavior of $E_{cond.gap}$ is observed when comparing the two transport directions $\phi$ and $\phi+30^\circ$, similarly to the comparison above between $\phi = 0^\circ$ and $30^\circ$. The data plotted in Fig. 6 additionally show that $E_{cond.gap}$ takes the same value in both cases of $\{\phi,\theta\}$ and $\{-\phi,-\theta\}$, with the remark that the strains of $-\theta$ and $180^\circ-\theta$ are identical. Moreover, the values of $\theta$ and $\phi$ for which the conduction gap has a peak or is equal to zero have an almost linear relationship. In particular, the relationship for the conduction gap peaks is approximately given by $\theta = \theta_A - \eta_s \phi$. For tensile strains, $\eta_s$ takes the values of $\sim 1.5667$ and $1.4333$ for $\theta_A = 0$ and $90^\circ$, respectively. Conversely, it is about $1.4333$ and $1.5667$ for $\theta_A = 0$ and $90^\circ$, respectively, for compressive strains. All these features are consequences of the rotation of Dirac points in the $k$-space with respect to the transport direction $\phi$, as illustrated in the diagram on the top, and of the lattice symmetry of graphene.
Finally, we investigate other junctions based on compressive and tensile strained graphene sections. The idea is that in this type of strained junction, the shifts of Dirac points are different in the two graphene sections of different strains, which offers the possibility to use smaller strains to achieve a similar conduction gap, compared to the case of the unstrained/strained junction. In Fig. 7, we display the maps of conduction gap with respect to the directions of compressive ($\theta_c$) and tensile ($\theta_t$) strains in two cases of transport direction $\phi = 0$ (armchair) and $30^\circ$ (zigzag) for given strain strengths. Indeed, as seen in Figs. 7(a,b), with smaller strains $\left\{ {{\sigma _c},{\sigma _t}} \right\} = \left\{ { - 2\% ,2\% } \right\}$ or $\left\{ { - 1\% ,3\% } \right\}$, a similar conduction gap of about 310 meV can be achieved, while it requires a strain of 4 $\%$ in the unstrained/strained junctions discussed above. However, since the shift of Dirac points is strongly dependent on the direction of applied strains and the transport direction, the properties of conduction gap are more complicated than in the latter case. In particular, our calculations show that the preferred transport directions to achieve a large conduction gap are close to the armchair one. Otherwise, the conduction gap is generally smaller, similarly to the data for $\phi = 30^\circ$ compared to $\phi = 0$, as shown in Fig. 7. Additionally, it is shown that the preferred directions of applied strains in the case of $\phi = 0$ are close to ${\theta _c} \equiv {\theta _t} = 0$ or $90^\circ$.
\section{Conclusion}
Based on tight-binding calculations, we have investigated the effects of uniaxial strain on the transport properties of graphene strained junctions and discussed systematically the possibilities of achieving a large conduction gap with respect to the strain strength, its applied direction and the transport direction. It has been shown that, due to the strain-induced deformation of the graphene lattice and hence of the graphene bandstructure, a finite conduction gap higher than 500 meV can be achieved for a strain of only 6 $\%$. Moreover, as a consequence of the shift of Dirac points along the $k_y$-axis, the conduction gap is strongly dependent not only on the strain strength but also on the direction of applied strain and the transport direction. A full picture of these properties of the conduction gap has been presented and explained. This study could hence be a good guide for the use of this type of unstrained/strained graphene junction in electronic applications.
\textbf{\textit{Acknowledgment.}} This research in Hanoi is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.02-1012.42. We also acknowledge the French ANR for financial support under the projects NANOSIM-GRAPHENE (Grant no. ANR-09-NANO-016) and MIGRAQUEL (Grant no. ANR-10-BLAN-0304).
0912.1170 | \section{Introduction}
Besides the intrinsic interest in open clusters (OCs) in their own right, it is generally
accepted that these objects are fundamental landmarks to probe the Galactic disk
properties (see, e.g., Friel 1995). They are among the very few Galactic objects for which
meaningful distances can be derived over a large range, which makes them an essential tool
to constrain Galactic evolution theories. They also make it possible to derive more
accurate ages than are possible with other disk objects. Therefore, the study of the
Galactic OC system proves very useful to clarify the many queries concerning the assessment of
chemical abundance gradients in the disk (see, e.g., Twarog, Ashman \& Anthony-Twarog 1997;
Chen, Hou \& Wang 2003), Galactic structure and evolution (e.g., Janes \& Adler 1982;
Janes \& Phelps 1994), interactions between thin and thick disks (e.g., Sandage 1988),
as well as theories of stellar formation and evolution (e.g., Meynet, Mermilliod \&
Maeder 1993; Phelps \& Janes 1993).
The OC catalogue by Lyng\aa\ \shortcite{l87} includes 1700 entries. However, very little is
known about many of them, except for their positions and approximate values of their
angular sizes. At present, fewer than half of the known OCs have been studied in detail
to derive their fundamental parameters. The current paper is part of a larger project
aimed at looking into the formation and evolution of the Galactic disk by making use of a
growing data base of photometric observations of as many OCs as possible. Thus, this study
represents a further, intermediate step in a long-term programme devoted to obtaining the
fundamental parameters or to refine the quality of observationally determined properties for
some unstudied or poorly studied OCs.
$UBVI_{KC}$ photometry has proved to be a valuable tool to obtain the fundamental parameters
of star clusters since information on cluster membership, distance, reddening, metallicity
and age is obtained through the analysis of ($V$,$B-V$) and ($V$,$V-I$) Colour-Magnitude
Diagrams (CMDs). In the year 2000, we carried out at Cerro Tololo Inter-American Observatory
(CTIO, Chile) an observational program focused on OCs that were still unstudied at that time. We favoured
the observation of OCs which were interesting not only because of the derivation of their basic
parameters but also because of the possibility they offered of studying the morphology of
their red giant evolutionary phases in relation to previous results (see, e.g., Mermilliod et
al. 2001).
In this study, we report the results obtained from high-quality CCD $UBVI_{KC}$ photometry down
to V $\approx$ 21.0 in the fields of the selected OCs Berkeley\,26, Czernik\,27, Melotte\,72,
NGC\,2479 and BH\,37. These objects promised to be very interesting for their
relatively old appearance due either to the observed stellar population in the CMDs or
to their shapes and clustered nature. The basic parameters of the observed OCs are
given in Table\,1, where the Trumpler class was taken from Archinal \& Hynes \shortcite{ah03}.
The last two columns list the total number of measured stars in this study and the
inferred total number of cluster stars. The latter, together with the availability
of a larger sample of data treated in the same way, will make it easier to establish a
future calibration of the Trumpler richness class. All the selected clusters are located
in the third Galactic quadrant near the Galactic plane ($|b| \leq 6\degr$).
A brief description of these OCs, along with earlier photometric observations, is given
below.
{\it Berkeley\,26}: Also known as Biurakan 12 \cite{i60} or C0647+058, this cluster seems to
be a faint and probably old object in Monoceros. As indicated by its Trumpler class (III\,1m), it
shows no strong central concentration but can be identified by its relatively dense population
compared to that of the field stars (Fig. 1). Using 2-Micron All-Sky Survey (2MASS) data, Tadross
\shortcite{t08} derived a heliocentric distance of 2.7 $\pm$ 0.1 kpc, $E(B-V)$ = 0.54 and
an age of 600 Myr. These values, however, do not agree at all with those recently obtained by
Hasegawa, Sakamoto \& Malasan \shortcite[hereafter HSM08]{hetal08} from CCD $VI$ photometry
carried out with a 65 cm telescope. In fact, according to these authors, Berkeley\,26 is a very
old (4.5 Gyr), highly reddened ($E(V-I)$ = 0.80) and very distant (d = 7.8 kpc) OC.
{\it Czernik\,27}: This is a relatively faint cluster first recognized in Monoceros by Czernik
\shortcite{c66}. As indicated by its Trumpler class (III\,1p), Czernik\,27 (IAU designation
C0700+064) is one of the most poorly defined objects of the present sample. It has a relatively
small angular size of about 3$\arcmin$ \cite{l87}. Kim et al. \shortcite{kimetal05} and HSM08
reported $BV$ and $VI$ CCD photometry in the cluster field, respectively. The cluster parameters
determined in both cases, however, do not show a close correlation. Kim et al.
\shortcite{kimetal05}
found that Czernik\,27 is a moderately reddened ($E(B-V)$ = 0.15) cluster of Hyades-like age
located at 5.8 kpc from the Sun, while HSM08 concluded that this is a slightly reddened
($E(B-V)$ $\approx$ 0.08) and older (1.1 Gyr) OC located at 4.3 kpc from the Sun.
{\it Melotte\,72}: According to Archinal \& Hynes \shortcite{ah03}, this object is the same
as Collinder\,467 \cite{c31}. They described it as a small, compressed cluster, with its core
about 3$\arcmin$ in diameter and with a 5$\arcmin$ long stream extending to the north
between the two bright stars HD\,61277 ($V$ = 7.05, K5) and HD\,61401 ($V$ = 8.49, B9). The
cluster lies about 1.3$\degr$ southwest of $\alpha$ Mon (Fig. 1). As far as we are
aware, the only photometric study of this object was carried out by HSM08 using $VI$ CCD images.
They found this object to be an intermediate-age cluster (1.6 Gyr) located at a distance of 3.2 kpc,
with reddening $E(V-I)$ = 0.10.
{\it NGC\,2479}: This object, also referred to as Collinder\,167 \cite{c31}, C0752-175 or
Trumpler\,8 \cite{t30}, is a relatively bright cluster in Puppis. As shown in Fig. 1, there are
several bright stars in the cluster field, many of which seem to be foreground stars.
Lyng\aa\ \shortcite{l87} reported an angular diameter of 11$\arcmin$ for NGC\,2479. Kharchenko
et al. \shortcite{ketal05} presented a catalogue of astrophysical data for 520 Galactic OCs -among
them NGC\,2479- which could be identified in their All-Sky Compiled Catalogue (ASCC-2.5). By
applying homogeneous methods and algorithms, they determined basic parameters for their cluster
sample. For NGC\,2479, they found the following results: $E(B-V)$ = 0.10, d = 1.2 kpc and $\sim$ 1
Gyr. We should be cautious, however, when considering these findings since the limiting magnitude
of the ASCC-2.5 is $V$ $\approx$ 12.5.
{\it BH\,37}: This is a detached, moderately rich and relatively faint OC (Fig. 1). As far as we
know, no previous data exist for this compact object (IAU designation C0834-434) first recognized
as an open cluster in Vela by van den Bergh \& Hagen \shortcite{vh75}.
The present photometric data are used to determine reddening, distance, age and metallicity of
the selected OCs. The layout of the paper is as follows: Section 2 presents the observational
material and the data reduction, whereas in Section 3 we determine the cluster centres and the
stellar density radial profiles. In Section 4 we explain how to minimize the field star
contamination in the CMDs. Section 5 deals with the determination of cluster fundamental
parameters through the fitting of theoretical isochrones and with the comparison with
previous results. Finally, Section 6 summarizes our findings and conclusions.
\section{Data collection and reduction}
We obtained images for the cluster sample in December 2000 with the $UBVI_{KC}$ filters and a
2048$\times$2048 pixel Tektronix CCD attached to the CTIO 0.9 m telescope. The detector used has a
pixel size of 24 $\mu$m, producing a scale on the chip of 0.4\arcsec pixel$^{-1}$ (focal ratio
f/13.5) and a 13.6$\arcmin$$\times$13.6$\arcmin$ field of view. In order to standardize our photometry, we
carried out observations of standard stars of the Selected Areas PG0231+051, 92 and 98 of Landolt
\shortcite{l92}.
By the end of each night, we had collected an average of 45 different measures of magnitude per
filter for the selected standard star sample.
Table 2 shows the logbook of the observations with filters, exposure times, airmasses and
seeing estimates. Observational setups, data reduction procedures, stellar point spread function
photometry and transformation to the standard system, follow the same prescriptions described
in detail in Piatti, Clari\'a \& Ahumada \shortcite{petal09}. The standard star photometry shows the
mean square root deviation of the observations from the fits to be less than 0.015 mag, indicating
that the nights were photometric. Once the standard magnitudes and colours were obtained, we
produced a master table containing the average of $V$, $U-B$, $B-V$, and $V-I$, their
errors $\sigma$($V$), $\sigma$($U-B$), $\sigma$($B-V$) and $\sigma$($V-I$) and the number of
observations for each star, respectively. Tables 3 to 7 provide this information for Berkeley\,26,
Czernik\,27, Melotte\,72, NGC\,2479 and BH\,37, respectively. Only a portion of these tables is
shown here for guidance regarding their form and content. Tables 3 to 7 are shown in their entirety
in the online version of the journal. The deepest CCD images obtained for the cluster sample are
shown in Fig. 1. In most cases the cluster region is only a small part of the observed frame, as
indicated by the solid circles.
A simple inspection of Tables 3 to 7 shows that stars with three measures of $B-V$ and $V-I$
colours extend from the brightest limit down to $V$ = 19 mag and 20 mag, respectively. The
stars with two measures of $B-V$ and $V-I$ colours cover $V$ ranges from 13.0 to 20.0 mag and
from 13.0 to 21 mag, respectively. Finally, the stars with only one measure of $B-V$ and $V-I$
are fainter than $V$ = 18.0 and 19.0 mag, respectively, and they reach the photometric magnitude
limits. According to these crude statistics, stars lying within the brightest
$\sim$ 6 mags of our $\sim$ 9 mag range were measured two and three times. Therefore, they are
the most appropriate ones to use to derive astrophysical information.
The behaviour of the photometric errors for the $V$ magnitude and $U-B$, $B-V$ and $V-I$ colours
as a function of $V$ is shown in Table 8. Since the stars observed only once have practically
no statistical weight, we decided to use all the stars. In addition, the knowledge of the
behaviour of the photometric errors with magnitude for these stars allows us to rely on
the accuracy of the morphology and position of the main cluster features in the CMDs. The
resulting CMDs are drawn in Figs. 3, 4 and 5 which show, in general, broad sequences. It
should be noticed that since we have measured just a handful of stars in the $U$ passband for
Berkeley\,26, we only show the ($V$,$B-V$) and ($V$,$V-I$) CMDs for this cluster.
Kim et al. \shortcite{kimetal05} obtained $BV$ CCD photometry for stars in the field of
Czernik\,27. For 992 stars measured in common by Kim et al. \shortcite{kimetal05} and in this
study, we derived $V_{\rm Kim}$ - $V_{\rm our}$ = 0.35 $\pm$ 0.07 mag, $(B-V)_{\rm Kim}$ -
$(B-V)_{\rm our}$ = -0.05 $\pm$ 0.11 mag and $(V-I)_{\rm Kim}$ -
$(V-I)_{\rm our}$ = 0.07 $\pm$ 0.08 mag, with a marginal dependence with the magnitude (see
Fig. 2). Since Kim et al. \shortcite{kimetal05} also found $B$ and $V$ magnitude offsets when
comparing their photometry for Berkeley\,29 with that of Kaluzny \shortcite{k94}, they decided
to apply such offsets to the Czernik\,27 photometry. This prevented us from using their data
as a photometric reference. Kim et al. \shortcite{kimetal05} also obtained $BV$ data for only
18 stars in the field of NGC\,2479 reaching a limiting magnitude of $V$ $\approx$ 12.5-13.0 mag,
so that only a small portion of the cluster Main Sequence (MS) could be traced. When comparing
their magnitudes and colours with those observed by us, we find $V_{\rm k05}$ - $V_{\rm our}$ =
-0.03 $\pm$ 0.39 mag and $(B-V)_{\rm k05}$ - $(B-V)_{\rm our}$ = -0.02 $\pm$ 0.34 mag. Recently,
HSM08 published CCD $VI$ photometry for Berkeley\,26, Czernik\,27 and Melotte\,72. Unfortunately,
since the data are neither available in the WEBDA database \cite{mp03} nor upon request to
the authors, we could not compare our photometry with theirs.
\section{Cluster dimensions and structure}
We first determined the location of the clusters' centres in order to construct stellar
density profiles. The coordinates of the clusters' centres and their estimated uncertainties
were determined, for each cluster, by fitting Gaussian distributions to the star counts
in the $x$ and $y$ directions. The fits of the Gaussians were performed using the NGAUSSFIT
routine in the STSDAS/IRAF package. We adopted a single Gaussian and fixed the constant to
the corresponding background level and the linear terms to zero. The stars projected along the
$x$ and $y$ directions were counted within intervals of 50 pixels. In addition, we checked
that using spatial bins from 20 to 50 pixels or from 50 to 100 pixels does not lead to significant
changes in the derived centres. We iterated the fitting procedure once on average, after
eliminating a couple of discrepant points. Then, we determined the clusters' centres with a
typical NGAUSSFIT standard deviation of $\pm$ 10 pixels. The centres of the Gaussians for
Berkeley\,26, Czernik\,27, Melotte\,72, NGC\,2479 and BH\,37 were finally fixed at ($x_{c},y_{c}$)
= (1170, 1300), (1320, 1270), (1370, 1210), (1215, 1250) and (1230, 1030) pixels, respectively.
We constructed the clusters' radial profiles from star counts made in boxes of 50 pixels
by 50 pixels, distributed throughout the whole field of each cluster. The chosen size of the box
allowed us to sample, statistically, the stellar spatial distributions avoiding spurious effects
caused mainly by the presence of localized groups of stars, rows or columns of stars. Thus, the
number of stars per unit area, at a given radius $r$, can be directly calculated through the
expression:
\begin{equation}
(n_{r+25} - n_{r-25})/((m_{r+25} - m_{r-25}) \times 50^2),
\end{equation}
\noindent where $n_j$ and $m_j$ represent the number of stars and boxes included in a circle of
radius $j$, respectively. Note that this method does not necessarily require a complete circle of
radius $r$ within the observed field to estimate the mean stellar density at such distance. This
is important to consider since having a stellar density profile, which extends far away from the
cluster centre, allows us to estimate the background level with higher precision. This is also
helpful to measure the FWHM of the stellar density profile for it plays a significant role - from
a stellar content point of view - in the construction of the cluster CMDs.
The resulting density profiles are shown in Fig. 6. The uncertainties estimated at various distances
from the cluster centres follow Poisson statistics. Table 9 lists the estimated background
levels, the radii at the FWHM ($r_{FWHM}$) and the field star contamination estimated in
percentages. Note that the percentage of field stars is relatively high, which indicates a
relatively small ratio between the number of each cluster's stars and the number of field stars.
No cluster stands out clearly in its surrounding field within $r_{FWHM}$.
\section{Colour magnitude diagram cleaning}
Without a careful analysis of the observed sequences in the CMDs, one could come to the
conclusion that they are in fact the clusters' MSs. However, all the CMDs present both cluster
and field star MSs more or less superimposed. This means that we have observed both star
clusters and their respective foreground fields affected by nearly similar reddenings, which
makes it difficult to separate the fiducial cluster features and renders the analysis of
the CMDs challenging.
To statistically clean the cluster CMDs from stars that can potentially belong to the
foreground/background fields, we built star field CMDs using the stars located in the
easternmost strip of the observed fields, i.e., $x$ $<$ 500 pixels and 0 $<$ $y$ (pixels)
$<$ 2050 (see, Fig. 1). We separately treated the CMDs for $B-V$ and $V-I$. Using these field
CMDs, we counted how many stars lie in different magnitude-colour bins with sizes
[$\Delta$$V$, $\Delta$$(B-V)$=$\Delta$$(V-I)$] = (0.5,0.1) mag. We then subtracted from each
cluster CMD the number of stars counted for each range of the field ($V$, $B-V$ or $V-I$) CMD,
by removing those stars closer in magnitude and colour to the ones in the star field. Figs. 7,
8 and 9 show the CMDs of the cluster surrounding field regions, while Figs. 10, 11 and 12
show, with filled circles, the circular extracted CMDs which were obtained after cleaning
them for field star contamination. We show overplotted the CMDs directly obtained with all
the measured stars (dots). When comparing observed and cleaned cluster CMDs, the differences
in stellar composition became evident. Although the fiducial features of some clusters looked
clearer, they appeared somewhat dispersed and scattered. This is mainly due to some
unavoidable field interference. Other sources of dispersion such as photometric errors,
differential internal cluster reddening, evolutionary effects and binarity can also
account for such effect. In the subsequent analysis, we used the cleaned CMDs to estimate the
cluster fundamental parameters. Note that we will only use the extracted ($V$,$V-I$) CMD of
Berkeley\,26 since no star remains in its extracted ($V$,$B-V$) CMD.
\section{Estimates of the clusters fundamental parameters}
In order to estimate the ages of the observed clusters, we used the Morphological Age Index
(MAI) defined by Janes \& Phelps \shortcite{jp94} on the basis of the $\delta V$, $\delta(B-V)_1$
and $\delta(V-I)_1$ indices of Phelps, Janes \& Montgomery \shortcite{phetal94} as well as
the $\Delta$$V$ age index calibrated by Carraro \& Chiosi \shortcite{cc94}. We also illustrate
possible solutions for the fundamental cluster parameters by matching theoretical isochrones
computed by Lejeune \& Schaerer \shortcite{ls01} to the observed CMDs. The previously known
values of some cluster physical properties were used as reference to select the isochrones which
best matched the CMDs.
{\it Berkeley\,26:} the region delimited by $V$ $<$ 18 and $V-I$ $>$ 1.4 would seem to contain
cluster giants. We derived an age between 2 and 6 Gyr from the $\Delta$$V$ index, which
agrees with the age resulting from the MAI (2.0 - 5.5 Gyr). If we use a 4 Gyr isochrone, which
corresponds to the average age obtained from the MAI and the $\Delta$$V$ index, and we match it
to the observed ($V$,$V-I$) CMD, we then get a fit consistent with the data. Indeed, the
theoretical turnoff and subgiant branch magnitudes, the loci of the MS and of the red giant
branch (RGB), and the slope of the RGB, all appear to be reasonably located with respect to the
observed features. Using isochrones of 2 or 6 Gyr, we did not find successful fits that reproduced
the observed $V-I$ distance between the MS turnoff and the bluest RGB point.
{\it Czernik\,27:} since it is not possible to obtain only one solution from the ZAMS fitting
for the reddening-distance modulus pair, we assumed solar metallicity to evaluate which of both
sets of published fundamental parameters (see Sect. 1) best resembles the fiducial features
observed in the CMDs. We find the curvature and shape of the upper MS, the brightest magnitude
of the MS and the bluest point of the turnoff, reasonably well fitted by the isochrones of 660
Myr and 1.1 Gyr.
{\it Melotte\,72:} a Red Giant Clump (RGC) is visible at $V$ $\sim$ 13 mag and $B-V$ $\sim$
$V-I$ $\sim$ 1.0 mag. We derived an age between 0.4 and 1.0 Gyr from both the $\Delta$$V$ age
index and the MAI. By using a solar metallicity 0.6 Gyr isochrone, it is possible to obtain a
reasonable match to the cluster CMDs. The theoretical locus of the RGC, the brightest magnitude
of the MS and the bluest point of the turnoff are the features which appear to be consistent
with the data.
{\it NGC\,2479:} the cluster shows a very long star sequence and a compact RGC at $V$ $\sim$ 12
and $B-V$ $\sim$ $V-I$ $\sim$ 1.0 mag. Firstly, we derived the colour excesses from both colour-colour
diagrams, by shifting the ZAMS along the directions of the corresponding reddening vectors
\cite{s92}. Secondly, once the reddening effect was accounted for, we used the observed ZAMS
($V$ $>$ 14 mag) to obtain the apparent distance modulus. The cluster age turned out to be in
the 0.6 - 1.2 Gyr range from both the MAI and the $\Delta$$V$ index. Moreover, by using a solar
metallicity 1 Gyr isochrone, we achieved a good match to the fiducial features observed in the CMDs.
{\it BH\,37:} the cluster CMDs present a possible red giant branch and a reasonably well-defined
evolved upper MS, particularly in the ($V$,$V-I$) CMD. As the cluster has not been studied in
detail yet, we attempted a subjective isochrone match to the cluster
CMDs.
Schlegel, Finkbeiner \& Davis (1998, hereafter SFD) obtained full-sky maps
from the 100-$\mu$m dust emission. They found that in high Galactic latitude regions, the dust map
correlates well with maps of H\,I emission. However, deviations are coherent in the sky
and are especially conspicuous in regions of H\,I emission saturation towards denser
clouds and in regions of formation of H$_2$ in molecular clouds \cite{petal03,petal08}.
Even if the SFD reddenings are not exactly correct, they may still be valuable to
compare with the reddenings derived here. We obtained $E(B-V)_{SFD}$ values of 0.61,
0.19, 0.22, 0.16 and 2.19 mags for Berkeley\,26, Czernik\,27, Melotte\,72, NGC\,2479 and
BH\,37, respectively. Since the $E(B-V)_{SFD}$ value for BH\,37 turned out to be
more than double the one we estimated, we assumed that the $E(B-V)_{SFD}$
value must be saturated. It is worth considering that the five clusters lie further
than 70 pc from the Galactic plane, with heights out of the Galactic plane of 0.17
kpc, 0.49 kpc, 0.28 kpc, 0.14 kpc and -0.07 kpc, respectively.
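The quoted heights follow from each cluster's heliocentric distance $d$ and Galactic latitude $b$ as $z = d \sin b$. A minimal sketch with purely illustrative numbers (the distance and latitude below are invented, not the Table 10 values):

```python
import math

def z_height(d_kpc, b_deg):
    """Height above the Galactic plane, z = d * sin(b), in kpc."""
    return d_kpc * math.sin(math.radians(b_deg))

# Hypothetical cluster at d = 2.8 kpc and b = -5 deg:
print(round(z_height(2.8, -5.0), 2))   # -0.24 kpc
```

Since all five clusters have $|b| \leq 6\degr$, even multi-kpc distances translate into heights of only a few hundred pc, consistent with the values listed.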
Table 10 shows the values of the resulting fundamental parameters, while Figs. 13, 14
and 15 indicate how the isochrones match the cluster features in the CMDs. Note
that these values illustrate only possible solutions for the cluster fundamental parameters.
Such solutions prove to be consistent with the data obtained. Fig. 13 shows that the MS of
Berkeley\,26 is very broad, probably due to differential reddening and field star contamination.
Although the three stars brighter and bluer than the turnoff are considered to be foreground
stars, some of them could be blue stragglers. Similarly, some of the stars brighter and
bluer than the MS turnoff of Czernik\,27 (Fig. 14) should also be considered blue straggler
candidates. In order to improve the trace of the cluster features, it would be of great value
to carry out deeper MS photometry and spectroscopic observations of the red giant branch. Note
that the theoretically computed bluest stage, during the core He-burning phase, is redder than
the observed RGC in the CMDs of Melotte\,72, a behaviour which has also been detected in other
studies of Galactic and Magellanic Cloud clusters \cite[for example]{getal03,petal04a,petal04b}.
\subsection{Comparison with previous results}
{\it Berkeley\,26} : The cluster parameters found here are very different from those found by
Tadross \shortcite{t08} from archival $JHK$ 2MASS photometry. By inspecting the CMDs obtained by
Tadross to derive the cluster parameters (see his Fig. 6), we realized that he did not include
upper MS and red giant branch stars. This error could have been due to the methods he
employed for cleaning the CMDs and for selecting the cluster stars. In addition, the isochrone
fit to the cluster CMDs does not resemble the cluster sequence at all. Our findings show a
better agreement with the parameters recently determined by HSM08 from $VI$ CCD photometry. In
fact, in their study and in ours, the old and metal-poor character of Berkeley\,26 is confirmed.
Although the reddening derived in both studies agrees, within the errors, the heliocentric distance
found by HSM08 is larger than ours.
{\it Czernik\,27} : Using Girardi et al.'s (2002) isochrones for solar metallicity content,
Kim et al. \shortcite{kimetal05} estimated a cluster $E(B-V)$ reddening of 0.15, a distance from the
Sun of 5.8 kpc, and an age of 600 Myr. HSM08 recently reported $VI$ CCD photometry in the cluster
field. The parameters they found, however, do not show agreement, in general terms, with
those derived by Kim et al. \shortcite{kimetal05}. Actually, according to HSM08, Czernik\,27
seems to be reddened by scarcely $E(V-I)$ = 0.10, it is located closer to the Sun (4.3 kpc) and it is
about 1.1 Gyr old. However, we find that both sets of parameters reasonably reproduce the
current CMDs we obtained for the cluster (see Fig. 14).
{\it Melotte\,72} : As far as we are aware, the only detailed study of this cluster was carried
out by HSM08. Their heliocentric distance and metallicity are similar to our adopted
values, while their reddening ($E(V-I)$ = 0.25) and age (1.6 Gyr) are somewhat larger. However, when
comparing their $(V,V-I)$ CMD for the central part of the cluster (see their Fig. 2) with our
$(V,V-I)$ CMD (Fig. 14), we see that the turnoff of their selected isochrone is fainter than the
turnoff we observed. We believe that the fit would have been much better if they had used a
younger isochrone.
{\it NGC\,2479} : The catalogue by Kharchenko et al. \shortcite{ketal05}, based on the information
provided by their ASCC-2.5, includes an analysis of 20 stars in the cluster field with $B,V$
magnitudes and proper motions in the Hipparcos system. Only a small portion of the cluster
MS can be traced due to the limiting magnitude of these stars, which reaches $V$ $\approx$ 12.5-13.0
mag. Based on a comprehensive analysis to determine membership, Kharchenko et al. showed
that the 20 stars should be members according to the photometric data they used, although only
four of them have proper motion membership probabilities higher than 80\%. The 20 stars do not
define any clear MS in the ($V$,$B-V$) CMD, while the four stars with the highest proper motion
membership probabilities are located between the cluster turnoff and the RGC. For comparison
purposes, we have drawn these stars with open circles in Fig. 15. Surprisingly, all the cluster
parameters determined by Kharchenko et al. (i.e., distance, age, reddening, core and cluster
radii, etc.) coincide with our estimates.
\section{SUMMARY AND CONCLUSIONS}
New CCD $UBVI_{KC}$ photometry in the field of the open clusters Berkeley\,26, Czernik\,27,
Melotte\,72, NGC\,2479 and BH\,37 is reported here. The analysis of the photometric data leads to
the following main conclusions:
(i) Once the cluster centres were determined by fitting Gaussian distributions to the star
counts in the $x$ and $y$ directions, radial density profiles were produced.
(ii) Cluster CMDs cleaned from field star contamination were built by statistically
subtracting the number of stars counted in the field CMDs. Those stars closer in magnitude and
colour to the ones in the respective star fields were thus removed.
(iii) Estimates of the cluster ages were obtained for Berkeley\,26, Melotte\,72 and
NGC\,2479 from both the $\Delta$$V$ age index and the MAI. On the other hand, we outlined
possible solutions for cluster fundamental parameters by matching theoretical isochrones, which
reasonably reproduce the main cluster features in their CMDs. In the case of NGC\,2479, the
$E(B-V)$ and $E(V-I)$ colour excesses and apparent distance modulus were estimated from the
fit of the ZAMS to the colour-colour and magnitude-colour diagrams, respectively.
\section*{ACKNOWLEDGEMENTS}
We are gratefully indebted to the CTIO staff for their hospitality and support during the
observations. We also thank referees Bruce Twarog and Kenneth Janes whose comments and
suggestions have helped us to improve the manuscript. This work was partially supported by
the Argentinian institutions CONICET, SECYT (Universidad Nacional de C\'ordoba) and Agencia
Nacional de Promoci\'on Cient\'{\i}fica y Tecnol\'ogica (ANPCyT). This work is based
on observations made at Cerro Tololo Inter-American Observatory, which is operated by AURA,
Inc., under cooperative agreement with the National Science Foundation. This research
also used the SIMBAD database, operated at CDS, Strasbourg, France; also the WEBDA database,
operated at the Institute for Astronomy of the University of Vienna, and NASA's
Astrophysics Data System.
0912.1889 | \section{The XMM-Newton FERO project}
The FERO project is based on one of the largest collections of Type-1 radio-quiet AGN (149) ever assembled with targeted {\it XMM-Newton} observations. It aims at establishing on a statistical basis how often the effects of X-ray illumination of relativistic accretion disks are present in AGN.
In order to derive meaningful constraints on the properties of the (unknown) parent population of local radio-quiet AGN, we applied the FERO selection criteria to the {\it RXTE} all-sky Slew Survey (\cite{rev}) and we selected the sources having a count rate
in the 3-8~keV energy band greater than 1 cts/sec. This defines a flux-limited, almost complete (80\%) sample of 33 sources, 31 of which are included in the FERO analysis and provide an unbiased subsample of sources with good signal-to-noise ratio within the initial collection of 149 AGNs.
The results of the spectral analysis (de la Calle et al. submitted) are presented here, whereas the study of the line profile in the stacked spectral residuals will be reported in Longinotti et al. (in prep).
The spectra were fitted in the 2--10~keV band. The baseline model for the X-ray continuum comprises a power law, an intervening warm absorber and a Compton reflection component, plus 4 narrow Gaussian lines to model K$\alpha$ emission from Fe~I, Fe~XXV, Fe~XXVI and Fe~I K$\beta$, and a Gaussian line at 6.3~keV with width set to 50~eV to model the Fe I Compton shoulder. The effects of general relativity are taken into account by analysing the data with the suite of routines KY (\cite{dov}). Within this suite, {\it kyrline} has been specifically designed to constrain the parameters of the broad relativistic line. The spin of the black hole, the index of the emissivity profile of the accretion disk and the disk inclination angle to the observer are free parameters. The intensity of the relativistic Fe line against the continuum (i.e. its Equivalent Width, EW) is used as a proxy for line detection.
The statistical study of the flux-limited sample shows that:
\begin{figure}[h!]
\includegraphics[height=.44\textheight]{fero_sample}
\caption{The {\it XMM-Newton} FERO collection: broad Fe~K$\alpha$ line EW versus 2-10~keV counts. Filled circles mark 5$\sigma$ detections of the Fe line, stars mark the 31 sources in the flux limited sample.}
\end{figure}
\begin{itemize}
\item The fraction of sources in the FERO sample (Fig. 1) that
present 5$\sigma$ detections for a relativistic Fe K$_\alpha$ line
is 9\% (13/149), but this fraction rises to 36$\%$ (11/31) for AGNs in the flux-limited sample.
Considering the upper limits to the line EW, a broad line at the level of 40~eV can be rejected only in 4/31 objects.
\item There is no significant difference between
Sy and QSO in terms of detection fraction. All detections have luminosities below $\sim$1 $\times$ 10$^{44}$ erg s$^{-1}$.
\item The average relativistic Fe K$_\alpha$ line EW is of the order of 100~eV.
\item The average disk inclination angle is $<\theta>$=28$\pm$5$^\circ$, consistent with an intrinsic random distribution
of inclination angles.
\item The index of the radial dependence of the disk emissivity ($\propto$ r$^{-\beta}$) is $<\beta>$=2.4$\pm$0.4, with a wide spread of values.
\item The spin value {\it a} is in general poorly constrained except for MCG-6-30-15 ({\it a}=0.86$^{+0.01}_{-0.02}$) and MRK509 ({\it a}=0.78$^{+0.03}_{-0.04}$).
\item The broad line EW is not significantly dependent on the above line parameters.
\item No correlation was found between the line EW or disk emissivity ($\beta$) and the source physical
properties investigated, such as, black hole mass, accretion rate, 2--10~keV
luminosity and optical H$\beta$ FWHM.
\end{itemize}
\bibliographystyle{aipprocl} |
0912.2185 | \section{Introduction and results}
Since the appearance of the paper \cite{A:V:V}, monotonicity
properties
of the functions
\begin{equation}\label{eq:Fa}
F_a(x)=\frac{\ln \Gamma (x+1)}{x\ln(ax)}, \quad x>0, a>0
\end{equation}
have attracted the attention of several authors in connection with
monotonicity properties of the volume $\Omega_n$ of the unit ball in Euclidean
$n$-space. A recent paper about inequalities
involving $\Omega_n$ is \cite{Al}.
Let us first consider the case $a=1$. In \cite{B:P1} the authors proved
that $F_1$ is a {\it Bernstein function}, which means that it is positive and
has a completely monotonic derivative, i.e.,
\begin{equation}\label{eq:BP1}
(-1)^{n-1}F_1^{(n)}(x)\geq 0,\quad x>0, n\ge 1.
\end{equation}
This extended the monotonicity and concavity results proved in \cite{A:Q} and
\cite{E:L}, respectively.
We actually proved a stronger statement than \eqref{eq:BP1}, namely that the reciprocal function
$x\ln x/\ln\Gamma(x+1)$ is a Stieltjes transform, i.e. belongs to the
Stieltjes cone $\mathcal S$ of functions of the form
\begin{equation}\label{eq:int-rep}
g(x)=c+\int_0^\infty\frac{d\mu(t)}{x+t},\quad x>0,
\end{equation}
where $c\geq 0$ and $\mu$ is a non-negative measure on $[0,\infty[$ satisfying
$$\int_0^\infty\frac{d\mu(t)}{1+t}<\infty.$$
The result was obtained using the holomorphic extension of the
function $F_1$ to the cut plane $\mathcal A=\mathbb C\setminus]-\infty,0]$,
leading to an explicit formula for the measure $\mu$ in
\eqref{eq:int-rep}. Our derivation used the fact that the holomorphic function $\log\Gamma(z)$
only vanishes in $\mathcal A$ at the points $z=1$ and $z=2$, a result
interesting in itself and included as an appendix in \cite{B:P1}. A
simpler proof of the non-vanishing of $\log\Gamma(z)$ appeared in \cite{B:P2}.
In a subsequent paper \cite{B:P2} we proved an almost equivalent
result, namely that $F_1$ is a Pick function, and obtained the following
representation formula
\begin{equation}\label{eq:F1}
F_1(z)=1- \int_0^\infty \frac{d_1(t)}{z+t}\,dt,\quad z\in\mathcal A
\end{equation}
where
\begin{equation}\label{eq:d}
d_1(t)=\frac{\ln|\Gamma(1-t)|+(k-1)\ln t}{t((\ln
t)^2+\pi^2)}\quad\mbox{for}
\quad t\in \left]k-1,k\right[,\quad k=1,2,\ldots
\end{equation}
and $d_1(t)$ tends to infinity when $t$ approaches $1,2,\ldots$. Since
$d_1(t)>0$ for $t>0$, \eqref{eq:BP1} is an immediate consequence of \eqref{eq:F1}.
We recall that a Pick function is a holomorphic function $\varphi$ in the upper
half-plane $\mathbb H=\{z=x+iy \in\mathbb C\mid y>0\}$ satisfying
$\Im\varphi(z)\ge 0$ for $z\in\mathbb H$, cf. \cite{D}.
For $a=2$ Anderson and Qiu proved in \cite{A:Q} that $F_2$ is
strictly increasing on $[1,\infty[$, thereby proving a conjecture from
\cite{A:V:V}. Alzer proved in \cite{Al} that $F_2$ is concave on
$[46,\infty[$. In \cite{Q:G} the concavity was extended to the optimal
interval $]\tfrac12,\infty[$.
We will now describe the main results of the present paper.
We also denote by $F_a$ the holomorphic extension of \eqref{eq:Fa} to
$\mathcal A$ with an isolated singularity at $z=1/a$, which is
a simple pole with residue $\ln\Gamma(1+1/a)$ assuming $a\neq 1$, while $z=1$ is a
removable singularity for $F_1$. For details about this extension see
the beginning of section 2. Using the residue theorem we obtain:
\begin{thm}\label{thm:1} For $a>0$ the function $F_a$ has the integral representation
\begin{equation}\label{eq:Farep}
F_a(z)=1+\frac{\ln\Gamma(1+1/a)}{z-1/a}-\int_0^\infty\frac{d_a(t)}{z+t}\,dt,
\quad z\in\mathcal A\setminus\{1/a\},
\end{equation}
where
\begin{equation}\label{eq:darep}
d_a(t)=\frac{\ln|\Gamma(1-t)|+(k-1)\ln(at)}{t((\ln
(at))^2+\pi^2)}\quad \mbox{for}
\quad t\in \left]k-1,k\right[,\quad k=1,2,\ldots,
\end{equation}
and $d_a(0)=0, d_a(k)=\infty,k=1,2,\ldots$.
We have $d_a(t)\ge 0$ for $t\ge 0,a\ge 1/2$\footnote{This is slightly
improved in Remark~\ref{thm:bettera} below.} and $F_a$ is a Pick function
for $a\ge 1$ but not for $0<a<1$.
\end{thm}
From this follows the monotonicity property conjectured in \cite{Q:G}:
\begin{cor}\label{thm:cor1} Assume $a\ge 1$. Then
\begin{equation}\label{eq:Faprop}
(-1)^{n-1}F_a^{(n)}(x)>0,\quad x>1/a,n=1,2,\ldots.
\end{equation}
\end{cor}
In particular, $F_a$ is strictly increasing and strictly concave on
the interval $]1/a,\infty[$.
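As a numerical illustration (not part of the proof), the first two cases of \eqref{eq:Faprop} — $F_a'>0$ and $F_a''<0$ on $]1/a,\infty[$ for $a\ge 1$ — can be checked by finite differences; the sample points and step size below are arbitrary choices.

```python
# Sanity check (illustration only): for a >= 1 and x > 1/a, F_a should be
# strictly increasing (F_a' > 0) and strictly concave (F_a'' < 0),
# i.e. the n = 1, 2 cases of the corollary.
import math

def F(a, x):
    # F_a(x) = ln Gamma(x+1) / (x ln(ax))
    return math.lgamma(x + 1) / (x * math.log(a * x))

def d1(a, x, h=1e-3):
    # central first difference
    return (F(a, x + h) - F(a, x - h)) / (2 * h)

def d2(a, x, h=1e-3):
    # central second difference
    return (F(a, x + h) - 2 * F(a, x) + F(a, x - h)) / h ** 2

for a in (1.0, 2.0):
    for x in (1.5, 3.0, 8.0):
        assert d1(a, x) > 0 and d2(a, x) < 0
```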
The function
\begin{eqnarray}\label{eq:vol}
f(x)=\left(\frac{\pi^{x/2}}{\Gamma(1+x/2)}\right)^{1/(x\ln x)}
\end{eqnarray}
has been studied because the volume $\Omega_n$ of the unit ball in
$\mathbb R^n$ is
$$
\Omega_n=\frac{\pi^{n/2}}{\Gamma(1+n/2)},n=1,2,\ldots.
$$
We prove the following integral representation of the extension of
$\ln f(x+1)$ to the cut plane $\mathcal A$.
\begin{thm}\label{thm:2} For $z\in\mathcal A$ we have
\begin{equation}\label{eq:logf}
\log
f(z+1)=-\frac12+\frac{\ln(2/\sqrt{\pi})}{z}+\frac{\ln(\sqrt{\pi})}{\Log(z+1)}+\frac12\int_1^\infty\frac{d_2((t-1)/2)}{z+t}\,dt.
\end{equation}
In particular $1/2+\log f(x+1)$ is a Stieltjes function and $f(x+1)$
is completely monotonic.
\end{thm}
We recall that completely monotonic functions
$\varphi:\left]0,\infty\right[\to\mathbb R$ are characterized by
Bernstein's theorem as
\begin{equation}\label{eq:Bern}
\varphi(x)=\int_0^\infty e^{-xt}\,d\mu(t),
\end{equation}
where $\mu$ is a positive measure on $[0,\infty[$ such that the
integrals above make sense for all $x>0$.
We also recall that a sequence $\{a_n\}_{n\ge 0}$ of positive numbers is a
Hausdorff moment sequence if it has the form
\begin{equation}\label{eq:Hau}
a_n=\int_0^1 x^n\,d\sigma(x),\;n\ge 0,
\end{equation}
where $\sigma$ is a positive measure on the unit interval. Note that $\lim_{n\to\infty}a_n=\sigma(\{1\})$.
For a discussion of these concepts see \cite{B:C:R} or \cite{W}. It is
clear that if $\varphi$ is completely monotonic with the integral
representation \eqref{eq:Bern}, then
$a_n=\varphi(n+1),n\ge 0$ is a Hausdorff moment sequence, because
$$
a_n=\int_0^\infty e^{-(n+1)t}\,d\mu(t)=\int_0^1 x^n\,d\sigma(x),
$$
where $\sigma$ is the image measure of $e^{-t}\,d\mu(t)$ under
$e^{-t}$.
Since $\lim_{x\to\infty}f(x+1)=e^{-1/2}$ we get
\begin{cor}\label{thm:cor2}
The sequence
\begin{equation}\label{eq:f(n)}
f(n+2)=\Omega_{n+2}^{1/((n+2)\ln(n+2))}, n=0,1,\ldots
\end{equation}
is a Hausdorff moment sequence tending to $e^{-1/2}$.
\end{cor}
A Hausdorff moment sequence is clearly decreasing and convex, and by
the Cauchy-Schwarz inequality it is even logarithmically convex,
meaning that $a_n^2\le a_{n-1}a_{n+1},\;n\ge 1$. The
latter property
was obtained in \cite{Q:G} in a different way.
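The three properties just listed are easy to observe numerically. The following sketch (illustrative, not part of the paper) evaluates the sequence \eqref{eq:f(n)} via $\ln f$ to avoid overflow and checks monotonicity, logarithmic convexity, and the lower bound $e^{-1/2}$:

```python
# Numerical illustration: f(n+2) = Omega_{n+2}^{1/((n+2) ln(n+2))} is
# decreasing, logarithmically convex, and stays above its limit e^{-1/2}.
import math

def ln_f(x):
    # ln f(x) = ((x/2) ln(pi) - ln Gamma(1 + x/2)) / (x ln x), for x > 1
    return (0.5 * x * math.log(math.pi) - math.lgamma(1 + 0.5 * x)) / (x * math.log(x))

seq = [math.exp(ln_f(n + 2)) for n in range(80)]
assert all(seq[n] > seq[n + 1] for n in range(79))                 # decreasing
assert all(seq[n] ** 2 <= seq[n - 1] * seq[n + 1] for n in range(1, 79))  # log-convex
assert all(v > math.exp(-0.5) for v in seq)                        # above the limit
```

The convergence to $e^{-1/2}$ is slow (the term $\ln(\sqrt{\pi})/\Log(z+1)$ in \eqref{eq:logf} decays only logarithmically), which is why the check uses the lower bound rather than a closeness test.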
\section{Properties of the function $F_a$}
In this section we will study the holomorphic extension of the function $F_a$ defined in
\eqref{eq:Fa}. First a few words about notation. We use $\ln$ for the
natural logarithm but only applied to positive numbers. The
holomorphic extension of $\ln$ from the open half-line $]0,\infty[$ to
the cut plane $\mathcal A=\mathbb C\setminus ]-\infty,0]$ is denoted
$\Log z=\ln|z|+i\Arg z$, where $-\pi<\Arg z<\pi$ is the principal
argument. The holomorphic branch of the logarithm of $\Gamma(z)$ for $z$ in the simply
connected domain $\mathcal A$ which equals $\ln\Gamma(x)$ for $x>0$ is
denoted $\log\Gamma(z)$. The imaginary part of $\log\Gamma(z)$ is a
continuous branch of argument of $\Gamma(z)$ which we denote
$\arg\Gamma(z)$, i.e.,
$$
\log\Gamma(z)=\ln|\Gamma(z)|+i\arg\Gamma(z),\;z\in\mathcal A.
$$
We shall use the following property of
$\log\Gamma(z)$, cf. \cite[Lemma 2.1]{B:P1}
\begin{lemma}\label{thm:logGamma}
We have, for any $k\geq 1$,
$$
\lim_{z\to t,\Im z>0}\log \Gamma (z)= \ln|\Gamma (t)| -i\pi k
$$
for $t\in ]-k,-k+1[$ and
$$
\lim_{z\to t,\Im z>0}|\log \Gamma (z)|=\infty
$$
for $t=0,-1,-2, \ldots$.
\end{lemma}
The expression
$$
F_a(z)=\frac{\log\Gamma(z+1)}{z\Log(az)}
$$
clearly defines a holomorphic function in $\mathcal A\setminus\{1/a\}$,
and $z=1/a$ is a simple pole unless $a=1$, where the residue
$\ln\Gamma(1+1/a)$ vanishes.
\begin{lemma}\label{thm:behonR}
For $a>0$ and $t \le 0$ we have
\begin{equation}\label{eq:ytozero}
\lim_{y\to 0^+} \Im F_a(t+iy)=\pi d_a(-t),
\end{equation}
where $d_a$ is given by \eqref{eq:darep}.
\end{lemma}
{\it Proof.} For $-1<t<0$ we get
$$
\lim_{y\to 0^+} F_a(t+iy)=\frac{\ln\Gamma(1+t)}{t(\ln(a|t|)+i\pi)},
$$
hence $\lim_{y\to 0^+}\Im F_a(t+iy)=\pi d_a(-t)$.
For $-k<t<-k+1,\, k=2,3,\ldots$ we find using Lemma~\ref{thm:logGamma}
$$
\lim_{y\to 0^+}
F_a(t+iy)=\frac{\ln|\Gamma(1+t)|-i(k-1)\pi}{t(\ln(a|t|)+i\pi)},
$$
hence $\lim_{y\to 0^+}\Im F_a(t+iy)=\pi d_a(-t)$ also in this case.
For $t=-k,\;k=1,2,\ldots$ we have
$$
|F_a(-k+iy)|\ge
\frac{\left|\ln|\Gamma(-k+1+iy)|\right|}{|-k+iy||\Log(a(-k+iy))|}\to\infty
$$
for $y\to 0^+$ because $\Gamma(z)$ has poles at
$z=0,-1,\ldots$. Finally, for $t=0$ we get \eqref{eq:ytozero} from the
next Lemma.
$\quad\square$
\begin{lemma}\label{thm:behatzero} For $a>0$ we have
$$
\lim_{z\to 0,z\in\mathcal A}|F_a(z)|=0.
$$
\end{lemma}
{\it Proof.} Since $\log\Gamma(z+1)/z$ has a removable singularity for
$z=0$ the result follows because
$|\Log(az)|\ge |\ln(a|z|)|\to \infty$ for $|z|\to 0,z\in\mathcal A$.$\quad\square$
\begin{lemma}\label{thm:behoncircles} For $a>0$ we have the radial
behaviour
\begin{equation}\label{eq:radial}
\lim_{r\to\infty} F_a(re^{i\t})=1\;\mbox{for}\; -\pi<\t<\pi,
\end{equation}
and there exists a constant $C_a>0$
such that for $k=1,2,\ldots$ and $-\pi<\t<\pi$
\begin{equation}\label{eq:bhc1}
|F_a((k+\tfrac12)e^{i\t})|\le C_a.
\end{equation}
\end{lemma}
{\it Proof.} We first note that
\begin{equation}\label{eq:quot}
F_a(z)=F_1(z)\frac{\Log(z)}{\Log(az)},
\end{equation}
and since
$$
\lim_{|z|\to\infty,z\in\mathcal A}\frac{\Log(z)}{\Log(az)}=1
$$
it is enough to prove the results for $a=1$. We do this by using a
method introduced in \cite[Prop. 2.4]{B:P1}.
Define
$$
R_k=\{ z = x+iy \in \mathbb C \,\mid \,-k\leq x < -k+1,\,
0<y\leq 1\, \}\;\;\mbox{for}\;\; k\in \mathbb Z
$$
and
$$
R=\cup_{k=0}^{\infty}R_k,\quad S=\{ z = x+iy \in \mathbb C \,\mid
\,x\le 1,|y|\le 1\}.
$$
By Lemma~\ref{thm:logGamma} it is clear that
\begin{equation}\label{eq:Mk}
M_k=\sup_{|\t|<\pi}|F_1((k+\tfrac12)e^{i\t})|<\infty
\end{equation}
for each $k=1,2,\ldots$, so it is enough to prove that $M_k$ is
bounded for $k\to\infty$.
Stieltjes (\cite[formula 20]{S}) found the following
formula for $\log \Gamma (z)$ for $z$ in the cut plane $\mathcal{A}$
\begin{equation}\label{eq:St1}
\log \Gamma (z+1)= \ln\sqrt{2\pi} + (z+1/2)\Log z -z +\mu(z).
\end{equation}
Here
$$
\mu(z)=\sum_{n=0}^{\infty} h(z+n)= \int_0^{\infty} \frac{P(t)}{z+t}dt,
$$
where $h(z)=(z+1/2)\Log (1+1/z) -1$ and $P$ is periodic with period
1 and $P(t)=1/2-t$ for $t\in [0,1[$. A derivation of these formulas can
also be found in \cite{Ar}. The integral above is improper,
and integration by parts yields
\begin{equation}\label{eq:mu}
\mu(z)=\frac{1}{2} \int_0^{\infty} \frac{Q(t)}{(z+t)^2}dt,
\end{equation}
where $Q$ is periodic with period 1 and $Q(t)=t-t^2$ for $t\in [0,1[$.
Note that by \eqref{eq:mu} $\mu$ is a completely monotonic
function. For further properties of Binet's function $\mu$ see \cite{K:P}.
We claim that
$$
|\mu(z)|\leq \frac{\pi}{8} \;\mbox{for}\; z \in \mathcal
A\setminus S.
$$
In fact, since $0\leq Q(t)\leq 1/4$, we get for $z=x+iy
\in \mathcal A$
$$
|\mu(z)|\leq \frac{1}{8}\int_0^{\infty}\frac{dt}{(t+x)^2+y^2}.
$$
For $x>1$ we have
$$
\int_0^{\infty}\frac{dt}{(t+x)^2+y^2}
\leq \int_0^{\infty}\frac{dt}{(t+1)^2} =1,
$$
and for $x\le 1, |y|\geq 1$ we have
$$
\int_0^{\infty}\frac{dt}{(t+x)^2+y^2}
= \int_x^{\infty}\frac{dt}{t^2+y^2} < \int_{-\infty}^\infty \frac{dt}{t^2+1}=\pi.
$$
Since
$$
F_1(z) =
1+ \frac{ \ln\sqrt{2\pi} + 1/2\Log z -z +\mu(z)}{z\Log z},
$$
for $z \in \mathcal A$, we immediately get \eqref{eq:radial} and
\begin{equation}
\label{eq:upper1}
|F_1(z)|\le 2
\end{equation}
for all $z\in \mathcal A\setminus S$ for which $|z|$ is sufficiently
large. In particular, there exists $N_0\in\mathbb N$ such that
\begin{equation}\label{eq:bhc2}
|F_1((k+\tfrac12)e^{i\t})|\le 2\;
\mbox{for}\; k\ge N_0,\;(k+\tfrac12)e^{i\t}\in\mathcal A\setminus S.
\end{equation}
By continuity the quantity
\begin{equation}\label{eq:cont}
c=\sup\left\{ |\log\Gamma(z)|\;\mid\;z=x+iy,\tfrac12\le x\le 1,0\le y \le 1\right\}
\end{equation}
is finite.
We will now estimate the quantity $|F_1((k+\tfrac12)e^{i\t})|$ when
$(k+\tfrac12)e^{i\t}\in S$, and since
$F_1(\overline{z})=\overline{F_1(z)}$, it is enough to consider the
case when $(k+\tfrac12)e^{i\t}\in R_{k+1}$. To do this we use the
relation
\begin{equation}
\label{eq:smart}
\log \Gamma (z+1) = \log \Gamma (z+k+1)-\sum_{l=1}^{k} \Log (z+l)
\end{equation}
for $z \in \mathcal A$ and $k\in\mathbb N$. Equation \eqref{eq:smart} follows from the
fact that the functions on both sides of the equality sign are holomorphic
functions in $\mathcal A$, and they agree on the positive half-line by
repeated applications of the functional equation for the Gamma function.
For $z=(k+\tfrac12)e^{i\t}\in R_{k+1}$ we get $|\log\Gamma(z+k+1)|\le
c$ by \eqref{eq:cont}, and hence by \eqref{eq:smart}
$$
|\log\Gamma(z+1)|\le c+\sum_{l=1}^k |\Log(z+l)|\le
c+k\pi+\sum_{l=1}^k |\ln|z+l||.
$$
For $l=1,\ldots,k-1$ we have $k-l<|z+l|<k+2-l$, hence
$0<\ln|z+l|<\ln(k+2-l)$. Furthermore, $1/2\le |z+k|\le \sqrt{2}$,
hence $-\ln 2<\ln|z+k|\le (\ln 2)/2$. Inserting this we get
$$
|\log\Gamma(z+1)|\le c+k\pi +\sum_{j=2}^{k+1} \ln j <
c+k\pi+k\ln(k+1).
$$
From this we get for $z=(k+\tfrac12)e^{i\t}\in R_{k+1}$
\begin{equation}\label{eq:upper2}
|F_1(z)|\le \frac{c+k\pi+k\ln(k+1)}{(k+\tfrac12)\ln(k+\tfrac12)}
\end{equation}
which tends to 1 for $k\to\infty$. Combined with \eqref{eq:bhc2} we
see that there exists $N_1\in\mathbb N$ such that
$$
|F_1((k+\tfrac12)e^{i\t})|\le 2\;\mbox{for}\; k\ge N_1,\,-\pi<\t<\pi,
$$
which shows that $M_k$ from \eqref{eq:Mk} is a bounded sequence.
$\quad\square$
\begin{lemma}\label{thm:boundFa} Let $a>0$. For $k=1,2,\ldots$ there exists an
integrable function
$f_{k,a}:\left]-k,-k+1\right[\to\left[0,\infty\right]$ such that
\begin{equation}\label{eq:boundFa}
|F_a(x+iy)|\le f_{k,a}(x)\;\mbox{for}\; -k<x<-k+1,0<y\le 1.
\end{equation}
\end{lemma}
{\it Proof.} For $z=x+iy$ as above we get using \eqref{eq:smart}
$$
|\log\Gamma(z+1)|\le |\log\Gamma(z+k+1)|+\sum_{l=1}^k|\Log(z+l)|
\le L+k\pi +\sum_{l=1}^k|\ln|z+l||,
$$
where $L$ is the maximum of $|\log\Gamma(z)|$ for
$z\in\overline{R_{-1}}$.
We only treat the case $k\ge 2$ because the case $k=1$ is a simple
modification combined with Lemma~\ref{thm:behatzero}.
For $l=1,\ldots,k-2$ we have $1<|z+l|<1+k-l$, and for $l=k-1,k$
$\ln|x+l|\le \ln|z+l|\le (1/2)\ln 2$, so we find
\begin{equation}\label{eq:boundFa1}
|\log\Gamma(z+1)|\le L+k\pi+\sum_{j=2}^k \ln j+|\ln|x+k-1||+|\ln|x+k||,
\end{equation}
so as $f_{k,1}$ we can use the right-hand side of \eqref{eq:boundFa1}
divided by $(k-1)\ln(k-1)$. Using \eqref{eq:quot} we next define
$$
f_{k,a}(x)=f_{k,1}(x)\max_{z\in\overline{R_k}}\frac{|\Log
z|}{|\Log(az)|}.
$$
$\quad\square$
\medskip
{\bf Proof of Theorem~\ref{thm:1}}
For fixed $w\in\mathcal A\setminus\{1/a\}$ we choose $\varepsilon>0,k\in\mathbb N$ such that
$\varepsilon < |w|,1/a < k+\tfrac12$ and consider the positively oriented contour
$\gamma(k,\varepsilon)$ in
$\mathcal A$ consisting of the half-circle $z=\varepsilon e^{i\t},\t\in
[-\tfrac{\pi}2,\tfrac{\pi}2]$ and the half-lines $z=x \pm i\varepsilon,x\le 0$
until they cut the circle $|z|=k+\tfrac12$, which closes the contour.
By the residue theorem we find
$$
\frac{1}{2\pi i}\int_{\gamma(k,\varepsilon)}
\frac{F_a(z)}{z-w}\,dz=F_a(w)+\frac{\ln\Gamma(1+1/a)}{1/a-w}.
$$
We now let $\varepsilon\to 0$ in the contour integration. By
Lemma~\ref{thm:behatzero} the contribution from the half-circle with
radius $\varepsilon$ will tend to zero, and by Lemma~\ref{thm:behonR} and Lemma~\ref{thm:boundFa} we get
$$
\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{F_a((k+\tfrac12)e^{i\t})}{(k+\tfrac12)e^{i\t}-w}(k+\tfrac12)e^{i\t}\,d\t
+\int_{-k-\tfrac12}^0 \frac{d_a(-t)}{t-w}\,dt=F_a(w)+\frac{\ln\Gamma(1+1/a)}{1/a-w}.
$$
For $k\to\infty$ the integrand in the first integral converges to 1
for each $\t\in\left]-\pi,\pi\right[$ and by
Lemma~\ref{thm:behoncircles} Lebesgue's theorem on dominated
convergence can be applied, so we finally get
$$
F_a(w)=1+\frac{\ln\Gamma(1+1/a)}{w-1/a}-\int_0^\infty\frac{d_a(t)}{t+w}\,dt.
$$
The last integral above appears as an improper integral, but we shall
see that the integrand is
Lebesgue integrable. We show below that $d_a(t)\ge 0$ when $a\ge 1/2$
and for these values of $a$ the integrability is obvious. The function
$d_a$ tends to 0 for $t\to 0$ and has a logarithmic singularity at
$t=1$ so $d_a$ is integrable over $]0,1[$. For $k-1<t<k,\;k\ge 2$ we
have
\begin{equation}\label{eq:intda}
d_a(t)=\frac{(\ln(t))^2+\pi^2}{(\ln(at))^2+\pi^2}d_1(t)+\frac{(k-1)\ln
a}{t\left((\ln(at))^2+\pi^2\right)},
\end{equation}
and the factor in front of $d_1(t)$ is a bounded continuous function
with limit 1 at 0 and at infinity. Therefore
$$
\int_1^\infty \frac{|d_a(t)|}{t}\,dt<\infty
$$
follows from the finiteness of the corresponding integral for $a=1$
provided that we establish
$$
S:=\sum_{k=2}^\infty
(k-1)\int_{k-1}^k\frac{dt}{t^2\left((\ln(at))^2+\pi^2\right)}<\infty.
$$
Choosing $N\in\mathbb N$ such that $aN>1$, we can estimate
\begin{eqnarray*}
S &<& \sum_{k=1}^\infty \int_{ka}^{(k+1)a}\frac{dt}{t(\ln^2(t)+\pi^2)}
<\int_a^{Na}\frac{dt}{t(\ln^2(t)+\pi^2)}+\sum_{k=N}^\infty\int_{ka}^{(k+1)a}\frac{dt}{t\ln^2(t)}\\
&=&\int_a^{Na}\frac{dt}{t(\ln^2(t)+\pi^2)}+\frac{1}{\ln(aN)}<\infty.
\end{eqnarray*}
We next examine positivity of $d_a$.
For $0<t<1$ we have
$$
d_a(t)=\frac{\ln|\Gamma(1-t)|}{t((\ln(at))^2+\pi^2)}>0
$$
because $\Gamma(s)>1$ for $0<s<1$.
For $k\ge 2$ and $t\in\left]k-1,k\right[$ the numerator $N_a$ in $d_a$ can
be written
$$
N_a(t)=\ln\Gamma(k-t)+\sum_{l=1}^{k-1}\ln\frac{ta}{t-l},
$$
where we have used the functional equation for $\Gamma$, hence
$$
N_a(t) \ge \sum_{l=1}^{k-1}\ln\frac{k}{k-l}+(k-1)\ln a=(k-1)\ln
k-\ln\Gamma(k)+(k-1)\ln a,
$$
because $\Gamma(k-t)>1$ and $t/(t-l)$ is decreasing for
$k-1<t<k$. From \eqref{eq:St1} we get
\begin{equation}\label{eq:lnGamma}
\ln\Gamma(k)=\ln\sqrt{2\pi} +(k-1/2)\ln k -k +\mu(k)
\end{equation}
and in particular for $k=2$
$$
\mu(2)=2-\frac{3}{2}\ln 2-\ln\sqrt{2\pi}.
$$
Using \eqref{eq:lnGamma} we find
$$
N_a(t)\ge k - \frac12\ln k -\ln\sqrt{2\pi}-\mu(k) +(k-1)\ln a \ge k -
\frac12\ln k -2+\frac{3}{2}\ln 2 +(k-1)\ln a,
$$
because $\mu$ is decreasing on $]0,\infty[$ as shown by \eqref{eq:mu}.
For $a\ge 1/2$ and $k-1<t<k$ with $k\ge 2$ we then get
$$
N_a(t)\ge k(1-\ln 2)-\frac12\ln k +\frac{5}{2}\ln 2-2\ge 0,
$$
because the sequence $c_k,k\ge 2$ on the right-hand side is increasing with $c_2=0$.
We also see that $d_a(t)$ tends to infinity for $t$ approaching the
end points of the interval $]k-1,k[$. For $z=1/a+iy,y>0$ we get from
\eqref{eq:Farep}
$$
\Im F_a(1/a+iy)=-\frac{\ln\Gamma(1+1/a)}{y}+\int_0^\infty\frac{yd_a(t)}{(1/a+t)^2+y^2}\,dt.
$$
The last term tends to 0 for $y\to 0$ while the first term tends to
$-\infty$ when $0<a<1$. This shows that $F_a$ is not a Pick function
for these values of $a$.
$\quad\square$
\begin{rem}\label{thm:bettera} {\rm We proved in Theorem~\ref{thm:1}
that $d_a(t)$ is non-negative on $[0,\infty[$ for $a\ge 1/2$. This
is not best possible, and we shall explain that the smallest value of $a$ for which
$d_a(t)$ is non-negative is $a_0=0.3681154742..$.
Replacing $k$ by $k+1$ in the numerator $N_a$ for $d_a$ given by
\eqref{eq:darep}, we see that
$$
N_a(t)=\ln|\Gamma(1-t)|+k\ln(at) \;\;\mbox{for}\;\; t\in
]k,k+1[,\;\;k=1,2,\ldots
$$
is non-negative if and only if
$$
\ln(1/a) \le \ln(k+s)+\frac{1}{k}\ln|\Gamma(1-k-s)| \;\;\mbox{for}\;\; s\in
]0,1[,\;\;k=1,2,\ldots,
$$
and using the reflection formula for $\Gamma$ this is equivalent to
$\ln(1/a)\le \rho(k,s)$ for all $0<s<1$ and all $k=1,2,\ldots$, where
\begin{equation}\label{eq:rho}
\rho(k,s)= \ln (k+s)-\frac{1}{k}\ln\left(\Gamma(k+s)\frac{\sin(\pi
s)}{\pi}\right).
\end{equation}
Using Stieltjes' formula \eqref{eq:St1}, we find that
\begin{eqnarray}\label{eq:necs}
\lefteqn{\rho(k,s)= 1+\frac{\ln(\pi/2)}{2k}}\nonumber\\
&&-(1/k)\left[(s-1/2)\ln(s+k)+\ln\sin(\pi s)
-s+\mu(s+k)\right]
\end{eqnarray}
for all $s\in\left]0,1\right[$ and $k=1,2,\ldots$.
For fixed $s\in \left]0,1\right[$ we see that $\rho(k,s)\to 1$ as
$k\to\infty$, so
$\ln(1/a) \le 1$ is a necessary condition for non-negativity of
$d_a(t)$. This condition is not sufficient, because for $\ln(1/a)=1$
the inequality $1\leq \rho(k,s)$
is equivalent to
$$
0\ge (1/2)\ln(2/\pi)+(s-1/2)\ln(s+k)+\ln\sin(\pi s)
-s+\mu(s+k)
$$
which does not hold when $k$ is sufficiently large and $1/2<s<1$.
For each $k=1,2,\ldots$ it is easy to verify that the function $\rho_k(s)=\rho(k,s)$ has a unique minimum
$m_k$ over $]0,1[$, and clearly
\begin{equation}\label{eq:smallest}
\ln(1/a_0)=\inf\{m_k,k\ge 1\}
\end{equation}
determines the
smallest value of $a$ for which $d_a(t)$ is non-negative. Using Maple
one obtains that $m_k$ is decreasing for $k=1,\ldots,510$ and increasing
for $k\ge 510$ with limit 1. Therefore $m_{510}=\inf m_k=0.9993586013..$ corresponding to $a_0=0.3681154742..$.
We add that $m_1=1.6477352344..$, $m_{178}=1.0000028637..$, $m_{179}=0.9999936630..$.}
\end{rem}
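The numerical values quoted in the remark can be reproduced directly from \eqref{eq:rho}; the sketch below (a plain grid search, not the Maple computation used by the authors) recovers $m_1$, $m_{510}$ and $a_0$:

```python
# Numerical illustration of the remark: m_k = min over s in ]0,1[ of
# rho(k, s) = ln(k+s) - (1/k) ln( Gamma(k+s) sin(pi s) / pi ),
# computed here by a simple grid search in s.
import math

def rho(k, s):
    return math.log(k + s) - (math.lgamma(k + s)
                              + math.log(math.sin(math.pi * s))
                              - math.log(math.pi)) / k

def m(k, grid=100_000):
    # rho(k, s) -> +infinity at s = 0 and s = 1, so the open grid suffices
    return min(rho(k, i / grid) for i in range(1, grid))

assert abs(m(1) - 1.6477352344) < 1e-6
m510 = m(510)
assert abs(m510 - 0.9993586013) < 1e-6
assert abs(math.exp(-m510) - 0.3681154742) < 1e-6  # a_0 = exp(-m_510)
```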
\section{Properties of the function $f$}
{\bf Proof of Theorem~\ref{thm:2}} The function
$$
\ln f(x)=\frac{(x/2)\ln \pi-\ln\Gamma(1+x/2)}{x\ln x}
$$
clearly has a meromorphic extension to $\mathcal A\setminus\{1\}$ with a
simple pole at $z=1$ with residue $\ln 2$. We denote this meromorphic
extension $\log f(z)$ and have
$$
\log f(z+1)=\frac{\ln\sqrt{\pi}}{\Log(z+1)}-\frac12 F_2\left(\frac{z+1}2\right).
$$
Using the representation \eqref{eq:Farep}, we immediately get
\eqref{eq:logf}. It is well-known that $1/\Log(z+1)$ is a Stieltjes
function, cf. \cite[p.130]{B:F}, and the integral representation is
\begin{equation}\label{eq:log}
\frac{1}{\Log(z+1)}=\int_1^\infty\frac{dt}{(z+t)((\ln(t-1))^2+\pi^2)}.
\end{equation}
It follows that $\ln(\sqrt{e}f(x+1))$ is a Stieltjes function, in
particular completely monotonic, showing that $\sqrt{e}f(x+1)$ belongs
to the class $\mathcal L$ of logarithmically completely monotonic
functions studied in \cite{F:G:C} and in \cite{B1}. Therefore also
$f(x+1)$ is completely monotonic.$\quad\square$
\section{Representation of $1/F_a$}
For $a>0$ we consider the function
\begin{equation}\label{eq:Ga}
G_a(z)=1/F_a(z)=\frac{z\Log(az)}{\log\Gamma(z+1)}
\end{equation}
which is holomorphic in $\mathcal A$ with an isolated singularity at
$z=1$, which is a simple pole with residue $\ln a/\Psi(2)=\ln
a/(1-\gamma)$ if $a\ne 1$, while it is a removable singularity when
$a=1$. Here $\Psi(z)=\Gamma'(z)/\Gamma(z)$ and $\gamma$ is Euler's constant.
\begin{thm}\label{thm:Ga} For $a>0$ the function $G_a$ has the integral representation
\begin{equation}\label{eq:Garep}
G_a(z)=1+\frac{\ln a}{(1-\gamma)(z-1)}+\int_0^\infty\frac{\rho_a(t)}{z+t}\,dt,
\quad z\in\mathcal A\setminus\{1\},
\end{equation}
where
\begin{equation}\label{eq:rhoarep}
\rho_a(t)=t\frac{\ln|\Gamma(1-t)|+(k-1)\ln(at)}{(\ln
|\Gamma(1-t)|)^2+((k-1)\pi)^2}\quad \mbox{for}
\quad t\in \left]k-1,k\right[,\quad k=1,2,\ldots,
\end{equation}
and $\rho_a(0)=1/\gamma, \rho_a(k)=0,\;k=1,2,\ldots$, which makes
$\rho_a$ continuous on $[0,\infty[$.
We have $\rho_a(t)\ge 0$ for $t\ge 0,\;a\ge a_0=0.3681154742..$, cf. Remark~\ref{thm:bettera}, and $G_a(x+1)$ is a Stieltjes function
for $a\ge 1$ but not for $0<a<1$.
\end{thm}
{\it Proof.} We notice that
for $-k<t<-k+1,\, k=1,2,\ldots$ we get using Lemma~\ref{thm:logGamma}
$$
\lim_{y\to 0^+}
G_a(t+iy)=\frac{t(\ln(a|t|)+i\pi)}{\ln|\Gamma(1+t)|-i(k-1)\pi},
$$
and for $t=-k,k=1,2,\ldots$ we get
$$
\lim_{y\to 0^+}
|G_a(-k+iy)|=0
$$
because of the poles of $\Gamma$,
hence $\lim_{y\to 0^+}\Im G_a(t+iy)=-\pi \rho_a(-t)$ for $t<0$.
For fixed $w\in\mathcal A\setminus\{1\}$ we choose $\varepsilon>0,k\in\mathbb N$ such that
$\varepsilon < |w|,1 < k+\tfrac12$ and consider the positively oriented contour
$\gamma(k,\varepsilon)$ in $\mathcal A$ which was used in the proof of Theorem~\ref{thm:1}.
By the residue theorem we find
$$
\frac{1}{2\pi i}\int_{\gamma(k,\varepsilon)}
\frac{G_a(z)}{z-w}\,dz=G_a(w)+\frac{\ln a}{(1-\gamma)(1-w)}.
$$
We now let $\varepsilon\to 0$ in the contour integration. The contribution
from the $\varepsilon$-half circle tends to 0 and we get
$$
\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{G_a((k+\tfrac12)e^{i\t})}{(k+\tfrac12)e^{i\t}-w}(k+\tfrac12)e^{i\t}\,d\t
-\int_{-k-\tfrac12}^0 \frac{\rho_a(-t)}{t-w}\,dt=G_a(w)+\frac{\ln a}{(1-\gamma)(1-w)}.
$$
Finally, letting $k\to\infty$ we get \eqref{eq:Garep}, leaving the
details to the reader. Clearly, $\rho_a\ge 0$ if and only if $d_a$
defined in \eqref{eq:darep} is non-negative. It follows that
$G_a(x+1)$ is a Stieltjes function for $a \ge 1$ but not for $0<a<1$,
since in the latter case $\Im G_a(1+iy)>0$ for $y>0$ sufficiently small.
$\quad\square$
\begin{rem} {\rm The integral representation in Theorem~\ref{thm:Ga}
was established in \cite[(6)]{B:P1} in the case of $a=1$. Since
$$
G_a(z)=G_1(z)+\ln(a)\frac{z}{\log\Gamma(z+1)},
$$
the formula for $G_a$ can be deduced from the formula for $G_1$ and
the following formula
\begin{equation}\label{eq:zlogGamma}
\frac{z}{\log\Gamma(z+1)}=\frac{1}{(1-\gamma)(z-1)}+\int_0^\infty
\frac{\tau(t)dt}{z+t},\quad z\in\mathcal A\setminus\{1\},
\end{equation}
where
\begin{equation}\label{eq:tau}
\tau(t)=\frac{(k-1)t}{(\ln
|\Gamma(1-t)|)^2+((k-1)\pi)^2}\quad \mbox{for}
\quad t\in \left]k-1,k\right[,\quad k=1,2,\ldots.
\end{equation}
}
\end{rem} |
0912.1329 | \section{Introduction}
One popular measure for evaluating the quality of scheduling algorithms
is the maximum completion time (makespan) of any job.
Typically the objective is to find a schedule that minimizes
the maximum completion time over all jobs. If jobs arrive over
time, or have release times, then we measure the maximum response time
or waiting time of a job.
In this paper we consider a generalization of these measures.
Our goal is to minimize $T$, such that a {\em given fraction} of jobs
can be scheduled with a response time of at most $T$.
For example, we could claim to provide a much better
response time for $95\%$ of the jobs, and allow the remaining $5\%$
of jobs to have worse response times.
While this measure is not
completely fair, in many applications it makes sense to provide excellent
service to the majority of jobs, while ignoring a few jobs. In addition, the
jobs in this context are not ``critical'' in the sense of real-time scheduling
where drastic consequences follow if a job is not done in time (such as in
flight controllers, space shuttle navigation etc), hence it makes sense to
consider a model where a small number of jobs may be dropped.
Using this measure we could obtain schedules where {\em most} jobs have a much lower
response time, even though the maximum response time is higher.
\iffalse
There has been a lot of interest lately in data
dissemination services, where clients request information
from a source. Advances in networking and the need
to provide data to mobile and wired devices have led to
the development of large-scale data dissemination
applications (election results, stock market information etc).
While the WWW provides a platform for developing these
applications, it is hard to provide a completely
scalable solution. Hence researchers have been focusing
their attention on Data Broadcasting methods.
\fi
Broadcasting is an appropriate mechanism to disseminate data since
multiple clients can have their requests satisfied simultaneously.
A large amount of work in the database and algorithms literature
has focused on scheduling problems based on a broadcasting model
(including several PhD theses from Maryland and Brown)
\cite{BarNoy,Bhatia,Aksoy,Acharya,AksoyF,AFZ,BM,W,AW}.
There are two primary kinds of models that have been studied -- the first
kind is a {\em push}-based scheme, where some assumptions are made
on the access probability for a certain data item and a broadcast
schedule is generated \cite{AAFZ,Bhatia,BarNoy,Young,AW}. We focus our
attention on the second kind, namely
{\em pull-based} schemes, where clients request the
data that they need (for example via phone lines)
and the data is delivered on a fast broadcast medium (often using satellites)
\cite{AksoyF}. This model is motivated by wireless web applications.
This work deals entirely with the {\em pull-based} model, where
requests for data arrive over time and a good broadcast schedule needs
to be created.
A key consideration is the design of a good broadcast schedule.
The challenge is in designing an algorithm that {\em guarantees
good response time}.
A lot of work has been done on minimizing the average response time for
broadcast scheduling in both the online \cite{BM,EP02,EP04,KPV00}
and offline settings \cite{KPV00,BM,EH,GK,GKKW,GKPS,BCKN}.
In trying to evaluate the performance of online algorithms,
it is useful to compare them to an optimal offline solution.
\iffalse
In addition, when the demands are known for a small window
of time into the future (also called the look-ahead model in
online algorithms) being able to quickly compute an
optimal offline solution can be extremely useful.
Many kinds of demands for data (e.g., web traffic) exhibit good
predictability over the short term, and thus knowledge
of requests in the immediate future leads to a situation where
one is trying to compute a good offline solution.
\fi
One could also view the requests in the offline problem
as release times of jobs, and one is interested
in minimizing the {\em maximum response time}.
One crucial
difference between broadcast scheduling problems and traditional scheduling problems
is the fact that scheduling a job satisfies
{\em many} requests simultaneously. (The term ``overlapping jobs'' has
also been used to describe such scheduling problems in the past.)
The informal description of the problem is as follows. There are
$n$ data items, $1, \ldots, n$, called pages. Time is broken into ``slots''.
A time slot is defined as the unit of time to transmit one
page on the wireless channel.
A request for a page $j$ arrives at time $t$ and then waits. When
page $j$ has been transmitted, this request has been satisfied.
The difference between the broadcast time and the time at which the
request was made is the response time of the request.
Arrival times of requests for pages are known in advance, and one
problem of interest is to
find a broadcast schedule that {\em minimizes the maximum response time}.
This problem was first studied by Bartal and Muthukrishnan \cite{BM}.
They showed that there is a $2$-approximation for the offline
problem, and claimed that FIFO is a
$2$-competitive algorithm for the online version.
The idea behind the offline algorithm is the following. If we know the
optimal value $T^*$, then consider the last request for a page $p$ at time
$t$. We will certainly schedule this page within $T^*$ steps. We can thus
remove all requests for page $p$ for $T^*$ time units before $t$, since they
will be satisfied within $2T^*$ time units. Now
we can use Earliest Deadline First as a scheduling policy since each
page now satisfies exactly one request (no overlaps). Note that
this method crucially assumes that all pages are scheduled within $T^*$
steps and thus we cannot use this argument for our problem, where only
a certain fraction of requests are satisfied within $T^*$ time units.
Note that the maximum response time could be very high if there is
some small period of time when a large number of different pages
are requested. For example, if all $n$ pages are requested
at time $t$, then the makespan is at least $n$. The trivial schedule
that broadcasts all $n$ pages in turn achieves an optimal makespan in this
case; however, it forces the maximum response time to be very high for most
jobs.
In our model, however, if these requests represent a small fraction of
the input, we can ignore them and provide a very low response time
for the remaining jobs.
To address this problem we consider the following
scheduling problem that generalizes the problem of minimizing the makespan.
Given a schedule, we want to minimize the maximum response time of $N' \leq N$
requests. Suppose there are $N$ requests; the way we defined the
problem earlier, we took the maximum response time over all $N$
requests. Now we are allowed to ignore the response time for a small
number of requests $(N-N')$ and the cost is measured by the
maximum response time of the remaining $N'$ requests. Going back to the
previous situation, by ignoring many of the requests for the $n$ different
pages, we could provide a very small response time for the majority of the
requests. This model can be applied to many different problems where
it is not crucial to schedule all jobs quickly, but at the same time we would
like to provide a fast response time. This makes sense especially in
situations where there is burstiness in the input and there are a few periods
of time when many distinct pages are requested. In this situation, the
capability to drop a few requests could give us much
more satisfactory schedules
where the majority of jobs are satisfied with a low response time.
However this makes the problem harder, as we have to make a decision
as to which requests to drop.
\subsection{Related work}\ \
Although the specific formulation of broadcast scheduling we present
here has not been studied before, we mention some related work on
variants of broadcast scheduling and of outlier formulations of
optimization problems in other contexts.
One possible way to combat the sensitivity of the maximum completion
time measure is to use the average completion time measure
instead.
The paper by Kalyanasundaram et al. \cite{KPV00}
studies the problem of minimizing average response time.
They showed that for any fixed $\epsilon, 0 < \epsilon\le
\frac{1}{3}$, it is possible to obtain a $\frac{1}{\epsilon}$-speed
$\frac{1}{1-2\epsilon}$-approximation algorithm for minimizing the
average response time, where a $k$-speed algorithm is one where
the server is allowed to broadcast $k$ pages in each time slot.
For example by setting $\epsilon = \frac{1}{3}$ they obtain
a 3-speed, 3-approximation.
The approximation factor bounds the cost of the $k$-speed solution
compared to the cost of an optimal $1$-speed solution.
(This kind of approximation guarantee is also referred to
as a ``bicriteria'' bound in many papers.)
Note that one cannot simply set $\epsilon = \frac{1}{2}$
to get a $2$-speed, constant-factor approximation, since the factor
$\frac{1}{1-2\epsilon}$ diverges as $\epsilon \rightarrow \frac{1}{2}$
(and the result requires $\epsilon \le \frac{1}{3}$).
Their algorithm is based on rounding a fractional solution for
a ``network-flow'' like problem that is obtained from an
integer programming formulation.
This problem was recently shown to be NP-hard by Erlebach and Hall \cite{EH}
(see \cite{GK} for a simpler proof).
Recently, Gandhi et al.\ (see the journal version of \cite{GKPS}) obtained a
2-speed 1-approximation, improving the results of \cite{GKKW,EH,GKPS}.
Bansal et al. \cite{BCS} recently obtained an $O(\log^2 n)$ approximation
for this measure without any increase in the speed, improving on the
previous best result of $O(\sqrt{n})$ \cite{BCKN}.
Another problem that has been considered before is that of maximizing
throughput in broadcast scheduling. Here, the model is that every request
is associated with a deadline and some requests can be dropped by the
algorithm. The goal is to maximize the number of requests satisfied
before their deadlines.
Bar-Noy et al.\ \cite{BarNoy2}
gave a $1/2$-approximation for this problem.
This was improved to factor $3/4$ by Gandhi et al. \cite{GKPS}
and recently to $5/6$ by Bansal et al. \cite{BCS}.
The form of robust measure we use (i.e., exclude part of the input so
as to minimize an objective function on the rest) for broadcast
scheduling has been studied in other contexts before.
For clustering and facility location, \cite{CKMN}
showed that one can obtain $O(1)$ approximation algorithms for
several problems under this robust measure that allows the
exclusion of a certain number of outliers. One could also view the plethora
of work on the $k$-MST problem in this vein.
\subsection{Outline of Results}\ \
We demonstrate the applicability of this robust maximum response
time measure
in the context of broadcast scheduling.
Our main result is a constant-factor, polynomial-time offline approximation
algorithm for the problem of minimizing the maximum response time for a given
fraction of jobs in the context of broadcast scheduling.
The algorithm is combinatorial and achieves an approximation factor of 5.
We show that in the online
setting no constant-factor competitive algorithm is possible
for this problem.
This contrasts with the situation
for minimizing maximum response time for broadcast scheduling,
where FIFO gives a 2-approximation \cite{BM}.
In the online model we consider, the algorithm is required
to construct a schedule in an online fashion but does not need to
commit to which requests are dropped.
At the end of the request sequence, the requests with the longest
completion times are dropped and the performance on the remaining
requests is compared to the offline optimal.
Our lower bound also holds for randomized algorithms against
oblivious adversaries.
\subsection{Formal Problem Definition}\ \
The problem is formally stated as follows.
There are $n$ possible pages,
$P=\{1, 2, \ldots, n\}$. We assume that time is discrete and
at time $t$, any subset of pages can be requested.
Let $(p,t)$ represent a request for page $p$ at time $t$, and let $r^p_t$
denote the number of requests $(p,t)$.
A time slot $t$ is the window of
time between time $t-1$ and time $t$. The server can broadcast a page in
each time slot.
When a page is broadcast in time-slot $t$, we will simply say that it
has been broadcast at time $t$.
We say that a request $(p,t)$ is satisfied at
time $S^p_t$, where $S^p_t$ is the first time instant {\em after} $t$ when page
$p$ is broadcast. In this paper, we work in the offline setting in
which the server is aware of all future requests.
Our goal is to
schedule the broadcast of pages in a way so as to minimize the maximum
response time of most requests. Formally, let $S$ be a set
of requests. Let $r(S)= \sum_{(p,t) \in S} r^p_t$. In other
words $r(S)$ is the total number of requests corresponding to
a set of $(p,t)$ pairs.
The objective is defined as
\[ \min_{S| r(S) \ge N'} \max_{(p,t) \in S} (S^p_t-t) .\]
In other words, we wish to find the minimum $T$ such that
at least $N'$ requests can be satisfied with response time at most $T$.
When $N'=N$, the total number of requests, this problem is {\em exactly}
the problem of minimizing the maximum response time. The problem of
minimizing the maximum response time is
not known to be $NP$-hard. However, when $N'$ is arbitrary, the problem has
been claimed to be $NP$-hard (R. Gailis, personal communication (2003)).
Consider the example shown in Fig.~\ref{fig:example}. The table on
the left shows requests for the three pages $A,B,$ and $C$ at
different times. One optimal schedule for this instance broadcasts
pages $B,C,A,B,C$ at times $1,2,3,4,5$ respectively. The table on the
right of Fig.~\ref{fig:example} shows the response time for each request in the
optimal schedule. The maximum response time is 3. Note that if
we only compute the maximum response time for 13 out of 15 requests, then it
can be reduced to 2. For example, scheduling $B,A,C,A,C,B$ gives a maximum
response time of 2 for 13 requests, and a response time of 4 for $(B,2)$
(see Fig.~\ref{fig:ex0}).
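This arithmetic can be checked mechanically. The following sketch (the helper name \texttt{response\_times} is ours, purely illustrative) encodes the definition of response time, $S^p_t - t$, where $S^p_t$ is the first broadcast of page $p$ strictly after $t$, and reproduces the numbers above:

```python
def response_times(requests, schedule):
    """requests: list of (page, time, count); schedule[i] is the page
    broadcast in time slot i+1. Returns a map (page, time) -> response time."""
    rt = {}
    for page, t, _count in requests:
        # S^p_t: first time strictly after t at which page p is broadcast
        s = next(i + 1 for i, q in enumerate(schedule)
                 if q == page and i + 1 > t)
        rt[(page, t)] = s - t
    return rt

# Request counts r^p_t from Fig. fig:example.
requests = [('A', 0, 3), ('A', 1, 2), ('A', 2, 2),
            ('B', 0, 2), ('B', 2, 2),
            ('C', 1, 2), ('C', 4, 2)]

# Optimal schedule B,C,A,B,C at times 1..5: maximum response time 3.
rt_opt = response_times(requests, ['B', 'C', 'A', 'B', 'C'])

# Schedule B,A,C,A,C,B: every request has response time at most 2,
# except (B,2), whose 2 requests wait 4 slots (13 of 15 served within 2).
rt_alt = response_times(requests, ['B', 'A', 'C', 'A', 'C', 'B'])
```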
\begin{figure*}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|llll|c|c|c|c|c|c|} \cline{2-7}\cline{12-17}
&\multicolumn{1}{c}{ }& \multicolumn{4}{c}{Input:$r^p_t$} & & & & & &
\multicolumn{5}{c}{Response time: $S^p_t-t$} & \\ \cline{2-7}\cline{12-17}
& & t=0 & t=1 & t=2 & t=3 & t=4 & & & & & &t=0 & t=1 & t=2 & t=3 & t=4 \\
\cline{2-7}\cline{12-17}
&page A & 3 & 2 & 2 & 0 & 0 & & & & & page A & 3 & 2 & 1 & 0 &
0 \\ \cline{2-7}\cline{12-17}
&page B & 2 & 0 & 2 & 0 & 0 & & & & & page B & 1 & 0 & 2 & 0 & 0
\\ \cline{2-7}\cline{12-17}
&page C & 0 & 2 & 0 & 0 & 2 & & & & & page C & 0 & 1 & 0 & 0 & 1
\\ \cline{2-7}\cline{12-17}
\end{tabular}
\caption{The table on the left is an example input and the table on
the right shows the response time for each request in an optimal
schedule of broadcasting pages $B,C,A,B,C$ at times
$1,2,3,4,5$ respectively.}
\label{fig:example}
\end{center}
\end{figure*}
\begin{figure*}
\centerline{ \psfig{figure=figures/ex0.eps,height=2in}}
\caption{An example showing a lower response time schedule for scheduling
$N'=13$ requests.}
\label{fig:ex0}
\end{figure*}
\iffalse
Figure~\ref{fig:exp} shows the tradeoff between maximum response
time and the number of requests scheduled for randomly generated
instances, which were obtained as follows. The instances consisted
of request sequences for 30 pages
over 40 timesteps. The maximum number of requests at any timestep
was controlled by a parameter MaxDemand. (Experiments were run with
MaxDemand = 240,360,480 and 600). For each time instant, the number
of requests was chosen uniformly in the interval [1,MaxDemand].
Further, each request was assigned independently to one of the
30 pages according to a Zipfian distribution.
The graph shows the fraction of requests that can be satisfied
for different values of the maximum response time ranging from
1 to 30.
\begin{figure*}
\caption{Tradeoff between maximum response time and number of requests
scheduled for randomly generated instances.}
\label{fig:exp}
\end{figure*}
\fi
\section{Offline Approximation Algorithm}
We are given a request sequence with $N$ requests.
Suppose the optimal algorithm can satisfy $N' \leq N$
requests with maximum response time $T$.
We will describe an algorithm that satisfies at least $N'$
requests with maximum response time at most $5T$.
We first give an {\em overview} of the algorithm.
Assume that the algorithm knows the optimum maximum response
time $T$ (in fact we can try all possible values of $T$).
The algorithm identifies portions of the request sequence of
length $T$ where the same page has been requested many times
(we refer to these as dense segments).
The goal is to schedule broadcasts of pages in order to
satisfy requests in such dense segments of the request sequence.
There are two problems we need to overcome in implementing this
basic idea.
Firstly, the identified dense segments might overlap.
Secondly, it may not be possible to satisfy requests in
all such dense segments.
The algorithm adopts a greedy approach.
We initially start with an empty broadcast schedule and
add pages to the schedule one by one.
For each page added, we allow a window of size $2T$ when
this page could be broadcast.
In fact we do not commit to exact times when pages will be
broadcast until the end -- the working broadcast schedule
consists of a set of intervals with the understanding
that each interval will eventually have a corresponding page
broadcast in the actual schedule.
A working schedule (a set of intervals) is said to be feasible
if there is a real broadcast schedule such that each interval $I_p$
in the working schedule has a corresponding page $p$ broadcast
during the interval $I_p$.
Pages are added to the broadcast schedule as follows:
We identify the most dense segment
and attempt to add a page to the current broadcast schedule
so as to satisfy the requests in the dense segment.
We check to see if we can add an interval corresponding to this
page to the current working schedule and still ensure that
the resulting working schedule is feasible.
If the new interval can be added, we do so,
and delete some requests from the request sequence (in an interval
of size $3T$)
that would be satisfied within a delay of $5T$
by the broadcast of the newly added page.
If adding the interval violates feasibility, we do nothing.
We then repeat this procedure with the next dense segment.
\subsection{Algorithm details}\ \\
\begin{figure*}
\centerline{ \psfig{figure=figures/ex1.eps,height=1in}}
\caption{An example of a request sequence.}
\label{fig:ex1}
\end{figure*}
We represent a request sequence by an ordered pair; the request $(p,t)$ refers
to a request for page $p$ at time $t$.
Given any set of requests $R$, we define {\em yield$(R,p,t)$} as the number
of requests from $R$ that are satisfied with a response time of at most $T$
if we broadcast page $p$ at time $t$.
Thus, {\em yield$(R,p,t)$} is the total number of requests for page $p$
at times $t-T,t-T+1,\ldots,t-1$ in the set $R$.
Consider the example shown in Fig.~\ref{fig:ex1}. Let $T=3$.
Note that {\em yield$(R,A,3)$} is 5. Also note that
{\em yield(R,A,4)} is 6 since the request $(A,0)$ cannot be included
in {\em yield(R,A,4)} when $T=3$.
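As a sketch, {\em yield} is simply a window sum; since the request sequence of Fig.~\ref{fig:ex1} is given only pictorially, we illustrate it below on the request counts of Fig.~\ref{fig:example} instead (the function name is ours):

```python
def yield_count(R, p, t, T):
    """Number of requests in R satisfied with response time at most T by
    broadcasting page p at time t, i.e. the requests for p at times
    t-T, t-T+1, ..., t-1. R maps (page, time) -> request count."""
    return sum(R.get((p, s), 0) for s in range(t - T, t))

# Request counts r^p_t from Fig. fig:example, used here with T = 3.
R = {('A', 0): 3, ('A', 1): 2, ('A', 2): 2,
     ('B', 0): 2, ('B', 2): 2,
     ('C', 1): 2, ('C', 4): 2}
```

For instance, broadcasting $A$ at time 3 covers the requests at times $0,1,2$, while broadcasting it at time 4 drops the requests $(A,0)$ from the window.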
\begin{figure*}
\centerline{ \psfig{figure=figures/ex2.eps,height=1.5in}}
\caption{Example to show inserted Interval $I_p[t+T,t+3T]$. Core is
shown by the rectangle.}
\label{fig:ex2}
\end{figure*}
We represent the current working solution by a set $S$
of intervals $\{I_p[t_s,t_e]\}$.
An interval $I_p[t_s,t_e] \in S$
has $t_s<t_e$ and indicates that
page $p$ must be scheduled between times $t_s$ and $t_e$\footnote{This would correspond
to time-slots $t_{s}+1, t_{s}+2, \ldots, t_{e}$.}.
The set $S$ of intervals is said to be feasible if every interval
$I_p[t_s,t_e] \in S$ can be assigned a unique broadcast of page $p$
in the final schedule in time slots $[t_s+1,t_e]$.
Note that we stipulate that two overlapping intervals for page $p$
must be assigned distinct broadcasts of page $p$.
This makes checking for feasibility very simple.
In order to check for feasibility of a set of intervals
$S$, we build a bipartite graph with the intervals
$I_p[t_s,t_e] \in S$ on one side and the timeslots on the other.
An interval $I_p[t_s,t_e]$ is connected to timeslots
$t_s+1,t_s+2,\ldots,t_e$.
Feasibility of $S$ corresponds to checking for the existence of
a matching in this bipartite graph that matches every interval in $S$ to
some timeslot.
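As an illustrative sketch (the function names are ours; any matching subroutine will do), this feasibility test can be implemented with the standard augmenting-path (Kuhn) algorithm on the interval-to-slot graph just described:

```python
def feasible(intervals, horizon):
    """intervals: list of (t_s, t_e); interval k may use any time slot in
    t_s+1, ..., t_e, and each slot serves at most one interval. Returns
    True iff some matching saturates every interval (Kuhn's algorithm)."""
    slot_of = {}  # time slot -> index of the interval currently matched to it

    def augment(k, seen):
        t_s, t_e = intervals[k]
        for slot in range(t_s + 1, min(t_e, horizon) + 1):
            if slot in seen:
                continue
            seen.add(slot)
            # take the slot if free, or re-route its current interval
            if slot not in slot_of or augment(slot_of[slot], seen):
                slot_of[slot] = k
                return True
        return False

    return all(augment(k, set()) for k in range(len(intervals)))
```

For example, three intervals confined to a window of two slots are correctly rejected, matching the counting argument used later in the analysis.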
We now describe the algorithm precisely.
Let $T_{max}$ be the maximum arrival time of a request in the request
sequence.
\noindent{\bf Algorithm Construct-Schedule}\\
\noindent{\bf Input:} request sequence $R = \{(p,t)\}$, maximum response time $T$.
\begin{enumerate}
\item Let $S$ be the set of intervals in the working schedule
(initially empty).
\item Let $Q$ be the set of all ordered pairs $(p,t)$ where $p$ ranges over all
pages and $t \in [1,T_{max}+T]$.
\item Repeat until $Q$ is empty:
\begin{enumerate}
\item Find the pair $(p,t) \in Q$ with the maximum value of
{\em yield$(R,p,t)$}.
\item Let $G(A,B,E) = \mbox{Construct-Assignment-Graph}(S \cup \{I_p[t+T,t+3T]\})$.
\item If $G(A,B,E)$ has a matching saturating $A$,
add $I_p[t+T,t+3T]$ to $S$ and delete all requests
$(p,t'), t' \in [t-2T,t+T]$ from $R$.
\item Remove $(p,t)$ from $Q$.
\end{enumerate}
\item Let $G(A,B,E) = \mbox{Construct-Assignment-Graph}(S)$.
\item Let $M$ be a matching in $G(A,B,E)$ saturating $A$.
\item Construct final broadcast schedule as follows:
If $M$ matches interval $I_p[t_s,t_e]$ to timeslot $t$, then
broadcast page $p$ at time $t$.
\end{enumerate}
The function that constructs the assignment graph between
intervals and timeslots is as follows:
\noindent{\bf Function Construct-Assignment-Graph}\\
\noindent{\bf Input:} Set $S = \{I_p[t_s,t_e]\}$.
\begin{enumerate}
\item $A$ has a vertex corresponding to every interval in $S$.
\item $B$ has a vertex corresponding to every timeslot
$t \in [1,T_{max} + 4T]$.
\item For every vertex $v \in A$ (say $v$ corresponds to interval
$I_p[t_s,t_e] \in S$), place edges from $v$ to the vertices
in $B$ corresponding to timeslots $t_s+1,t_s+2,\ldots,t_e$.
\item return $G(A,B,E)$.
\end{enumerate}
There is a feasible schedule in which each interval $I_p[t_s,t_e] \in S$
has a corresponding (unique) broadcast of page $p$ in the final schedule
during time slots $[t_s+1,t_e]$
if and only if the bipartite graph constructed has a matching saturating
the interval side
\footnote{This can be proved quite easily. If a feasible solution exists,
then matching each interval node to the time slot in which its page is
broadcast yields such a matching; conversely, such a matching directly
specifies a feasible schedule. Since these graphs are convex bipartite graphs,
even a greedy algorithm can be used to find the matching \cite{Lawler}.}.
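Since the assignment graphs are convex bipartite, a greedy earliest-deadline-first sweep can stand in for the matching test. The following end-to-end sketch of Construct-Schedule uses that sweep and, as an explicit simplification of ours, stops once the best remaining yield is zero (such pairs remove no requests); all names are illustrative:

```python
import heapq

def edf_feasible(intervals, horizon):
    """Each (t_s, t_e) needs its own slot in t_s+1..t_e; an earliest-
    deadline-first sweep over slots decides feasibility for such
    interval constraints (valid for convex bipartite graphs)."""
    by_release = sorted(intervals)
    heap, i, matched = [], 0, 0
    for slot in range(1, horizon + 1):
        while i < len(by_release) and by_release[i][0] < slot:
            heapq.heappush(heap, by_release[i][1])  # push deadline t_e
            i += 1
        if heap and heap[0] < slot:   # some deadline has already passed
            return False
        if heap:
            heapq.heappop(heap)
            matched += 1
    return matched == len(intervals)

def construct_schedule(R, T, pages, t_max):
    """Greedy loop of Construct-Schedule (sketch). R maps (page, time) to
    a request count; returns the accepted intervals (t_s, t_e, page)."""
    R = dict(R)
    S = []
    horizon = t_max + 4 * T
    Q = [(p, t) for p in pages for t in range(1, t_max + T + 1)]
    while Q:
        # pair (p, t) of maximum yield
        p, t = max(Q, key=lambda pt: sum(R.get((pt[0], s), 0)
                                         for s in range(pt[1] - T, pt[1])))
        if sum(R.get((p, s), 0) for s in range(t - T, t)) == 0:
            break  # simplification: remaining pairs satisfy no requests
        trial = [(a, b) for a, b, _ in S] + [(t + T, t + 3 * T)]
        if edf_feasible(trial, horizon):
            S.append((t + T, t + 3 * T, p))
            for s in range(t - 2 * T, t + T + 1):
                R.pop((p, s), None)  # these requests are served within 5T
        Q.remove((p, t))
    return S
```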
\subsection{Analysis}\ \\
In order to prove the correctness of the algorithm, we would
like to show that the total number of requests served by the
algorithm's schedule (within a response time of $5T$) is at least
the number of requests served by the optimal schedule (within
a response time of $T$).
To do this, we construct an injective mapping from the
requests served by the optimal schedule to the requests
served by the algorithm's schedule.
This is not straightforward since the pages served by the
algorithm may be very different from those served by the optimal.
If the algorithm's broadcast of a page $p$ covers requests satisfied
by OPT's broadcast of the same page $p$, such requests served by
OPT are also served by the algorithm.
If on the other hand, the algorithm's broadcast of a page $p$ does
not cover OPT's broadcasts completely, we devise a
matching between such requests covered by OPT and not covered
by the algorithm to requests covered by the algorithm and not
covered by OPT.
The existence of such a matching is shown via Hall's theorem,
using the properties of the algorithm
\footnote{Informally, the difficulty is that as requests are removed, some
pages that are scheduled by OPT may have their yield reduced and
are thus never scheduled by the algorithm. However, we need to argue
that we will still cover at least as many requests as covered by OPT.}.
\noindent
{\bf Definition:}
The core of an interval $I_p[t+T,t+3T]$ added to $S$ is the interval of time
$[t-T,t]$ which corresponds to the pair $(p,t)$ chosen with
maximum yield (see rectangle in Fig.~\ref{fig:ex2}).
\iffalse
\begin{lemma}
\label{lem:disjoint}
(Disjointness Lemma)
If two intervals are chosen by the algorithm
corresponding to the same page $p$ then they are disjoint.
\end{lemma}
\begin{proof}
The first interval chosen, namely $I_p[t+T,t+3T]$,
removes all requests for page $p$ from the entire window of size
$3T$ before it (see Fig.~\ref{fig:ex2}).
Assume a second interval $I'_p$ is chosen for the same page.
If the two intervals overlap, then their cores
overlap as well. But the first interval $I_p[t+T,t+3T]$ would have forced the
yield of the core of the second interval $I'_p$ to be zero,
since all requests for page $p$ were removed from $R$ in the
entire region where the cores overlap. Hence it
could not have been chosen by the algorithm.
\end{proof}
\fi
Suppose there exists a schedule (OPT)
that satisfies $N'$ requests with a maximum
response time of $T$. Let $p_1, p_2, \ldots, p_{T'}$ be such a
schedule. Suppose
$p_t$ is the page broadcast at time slot $t$.
Let {\em yield($p_t$)} denote the number of requests satisfied by
the broadcast of page $p_t$ in OPT.
(This is the number of requests made before time $t$ for this page,
going back until either
the previous time slot when this page was broadcast, or going back
$T$ steps. For example, in Fig.~\ref{fig:ex3} we have $N'=19$ and we have
the yield as shown for each page chosen by OPT.)
\begin{figure*}
\centerline{ \psfig{figure=figures/ex3.eps,height=1.5in}}
\caption{Example to show yield function in an optimal schedule.
We have: $yield(p_1)=3,yield(p_2)=7,yield(p_3)=2,yield(p_4)=3,
yield(p_5)=4$.}
\label{fig:ex3}
\end{figure*}
In the proof, we show a mapping from each page $p_t$ to at most two intervals
in set $S$. Each page output by OPT is mapped to a primary
interval and a secondary interval.
For each page $p_t$ in OPT, we will
define two numbers {\em primary($p_t$)} and
{\em secondary($p_t$)}, such that they sum to {\em yield($p_t$)}.
Several pages may be mapped to the same interval
$I_p[t_s,t_e]$
as either primary or secondary.
Let {\em P($I_p[t_s,t_e]$)} and {\em S($I_p[t_s,t_e]$)}
be the sets of pages of OPT that are mapped to this interval
as primary and secondary respectively.
Let Requests($I_p[t_s,t_e]$) denote the number of
requests removed (hence satisfied) by the interval
$I_p[t_s,t_e]$.
We {\em will show} that
\[ \mbox{Requests}(I_p[t_s,t_e]) \geq \]
\[ \sum_{p_t \in {P}(I_p[t_s,t_e])} \mbox{primary}(p_t) +
\sum_{p_t \in {S}(I_p[t_s,t_e])} \mbox{secondary}(p_t). \]
The number of requests satisfied by OPT is
\[ N' = \sum_{i=1}^{T'} \mbox{yield}(p_i) =
\sum_{i=1}^{T'} (\mbox{primary}(p_i) + \mbox{secondary}(p_i)) = \]
\[\sum_{I_p[t_s,t_e] \in S} (\sum_{p_t \in {P}(I_p[t_s,t_e])}
\mbox{primary}(p_t)
+ \sum_{p_t \in {S}(I_p[t_s,t_e])} \mbox{secondary}(p_t)) \]
\[ \le
\sum_{I_p[t_s,t_e] \in S} \mbox{Requests}(I_p[t_s,t_e]). \]
This shows that we schedule at least as many requests as OPT.
When we insert an interval $I_p[t+T,t+3T]$ we remove all requests
for page $p$ that were made between times $t-2T$ and $t+T$. These
requests will have a response time of at most $5T$ due to the
interval $I_p[t+T,t+3T]$.
We think of these requests as satisfied by this interval.
These removed requests may contribute to {\em yield($p_{t'}$)}
for pages $p_{t'}$ broadcast in OPT.
Such pages $p_{t'}$
(though not all such) are assigned to this interval as either
primary or secondary and the contribution assigned to this interval
is the number of requests (contributing to {\em yield($p_{t'}$)})
removed by the interval.
Consider the insertion of interval $I_p[t+T,t+3T]$ which
removes all requests
for page $p$ between times $t-2T$ and $t+T$.
If OPT
schedules page $p$ at a time $t'$ with $t-2T < t' < t-T$, then
that broadcast is mapped to the interval $I_p[t+T,t+3T]$ as secondary,
provided some of its requests (i.e., requests counted in {\em yield($p_{t'}$)})
are removed by $I_p[t+T,t+3T]$. The number of
requests so removed is defined as secondary($p_{t'}$).
(Also, for all such pages $p_{t'}$ assigned to the interval,
the current yield of $p_{t'}$ is set to
{\em yield($p_{t'}$)} minus secondary($p_{t'}$).)
If OPT schedules any page $p$ at time $t'$
with $t-T \le t' \le t+T$ then it is mapped to
the interval $I_p[t+T,t+3T]$ as its primary interval.
Observe that every broadcast of page $p$ by OPT in this range of $t'$ that is
mapped to $I_p[t+T,t+3T]$ as primary in fact has all of its
requests satisfied within a delay of at most $5T$, since $p$ is
scheduled during the interval $I_p[t+T,t+3T]$.
We define primary($p_{t'}$) as the number of requests removed
(i.e. the number of requests counted in {\em yield($p_{t'}$)}
that are removed by the interval).
{\em Note that the total number of requests removed (i.e. satisfied)
by the interval $I_p[t_s,t_e]$ is at least
as large as the total (current) yield of the pages $p$ of OPT
that are assigned as primary as well as the pages $p$ of OPT
that are assigned as secondary.}
Consider a page $p_{t'}=p'$ of OPT that
may have lost some requests to an interval $I_{p'}[t_s,t_e]$
when the insertion of this interval removes requests for $p'$.
In fact, we show that it can lose requests at most once due to
a secondary mapping.
\begin{lemma}
Every page $p_{t'}$ in OPT is assigned to at most one
interval via a secondary mapping and at most one
interval via a primary mapping.
\end{lemma}
\begin{proof}
If $p_{t'}$ loses any requests due to a primary mapping, then it loses
all its requests. This is because $t-T \le t' \le t+T$, and all its
requests are made in the window $[t'-T,t']$ which is contained in
$[t-2T,t+T]$. Recall that
all requests for $p'$ are removed from this window of time.
If $t-2T < t' < t-T$ then it may lose requests when an interval
is inserted, but need not lose all of them. For this to happen a
second time, the cores of the two inserted intervals would have to overlap.
However, the second interval's core would then contain no requests for
page $p'$, since the first insertion removed all such requests, and hence
that interval could not have been chosen with positive yield.
\end{proof}
Notice that not all requests served by OPT are necessarily mapped
by the mapping described above.
Similarly, not every interval in $S$ receives a primary assignment.
Intervals in $S$ that do get primary assignments are referred
to as {\em assigned} and the remaining intervals in $S$
are referred to as {\em unassigned}.
We now describe how to map the remaining requests
served by OPT (not mapped previously) to unassigned intervals
in $S$.
We construct an auxiliary bipartite graph between pages broadcast in OPT
and the intervals of $S$, and find a matching saturating the remaining pages
in order to define their primary mapping. The bipartite graph
is constructed as follows.
Consider a page $p_t$ of OPT. Assume that this page was not assigned
a primary mapping, and $p_t$ has not lost all of its requests
due to a secondary mapping.
However, for each page $p_t$ scheduled
by OPT, the algorithm will consider the pair $(p,t)$ at some
point in its execution. If its yield has become
zero, then it means that all of its requests have been removed
by the intervals added to $S$ and thus it is mapped due to a primary
or secondary mapping. Otherwise, the algorithm made an attempt to
insert the interval $I_p[t+T,t+3T]$ and failed to insert it into $S$.
The reason for this is that there is a window of time from $t_1$
to $t_1+L_1$, with $t_1 \le t+T \le t+3T \le t_1+L_1$, that
already contains $L_1$ intervals belonging to
set $S$ that were previously chosen by the algorithm.
Note that some of
these $L_1$ intervals are assigned (i.e. already have been mapped to
via a primary mapping) and the rest are unassigned.
We add edges in the bipartite graph between the unassigned page $p_t$
of OPT and
all unassigned intervals in the time window $t_1$ to $t_1+L_1$.
Note that $t \in [t_1-2T,t_1+L_1-2T]$. This will be useful later.
Our objective is to map the remaining unassigned pages in
OPT to an unassigned interval in $S$
via a primary mapping. Note that the interval it is mapped to, may actually
be for a different page.
We will now show that a matching exists in this bipartite graph,
that maps all unassigned pages of OPT to unassigned intervals in $S$. This
defines the primary mapping. In addition we will show that under this
primary mapping
the total number of requests removed by the interval
(Requests($I_p[t_s,t_e]$)) is at least as
large as
\[ \sum_{p_t \in {P}(I_p[t_s,t_e])} \mbox{primary}(p_t) +
\sum_{p_t \in {S}(I_p[t_s,t_e])} \mbox{secondary}(p_t). \]
If unassigned page $p_t$ is connected to unassigned interval
$I_{p'}[t'+T,t'+3T]$ in this graph, the pair $(p',t')$ must
have been considered before the pair $(p,t)$ by the algorithm.
This means that the number of requests of page $p'$ in $[t'-T,t']$
is at least the current yield of $p_t$.
Since the interval $I_{p'}[t'+T,t'+3T]$ has not received
a primary assignment, no requests for page $p'$ in the core
$[t'-T,t']$ have been used up to account for requests served by OPT.
Thus the portion of {\em yield($p_t$)} that is not
yet assigned to intervals in $S$ can be charged to the unassigned
interval it is connected to in the auxiliary bipartite graph.
In order to show that the auxiliary bipartite graph has a matching,
we will verify that the conditions for applying Hall's theorem hold.
Consider any subset $S'$ of unassigned pages of OPT and consider
their neighborhood in the set of unassigned intervals in the auxiliary
bipartite graph.
The goal is to show that the size of the neighborhood is at least
$|S'|$.
Each unassigned page of OPT is connected to all
the unassigned intervals in some window of time $t_i$ to $t_i+L_i$.
There are exactly $L_i$ such intervals. For the subset of unassigned
pages, the neighborhood can be viewed as a collection of windows
$[t_i,t_i+L_i]$. The neighborhood in the bipartite graph consists
of all the unassigned intervals in the union of the windows.
First, we give a lower bound on the total number of intervals
(assigned as well as unassigned) contained in the union of the time
windows.
\begin{lemma}
\label{lem-useful}
Suppose we have windows $[t_i,t_i+L_i]$ such that each window
$[t_i,t_i+L_i]$ contains exactly $L_i$ intervals of the algorithm's
collection, and suppose the union of the windows has length $L$.
Then the union contains exactly $L$ intervals of the
algorithm's collection.
\end{lemma}
\begin{proof}
We prove the claim for the union of two windows. Consider
two windows of lengths $L_1$ and $L_2$. Let $C$ be the length of
their common portion (possibly $0$). Then the length of the
union is $L_1+L_2-C$. Let $x$ be the number of intervals in
the algorithm's collection that are strictly contained in
the common portion. By the feasibility condition maintained
by the algorithm, $x \le C$. Now the number of intervals in
the union is at least $L_1+L_2-x \ge L_1+L_2-C$. Of course, by
the feasibility condition maintained by the algorithm, this
number cannot exceed $L_1+L_2-C$, hence must be equal.
By induction, this argument can be extended to the union of
any finite number of intervals.
\end{proof}
By Lemma~\ref{lem-useful}, if the union of windows $[t_i,t_i+L_i]$
has length $L$ then there are
has length $L$ then there are
exactly $L$ intervals in the neighborhood of $S'$. These $L$ intervals
are either assigned or unassigned by a primary mapping.
We claim that there are at least $|S'|$ unassigned intervals
in this set.
Every interval $I_p[t+T,t+3T]$ contained in this
union, that has already received a primary assignment,
must have been assigned a page $p_{t'}$ in the optimal schedule
with $t' \in [t-T,t+T]$.
Note that $t'$ lies in the interval $[t-T,t+T]$, which is simply the interval
$[t+T,t+3T]$ shifted back by $2T$.
The pages $p_{t'}$
in OPT with primary assignments to these intervals must
be contained in the union of the intervals $[t_i-2T,t_i+L_i-2T]$.
Each assigned interval must receive a distinct page in OPT's
schedule.
Thus the number of unassigned intervals contained in the union
of the windows $[t_i,t_i+L_i]$ is at least the number of unassigned pages of
OPT in the union of the intervals $[t_i-2T,t_i+L_i-2T]$.
But note that the set $S'$ of unassigned pages is contained
in this union.
Hence the number of unassigned intervals is at least $|S'|$.
Since Hall's condition is satisfied, the bipartite graph has
a perfect matching as claimed.
Putting all the pieces of the analysis together, we obtain the
following theorem:
\begin{theorem}
The total number of requests served by the algorithm's schedule
within a response time of $5T$ is at least the number of
requests served by OPT within response time $T$.
\end{theorem}
\section{Lower bound for Online Algorithms}
We consider the following model for online algorithms for broadcast
scheduling:
The algorithm receives the sequence of requests online and must
decide the schedule of pages to be broadcast in an online fashion.
At the end of the request sequence, we determine the $N'$ requests
with the lowest completion times and compute the maximum completion
time on this set.
Our lower bounds hold even when the parameter $N'$ as well as the
total number of requests $N$ are specified ahead of time.
Note that our model gives a lot of flexibility to the online algorithm.
The selection of requests which contribute to the maximum completion
time is done at the end, in a manner most beneficial to the algorithm.
An alternate model would be one where the algorithm must specify every
time it services a request whether it should count towards the maximum
completion time and these decisions cannot be changed later.
Clearly, our lower bounds apply to this stricter model as well.
We show that no randomized online algorithm can be constant
competitive in this model.
Our lower bounds hold for randomized algorithms against an oblivious
adversary.
In order to prove a lower bound on randomized algorithms, we use
Yao's principle and give a distribution over request sequences
such that any deterministic algorithm does badly.
Let $A$ be a set of $n^2$ distinct pages numbered $1 \ldots n^2$.
Let $B$ be a set of $n^2$ distinct pages numbered $n^2+1 \ldots 2n^2$.
In addition, we have a separate page numbered $0$.
The request sequence will consist of a total of $N=3n^2$ requests.
The goal is to schedule $N'= 2n^2$ requests.
The adversary issues the request sequence in two parts.
The first part is fixed and the second part is chosen from
one of two possibilities at random.
The first part of the request sequence is as follows:
At each time $0,n,2n,3n,\ldots,(n-1)n$ requests arrive for
some $n$ of the distinct pages in $A$.
More specifically, at time $kn$, $0 \leq k \leq n-1$, requests arrive
for pages $kn+1, \ldots, (k+1)n$.
Further, at each time $t$, $0 \leq t \leq n^2-1$, a request arrives
for page $0$.
The second part of the request sequence is as follows:
With probability $(1-1/n)$, $n^2$ requests for page $0$ arrive
at time $n^2$.
With probability $(1/n)$, requests for the $n^2$ pages in $B$ arrive
at time $n^2$.
First we claim that the expected value of the optimal solution is $O(1)$.
Consider the two possible choices for the request sequence.
Suppose the request sequence had $n^2$ requests for page $0$ at time $n^2$.
Then the optimal strategy is to schedule all $2n^2$ requests for
page $0$ as soon as they arrive, with a maximum completion time of $1$.
On the other hand, suppose the request sequence had requests for the
$n^2$ pages in $B$ at time $n^2$.
(This happens with probability $1/n$).
Then, there is a feasible schedule that schedules $2n^2$ pages with
a maximum completion time of $n$.
In order to achieve this, in the first part of the request sequence,
we broadcast page $0$ at time $n, 2n, \ldots n^2$, satisfying all $n^2$
requests for page $0$.
At each other time $t \in \{1,\ldots,n^2-1\}$ (i.e., $t$ not a multiple
of $n$), we broadcast the page numbered $t$
from $A$, satisfying $n^2-n$ requests from $A$.
Further, at times $t \in \{n^2+1,\ldots n^2+n\}$, we broadcast
the page numbered $t$ from $B$.
As claimed, this satisfies $2n^2$ requests with a maximum
completion time of $n$.
Recall that this possibility occurs with probability $1/n$.
Thus the expected value of the optimal solution is $O(1)$.
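The distribution above is easy to instantiate. In the sketch below (a convenience of ours, not part of the proof), requests are (arrival time, page) pairs with $A=\{1,\ldots,n^2\}$, $B=\{n^2+1,\ldots,2n^2\}$, and the separate page $0$; the expected optimum $(1-1/n)\cdot 1 + (1/n)\cdot n = 2-1/n$ is $O(1)$, as claimed.

```python
def request_sequence(n, second_part):
    """Adversary's input as (arrival_time, page) pairs.

    Pages 1..n^2 form the set A, pages n^2+1..2n^2 form B, and
    page 0 is the separate page. second_part is 'zeros' (chosen
    with probability 1 - 1/n) or 'B' (probability 1/n).
    """
    reqs = []
    for k in range(n):                      # n fresh pages of A at time k*n
        reqs += [(k * n, p) for p in range(k * n + 1, (k + 1) * n + 1)]
    reqs += [(t, 0) for t in range(n * n)]  # page 0 at every time slot
    if second_part == 'zeros':
        reqs += [(n * n, 0)] * (n * n)      # n^2 more requests for page 0
    else:                                   # the n^2 distinct pages of B
        reqs += [(n * n, p) for p in range(n * n + 1, 2 * n * n + 1)]
    return reqs

def expected_opt(n):
    """E[OPT] = (1 - 1/n) * 1 + (1/n) * n = 2 - 1/n = O(1)."""
    return (1 - 1 / n) * 1 + (1 / n) * n
```

Either branch yields $N = 3n^2$ requests in total, matching the construction.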
Now consider any deterministic online algorithm for the problem.
Consider the total number of requests from $A$ that are satisfied
by time $n^2$.
Suppose that at most $n^2 - n^{1.5}$ requests from $A$ have been scheduled.
Then with probability $1/n$, we claim that the maximum completion
time will be $\Omega(n^{1.5})$ giving an expected value of $\Omega(\sqrt{n})$.
Consider the case when $n^2$ distinct pages from $B$ are requested
at time $n^2$.
Since at most $2n^2-n^{1.5}$ requests have been satisfied at time $n^2$,
the additional requests needed to be satisfied will have a maximum
completion time of $\Omega(n^{1.5})$.
On the other hand, suppose that more than $n^2-n^{1.5}$ requests from $A$
have been satisfied by time $n^2$.
Note that at least $n^2$ of the requests that arrive before time $n^2$
contribute to the maximum completion time of the online algorithm.
We claim that the maximum completion time in this case is
$\Omega(\sqrt{n})$.
Suppose that $n^2$ requests that arrive before time $n^2$ can
be completed with a maximum completion time of $\sqrt{n}/2$.
Then note that at most $n^{1.5}/2$ requests from $A$ can be
included in this set of $n^2$ requests.
Thus at least $n^2-n^{1.5}/2$ requests must consist of requests
to page $0$.
Since more than $n^2-n^{1.5}$ of the time slots before time $n^2$ are used
to broadcast distinct pages from $A$, at most $n^{1.5}$ time slots can be
devoted to satisfying these requests.
These requests arrive at a rate of one per time slot, so the maximum
completion time for these requests
must be $\Omega(\sqrt{n})$.
This implies that the expected cost of any deterministic algorithm for
the distribution over request sequences is $\Omega(\sqrt{n})$.
\begin{theorem}
No (randomized) online algorithm can be $c$-competitive, for any constant
$c$, for the problem of
minimizing the maximum response time for a specified fraction of requests.
\end{theorem}
\section{Conclusions}
This measure may be an interesting one for scheduling situations where
scheduling {\em every} job quickly is not as important as scheduling
most jobs quickly. It is clear that in the online setting, no $c$-competitive
algorithm is possible for any constant $c$. However, in the offline setting
we have been able to develop a constant factor approximation. It would be
interesting to close the gap and obtain a $2$-approximation (the best bound
known for minimizing the maximum response time). It would also
be nice to show that the problem of minimizing the maximum response time
is $NP$-hard.
It would be interesting to explore whether our ideas can be used to
improve the known results for minimizing average response time in
broadcast scheduling.
If the optimal average response time is $T$, it follows that
$1-\epsilon$ fraction of the requests can be served with
maximum completion time $T/\epsilon$.
Thus our methods can be used to obtain combinatorial
lower bounds on the average completion time.
The bound obtained by this technique appears to be different from
the LP based bound.
In addition, other problems such as scheduling tasks on unrelated
parallel machines may be interesting ones to study under the model
we have proposed. Previous work for minimizing makespan
gives a factor 2 approximation for
this problem \cite{LST}.
\section{The Greedy Algorithm}
In this section, we present a greedy algorithm that achieves
an approximation factor of $(2,1+\ln n)$.
The algorithm is similar to the standard set cover type greedy algorithm
and runs in iterations.
In each iteration, the most ``cost-effective'' set, i.e., the set that
maximizes the ratio of its incremental benefit to its cost,
is chosen and added to our solution set, until all elements are covered.
Given that a solution with activation cost $A$ and makespan $T$ exists,
at each step we wish to select a machine to activate based on its
``cost-effectiveness''.
Given a set $S$ of active machines, let $F(S)$ denote the maximum
number of jobs that can be scheduled with makespan $T$.
However, in this case, the quantity
$F(S)$ is NP-hard to compute, so efficient procedures are unlikely to exist
either to test the feasibility of the current set of active machines or to
find the most cost-effective machine to activate.
The central idea is that instead of using the integral function $F(S)$ that
is hard to compute,
we use a fractional relaxation that is much easier to compute,
and allows us to apply the greedy framework.
Formally, for a value $T$,
we first set all $p_{i,j}$'s that are larger than $T$ to infinity
(or the corresponding $x_{i,j}$ to 0).
Let $f(S)$ be the maximum number of jobs that can be fractionally processed by a
set $S$ of machines that are allowed to run for time $T$ each.
In other words,
\begin{eqnarray}
\label{eq:lp2}
& & f(S)=\max \sum_{i,j}x_{i,j} \\
&s.t.& \sum_{i \in M}x_{i,j} \leq 1 \,\,\,\forall j \in J\nonumber\\
& & \sum_{j \in J} p_{ij} x_{i,j}\leq T \,\,\,\forall i\in S \nonumber \\
& & 0\leq x_{i,j}\leq 1 \,\,\,\forall i,j ;\;\;\;\; x_{i,j}=0 \,\,\,\text{if }i \notin S \text{ or }p_{ij}>T \nonumber
\end{eqnarray}
Note that $f(S)$ can be computed by
using a general LP solver or by a generalized flow computation.
The generalized flow problem is the same as the traditional network flow problem
except that, for each arc $e$, there is a gain factor $\gamma(e)$ and for each
unit of flow that enters the arc $\gamma(e)$ units exit.
To see that $f$ can be computed by a generalized flow computation,
we add a sink $t$ to the bipartite graph $G(M\cup J, E)$ and connect each
job to $t$ with an arc with capacity $1$. Each edge $(i,j), i\in M, j\in J$
has a capacity $p_{ij}$ and gain factor $1/p_{ij}$. Every machine $i\in S$
has a flow excess of $T$. It is easy to see the maximum amount of flow that
reaches $t$
is exactly the optimal solution of LP (\ref{eq:lp2}).
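Although $f(S)$ is best computed by a generalized flow computation, on small instances one can simply hand LP~(\ref{eq:lp2}) to an off-the-shelf solver. The sketch below does so with SciPy's \texttt{linprog}; the use of SciPy and the dense constraint matrices are conveniences of ours, not part of the algorithm.

```python
from scipy.optimize import linprog

def f_of_S(S, p, T):
    """Value of LP (2): the maximum fractional number of jobs that the
    active machines in S can process, each running for time at most T.
    p[i][j] is the processing time of job j on machine i; variables
    x_ij with i not in S or p_ij > T are pinned to 0 via their bounds."""
    m, n = len(p), len(p[0])
    c = [-1.0] * (m * n)                    # maximize sum_ij x_ij
    A_ub, b_ub = [], []
    for j in range(n):                      # each job at most once
        row = [0.0] * (m * n)
        for i in range(m):
            row[i * n + j] = 1.0
        A_ub.append(row); b_ub.append(1.0)
    for i in range(m):                      # time budget T per machine
        row = [0.0] * (m * n)
        for j in range(n):
            row[i * n + j] = p[i][j]
        A_ub.append(row); b_ub.append(T)
    bounds = [(0.0, 1.0 if i in S and p[i][j] <= T else 0.0)
              for i in range(m) for j in range(n)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun
```

For instance, with two identical unit-time jobs and machines and $T=1$, activating one machine gives $f=1$ and activating both gives $f=2$.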
A function $z:2^N\rightarrow R$ is {\em submodular}
if $z(S)+z(P)\geq z(S\cap P)+z(S\cup P)$ for any $S,P\subseteq N$.
Let $z(S)$ be the maximum amount of flow that reaches $t$ starting with
the excesses at nodes in $S$.
Recently, Fleischer \cite{Fleischer09} proved the following:
\begin{lemma} (Fleischer)
\label{lm_submodular}
For any generalized flow instance, $z(S)$ is a submodular function.
\end{lemma}
It is a direct consequence that $f(S)$ is submodular.
Define $gain(i,S)=f(S\cup \{i\}) -f(S)$ for any $i\in M$ and $S\subseteq M$.
Our greedy algorithm starts with an empty set $S$ of active machines,
and in each iteration activates a machine $i$ that
maximizes $gain(i,S)\over a_i$, until
$f(S)> n-1$. We then round the fractional solution to an integral one
using the scheme by Shmoys and Tardos \cite{ST}.
\begin{figure}[h]
\begin{tabbing}
{\bf Algorithm GREEDY-SCHEDULING}\\
$S=\emptyset$; \\
{\bf While}($f(S)\leq n-1$) {\bf do} \\
\hspace{.2in} \= Choose $i\in M\setminus S$ such that
$gain(i,S)\over a_i$ is maximized; \\
\>$S=S\cup\{i\}$; \\
Activate the machines in set $S$;\\
Round $f(S)$ to an integer solution to find an assignment.\\
\end{tabbing}
\label{fig:alg}
\end{figure}
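For concreteness, here is a direct rendering of GREEDY-SCHEDULING, with the oracle for $f$ passed in as a function so that any solver for LP~(\ref{eq:lp2}) can be plugged in. The demo oracle is the (submodular) special case $p_{ij}\in\{0,\infty\}$, where $f(S)$ is simply the number of jobs processable by some machine in $S$; the instance itself is our own.

```python
def greedy_scheduling(M, a, f, n):
    """Algorithm GREEDY-SCHEDULING: start from S empty and, while
    f(S) <= n - 1, activate the machine i maximizing
    gain(i, S) / a_i = (f(S | {i}) - f(S)) / a_i."""
    S = set()
    while f(S) <= n - 1:
        i = max((i for i in M if i not in S),
                key=lambda i: (f(S | {i}) - f(S)) / a[i])
        S.add(i)
    return S

# Demo oracle: p_ij in {0, inf}, so f(S) counts the jobs that some
# machine in S can process (a nondecreasing submodular function).
covers = {0: {0, 1}, 1: {2}, 2: {0, 1, 2}}  # machine -> processable jobs
f = lambda S: len(set().union(*[covers[i] for i in S]))
S = greedy_scheduling([0, 1, 2], {0: 1.0, 1: 1.0, 2: 2.5}, f, n=3)
```

On this instance the greedy first activates machine $0$ (gain $2$ per unit cost), then machine $1$, reaching $f(S)=3>n-1$ at total cost $2$, cheaper here than activating machine $2$ alone.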
The problem is actually a special case of the submodular set cover problem:
$\min\{\sum_{j\in S}a_j \mid z(S)=z(N), S\subseteq N\}$
where $z$ is a nondecreasing submodular function.
In fact, Wolsey \cite{Wolsey82} shows the following result about the greedy algorithm, rephrased in our notation.
\begin{theorem} (Wolsey)
\label{Wolsey}
Let $S_t$ be the solution set we have chosen after iteration $t$ in the greedy algorithm.
Then,
$$
\sum_{i\in S_t}a_i \leq OPT \left(1+\ln {z(N)-z(\emptyset)\over z(N)-z(S_{t-1})}\right)
$$
where $OPT$ is the optimal solution.
\end{theorem}
In particular, if $f()$ is integer-valued, the theorem yields a $1+\ln n$ approximation.
However, $f()$ is not necessarily integral in our problem.
Therefore, we terminate iterations only
when more than $n-1$ (rather than $n$) fractional jobs are satisfied,
thus $f(M)-f(S_{t-1})\geq 1$ and Theorem \ref{Wolsey} gives us a
$(1+\ln n)$-approximation for the activation cost.
Finally, we would like to remark that the rounding step is guaranteed to
find a feasible integral solution although the fractional solution we start
with only satisfies more than $n-1$ jobs.
The reason lies in the construction by Shmoys and Tardos
(refer to \cite{ST} for more details).
\iffalse
Suppose ${\bf x}$ is the generalized flow that corresponds to $f(S)$ where $x_{ij}$ is the flow value on edge $(i,j)$.
We will constructs a new bipartite graph $B({\bf x})=(M'\cup J,E)$,
where $J$ is the set of jobs
and $M'=\{m_{is}, i\in M, s=1,2,\ldots,\lceil \sum_{j}x_{ij} \rceil \}$.
For simplicity of notation, assume for the moment that $p_{i1}\geq p_{i2}\geq \ldots, \geq p_{in}$.
There is an edge between job $j$ and node $m_{is}$
if $s-1\leq \sum_{k=1}^{j-1} x_{ik}< s$ or $s-1<\sum_{k=1}^j x_{ik}\leq s$.
It has been shown there is fractional matching $\bf x'$ of value
$\sum_{i,s,j}x'_{m_{is}j}=\sum x_{ij}$ in $B({\bf x})$.
Actually, $x'_{m_{is}j}=\left\{
\begin{array}{ll}
h_{ij}-s+1 ,& \hbox{if $h_{i,j-1}\leq s-1< h_{i,j}$} \\
s-h_{i,j-1}, & \hbox{if $h_{i,j-1}< s\leq h_{i,j}$} \\
x_{ij}, & \hbox{if $s-1<h_{i,j-1}< h_{i,j}<s$}
\end{array}
\right.$ is such a choice where $h_{i,j}=\sum_{k=1}^j x_{ik}$.
It is easy to verify that $\sum_{i,s,j}x'_{m_{is}j}=\sum_{i,j}x_{ij}$,
$\sum_{i,s}x'_{m_{is}j}=\sum_{i}x_{ij}\leq 1,\forall j$ and $\sum_j x'_{m_{is}j}\leq 1\forall m_{is}$.
So $\bf x'$ is a fractional matching.
\fi
In particular, there exists an integral matching such that
all jobs are matched. Moreover, it is also proven in \cite{ST} that
the job assignment induced by any integral matching
has a makespan at most $T+\max p_{ij}$. Therefore, our final makespan is at most $2T$.
\section{Introduction}
Large scale data centers have emerged as an extremely popular way to
store and manage a large volume of data. Most large corporations, such as
Google, HP, and Amazon, have dozens of data centers. These data centers
are typically composed of thousands of machines, and have extremely
high energy requirements. Data centers are
now being used by companies such as Amazon Web Services,
to run large scale computation tasks for
other companies who do not have the resources to create their
own data centers. This is in addition to their own computing
requirements.
These data centers are designed to be able to handle extremely high
work loads in periods of peak demand. However,
since the workload on these data centers fluctuates over time, we
could selectively shut down part of the system to save energy when
the demand on the system is low. Energy savings result not just
from putting machines in a sleep state, but also from reduced
cooling costs.
Hamilton (see the recent SIGACT News article \cite{Birman})
argues that a ten fold reduction in the power
needs of the data center may be possible if we can simply build
systems that are optimized with power management as their primary goal.
Suggested examples (summarizing from the original text) are:
\begin{enumerate}
\item
Explore ways to simply do less during surge load periods.
\item
Explore ways to migrate work in time. The work load on
modern cloud platforms is very cyclical, with infrequent peaks and
deep valleys. Even valley time is made more expensive by the need
to own a power supply to be able to handle the peaks, a number of
nodes adequate to handle surge loads, a network provisioned for
worst case demand.
\end{enumerate}
This leads to the issue of
{\em which machines we can shut down}, since the machines in a data center
are not necessarily identical.
Each machine stores some data, and is
thus not capable of performing every single job efficiently
unless some data is first migrated to the machine.
We will formalize this question very shortly.
To quote from the recent article by Birman et al. (SIGACT News \cite{Birman})
``Scheduling mechanisms that assign tasks to machines, but more
broadly, play the role of provisioning the data center as a whole.
As we'll see below, this aspect of cloud computing is of growing
importance because of its organic connection to power consumption: both
to spin disks, and run machines, but also because active machines
produce heat and demand cooling. Scheduling, it turns out, comes
down to deciding how to spend money.''
Data is replicated on storage systems for both load balancing during
peak demand periods, as well as for fault tolerance. Typically many
jobs have to be scheduled on the machines in the data center.
In many cases
profile information for a set of jobs
is available in advance, as well as estimates of cyclical workloads.
Jobs may be I/O intensive or CPU intensive,
in either case, an estimate of its processing time on each type of machine
is available.
Jobs that need to access
specific data can be assigned to any one of the {\em subset} of
machines that store the needed data. Our goal is to first {\em select} a
subset of machines to activate, and then schedule the jobs on the
active machines. In this respect our problem differs from standard
scheduling problems with multiple machines, where the set of active
machines is simply the set of all machines. Here we have to decide
{\em which machines to
activate} and then schedule all jobs on the active machines.
The scheduling literature is vast, and one can formulate
a variety of interesting questions in this model. We initiate
this work by focusing our attention on perhaps one of the most
widely studied machine scheduling problems since it matches
the requirements of the application.
We have a collection of jobs and unrelated machines, and need to decide
which subset of machines to activate. The jobs can only be scheduled
on active machines. This provides an additional dimension
for scheduling problems that was not previously considered.
This situation also makes sense when we have
a certain set of computational tasks to process, a cost budget, and can purchase access to
a set of machines.
One fundamental (and well studied) scheduling problem is
as follows: Given a collection of $n$ jobs, and $m$ machines where
the processing time of job $j$ on machine $i$ is $p_{i,j}$, assign all jobs to machines
such that the makespan, i.e., the time when all jobs are complete, is minimized.
This problem is widely referred to as {\em unrelated parallel machine scheduling}
\cite{LST,ST}. If machine $i$ does not have the data that job $j$
needs to run, then we set $p_{i,j} = \infty$, otherwise the processing time
$p_{i,j}$ is some constant $p_j$ which only depends on job $j$.
This special case is the so-called {\em restricted scheduling problem}
and is known to be $NP$-hard. However, if a schedule exists
with makespan $T$, then the polynomial time algorithm developed by
Lenstra, Shmoys and Tardos \cite{LST} shows an elegant rounding
method to find a schedule with makespan $2T$. The subsequent
generalization by Shmoys and Tardos \cite{ST}, shows in fact that
even with a cost function to map each job to a machine, if a mapping
with cost $C$ and makespan $T$ exists, then their algorithm finds a
schedule with cost $C$ and makespan at most $2T$.
Motivated by the problem of shutting down machines when the demand
is low, we define the following ``machine activation'' problem.
Given a set $J$ of $n$ jobs and a set $M$ of $m$ machines, our goal is to activate a subset
$S$ of machines and then map each job to an active machine in
$S$, minimizing the overall makespan. Each machine has an activation
cost of $a_i$. The activation cost of the subset $S$ is $a(S)= \sum_{i \in S}
a_i$. We show that if there is a schedule with activation cost $A$
and makespan $T$, then we can find a schedule with activation cost
$2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1) A$ and makespan $(2+\epsilon) T$ for any $\epsilon>0$ by an LP-rounding scheme
(we call this a $((2+\epsilon),2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1))$-approximation).
We also present a greedy algorithm which gives us a $(2,1+\ln n)$-approximation.
Actually, the $\ln n$ term in the activation cost with this general formulation
is unavoidable, since this problem is at least as hard
to approximate as the set cover problem\footnote{This is easy to see --
we can view a set cover instance as a bipartite graph connecting
elements (jobs) to corresponding sets (machines). If the element belongs to
a set, then the processing time of the corresponding job on the corresponding machine
is 0, o.w. it is $\infty$. An optimal set cover solution corresponds to
an optimal set of machines to activate with $0$ makespan.},
for which a $(1-\epsilon) \ln n$
approximation algorithm will imply that $NP \subseteq DTIME(n^{O(\log \log n)})$
\cite{Feige98}.
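The reduction sketched in the footnote is mechanical; the data layout in the following sketch is our own.

```python
INF = float("inf")

def set_cover_to_machine_activation(elements, sets, costs):
    """Footnote reduction: element -> job, set -> machine.
    p[i][j] = 0 if element j lies in set i, else infinity; with
    makespan bound T = 0, an activated set of machines that can
    process every job is exactly a set cover of the same cost."""
    p = [[0.0 if e in s else INF for e in elements] for s in sets]
    return p, list(costs), 0.0   # processing times, activation costs, T

p, a, T = set_cover_to_machine_activation(
    elements=[1, 2, 3], sets=[{1, 2}, {3}, {1, 3}], costs=[1, 1, 2])
```

Any $(1-\epsilon)\ln n$ approximation for the activation cost of the right-hand instance would thus translate back into one for set cover.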
We also show that the recent PTAS developed by
Epstein and Sgall \cite{EpsteinSgall} can be extended
to the framework of machine activation problems for the
case of scheduling jobs on uniformly related parallel machines.
(The original PTAS by Hochbaum and Shmoys \cite{HS} is
slightly more complicated than the method suggested by Epstein and Sgall
\cite{EpsteinSgall}.)
We also consider a version of the problem in which a subset of the
jobs may be dropped to save energy (recall Hamilton's point(1)).
In this version of the problem, each job $j$ also has a benefit $\pi_j$
and we need to process a subset of jobs with total benefit of at least $\Pi$.
Suppose that a
schedule exists with cost $C_\Pi$ and makespan $T_\Pi$ that obtains
a total benefit at least $\Pi$.
We show that the method due to Shmoys and Tardos \cite{ST} can be
extended to find a collection of jobs to perform
with expected benefit at least $\Pi$ and expected cost $C_\Pi$,
with a makespan guaranteed to be at most $2T_\Pi$ (see Appendix~\ref{partialgap}) .
(The recent work by Gupta et al. \cite{Gupta} gives a clever deterministic scheme
with makespan $3T_\Pi$ and cost $(1+\epsilon)C_\Pi$ along with several other
results on scheduling with outliers. This has been further improved to $(2+\epsilon)T_{\Pi}$ and cost $(1+\epsilon)C_\Pi$ in \cite{ss:09}.)
\subsection{Related Work on Scheduling}
Generalizations of the work by Shmoys and Tardos \cite{ST}, have
considered the $L_p$ norm. Azar and
Epstein \cite{AE} give a 2-approximation for any $L_p$ norm for any $p>1$, and a $\sqrt{2}$-approximation
for the $L_2$ norm. The bounds for $p \neq 2$ have been subsequently improved
\cite{srin:focs00}.
In addition, we can have release times $r_{ij}$ associated with each job --
this specifies the earliest time when job $j$ can be started on machine
$i$. Koulamas et al. \cite{koulamas2004mmu}
give a heuristic solution to this problem
on uniformly related machines with a worst case approximation ratio
of $O(\sqrt{m})$.
Minimizing resource usage has been considered before.
In this framework, a collection of jobs $J$ needs to be executed
-- each job has a processing time $p_j$, a release time $r_j$
and a deadline $d_j$. In the continuous setting, a job
can be executed on any machine between its release time and
its deadline. In the discrete setting each job has a set of
intervals during which it can be executed. The goal is to
minimize the number of machines that are required to perform
all the jobs. For the continuous case, Chuzhoy and Codenotti \cite{Chuzhoy09}
have recently developed a constant factor approximation, improving
upon a previous algorithm given by Chuzhoy et al \cite{Chuzhoy2}.
For the discrete version Chuzhoy and Naor \cite{Chuzhoy3} have shown an
$\Omega(\log \log n)$ hardness of approximation.
However this framework does not model non-uniformity of machines, which is
one of the key issues in data centers. In addition,
non-uniformity of activation costs is not addressed in their work either.
\subsection{Related Work on Energy Minimization}
Augustine, Irani and Swamy \cite{irani} develop online algorithms
to decide when a particular device should transition to a sleep state
when multiple sleep states are available. Each sleep state has a different
power consumption rate and a different transition cost. They provide
deterministic online algorithms with competitive ratio arbitrarily close
to optimal to decide in an online way which sleep state to enter
when there is an idle period. See also the survey by Irani and Pruhs
for other related work \cite{irani-pruhs}.
\iffalse
\subsection{Related work on Speed Scaling}
A well studied problem is that of processor speed scaling. In this model,
a processor can be run at speed $s$, that can be adjusted based on
workload. Jobs are arriving in an online manner with deadlines $d_j$
and processing requirement $p_j$. The goal is to complete all jobs
between their arrival time and their deadline in a way that minimizes
total energy consumed. Much work was devoted to the single processor
case \cite{Yao,Bansal}, and more recently to the multiprocessor
case \cite{Albers}.
This work is somewhat orthogonal to our problem -- in the single
processor model, the main issue is at what speed to run the
processor when there is a set of waiting jobs in the queue -- in
the absence of more jobs, its better to run the processor as slowly
as possible, while still completing all the jobs by their deadlines.
If suddenly a lot of new jobs arrive, then in trying to complete
partially processed jobs and the new jobs, we may have to run
at a significantly higher speed (using a lot of power) than
necessary had we finished the initial set of jobs earlier.
The paper by Albers et al. \cite{Albers} deals with
multiple processors and unit length jobs (each job has a release time and deadline).
The main focus of the paper is to show how to exploit techniques
for the single processor case to attack the multi processor case, and these
are shown to be effective in certain situations.
In contrast, our problem is an offline problem where we have a large
collection of jobs, and we have to decide which machines can go into
a sleep state and which machines will remain active.
\fi
\subsection{Our Contributions}
Our main contributions are:
\begin{itemize}
\item A randomized rounding method that approximates both activation
cost and makespan for unrelated parallel machines. This method is based
on rounding the LP solution of a certain carefully defined LP relaxation
and uses ideas from work on dependent rounding \cite{GKPS,srin:focs00}
(Section 2).
\item Extensions of the above method when we have assignment costs
in addition to activation costs as part of the objective function (Section 3).
\item A greedy algorithm that approximates both activation
cost and makespan for unrelated parallel machines and gives
a $(2, 1+\ln n)$-approximation (Section 4).
\item Extensions of these results to the case of handling outliers
using the methods from \cite{Gupta} as well as release times (Section 5).
\item A polynomial time approximation scheme for the cost activation
problem for uniformly related parallel machines extending the
construction given for the version of the problem with no activation costs
\cite{EpsteinSgall} (Section 6).
\item A simple dependent rounding scheme for the partial GAP problem (Appendix \ref{partialgap}).
\end{itemize}
\section{LP Rounding for Machine Activation on Unrelated Machines}
In this section, we first provide a simple rounding scheme
with an approximation ratio of $(O(\log{n}),O(\log{n}))$. Then we improve it
to a ($2+\epsilon, 2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1)$)-approximation by a new rounding scheme.
We can formulate the scheduling activation problem as an integer program. We
define a variable $y_i$ for each machine $i$, which is $1$ if the machine is
open and $0$ if it is closed. For every machine-job pair, we have a variable
$x_{i,j}$, which is $1$ if job $j$ is assigned to machine $i$ and $0$ otherwise.
In the corresponding linear programming relaxation, we relax the $y_i$ and $x_{i,j}$ variables to be in $[0,1]$.
The first set of constraints require that each job is assigned to some machine.
The second set of constraints restrict the jobs to be assigned to only
active machines, and the third set of constraints limit the workload
on a machine. We require that $1 \ge x_{i,j}, y_i\geq 0$ and
that $x_{i,j}=0$ whenever $p_{i,j} > T$.
The formulation is as shown below:
\begin{eqnarray}
\label{eq:lp}
& & \min \sum_{i=1}^m a_iy_i \\
&s.t.& \sum_{i \in M}x_{i,j} = 1 \,\,\,\forall j \in J\nonumber\\
& & x_{i,j} \leq y_i \ \ \forall i \in M, j \in J \,\,\,\nonumber\\
& & \sum_{j} p_{i,j} x_{i,j}\leq Ty_i \,\,\,\forall i \nonumber
\end{eqnarray}
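For concreteness, LP~(\ref{eq:lp}) can be assembled and solved directly. The sketch below uses SciPy's \texttt{linprog} (an assumed tool, not part of the paper), with the variables ordered $(y_1,\ldots,y_m,x_{1,1},x_{1,2},\ldots)$.

```python
from scipy.optimize import linprog

def lp_activation_cost(a, p, T):
    """Optimal value of LP (1): min sum_i a_i y_i subject to
    sum_i x_ij = 1, x_ij <= y_i, sum_j p_ij x_ij <= T y_i,
    with 0 <= x_ij, y_i <= 1 and x_ij = 0 whenever p_ij > T."""
    m, n = len(p), len(p[0])
    xv = lambda i, j: m + i * n + j        # column index of x_ij
    nv = m + m * n
    c = list(a) + [0.0] * (m * n)          # objective: sum_i a_i y_i
    A_eq = []
    for j in range(n):                     # each job fully assigned
        row = [0.0] * nv
        for i in range(m):
            row[xv(i, j)] = 1.0
        A_eq.append(row)
    A_ub, b_ub = [], []
    for i in range(m):
        for j in range(n):                 # x_ij <= y_i
            row = [0.0] * nv
            row[xv(i, j)], row[i] = 1.0, -1.0
            A_ub.append(row); b_ub.append(0.0)
        row = [0.0] * nv                   # sum_j p_ij x_ij <= T y_i
        row[i] = -T
        for j in range(n):
            row[xv(i, j)] = p[i][j]
        A_ub.append(row); b_ub.append(0.0)
    bounds = [(0.0, 1.0)] * m + [(0.0, 0.0 if p[i][j] > T else 1.0)
                                 for i in range(m) for j in range(n)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq,
                  b_eq=[1.0] * n, bounds=bounds, method="highs")
    return res.fun
```

With one machine of activation cost $5$ and a single unit-time job, the LP must open the machine fully and its value is $5$; with two identical machines, the job may be split but the total fractional activation still sums to $1$.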
Suppose an integral solution with activation cost $A$ and makespan $T$ exists. The
LP relaxation will have cost at most $A$ with the correct choice of $T$.
All the bounds we show
are with respect to these terms.
In Section 2.2 we show that unless we relax the makespan constraint,
there is a large integrality gap for this formulation.
\subsection{Simple Rounding}
We first start with a simple rounding scheme. Let us denote the
optimum LP solution by $\bar{{\bf y}},\bar{{\bf x}}$. The rounding
consists of the following four steps:
\begin{enumerate}
\item Round each $y_i$ to $1$, with probability $\bar{y}_{i}$ and $0$ with probability $1-\bar{y}_{i}$. If $y_i$ is rounded to $1$, open machine $i$.
\item For each open machine $i$, consider the set of jobs $j$, that have
fractional assignment $> 0$ on machine $i$. For each such job, set
$X_{i,j}=\frac{\bar{x}_{i,j}}{\bar{y}_i}$.
If $\sum_{j}p_{i,j} X_{i,j}< T$ (it is always $\leq T$),
then uniformly increase the $X_{i,j}$'s,
freezing any $X_{i,j}$ that reaches $1$. Stop the process when either
the total fractional load is $T$ or all $X_{i,j}$'s are $1$.
If $X_{i,j}=1$, assign job $j$ to machine $i$. If machine $i$ has no job fractionally assigned to it,
drop machine $i$ from further consideration. For each job $j$ that has
fractional assignment $X_{i,j}$, assign it to machine $i$ with probability $X_{i,j}$.
\item Discard all assigned jobs. If there are some unassigned jobs, repeat the procedure.
\item If some job is assigned to multiple machines, choose any one of them arbitrarily.
\end{enumerate}
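One iteration of the four steps above can be sketched as follows. In this sketch (our own rendering, not the paper's pseudocode), the ``uniform increase'' of step 2 is implemented by a binary search for a common additive raise, capping each $X_{i,j}$ at $1$, which matches freezing any $X_{i,j}$ that reaches $1$.

```python
import random

def round_once(xbar, ybar, p, T):
    """One iteration of the simple rounding scheme: open machine i
    with probability ybar[i]; on each open machine set
    X_ij = xbar[i][j] / ybar[i], uniformly raise the positive X_ij
    (capping each at 1) until the fractional load reaches T or all
    are 1, then assign job j to machine i with probability X_ij.
    Returns a dict job -> machine for the jobs assigned this round."""
    m, n = len(p), len(p[0])
    assigned = {}
    for i in range(m):
        if ybar[i] == 0.0 or random.random() >= ybar[i]:
            continue                        # machine i stays closed
        X = [xbar[i][j] / ybar[i] for j in range(n)]
        lo, hi = 0.0, 1.0                   # binary search on the raise
        for _ in range(60):
            d = (lo + hi) / 2
            load = sum(p[i][j] * min(1.0, X[j] + d)
                       for j in range(n) if X[j] > 0)
            lo, hi = (d, hi) if load < T else (lo, d)
        X = [min(1.0, x + lo) if x > 0 else 0.0 for x in X]
        for j in range(n):
            if j not in assigned and random.random() < X[j]:
                assigned[j] = i             # step 4: keep one machine
    return assigned
```

Repeating this until every job is assigned (step 3) takes $O(\log n)$ iterations with high probability, as shown in the lemmas below.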
In the above rounding scheme, we use $\bar{y}_i$'s as probabilities
for opening machines and for each opened machine, we assign jobs
following the probability distribution given by $X_{i,j}$'s.
It is obvious that the expected activation cost of machines in each iteration is exactly
the cost of the fractional solution given by the LP.
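The skeleton of this scheme can be sketched in Python (a minimal illustration under our notation; the uniform-increase part of Step 2 is elided, and all helper names are ours):

```python
import random

def simple_round(x_bar, y_bar, p, T, max_iters=100):
    """Sketch of the simple rounding scheme.

    x_bar[i][j]: fractional assignment, y_bar[i]: fractional opening,
    p[i][j]: processing times, T: makespan guess.
    Returns a dict job -> machine; typically needs O(log n) iterations.
    """
    m, n = len(y_bar), len(x_bar[0])
    assigned = {}
    for _ in range(max_iters):
        unassigned = [j for j in range(n) if j not in assigned]
        if not unassigned:
            break
        for i in range(m):
            # Step 1: open machine i with probability y_bar[i].
            if random.random() >= y_bar[i]:
                continue
            # Step 2: scale the fractional assignments on machine i
            # (the uniform increase up to load T is omitted here).
            X = {j: min(1.0, x_bar[i][j] / y_bar[i])
                 for j in unassigned if x_bar[i][j] > 0}
            # Step 3: assign each job independently with probability X[j].
            for j, prob in X.items():
                if j not in assigned and random.random() < prob:
                    assigned[j] = i
    return assigned
```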
The following lemmas bound the number of iterations and the final load on each machine.
\begin{lemma}
The number of iterations required by the rounding algorithm is $O(\log{n})$.
\end{lemma}
\begin{proof}
Consider a job $j$. In a single iteration,
$
\mathsf{Pr}(\text{ job } j \text{ is not assigned to machine $i$ } )
\leq (1-\bar{y}_{i})+\bar{y}_{i}(1-\frac{\bar{x}_{i,j}}{\bar{y}_{i}})=1-\bar{x}_{i,j}.
$ Hence, $$\mathsf{Pr}(~\text{job $j$ is not assigned in an iteration}~)$$
$$\leq\prod_{i}(1-\bar{x}_{i,j})\leq(1-{1\over m})^m\leq \frac{1}{e}$$
The second inequality holds since $\sum_i \bar{x}_{ij}=1$
and the quantity is maximized when all $\bar{x}_{ij}$'s are equal.
Then, it is easy to see the probability
that job $j$ is not assigned after $2\ln n$ iterations
is at most ${1\over n^2}$.
Therefore, by union bound, with probability at least $1-{1\over n}$,
all jobs can be assigned in $2\ln n$ iterations.
\end{proof}
\begin{lemma}
The load on any machine is $O(T\log{n})$ with high probability.
\end{lemma}
\begin{proof}
Consider any iteration $h$. Denote the value of $X_{i,j}$ at iteration $h$, by $X_{i,j}^{h}$.
For each open machine $i$ and each job $j$,
define a random variable
\begin{equation}
Z_{i,j,h} =\begin{cases} \frac{p_{i,j}}{T}, & \text{if job $j$ is assigned to machine $i$}\\
0, & \text{otherwise} \end{cases}
\end{equation}
Clearly, $0 \leq Z_{i,j,h} \leq 1$.
Define, $Z_i=\sum_{j,h} Z_{i,j,h}$. Clearly,
$$\mathsf{E}[Z_i]=\frac{\sum_{h}\sum_{j}p_{i,j}X^{h}_{i,j}}{T}\leq \sum_{h} 1 = O(\log{n})$$
Denote by $M_i$ the load on machine $i$. Then $M_i=TZ_{i}$,
thus $\mathsf{E}[M_i]= O(T\log{n})$. Now by the standard Chernoff-Hoeffding bound \cite{hoeffding, srin:chernoff},
we get the result.
\end{proof}
\subsection{Integrality Gap of the Natural LP, for Strict Makespan}
Let there be $m$ jobs and $m$ machines.
Call these machines $A_1,A_2,..,A_{m-1}$, and $B$.
Processing time for all jobs on machines $A_1, A_2, ..., A_{m-1}$ is $T$
and on $B$ it is $T\over m$.
The activation cost of each machine $A_1,A_2,..,A_{m-1}$
is $1$, and for $B$ it
is very high compared to $m$, say $R$ ($R \gg m$).
An integral optimum solution with makespan $T$ has to open machine $B$, since
each machine $A_i$ can process at most one job, so its
total cost is at least $R$.
Now consider a fractional solution, where all machines $A_1, A_2,..,A_{m-1}$ are
fully open, but machine $B$ is open only to the extent of $1/m$.
All jobs are assigned to the extent of $1/m$ on each machine
$A_1,A_2,..,A_{m-1}$. So the total processing time on any machine $A_i$ is
$m \frac{T}{m}=T$.
The remaining $\frac{1}{m}$ part of each job is assigned to $B$.
So total processing time on $B$ is ${T\over {m\cdot m}}\cdot m= {T\over m}$.
It is easy to see the optimal fractional cost is at most $m+{R\over m}$ (by setting $y_B={1\over m}$).
Therefore, the integrality gap is at least $\approx m$.
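The gap instance can be checked numerically; the following sketch (a helper of our own, not part of the paper's algorithms) evaluates the ratio of the integral cost to the fractional cost described above:

```python
def integrality_gap(m, R):
    """Gap instance: m jobs; machines A_1..A_{m-1} with activation cost 1
    and processing time T per job; machine B with cost R and time T/m."""
    # Any integral schedule with makespan T must open B: each A_i can
    # host at most one job of length T, leaving one job for B.
    integral_cost = R
    # Fractional solution: open every A_i fully (cost m-1) and B to the
    # extent 1/m (cost R/m); each job spreads 1/m over each machine.
    fractional_cost = (m - 1) + R / m
    return integral_cost / fractional_cost
```

With $R \gg m$ the ratio approaches $m$, matching the claimed gap.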
\subsection{Main Rounding Algorithm for Minimizing Scheduling Activation Cost with Makespan Budget}
\label{sec:main}
In this section, we describe our main rounding approach, that achieves an
approximation factor of $2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1)$~for activation cost and $(2+\epsilon)$~for makespan.
Based on this new rounding scheme, we show in Section \ref{sec:assign} how to simultaneously
approximate both machine activation and job assignment cost along with makespan, and how
to extend it to handle outliers, when some jobs can be dropped (Section \ref{sec:extent}).
For the basic problem with only activation cost and makespan, we show in Section
4, that a greedy algorithm achieves an
approximation factor of $(2,1+\ln n)$. However, the greedy algorithm is
significantly slower than the LP rounding algorithm, since it requires
solving $(m-i)$ linear programs at the $i$th greedy step,
where $i$ runs from $1$ to $\min(m,n)$ and $m,n$ are the numbers of machines and jobs respectively.
The algorithm begins by solving LP (Eq(\ref{eq:lp})). As
before $\bar{\bf x},\bar{\bf y}$ denote the optimum fractional solution
of the LP. Let $M$ denote the set of machines and $J$ denote the
set of jobs. Let $|M|=m$ and $|J|=n$. We define a bipartite graph $G=(M\cup J, E)$
as follows: $M\cup J$ are the vertices of $G$ and $e=(i,j) \in E$, if $\bar{x}_{i,j} >0$.
The weight on edge $(i,j)$ is $\bar{x}_{i,j}$ and the weight on machine node $i$ is $\bar{y}_i$.
Rounding consists of several iterations. Initialize $X=\bar{\bf x}$ and $Y=\bar{\bf y}$.
The algorithm iteratively modifies $X$ and $Y$, such that at the end $X$ and $Y$ become integral.
Random variables at the end of iteration $h$ are denoted by $X_{i,j}^{h}$ and $Y_{i}^{h}$.
The three main steps of rounding are as follows:
\begin{enumerate}
\item {\em Transforming the Solution: } It consists of creating two graphs $G_1$ and $G_2$ from $G$,
where $G_1$ has an almost forest structure and in $G_2$ the weight of an edge and the weight of the
incident machine node is very close. In this step, only $X_{i,j}$'s are modified, while $Y_{i}$'s remain fixed
at $\bar{y}_{i}$'s.
\item {\em Cycle Breaking:} It breaks the
remaining cycles of $G_1$ and converts it into a forest, by moving certain edges to $G_2$.
\item Exploiting the properties of $G_1$ and $G_2$, and
{\em rounding on} $G_1$ and $G_2$ separately.
\end{enumerate}
We now describe each of these steps in detail.
\subsection{Transforming the Solution}
\label{subsec:transform}
We decompose $G$ into two graphs $G_1$ and $G_2$ through several rounds.
Initially, $V(G_1)=V(G)=M\cup J$, $E(G_1)=E(G)$, $V(G_2)=M$ and $E(G_2)=\emptyset$. In each round,
we either move one job node and/or one edge from $G_1$ to $G_2$ or delete an
edge from $G_1$. Thus we always make progress. An edge moved to $G_2$ retains
its weight through the rest of the iterations, while the weights of the edges in
$G_1$ keep on changing.
We maintain the following invariants,
\begin{description}
\item[(I1)] $\forall (i,j) \in E(G_1)$, and $\forall h$, $X^{h}_{i,j}\in (0,y_{i}/\gamma)$, $p_{i,j} > 0$.
\item[(I2)] $\forall i \in M$ and $\forall h, \sum_{j} X^{h}_{i,j} p_{i,j} \leq Ty_{i} $.
\item[(I3)] $\forall (i,j) \in E(G_2)$ and $\forall h$, $1 \geq X^{h}_{i,j} \geq y_{i}/\gamma$.
\item[(I4)] Once a variable is rounded to $0$ or $1$, it is never changed.
\end{description}
Consider round one. Remove any machine node that has $Y_{i}^{1}=0$ from both $G_1$ and $G_2$.
Activate any machine that has $Y_{i}^{1}=1$. Similarly, discard any edge $(i,j)$ with $X_{i,j}^{1}=0$, and if
$X_{i,j}^{1}=1$, assign job $j$ to machine $i$ and remove $j$.
If $X_{i,j}^{1} \geq \bar{y}_{i}/\gamma$, then remove the edge
$(i,j)$ from $G_1$ and add the job $j$ (if not added yet) and the edge $(i,j)$ with weight $x_{i,j} (\geq \bar{y}_{i}/\gamma)$ to $G_2$. Note that, if for some $(i,j) \in G$, $p_{i,j}=0$, then we can
simply take $\bar{x}_{i,j}=\bar{y}_{i}$ and move the edge to $G_2$.
Thus we can always assume for every edge $(i,j) \in G_1$, $p_{i,j} > 0$.
It is easy to see that, after iteration one, all the invariants (\textbf{I1}-\textbf{I4}) are maintained.
Let us consider iteration $(h+1)$ and let $J',M'$ denote the set of jobs and machine
nodes in $G_1$ with degree at least $1$ at the beginning of the iteration.
Note that $Y_{i}^{h}=Y_{i}^{1}=\bar{y_{i}}$ for all $h$.
Let $|M'|=m'$ and $|J'|=n'$.
As in iteration one, any edge with $X_{i,j}^{h}=0$ in
$G_1$ is discarded and any edge with $X_{i,j}^{h} \geq \bar{y}_{i}/\gamma$ is moved to $G_2$
(if node $j$ does not belong to $G_2$, add it to $G_2$ also).
We denote by $w_{i,j}$ the weight of an edge $(i,j) \in G_2$.
Any edge and its weight moved to $G_2$ will not be changed further.
Since $w_{i,j}$ is fixed when $(i,j)$ is inserted into $G_2$,
we can treat it as a constant thereafter.
Consider the linear system (${\bf Ax}={\bf b}$) as in Figure \ref{fig:linear}.
\begin{figure*}
\centering
\begin{eqnarray}
\label{eqfig:1}
\forall j \in J',~~ \sum_{\substack{i \in M', \\ (i,j) \in E(G_1)}}x_{i,j}&=& 1-\sum_{\substack{i \in M',\\ (i,j)\in E(G_2)}} w_{i,j} \\
\label{eqfig:2}
\forall i \in M',~~ \sum_{\substack{j \in J', \\ (i,j) \in E(G_1)}} p_{i,j}x_{i,j}&=&\sum_{j \in J'} p_{i,j}X_{i,j}^{h}-\sum_{\substack{j \in J', \\ (i,j) \in E(G_2)}} p_{i,j}w_{i,j}
\end{eqnarray}
\caption{Linear System at the beginning of iteration $(h+1)$}
\label{fig:linear}
\end{figure*}
We call the fractional solution ${\bf x}$ {\em canonical}, if $x_{i,j} \in (0,y_i/\gamma)$, for all $(i,j)$. Clearly $\{X_{i,j}^{h}\}$, for $(i,j) \in E(G_1)$ is a canonical feasible solution for the linear system in Figure \ref{fig:linear}.
Now, if a linear system is under-determined, we can efficiently find a non-zero
vector ${\bf r}$, with ${\bf Ar} ={\bf 0}$.
Since ${\bf x}$ is canonical, we can also efficiently identify strictly positive reals,
$\alpha$ and $\beta$, such that for all $(i,j), x_{i,j}+\alpha r_{i,j}$ and $x_{i,j}-\beta r_{i,j}$ lie in $[0,y_i/\gamma]$ and
there exists at least one $(i,j)$, such that one of the two entries, $x_{i,j}+\alpha r_{i,j}$ and $x_{i,j}-\beta r_{i,j}$, is in $\{0,y_{i}/\gamma\}$.
We now define the basic randomized rounding step,
$\mathbf{RandStep}({\bf A},{\bf x},{\bf b}):$ with probability $\frac{\beta}{\alpha+\beta}$, return the vector ${\bf x} + \alpha {\bf r}$ and
with complementary probability of $\frac{\alpha}{\alpha+\beta}$, return the vector ${\bf x}-\beta {\bf r}$.
If $\bf{X}=\mathbf{RandStep}({\bf A},{\bf x},{\bf b})$, then the returned solution has the following properties \cite{srin:focs00}:
\begin{equation}
\label{eqn:prop1}
\prob{{\bf AX}={\bf b}}=1
\end{equation}
\begin{equation}
\label{eqn:prop2}
\expect{X_{i,j}}= x_{i,j}
\end{equation}
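A minimal sketch of $\mathbf{RandStep}$, assuming a null-space direction ${\bf r}$ with ${\bf Ar}={\bf 0}$ has already been computed (helper names are ours):

```python
import random

def rand_step(x, r, upper):
    """One dependent-rounding step: given r in the null space of the
    constraint matrix (A r = 0), move x along +r or -r so that A x = b
    holds with probability 1 and E[X_i] = x_i coordinatewise.
    `upper` is the per-coordinate cap (y_i / gamma in the text)."""
    def max_step(direction):
        # Largest t > 0 with x + t * direction staying inside [0, upper].
        steps = []
        for xi, di, ui in zip(x, direction, upper):
            if di > 0:
                steps.append((ui - xi) / di)
            elif di < 0:
                steps.append(xi / -di)
        return min(steps)

    alpha = max_step(r)
    beta = max_step([-di for di in r])
    # Expectation is preserved: alpha * beta/(alpha+beta) cancels
    # beta * alpha/(alpha+beta) in the two branches.
    if random.random() < beta / (alpha + beta):
        return [xi + alpha * di for xi, di in zip(x, r)]
    return [xi - beta * di for xi, di in zip(x, r)]
```

For the single constraint $x_1+x_2=1$, the direction $r=(1,-1)$ rounds $(0.5,0.5)$ to one of the integral points while preserving the sum.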
If the linear system in Figure \ref{fig:linear} is under-determined,
then we apply $\mathbf{RandStep}$ to obtain the updated vector ${\bf X}^{h+1}$.
If for some $(i,j)$, $X^{h+1}_{i,j}=0$, then we remove that edge (variable) from $G_1$.
If $X^{h+1}_{i,j}=\bar{y}_{i}/\gamma$, then we remove the edge from $G_1$ and add it with weight $\bar{y}_{i}/\gamma$ to $G_2$. Thus
the invariants (\textbf{I1}, \textbf{I3} and \textbf{I4}) are maintained. Since
the weight of any edge in $G_2$ is never changed and
load constraints on all machine nodes belong to the linear system,
we get from \cite{srin:focs00},
\begin{lemma}
\label{lem:trans}
For all $i,j,h,u$, $\expect{X^{h+1}_{i,j}\mid{}X^{h}_{i,j}=u}=u$.
In particular, $\expect{X^{h+1}_{i,j}}=\bar{x}_{i,j}$.
Also for each machine $i$ and iteration $h$, $\sum_{j}X^{h}_{i,j}p_{i,j}=\sum_{j}\bar{x}_{i,j}p_{i,j}$ with probability $1$.
\end{lemma}
Thus the invariant (\textbf{I2}) is maintained as well.
If the linear system (Figure \ref{fig:linear}) becomes determined, then this step ends and we proceed to the next step of ``Cycle Breaking''.
\subsection{Cycle Breaking:}
Let $M'$ and $J'$ be the machine and job nodes respectively in $G_1$ when the previous step ended, with $|M'|=m'$ and $|J'|=n'$. Then the
number of edges in $G_1$ satisfies $|E(G_1)| \leq m'+n'$; otherwise, the linear system (Figure \ref{fig:linear}) would remain underdetermined.
For the same reason, in each connected component of $G_1$, the number of edges is at most the number of vertices.
Therefore, each component of $G_1$ can contain at most one cycle.
If there is no cycle in $G_1$, we are done; else there is at most one cycle,
say $C=(v_0,v_1,v_2, \ldots, v_{k}=v_{0})$, with $v_0=v_k \in
M$, in each connected component of $G_1$. Note that since $G_1$ is bipartite, $C$ always has even length.
For simplicity of notation, let the current $X$ value on edge
$e_{t}=(v_{t-1},v_{t})$ be denoted by $Z_{t}$. Note that if $v_{t}$
is a machine node, then $Z_{t} \in (0,\bar{y}_{v_{t}}/\gamma)$, else $v_{t-1}$ is a
machine node and $Z_{t} \in (0, \bar{y}_{v_{t-1}}/\gamma)$. We next choose
values $\mu_{1},\mu_{2}, \ldots , \mu_{k}$
deterministically, and update the $X$ value of each edge
$e_t = (v_{t-1},v_t)$ to $Z_t+\mu_{t}$.
Suppose that we initialized some value
for $\mu_1$, and have chosen the increments $\mu_{1},
\mu_{2},\ldots, \mu_{t}$, for some $t \geq 1$. Then, the value
$\mu_{t+1}$ (corresponding to edge $e_{t+1} = (v_t,v_{t+1})$) is
determined as follows:
\begin{description}
\item[(P1)] If $v_t \in J$ (i.e., is a job node), then $\mu_{t+1}
= -\mu_{t}$ (i.e., we retain the total assignment value of
$v_t$);
\item[(P2)] If $v_t \in M$ (i.e., is a machine node), we set
$\mu_{t+1}$ in such a way so that the load on machine $v_{t}$
remains unchanged, i.e., we set $\mu_{t+1} =
-p_{v_t,v_{t-1}}\mu_t/p_{v_t,v_{t+1}}$, which ensures that the
incremental load $p_{v_t,v_{t-1}}\mu_t +
p_{v_t,v_{t+1}}\mu_{t+1}$ is zero.
Since $p_{v_{t},v_{t+1}}$ is
non-zero by the property of $G_1$, dividing by
$p_{v_t,v_{t+1}}$ is admissible.
\end{description}
The vector ${\bf \mu} = (\mu_1, \mu_2,\ldots, \mu_k)$ is
completely determined by $\mu_1$, for the cycle $C$.
Therefore, we can denote this ${\bf \mu}$ by $f(\mu_{1})$.
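The map $f(\mu_1)$ defined by rules ({\bf P1})--({\bf P2}) can be sketched as follows (our own encoding of the cycle; `nodes[t-1]` is the type of $v_t$ and `p[t-1]` the processing time on edge $e_t$):

```python
def mu_vector(mu1, nodes, p):
    """Propagate increments along the cycle v_0, v_1, ..., v_k = v_0.

    nodes: 'job' or 'machine' for v_1, ..., v_{k-1};
    p: processing times on edges e_1, ..., e_k.
    Returns [mu_1, ..., mu_k], i.e., the vector f(mu_1)."""
    mus = [mu1]
    for t in range(1, len(p)):
        if nodes[t - 1] == 'job':
            # (P1): keep the job's total assignment unchanged.
            mus.append(-mus[-1])
        else:
            # (P2): keep machine v_t's load unchanged:
            # p(e_t) * mu_t + p(e_{t+1}) * mu_{t+1} = 0.
            mus.append(-p[t - 1] * mus[-1] / p[t])
    return mus
```

For a 4-cycle machine--job--machine--job, the machine in the middle sees a zero net load change, as the test below verifies.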
Let $\alpha$ be the smallest positive value, such that if we set
$\mu_1=\alpha$, then all $X_{i,j}$ values (after incrementing by
the vector ${\bf \mu}$ as mentioned above) stay in $[0,\bar{y}_{i}/\gamma]$,
and at least one of them becomes $0$ or $\bar{y}_{i}/\gamma$. Similarly let
$\beta$ be the smallest positive value such that if we set
$\mu_1=-\beta$, then again all $X_{i,j}$ values after increments
lie in $[0,\bar{y}_{i}/\gamma]$ and at least one of them is rounded to $0$ or
$\bar{y}_{i}/\gamma$. (It is easy to see that $\alpha$ and $\beta$ always exist and
they are strictly positive.) We now choose the vector ${\bf
\mu}$ as follows:
\begin{description}
\item[(R1)] Set $\mu=f(\alpha)$, if
$p_{v_0,v_1}-p_{v_{0},v_{k-1}}\mu_k/\mu_1 < 0$.
\item[(R2)] Set $\mu=f(-\beta)$, if
$p_{v_0,v_1}-p_{v_{0},v_{k-1}}\mu_k/\mu_1 \geq 0$.
\end{description}
If some $X_{i,j}$ is rounded to $0$, we remove that edge from $G_1$.
If some edge $X_{i,j}$ becomes $\bar{y}_{i}/\gamma$, then we remove it from $G_1$
and add it to $G_2$, with weight $\bar{y}_{i}/\gamma$. Since at least
one of these occurs, we are able to break the cycle.
Let $\phi$ denote the fractional assignment of $x$ variables at the beginning of the cycle breaking phase.
Then clearly, after this step, for all jobs
$j$, considering both $G_1$ and $G_2$, $\sum_{i}X_{i,j}=\sum_{i}\phi_{i,j}$.
For any machine $i \in M$, if $i \notin C$, then clearly
$\sum_{j}p_{i,j}X_{i,j}=\sum_{j}p_{i,j}\phi_{i,j}$.
If $i \in C$, but $i \neq v_0$, then by property ({\bf P2}), before inserting any edge
to $G_2$, we have
$\sum_{j}p_{i,j}X_{i,j}=\sum_{j}p_{i,j}\phi_{i,j}$.
Any edge added to $G_2$ after the cycle breaking step has the same
weight as it had in $G_1$. Therefore, we have, for any $i \neq v_0$,
and considering both $G_1$ and $G_2$,
$\sum_{j}p_{i,j}X_{i,j}=\sum_{j}p_{i,j}\phi_{i,j}$. Now
consider the machine $v_0(=v_k)$. Its change in load is exactly
$\mu_1(p_{v_0,v_1}-p_{v_{0},v_{k-1}}\mu_k/\mu_1)$. Therefore by the
choice of ({\bf R1}) and ({\bf R2}), the load on machine $v_0$ can only
decrease. Hence, by property (\ref{eqn:prop1}), we have the following lemma,
\begin{lemma}
\label{lem:cycle}
Considering both $G_1$ and $G_2$, we have after the cycle breaking step with probability $1$:
$
\sum_{i}X_{i,j}=1 \,\,\forall j; \,\,\, \sum_{j}X_{i,j}p_{i,j} \leq T\bar{y}_{i}\,\, \forall i; \,\,\,\, X_{i,j} \leq \bar{y}_{i}\,\,\forall i,j.
$
\end{lemma}
\subsection{Rounding on $G_1$ and $G_2$}
\label{subsec:round}
The previous two steps ensure that $G_1$ is a forest and that in $G_2$, $X_{i,j}\geq \bar{y}_{i}/\gamma$ for all $(i,j) \in E(G_2)$. We remove any isolated nodes from $G_1$ and $G_2$, and round them separately.
\subsubsection{ Further Relaxing the Solution}
\label{subsubsec:relax}
Let us denote the job and the machine nodes in $G_1$ ($G_2$)
by $J(G_1)$ (or $J(G_2)$) and $M(G_1)$ (or $M(G_2)$) respectively. Consider a job
node $j \in J(G_2)$. If $\sum_{i:(i,j) \in E(G_2)} X_{i,j} < 1/\delta$ (we choose $\delta$ later), we
simply remove all the edges $(i,j)$ from $G_2$ and the following must hold:
$\sum_{i:(i,j) \in E(G_1)} X_{i,j} \geq 1-1/\delta$. Otherwise, if $\sum_{i:(i,j) \in E(G_2)}
X_{i,j} \geq 1/\delta$, we remove all edges $(i,j) \in E(G_1)$ from
$G_1$. Therefore at the end of this modification, a job node can belong to
either $J(G_1)$ or $J(G_2)$, but not both. If $j \in J(G_1)$, we have $\sum_{i \in M}
X_{i,j} \geq 1-1/\delta$. Else, if $j \in J(G_2)$, $\sum_{i \in M} X_{i,j} \geq
1/\delta$.
For the makespan analysis it will be easier to partition the edges
incident on a machine node $i$ into two parts -- the job nodes
incident to it in $G_1$ and in $G_2$.
The fractional processing time due to jobs in $J(G_1)$ (or $J(G_2)$) will be denoted
by $T'\bar{y}_{i}$ (or $T''\bar{y}_i $), i.e., $T'\bar{y}_i=\sum_{j \in J(G_1)} p_{i,j}X_{i,j}$
(or $T''\bar{y}_i=\sum_{j \in J(G_2)} p_{i,j}X_{i,j}$).
\subsubsection{Rounding on $G_2$:}
\label{subsubsec:round2}
In $G_2$, for any machine node $i$, recall $\sum_{j \in J(G_2)} X_{i,j}p_{i,j}=T''\bar{y}_i$.
Since we have for all $i \in M(G_2), j \in J(G_2)$, $X_{i,j} \geq \bar{y}_i/\gamma$,
we have $\sum_{j \in J(G_2)} p_{i,j} \le T''\gamma$.
Therefore, if we decide to open a machine node $i \in M(G_2)$, then we can assign all the
nodes $j \in J(G_2)$, that have an edge $(i,j)\in E(G_2)$, by paying at most $T''\gamma$
in the makespan.
Hence, we only concentrate on opening a machine in $G_2$, and
then if the machine is opened, we assign it all the jobs incident to it in $G_2$.
For each machine $i \in M(G_2)$, we define $Y_i=\min\{1,\bar{y}_{i}\delta\}$.
Since, for all job nodes $j \in J(G_2)$, we know $\sum_{i \in M(G_2)} X_{i,j}\geq 1/\delta$,
after scaling we have for all $j \in J(G_2)$, $\sum_{(i,j) \in E(G_2)} Y_{i} \ge 1$.
Therefore, this exactly forms a fractional set-cover instance,
which can be rounded using the randomized rounding method developed in \cite{srin:soda01} to get
activation cost
within a factor of $\delta (\log{\frac{n}{OPT}}+1)$.
The instance in $G_2$ thus nicely captures the hard part of the problem, which comes from the hardness of approximation of set cover. Thus we have the following lemma.
\begin{lemma}
\label{lem:g2}
Considering only the job nodes in $G_2$, the final load on any machine $i \in M(G_2)$ is at most $T''\gamma$ and the total activation cost is at most $\delta (\log{\frac{n}{OPT}}+1)OPT$, where $T''$ is the fractional load on machine $i \in M(G_2)$ before \emph{rounding on $G_2$} and $OPT$ is the optimum activation cost.
\end{lemma}
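The scaling $Y_i=\min\{1,\bar{y}_{i}\delta\}$ that produces the fractional set-cover instance can be sketched as follows (names are ours; the assertion encodes the coverage guarantee derived above):

```python
def scale_openings(y_bar, cover, delta):
    """Scale machine openings so every job in J(G_2) is fractionally
    covered to at least 1: Y_i = min(1, delta * y_bar[i]).

    cover[j]: machines with an edge to job j in G_2.  We assume, as in
    the text, sum_{i in cover[j]} X_{i,j} >= 1/delta and X_{i,j} <= y_bar[i],
    which forces sum_{i in cover[j]} y_bar[i] >= 1/delta."""
    Y = {i: min(1.0, delta * yi) for i, yi in y_bar.items()}
    # Sanity check: the fractional set-cover constraint for every job.
    for machines in cover.values():
        assert sum(Y[i] for i in machines) >= 1.0 - 1e-9
    return Y
```

The resulting $\{Y_i\}$ is a feasible fractional set cover, which is then rounded as in the text.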
\subsubsection{Rounding on $G_1$:}
\label{subsubsec:round1}
For rounding in $G_1$, we traverse each tree in $G_1$ bottom up. If there is a job node $j$,
that is a child of a machine node $i$, then if $X_{i,j} < 1/\eta$ ($\eta$ to be fixed later), we
remove the edge $(i,j)$ from $G_1$. Since initially $j \in J(G_1)$,
$\sum_{i \in M} X_{i,j} \geq 1-1/\delta$, even after these edges are
removed, we have for
$j \in J(G_1)$, $\sum_{i \in M(G_1)} X_{i,j} \geq 1-1/\delta-1/\eta$.
However if $X_{i,j} \geq 1/\eta$, simply open machine $i$, if it is
not already open and add job $j$ to machine $i$. Initially
$\bar{y}_i \geq 1/\eta$, since $\bar{y}_{i} \geq X_{i,j}$.
The initial contribution to cost by machine $i$ was
$\geq \frac{1}{\eta}a_{i}$. Now it becomes $a_{i}$.
If $\sum_{j} \frac{X_{i,j}}{y_{i}}p_{i,j}=T'$, with $X_{i,j} \geq 1/\eta$, now it can become at most $\eta T'$.
After the above modification, the yet to be assigned jobs in $J(G_1)$ form disjoint stars, with the job nodes at their centers.
Consider each star, $S_{j}$ with job node $j$ at its center. Let
$i_1,i_2,.,i_{\ell_j}$ be all the machine nodes in $S_{j}$, then we have,
$\sum_{k=1}^{\ell_j} X_{i_k,j} \geq 1-1/\delta-1/\eta$.
Therefore $\sum_{k=1}^{\ell_j} \bar{y}_{i_k} \geq
1-1/\delta-1/\eta$. If there is already some opened machine, $i_{l}$, assign $j$
to $i_{l}$ by increasing the makespan at most by an additive $T$.
Otherwise, open machine $i_{l}$ with the cheapest $a_{i_{l}}$. Since
the total contribution of these machines to the cost is
$\sum_{k=1}^{\ell_j} \bar{y}_{i_{k}}a_{i_{k}} \geq
\sum_{k=1}^{\ell_j} \bar{y}_{i_{k}}a_{i_{l}}\geq (1-1/\delta-1/\eta)a_{i_{l}}$, we are
within a factor $\frac{1}{1-1/\delta-1/\eta}$ of the total
cost contributed from $G_1$.
Hence, we have the following lemma,
\begin{lemma}
\label{lem:g1}
Considering only the job nodes in $G_1$, the final load on any machine $i \in M(G_1)$ is at most $T'\eta + \max_{i,j}p_{i,j}$ and the total activation cost is at most $\max(\frac{1}{\eta}, \frac{1}{(1-1/\delta-1/\eta)})OPT$, where $T'$ is the fractional load on machine $i \in M(G_1)$ before \emph{rounding on $G_1$} and $OPT$ is the optimum activation cost.
\end{lemma}
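The star-processing rule for the leftover jobs in $G_1$ can be sketched as follows (our own helper; the caller supplies the set of already opened machines):

```python
def round_star(machines, a, opened):
    """Assign the star's center job: reuse an already open machine if one
    exists (adding at most an additive T to its makespan), otherwise open
    the leaf machine with the cheapest activation cost a[i].

    machines: leaf machines i_1, ..., i_l of the star S_j."""
    for i in machines:
        if i in opened:
            return i
    cheapest = min(machines, key=lambda i: a[i])
    opened.add(cheapest)
    return cheapest
```

Because the leaves carry total fractional opening at least $1-1/\delta-1/\eta$, charging the cheapest $a_{i_l}$ to them loses only the factor stated in the lemma.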
Now combining Lemmas \ref{lem:cycle}, \ref{lem:g2} and \ref{lem:g1}, and optimizing the values of $\delta, \eta$ and $\gamma$, we get
the following theorem.
\begin{theorem}
A schedule can be constructed efficiently with machine activation cost $2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1) OPT$ and makespan $(2+\epsilon) T$, where $T$ is the optimum makespan possible for any schedule with activation cost $OPT$.
\end{theorem}
\begin{proof}
From Lemma \ref{lem:g2} and \ref{lem:g1}, we have,
\begin{itemize}
\item Machine opening cost is at most
$\left(\max\left(\frac{1}{\eta}, \frac{1}{1-1/\delta-1/\eta}\right) + \delta\left(\ln{\frac{n}{OPT}}+1\right)\right) OPT$
\item Makespan is at most $T\left(\max (\gamma,\eta)\right)+ \max_{i,j}p_{i,j} $
\end{itemize}
Now $\eta \geq \gamma$, since otherwise any edge with $X_{i,j} \geq 1/\eta$ would have been
moved to $G_2$; also $1-1/\delta \geq 1/\eta$ must hold.
Now set, $\gamma=\eta$, $\delta=1+\zeta$, for
some $\zeta >0$. So $1-1/\delta=\zeta/(1+\zeta)$.
Set $1/\eta=\zeta/(1+\zeta)-1/(1+\zeta)(\ln{\frac{n}{OPT}}+1)$.
Thus, we have an activation cost at most $2 (1+\zeta)(\ln{\frac{n}{OPT}}+1) OPT$
and makespan
$\leq T (1+\frac{\ln{n}+1}{\zeta\ln{n}-1})+\max_{i,j}p_{i,j}$.
Therefore, if we set $\zeta=1+2/\ln{n}$, we get an activation cost bound of
$4(\ln{\frac{n}{OPT}}+1)OPT$ and
makespan $\leq 2T + \max_{i,j}p_{i,j}$.
In general, by setting $\epsilon=\frac{1}{\zeta}$, we get an activation cost at most
$2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1) OPT$
and makespan $\leq (2+\epsilon)T$.
\end{proof}
\section{Minimizing Machine Activation Cost and Assignment Cost}
\label{sec:assign}
We now consider the scheduling problem with assignment costs and machine activation costs.
As before, each job can be scheduled only on one machine, and processing job $j$ on machine
$i$ requires $p_{i,j}$ time and incurs a cost of $c_{i,j}$.
Each machine is available for $T$ time units and the objective is to minimize the total
incurred cost. In this version of the machine activation model, we wish
to minimize the sum of the
machine activation and job assignment costs. Our objective now is
\[ \min \sum_{i \in M}a_{i}y_{i}+ \sum_{(i,j)} c_{i,j}x_{i,j}\]
subject to the same constraints as the LP defined in Eq(\ref{eq:lp}).
\iffalse
We guess the $\frac{1}{\epsilon}$ most expensive assignments of the optimum in
$(mn)^{\frac{1}{\epsilon}}$ time.
Let $(s,t)$ be the cheapest cost edge in the guessed set and is guaranteed to have
$c_{s,t}\leq \epsilon OPT$. We then write the LP,
only with those $x_{i,j}$'s as non-zero, for which $c_{i,j} \leq c_{s,t} \leq \epsilon OPT$.
\fi
Our algorithm for simultaneous minimization of machine activation and assignment cost follows
the same paradigm as has been developed in Section \ref{sec:main}, with some problem
specific changes. We mention the differences here.
\subsection{Transforming the Solution}
After solving the LP, we obtain, $C=\sum_{i,j}c_{i,j}x_{i,j}$. Though, we have an additional constraint $C=\sum_{i,j}c_{i,j}x_{i,j}$
to care about, we {\bf do not} include it in the linear system and proceed exactly as in Subsection \ref{subsec:transform}.
As long as the system is underdetermined, we can repeatedly
apply $\mathbf{RandStep}$ to form the two graphs
$G_1$ and $G_2$. By Property \ref{eqn:prop2}, $\forall i,j,h, \expect{X_{i,j}^{h}}=\bar{x}_{i,j}$ and hence, we have that the expected cost is
$\sum_{i,j}c_{i,j}\bar{x}_{i,j}$. The procedure can be directly derandomized by the
method of conditional expectation, giving a $1$-approximation to the assignment cost.
When the system becomes determined, we move to the next step. Thus at that point,
in every component of $G_1$, the number of edges is at most the number of vertices. Thus again each component of $G_1$,
can consist of at most one cycle.
In $G_2$, for all $(i,j) \in E(G_2)$, we have $X_{i,j}\geq \bar{y}_{i}/\gamma$.
\subsection{Breaking the Cycles}
For breaking the cycle in every component of $G_1$, we proceed in
a slightly different manner from the previous section, since we now have two parameters,
$p_{i,j}$ and $c_{i,j}$, associated with each edge. Suppose $(i',j)$ is an edge in a cycle.
If the $X_{i',j}$ value of this edge exceeds $\frac{1}{2}$ then we can
assign job $j$ to machine $i'$ and increase the processing load on the
machine by $p_{i',j}$. This increases the makespan at most by an additive $\frac{T}{2}$,
since the job was already assigned to an extent of $\frac{1}{2}$ on that machine.
The assignment cost also goes up, but since
we pay $c_{i',j}$ to assign $j$ to $i'$, and the LP solution
pays at least $\frac{1}{2} c_{i',j}$, this cost causes a penalty by
a factor of $2$ even after summing up all such assignment costs. Similarly, activation cost is
also only affected by a factor of $2$.
If the $X_{i',j}$ value is
at most $\frac{1}{2}$, then we simply delete the edge $(i',j)$. We scale up all the $X_{i,j}$ values and $\bar{y_{i}}$ values by $2$.
Thus the total assignment of any job remains at least $1$ and the cost of activation and assignment can go up only by a factor of $2$.
\iffalse
For breaking the two cycles in every component of $G_1$, we proceed as in the previous section.
However, we now have two parameters, $p_{i,j}$ and $c_{i,j}$ associated with each edge.
This leads to the following modification to the update rules.
Suppose $(w_0,w_1,w_2, \ldots, w_{k}=w_{0})$ is
a cycle in $G_1$, with $w_0=w_k \in M$.
Let the current $x$ value on edge
$e_{t}=(w_{t-1},w_{t})$ be denoted by $z_{t}$. We next choose
values $\mu_{1},\mu_{2}, \ldots , \mu_{k}$
deterministically, and update the $x$ value of each edge $e_t =
(w_{t-1},w_t)$ to $z_t+\mu_{t}$. The value
$\mu_{t+1}$ (corresponding to edge $e_{t+1} = (w_t,w_{t+1})$), is determined as follows:
({\bf P1'}) If $w_t \in J$ (i.e., is a job node), then $\mu_{t+1}
= -\mu_{t}$ (i.e., we retain the total assignment value of
$w_t$);
({\bf P2'}) If $w_t \in M$ (i.e., is a machine node), we set
$\mu_{t+1}$ in such a way so that neither the load on machine $w_{t}$
increases, nor the assignment cost i.e., we set $\mu_{t+1} =
\min(-p_{w_t,w_{t-1}}/p_{w_t,w_{t+1}},-c_{w_t,w_{t-1}}/c_{w_t,w_{t+1}} )\mu_t$, if
$c_{w_t,w_{t+1}}\neq 0$. Otherwise $\mu_{t+1} =
-p_{w_t,w_{t-1}}\mu_t/p_{w_t,w_{t+1}}$. This ensures that the
incremental load $p_{w_t,w_{t-1}}\mu_t +
p_{w_t,w_{t+1}}\mu_{t+1} \leq 0$ and also, $c_{w_t,w_{t-1}}\mu_t +
c_{w_t,w_{t+1}}\mu_{t+1} \leq 0$
The changed rule {\bf P2'} ensures that, for all the machine nodes in the cycle, except $w_0=w_k$, neither the load nor the assignment cost increases.
Now we selects positive reals $\alpha, \beta$ as in the previous section and choose the vector ${\bf
\mu}$ similarly:
({\bf R1}) Set $\mu=f(\alpha)$, if
$p_{w_0,w_1}-p_{w_{0},w_{k-1}}{\mu_k/ \mu_1} < 0$.
({\bf R2}) Set $\mu=f(-\beta)$, if
$p_{w_0,w_1}-p_{w_{0},w_{k-1}}{\mu_k/ \mu_1} \geq 0$.
Therefore, the load on the machine $w_0=w_k$ does not increase.
However the assignment cost might increase.
But since $c_{i,j} \leq \epsilon C$, assignment costs can increase only by $\epsilon C$ for
breaking one cycle. Since, we have two cycles, the total assignment cost increases
at most by $2\epsilon C$.
\fi
\subsection{Rounding on $G_1, G_2$}
The first part involves further relaxing the solution, that is identical to the one described in subsection \ref{subsubsec:relax}. Therefore, we now concentrate on rounding $G_1$ and $G_2$ separately.
\subsubsection{Rounding on $G_2$}
In $G_2$, since we have for all $(i,j) \in E(G_2)$, $X_{i,j}\geq\bar{y}_{i}/\gamma$,
if we decide to open machine $i$, all the jobs $j \in J(G_2)$ can be assigned to $i$, by
losing only a factor of $\gamma$ in the makespan. Therefore, we just need to concentrate on minimizing the cost of opening machines and the total assignment cost, subject to the constraint that all
the jobs in $J(G_2)$ must have an open machine to get assigned.
This is exactly the case of {\em non-metric uncapacitated facility location }
and we can employ the rounding approach developed in \cite{srin:stoc95}
to obtain an approximation factor of
$O(\log{\frac{n+m}{OPT}})+O(1)$ on the machine activation and assignment costs.
\subsubsection{Rounding on $G_1$}
Rounding on $G_1$ is similar to the
case when there is no assignment costs with a few modifications.
We proceed in the same manner and obtain the stars with job nodes at the centers.
Now for each star $S_j$, with $j$ at its center, we consider all the machine nodes in $S_j$.
If some machine $i \in S_j$ is already open, we make its opening cost $0$.
Now we open the machine $\ell \in S_j$ for which $a_{\ell}+c_{\ell,j}$ is
minimum. Again using the same
reasoning as in Subsection \ref{subsubsec:round1}, the total cost does not exceed by more than a
factor of $\frac{1}{1-1/\delta-1/\eta}$.
Now optimizing $\delta,\eta,\gamma$, we get the following theorem,
\begin{theorem}
If there is a schedule with total machine activation and assignment cost as $OPT$ and makespan $T$,
then a schedule can be constructed efficiently in polynomial time, with total cost
$O(\log{\frac{n+m}{OPT}}+1) OPT$ and makespan $\leq (3+\epsilon) T$.
\end{theorem}
Note that for both the cases of minimizing alone the machine activation cost and also minimizing the assignment cost simultaneously,
total cost is bounded within a constant factor of $\log{d}$,
where $d$ is the maximum degree (total number of edges incident on the bipartite graph) of any machine node in $G_2$.
\input{mygreedy}
\section{Extensions}
\label{sec:extent}
\subsection{Handling Release Times}
Suppose each job $j$ has a machine-dependent release time $r_{i,j}$,
i.e., job $j$ can only be processed on machine $i$ after time $r_{i,j}$.
We can modify the algorithm in Section 2 to handle release times as follows.
For any ``guess'' of the makespan $T$,
we let $x_{i,j}=0$ if $r_{i,j}+p_{i,j}>T$ in the LP formulation.
Then, we run the $((2+\epsilon),2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1))$-approximation regardless of the release times
and obtain a subset of
active machines and an assignment of jobs to these machines.
Suppose the subset $J_i$ of jobs is assigned to machine $i$.
We can now schedule the jobs in $J_i$ on machine $i$ in order by release time.
It is not hard to see that the makespan of machine $i$ is
at most
$T+\sum_{j\in J_i}p_{i,j}$, since every job in $J_i$ is released by time $T$.
Therefore, we get a
$(3+\epsilon, 2(1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+O(1)))$
approximation. Similar extensions can be done for the case with
activation and assignment costs.
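Scheduling the jobs of one machine in release-time order, as described above, can be sketched as:

```python
def machine_finish_time(jobs):
    """Schedule the jobs assigned to one machine in order of release time.

    jobs: list of (r, p) pairs with r + p <= T for the guessed makespan T.
    Since every job is released by time T, the returned finish time is
    at most T + sum of the p's, as used in the analysis above."""
    t = 0.0
    for r, p in sorted(jobs):  # nondecreasing release times
        t = max(t, r) + p      # wait for the release, then process
    return t
```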
\subsection{Scheduling with Outliers }
We now consider the case where each job $j$ has profit $\pi_{j}$ and we are
not required to schedule all the jobs.
Some jobs can be dropped but the total profit that can be dropped is at
most $\Pi'$.
Therefore the total profit earned must be at least $\sum_{j}\pi_{j}-\Pi'=\Pi$.
We now show how using our framework and a clever trick used in \cite{Gupta},
we can obtain a bound of $(3+\epsilon)$ on the makespan and
$2 (1+\frac{1}{\epsilon})(\ln{\frac{n}{OPT}}+1)$ on the machine activation cost, while guaranteeing that
profit of at most $\Pi'(1+\epsilon)$ is not scheduled.
If we consider both machine activation and assignment cost, then we obtain a
total cost within $O(\log{\frac{n+m}{OPT}}+O(1))$ of the
optimum without altering the makespan and the profit approximation factor.
We create a dummy machine $dum$, which has cost $a_{dum}=0$ and, for all $j$,
$c_{dum,j}=0$. The processing time of job $j$ on $dum$ is $\pi_{j}$.
It is a trivial exercise to show that both the algorithms of the
previous sections work
when the makespan constraint is different on different machines.
If the makespan constraint on machine $i$ is $T_i$, then the makespan for
machine $i$ is at most $(1+\epsilon)T_i+ \max_{j}p_{i,j}$.
For the dummy machine $dum$, we set a makespan constraint of $\Pi'$.
After the final assignment, the
makespan at the dummy node, i.e., the total dropped profit, is at most $(1+\epsilon)\Pi'+\max_{j}\pi_{j}$.
With some work it can be shown that we can regain the
lost profit by moving a job of maximum profit on $dum$ to either an
existing machine or a newly opened machine. This either increases
our cost slightly, or increases the makespan to at most
$(3+\epsilon)T$.
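The dummy-machine reduction can be written down mechanically; the following sketch (ours, with hypothetical instance data) shows only the instance transformation, not the full algorithm:

```python
# Dropping a job is modelled as assigning it to the dummy machine "dum",
# where its processing time equals its profit; the makespan cap Pi' on
# "dum" then caps the total dropped profit.

def add_dummy_machine(activation, proc, profits, drop_budget):
    """activation: {machine: a_i}; proc: {(machine, job): p_ij};
    profits: {job: pi_j}.  Returns the augmented instance and the
    makespan cap for the dummy machine."""
    activation = dict(activation)
    proc = dict(proc)
    activation["dum"] = 0                 # a_dum = 0 (and c_dum,j = 0)
    for j, pi in profits.items():
        proc[("dum", j)] = pi             # processing time = profit
    caps = {"dum": drop_budget}           # makespan constraint Pi'
    return activation, proc, caps
```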
\section{Minimizing Machine Activation Cost in Uniformly Related Machines}
In this section, we show that for related parallel machines,
there is a polynomial time $(1+\epsilon, 1)$-approximation for any
$\epsilon>0$. If a schedule with activation cost $A$ and makespan $T$
exists, then we find a schedule with activation cost $A$ and makespan
at most $(1+\epsilon)T$.
We briefly sketch the algorithm which is a slight generalization of
the approximation scheme for makespan minimization on related
parallel machines by Epstein and Sgall \cite{EpsteinSgall}.
Actually, their algorithm can optimize a class of objective
functions which includes, for example, the makespan, the $L_p$ norm of the
load vector, etc. We only discuss the makespan objective in our paper.
The extensions to other objectives are straightforward.
Roughly speaking, Epstein and Sgall's algorithm works as follows
(see \cite{EpsteinSgall} for detailed definitions and proofs).
They define the notion of a {\em principal configuration} which is a vector
of constant dimension and is used to succinctly represent a set of jobs
(after rounding their sizes).
A principal configuration (see Appendix ~\ref{epsteinsgall} for more details)
is of the form $(w, \vec{n})$ where
$w=0$ or $w=2^i$ for some integer $i$ and $\vec{n}$
is a vector of non-negative integers. The number of
different principal configurations is polynomially bounded (for any
fixed $\epsilon>0$). They also construct the graph of
configurations in which each vertex is of the form $(i,\alpha(A))$
for any $1\leq i\leq m$ and principal configuration $\alpha(A)$ of
the job set $A\subset J$. There is a directed edge from $(i-1, \alpha)$
to $(i,\alpha')$ if $\alpha'$ represents a set of jobs that is a superset
of what $\alpha$ represents; the length of the edge is the ($1+\epsilon$)-approximated ratio
of the total weight of the jobs
in the difference of these two sets to the speed $s_i$ of machine $i$.
Intuitively, an assignment $J_1,\ldots,J_m$ with jobs in $J_i$ assigned to machine $i$ corresponds to a path
$P=\{(i,\alpha_i)\}_i$ in $G$ such that $\alpha_i$ represents $\cup_{j=1}^i J_j$
and the {\em length} of
edge $((i-1,\alpha_{i-1}),(i,\alpha_i))$ is approximately the load of machine $i$.
By computing a path $P$ in $G$ from $(0,\alpha(\emptyset))$ to $(m,\alpha(J))$ such that
the maximum length of any edge in $P$ is minimized, we can find a
$(1+\epsilon)$-approximation for minimizing the makespan.
To obtain a $(1+\epsilon, 1)$-approximation of the machine activation problem,
we slightly modify the above construction of the graph as follows.
The sets of vertices and edges are the same as before.
We associate each edge with a cost. If both endpoints of edge $((i-1,\alpha_{i-1}),(i,\alpha_i))$ have the same
principal configuration $\alpha_{i-1}=\alpha_i$, then the cost of the edge is $0$;
Otherwise, the cost is
the activation cost $a_i$ of machine $i$.
For the guess of the makespan $T^{\#}$, we compute a path from $(0,\alpha(\emptyset))$ to
$(m,\alpha(J))$ such that
the maximum length of any edge in $P$ is at most $T^{\#}$ and the cost is minimized.
If $T^{\#} \ge (1+\epsilon)T$, we are guaranteed to find a path of cost at most $A$.
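The path computation for a fixed guess $T^{\#}$ is a standard min-cost search restricted to edges of length at most $T^{\#}$; a sketch (ours, not the paper's implementation, on a hypothetical toy graph):

```python
import heapq

# Minimum-cost s-t path using only edges whose length is <= T_sharp.
# On the configuration graph, edge length models machine load and edge
# cost models activation cost.

def min_cost_path_with_cap(edges, s, t, T_sharp):
    """edges: list of (u, v, length, cost) directed edges."""
    adj = {}
    for u, v, ln, c in edges:
        if ln <= T_sharp:                         # drop over-long edges
            adj.setdefault(u, []).append((v, c))
    dist, pq = {s: 0}, [(0, s)]
    while pq:                                     # Dijkstra on cost
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return None                                   # no feasible path

edges = [("s", "a", 2, 5), ("s", "b", 10, 1),
         ("a", "t", 3, 0), ("b", "t", 1, 0)]
assert min_cost_path_with_cap(edges, "s", "t", 3) == 5   # cheap route too long
```

Sweeping (or binary searching) over the guesses $T^{\#}$ then yields the claimed trade-off.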
\section{Conclusions}
Current research includes considering different $L_p$ norms as well
as other measures such as weighted completion time.
The greedy approach currently only works
for the most basic version giving a makespan of $2T$
and an activation cost of $O(\log n)A$. Extending it to handle
other generalizations of the basic problem is ongoing research.
\noindent
{\bf Acknowledgments:}
We thank Leana Golubchik (USC) and Shankar Ramaswamy (Amazon) for useful discussions.
\section{INTRODUCTION}
Big Bang nucleosynthesis begins about three minutes after the Big
Bang, when the universe has cooled down sufficiently to form
stable protons and neutrons, after baryogenesis. The relative
abundances of these particles follow from simple thermodynamical
arguments, combined with the way that the mean temperature of the
universe changes over time (if the reactions needed to reach the
thermodynamically favoured equilibrium values are too slow
compared to the temperature change brought about by the expansion,
abundances will remain at some specific non-equilibrium value).
Combining thermodynamics and the changes brought about by cosmic
expansion, one can calculate the fraction of protons and neutrons
based on the temperature at this point. The answer is that there
are about seven protons for every neutron at the beginning of
nucleogenesis, a ratio that would remain stable even after
nucleogenesis is over. This fraction favours protons
initially, primarily because the lower mass of the proton favours their
production. Free neutrons also decay to protons with a half-life
of about 15 minutes, but this time-scale is too long, compared to
the duration of BBN, to affect the number of neutrons appreciably:
most of the free neutrons had already been
absorbed into nuclei in the first 3 minutes of nucleogenesis, a time too
short for a significant fraction of them to decay to protons. One
feature of BBN is that the physical laws and constants that govern
the behavior of matter at these energies are very well understood,
and hence BBN lacks some of the speculative uncertainties that
characterize earlier periods in the life of the universe. Another
feature is that the process of nucleosynthesis is determined by
conditions at the start of this phase of the life of the universe,
making what happens before irrelevant. As the universe expands, it
cools. Free neutrons and protons are less stable than helium
nuclei, and the protons and neutrons have a strong tendency to
form helium-4. However, forming helium-4 requires the intermediate
step of forming deuterium. At the time at which nucleosynthesis
occurs, the temperature is high enough for the mean energy per
particle to be greater than the binding energy of deuterium;
therefore any deuterium that is formed is immediately destroyed (a
situation known as the deuterium bottleneck). Hence, the formation
of helium-4 is delayed until the universe becomes cool enough to
form deuterium (at about T = 0.1 MeV), when there is a sudden
burst of element formation. Shortly thereafter, at twenty minutes
after the Big Bang, the universe becomes too cool for any nuclear
fusion to occur. At this point, the elemental abundances are
fixed, and only change as some of the radioactive products of BBN
(such as tritium) decay. Excellent reviews of this topic may be
found in, for example, Kolb and Turner~\cite{wien} and
Raychowdhuri~\cite{akr}.\\
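As a back-of-envelope illustration (ours, not part of the original text), the seven-to-one proton-to-neutron ratio directly predicts the primordial helium mass fraction, assuming essentially all neutrons end up in helium-4:

```python
# Each helium-4 nucleus binds 2 neutrons and 2 protons, so with a
# neutron-to-proton ratio n/p the helium mass fraction is
#   Y_p = 2(n/p) / (1 + n/p).

def helium_mass_fraction(n_over_p):
    return 2.0 * n_over_p / (1.0 + n_over_p)

# n/p = 1/7 gives the familiar Y_p = 0.25.
assert abs(helium_mass_fraction(1.0 / 7.0) - 0.25) < 1e-12
```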
On the other hand the long time goal of unification of gravity
with other forces of nature continues to remain elusive in quantum
field theory. Most recently efforts in this search have been
directed in studying theories where the dimensions of the
spacetime is larger than the (3+1) that we observe
today~\cite{sc}. In the cosmological context higher
dimensional spacetime is particularly important because Einstein's
field equations, generalized to higher dimensions, admit solutions
in which, as the `usual' 3D space expands, the extra dimensions
shrink with time, so that at a certain stage (perhaps at the
Planckian time) the extra dimensions are no longer visible to
present-day experimental techniques and the cosmology looks
effectively four dimensional.
of higher dimensional cosmology is particularly relevant in the
sense that both higher dimensional spacetime and primordial
element formation are important in the early universe.
\section{ The FIELD EQUATIONS}
We have shown earlier~ \cite{mnras} that if we start with a
$(n+2)$-dim spherically symmetric line element as
\begin{equation}
ds^{2} = A^{2}dt^{2}- B^{2}dr^{2}- C^{2}dY_{n}^{2}
\end{equation}
where A, B and C are functions of r and t and
\begin{equation}
dY_{n}^{2}= d\theta_{1}^{2}+ \sin^{2}\theta_{1}~ d\theta_{2}^{2}+
\sin^{2}\theta_{1}~ \sin^{2}\theta_{2}\cdots \sin^{2}\theta_{n-1}
~d\theta_{n}^{2}
\end{equation}
and then demand that
the energy momentum tensor should be homogeneous then the above
line element reduces to
\begin{equation}
ds^{2}= dt^{2}-\frac{R(t)^{2}}{(1+ kr^{2}/4)^{2}}(dr^{2}+ r^{2}d
Y_{n}^{2})
\end{equation}
where $k$ is the (n+1) space curvature. This metric form may be
regarded as the generalized Friedmann-Robertson-Walker universe. We
studied this metric form earlier to get an interesting
astrophysical observation that the mean density of any local
inhomogeneity in this generalized FRW universe must be equal to
the mean cosmological density. In this report we turn our
attention to the vexed problem of nucleosynthesis in the early
universe.\\
As pointed out earlier higher dimensional spacetime is
particularly relevant to the early universe and so the question of
element formation in the higher dimensional universe is
particularly important. From Einstein's field equations
\begin{equation}
R_{ij} - \frac{1}{2}Rg_{ij}= -T_{ij}
\end{equation}
where $T_{ij}$ stand for the energy momentum tensor appropriate to
our matter field we get for the metric(3) the following field
equations
\begin{equation} \frac{n(n+1)}{2}\frac{\dot{R}^{2}+ k}{R^{2}}=
\rho
\end{equation}
\begin{equation}
-\frac{n\ddot{R}}{R}-\frac{n(n-1)}{2}\frac{\dot{R}^{2}+ k}{R^{2}}=
p
\end{equation}
where $\rho$ and p are the homogeneous mass density and pressure
respectively. In the early universe one takes the radiation
dominated case as the equation of state, so that for a (n+2) dim.
universe, $p = \frac{\rho}{n-1}$. One of the phenomenal successes
favouring big bang cosmology is the almost correct prediction
of primeval nucleosynthesis, particularly the observed
abundances of the light nuclei in the current universe.\\ Now it
can be shown via the field equations (5) and (6) that for the
radiation dominated case
\begin{equation}
\rho = \frac{2n(n+1)}{(n+2)^{2}}\frac{1}{t^{2}}
\end{equation}
Assuming the absence of any dissipative mechanisms (for example,
viscosity, friction etc.) and also that the laws of thermodynamics
remain valid at the enormous temperatures of the early universe,
one gets from elementary thermodynamical
considerations~\cite{alvarez} that
\begin{equation}
\rho = \sigma T_{rad}^{n+2}
\end{equation}
where $\sigma$ is the higher dimensional Stefan's constant,
whence it follows that
\begin{equation}
T_{rad}= \left[\frac{2n(n+1)}{(n+2)^{2}\sigma}\right]^{\frac{1}{n+2}}t^{-\frac{2}{n+2}}
\end{equation}
where $T_{rad}$ is the temperature of radiation and `t' is the age
of the universe. For the usual 4D spacetime it reduces to the
well-known relation \\
$T_{kelvin}= 1.52\times 10^{10}\, t^{-1/2}$ (with $t$ in seconds).\\ Equation (9) is
the key equation for our attempt to investigate the effect of
extra dimensions on the process of nucleosynthesis in early
universe. We here study
the situation when elementary particles have already materialized,
allowing us to take the low temperature approximation, $T \ll
m_{\mu}c^{2}= T_{\mu}$, for their distribution function. Here
$m_{\mu}$ is the mass of a particular species. We now try to
study the equilibrium condition for neutrinos with other species.
As the reactions involving the neutrinos fall within the category
of weak interactions and $T < T_{\mu}$ the cross section of a
typical reaction is of the order of
\begin{equation}
A = f^{2}h^{-4}(kT)
\end{equation}
where $f$ is the weak coupling constant. For simplicity it is
further assumed that the constant $A$ does not depend on the number
of spatial dimensions.\\
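As a quick consistency check (ours) of the temperature law derived above, the exponent in $T_{rad}\sim t^{-2/(n+2)}$ reduces to the familiar $t^{-1/2}$ behaviour at $n=2$ and falls off more slowly for larger $n$:

```python
from fractions import Fraction

def temperature_exponent(n):
    """Exponent x in T_rad ~ t^x for an (n+2)-dimensional spacetime."""
    return Fraction(-2, n + 2)

assert temperature_exponent(2) == Fraction(-1, 2)   # the 4D law t^(-1/2)
# A larger exponent (closer to zero) means the temperature falls less
# rapidly, as noted in the text for higher-dimensional cosmology.
assert temperature_exponent(3) > temperature_exponent(2)
```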
Moreover the number densities of the participating particles (say,
muons) are, for (n+1) spatial dimensions, of the order of
$(T/ch)^{n+1}$, and for reactions involving muons at low
temperatures an exponential damping factor of $\exp(-T_{\mu}/T)$
should also be considered.\\ In the cosmological context one
should also consider the rate of expansion of the background,
which from equations (5), (7) and (8) gives
\begin{equation}
H^{2}=\frac{\dot{R}^{2}}{R^{2}}\sim t^{-2} \sim T^{n+2}
\end{equation}
Thus the ratio of the reaction rate to the expansion rate now
becomes
\begin{equation}
\frac{Q}{H}\sim \left(\frac{T}{10^{10}K}\right)^{\frac{n+4}{2}}\exp\left(-\frac{10^{12}K}{T}\right)
\end{equation}
One recovers the familiar 4D form when $n=2$. As the temperature
falls below the critical level of $10^{10}K$, the exponential
decays rapidly. One can at this stage call attention to a
significant quantitative difference from the 4D case. From
equations (9) and (12) it is tempting to suggest that as the
temperature falls less rapidly in higher dimensional cosmology
than the analogous 4D situation it takes relatively more time for
the elementary particles to cool below the threshold temperature.
More importantly, the quotient $\frac{Q}{H}$ is more sensitive to
temperature fluctuations in a multidimensional universe. Thus $Q$ is
larger than $H$ for $T > 10^{12}K$, depending on the number of extra
dimensions, and in these temperature ranges the neutrinos would be
in thermal equilibrium with the rest of the species. As the
temperature falls further below $10^{10}K$ both the terms on the
rhs of equation (12) drop rapidly, which means that the reactions
involving neutrinos run at a slower rate compared to the
expansion of the universe. This triggers the so-called decoupling
of the neutrinos from the rest of the other constituents of matter
and, as pointed out earlier, in the higher dimensional universe it takes
relatively more time for this decoupling phase of the neutrinos to
occur. However, the theoretical and observational consequences of
this supposed time lag for the initiation of the decoupling era
need to be worked out in more detail before any definite
inferences could be drawn.\\\\
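The sensitivity of $Q/H$ to the number of dimensions can be illustrated numerically (our sketch; we take the damping factor as $\exp(-10^{12}\mathrm{K}/T)$ and drop all prefactors):

```python
import math

def q_over_h(T, n):
    """Order-of-magnitude form of Q/H for temperature T in kelvin and an
    (n+2)-dimensional spacetime; prefactors are dropped."""
    return (T / 1e10) ** ((n + 4) / 2.0) * math.exp(-1e12 / T)

# Above ~1e12 K the ratio is large (equilibrium), and extra dimensions
# make it larger still, i.e. more temperature sensitive.
assert q_over_h(1e13, 2) > 1.0
assert q_over_h(1e13, 3) > q_over_h(1e13, 2)
# Well below 1e10 K the exponential suppresses the ratio (decoupling).
assert q_over_h(1e9, 2) < 1e-50
```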
\textbf{Acknowledgment: }
The financial support of UGC, New Delhi, in the
form of an MRP award is acknowledged.
\bibliographystyle{aipproc}
\IfFileExists{\jobname.bbl}{}
{\typeout{}
\typeout{******************************************}
\typeout{** Please run "bibtex \jobname" to obtain}
\typeout{** the bibliography and then re-run LaTeX}
\typeout{** twice to fix the references!}
\typeout{******************************************}
\typeout{}
}
\section{Introduction}
Let $X$ be a finite {\it regular} cell complex of dimension $d$ and
let $\mathbf{K}$ be a field. Following \cite{CPS1}, we will associate to
$X$, under certain global assumptions, a quadratic $\mathbf{K}$-algebra $R(X)$
(defined below). The main focus of \cite{CPS1} is to determine the
combinatorial properties required for this algebra to be {\it Koszul}.
The primary focus of this paper is to show that the Koszul property is
actually a topological invariant, even though the algebra is not. In
the process we see that our global assumptions also imply some
restrictions on singularities of appropriate spaces $X$.
After a definition of our two technical assumptions we can state our
main theorem. Our complexes will be finite throughout. We will not
generally restate this hypothesis.
\begin{definition} Let $X$ be a regular cell complex of dimension $d$.
\item(1) $X$ is {\it pure} if $X$ is the closure of its open $d$-cells.
\item(2) A pure, finite regular cell complex $X$ is {\it connected through codimension one faces} if the space $X- X^{(d-2)}$ is path connected (where $X^{(d-2)}$ is the $(d-2)$-skeleton of $X$).
\end{definition}
\begin{theorem}\label{main-theorem-intro}
Let $X$ be a pure regular cell complex of dimension $d$, connected through codimension one faces. Then $R(X)$
is Koszul (for the field $\mathbf{K}$) if and only if the following conditions both
hold.
\begin{enumerate}
\item $\tilde{H}_{i}(X;\mathbf{K}) = 0$ for $i<d$.
\item $\tilde{H}_{i}(X,X-\{p \};\mathbf{K}) = 0$ for each $p \in X$ and
each $i<d$.
\end{enumerate}
\end{theorem}
Because our hypotheses on the cell complex structure and on homology
are obviously homeomorphism invariant,
Theorem~\ref{main-theorem-intro} shows that the Koszul property for $R
(X)$ is a homeomorphism invariant.
We point out, however, that one does not have any nice homotopy invariance.
\begin{example}\label{koszul-not-top-invt}
There are homotopy equivalent, pure regular cell complexes $X$ and $Y$
of dimension $3$ such that $R (X)$ is not Koszul but $R
(Y)$ is Koszul.
\end{example}
Take $Y$ to be the union of two $3$-cells attached by some
2-dimensional face. Then $Y$ satisfies the hypotheses of
Theorem~\ref{main-theorem-intro}. Take $X$ to be Example
5.9 of \cite{CPS1}, which is described explicitly as Example~\ref{s1-bad}. $X$ is homotopy equivalent to $Y$ since one gets
a space homeomorphic to $Y$ by collapsing the contractible subcomplex
$a$ of $X$ to a point. But \cite{CPS1} shows $R (X)$ is not Koszul (as one can also see by Theorem~\ref{main-theorem-intro}).
Although our argument does not make direct use of the definition of $R
(X)$, we review that definition here for the sake of completeness.
Let $P$ be any finite ranked poset with minimal element $\bar 0$. For
each $x\in P$ let $s_{1}(x) = \{y\in P\,|\, y<x, \rk(x)-\rk(y)=1\}$ (the
elements immediately below $x$ in the ranked poset). We define $R(P)$ to be the quadratic $\mathbf{K}$-algebra on generators $r_x$, $x\in P-\{\bar 0\}$ with relations:
\[
r_x r_y = 0 \hbox{ for all } y\not\in s_{1}(x)
\]
and
\[
r_x \sum\limits_{z\in s_{1}(x)} r_z = 0 \hbox{ for all } x
\]
The set of all {\it closed} cells of a regular cell complex, together
with the empty set, form a finite ranked poset under set inclusion.
Following \cite{CPS1}, this is denoted $\bar P(X)$. (In this poset
the rank of a cell is one more than its dimension, so that the rank of
the empty set is $0$).
If we assume that $X$ is pure, then we may adjoin one additional
(maximal) element to the poset $\bar P(X)$. The resulting poset,
which is denoted $\hat P(X)$ is still a ranked poset. If we further
assume that $X$ is connected through codimension one faces, then $\hat
P(X)$ has the combinatorial property known as {\it uniform}
(cf. \cite{GRSW05}). Then we define $R(X)$ to be $R(\hat P(X))$.
While $R (\bar{P}(X))$ is always Koszul under the hypotheses that $X$
is pure and connected through codimension one faces (see
\cite{CPS1} and \cite{RSW08}), the Koszul property for $R (X)$ is
substantially more subtle. Theorem 5.3 of \cite{CPS1} gives a precise
statement in terms of the combinatorial cell structure of $X$
describing when $R(X)$ is Koszul (refer to Theorem \ref{CPS-main}
below).
\section{CPS cohomology and local homology}\label{local-CPS}
We fix $X$, a finite regular CW complex of dimension $d$. We begin by recalling the definitions of the groups
$H_{X} (n,k)$
from \cite[\S 4]{CPS1}, which we will write as $H^{n}_{k}(X)$.
Assign orientations to each cell of $X$.
If $\beta$ is an $n$ cell and $\alpha$ is an $n+1$ cell, let
$[\alpha : \beta]$ be the incidence number of $\beta$ in $\alpha$.
Because $X$ is regular, this is either $0, 1$ or $-1$. These
incidence numbers are usually defined in the context of cellular
homology so that, if $C_{*}(X)$ is the cellular chain complex of $X$,
and $\alpha$ is an $n+1$ cell,
\[
d (\alpha) = \sum_{n \mbox{ cells } \beta}[\alpha :\beta]\beta
\]
Because $X$ is finite, and because we have a chosen basis for the
cellular chains (given by the cells of $X$) we have an isomorphism
between the cellular chains and the cellular cochains of $X$. We
consider the cochains in dimension $n$ to be generated by the basis
dual to the $n$ cells of $X$, but we will use the same notation. That is, an
$n$-cell $\alpha$ when considered as a generator of $C^{n}(X)$ will be
thought of as dual to the $n$-cell $\alpha$ in the basis of $C_{n}(X)$
provided by the $n$-cells. With this identification, the coboundary map $\delta
:C^{n}(X) \rightarrow C^{n+1}(X)$ is given by
\[
\delta (\alpha) = \sum_{n+1 \mbox{ cells } \beta}[\beta :\alpha]\beta.
\]
We define $C^{n}_{k}(X)$ to be the submodule of $C^{n}(X)\otimes
C_{k}(X)$ generated by $\alpha \otimes \beta$ such that $\beta
\subseteq \partial \alpha$ (that is, the cell associated to $\beta$ is
a subset of the boundary of the cell associated to $\alpha$). Then $d$
induces a differential $C^{n}_{k}(X) \rightarrow C^{n}_{k-1}(X)$ and
$\delta$ induces a differential $C^{n}_{k}(X) \rightarrow
C^{n+1}_{k}(X)$.
\begin{definition}\label{l-n-k}
(\cite[Definition 4.1]{CPS1}) For each $k$ and $n$, let:
\[
L^{n}_{k}(X) = \coker (C^{n}_{k+1}(X) \xrightarrow{d}C^{n}_{k}(X) ).
\]
Then $L_k^*(X)$ is a cochain complex with differential induced by $\delta$.
The CPS cohomology groups of $X$ are defined by
\[
H^{n}_{k}(X) = H^{n}(L^{*}_{k}(X)).
\]
These cohomology groups are defined with coefficients in $\mathbf{Z}$. We write $H_k^n(X;R)$ to denote the same groups when calculated with coefficients in a commutative coefficient ring $R$.
\end{definition}
We now recall Theorem 5.3 of \cite{CPS1}.
\begin{theorem}\label{CPS-main} Let $X$ be a pure regular cell complex of dimension $d$, connected by codimension one faces. Then the $\mathbf{K}$-algebra $R(X)$ is Koszul if and only
if $H^{n}_{k}(X,\mathbf{K}) = 0$ for $0\leq k<n<d$.
\end{theorem}
For our purposes it is convenient to present a reformulation of Theorem \ref{CPS-main} in terms of relative cohomology groups involving the {\it stars} of the cells of $X$.
\begin{definition}
The \emph{star} of a cell $\sigma$ in a regular cell complex $X$ is
\[
\st (\sigma) = \{y\in X: y \text{ is in some open cell whose closure contains }
\sigma \}
\]
\end{definition}
We note that $\st (\sigma)$ is an open subset of $X$. We also use
$\st^{l}(\sigma)$ to denote the union of the open cell $\sigma$ with
all open cells in $\st (\sigma)$ of dimension $\leq l$.
\begin{theorem}\label{CPS-reformulated} Let $X$ be a pure regular cell complex of dimension $d$, connected by codimension one faces. Then the $\mathbf{K}$-algebra $R(X)$ is Koszul if and only if
\item(1) $\tilde H^n(X,\mathbf{K}) = 0$ for $n<d$ and
\item(2) For every $k$-cell $\sigma$ and $k+1<n<d$, $H^n(X,X-\st(\sigma);\mathbf{K}) = 0$.
\end{theorem}
\begin{remark} Theorem \ref{CPS-reformulated} is a reformulation of Corollary 5.8 in \cite{CPS1}. We wish to point out that the condition $k+1<n$ was inadvertently omitted in their statement.
\end{remark}
As we will see in the next section, the cohomology groups $H^n(X,X-\st(\sigma))$ can be replaced by the {\it local homology} groups
$H_n(X,X-\{x\})$ for any $x\in \sigma$ (see Lemma \ref{def-retract}). This suggests the following definition.
\begin{definition}\label{singular-n}
We define the set $S_{n}$ (relative to the ring of coefficients $R$) by $x \in S_{n}$ if $H_{i} (X,X-\{x \};R) = 0$
for $i \leq n$ and $H_{n+1} (X, X-\{x \};R) \ne 0$.
\end{definition}
Now we can state and prove a proposition equivalent to Theorem \ref{main-theorem-intro}. We will leave certain technical aspects of the proof to the subsequent two sections, as well as a more extensive discussion of the structure and significance of the sets
$S_n$.
\begin{proposition}\label{main-prop}
Let $X$ be a pure regular cell complex of dimension d which is
connected through codimension one faces. Then $R(X)$ is Koszul if and only if
\item(1) $\tilde H^n(X,\mathbf{K}) = 0$ for $n<d$ and
\item(2) The sets $S_k$ (relative to $\mathbf{K}$) are empty for $0\le k \le d-2$.
\end{proposition}
\begin{proof}
In this proof all homology and cohomology groups should be computed relative to the field $\mathbf{K}$. We suppress this from the notation.
We need only see that condition (2) of Theorem \ref{CPS-reformulated} and condition (2) of Proposition \ref{main-prop} are equivalent under the hypotheses on $X$ and condition (1). Suppose the sets $S_k$ are empty for $0\le k\le d-2$. Then by
Lemma~\ref{def-retract} $H_{i}(X,X-\st (\sigma)) = 0$ for every cell
$\sigma$ and every $i<d$. The same follows by the universal
coefficient theorem for $H^{i}(X,X-\st (\sigma))$.
Conversely, assume that for some $i \le d-2$, the set $S_i$ is not empty. Let $m$ be minimal such that
$S_{m} \ne \emptyset$. By Lemma~\ref{singular-set-union-of-cells} and Proposition~\ref{singularity-is-very-singular}, $S_m$ is a union of cells and must contain a cell $\alpha$ of dimension $k< m$. By Lemma \ref{def-retract}, $H^{m+1}(X,X-\st(\alpha))$ does not vanish, contradicting (2) of Theorem \ref{CPS-reformulated}.
\end{proof}
\section{Preliminary homotopy results}
Let $X$ be a regular cell complex of dimension $d$. If $x \in X$, we write $\sigma (x)$ for the unique open cell of $X$
containing $x$.
The following is a standard lemma of piecewise linear
topology.
\begin{lemma}\label{def-retract}
Given a cell $\sigma$, $\st (\sigma)$ is contractible (and in fact has
a strong deformation retract to $\sigma$). Also, given any point $x
\in \sigma $, there is a strong deformation retract
\[
X- \{x \} \rightarrow X- \st (\sigma (x)).
\]
\end{lemma}
\begin{proof}
To see $\st (\sigma)$ has a strong deformation retract to $\sigma $ we
want a homotopy
\[
H: \st (\sigma)\times I \rightarrow \st (\sigma ).
\]
Of course $H|_{\sigma \times I}$ will just be the projection to
$\sigma$.
Now suppose $H$ has been defined on the subset of $\st (\sigma)$
consisting of $\sigma$ together with other open cells up through cells
of dimension $l$. Since $X$ is a regular cell complex, for each open
$l+1$ cell $E$ of $\st (\sigma )$, $H$ is defined on a contractible
subset of the boundary, $E' \subseteq \partial E$. So $H$ is defined
on $W = E\times \{0 \}\cup E'\times I$. The pair $(E\times I,W)$
has the homotopy extension property (see \cite[p. 23]{Hatcher02}), so
we use that to define $H$ on $E\times I$.
To define our retract
\[
X- \{x \} \rightarrow X- \st (\sigma (x)).
\]
we begin by noting there is a strong deformation retract
$\overline{\sigma (x)} - \{x \}$
to $\partial \overline{\sigma (x)}$.
Now assume the homotopy is defined on $\st^{l}(\sigma (x)) -\{x \}$
(and of course is the identity on $X-\st (\sigma (x))$). Note that
$\st^{0}(\sigma (x)) = \sigma (x)$.
Let $E$ be the closure of an $l+1$ cell of $\st (\sigma (x))$. Since the pair
\[
( (E-\{x \})\times I,(E-\{x \})\times \{0 \}\cup (\partial E - \{x
\})\times I)
\]
has the homotopy extension property, extend $H$ across $E-\{x \}$.
Continue until the homotopy is defined on all of $X - \{x \}$.
\end{proof}
\begin{corollary}\label{main-retract}
Given a cell $\sigma$ there is a strong deformation retract
\[
X-\sigma
\rightarrow X- \st (\sigma)
\]
such that if $x \in E$ for some cell $E$, then the image of $x\times
I$ is in $\overline{E}$, and meets no cells of $\st (\sigma)$ other
than $E$.
\end{corollary}
\begin{proof}
We apply the strong deformation retract of Lemma~\ref{def-retract} to the space $X- \sigma$.
\end{proof}
As an application of Corollary~\ref{main-retract} we have the
following.
\begin{corollary}\label{point-retract}
Let $D$ be an open $n$-cell of $X$ and $\sigma $ a $0$-cell in $\partial D$.
Then
\[
X - (\sigma \cup D) \simeq X-\sigma.
\]
\end{corollary}
\begin{proof}
Apply the homotopy from Corollary~\ref{main-retract} to the space $X- (\sigma \cup
D)$. This gives a retract of $X- (\sigma \cup D)$ to $X- (\st (\sigma ))$, and
since that space is also a retract of $X-\sigma $, we get $X- (\sigma \cup D)
\simeq X-\sigma$.
\end{proof}
\begin{proposition}\label{subdivision-retract}
Let $X$ be the realization of a simplicial complex $\Delta$, and let $A$
be a closed $i$-simplex in $\Delta'$ (the first barycentric subdivision of $\Delta$).
Let $v$ be the vertex that $A$ shares with an $i$-simplex of $\Delta$.
Then
\[
X - \{v \} \simeq X-A.
\]
\end{proposition}
\begin{proof}
In the complex given by $\Delta$ we can construct the deformation retract
of $X - \{v \}$ to $X - \st (v)$ (where $\st (v)$ is defined using
the simplicial complex $\Delta$)
\[
H: ( X- \{v \})\times I \rightarrow X- \{v \}.
\]
explicitly by using barycentric coordinates in each simplex of $\Delta$.
Specifically, if $\sigma$ is a simplex of $\Delta$ not containing $v$,
then $H (p,t) = p$ for $p \in \sigma$.
If $\sigma$ does contain $v$, let the vertices of $\sigma$ be
$v=v_{0},v_{1},\dotsc ,v_{k}$. Then a typical point of
$\sigma -\{v\}$ is given by
$sv_{0} + (1-s) \sum_{i=1}^{k}a_{i}v_{i}$ where
$\sum_{i=1}^{k}a_{i} = 1$. Then
\[
H (sv_{0} + (1-s) \sum_{i=1}^{k}a_{i}v_{i},t) = (1-t) sv_{0} +
(1-s+ts) \sum_{i=1}^{k}a_{i}v_{i}
\]
Applying this homotopy to $X - A$ gives a deformation retract to $X -
\st (v)$. So $X- \{v \} \simeq X- A$.
\end{proof}
\section{Singularities detected by local homology}\label{local-vanishing}
\subsection{The singular sets $S_{n}$ are composed of cells of
dimension less than or equal to $n$.}
We continue to assume that $X$ is a finite regular cell
complex of dimension $d$. Throughout this section we assume further that $X$ is pure. Recall that we refer to $H_*(X,X-x)$ as the local homology at $x$. Since $X$ is locally contractible we can choose a contractible neighborhood of $x$, say $U$. Then by excision we have
\[
H_{*} (X,X-\{x \}) \cong H_{*} (U,U-\{x \}) \cong \tilde{H}_{*-1} (U-\{x \}).
\]
From this we see that any $x$ in the interior of a $d$-cell of $X$ has
local homology $H_*(X,X-x) \cong \tilde H_*(S^d)$ and $x\in S_{d-1}$.
Similarly, if $x$ is on the boundary of exactly one $d$-cell then
$H_*(X,X-x) = 0$ and $x$ is in none of the sets $S_k$. So if we think of
$X$ as a singular manifold with boundary, the points with neighborhoods
homeomorphic to $\mathbf{R}^{d}$ or the corresponding half-space are
not in $S_{k}$ when $k<d-1$. The sets
$S_i$, $0\le i \le d-2$ form a stratification of those singularities
of $X$ that are detected by local homology.
Of course it is also possible for $X$ to be topologically singular and
still have
local homology zero in dimensions below $d$ at every point. A simple but
illustrative example (for $d=1$) is the space
\[
( [0,1]\times \{a,b,c \})/ \{(0,a) \sim (0,b) \sim (0,c) \}.
\]
This is three copies of the unit interval identified at one end
point. The identification point is a singular point and has no local
homology below dimension $1$. This singularity is still
detected by local homology of course, but not until dimension $1$.
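This example can be verified computationally (our sketch): triangulate each interval with two edges, glue them at a common vertex $c$, and count components after deleting $c$; three components means $\tilde H_0$ of the punctured space has rank $2$, so $H_1(X, X-\{c\})\ne 0$:

```python
# Count connected components of a 1-complex with union-find.

def components(vertices, edges):
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

verts = {"c", "a1", "a2", "b1", "b2", "d1", "d2"}
edges = [("c", "a1"), ("a1", "a2"), ("c", "b1"),
         ("b1", "b2"), ("c", "d1"), ("d1", "d2")]
assert components(verts, edges) == 1                     # X is connected

punctured_verts = verts - {"c"}                          # delete the cone point
punctured_edges = [e for e in edges if "c" not in e]
assert components(punctured_verts, punctured_edges) == 3 # three arms
```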
As is well known, there are also spaces with singularities so that all the
local homology groups are those of a manifold. A standard source of
examples is the suspension of any homology sphere which isn't actually
a sphere.
We begin by showing that the sets $S_n$ put restrictions on the cell
structure of $X$. Recall first (Definition~\ref{singular-n}) that
$S_{n}$ does not depend on the cellular structure of $X.$
Nevertheless, we have the following.
\begin{lemma}\label{singular-set-union-of-cells}
The set $S_{n}$ is a union of open cells (in any cell structure on $X$).
\end{lemma}
\begin{proof}
If $x$ is in some open cell
$D$ then $X-D \simeq X- \{x \}$ by
Lemma~\ref{def-retract} together with Corollary~\ref{main-retract}.
So applying the same argument to $x' \in D$ and using the appropriate
long exact sequences,
\[
H_{*}(X, X-\{x \}) \cong H_{*}(X,X-D) \cong H_{*}(X,X-\{x' \})
\]
\end{proof}
\begin{lemma}\label{singularity-level}
If $x \in S_{n}$ for $n<d-1$ then $x$ must be in the interior of a cell of
dimension $n$ or lower.
\end{lemma}
Note that this fact depends on $X$ being pure. For example, if we take $X$ to be the union of a two cell
and a one cell at a vertex, then points in the interior of the one
cell will be in $S_{0}$. Geometrically, $x \in S_{n}$ says that if
we take a contractible neighborhood of $x$ and remove $x$ from that
neighborhood then the resulting set is no longer $n$-connected.
\begin{proof}
Recall $\st^{k}(\sigma)$ is the union of $\sigma$ and the open cells of dimension
$k$ and lower which are contained in $\st (\sigma)$. This is the same
as $\st (\sigma)$ within the space $X^{(k)}$ if $\sigma$ is a cell of
dimension less than or equal to $k$.
Suppose $x$ is in the interior of a cell of dimension $k<d$. We have a
commutative square of spaces
\[
\CD
X^{(k+1)}- \{x \} @>>> X^{(k+1)}-\st^{(k+1)} (\sigma (x))\\
@VVV @VVV\\
X - \{x \} @>>> X - \st (\sigma (x)).
\endCD
\]
The horizontal maps are homotopy equivalences by
Lemma~\ref{def-retract}. The spaces on the right are subcomplexes of
$X$ and the right hand vertical map is inclusion of
the $k+1$-skeleton. So by cellular approximation, all maps induce
isomorphisms in $H_{i}$ for $i<k+1$.
From the long exact sequence of a pair, it follows that
\[
H_{i} (X^{(k+1)},X^{(k+1)}- \{x \}) \rightarrow H_{i} (X, X-\{x \})
\]
is an isomorphism for $i\leq k$. Now let $U = \st^{(k+1)} (\sigma
(x))$, which is an open neighborhood of $x$ in $X^{(k+1)}$. $U$
consists of the open $k$-cell containing $x$ and any open $k+1$ cells
which have that $k$-cell as a face. So $U$ looks like a finite
collection of $k+1$-cells identified along part of their boundary, and
$x$ is in that part of the common boundary. It follows that $U-\{x
\}$ is homotopy equivalent to a wedge of $k$-spheres (one fewer than
the number of $k+1$-cells attached to $\sigma (x)$).
So
\[
H_{i} (X^{(k+1)},X^{(k+1)}- \{x \}) = H_{i} (U,U-\{x \})
\]
is $0$ for $i<k+1$ (and is free abelian on one fewer generator
than the number of $k+1$-cells attached to $\sigma (x)$ for $i=k+1$).
It follows that $x$ is not in $S_{n}$ for $n<k$.
\end{proof}
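For example, if exactly two $k+1$-cells are attached to $\sigma (x)$,
then $U-\{x \}$ is homotopy equivalent to a single $k$-sphere and,
since the open star $U$ is contractible,
\[
H_{k+1} (X^{(k+1)},X^{(k+1)}- \{x \}) \cong H_{k+1} (U,U-\{x \}) \cong
\tilde{H}_{k} (S^{k}) \cong \mathbf{Z},
\]
free abelian on one fewer generator than the number of attached
$k+1$-cells, as stated in the proof.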
See the appendix for examples where $S_{n}$ contains the interiors of
cells of dimension strictly smaller than $n$.
\subsection{The implications of connectivity by codimension one faces}
We have already assumed the global topological condition: $X$ is pure. Our final goal is to understand the effect of the extra global topological condition: connected by codimension one faces. Under that condition we can prove a remarkable strengthening of Lemma \ref{singularity-level} (see Proposition \ref{singularity-is-very-singular} and its Corollary). We require one technical lemma.
\begin{lemma}\label{boundary-subset-lemma} Let $X$ be a pure regular
cell complex of dimension $d$. Let $n<d$. Suppose $S_{0} = \dotsb =
S_{n-1} = \emptyset,$ $\tilde{H}_{k} (X) = 0$ for $k<d$, and $D$ is an open
$n$-cell contained in $S_{n}$ with $S_{n} \cap \partial D = \emptyset$. Let $Y=X-D$.
Let $A \subseteq \partial D$ be a subspace homeomorphic to $D^{i}$
(the closed $i$ disk) and
also a subcomplex of $\partial D$ under some cell structure on
$\partial D$ which subdivides the given cell structure.
Then $\tilde{H}_{j}(Y-A) = 0$ for $j<n+1-i$.
\end{lemma}
\begin{proof}
The proof is by a double induction with the outer induction on $i$ and
the inner induction on the number of $i$-cells in $A$,
which we'll denote by $r$.
Let $i$ be $0$. Note that $r=1$ by our hypothesis
that $A \cong D^{0}.$ Then by Corollary~\ref{point-retract}, $Y-A
= X- (A\cup D) \simeq
X-A$. Since $A$ is a single point in $\partial D$, and by
hypothesis that point isn't in $S_{0}\cup \dotsb \cup S_{n}$, we have
$\tilde{H}_{j}(X-A) = 0$ for $j<n+1$ as desired.
Now suppose the lemma is established for $i-1 \geq 0$. Consider first
the following special case. Subdivide the cell complex structure on
$\partial D$ so that it is a simplicial complex. Then take the first
barycentric subdivision of that simplicial complex. Let $A$ be the
closure of an $i$-cell in that complex, so $A \cong D^{i}.$
Let $v$ be the vertex that $A$ shares with the $i$-simplex (before
subdivision) that $A$ is part of.
By Proposition~\ref{subdivision-retract}, $Y-A \simeq Y- \{v \}$
which is in turn homotopy equivalent to $X - \{v \}$ by the previous
case.
So $\tilde{H}_{j}(Y-A) = 0$ for $j<n+1$.
Now let $A$ be as in the hypotheses, with the additional assumption
that it is a subcomplex of a barycentric subdivision of a simplicial
subdivision of $\partial D$, as in the special case above. Suppose
$A$ has $r+1$ cells, and that the lemma is true in the case of $r$
cells. Write $A = A' \cup A''$ where $A'$ is a single cell, and $A''$
has $r$ cells, and $A' \cap A''$ is homeomorphic to $D^{i-1}$.
We look at the Mayer-Vietoris sequence for
\[
Y- (A' \cap A'') = (Y-A') \cup (Y-A'')
\]
which gives
\begin{multline*}
H_{j+1} (Y-A')\oplus H_{j+1} (Y-A'') \rightarrow H_{j+1} (Y- (A'\cap
A'')) \rightarrow H_{j} (Y-A)\\
\rightarrow H_{j} (Y-A') \oplus H_{j} (Y-A'').
\end{multline*}
By our two inductive hypotheses, (on $i-1$ and $r$), $H_{j+1} (Y-
(A'\cap A'')) = 0$ for $j+1<n+1- (i-1)$ (or $j<n+1-i$) and $H_{j}
(Y-A'') = H_{j} (Y-A') = 0$ for $j<n+1-i$.
It follows that $H_{j} (Y-A) = 0$ for $j<n+1-i$ as we want. By
induction on $r$ this holds for any $A$ which is an appropriate
subcomplex of our subdivision (of the cell structure on $\partial D$).
Finally, if $A \subseteq \partial D$ is any appropriate subcomplex of
a subdivision of $\partial D$ so that $A \cong D^{i}$, then $A$ is also an
appropriate subcomplex of a finer subdivision of $\partial D$ which is
itself a barycentric subdivision of a simplicial complex. So our
special case covers this subcomplex $A$ of $\partial D$.
\end{proof}
\begin{proposition}\label{singularity-is-very-singular}
Let $X$ be a complex as above. In addition assume that $\tilde{H}_{i} (X) =
0$ for $i<d$, and that $X$ is connected through codimension one
faces. If there is an $n<d-1$ so that $S_{n} \ne \emptyset$, then
there is some point in some such $S_{n}$ which is in an open cell of
dimension smaller than $n$.
\end{proposition}
\begin{proof}
Let $n$ be minimal so that $S_{n} \ne \emptyset$. If there is no such
$n$, or if $n\geq d-1$, we're done. So assume $n<d-1$. By Lemmas~\ref{singular-set-union-of-cells} and \ref{singularity-level} $S_n$ must contain an open cell $D$ of dimension at most $n$. If $D$ has
dimension less than $n$, then we are done. So assume $D$ has dimension $n$.
Let $Y = X-D$. From the hypothesis $\tilde H_k(X) = 0$ for $k<d$ we get
$H_{n}(Y) = H_{n+1}(X,Y) \ne 0$.
We wish to prove that $S_{n}\cap \partial D \ne \emptyset.$
Choose a sequence of subsets $A^{i}$, $B^{i}$, $i = 0,\dotsc ,n-1$
subcomplexes of $\partial D$ (or of some subdivision) so that
\begin{enumerate}
\item $A^{n-1} \cup B^{n-1} = \partial D \cong S^{n-1}$
\item $A^{i} \cup B^{i} \cong S^{i}$
\item $A^{i} \cap B^{i} = A^{i-1}\cup B^{i-1}$.
\end{enumerate}
Notice that $A^0$ and $B^0$ are distinct singleton sets.
Assume $S_{n} \cap \partial D = \emptyset$. Consider
\begin{equation}\label{level-zero}
Y = ( Y - A^{0}) \cup (Y-B^{0}).
\end{equation}
The space $Y- A^{0} \simeq X- A^{0}$ by Corollary~\ref{point-retract}, so
since the point of $A^{0}$ is not in $S_{0} \cup \dotsb \cup S_{n}$,
we get $\tilde{H}_{j} (Y-A^{0}) = 0$ for $j\leq n$, and of course the
same result for $\tilde{H}_{j} (Y-B^{0})$.
Then in the Mayer-Vietoris sequence for (\ref{level-zero}) we get
\[
H_{n} (Y) \cong H_{n-1} (Y- (A^{0}\cup B^{0})).
\]
We do a similar analysis for
\begin{equation}\label{level-one}
Y- (A^{0}\cup B^{0}) = ( Y-A^{1}) \cup (Y-B^{1}).
\end{equation}
We have $\tilde{H}_{j} (Y-A^{1}) = 0$ for $j<n+1-1 = n$ by
Lemma~\ref{boundary-subset-lemma}.
Then in the Mayer-Vietoris sequence for (\ref{level-one}) we get
\[
H_{n-1} (Y- (A^{0}\cup B^{0})) \cong H_{n-2} (Y- (A^{1}\cup B^{1})).
\]
Similarly we have
\begin{equation}\label{level-k}
Y- (A^{k-1}\cup B^{k-1}) = (Y-A^{k})\cup (Y-B^{k}).
\end{equation}
Lemma~\ref{boundary-subset-lemma} tells us that $\tilde{H}_{j}
(Y-A^{k}) = 0$ for $j<n+1-k$ (and a similar result for $B^{k}$).
So by the Mayer-Vietoris sequence for (\ref{level-k}) we get
\[
\tilde{H}_{n-k} (Y- (A^{k-1}\cup B^{k-1})) \cong \tilde{H}_{n-k-1} (Y-
(A^{k}\cup B^{k})).
\]
Assembling this information, we get
\[
0 \ne H_{n} (Y) = \tilde{H}_{0} (Y- (A^{n-1}\cup B^{n-1})) =
\tilde{H}_{0} (X-\overline{D}).
\]
The hypothesis that $X$ is connected through codimension one faces
tells us that $\tilde{H}_{0} (X-\overline{D}) = 0$ unless (possibly)
$D$ has codimension $1$.
But $D$ was assumed to have dimension $n< d-1$, so we have a
contradiction to our assumption that $S_{n} \cap \partial D = \emptyset$.
\end{proof}
\begin{corollary}\label{non-singular}
Suppose $X$ is a pure regular cell complex of dimension $d$, connected
through codimension one faces, and with $\tilde{H}_{i} (X) = 0$ for
$i<d$.
Then if for each $0\leq i<d-1$, $S_{i}$ contains no cells of dimension
less than $i$, then for $0 \leq i < d-1$, each $S_{i}$ is empty.
\end{corollary}
% arXiv:0911.2752
\section*{Introduction}
A ring $R$ is defined to be $K_n$-regular, if the map $K_n(R) \to
K_n(R[t_1,\dots,t_r])$ induced by the canonical inclusion is an
isomorphism for all $r \geqslant 0$~\cite[Definition~2.2]{bass1}. It
was proved by Quillen~\cite[Corollary of Theorem~8]{quillen} that a
(left) regular noetherian ring is $K_n$-regular for all integers
$n$. A conjecture of Vorst~\cite[Conjecture]{vorst} predicts that,
conversely, if $R$ is a commutative ring of dimension $d$ essentially
of finite type over a field $k$, then $K_{d+1}$-regularity implies
regularity. Recently, Corti\~{n}as, Haesemeyer, and
Weibel showed that the conjecture holds, if the field $k$ has
characteristic zero~\cite[Theorem~0.1]{cortinashaesemeyerweibel}.
In this paper, we prove the following slightly weaker result, if $k$
is an infinite perfect field of characteristic $p > 0$ and strong
resolution of singularities holds over $k$ in the sense of
Section~\ref{ktheorysection} below.
\begin{bigthm}\label{maintheorem}Let $k$ be an infinite perfect field of
characteristic $p > 0$ such that strong resolution of singularities
holds over $k$. Let $R$ be a localization of a $d$-dimensional
commutative $k$-algebra of finite type and suppose that $R$ is
$K_{d+1}$-regular. Then $R$ is a regular ring.
\end{bigthm}
We also prove a number of results for more general fields of
characteristic $p > 0$. For instance, we show in
Theorem~\ref{maintheoremplus} that, if strong resolution of
singularities holds over all infinite perfect fields of characteristic
$p$, then for every field $k$ that contains an infinite perfect
subfield of characteristic $p$ and every $k$-algebra $R$ essentially
of finite type, $K_q$-regularity for all $q$ implies regularity.
We give a brief outline of the proof of Theorem~\ref{maintheorem}. Let
$\mathfrak{m} \subset R$ be a maximal ideal, and let $d_{\mathfrak{m}}
= \dim(R_{\mathfrak{m}})$ and $e_{\mathfrak{m}} =
\dim_{R/\mathfrak{m}}(\mathfrak{m}/\mathfrak{m}^2)$ be the dimension
and embedding dimension, respectively. One always has
$d_{\mathfrak{m}} \leqslant e_{\mathfrak{m}}$ and the ring $R$ is said
to be regular if $d_{\mathfrak{m}} = e_{\mathfrak{m}}$ for every
maximal ideal $\mathfrak{m} \subset R$. Now, we show in
Theorem~\ref{ktheorem} below that if $R$ is $K_{d+1}$-regular, then the group $K_{d_{\mathfrak{m}}+1}(R_{\mathfrak{m}})/pK_{d_{\mathfrak{m}}+1}(R_{\mathfrak{m}})$
is zero for every maximal ideal $\mathfrak{m}
\subset R$. We further show in Theorem~\ref{hhtheorem} below that for every
maximal ideal $\mathfrak{m} \subset R$, the group
$K_q(R_{\mathfrak{m}})/pK_q(R_{\mathfrak{m}})$ is non-zero for all $0
\leqslant q \leqslant e_{\mathfrak{m}}$. Together the two theorems
show that $d_{\mathfrak{m}} \geqslant e_{\mathfrak{m}}$ as
desired. Theorem~\ref{maintheorem} follows.
\section{$K$-theory}\label{ktheorysection}
In this section, we prove Theorem~\ref{ktheorem} below. We say that strong
resolution of singularities holds over the (necessarily perfect) field
$k$ if for every integral scheme $X$ separated and of finite type
over $k$, there exists a sequence of blow-ups
$$X_n \to X_{n-1} \to \dots \to X_1 \to X_0 = X$$
such that the reduced scheme $X_n^{\operatorname{red}}$ is smooth over
$k$; the center $Y_i$ of the blow-up $X_{i+1} \to X_i$ is connected
and smooth over $k$; the closed embedding of $Y_i$ in $X_i$ is
normally flat; and $Y_i$ is nowhere dense in $X_i$.
\begin{prop}\label{KHproposition}Let $k$ be an infinite perfect field
of characteristic $p > 0$ and assume that strong resolution of
singularities holds over $k$. Let $X$ be the limit of a cofiltered
diagram $\{ X_i \}$ with affine transition maps of $d$-dimensional
schemes separated and of finite type over $k$. Then $KH_q(X,\mathbb{Z}/p\mathbb{Z})$
vanishes, for $q > d$.
\end{prop}
\begin{proof}It follows from~\cite[Sect.~IV.8.5]{ega} that for all
integers $q$, the canonical map
$$\operatornamewithlimits{colim}_i K_q(X_i,\mathbb{Z}/p\mathbb{Z}) \to K_q(X,\mathbb{Z}/p\mathbb{Z})$$
is an isomorphism. Therefore, using the natural spectral sequence
$$E_{s,t}^1 = N_sK_t(U,\mathbb{Z}/p\mathbb{Z}) \Rightarrow KH_{s+t}(U,\mathbb{Z}/p\mathbb{Z}),$$
we conclude that for all integers $q$, the canonical map
$$\operatornamewithlimits{colim}_i KH_q(X_i,\mathbb{Z}/p\mathbb{Z}) \to KH_q(X,\mathbb{Z}/p\mathbb{Z})$$
is an isomorphism. Hence, we may assume that $X$ itself is a
$d$-dimensional scheme separated and of finite type over
$k$. In fact, we may even assume that $X$ is integral. Indeed, it
follows from~\cite[Theorem~2.3]{weibel4} that for all integers $q$, the
canonical map
$$KH_q(X,\mathbb{Z}/p\mathbb{Z}) \to KH_q(X^{\operatorname{red}},\mathbb{Z}/p\mathbb{Z})$$
is an isomorphism, so we may assume that $X$ is reduced. Moreover, if
$X_1 \subset X$ is an irreducible component and $X_2 \subset X$ the
closure of $X \smallsetminus X_1$, then $X_{12} = X_1 \cap X_2$ has
smaller dimension than $X$ and by~\cite[Corollary~4.10]{weibel4} there
is a long exact sequence
$$\cdots \to
KH_q(X,\mathbb{Z}/p\mathbb{Z}) \to
KH_q(X_1,\mathbb{Z}/p\mathbb{Z}) \oplus KH_q(X_2,\mathbb{Z}/p\mathbb{Z}) \to
KH_q(X_{12},\mathbb{Z}/p\mathbb{Z}) \to \cdots$$
Therefore, a downward induction on the number of irreducible
components shows that we can assume $X$ to be integral. So we let $X$
be integral and proceed by induction on $d \geqslant 0$. In the case
$d = 0$, $X$ is a finite disjoint union of prime spectra of fields
$k_{\alpha}$ with $[k_{\alpha} \colon k] < \infty$. It follows that
the canonical maps
$$KH_q(X,\mathbb{Z}/p\mathbb{Z}) \leftarrow
K_q(X,\mathbb{Z}/p\mathbb{Z}) \to
\prod_{\alpha} K_q(k_{\alpha},\mathbb{Z}/p\mathbb{Z})$$
are isomorphisms, and since the fields $k_{\alpha}$ again are perfect
of characteristic $p > 0$, the right-hand group is zero, for $q > 0$
as desired~\cite{kratzer}. So we let $d \geqslant 1$ and assume that
the statement has been proved for smaller $d$. By the assumption that
resolution of singularities holds over $k$, there exists a proper
birational morphism $X' \to X$ from a scheme $X'$ smooth over $k$. We
may further assume that $X'$ is of dimension $d$. We choose a closed
subscheme $Y$ of $X$ that has dimension at most $d-1$ and contains the
singular set of $X$ and consider the cartesian square
$$\xymatrix{
{ Y' } \ar[r] \ar[d] &
{ X' } \ar[d]<-.2ex> \cr
{ Y } \ar[r] &
{ X. } \cr
}$$
Since the field $k$ is assumed to be an infinite perfect field such
that strong resolution of singularities holds over $k$, the proof
of~\cite[Theorem~3.5]{haesemeyer} shows that the cartesian square above
induces a long exact sequence
$$\cdots \to KH_q(X,\mathbb{Z}/p\mathbb{Z}) \to KH_q(X',\mathbb{Z}/p\mathbb{Z}) \oplus KH_q(Y,\mathbb{Z}/p\mathbb{Z})
\to KH_q(Y',\mathbb{Z}/p\mathbb{Z}) \to \cdots$$
Now, the schemes $Y$ and $Y'$ are of dimension at most $d-1$ and are
separated and of finite type over $k$. Therefore, the groups
$KH_q(Y,\mathbb{Z}/p\mathbb{Z})$ and $KH_q(Y',\mathbb{Z}/p\mathbb{Z})$ vanish, for $q > d-1$, by the
inductive hypothesis. Finally, since the scheme $X'$ is smooth over
$k$, the canonical map defines an isomorphism
$$K_q(X',\mathbb{Z}/p\mathbb{Z}) \xrightarrow{\sim} KH_q(X',\mathbb{Z}/p\mathbb{Z}),$$
and by~\cite[Theorem~8.4]{geisserlevine} the common group vanishes for
$q > d$. We conclude from the long exact sequence that
$KH_q(X,\mathbb{Z}/p\mathbb{Z})$ vanishes, for $q > d$, as desired.
\end{proof}
\begin{theorem}\label{ktheorem}Let $k$ be an infinite perfect field of
positive characteristic $p$ such that strong resolution of
singularities holds over $k$. Let $R$ be a localization of a
$d$-dimensional $k$-algebra of finite type and assume that $R$ is
$K_{d+1}$-regular. Then the group $K_{d+1}(R)/pK_{d+1}(R)$ is zero.
\end{theorem}
\begin{proof}Since we assume $R$ is $K_{d+1}$-regular, a theorem of
Vorst~\cite[Corollary~2.1]{vorst} shows that $R$ is $K_q$-regular for all
$q \leqslant d+1$, or equivalently, that the groups $N_sK_q(R)$
vanish for all $s > 0$ and $q \leqslant d+1$. The coefficient exact
sequence
$$0 \to N_sK_q(R)/pN_sK_q(R) \to N_sK_q(R,\mathbb{Z}/p\mathbb{Z}) \to
\operatorname{Tor}_1^{\mathbb{Z}}(N_sK_{q-1}(R),\mathbb{Z}/p\mathbb{Z}) \to 0$$
then shows that the groups $N_sK_q(R,\mathbb{Z}/p\mathbb{Z})$ vanish for $s > 0$ and
$q \leqslant d+1$. Therefore, we conclude from the spectral sequence
$$E_{s,t}^1 = N_sK_t(R,\mathbb{Z}/p\mathbb{Z}) \Rightarrow KH_{s+t}(R,\mathbb{Z}/p\mathbb{Z})$$
that the canonical map
$$K_q(R,\mathbb{Z}/p\mathbb{Z}) \to KH_q(R,\mathbb{Z}/p\mathbb{Z})$$
is an isomorphism for $q \leqslant d+1$. Now, for $q = d+1$,
Proposition~\ref{KHproposition} shows that the common group is zero, and
hence, the coefficient sequence
$$0 \to K_{d+1}(R)/pK_{d+1}(R) \to K_{d+1}(R,\mathbb{Z}/p\mathbb{Z}) \to
\operatorname{Tor}(K_d(R),\mathbb{Z}/p\mathbb{Z}) \to 0$$
shows that the group $K_{d+1}(R)/pK_{d+1}(R)$ is zero as stated.
\end{proof}
\section{Hochschild homology}\label{hochschildhomologysection}
In this section, we prove the following general result.
\begin{theorem}\label{hhtheorem}Let $\kappa$ be a commutative ring,
let $r$ be a positive integer, and let $A$ be the $\kappa$-algebra
$A = \kappa[x_1,\dots,x_r]/(x_ix_j \mid 1 \leqslant i \leqslant j
\leqslant r)$. Then, for all $1 \leqslant q \leqslant r$, the image of
the symbol $\{1+x_1, \dots, 1+x_q\}$ under the composition
$$K_q(A) \to \operatorname{HH}_q(A) \to \operatorname{HH}_q(A/\kappa)$$
of the Dennis trace map and the canonical map from absolute Hochschild
homology to Hochschild homology relative to the ground ring $\kappa$
is non-trivial.
\end{theorem}
To prove Theorem~\ref{hhtheorem}, we first evaluate the groups
$\operatorname{HH}_*(A/\kappa)$ that are the target of the map of the statement. By
definition, these are the homology groups of the chain complex
associated with the cyclic $\kappa$-module $\operatorname{HH}(A/\kappa)[-]$ defined
by
$$\operatorname{HH}(A/\kappa)[n] = A \otimes_{\kappa} \dots \otimes_{\kappa} A \hskip6mm
\text{($n+1$ factors)}$$
with cyclic structure maps
$$\begin{aligned}
d_i & \colon \operatorname{HH}(A/\kappa)[n] \to \operatorname{HH}(A/\kappa)[n-1] \hskip6mm
(0 \leqslant i \leqslant n) \cr
s_i & \colon \operatorname{HH}(A/\kappa)[n] \to \operatorname{HH}(A/\kappa)[n+1] \hskip6mm
(0 \leqslant i \leqslant n) \cr
t_n & \colon \operatorname{HH}(A/\kappa)[n] \to \operatorname{HH}(A/\kappa)[n] \cr
\end{aligned}$$
defined by
$$\begin{aligned}
d_i(a_0 \otimes \dots \otimes a_n) & = \begin{cases}
a_0 \otimes \dots \otimes a_ia_{i+1} \otimes \dots \otimes a_n & \hskip5mm
(0 \leqslant i < n) \cr
a_na_0 \otimes a_1 \otimes \dots \otimes a_{n-1} & \hskip5mm
(i = n) \cr
\end{cases} \cr
s_i(a_0 \otimes \dots \otimes a_n) & = a_0 \otimes \dots \otimes a_i \otimes 1 \otimes a_{i+1} \otimes \dots \otimes a_n \cr
t_n(a_0 \otimes \dots \otimes a_n) & = a_n \otimes a_0 \otimes a_1 \otimes \dots \otimes a_{n-1}. \cr
\end{aligned}$$
The cyclic $\kappa$-module $\operatorname{HH}(A/\kappa)[-]$ admits a direct sum
decomposition as follows. Recall that a word of length
$m$ with letters in a set $S$ is defined to be a function
$$\omega \colon \{1,2, \dots, m\} \to S.$$
The cyclic group $C_m$ of order $m$ acts on the set $\{1, 2, \dots,
m\}$ by cyclic permutation of the elements. We define a cyclical word
of length $m$ with letters in $S$ to be an orbit for the induced
action on the set of words of length $m$ with letters in $S$. We write
$[\omega]$ for the orbit through $\omega$ and call the length of the
orbit the period of $[\omega]$. In particular, the set that consists
of the empty word is a cyclical word $[0]$ of length $0$ and period
$1$. Then the cyclic $\kappa$-module $\operatorname{HH}(A/\kappa)[-]$ decomposes as
the direct sum
$$\operatorname{HH}(A/\kappa)[-] = \bigoplus_{[\omega]} \operatorname{HH}(A/\kappa;[\omega])[-],$$
where the direct sum ranges over all cyclical words with letters in
$\{x_1,\dots,x_r\}$, where the summand $\operatorname{HH}(A/\kappa;[0])[-]$ is the
sub-cyclic $\kappa$-module generated by the $0$-simplex $1$, and where
the summand $\operatorname{HH}(A/\kappa;[\omega])[-]$ with $\omega =
(x_{i_1},\dots,x_{i_m})$, $m \geqslant 1$, is the sub-cyclic
$\kappa$-module generated by the $(m-1)$-simplex $x_{i_1} \otimes
\dots \otimes x_{i_{m}}$.
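For example, for $r = 2$ the four words of length $2$ fall into three
cyclical words,
$$[(x_1,x_1)], \hskip6mm [(x_2,x_2)], \hskip6mm [(x_1,x_2)] = [(x_2,x_1)],$$
of period $1$, $1$, and $2$, respectively; the corresponding summands
are generated by the $1$-simplices $x_1 \otimes x_1$, $x_2 \otimes x_2$,
and $x_1 \otimes x_2$.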
\begin{lemma}\label{summandhomology}Let $\kappa$ be a commutative ring,
let $r$ be a positive integer, and let $A$ be the $\kappa$-algebra
$A = \kappa[x_1,\dots,x_r]/(x_ix_j \mid 1 \leqslant i \leqslant j
\leqslant r)$. Let $\omega = (x_{i_1},\dots,x_{i_m})$ be a word with
letters in $\{x_1,\dots,x_r\}$ of length $m \geqslant 0$ and period
$\ell \geqslant 1$.
\begin{enumerate}
\item[(1)] If $m = 0$, then $\operatorname{HH}_0(A/\kappa;[\omega])$ is the free
$\kappa$-module of rank one generated by the class of the cycle $1$
and the remaining homology groups are zero.
\item[(2)] If $m$ is odd or $\ell$ is even, then
$\operatorname{HH}_{m-1}(A/\kappa;[\omega])$ and $\operatorname{HH}_m(A/\kappa;[\omega])$ are
free $\kappa$-modules of rank one generated by the classes of the
cycles $x_{i_1} \otimes \dots \otimes x_{i_m}$ and
$\sum_{0 \leqslant u < \ell}
(-1)^{(m-1)u}t_ms_{m-1}t_{m-1}^u(x_{i_1} \otimes \dots \otimes
x_{i_m})$, respectively, and the remaining homology groups are zero.
\item[(3)] If $m \geqslant 2$ is even and $\ell$ is odd, then
$\operatorname{HH}_{m-1}(A/\kappa;[\omega])$ is isomorphic to $\kappa/2\kappa$
generated by the class of the cycle $x_{i_1} \otimes \dots \otimes
x_{i_m}$, there is an isomorphism of the $2$-torsion
sub-$\kappa$-module $\kappa[2] \subset \kappa$ onto
$\operatorname{HH}_m(A/\kappa;[\omega])$ that takes $a \in \kappa[2]$ to the class
of the cycle $a \cdot \sum_{0 \leqslant u < \ell}
(-1)^{mu}t_ms_{m-1}t_{m-1}^u(x_{i_1} \otimes \dots \otimes
x_{i_m})$, and the remaining homology groups are zero.
\end{enumerate}
\end{lemma}
\begin{proof}Let $D_*$ be the chain complex given by the quotient of the chain complex associated with the simplicial $\kappa$-module $\operatorname{HH}(A/\kappa;[\omega])[-]$ by the subcomplex of degenerate simplices. We recall that the canonical projection induces an isomorphism of $\operatorname{HH}_q(A/\kappa;[\omega])$ onto $H_q(D_*)$; see for example~\cite[Theorem~8.3.8]{weibel1}. We evaluate the chain complex $D_*$ in the three cases~(1)--(3).
First, in the case~(1), $D_0$ is the free $\kappa$-module generated by $1$ and $D_q$ is zero, for $q > 0$. This proves statement~(1).
Next, in the case~(2), let $C_{\ell}$ be the cyclic group of order $\ell$, and let $\tau$ be a generator. We define $D_*'$ to be the chain complex with $D_q'= \kappa[C_{\ell}]$, if $q = m-1$ or $q = m$, and zero, otherwise, and with differential $d' \colon D_m' \to D_{m-1}'$ given by multiplication by $1- \tau$. Then the map $\alpha \colon D_*' \to D_*$ defined by
$$\begin{aligned}
\alpha_{m-1}(\tau^u) & =
(-1)^{(m-1)u} t_{m-1}^u(x_{i_1} \otimes \dots \otimes x_{i_m}) \cr
\alpha_m(\tau^u) & =
(-1)^{(m-1)u}t_ms_{m-1}t_{m-1}^u(x_{i_1} \otimes \dots \otimes x_{i_m}) \cr
\end{aligned}$$
is an isomorphism of chain complexes, since $(m-1)\ell$ is even. Now, the homology groups
$H_{m-1}(D_*')$ and $H_m(D_*')$ are free $\kappa$-modules of rank $1$
generated by the class of $1$ and the norm element $N = 1 + \tau +
\dots + \tau^{\ell-1}$, respectively. This proves the statement~(2).
Finally, in the case~(3), let $C_{\ell}$ be the cyclic group of order
$\ell$, and let $\tau$ be a generator.
We define $D_*''$ to be the chain complex with $D_q''=
\kappa[C_{\ell}]$, if $q = m-1$ or $q = m$, and zero, otherwise, and
with differential $d'' \colon D_m'' \to D_{m-1}''$ given by
multiplication by $1 + \tau$. Then the map
$\beta \colon D_*'' \to D_*$ defined by
$$\begin{aligned}
\beta_{m-1}(\tau^u) & =
(-1)^{mu}t_{m-1}^u(x_{i_1} \otimes \dots \otimes x_{i_m}) \cr
\beta_m(\tau^u) & =
(-1)^{mu}t_ms_{m-1}t_{m-1}^u(x_{i_1} \otimes \dots \otimes x_{i_m}) \cr
\end{aligned}$$
is an isomorphism of chain complexes, since $m$ is even. Hence, to
prove statement~(3), it suffices to show that the following sequence
of $\kappa$-modules is exact.
$$0 \to \kappa[2] \xrightarrow{N} \kappa[C_{\ell}] \xrightarrow{1+\tau}
\kappa[C_{\ell}] \xrightarrow{\bar{\epsilon}} \kappa/2\kappa \to 0.$$
To this end, we consider the following commutative diagram with exact
rows.
$$\xymatrix{
{ 0 } \ar[r] &
{ I[C_{\ell}] } \ar[r] \ar[d]^{1 + \tau} &
{ \kappa[C_{\ell}] } \ar[r]^(.55){\epsilon} \ar[d]^{1 + \tau} &
{ \kappa } \ar[r] \ar[d]^{2} &
{ 0 } \cr
{ 0 } \ar[r] &
{ I[C_{\ell}] } \ar[r] &
{ \kappa[C_{\ell}] } \ar[r]^(.55){\epsilon} &
{ \kappa } \ar[r] &
{ 0 } \cr
}$$
The augmentation ideal $I[C_{\ell}]$ is equal to the
sub-$\kappa[C_{\ell}]$-module generated by $1-\tau$. Since $\ell$ is odd,
$\tau^2$ is a generator of $C_{\ell}$, and hence, $1-\tau^2 =
(1+\tau)(1-\tau)$ is a generator of $I[C_{\ell}]$. This shows that the
left-hand vertical map $1+\tau$ is an isomorphism. Finally, the
following diagram commutes.
$$\xymatrix{
{ \kappa[2] } \ar@{=}[r] \ar[d]^{N} &
{ \kappa[2] } \ar@{^{(}->}[d] \cr
{ \kappa[C_{\ell}] } \ar[r]^{\epsilon} &
{ \kappa } \cr
}$$
Indeed, $\epsilon \circ N$ is equal to multiplication by $\ell$ which
is congruent to $1$ modulo $2$. This shows that the sequence in
question is exact. Statement~(3) follows.
\end{proof}
\begin{remark}\label{productremark}For $\kappa$ a field of
characteristic zero, the Hochschild homology of the $\kappa$-algebra
$A$ in Lemma~\ref{summandhomology} was first evaluated by
Lindenstrauss~\cite[Theorem~3.1]{lindenstrauss} who also determined
the product structure of the graded $\kappa$-algebra $\operatorname{HH}_*(A/\kappa)$.
\end{remark}
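To illustrate Lemma~\ref{summandhomology}, consider the case $r = 1$,
where $A = \kappa[x]/(x^2)$. For every $m \geqslant 1$ there is a unique
cyclical word of length $m$, namely $[\omega_m]$ with $\omega_m = (x,
\dots, x)$, and its period is $\ell = 1$. If $m$ is odd, then case~(2)
applies to $[\omega_m]$, and if $m$ is even, then case~(3)
applies. Collecting the contributions of $[\omega_q]$ and
$[\omega_{q+1}]$ in degree $q$, we find
$$\operatorname{HH}_q(A/\kappa) \cong \begin{cases}
A & \hskip5mm (q = 0) \cr
\kappa \oplus \kappa/2\kappa & \hskip5mm (\text{$q$ odd}) \cr
\kappa \oplus \kappa[2] & \hskip5mm (\text{$q \geqslant 2$ even}). \cr
\end{cases}$$
In particular, if $2$ is invertible in $\kappa$, then
$\operatorname{HH}_q(A/\kappa)$ is a free $\kappa$-module of rank one for every
$q \geqslant 1$, in agreement with the classical computation for the
dual numbers.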
\begin{proof}[Proof of Theorem~\ref{hhtheorem}]We let $\omega$ be the
word $(x_1,\dots,x_q)$ and consider the following composition of the
map of the statement and the projection onto the summand $[\omega]$.
$$K_q(A) \to
\operatorname{HH}_q(A) \to
\operatorname{HH}_q(A/\kappa) \xrightarrow{\operatorname{pr}_{[\omega]}}
\operatorname{HH}_q(A/\kappa;[\omega])$$
The Dennis trace map is a map of graded rings and takes the symbol
$\{1+x_i\}$ to the Hochschild homology class $d\log (1+x_i)$
represented by the cycle $1 \otimes x_i - x_i \otimes x_i$; see for example~\cite[Corollary~6.4.1]{gh},
\cite[Proposition~2.3.1]{hm4},
and~\cite[Proposition~1.4.5]{h}. Hence,
$\{1+x_1,\dots,1+x_q\}$ is mapped to $d\log(1+x_1)
\dots d\log(1+x_q)$. The product on Hochschild homology is given by
the shuffle product $*$, and moreover,
$$\operatorname{pr}_{[\omega]}(d\log(1+x_1)* \dots * d\log(1+x_q))
= \operatorname{pr}_{[\omega]}((1\otimes x_1)* \dots *(1\otimes x_q))$$
since summands that include a factor $x_i \otimes x_i$ are annihilated
by $\operatorname{pr}_{[\omega]}$. Now,
$$(1 \otimes x_1) * \dots * (1 \otimes x_q) = \sum_{\sigma}
\operatorname{sgn}(\sigma) 1 \otimes x_{\sigma(1)} \otimes
\dots \otimes x_{\sigma(q)},$$
where the sum ranges over all permutations of $\{1,2,\dots,q\}$, and
hence,
$$\operatorname{pr}_{[\omega]}((1 \otimes x_1) * \dots * (1 \otimes x_q)) =
\sum_{\tau} \operatorname{sgn}(\tau) 1 \otimes x_{\tau(1)}
\otimes \dots \otimes x_{\tau(q)},$$
where the sum ranges over all cyclic permutations of
$\{1,2, \dots, q\}$. Since the word $\omega$ has period $q$, either
$m = q$ is odd or $\ell = q$ is even, so Lemma~\ref{summandhomology}~(2)
shows that this class is the generator of
$\operatorname{HH}_q(A/\kappa;[\omega])$. The theorem follows.
\end{proof}
\section{Proof of Theorem~\ref{maintheorem}}
In this section, we prove Theorem~\ref{maintheorem} of the
introduction and a number of generalizations of this result.
\begin{proof}[Proof of Theorem~\ref{maintheorem}]It suffices to show
that for every maximal ideal $\mathfrak{m} \subset R$, the local ring
$R_{\mathfrak{m}}$ is regular. The assumption that $R$ is
$K_{d+1}$-regular implies by~\cite[Theorem~2.1]{vorst1}
and~\cite[Corollary~2.1]{vorst} that the local ring $R_{\mathfrak{m}}$ is
$K_q$-regular for all $q \leqslant d+1$. The local ring
$R_{\mathfrak{m}}$ has dimension $d_{\mathfrak{m}} \leqslant
d$. We first argue that we may assume that $d_{\mathfrak{m}} =
d$. Let $I \subset R$ be the intersection of the minimal prime ideals
$\mathfrak{p}_1,\dots,\mathfrak{p}_n \subset R$ that are not contained
in $\mathfrak{m}$. We claim that $\mathfrak{m} + I = R$. For if not,
the ideal $\mathfrak{m} + I$ would be contained in a maximal ideal of
$R$ which necessarily would be $\mathfrak{m}$. Now, for each
$1\leqslant i \leqslant n$, we choose $y_i \in \mathfrak{p}_i$ with
$y_i \notin \mathfrak{m}$. Then $y = y_1 \dots y_n$ is in $I$, but not
in $\mathfrak{m}$. The claim follows. Now, by the Chinese remainder
theorem, there exists $r \in R$ such that $r \equiv 1 \mod
\mathfrak{m}$ and $r \equiv 0 \mod I$. We define $R' = R[1/r]$ and
$\mathfrak{m}' = \mathfrak{m}R'$. Then $\mathfrak{m}' \subset R'$ is a
maximal ideal, since $R'/\mathfrak{m}' = (R/\mathfrak{m})[1/r] =
R/\mathfrak{m}$, and the canonical map $R_{\mathfrak{m}} \to
R_{\mathfrak{m}'}'$ is an isomorphism. Moreover, the $k$-algebra $R'$
is of finite type, and since every minimal prime ideal of $R'$ is
contained in $\mathfrak{m}'$, we have $\dim R' = \dim
R_{\mathfrak{m}'}' = d_{\mathfrak{m}}$. Therefore, we may assume that
$d = d_{\mathfrak{m}}$. Hence, Theorem~\ref{ktheorem} shows that
$$K_{d_{\mathfrak{m}}+1}(R_{\mathfrak{m}})/pK_{d_{\mathfrak{m}}+1}(R_{\mathfrak{m}})
= 0.$$
We choose a minimal set of generators $x_1,\dots,x_r$ of the maximal ideal of
the local ring $R_{\mathfrak{m}}$. Then $r = e_{\mathfrak{m}} \geqslant d_{\mathfrak{m}}$
with equality if and only if $R_{\mathfrak{m}}$ is
regular. By~\cite[Theorem~28.3]{matsumura}, we may choose a $k$-algebra
section of the canonical projection
$R_{\mathfrak{m}}/\mathfrak{m}^2R_{\mathfrak{m}} \to R/\mathfrak{m} =
\kappa$. These choices give rise to a $k$-algebra isomorphism
$$A = \kappa[x_1,\dots,x_r]/(x_ix_j \mid 1 \leqslant i \leqslant j
\leqslant r) \xrightarrow{\sim}
R_{\mathfrak{m}}/\mathfrak{m}^2R_{\mathfrak{m}}.$$
Hence, Theorem~\ref{hhtheorem} shows that for all $1 \leqslant q
\leqslant r$, the symbol
$$\{1+x_1,\dots,1+x_q\} \in
K_q(R_{\mathfrak{m}})/pK_q(R_{\mathfrak{m}})$$
has non-trivial image in $K_q(A)/pK_q(A)$, and therefore, is
non-zero. Since the group
$K_{d_{\mathfrak{m}}+1}(R_{\mathfrak{m}})/pK_{d_{\mathfrak{m}}+1}(R_{\mathfrak{m}})$
is zero, we conclude that $r \leqslant d_{\mathfrak{m}}$ which shows that
$R_{\mathfrak{m}}$ is a regular local ring. This completes the proof.
\end{proof}
\begin{theorem}\label{maintheoremplusr}Let $k$ be a field of
positive characteristic $p$ that is finitely generated over an
infinite perfect subfield $k'$, and assume that strong resolution of
singularities holds over $k'$. Let $R$ be a localization of a
$d$-dimensional commutative $k$-algebra of finite type and suppose
that $R$ is $K_{d+r+1}$-regular where $r$ is the transcendence degree
of $k$ over $k'$. Then $R$ is a regular ring.
\end{theorem}
\begin{proof}We can write $R$ as the localization $f \colon R' \to S^{-1}R' = R$
of a $(d+r)$-dimensional commutative $k'$-algebra $R'$ of finite type
with respect to a multiplicative subset $S \subset R'$. Let $\mathfrak{p}
\subset R$ be a prime ideal. Then, by~\cite[Theorem~2.1]{vorst1}, the local
ring $R_{\mathfrak{p}}$ again is $K_{d+r+1}$-regular. Now, let $\mathfrak{p}' =
f^{-1}(\mathfrak{p}) \subset R'$. Then the map $f$ induces an
isomorphism of $R'_{\mathfrak{p}'}$ onto
$R_{\mathfrak{p}}$. Therefore, we conclude from
Theorem~\ref{maintheorem} that $R_{\mathfrak{p}}$ is a regular
ring. This proves that $R$ is a regular ring as stated.
\end{proof}
\begin{theorem}\label{maintheoremplus}Let $p$ be a prime number and
assume that strong resolution of singularities holds over all infinite
perfect fields of characteristic $p$. Let $k$ be any field that
contains an infinite perfect subfield of characteristic $p$, let $R$
be a commutative $k$-algebra essentially of finite type, and assume
that $R$ is $K_q$-regular for all $q$. Then $R$ is a regular ring.
\end{theorem}
\begin{proof}We can write $R$ as a localization of $R' \otimes_{k'}k$
where $k'$ is a finitely generated field that contains an infinite
perfect subfield and where $R'$ is a commutative $k'$-algebra of
finite type. Then we can write $R$ as the filtered colimit
$$R = \operatornamewithlimits{colim}_{\alpha} R' \otimes_{k'}k_{\alpha}$$
where $k_{\alpha}$ runs through the finitely generated extensions of $k'$
contained in $k$. It follows from Theorem~\ref{maintheoremplusr} that
the rings $R' \otimes_{k'}k_{\alpha}$ are all regular. Therefore the
ring $R$ is regular by~\cite[Prop.~IV.5.13.7]{ega}.
\end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
The study of accretion processes is of fundamental astrophysical importance,
being ubiquitous on both the largest and smallest scales, i.e. ranging from
accretion by supermassive black holes in the center of galaxies at one end
to star and planet formation at the other. Here we are interested in
studying accretion phenomena in the presence of a large gravitational
potential such as that observed in X-ray binaries (XRBs) and active galactic
nuclei (AGN). In these sources, the fundamental timescales of interest,
governing the accretion process, scale with the mass of the central object
($\rm \propto M_x$). Hence, studies of the Galactic XRB population provide
an excellent laboratory for detailed examination of the process of accretion
on humanly accessible timescales.
The very-high, high-soft and low-hard states (hereafter VHS, HSS \& LHS) are
the primary active accretion states observed in XRBs (see the review by
\citealt{I2} for a detailed description of accretion states in black hole
binaries). Although the LHS is the most common mode of accretion in black
hole X-ray binaries, the nature of its accretion flow remains
uncertain. Emission in the LHS is characterized by a hard power-law
spectrum ($\Gamma \sim$ 1.4 -- 1.7) and strong X-ray variability (30\% --
40\% RMS). Correlated variability at radio/NIR/optical wavelengths has also
been observed from the black hole systems while in the hard state
\citep{a15}. The hard power-law component may be the result of
Comptonization by a hot optically thin plasma of soft seed photons from a
thermal disc or from magnetic structures (through cyclo-synchrotron processes --
see, e.g., \citealt{a35}). However, the geometry of this plasma is not well
understood. A popular model for the accretion geometry in the low-hard state
was given by \citet{a36}. In that model the standard thin accretion disc,
that dominates in the spectrally soft states is radially truncated and
replaced by an advection dominated accretion flow (ADAF: see
\citealt{a16}). The fundamental assumption of a radially recessed accretion
disc may find some support in the low disc reflection fractions which are
sometimes measured in the hard state (e.g., \citealt{a37}).
A number of models have been developed which do not require a
recessed disc in the low-hard state. \citet{a38} proposed that black hole
states are driven by the height and bulk velocity of magnetic flares above a
disc, which remains at the innermost stable circular orbit (ISCO). These
flares would serve to feed a mildly relativistically outflowing corona. Low
disc reflection fractions do not signal a recessed disc in this model, but
result from mild beaming of the hard X-ray flux away from the disc. A
similar model for the LHS, based on magnetically dominated coronae, has been
proposed by \citet{a39}. These outflowing coronae share some properties with
jet based models for the hard component \citep{a40}, in the sense that the
latter also produce low reflection fractions \citep{a25} without the need
for a recessed accretion disc.
A number of recent observations call into question the assumption of a
recessed disc in the LHS. In particular GX 339-4 was observed during its
2004 outburst at a luminosity of $\rm L \sim 0.05~L_{Edd}$ by both {\it
RXTE} \& {\it XMM} \citep{a20}. The observations by {\it XMM} are critical
here as they provide coverage at energies below 3 keV whereas {\it RXTE} is
limited to energies above this. A cool disc blackbody component
($\sim$ 0.35 keV), consistent with an optically thick, geometrically thin
accretion disc extending to the ISCO, was found to be present, in contrast
to theoretical expectations. Fits with reflection
models revealed reflection fractions $\sim$ 0.2 -- 0.3. Previous
observations of Swift J1753.5-0127 yielded similar results for the cool disc
component; in this case a disc temperature of $\sim$ 0.2 keV was required
\citep{a18}; however, no significant disc reflection features were detected.
\begin{figure}
\begin{center}
\includegraphics[height=0.33\textheight,width=0.29\textwidth,angle=-90]{fig1_colour.eps}
\caption{{\it Swift} BAT hard X-ray lightcurve for \swt~from May 2005 to
November 2008. The times of the {\it Integral} \citep{a19}, {\it XMM}
\citep{a18}, {\it RXTE} \citep{a76}, \textit{Swift} (see \S 4.1) and
\textit{Suzaku} (this paper) observations are indicated.}
\label{asmlc}
\end{center}
\end{figure}
\swt~was discovered by the \textit{Swift} burst alert telescope (BAT) at
X-ray and $\gamma$-ray energies on 2005 May 30 (\citealt{a1}). Subsequent
observations with the X-ray telescope (XRT) revealed a hard power-law
spectrum \citep{a2,a3}, while pointed \textit{RXTE} observations detected
0.6 Hz quasi periodic oscillations (QPO; \citealt{a7}). The system was also
detected at UV \citep{a8} and optical wavelengths \citep{a4}. Radio
observations with the \textit{MERLIN} array also detected a variable
counterpart \citep{a5}. Observations at optical wavelengths by
\citet{a69} have also detected a significant modulation with a period of
3.2 hrs, which they identify as a superhump period slightly longer than the
actual orbital period. This would make \swt~the black
hole binary with the shortest known orbital period.
In addition to the observations of \citet{a18} above, a number of other high
energy studies of this source have been published, which we summarize below
(see Fig. \ref{asmlc}). An analysis of \textit{RXTE} observations of the
outburst was reported in \citet{a6,a17}. The X-ray spectrum was found to be
consistent with a power-law ($1.6 \leq \Gamma \leq 1.8$), while
low-frequency quasi-periodic oscillations (QPOs) were also detected (up to
0.9 Hz). \citet{a19} analysed simultaneous \textit{RXTE} \&
\textit{INTEGRAL} data, which were also obtained during the 2005
outburst. The combined spectrum (3 -- 400 keV) could be fit with a model
consisting of thermal Comptonization modified by disc reflection ($\rm kT_0
\sim 0.5~keV, kT_e \sim 150~keV, \tau \sim 1, f \equiv \Omega/2\pi \sim
0.3$). QPOs were also detected here, but at a lower frequency than during
the outburst peak (0.24 Hz).
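As a quick plausibility check on the quoted Comptonization parameters, one can evaluate the standard relativistic Compton $y$-parameter, $y = (4\Theta + 16\Theta^2)(\tau + \tau^2)$ with $\Theta = kT_e/m_ec^2$; the sketch below is an illustrative back-of-the-envelope calculation, not part of the cited analysis:

```python
# Back-of-the-envelope check (illustrative, not from the cited analysis):
# the relativistic Compton y-parameter for the quoted fit values,
# kT_e ~ 150 keV and tau ~ 1.
ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_y(kTe_keV, tau):
    """y = (4*Theta + 16*Theta**2) * (tau + tau**2), Theta = kT_e / m_e c^2."""
    theta = kTe_keV / ME_C2_KEV
    return (4.0 * theta + 16.0 * theta**2) * (tau + tau**2)

y = compton_y(150.0, 1.0)  # of order a few: strong Comptonization
```

A $y$ of order unity or larger indicates that Comptonization substantially reshapes the seed photon spectrum, consistent with the hard power law observed in this state.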
To place meaningful constraints on the accretion flow in the LHS one
requires sensitivity to both the soft X-rays ($<$ 2 keV), in order to detect
the cool accretion disc, and higher energies in order to detect the most
prominent disc reflection features (5 -- 7 keV and 20 -- 30
keV). \textit{Suzaku}, with its large bandpass and low background, is ideally
equipped to carry out these observations. In this paper, we describe
observations undertaken with the \textit{Suzaku} X-ray observatory in 2007,
while \swt~was in the low-hard state. In \S2, we describe our
observation and extraction of source spectra and lightcurves. We proceed to
analyze the data in \S3, where both phenomenological and more physically
motivated models are considered. The broadband spectrum (2 -- 250 keV) is
consistent with a simple power-law model, although there are also
contributions from the accretion disc. In \S4, these results are discussed
in the context of models for the accretion flow in the low-hard state, and
finally our conclusions are presented in \S5.
\section{Observations}
\swt~was observed while still in the low-hard state by \textit{Suzaku}
\citep{a9} from 2007 September 19 20:36 UT until September 22 10:30 UT
(obsid: 402088010, PI: Homan; see Fig. \ref{asmlc}). Data were acquired over a
broad spectral range (0.2 -- 600 keV), with the X-ray imaging spectrometer
(XIS: \citealt{a10}) and the hard X-ray detector (HXD: \citealt{a11,a12}).
The source was observed at the XIS nominal position for total uncorrected
exposure times of $\sim$ 95 ks \& 82 ks respectively.
All data reduction and analysis was carried out within the \textsc{heasoft
6.4.1} environment, which includes \textsc{ftools 6.4, suzaku 8.0} and
\textsc{xspec 12.4.0x}. The latest versions of the relevant \textit{Suzaku}
\textsc{caldb} files were also used.
\subsection{X-ray Imaging Spectrometer}
The XIS is installed at the focal plane of the four X-ray telescopes (XRT:
\citealt{a13}) and currently consists of 3 functioning detectors XIS0, XIS1
and XIS3. XIS0 \& XIS3 are front illuminated and provide coverage over the
energy range 0.4 -- 12 keV, whereas XIS1 is back illuminated in an effort to
provide greater sensitivity at lower energies, 0.2 -- 12 keV.
The XIS has a field of view of $\sim 18' \times 18'$ (1024$^2$ pixels) and
was operated in the 5$\times$5 and 3$\times$3 readout modes. In addition,
the data were taken in 1/4
window mode in an effort to minimize possible photon pile-up, giving a time
resolution of 2s.
As the downloaded data products had been processed via the \textit{Suzaku}
pipeline v2.1.6.15, the data were reprocessed from the raw telemetry files
as recommended in the \textit{Suzaku} data reduction guide (abc guide)
\footnote{http://heasarc.nasa.gov/docs/suzaku/aehp\_data\_analysis.html}.
Standard screening was applied, in particular, we extracted \textit{ASCA}
event grades 0:0, 2:4, 6:6 with the data filtered to be taken outside of the
South Atlantic Anomaly (SAA) and where the earth elevation angle was greater
than 5$^{\circ}$. Good time interval (GTI) events were extracted using
\textsc{xselect}, where the 3x3 and 5x5 observation mode data for each
detector were extracted simultaneously. Science images, spectra and
backgrounds were then extracted from these event files.
Even though the observations were carried out in 1/4 window mode, we
nonetheless suffered from pile-up at the source position. Hence the spectra
were extracted using an annular extraction region extending from 30 to 250
pixels from the source position. This extraction region size was chosen so
as to extract $>$ 99$\%$ of the point source flux. As the outer radius of
this annulus is larger than the window size ($\sim$ 280, 295, 280 pixels
respectively), the effective extraction region is the intersection of the
window and the annulus. The resulting extraction region has an area equal to
62\% of that of the full 250 pixel annulus; hence, we expect to have detected
a commensurate fraction of the total source flux.
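The quoted 62\% effective-area figure can be checked geometrically; a rough sketch (assuming, for illustration only, that the source sits at the centre of a 280-pixel-high window; the exact fraction depends on the true source position within the window) gives a fraction of this order:

```python
import math

# Illustrative geometry check of the quoted 62% effective area: a 30-250
# pixel annulus clipped by the 1/4-window readout band.  The half-height
# below assumes, for illustration only, a source centred in a 280-pixel
# window.
def band_clipped_circle_area(R, h):
    """Area of a disc of radius R intersected with the band |y| <= h."""
    if h >= R:
        return math.pi * R**2
    return 2.0 * (h * math.sqrt(R**2 - h**2) + R**2 * math.asin(h / R))

r_in, r_out, half_height = 30.0, 250.0, 140.0  # pixels (assumed geometry)
annulus_area = math.pi * (r_out**2 - r_in**2)
# inner disc (r < 30 px) lies wholly inside the band, so subtract it whole
clipped_area = band_clipped_circle_area(r_out, half_height) - math.pi * r_in**2
fraction = clipped_area / annulus_area  # ~0.6-0.7 for these assumed numbers
```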
Background spectra were extracted from a source free region of the detector,
these are automatically scaled to match the data during the spectral
analysis. Response files were generated using the tasks \textsc{xisrmfgen}
and \textsc{xissimarfgen}. The background and response files were then
grouped with the science spectrum for analysis in \textsc{xspec}.
\begin{figure}
\begin{center}
\includegraphics[height=0.33\textheight,angle=-90]{fig2_colour.eps}
\caption{Background subtracted XIS1, PIN and GSO spectra and their
associated background spectra. For clarity, we plot only a
single XIS detector. \swt~is detected by the PIN detector
out to 60 keV, while the GSO detects flux to $\sim$ 250 keV assuming the
background is reproducible to an accuracy of 3\% (see text).}
\label{xispingso_back}
\end{center}
\end{figure}
\subsection{Hard X-Ray Detector}
The HXD covers the energy range from 10 -- 600 keV, consisting of two
separate detectors, (i) PIN: Silicon PIN photodiodes covering the energy
range 10 -- 70 keV, and (ii) GSO: GSO/BGO phoswich scintillators covering
the energy range 40 -- 600 keV. Due to the arrangement of the instrument,
with the PIN diodes residing in front of the GSO scintillator in each of the
16 detector units that make up the HXD, the raw data do not differentiate
between the PIN \& GSO, with this distinction instead being made during the
extraction process.
As at the time of writing, the GSO analysis procedure had not been included in
the official \textit{Suzaku} data reduction pipeline, all of the hard X-ray
detector data were reprocessed following the prescription in the abc guide
and the 7-step
guide\footnote{astro.isas.ac.jp/suzaku/analysis/7step\_HXD\_20080114.txt}.
GTI science events were extracted using \textsc{xselect} with the
appropriate GTI files and filter options i.e., pointing elevation $>$
5$^{\circ}$ above earth and excluding data taken near the SAA. The relevant
events for each detector were then extracted by requiring DET\_TYPE=1:1 and
DET\_TYPE=0:0 for the PIN and GSO respectively.
The relevant background and response (pin/gso *20080129.rsp) files were
obtained from the \textit{Suzaku} website. A GTI between the data and the
background file was then created using the ftool \textsc{mgtime}, which was
then used to extract the spectra in \textsc{xselect}. Standard
corrections were applied to the data, i.e. deadtime for the science
data and the PIN background exposure time. As the HXD background files do not
include the contribution from the cosmic X-ray background (CXB), the
expected CXB was simulated following the recipe detailed in the abc guide;
this was then added to the background file. As a check on the accuracy of
the background, a separate background estimate was made using the earth
occulted data (earth elevation = 0). This was found to be consistent with
the above background.
A similar procedure is followed in the case of the GSO spectral extraction
with the following caveat: the background file exposure time does not need
to be corrected, instead one must rebin the science spectrum to match the
provided background. The sensitivity of the GSO detector is background
dominated, hence the high energy detection threshold is determined not by
the statistical error but by the reproducibility of the background, here we
conservatively estimate the background reproducibility to be 3\% (e.g.,
\citealt{a53}). In Fig. \ref{xispingso_back}, we plot the extracted science
\& background spectra. \swt~is detected out to an energy of 250 keV.
\section{Analysis \& Results}
\subsection{Light Curves}
Source and background lightcurves were extracted from the event files using
\textsc{xselect} and the appropriate good-time-interval events, after
application of the barycentric correction using the ftool
\textsc{aebarycen}. The flux from \swt~is observed to remain constant
throughout our observation. The mean count rates for each individual
detector are approximately: 16.2, 20.8, 17.1, 2.6, 2.2 counts
s$^{-1}$ (XIS0, XIS1, XIS3, PIN, GSO).
There is no evidence for any periodic modulation; the lightcurves across all
detectors are characterized by rapid variability of a stochastic nature, as
is expected from accretion processes in the vicinity of a black hole
\citep{a50}. This is consistent with \textit{RXTE} power spectra bracketing
these data, which revealed a power-law slope of $\beta \sim -1$.
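As an illustration of what a $\beta \sim -1$ periodogram slope means in practice (a toy simulation, not the analysis performed on these data), one can generate flicker noise in the Fourier domain and recover the slope with a log-log fit:

```python
import numpy as np

# Toy illustration (not the analysis performed here): generate red noise
# with PSD ~ f^-1 in the Fourier domain (Timmer & Koenig style) and
# recover the slope beta from a log-log periodogram fit.
rng = np.random.default_rng(0)
n, dt = 4096, 2.0                      # samples and time step (arbitrary)
freqs = np.fft.rfftfreq(n, dt)[1:]     # positive frequencies only
amplitude = freqs ** -0.5              # PSD ~ f^-1  =>  amplitude ~ f^-0.5
fourier = amplitude * (rng.standard_normal(freqs.size)
                       + 1j * rng.standard_normal(freqs.size))
lc = np.fft.irfft(np.concatenate(([0.0], fourier)), n)  # zero-mean lightcurve
psd = np.abs(np.fft.rfft(lc)[1:]) ** 2
beta = np.polyfit(np.log(freqs), np.log(psd), 1)[0]  # expect ~ -1
```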
\begin{figure*}
\begin{center}
\subfigure[$\rm N_H = 0.18 \times 10^{22}~cm^{-2},~i \sim 63$]{\includegraphics[height=0.35\textheight,angle=-90]{fig3a_colour.eps}}
\subfigure[$\rm N_H = 0.23 \times 10^{22}~cm^{-2},~i \sim 63$]{\includegraphics[height=0.35\textheight,angle=-90]{fig3b_colour.eps}}
\caption{Best fit to the \textit{Suzaku} spectra of SWIFT J1753.5-0127 in
the 0.6 - 150 keV range. The best fit model consisting of a disc blackbody
plus power-law modified by disc reflection ({\tt
pha*(reflect*(diskbb+po)+laor)}) is plotted. The residuals show from
bottom to top {\tt pha*po}, {\tt pha*(reflect*po)}, {\tt
pha*(reflect*(po)+laor)}, {\tt pha*(reflect*(diskbb+po)+laor)}. There is
a clear excess at soft X-ray energies, the solid lines denote the {\tt
diskbb} \& {\tt laor} components. The regions 1.7--1.9 keV and 2.1--2.4
keV are ignored due to the presence of instrumental features.}
\label{soft_excess}
\end{center}
\end{figure*}
\subsection{Spectra I: Phenomenological Models}\label{spec_i}
We initially choose to fit the spectra with a number of phenomenological
models in an effort to provide a broad characterization of the data. Initial
spectral fits were made to the entire dataset consisting of 5 spectra in
total (XIS0, XIS1, XIS3, PIN, GSO), spanning the energy range 0.6 -- 250 keV
over which SWIFT J1753.5-0127 is reliably detected. Explicitly, the spectra
provide us with data in the following ranges: XIS0, XIS3 0.4 -- 10 keV; XIS1
0.2 -- 10 keV; PIN 12 -- 70 keV and GSO 50 -- 250 keV. Unfortunately, the
low energy response of the XIS detectors contains a number of uncertain
residuals, hence we additionally ignored the regions below 0.6 keV and
between 1.7 -- 1.9 keV and 2.1 -- 2.4 keV in all further modelling.
Initially, the column density was fixed at a value consistent with previous
detections ($\rm N_H = 2.3 \times 10^{21}~cm^{-2}$: \citealt{a18}), while
the normalization was allowed to vary independently for each spectrum. The
resulting normalizations were found to be consistent with those expected for
the \textit{Suzaku} detectors. The spectra were fit with a model consisting
of an absorbed power-law ({\tt pha*po})\footnote{Throughout this paper the
cross-sections and abundances assumed are {\tt bcmc} \citep{a78} and
{\tt angr} \citep{a79} respectively}, which provided an acceptable fit
except at the lowest energies ($\leq$ 2 keV). As the main residual was
present at energies below 2 keV, this region was ignored in further
fitting. Re-applying the power-law fit results in a value for the spectral
index of $\rm \Gamma = 1.62$ ($\chi^2/\nu = 6968/6463$). It is immediately
apparent that a simple phenomenological power-law model suffices to describe
the spectrum from 2 -- 250 keV.
The data were also fit with a {\tt cutoffpl} model for comparison. The fit
does not indicate the presence of a cut-off in the spectrum with the {\tt
cutoffpl} model fit being significantly inferior to a simple power-law,
i.e. we find $\chi^2/\nu_{po}$ = 6898/6463 (6968/6456) and
$\chi^2/\nu_{cutoffpl}$ = 6962/6462 (7048/6455) for the $\rm N_H =
0.18~(0.23) \times 10^{22}~cm^{-2}$ models respectively. In both cases the
best fit cut-off power-law model requires the high energy cut-off to be 500
keV, i.e. the intrinsic upper limit of the {\tt cutoffpl} model. Ignoring the
data above 150 keV does not improve the quality of the {\tt cutoffpl} fit
relative to the {\tt po} fit.
As the GSO background contains a large feature in the energy range 150 --
180 keV (see Fig. \ref{xispingso_back}), we decided to ignore the data
beyond 150 keV in all further fitting. Furthermore, as there is clearly an
additional component contributing at energies below 2 keV, this region is
ignored while constraints are placed on the hard X-ray emission. Repeating
the above power-law fit, we find $\rm \Gamma = 1.61$ ($\chi^2/\nu =
6869/6456$)\footnote{All further fits in this section assume $\rm N_H =
0.18 \times 10^{22}~cm^{-2}$ unless otherwise explicitly stated}.
Inspection of the residuals reveals any contribution from disc reflection to
be small, although there is evidence for curvature in the PIN spectrum. To
investigate the possible contribution due to disc reflection, the best fit
power-law from above was convolved with the reflection model of \citet{a43}
({\tt reflect*po}). The inclination was held fixed (cos{\it i} = 0.45),
while the abundances of metals were frozen at the default values ($\rm
N_Z/N_{\sun}$ = 1). The best fit model reveals a slightly softer power-law
($\rm \Gamma \sim 1.63$) in addition to a highly significant ($> 9\sigma$ as
determined via an F-test) reflection fraction, $\rm f \sim 0.26$
($\chi^2/\nu = 6778/6455$). As the measured spectral parameters depend on
the inclination angle, these and subsequent fits were repeated at a lower
inclination angle of 30 degrees. These fits revealed a lower reflection
fraction, f $\sim$ 0.14 (see Table \ref{specfit_params}).
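The significances quoted here come from F-tests between nested models. A minimal sketch of the statistic is given below; the p-value then follows from the F distribution with the corresponding degrees of freedom (XSPEC's ftest command performs this calculation internally):

```python
# Sketch of the F statistic for nested spectral models (the form used by
# XSPEC's ftest command): the per-parameter drop in chi^2 divided by the
# reduced chi^2 of the more complex model.  The p-value follows from the
# F distribution with (ddof, dof_complex) degrees of freedom.
def ftest_statistic(chi2_simple, dof_simple, chi2_complex, dof_complex):
    dchi2 = chi2_simple - chi2_complex
    ddof = dof_simple - dof_complex
    return (dchi2 / ddof) / (chi2_complex / dof_complex)

# Values from the text: po (6869/6456) versus reflect*po (6778/6455).
F = ftest_statistic(6869.0, 6456, 6778.0, 6455)  # F ~ 87 for one parameter
```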
Closer inspection of the XIS spectra residuals reveals a slight excess
consistent with the presence of a broad Fe K line. To place a constraint on
the size of any possible line, a Gaussian was added to this model. The best
energy of this line is $\sim$ 6.4 $\pm$ 0.1 keV, in agreement with that
expected from neutral Fe K$\alpha$. This is consistent with that expected
from reflection from neutral matter ($\rm E_{line} = 6.4 keV$) as assumed in
the {\tt reflect} model. The Gaussian component was then replaced with a
relativistic line model ({\tt laor}: \citealt{a45}), which more accurately
represents the expected line profile in the inner disc region. The
inclination of this line is fixed at the same value as the reflection
component, the emissivity profile of the disc is fixed at $\rm R^{-3}$ and
the outer radius of the emitting region is fixed at 400 R$\rm _g$. We find
the data require the presence of a broad, weak iron line ($\rm > 8\sigma$
significant as determined via an F-test; EW = 73$\rm \pm30~eV$). Allowing
the inner disc radius to vary, we find $\rm R_{in} = 19^{+6}_{-4}~R_g$. The
iron line inner radius has a strong inclination dependence with a best fit
$\rm R_{in} \sim 13~R_g$ at 30$^{\degr}$. The above inner radii are
inconsistent with the ISCO for a Schwarzschild black hole, if we extend the
confidence intervals we place the following limits on the inner radius --
$\rm R_{in} \geq 11~R_g~\&~6.8~R_g$ (3$\sigma$ level) for inclinations of
63.256$^{\degr}$ \& 30$^{\degr}$ (cos{\it i} = 0.45, 0.8660) respectively.
Including the lower energy flux in the fit once again results in a
chi-squared value of $\chi^2/\nu = 9431/7450$; we plot this in
Fig. \ref{soft_excess}. A soft excess is present, consistent with previous
observations, i.e. \citet{a18}. A simple blackbody accretion disc component
({\tt diskbb}: \citealt{a34}) was added to this model to account for the
excess soft X-ray flux, while the power-law index, $\Gamma$, and the
reflection fraction, f, were frozen at their previous best fit values. The
disc component is strongly required by the data; the best fit is achieved
for a disc temperature of kT = 0.20 keV ($\chi^2/\nu = 8149/7442$). We
measure the associated 0.6 -- 10 keV (2 -- 150 keV) unabsorbed flux to be
$\rm 6.8 \times 10^{-10}~erg~s^{-1}~cm^{-2}$ ($\rm 2.4 \times
10^{-9}~erg~s^{-1}~cm^{-2}$).
The column density is crucial here: although it has little impact on the
spectrum above energies $\sim$ 2 keV, it will have a significant impact on
the shape of the spectrum and the measured flux below this value,
e.g. Fig. \ref{soft_excess}. In order to test the effect of different
values of $\rm N_H$, the fitting was repeated for a number of different
values of the interstellar column density, ranging from $\rm N_H = 0.17
\times 10^{22}~cm^{-2}$ \citep{a30} to $\rm 0.28 \times 10^{22}~cm^{-2}$
\citep{a29}. The best fit as measured by \textit{Suzaku} was found
to be $\rm N_H = (0.18\pm 0.01) \times 10^{22}~cm^{-2}$. In Table
\ref{specfit_params}, we list the parameters for the best fit model and also
those for the model corresponding to the value of $\rm N_H$ measured
previously by \textit{XMM}.
The normalization of the {\tt diskbb} model is related to the inner
disc radius via norm $= \rm (r_{in}~[km]/d~[10~kpc])^2 \cos\theta$. In
Fig. \ref{inner_radius}, we plot the inner disc radius corresponding to the
best fit {\tt diskbb} component for various values of the column density,
where we have corrected for spectral hardening and the inner disc radius
(see \S \ref{thin_disc_isco}). We find that for reasonable values of the
column density and inclination the inner disc radius may reside close to the
ISCO, in agreement with recent work, which has provided evidence that the
cool disc may reside at or near the ISCO, even at the low luminosities
typically observed in the LHS \citep{a18,a20, a32}. We also modelled the
soft excess using the {\tt ezdiskbb} and {\tt diskpn} models in
\textsc{xspec}. In both cases, the observed blackbody component is found to
be consistent with that obtained using {\tt diskbb}. We also experimented
with using the {\tt kdblur} kernel to relativistically blur the above
continuum model; however, this did not result in an improved fit.
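The conversion from the {\tt diskbb} normalization to a corrected inner radius can be sketched as follows; the hardening factor $\kappa$ and inner-boundary factor $\xi$ below are conventional illustrative values (not necessarily those adopted here), and 8.5 kpc is the distance scaling used in this paper:

```python
import math

# Sketch of the diskbb normalization -> inner radius conversion behind
# Fig. 4: norm = (r_in[km] / (d / 10 kpc))^2 cos(i).  The apparent radius
# is corrected by xi * kappa^2; kappa = 1.7 (spectral hardening) and
# xi = 0.412 (inner boundary) are conventional illustrative values, and
# d = 8.5 kpc is the distance scaling used in the paper.
def diskbb_inner_radius_km(norm, d_kpc, incl_deg, kappa=1.7, xi=0.412):
    r_apparent = (math.sqrt(norm / math.cos(math.radians(incl_deg)))
                  * (d_kpc / 10.0))
    return xi * kappa**2 * r_apparent

# e.g. one of the Table 1 fits: norm ~ 361 at i = 63.256 deg
r_in_km = diskbb_inner_radius_km(361.0, 8.5, 63.256)
```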
\begin{figure}
\begin{center}
\includegraphics[height=0.33\textheight,angle=-90]{fig4_colour.eps}
\caption{Inner disc radius for the cool multi-colour blackbody accretion
disc {\tt diskbb} component versus column density, where we have corrected
for spectral hardening and the inclination (see text). The two sets of lines
indicate the expected inner radius assuming an inclination of 30$^{\degr}$
\& 63.256$^{\degr}$ respectively. The error bars correspond to the 90\%
confidence interval for the disc normalization. The horizontal lines
indicate the position of the ISCO for both a Kerr (1.23R$\rm_g$) \&
Schwarzschild (6R$\rm_g$) black hole, where the black hole mass is assumed
to be 10$\rm M_{\sun}$ ($\rm R_g \equiv GM/c^2$).}
\label{inner_radius}
\end{center}
\end{figure}
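For reference, the gravitational radius and Schwarzschild ISCO quoted in the caption above follow directly from $R_g = GM/c^2$ for the assumed 10 $M_{\sun}$ black hole:

```python
# The gravitational radius quoted in the Fig. 4 caption, R_g = GM/c^2,
# for an assumed 10 M_sun black hole, and the Schwarzschild ISCO at 6 R_g.
G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30  # SI units

def gravitational_radius_km(mass_msun):
    return G * mass_msun * M_SUN / C**2 / 1.0e3

r_g_km = gravitational_radius_km(10.0)   # ~14.8 km
isco_schwarzschild_km = 6.0 * r_g_km     # ~89 km
```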
We also carried out fits to the individual telemetry segments of the XIS,
PIN \& GSO spectra, to check for possible variability. The fits to each of
the 3 segments were found to be consistent with each other; hence, there is
no evidence for spectral variability, in agreement with the constant flux
observed in the lightcurves.
\begin{table*}
\begin{center}
\caption{Spectral Fit Parameters I\label{specfit_params}}
\begin{tabular}{lccccc}
\tableline
Model & Parameter & \multicolumn{4}{c}{Value} \\[0.5ex]
\tableline
{\tt phabs } & $\rm N_H~[ 10^{22}~cm^{-2} ]$ & \multicolumn{2}{c}{0.23} & \multicolumn{2}{c}{0.18}\\[0.5ex]
& $\rm i$ & 30 & 63.256 & 30 & 63.256\\[0.5ex]
\tableline
{\tt po} & $\rm \Gamma$ & 1.619$\pm$0.003 & 1.619$\pm$0.003 &
1.608$\pm$0.003 & 1.608$\pm$0.003 \\[0.5ex]
$\chi^2$ \hspace{5mm}($\nu = 6456$)& & 6938 & 6938 & 6869 & 6869 \\[0.5ex]
\tableline
{\tt reflect*po} & $\rm \Gamma$ & 1.640$\pm$0.005&
1.649$\pm$0.006 & 1.625$\pm$0.005 & 1.632$\pm$0.005\\[0.5ex]
& $\rm f~[ \Omega/2\pi ]$ & 0.17$\pm$0.03 & 0.33$\pm$0.05
& 0.14$\pm$0.03 & 0.26$\pm$0.05 \\[0.5ex]
$\chi^2$ \hspace{5mm}($\nu = 6455$) & & 6830 & 6807 & 6793 & 6778 \\[0.5ex]
\tableline
{\tt reflect*po+laor} & $\rm \Gamma$ & 1.645$\pm$0.005 &
1.654$\pm$0.005 & 1.630$\pm$0.005 & 1.637$\pm$0.005\\[0.5ex]
& $\rm f~[ \Omega/2\pi ]$ & 0.15$^{+0.03}_{-0.02}$ &
0.27$\pm$0.05 & 0.12$\pm$0.03 & 0.21$\pm$0.05\\[0.5ex]
& $\rm E_{line}~[ keV ]$ & 6.4$\pm$0.1 &
6.4$\pm$0.1 & 6.4$\pm$0.1 & 6.4$\pm$0.1\\[0.5ex]
& $\rm R_{in}~[ R_g ] $ & 13$^{+7}_{-4}$ & 19$^{+4}_{-5}$
& 13$^{+7}_{-5}$ & 19$^{+6}_{-4}$\\[0.5ex]
& $\rm EW~[ eV ] $ & 60$\pm$30 & 80$^{+50}_{-40}$
& 60$^{+20}_{-35}$ & 70$\pm$30\\[0.5ex]
$\chi^2$ \hspace{5mm}($\nu = 6451$) & & 6743 & 6720 & 6707 & 6694\\[0.5ex]
\tableline
{\tt reflect*(diskbb+po)+laor} & $\rm T_{diskbb}~[ keV ]$ & 0.190$\pm$0.002
& 0.180$\pm$0.003 & 0.25$\pm$0.03& 0.20$\pm$0.04 \\[0.5ex]
& $\rm norm0_{diskbb}$ & 3866$\pm$420 &
4830$^{+620}_{-470}$ & 137$^{+124}_{-63}$ & 361$^{+1052}_{-232}$ \\[0.5ex]
& $\rm norm1_{diskbb}$ & 4023$\pm$410 &
5080$^{+600}_{-460}$ & 115$^{+97}_{-51}$ & 294$^{+780}_{-183}$\\[0.5ex]
& $\rm norm3_{diskbb}$ & 4144$\pm$460 &
5340$^{+670}_{-530}$ & 135$^{+136}_{-66}$ & 432$^{+1536}_{-293}$\\[0.5ex]
$\chi^2$ \hspace{5mm}($\nu = 7442$) & & 8349 & 8282 & 8160 & 8149 \\[0.5ex]
\tableline
\end{tabular}
\tablecomments{Results of fits to the \textit{Suzaku} spectra for
\swt, spanning the energy range 0.6 -- 150 keV. All models are modified by
``{\tt pha}'' to account for interstellar extinction, which was frozen
at the value indicated in the table above. In addition, the spectral
regions below 0.6 keV and between 1.7 -- 1.9 keV and 2.1 -- 2.4 keV are
ignored at all times due to the presence of instrumental calibration
uncertainties. All errors are quoted at the 90\% confidence level.}
\end{center}
\end{table*}
\subsection{Spectra II: Physically Motivated Models}\label{spec_ii}
While the above models provide an excellent fit to the observed data, due to
their phenomenological nature they offer limited constraints on the nature
of the accretion flow. In this section, we will consider more complex models
in an effort to place improved constraints on the physical processes that
create the observed spectrum. We consider two scenarios for the observed
spectrum, (i) the observed spectrum is the result of the reflection of hard
X-rays from a power-law incident on the accretion disc, and (ii) the
spectrum is due to emission from a Comptonizing corona.\\
\noindent i) Disc Reflection Models:\\
Observations of a number of AGN and Galactic black holes have revealed disc
reflection features, the most prominent of which are the Fe K line at $\sim$
6.4 keV and the reflection bump at $\sim$ 20 -- 30 keV. The constant
density ionized disc model ({\tt CDID}: \citealt{a21,a22}) is used to
model this effect. Here, we only consider data below 60 keV as the {\tt
CDID} model as implemented in \textsc{xspec} is not valid at energies
above 100 keV. Solar metallicity is assumed in these fits.
As a check, a power-law fit to the 2 -- 60 keV region of the spectrum alone
was carried out; the resulting power-law index is found to be in agreement
with the value derived from a fit to the entire spectrum (\S \ref{spec_i}).
Initially, the spectrum was fit with the {\tt CDID} model alone, modified by
interstellar absorption; the resulting fit was good ($\chi^2/\nu$ =
8070/7428), with a moderately ionized disc ($\xi \sim 3.55$) and a
reflection fraction ($\rm f \sim 0.14$) in agreement with that measured
earlier. A blackbody accretion disc component was added to this model
significantly improving the fit ($\chi^2/\nu$ = 8017/7424, $\rm >~6\sigma$
significant as measured by an F-test). For our best fit model {\tt
pha*(diskbb+CDID)}, we find the following parameters: $\rm N_H \sim 0.20,
T_{diskbb} \sim 0.19, \xi \sim 3.55, \Gamma \sim 1.61, f \sim 0.12$; see
Table \ref{specfit_params1} \& Fig. \ref{spec_ii_images_1}.
The disc is found to be moderately ionized, while the low reflection
fraction is consistent with the absence of any large reflection features in
the observed spectrum. The disc temperature in this model is consistent with
that found assuming a simple {\tt diskbb+po} model (see \S\ref{spec_i}),
$\rm T_{diskbb} \sim 0.19~keV$. The inner radius in this case is $\rm \sim
70~cos\theta^{-1}~d_{8.5kpc}~km$. We note that for inclinations less than
39$^{\degr}$ this is less than the radius of the ISCO for a 10 M$_{\sun}$
Schwarzschild black hole. \\
\noindent ii) Comptonizing Corona Models:\\
X-ray spectra of black hole binaries in the low-hard state are typically
modelled assuming the hard X-ray flux originates in a corona lying above the
accretion disc, which then scatters the soft X-ray flux from the disc to
higher energies. \textit{Suzaku} observations of GRO J1655-40 \& Cyg X-1
have been modelled in such a manner \citep{a27,a28}; here we model the
broadband spectrum of \swt~following the Comptonization model outlined in
the above papers.
Initial fits with a single Comptonizing component were found to provide an
inadequate fit, particularly at the highest energies. As in the cases of
GRO J1655-40 and Cyg X-1 above, we model the spectrum using a pair of
Comptonizing coronae with the same electron temperature but differing
optical depths ({\tt diskbb+compps+compps+laor}), where a {\tt laor}
component is added to account for the possible presence of a relativistic
Iron line. A spherical geometry was assumed for the Comptonizing cloud and
possible disc reflection effects were accounted for using the reflection
routine built into the {\tt compps} model. First, the fit was carried out
without the accretion disc component (i.e. {\tt pha(compps+compps)}) to
check whether this component was actually required; the resulting fit is good
($\chi^2/\nu$ = 8173/7430). However, the column density in this case is
large ($\rm N_H \sim 0.28 \times 10^{22}~cm^{-2}$), while the seed
temperature is low ($\rm T_{in1} = T_{in2} = 0.1$ keV). Significant disc
reflection is not required in this model.
Addition of an accretion disc to this model significantly improved the
quality of the fit, with a reduced chi-squared of 1.09 ($\chi^2/\nu$ =
8086/7427). The disc component is not required in the fit to the XIS0
spectrum; however, the 90\% upper limit to the disc normalization is
consistent with the value measured from the XIS1 \& XIS3 spectra. Again we
note that the value returned for the interstellar extinction $\rm N_H \sim
0.31 \times 10^{22}~cm^{-2}$ is much higher than any previously measured
value, and as such would appear to be inconsistent with expectations for
$\rm N_H$ in the direction of \swt~ (see \S \ref{nh_discuss}). The iron
line component also significantly improved the fit ($\chi^2/\nu$ =
8009/7422; see Fig. \ref{spec_ii_images_2}), where the inner radius is
$\sim$ 60 R$\rm _g$.
In the best fit model above the temperature of the electrons in the corona
is low $\rm kT_e \sim 53~keV$, while the optical depths are found to be
approximately 0.34 \& 2.57 for the two Comptonizing components respectively,
see Table \ref{specfit_params1}. We may estimate the size of the optically
thin ($\rm \tau \sim 0.34$) and optically thick ($\rm \tau \sim 2.57$)
regions of the Comptonizing cloud from the normalization, where the radius
of the spherical cloud is $\rm R \sim d_{10 kpc}\times(cos\theta\times
norm)^{0.5}$. In both cases the inferred radius is small $\sim$ 305$\rm
cos\theta^{0.5}$ km \& 540$\rm cos\theta^{0.5}$ km ($\rm \sim
20~R_g~\&~36~R_g$ assuming a 10 $\rm M_{\sun}$ black hole and a distance of
8.5 kpc) for the optically thick and optically thin regions respectively. The
temperature of the seed disc component is low at 0.1 keV. We note that this
is approximately half the value for the blackbody temperature one finds when
fitting the data assuming a simple reflection continuum (see \S\ref{spec_i})
or when fitting the data with a detailed reflection model (see
\S\ref{spec_ii}i). As the temperature is low and the lower limit to our data
is only 0.6 keV, the disc normalization is poorly constrained as may be seen
by inspection of Table \ref{specfit_params1}. Nonetheless, the inner radius
of the accretion disc is consistent with overlapping with the coronal region
(i.e. $\rm R_{in-disc} \approx R_{corona}$ within the errors).
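As an illustrative cross-check (our arithmetic, not part of the original analysis), the corona radii quoted above follow directly from the {\tt compps} normalizations in Table \ref{specfit_params1} via the $\rm R \sim d_{10 kpc}\times(cos\theta\times norm)^{0.5}$ relation; the sketch below assumes a distance of 8.5 kpc and a 10 $\rm M_{\sun}$ black hole, and reports only the coefficient of the symbolic $\rm cos\theta^{0.5}$ scaling:

```python
import math

d_10kpc = 8.5 / 10.0   # assumed distance of 8.5 kpc, in units of 10 kpc
r_g_km = 14.77         # gravitational radius of an assumed 10 Msun black hole, km

# compps normalizations from the spectral fit (Table: norm_compps1/2)
norm_thin = 4.14e5     # tau ~ 0.34 (optically thin) component
norm_thick = 1.28e5    # tau ~ 2.57 (optically thick) component

# R ~ d_10kpc * (cos(theta) * norm)^0.5 km; the cos(theta)^0.5 factor is
# carried symbolically in the text, so only the coefficient is computed here
r_thick = d_10kpc * math.sqrt(norm_thick)   # ~305 km
r_thin = d_10kpc * math.sqrt(norm_thin)     # ~540 km
rg_thick = r_thick / r_g_km                 # ~20 R_g
rg_thin = r_thin / r_g_km                   # ~36 R_g
```

The recovered values ($\sim$305 km and $\sim$545 km, i.e. $\sim$21 and $\sim$37 $\rm R_g$) agree with those quoted in the text to within rounding.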
\begin{table}
\begin{center}
\caption{Spectral Fit Parameters II\label{specfit_params1}}
\begin{tabular}{lcc}
\tableline
Model & Parameter & Value \\[0.5ex]
\tableline
{\tt phabs } & $\rm N_H~[ 10^{22}~cm^{-2} ]$ & 0.20$\pm$0.01\\[0.5ex]
{\tt diskbb+cdid} & $\rm T_{diskbb}~[keV]$ & 0.19$\pm$0.01 \\[0.5ex]
& $\rm norm_{xis0}$ & 1554$\rm ^{+824}_{-783}$ \\[0.5ex]
& $\rm norm_{xis1}$ & 1554$\rm ^{+1222}_{-670}$ \\[0.5ex]
& $\rm norm_{xis3}$ & 1554$\rm ^{+1338}_{-886}$ \\[0.5ex]
& $\rm \Gamma$ & 1.614$\pm$0.002 \\[0.5ex]
& $\rm f~[\Omega/2\pi]$ & 0.12$\rm^{+0.02}_{-0.01}$ \\[0.5ex]
& $\rm \xi~[erg~s^{-1}~cm^{-2}]$ & 3.47$\rm^{+0.08}_{-0.05}$ \\[0.5ex]
& $\chi^2$ ($\nu = 7424$)& 8017 \\[0.5ex]
\tableline
& Parameter & Value \\[0.5ex]
\tableline
{\tt phabs } & $\rm N_H~[ 10^{22}~cm^{-2} ]$ & 0.31$\pm$0.01 \\[0.5ex]
{\tt diskbb+compps} & $\rm T_{diskbb}~[ keV ]$ & 0.10$\pm$0.01 \\[0.5ex]
{\tt +compps+laor} & $\rm norm_{xis0}$ & $\leq 3.3\times 10^5$ \\[0.5ex]
& $\rm norm_{xis1}$ & $\rm (0.35^{+1.7}_{-0.2})\times 10^{5}$ \\[0.5ex]
& $\rm norm_{xis3}$ & $\rm (1.84 \pm 1.4)\times 10^{5}$ \\[0.5ex]
& $\rm kT_e~[keV]$ & 53$\pm$3 \\[0.5ex]
& $\rm T_{in1}~[keV]$ & $\rm T_{diskbb}$ \\[0.5ex]
& $\rm \tau_1$ & 0.34$\pm$0.01 \\[0.5ex]
& $\rm norm_{compps1}$ & $\rm (4.14^{+0.13}_{-0.21})\times 10^{5}$ \\[0.5ex]
& $\rm T_{in2}~[keV]$ & $\rm T_{diskbb}$ \\[0.5ex]
& $\rm \tau_2$ & 2.57$^{+0.01}_{-0.05}$ \\[0.5ex]
& $\rm norm_{compps2}$ & $\rm (1.28^{+0.05}_{-0.02})\times 10^{5}$\\[0.5ex]
& $\rm f [\Omega/2\pi]$ & $\leq 0.01$ \\[0.5ex]
& $\rm E_{line}~[keV]$ & 6.4$\pm$0.1 \\[0.5ex]
& $\rm R_{in}~[R_g]$ & 60$^{+70}_{-25}$ \\[0.5ex]
& $\rm EW~[eV]$ & 70$\pm$30 \\[0.5ex]
& $\chi^2$ ($\nu = 7422$)& 8009 \\[0.5ex]
\tableline
\end{tabular}
\tablecomments{Parameters for the physically motivated models from section
\ref{spec_ii}. All errors have been calculated using the {\tt error}
command in \textsc{xspec} and are quoted at the 90\% confidence level. For
{\tt compps} models the inclination is frozen at 63.256$^{\degr}$; due to
the spherical geometry of the corona, the inclination dependence of this
model is negligible.}
\end{center}
\end{table}
\section{Discussion}
We present \textit{Suzaku} broadband spectra (0.6 -- 250 keV) of the black
hole candidate \swt~while in the LHS. The observed spectrum is found to be
consistent with an unbroken power-law, $\Gamma \sim 1.63$. During our
observation the flux from the source was constant, with a 0.6
-- 150 keV unabsorbed flux of $\rm 2.6 \times 10^{-9}~erg~s^{-1}~cm^{-2}$
($\rm L_x/L_{Edd} = 0.016~d_{8.5kpc}^2M_{10M_{\sun}}$). This is consistent
with \textit{INTEGRAL} observations of GRO J1655-40 in the LHS where
unbroken power-law emission ($\Gamma \sim 1.72$) extending out to $\sim$ 500
keV was detected at a luminosity of $\sim$ 0.015 L$\rm_{Edd}$
(\citealt{a67}; although see \citealt{a82}).
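The quoted Eddington ratio can be reproduced from the unabsorbed flux with a few lines of arithmetic; the sketch below is ours, assuming a distance of 8.5 kpc, a 10 $\rm M_{\sun}$ black hole, and an Eddington luminosity coefficient of $\rm \sim 1.4 \times 10^{38}~erg~s^{-1}$ per solar mass (the last an assumption that reproduces the quoted 0.016):

```python
import math

kpc_cm = 3.086e21              # cm per kpc
flux = 2.6e-9                  # 0.6 -- 150 keV unabsorbed flux, erg s^-1 cm^-2
d_cm = 8.5 * kpc_cm            # assumed distance of 8.5 kpc, in cm

L_x = 4 * math.pi * d_cm**2 * flux   # isotropic luminosity, ~2.2e37 erg s^-1
L_edd = 1.4e38 * 10.0                # Eddington luminosity for 10 Msun (assumed coefficient)
ratio = L_x / L_edd                  # ~0.016
```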
\subsection{Column Density : $\rm N_H$}\label{nh_discuss}
Accurate determination of the interstellar column density is crucial due to
the preferential absorption of soft X-rays, which in our case are consistent
with being emitted by the cool accretion disc. Measurement of $\rm N_H$ is
best achieved at soft X-ray energies. Here \textit{XMM} with its well
studied low energy calibration should provide the most reliable
determination of the interstellar column density $\rm N_H = 0.23 \times
10^{22}~cm^{-2}$ \citep{a18}. In contrast, the best fit value as measured
with \textit{Suzaku} is $\rm N_H = 0.18 \times 10^{22}~cm^{-2}$. However,
we do note that the measured $\rm N_H$ depends on the continuum model
assumed, e.g. for the Comptonizing corona model we find a best fit value
$\rm N_H = (0.31\pm 0.01) \times 10^{22}~cm^{-2}$.
In order to investigate the instrumental dependence of our measured values of
the column density, we also fit the best fit models above to a number of
\textit{Swift} XRT spectra taken before and after our \textit{Suzaku}
observation, with the closest occurring $\sim$ 60 days beforehand (obsid:
00030090050). The results from these fits are consistent with the results of
our fits to the \textit{Suzaku} spectrum, with the caveat that the smaller
energy coverage (0.6 -- 10 keV versus 0.6 -- 150 keV) and exposure time
($\sim$ 2 ks) necessarily result in larger confidence regions from fits to
the \textit{Swift} spectra.
These measurements are in agreement with numerous independent measures of
the interstellar column, sourced both from direct measurements and Galactic
surveys. X-ray observations from \textit{Swift} measured a column density of
$\rm 0.20 \times 10^{22}~cm^{-2}$ \citep{a3}, while optical measurements
also require $\rm N_H = 0.2 \times 10^{22}~cm^{-2}$ \citep{a19}. Additionally an
estimate of the column density may be obtained from various radio (N$\rm_H$
= $\rm 0.17 \times 10^{22}~cm^{-2}$ \citealt{a30}, $\rm 0.17 \times
10^{22}~cm^{-2}$ \citealt{a31}), and far-IR ($\rm 0.28 \times
10^{22}~cm^{-2}$ \citealt{a29}) surveys.
The radius of the cool disc component depends on the column density and
inclination as illustrated in Fig. \ref{inner_radius}. From above, we see
that for the directly measured values of $\rm N_H$ towards \swt, the
observed soft excess may originate from an accretion disc whose inner radius
is consistent with the ISCO.
\begin{figure}
\begin{center}
\includegraphics[height=0.33\textheight,angle=-90]{fig5_colour.eps}
\caption{The best fit disc reflection model ({\tt pha*(diskbb+CDID)}) to the
\swt~spectrum, see \S\ref{spec_ii} for details. The inferred inner disc
radius is low and consistent with that measured in \S\ref{spec_i}. See
Table \ref{specfit_params1} for the model parameters.}
\label{spec_ii_images_1}
\end{center}
\end{figure}
\subsection{A thin-disc at the ISCO?}\label{thin_disc_isco}
The significant soft excess observed in \swt~(Fig. \ref{soft_excess}) is
consistent with an origin in a standard multi-colour blackbody accretion
disc (\citealt{a34} and see Table \ref{specfit_params}). The disc
normalization may be used to estimate the inner radius of the accretion disc
when knowledge of the distance and inclination are available, as norm $\rm
\sim (r_{in}/d_{10~kpc})^2 cos\theta$. This estimate is subject to a number
of corrections: (i) spectral hardening must be accounted for, which typically
requires a multiplicative correction factor of $\sim$ 1.7 \citep{a47}, and (ii)
one must correct for the position of the inner radius, where $\rm R_{in} =
1.18r_{in}/\sqrt{cos\theta}$ (\citealt{a61}). There are also additional
errors, i.e. the zero torque inner boundary condition \citep{a46} and
radiative transfer effects \citep{a35}, which could contribute but are
difficult to quantify.
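For concreteness, applying corrections (i) and (ii) to the {\tt diskbb} normalization of Table \ref{specfit_params1} recovers the coefficient of the $\rm \sim 70~cos\theta^{-1}~d_{8.5kpc}~km$ estimate quoted in \S\ref{spec_ii}. The sketch below is our arithmetic (not the authors' code), assuming a distance of 8.5 kpc and applying the hardening factor multiplicatively to the radius:

```python
import math

norm = 1554.0     # diskbb normalization from the spectral fit table
d_10kpc = 0.85    # assumed distance of 8.5 kpc, in units of 10 kpc
f_hard = 1.7      # spectral hardening correction (i)
xi = 1.18         # inner-radius correction (ii); carries a further 1/sqrt(cos theta)

# norm ~ (r_in / d_10kpc)^2 * cos(theta)  =>  apparent radius
# (cos(theta) factors are carried symbolically; coefficients shown for cos(theta)=1)
r_apparent = d_10kpc * math.sqrt(norm)   # ~33.5 km, scaling as cos(theta)^-0.5
R_in = f_hard * xi * r_apparent          # ~67 km, scaling as cos(theta)^-1 after (i)+(ii)
```

The resulting $\rm \sim 67~cos\theta^{-1}~km$ matches the rounded $\rm \sim 70~cos\theta^{-1}~km$ quoted earlier.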
In Fig. \ref{inner_radius}, we plot the inner disc radius (corrected for (i)
\& (ii) above) measured from our best fit models in \S 3.2. The best fit
radius is consistent with the ISCO, for certain values of $\rm N_H$ \&
inclination, although there are large uncertainties. Previous observations
of \swt~at a 0.5 -- 10 keV flux of 3.9$\rm \times
10^{-10}~erg~s^{-1}~cm^{-2}$ detected a cool accretion disc with an inner
radius consistent with the ISCO. The measurements presented herein, at an
unabsorbed 0.6 -- 10 keV flux of 6.8$\rm \times
10^{-10}~erg~s^{-1}~cm^{-2}$, are in agreement with the previous analysis of
\citet{a18} and furthermore attest to the stability of the cool disc
component given that the observations were separated by almost 18 months
(see Fig. \ref{asmlc}).
\citet{a20} observed GX 339-4 at a luminosity of $\rm \sim 0.05~L_{Edd}$
with \textit{XMM} \& \textit{RXTE}. A significant excess of soft X-ray flux
was detected in the \textit{XMM} data alone, due to the absence of soft
X-ray coverage for \textit{RXTE}. Modelling revealed this excess to be
consistent with a cool accretion disc ($\rm kT_{in} \sim 0.35~keV$)
extending to the ISCO. Subsequent observations utilizing both \textit{RXTE}
\& \textit{Swift} at luminosities of 0.023 and 0.008 $\rm L_{Edd}$ also
detected this excess soft X-ray component \citep{a32}. Together with the
new observations presented in this paper, these results appear to confirm
the presence of the inner radius of the accretion disc at the ISCO (or at
least at a radius far lower than expected) for luminosities far below that
at which the state transition from HSS to LHS state occurs, in contrast to
theoretical expectations (e.g. \citealt{a36}).
\citet{a74} have re-analysed the \textit{XMM} data on \swt~\& GX 339-4
focusing on the timing characteristics and variability inherent to the
spectra. Using a new analysis technique, they find significant evidence for
the presence of the soft spectral component in agreement with
\citet{a18,a20}. They interpret the soft X-ray flux as originating in
variability intrinsic to the accretion disc. In particular they find that
the intrinsic variability of the accretion disc is likely to be responsible
for the low frequency Lorentzian feature present in the power spectral
density function of sources in the low-hard state. This places a limit on
the disc truncation radius of $<$ 20 $\rm R_g$. The \textit{XMM} spectrum
of \swt~has also been re-analysed by \citet{a75}. They fit the spectrum with
a relativistically blurred reflection model and find an inner disc radius of
$\rm R_{in} \sim 3~R_g$. This in turn provides a measure of the spin of the
putative black hole of $\rm a \sim 0.76$; this is again inconsistent with the idea
of a large truncation radius for the accretion disc in the low-hard state.
The magnitude of the reflection features is observed to be quite low in our
data. Although the presence of a broad relativistic line is required by the
data ($> 7 \sigma$ significant, EW $\rm \sim 70 \pm 30~eV$), it is
intrinsically weak (norm $\sim 10^{-4}$). The detection of a relativistic
iron line is consistent with the measurements of \citet{a76} who also
measured a redshifted line at an energy of 6.2 keV during pointed
\textit{RXTE} observations (see Fig. \ref{asmlc}). The inner radius inferred
from the {\tt laor} line fits is larger than the ISCO at $\rm R_{in} \sim$
10 -- 20 gravitational radii, although much less than might be expected from
the standard disc-truncation picture, where the inner radius is
expected to be an order of magnitude larger \citep{a36}.
The curvature present in the PIN spectrum is also consistent with a low
reflection fraction, f $\sim$ 0.12 - 0.21 (inclination $\sim$ 30$^{\degr}$
-- 63.256$^{\degr}$). We note that the reflection fraction inferred from the
self consistent {\tt CDID} model is at the lower end of the values inferred
from the phenomenological {\tt reflect*(diskbb+po)+laor} model. It is known
that the reflection features of black hole binaries are generally weaker in
the low-hard state than in higher luminosity states
(i.e. \citealt{a57,a58}). The relative weakness of these features in
\swt~is primarily due to the low luminosity nature of the source ($\rm L_x
\sim 8 \times 10^{36}~erg~s^{-1}$). In comparison, the X-ray luminosity in
GX 339-4 (\citealt{a20}) was much higher when a strong relativistically
smeared iron line ($\sim 8 \sigma$ significant, EW $\sim$ 350 eV) was
detected ($\rm L_x \sim 4 \times 10^{37}~erg~s^{-1}$).
The value we measure for the reflection fraction in \swt~is comparable to
previous LHS observations where reflection fractions of $\sim$ 0.2 -- 0.3
were measured in Cyg X-1 \& GX 339-4 \citep{a37,a20}. In contrast,
observations at higher luminosities require much higher reflection
fractions. X-ray spectra of the black hole binaries XTE J1650-500 and GX
339-4 in the VHS state revealed reflection fractions of approximately unity
\citep{a57,a58}. The low reflection fractions inferred in the low-hard state
may be interpreted as evidence for a truncated disc, e.g. \citet{a37}.
Alternatively the disc may remain close to the ISCO with the low reflection
fraction resulting from beaming of the hard X-ray flux away from the disc by
a mildly relativistic outflowing corona, e.g. \citet{a38}.
\begin{figure}
\begin{center}
\includegraphics[height=0.33\textheight,angle=-90]{fig6_colour.eps}
\caption{The best fit Comptonization model ({\tt
pha*(diskbb+compps+compps+laor)}) to the \swt~spectrum, see
\S\ref{spec_ii} for details. The inferred inner disc radius is larger than
that measured in \S\ref{spec_i}, i.e. $\rm R_{in} \gtrsim 30~R_g$. See
Table \ref{specfit_params1} for the model parameters.}
\label{spec_ii_images_2}
\end{center}
\end{figure}
\subsection{Alternatives to a thin disc at the ISCO}
A number of alternative explanations have been put forward to account for
the presence of the soft excess observed in \swt~\& GX 339-4. \citet{a66}
reanalyzed the data from \citet{a18} and find that while it is consistent
with containing a soft disc component, a number of alternative continuum
prescriptions that allow the disc to be truncated are also valid; however,
see \citet{a75,a74} who outline a number of issues with this result.
\citet{a64} propose a scenario in which the disc is truncated at larger
radii; however, the inner edge of the disc is irradiated by the high energy
Comptonized photons in the corona. This irradiation will cause the truncated
inner disc edge to be hotter and hence appear to lie at smaller radii than
is actually the case. In the model of \citet{a65}, the inner edge of the
cool truncated disc is seen to overlap with the inner ADAF region. Here in
the overlap region the cool disc is heated to temperatures mimicking those of
a cool disc at the ISCO.
It is also possible that what we observe is not the accretion disc but
instead a ring of material that has condensed from the corona. Such an idea
has been explored in detail by \citet{a59} and \citet{a60}. They find that
it is possible for enough material to condense from the corona to produce an
inner ring of material extending from near the ISCO to a few tens of
Schwarzschild radii, while the accretion disc is truncated at larger
radii. In particular, detailed modelling was able to reproduce the soft disc
components observed by \citet{a32} in GX 339-4 (see \citealt{a60}).
\subsection{Comparison with previous \textit{Suzaku} observations}
Two additional black hole binaries have been observed by \textit{Suzaku}
while in the low-hard state. \citet{a27} presented observations of GRO
J1655-40, while \citet{a28} observed Cyg X-1. The observations took place
while the systems were in the LHS at luminosities of approximately 0.007
L$\rm_{Edd}$ and 0.02 L$\rm_{Edd}$ respectively. We also modelled the
spectra of \swt~with a combination of two Comptonizing components, to aid
comparison of these data with the \textit{Suzaku} observations of the black
hole binaries Cyg X-1 \citep{a28} \& GRO J1655-40 \citep{a27}. For both of
these systems, the observed spectrum was interpreted as being due to a
two-component Comptonizing corona, i.e. a population of electrons with the same
temperature ($\rm kT_e$) but differing optical depths ($\tau$), modified by
disc reflection, {\tt diskbb+compps+compps}. The parameters of our best fit
model are listed in Table \ref{specfit_params1}, while the fit itself is
displayed in Fig. \ref{spec_ii_images_2}. In Table \ref{obs_comparison}, we
display the parameters for our best fit Comptonization model along with
those for the best fit models for Cyg X-1 and GRO J1655-40.
The blackbody disc component is found to be cool (kT $\sim$ 0.1 keV) and the
inner radius of the accretion disc is consistent with the outer radius of
the corona within the errors ($\rm R_{in} \sim 30~R_g$). The best fit radius
for the iron line is also consistent with this value. The high
value for the Hydrogen column density ($\rm 0.31 \times 10^{22}~cm^{-2}$)
returned by this model is inconsistent with the available data and
casts doubt on the Comptonization scenario as a viable model for the
observed \textit{Suzaku} spectrum of \swt. However, the presence of
circumstellar matter or a local absorber could account for the excess
absorption above that detected at radio wavelengths.
We find an electron temperature cooler than that measured in both Cyg X-1
and GRO J1655-40 when the spectra are interpreted in terms of this
Comptonizing corona model. Assuming the disc truncation model for the
low-hard state to be correct, we would expect the electron temperature to
decrease with increasing luminosity, e.g. \citet{a73}. The value of $\rm
kT_e$ we measure would imply that \swt~was at a luminosity in excess of that
at which Cyg X-1 was observed ($\rm \sim 0.02~L_{Edd}$). This is in contrast
to various luminosity estimates. Indeed, we measure a luminosity of $\rm
\sim 0.016~d_{8.5kpc}^2M_{10M_{\sun}}~L_{Edd}$; for \swt~to have a
luminosity greater than that of Cyg X-1 at the time of the \textit{Suzaku}
observations would imply a distance greater than 8.5 kpc and/or a black hole
mass less than 10$\rm M_{\sun}$.
\citet{a28} have argued that the inner radius of the accretion disc does not
extend to the ISCO in the LHS state based on the \textit{Suzaku}
observations of Cyg X-1 and GRO J1655-40 (see \S 4.3). They base their
argument on two main points. First, the low reflection fractions measured
in the LHS show that either the accretion disc does not intrude too
deeply into the corona, or some form of outflow must be formed. They consider
the outflow case to be unlikely as the reflection fractions measured in both
GRO J1655-40 (i $\rm \sim 70^{\circ}$) and Cyg X-1 (i $\rm \sim 45^{\circ}$)
are similar, whereas if the reflection was due to an outflow it would be
expected to have a strong inclination dependence, which is not
observed. Here, we note that the errors on the reflection fractions measured
in the case of Cyg X-1 (f = 0.4$^{+0.2}_{-0.3}$), call into question the
validity of this inclination based argument, given the quality of the
currently available data.
Second, it is argued that the inner radius determined from fits to the iron
K line in Cyg X-1 ($\rm \sim 13~R_g$) is inconsistent with the ISCO. This
point is much more ambiguous as their reported value of the inner radius is
$\rm R_{in} = 13^{+6}_{-7}~R_g$. This value is consistent with the ISCO for
a Schwarzschild black hole. As such, it is clear from the currently
available data that there is evidence for a cool disc component, with an
inner radius that in some cases is consistent with the ISCO; whether or not
the cool disc component actually extends to the ISCO is presently not clear.
Finally, it is important to note the high mass X-ray binary nature of Cyg
X-1. Here the accretion process, namely accretion via a stellar wind, is
significantly different from that in GRO J1655-40 (Roche lobe overflow) and
as such may preclude detailed comparison with the low mass X-ray binaries
like \swt~and GRO J1655-40.
\begin{table*}
\begin{center}
\caption{Comptonizing Corona Fits to the Spectra of Low-Hard State Black
Holes Observed by \textit{Suzaku}.\label{obs_comparison}}
\begin{tabular}{lcccccccccc}
\tableline
System & L$\rm_x$ & N$\rm_H$ & kT$\rm_{bb}$ & R$\rm_{disc}$ & R$\rm_{thick}$ &
R$\rm_{thin}$ & kT$\rm_{e}$ & $\tau_1$ & $\tau_2$ & f \\
& [ L$\rm_{Edd}$ ] & [ $\rm 10^{21} cm^{-2}$ ] & [ keV ] & [
km ] & [ km ] & [ km ] & [ keV ] & & & [ $\Omega/2\pi$ ]\\
\tableline\tableline
\swt & 0.016 & 0.31 & 0.1 & $\sim$ 830 & $\sim$ 205 & $\sim$ 360 &
53 & 0.34 & 2.57 & 0 \\
GRO J1655-40 & 0.007 & 0.74 & 0.2 & $\sim$ 330 & $\sim$ 26 & $\sim$ 65 & 135 & 0.25 & 1.2 &
0.5 \\
Cyg X-1 & 0.02 & 0.66 & 0.3 & $\sim$ 103 & $\sim$ 75 & $\sim$ 200 & 100 & 0.4 & 1.5 &
0.4 \\
\tableline
\end{tabular}
\tablecomments{GRO J1655-40 data from \citet{a27}, Cyg X-1 data from
\citet{a28}, \swt~this work. Where required a distance of 8.5 kpc and an
inclination of 63.256$^{\degr}$ have been assumed for \swt. R$\rm_{disc}$
refers to the inner radius of the accretion disc whereas R$\rm_{thick}$ \&
R$\rm_{thin}$ refer to the outer radius of the optically thick and thin
regions of the Comptonizing cloud respectively.}
\end{center}
\vspace{7mm}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[height=0.25\textheight]{fig7_colour.eps}
\caption{The absorption corrected spectral energy distribution for \swt. We
assumed a column density of $\rm N_H \sim 0.20 \times 10^{22}~cm^{-2}$, to
facilitate comparison with the SEDs displayed in \citet{a19,a76}. The
optical/NIR \& X-ray data are quasi-simultaneous, see \S\ref{the_jet}
for details.}
\label{sed_jet}
\end{center}
\end{figure}
\subsection{Jet and/or Corona?}\label{the_jet}
Our knowledge of the role of the relativistic radio jets in X-ray binaries has
advanced rapidly in the past 10 years. Observations of numerous X-ray
binaries have revealed the presence of a compact quasi-steady jet in the
low-hard state, whereas in the high-soft state the jet is typically observed
to disappear \citep{a26,a81}. These observations also point towards a significant
contribution from the jet towards the overall energy budget of the binary
system, with the jet luminosity being possibly equal to that observed in
X-rays, e.g. \citet{a56}. In addition, a significant correlation has been
found between the X-ray and radio luminosities of accreting sources across a
range of luminosity (in Eddington units), suggestive of a disc-jet
coupling \citep{a55}.
\citet{a25} have shown how one could, in principle, distinguish between
different jet models based on the magnitude of the reflection features
present in the spectrum. In particular, reflection fractions greater than
0.2 could not be reproduced by their jet models, reflection fractions
between 0.1--0.2 could be due to synchrotron self-Comptonization in the base
of a jet, while a reflection fraction below 10\% could be interpreted as
evidence that the spectrum was dominated by synchrotron emission from the
jet. If the base of the jet is confined within a few gravitational radii of
the black hole, relativistic light bending will become important. This will
act in the opposite direction to the beaming of the outflow leading to a
slightly larger reflection fraction \citep{a80}. These models are also
consistent with the detection of an accretion disc near the ISCO, as we have
likely detected with \textit{Suzaku}.
A break or cut-off in the high energy spectrum of black holes and neutron
stars can be interpreted in terms of the electron temperature in a thermal
Comptonizing corona. The high energy spectrum of \swt~extends to at least
200~keV without a strong break or cut-off in our observations. Moreover,
single thermal Comptonization models fail to achieve acceptable fits. In
the context of \citet{a25}, the reflection fraction of $\sim$ 0.12 -- 0.21
measured in \swt~ is consistent with a disc that is illuminated by
synchrotron self-Comptonization produced in the base of a jet. A corona that
is independent of a jet but which produces a non-thermal Comptonization
spectrum might be able to describe the data, but the base of a jet provides
a natural context for the production of hard X-ray emission like that we
have observed.
In section \ref{spec_i}, we showed how the data may be fit with a power-law
modified by reflection; this model is supported by more detailed modelling
in \S\ref{spec_ii}. In both of these cases an incident power-law is
required; the jet is a natural source of such a power-law. In
Fig. \ref{sed_jet}, we plot the absorption corrected spectral energy
distribution for \swt. The optical data are from \citet{a69} and were taken
at approximately the same time as our \textit{Suzaku} spectra. In
addition, we include V-band optical and J, H \& K-band NIR data points
obtained in April and July 2009 at the Magellan and MDM observatories. The
V-band magnitude was found to be consistent with the previous measurements
of \citet{a69}, as such the NIR data should be representative of the
infrared portion of the SED at the time of our \textit{Suzaku} observations.
The radio data points correspond to the radio detection shortly after the 2005
outburst and likely represent upper limits to the radio flux at the time of
our observations \citep{a19}; these are also consistent with the
non-detection at 5 GHz \& 8 GHz by \citet{a77}.
The solid line extending from the X-ray to the optical represents a
power-law as might be expected from optically thin radio emission, i.e.
$\rm f_{\nu} \propto \nu^{-0.5}$. This is similar to the slope of the
optically thin jet in GX 339-4 \citep{a72}. Extrapolating this power-law to
optical/NIR wavelengths shows that a spectral break is required at
optical/UV frequencies. Analysis of optical data of \swt~in its current
state by \citet{a70} favours a case where any jet emission is insignificant
at optical wavelengths, consistent with our extrapolated power-law; this is
in agreement with an independent observation/analysis by \citet{a68,a76}.
\citet{a83} has shown, based on analysis of optical data from XTE J1118+480,
GX 339-4 \& \swt, that the variability at optical wavelengths is
inconsistent with being produced by reprocessing of X-rays in the accretion
disc, and instead favours a model where the emission in the low-hard state is
dominated by the jet, e.g. \citet{a84}.
\section{Conclusions}
We have presented \textit{Suzaku} broadband X-ray observations of the
candidate black hole X-ray binary \swt. The broadband spectrum (2 -- 250
keV) is observed to be consistent with a simple power-law model ($\Gamma
\sim 1.63$). Confirming previous observations, we detect the presence of an
excess at soft X-ray energies. In addition, a weak relativistic iron line
and curvature consistent with a reflection hump at 20 -- 30 keV are
detected.
These observations point towards the persistence of the accretion disc at a
much lower radius than previously appreciated in the low-hard state.
Estimates of the disc inner radius with both a simple {\tt diskbb+po} model
and a detailed reflection model reveal values consistent with the ISCO ($\rm
R_{in} \lesssim 6~R_g$) for certain values of both the column density and
inclination. In contrast, modelling the spectra with a Comptonization model,
while revealing a truncated inner disc ($\rm R_{in} \sim 50~R_g$), implies a
value for the Hydrogen column density in disagreement with all previous
estimates.
\acknowledgements
This research has made use of data obtained from the \textit{Suzaku}
satellite, a collaborative mission between the space agencies of Japan
(JAXA) and the USA (NASA). This research has made use of data obtained from
the High Energy Astrophysics Science Archive Research Center (HEASARC),
provided by NASA's Goddard Space Flight Center. This research made
use of the SIMBAD database, operated at CDS, Strasbourg, France and
NASA's Astrophysics Data System.
GM thanks the Spanish Ministerio de Ciencia e Innovaci\'on and CSIC for support
through a Ram\'on y Cajal contract. J.H. gratefully acknowledges support
from NASA grant NNX08AC20G.
0911.2852 | \section*{Abstract}
We present an Evolutionary Placement Algorithm (EPA) for the rapid assignment of sequence fragments (short reads) to branches of
a given phylogenetic tree under the Maximum Likelihood (ML) model.
The accuracy of the algorithm is evaluated on several real-world data sets and
compared to placement by pair-wise sequence comparison, using edit distances and BLAST.
We test two versions of the placement algorithm: a slower, more accurate version, in which branch length optimization is conducted for each short read insertion,
and a faster version, in which the branch lengths are approximated at the insertion position.
For the slow version, additional heuristic techniques are explored that yield almost the same run time as the fast version, with only a small loss of accuracy.
When those additional heuristics are employed the run time of the more accurate algorithm is comparable to that of a simple BLAST search for data sets with a
high number of short query sequences. Moreover, the accuracy of the Evolutionary Placement Algorithm is significantly higher, in particular when the
taxon sampling of the reference topology is sparse or inadequate. Our algorithm, which has been integrated into RAxML, therefore provides an equally
fast but more accurate alternative to BLAST for phylogeny-aware analysis of short-read sequence data.
\textbf{Keywords:} Maximum Likelihood, short sequence reads, phylogenetic placement, RAxML, metagenomics
\newpage
\section*{}
Identification of organisms from, e.g., microbial communities, through analysis of their DNA has become an important analysis process in current biology.
Recently, the advent of new wet-lab techniques such as pyrosequencing (\cite{ronaghi2001psl}) has increased the amount of sequence data available for
identification and analysis of microbial communities by orders of magnitude.
This rapid increase in available sequence data poses new challenges for short-read sequence identification tools.
We can no longer expect that the steady increase in computing power according to Moore's law
is fast enough to be able to handle this {\em biological data flood} computationally.
A single sequencing run can already generate more than 100,000 short read sequences that comprise
sequence fragments with a length of approximately 30 to 450 nucleotides (base pairs) depending on the sequencer
used. Such sequencing runs can be carried out within about an hour.
Besides rapid full-genome assembly, another important application is the
sampling of microbial communities, e.g., in permafrost-affected soils (\cite{Ganzert2007}),
the human and vertebrate gut (\cite{ley05complete,turnbaugh2008cgm,ley2008www})
(with important implications on health and nutrition), hypersaline mats (\cite{ley06complete}), or
on hands (\cite{fierer2008ish}) with respect to hand hygiene and health.
Given the short read sequences, the first step in the analysis of such microbial communities consists in identifying the anonymous reads, i.e.,
a large number of short sequences are available, but we do not know to which known organism each of them is most closely related.
This assignment of the short reads to known organisms then allows us to reason about the composition of the microbial communities,
to determine the microbial diversity, and to compare the microbial communities between different samples (see~\cite{turnbaugh2008cgm}).
For instance, in one sample 20\% of the reads might be most closely related to a specific taxonomic group of bacteria,
while in a different sample (e.g., from a different gut) only 5\% may be associated to this group.
Here we present a novel algorithm, the Evolutionary Placement Algorithm (EPA), for rapid phylogenetic identification of anonymous reads
(denoted as Query Sequences (QS)), using a set of full length Reference Sequences (RS).
The most straight-forward approach for identifying the QS is to use tools that are based on sequence similarity such as BLAST.
However, such a BLAST based approach exhibits an important limitation:
It can yield misleading assignments of QS to RS, if the RS sample does not contain sequences that are sufficiently closely related to the QS, i.e.,
if the taxon sampling is sparse or inappropriate.
Any approach based on sequence similarity like BLAST, which is based on pair-wise sequence comparison will not unravel, but silently ignore,
potential problems in the taxon sampling of the RS.
For instance, given two RS $a$ and $b$, a QS $q$ may be identified as being most closely related to $a$ by BLAST.
In reality $q$ might be most closely related to a RS $c$ which is not included in the reference sequence set.
Since this is a known problem (\cite{koski2001cbh}), many studies of microbial communities employ phylogenetic (evolutionary) methods for QS identification,
despite the significantly higher computational cost.
If a QS falls into an inner branch of the phylogenetic reference tree comprising the RS, i.e., it is not located near a leaf of the tree,
this indicates that the sampling of the RS is insufficient to identify and capture the diversity of the QS.
This also provides information about the clades of the reference tree for which the taxon sampling
is sparse and indicates on which part(s) of the tree sequencing efforts should focus to improve taxon sampling.
To date, phylogeny-based evolutionary identification is conducted as follows: the QS are aligned with respect to a Reference Alignment (RA) for the RS, and then
inserted into the Reference Tree (RT) either via a complete {\em de novo} tree reconstruction, a constrained tree search, using the RT as a constraint or backbone,
or some fast and approximate QS addition algorithm as implemented, e.g., in ARB (\cite{arb2004fullref}) using Maximum Parsimony (MP).
For DNA barcoding, phylogeny-based Bayesian analysis methods have recently been proposed (\cite{munch2008bayesian} and \cite{nielsen2006statistical})
that are however applied to significantly smaller trees with less taxa.
The current standard approach for analysis of environmental reads yields a fully resolved bifurcating (binary)
tree that often comprises more than 10,000 sequences (\cite{fierer2008ish,turnbaugh2008cgm}).
The alignments used to reconstruct these trees mostly comprise only a single gene, typically 16S or 18S rRNA.
The reconstruction of such large trees with thousands of taxa, based on data from a single gene is time-consuming and hard
because of the weak phylogenetic signal, i.e., the reconstruction accuracy decreases for trees with many but relatively
short sequences (see~\cite{Olaf2000, Moret2002}). Moreover, in metagenomic data sets, a large number of taxa in the alignment (the query sequences),
will only have a length of approximately 200-450 base pairs if a 454 sequencer is used.
Thus, for identification of short read QS, the lack of phylogenetic signal becomes even more prevalent and critical if a comprehensive tree is reconstructed.
As an example for the lack of signal and topological stability
in such hard-to-analyze single gene data sets with many taxa, we may consider the pair-wise topological Robinson Foulds
distances (\cite{robinson1981cpt}) for a collection of Maximum Likelihood (ML~\cite{felsenstein81}) trees that cannot be
statistically distinguished from each other via the
standard significance tests (\cite{goldman2000lbt}).
For a collection of 20 ML trees inferred with RAxML (\cite{AlexandrosStamatakis08232006})
on a single-gene rRNA data set with 4,114 taxa the average pair-wise RF distance between the ML trees was approximately 30\%.
Hence, in order to solve the problems associated with the lack of signal and to significantly accelerate the analysis process,
we advocate a different approach that only computes the optimal insertion position for every QS in the RT with respect to its Maximum Likelihood score.
We introduce a new algorithm for the phylogenetic placement of QS and thoroughly test the placement accuracy on seven previously published DNA and one protein data set.
We assess the impact of QS length on placement accuracy and also conduct tests on short paired-end reads.
Because phylogenetic placement is inherently more compute intensive than simple sequence based placement,
performance optimization is an important factor in the development of such an algorithm if it is to become
a useful and fast alternative to BLAST.
Therefore, we have devised several evolutionary placement algorithms and heuristics with varying degrees of computational complexity.
The algorithm which has been developed and tested in cooperation with microbial biologists is already available in the latest open-source
code release of RAxML (\cite{AlexandrosStamatakis08232006})
(version 7.2.1, released in July 2009, \url{http://wwwkramer.in.tum.de/exelixis/software.html}).
Our new algorithmic approach represents a useful, scalable, and fast tool for evolutionary (phylogenetic)
identification of environmental QS. We are not aware of any comparable algorithm that can perform the task described here, in particular
on trees with more than 10,000 taxa.
\section*{Evolutionary Placement Algorithm}
\label{methods_heuristics}
\subsection*{The Maximum Likelihood Model}
\label{ml}
The input of a standard phylogenetic analysis consists of a multiple sequence alignment with $n$ taxa and
$m$ alignment columns (sites). The output is an \emph{unrooted} binary tree.
In order to compute the likelihood on a fixed tree topology one also needs several ML model parameters:
the instantaneous nucleotide substitution matrix $Q$ (a $4 \times 4$ matrix), which contains the transition probabilities between nucleotides for an infinitesimal time $dt$,
the prior probabilities of observing the nucleotides, e.g., $\pi_A, \pi_C, \pi_G, \pi_T$ for DNA data,
which can be determined empirically from the alignment, and the $\alpha$ shape parameter that forms part of the $\Gamma$ model (\cite{yang94}) of rate heterogeneity.
The $\Gamma$ model accounts for the biological fact that different columns in the alignment evolve at different speeds, and finally the $2n-3$ branch lengths.
The CAT approximation of rate heterogeneity~(\cite{stamatakis2006cat}) can be used as an efficient and accurate computational workaround for $\Gamma$, since
it requires four times less memory and is three to four times faster than phylogenetic inference under the $\Gamma$ model.
We want to emphasize that the CAT approximation represents a ``quick and dirty'' workaround for $\Gamma$ and should not be confused
with mixture models (\cite{lartillot2004bayesian}).
Given all these parameters, in order to compute the likelihood of a fixed \emph{unrooted} binary tree topology,
initially one needs to compute the entries for all internal probability vectors (located at the inner nodes)
that contain the probabilities $P(A),P(C),P(G),P(T)$ of observing an \verb|A,C,G,| or \verb|T| at each site $c$ of the
input alignment at the specific inner node, bottom-up from the tips towards a virtual root that can be placed into any branch of the tree.
This procedure is also known as the Felsenstein pruning algorithm (\cite{felsenstein81}).
Under certain standard model restrictions (time-reversibility of the model) the overall likelihood score will be the
same regardless of the placement of the virtual root.
Every probability vector entry $\vec{L}(c)$ at a position $c$ ($c=1...m$), both at the tips and at the inner nodes
of the tree topology, contains the four probabilities P(A), P(C), P(G), P(T) of observing a nucleotide \verb|A, C, G, T| at
a specific site $c$ of the input alignment.
The probabilities at the tips (leaves) of the tree for which observed data {\em is} available are set to 1.0
for the observed nucleotide character at the respective position $c$,
e.g., for the nucleotide \verb|A|: $\vec{L}(c)=[1.0,0.0,0.0,0.0]$.
Given a parent node $k$, and two child nodes $i$ and $j$ (with respect to the virtual root), their probability vectors $\vec{L}^{(i)}$
and $\vec{L}^{(j)}$, the respective branch lengths leading to the children $b_i$ and $b_j$ and the transition probability matrices
$P(b_i), P(b_j)$, the likelihood of observing an A at position $c$ of the ancestral (parent) vector $\vec{L}_A^{(k)}(c)$
is computed as follows:
\begin{small}
\begin{equation}\label{formula_lrec}
\vec{L}_A^{(k)}(c) = \big(\sum_{S=A}^T P_{A S}(b_i) \vec{L}_{S}^{(i)}(c)\big)\big(\sum_{S=A}^T P_{A S}(b_j) \vec{L}_{S}^{(j)}(c)\big)
\end{equation}
\end{small}
The transition probability matrix $P(b)$ for a given branch length is obtained from $Q$ by $P(b)=e^{Qb}$.
Once the two probability vectors $\vec{L}^{(i)}$ and $\vec{L}^{(j)}$ to the left and right of the virtual root ($vr$) have been computed,
the likelihood score $l(c)$ for an alignment column $c, c=1...m$ can be calculated as follows, given the branch length
$b_{vr}$ between nodes $i$ and $j$:
\begin{small}
\begin{equation}\label{formula_root}
l(c) = \sum_{R=A}^T \big(\pi_R \vec{L}_{R}^{(i)}(c) \sum_{S=A}^T P_{R S}(b_{vr})\vec{L}_{S}^{(j)}(c)\big)
\end{equation}
\end{small}
The overall score is then computed by summing over the per-column log likelihood scores as indicated in equation~\ref{formula_likelihood}.
\begin{small}
\begin{equation}\label{formula_likelihood}
LnL = \sum_{c=1}^m \log(l(c))
\end{equation}
\end{small}
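To make the three equations concrete, the per-site recursion can be sketched in a few lines of Python. This is our own illustrative simplification: it assumes the Jukes-Cantor model (so that $P(b)=e^{Qb}$ has a closed form), uniform base frequencies, and a one-column alignment; RAxML itself uses GTR with $\Gamma$/CAT rate heterogeneity. All function and variable names are ours.

```python
import math

def jc69_P(b):
    """JC69 transition probability matrix P(b) = exp(Qb) as a 4x4 list
    (closed form; an assumption for simplicity, RAxML uses GTR)."""
    e = math.exp(-4.0 * b / 3.0)
    off, diag = 0.25 * (1.0 - e), 0.25 + 0.75 * e
    return [[diag if i == j else off for j in range(4)] for i in range(4)]

def mat_vec(P, v):
    return [sum(P[i][j] * v[j] for j in range(4)) for i in range(4)]

def tip_vector(nuc):
    """Tip vector: probability 1.0 for the observed nucleotide."""
    return [1.0 if c == nuc else 0.0 for c in "ACGT"]

def combine(L_i, L_j, b_i, b_j):
    """Eq. (1): ancestral probability vector at the parent node."""
    x, y = mat_vec(jc69_P(b_i), L_i), mat_vec(jc69_P(b_j), L_j)
    return [x[s] * y[s] for s in range(4)]

def site_likelihood(L_i, L_j, b_vr, pi=(0.25,) * 4):
    """Eq. (2): per-site likelihood l(c) at the virtual root."""
    y = mat_vec(jc69_P(b_vr), L_j)
    return sum(pi[s] * L_i[s] * y[s] for s in range(4))

# one-column toy tree: tips A and C joined at an inner node, then
# rooted against a third tip A; Eq. (3) is then just log(l(c))
L_k = combine(tip_vector("A"), tip_vector("C"), 0.1, 0.1)
lnL = math.log(site_likelihood(L_k, tip_vector("A"), 0.05))
```

For a full $m$-column alignment, the last two lines would be repeated per site and the per-site log likelihoods summed, exactly as in equation~\ref{formula_likelihood}.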
In order to compute the {\em Maximum} Likelihood value for a fixed tree topology
all individual branch lengths, as well as the parameters of the $Q$ matrix and the $\alpha$ shape parameter,
must also be optimized via an ML estimate.
For the $Q$ matrix and the $\alpha$ shape parameter the most common approach in state-of-the-art ML implementations consists in using
Brent's algorithm (\cite{brent1973}). For the optimization of branch lengths the Newton-Raphson method is commonly used.
In order to optimize the branches of a tree, the branches are repeatedly
visited and optimized one by one until the achieved likelihood improvement (or branch length change) is smaller than some pre-defined $\epsilon$.
Since the branch length is optimized with respect to the likelihood score, the Newton-Raphson method
only operates on a single pair of likelihood vectors $\vec{L}^{(i)}, \vec{L}^{(j)}$ at a time that define the branch to be optimized.
Evidently, when a branch of the tree is updated this means that a large number of probability vectors $\vec{L}$ in the
tree are affected by this change and hence need to be re-computed.
An important implementation issue is the assignment of memory space for the probability vectors to inner nodes of the tree.
There exist two alternative approaches: a separate vector can be assigned to each of the three outgoing branches of an inner
node, or only one vector can be assigned to each inner node. In the latter case, which is significantly more memory-efficient,
the probability vectors always maintain a rooted view of the tree, i.e., they are oriented towards the current virtual root of the tree.
In the case that the virtual root is then relocated to a different branch (for instance to optimize the respective branch length),
a certain number of vectors, for which the orientation to the virtual root has changed need to be re-computed.
If the tree is traversed in an intelligent way for branch length optimization, the number of probability vectors that will
need to be re-computed can be kept to a minimum.
RAxML uses this type of rooted probability vector organization.
\subsection*{Evolutionary Placement Algorithm}
\label{algo}
The input for our evolutionary identification algorithm consists of a reference tree comprising the $r$ RS (Reference Sequences),
and a large comprehensive alignment that contains the $r$ RS {\em and} the $q$ QS (Query
Sequences).
The task of aligning several QS with respect to a given reference rRNA alignment can for instance be accomplished with
ARB (\cite{arb2004fullref}) or NAST (\cite{desantisjr2006nms}).
One key assumption is that the Reference Tree (RT) is biologically well-established or that it has been obtained via a preceding
thorough ML analysis.
Initially, the algorithm will read in the RT and reference alignment and mark all sequences of the
alignment that are {\em not} contained in the reference tree as QS.
Thereafter, the ML model parameters and branch lengths on the reference tree will be optimized using the standard
procedures implemented in RAxML.
Once the model parameters and branch lengths have been optimized on the reference tree, the actual identification algorithm is invoked.
It will visit the $2r-3$ branches of the reference tree in depth-first order, starting at an arbitrary branch of the tree
leading to a tip. At each branch, initially the probability vectors of the reference tree to the left and the right will be re-computed (if they are not already
oriented towards the current branch). Thereafter, the program will successively insert (and remove again) one QS at a time into the current branch and
compute the likelihood (we henceforth denote this as insertion score) of the respective tree containing $r+1$ taxa.
The insertion score will then be stored in a $q \times (2r-3)$ table that keeps track of the insertion scores for all $q$ QS
into all $2r-3$ branches of the reference tree.
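The branch-visit loop and the $q \times (2r-3)$ score table reduce to a simple pattern. The sketch below is our own simplification: \texttt{insertion\_score} stands in for the actual lazy ML evaluation, and the depth-first traversal is flattened into a plain list of branches.

```python
def place_queries(branches, queries, insertion_score):
    """Fill the q x (2r-3) insertion score table and return, for each
    QS, the branch with the best (highest log-likelihood) score."""
    table = {q: {b: insertion_score(q, b) for b in branches}
             for q in queries}
    return {q: max(scores, key=scores.get) for q, scores in table.items()}
```

In the real algorithm the outer loop runs over branches, so that the probability vectors of the reference tree are re-oriented only once per branch, with the QS insertions nested inside; the resulting score table is the same.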
In order to more rapidly compute the per-branch insertions of the QS we use an approximation that is comparable
to the Lazy Subtree Rearrangement (LSR) moves that are deployed in the standard RAxML search algorithm (see~\cite{raxml2} for details).
After inserting a QS into a branch of the RT we would normally need to re-optimize all branch lengths of the
resulting tree topology (extended by one QS) to obtain the {\em Maximum} Likelihood insertion score.
Instead, we only optimize the three branches adjacent to the insertion node of the QS (see Figure~\ref{figBranch})
before computing the likelihood of the insertion, based
on the same rationale used for the design of the LSR moves. Our experimental results justify this approximation.
In analogy to the LSR moves we also use two methods to
re-estimate the three branches adjacent to the insertion branch, a fast method that does not make use of the Newton-Raphson method
and a slow method. The {\em fast} insertion method simply splits the branch of the reference tree $b_r$ into two parts $b_{r1}$ and $b_{r2}$ by setting
$b_{r1}:=\sqrt{b_r}$, $b_{r2}:=\sqrt{b_r}$, and the branch leading to the QS $b_q:=0.9$, where $0.9$ is the default RAxML value
to initialize branch lengths.
The {\em slow} method repeatedly applies the Newton-Raphson procedure to all three branches $b_{r1}, b_{r2}, b_q$ until
no further application of the Newton-Raphson branch length optimization procedure will induce a branch length
change $>\epsilon$, where $\epsilon := 0.00001$.
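The two branch-length treatments can be written down directly. In this sketch, \texttt{newton\_step} is a placeholder for the actual Newton-Raphson update on the likelihood function; the function names are ours.

```python
import math

def fast_split(b_r, b_default=0.9):
    """Fast method: split the reference branch b_r into sqrt(b_r) twice
    and attach the QS with the RAxML default branch length (0.9)."""
    return math.sqrt(b_r), math.sqrt(b_r), b_default

def slow_optimize(branches, newton_step, eps=0.00001):
    """Slow method: re-apply the (placeholder) Newton-Raphson update to
    all three branches until no branch changes by more than eps."""
    branches = list(branches)
    while True:
        updated = [newton_step(b) for b in branches]
        if max(abs(u - b) for u, b in zip(updated, branches)) <= eps:
            return updated
        branches = updated
```

With a contracting update rule, \texttt{slow\_optimize} terminates once successive branch length changes drop below $\epsilon$, mirroring the stopping criterion described above.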
Alternatively, our algorithm can also use Maximum Parsimony to pre-score and order promising candidate insertion branches
in order to further accelerate the placement process.
The output of this procedure for evolutionary identification of QS consists of
the input RT, enhanced by assignments of the QS to the branches of the RT.
Each QS is attached to the branch that yielded the best insertion score for the specific QS.
Hence, the algorithm will return a potentially multi-furcating tree, if two or more QS are assigned to the
same branch. An example output tree for 4 RS and 3 QS is depicted in Figure~\ref{class1}.
Moreover, the EPA algorithm can also conduct a standard phylogenetic bootstrap (\cite{felsenstein1985clp}), i.e.,
repeat the evolutionary identification procedure several times under slight perturbations of the
input alignment. This makes it possible to account for uncertainty in the placement of the QS as shown in Figure~\ref{class2}.
Thus, a QS might be placed repeatedly into different branches of the reference tree with various levels of
support. For the Bootstrap replicates we introduce additional heuristics to accelerate the insertion process.
During the insertions on the original input alignment we keep track of the insertion scores for {\em all} QS
into {\em all} branches of the reference tree. For every QS we can then sort the insertion branches by their
scores and for each Bootstrap replicate only conduct insertions for a specific QS into 10\% of the best-scoring insertion branches on the original alignment.
This reduces the number of insertion scores to be computed per QS on each bootstrap replicate by 90\% and therefore approximately yields
a ten-fold speedup for the bootstrapping procedure. In a typical application scenario, one may determine the diversity of the environmental sample
for every replicate using for instance UniFrac (\cite{lozupone2005unifrac}), and then compute an average diversity over all replicates.
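The per-QS 10\% branch filter used for the bootstrap replicates amounts to a top-$k$ selection over the insertion scores recorded on the original alignment. A minimal sketch (names are ours):

```python
def replicate_candidates(branch_scores, fraction=0.10):
    """Given the insertion scores of one QS on the original alignment
    (branch -> log-likelihood), keep only the best-scoring fraction of
    insertion branches for re-evaluation in each bootstrap replicate."""
    ranked = sorted(branch_scores, key=branch_scores.get, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]
```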
Alternatively, one could directly use the insertion likelihoods of the QS on the original alignment to compute
placement uncertainty, i.e., to determine a placement area, rather than a single branch,
by applying, e.g., the SH-test (\cite{shimodaira1999multiple}).
In analogy to the heuristics used for accelerating the bootstrapping process, we can also
improve the performance for computing QS insertion scores on the original alignment.
In order to improve the run time of the {\em slow} insertion method we have developed two heuristics
that rely on rapid pre-scoring of insertion branches based on {\em fast} likelihood insertion scores or
Maximum Parsimony (MP) scores. With those pre-scoring techniques, the number of insertion positions considered for the
significantly more time consuming {\em slow} insertion process with thorough branch length
optimization can be reduced to a small fraction of promising candidate branches.
The proportion of insertion branches suggested by the rapid pre-scoring heuristics for analysis under the slow insertion method
is determined by a user defined parameter $fh$. As part of our performance evaluation we have tested the fast ML and MP heuristics
with regard to this parameter setting. Overall, these additional heuristics yield an algorithm that is significantly more accurate than, but as fast as,
BLAST.
\section*{Experimental Setup}
\subsection*{Data Sets}
To test the placement accuracy of the EPA and competing approaches, we used 8 real-world protein (AA) and DNA alignments containing between 140 and 1,604 sequences.
The experimental data span a broad range of organisms and include rbcL genes (D500), small subunit rRNA (D150, D218, D714, D855, D1604),
fungal sequences (D628) as well as protein sequences from {\em Papillomaviridae} (D140).
Table~\ref{table_datasets} provides an overview of the data sets used for evaluating the placement algorithms.
We henceforth denote these data sets as Reference Alignments (RA).
For each data set we computed the best-known ML trees, denoted as Reference Trees (RT), including BS support values (\cite{stamatakis2008rba}).
The data sets are available for download at \url{http://wwwkramer.in.tum.de/exelixis/epaData.tar.bz2}.
\subsection*{Generation of QS}
For evaluating the accuracy of our algorithm, we pruned one candidate QS at a time from the existing ML trees and then subsequently re-inserted the QS into the tree.
We only prune and re-insert a subset of candidate QS with good support values from the respective reference trees in order
to assess placement accuracy for taxa whose position in the original tree is reliable.
A candidate QS is considered to have ``good'' support, when either both (inner) branches to which the taxon is attached have a bootstrap support
$\ge 75\%$ (Fig.~\ref{figure_qs}b) or if one of the two branches has support $\ge 75\%$ and the other branch leads to a neighboring tip (Fig. \ref{figure_qs}a).
The threshold setting of $75\%$ reflects the typical empirical cutoff that is widely used to interpret phylogenetic bootstrap results (\cite{hillis1993empirical}).
For each candidate QS, a new, reduced, reference tree is derived by pruning the respective tip from the original tree.
The QS associated to that taxon is then placed into the reduced tree (Fig. \ref{figure_qs}c) with our EPA algorithm.
In our test data sets, the candidate QS are still full-length sequences.
In a typical application scenario however, the placement algorithm will have to cope with QS that are significantly
shorter than the sequences of the reference alignment, even for single gene alignments.
Hence, we carry out a systematic assessment of the placement accuracy depending on query sequence length
by artificially shortening the candidate QS via insertion of gaps.
We deploy two distinct methods to insert gaps into the candidate QS:
The first method to shorten candidate QS consists of randomly replacing existing characters by gaps.
In this way we can assess the placement of QS with varying ``virtual read lengths''.
Multiple placement runs were conducted for query sequences with a relative proportion (with respect to total alignment length)
of non-gap characters of 10\%, 20\%, 30\%, \ldots, up to the full sequence length.
Because the sequences from which the QS are derived form part of the original multiple alignment,
the remaining non-gap characters are still in alignment with the RA.
Because we calculate the amount of gaps relative to the length of the multiple alignment,
the maximum proportion of available non-gap characters is data set specific and also depends on the individual QS candidate selected.
Sequences that contain a non-gap character at every alignment site are the exception in the data sets under study.
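The random-gap shortening described above can be sketched as follows. This is our own simplification; \texttt{keep} is the target proportion of remaining characters relative to the alignment length, and the RNG seed is fixed only for reproducibility of the sketch.

```python
import random

def shorten_qs(aligned_seq, keep, rng=None):
    """Replace randomly chosen non-gap characters by gaps until at most
    keep * alignment_length non-gap characters remain."""
    rng = rng or random.Random(0)
    chars = list(aligned_seq)
    non_gap = [i for i, c in enumerate(chars) if c != '-']
    target = int(keep * len(chars))
    for i in rng.sample(non_gap, max(0, len(non_gap) - target)):
        chars[i] = '-'
    return ''.join(chars)
```

Because the gaps only replace characters in place, the surviving characters remain aligned to the RA, exactly as required.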
In addition to analyzing the accuracy of the EPA with gaps that have been inserted at random,
we also evaluated accuracy on contiguous subsequences of the candidate QS, which more closely resembles
the projected application scenario. Typically, a large number of short sequence reads generated by next generation sequencing methods
will need to be placed into a reference tree.
We have chosen to shorten the QS, such that they correspond to paired-end reads (see Fig. \ref{figure_qs}c)
of the gene in the RA (we excluded data set D140 which contains protein sequences of multiple genes in this experiment).
By using subsequences originating from pre-defined positions in the alignment, we intend to minimize the influence of the contiguous
subsequence starting position in the alignment on placement accuracy.
Therefore, we do not consider selecting, e.g., contiguous subsequences from the candidate QS with the least amount of gaps
or randomly selected subsequences that are located at an arbitrary alignment position. If contiguous subsequences
at arbitrary sites are selected, the placement accuracy assessment may be biased for example
by positional variability in 16S rRNA~(\cite{chakravorty2007detailed}) such that it will be hard to determine if
a misplacement occurs because of the algorithm or the data.
While the selection of paired-end subsequences from the beginning and end of the gene may also bias placement accuracy,
this bias is consistent over all QS.
Therefore, we have conducted our accuracy assessment on paired-end reads that have been artificially generated from the
full-length candidate QS by replacing all characters with gaps in the middle of the sequence.
The artificial paired-end reads are of lengths 2x50 and 2x100 bp. This roughly corresponds
to the read lengths generated by current high throughput sequencing technologies.
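The generation of the artificial paired-end reads reduces to gapping out the middle of each candidate QS. The sketch below is our own simplification: it operates directly on the raw aligned string and does not distinguish pre-existing gap columns when counting the end length.

```python
def paired_end_read(aligned_seq, end_len):
    """Keep end_len characters at each end of the aligned candidate QS
    and replace everything in between with gaps."""
    if 2 * end_len >= len(aligned_seq):
        return aligned_seq  # sequence already shorter than both ends
    middle = '-' * (len(aligned_seq) - 2 * end_len)
    return aligned_seq[:end_len] + middle + aligned_seq[-end_len:]
```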
\subsection*{Comparison to Sequence Based Placement}
We conduct our accuracy evaluation by comparison to a typical application scenario, in which appropriate sequence based search
tools such as BLAST (\cite{altschul1997gba}) are used to assign a QS to the most similar reference sequence.
In this setting a candidate QS will always be assigned to one of the branches of the phylogenetic tree
that leads to a reference sequence, i.e., an external branch.
In addition to BLAST, we also use a custom algorithm that is briefly outlined below.
We use a sequence similarity based algorithm as a baseline for comparisons with the EPA.
Unlike BLAST, our baseline algorithm also uses the multiple sequence alignment information from the RA to infer placements.
Extensive experiments have shown that the best accuracy is obtained when a simple variation of the edit-distance is used as similarity measure.
For the pair-wise sequence comparisons, only positions are considered, where two non-gap characters are aligned.
The distance function is the number of misaligned non-gap characters.
The branch insertion position proposed by this method will always be a branch that leads to the tip of the reference tree that has the smallest distance to the QS.
While only a moderate amount of effort was invested to optimize the implementation of this approach, it generated the best results for
the sequence comparison based methods with respect to placement of QS with random gaps (results not shown).
However, further tests also revealed that the distance function partially produced very large
deviations from the correct insertion position for QS with contiguous paired-end reads (results not shown).
In the latter case, the best results were obtained with a distance function that includes affine gap penalties.
Character mismatches and gap opening are penalized with a score of 3, while gap extension has a penalty of 1.
This gap-aware distance function yields less accurate placements than the method without gap penalties on candidate QS with random gaps.
In the following this approach (using either distance function) will be denoted as SEQuence based Nearest Neighbor (SEQ-NN) placement.
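The two SEQ-NN distance functions can be sketched directly. The affine-gap variant reflects our reading of the penalty scheme described above (mismatch and gap opening cost 3, gap extension costs 1), applied column-wise to a pair of already-aligned sequences; the function names are ours.

```python
def seqnn_distance(qs, rs):
    """Simple variant: count mismatches over columns where both aligned
    sequences carry a non-gap character."""
    return sum(1 for a, b in zip(qs, rs)
               if a != '-' and b != '-' and a != b)

def seqnn_gap_distance(qs, rs, mismatch=3, gap_open=3, gap_ext=1):
    """Gap-aware variant with affine gap penalties."""
    dist, in_gap = 0, False
    for a, b in zip(qs, rs):
        if a == '-' and b == '-':      # shared gap column: ignored
            continue
        if a == '-' or b == '-':       # gap paired with a character
            dist += gap_ext if in_gap else gap_open
            in_gap = True
        else:                          # two aligned characters
            dist += mismatch if a != b else 0
            in_gap = False
    return dist
```

The SEQ-NN placement then simply attaches the QS to the tip with the smallest distance under whichever of the two functions is in use.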
The use of contiguous paired-end sequences also allowed for the use of BLAST sequence queries.
For the BLAST tests we removed all gaps from the multiple alignment and built a BLAST database for each data set.
We also removed all gaps from the candidate QS and concatenated two ends of the artificial paired-end reads into one sequence.
Searches with those sequences were conducted against the corresponding BLAST database.
The default parameters of the BLAST program from the NCBI C Toolkit were used for character match/mismatch (scores 1 and -3) and gaps (non-affine gap penalty of -1).
The default values from the NCBI BLAST website with affine gap penalties were also tested, but produced slightly worse placement results and higher run times than the
default settings.
\subsection*{Accuracy Measures}
To quantify placement accuracy, we use two distance measures based on the topology and branch lengths of the original ML tree.
In all cases we consider an original branch and an insertion branch.
The original branch is the branch from which the candidate QS was originally pruned in the ML tree, and into which it should ideally be re-inserted.
The insertion branch is the branch computed by the respective placement algorithm.
To quantify the distance between the `correct' original branch and the actual insertion branch we use the following two distance measures:
The `Node Distance' (ND) is the unweighted path length in the original tree between the two branches.
This corresponds to the number of nodes located on the path that connects the two branches (Fig. \ref{fig_distances}a) and represents an absolute distance measure.
The second measure is the sum of branch lengths on the path connecting the two branches.
This measure also includes 50\% of the branch length of the insertion-branch and 50\% of the length of the original branch (Fig. \ref{fig_distances}a).
For comparability between different trees and in order to obtain a relative measure, we normalize the branch path length by dividing it by
the maximum tree diameter (Fig. \ref{fig_distances}b).
The maximum diameter is the branch path of maximum length between two taxa in the RT.
This distance measure is henceforth denoted as `Branch Distance Normalized' (BDN\%).
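The BDN measure is a simple path computation; a minimal sketch (names and the explicit parameter split are ours):

```python
def bdn_percent(path_lengths, b_orig, b_ins, max_diameter):
    """Branch Distance Normalized (BDN%): branch-length path between the
    original and the insertion branch, plus half of each of the two end
    branches, normalized by the maximum tree diameter."""
    path = sum(path_lengths) + 0.5 * b_orig + 0.5 * b_ins
    return 100.0 * path / max_diameter
```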
When the EPA is used with bootstrapping, more than one insertion branch can be proposed for each candidate QS.
For a bootstrap run with $N_{bs}$ replicates, the output of the EPA for each QS contains a set of $N$ insertion positions ($i=1...N$, where
$N \leq N_{bs}$) with bootstrap support values $S_i$.
Using this information we derive a set of ND or BDN distances $D_i$ to the original branch for each alternative placement $i$.
We use the $D_i$ to represent the bootstrap placement information as a single quantity for each QS: the Weighted Root Mean Squared Distance (WRMSD),
$D_{wrms}$:
\begin{small}
\begin{equation}\label{formula_rmsd}
D_{wrms} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (\frac{S_i}{N_{bs}} D_i)^2}
\end{equation}
\end{small}
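Equation~\ref{formula_rmsd} translates directly into code; the sketch below uses our own names for the inputs ($D_i$ as \texttt{distances}, $S_i$ as \texttt{supports}).

```python
import math

def wrmsd(distances, supports, n_bs):
    """Weighted root mean squared distance D_wrms over the N alternative
    bootstrap placements of a single QS, with ND or BDN distances D_i
    and bootstrap support values S_i out of n_bs replicates."""
    n = len(distances)
    return math.sqrt(sum((s / n_bs * d) ** 2
                         for d, s in zip(distances, supports)) / n)
```

A QS placed into a single branch by all replicates ($N=1$, $S_1=N_{bs}$) simply recovers the plain distance $D_1$.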
\section*{Results and Discussion}
\subsection*{Placement Accuracy for Random Gap QS}
To test the accuracy of the EPA on random-gap sequences, placement runs were carried out with bootstrapping (using 100 replicates) and without
bootstrapping.
For the placements from the bootstrap runs the WRMS distance from the real (original) insertion position was calculated as previously described.
Placements were carried out on the 8 data sets for varying virtual read lengths.
Figure \ref{gappy_all} gives a detailed plot of the accuracy depending on the proportion of gaps, averaged over all candidate QS from all data sets
(respective plots for the individual data sets can be found in the supplementary material).
In general, SEQ-NN produces less accurate placements than the EPA.
In cases where the QS contain less than 20\% non-gap characters (more than 80\% gaps), the EPA with bootstrapping produces less accurate placements
than SEQ-NN. A possible cause for this effect is discussed below.
On the original alignment (without bootstrapping) the EPA placements are consistently approximately 1.5 times more accurate
than SEQ-NN placements.
The placement accuracy improvement is even higher for the `hard' QS subsets that only comprise inner QS (Fig. \ref{gappy_hard}).
For inner QS, the conceptual disadvantage of SEQ-NN becomes prevalent because the best possible placement
that can be inferred will at least be one node away from the original insertion position (see also Figure \ref{figure_qs}).
The difference between the EPA and SEQ-NN placements is about 3 nodes on average. For inner QS the EPA algorithm
achieves a three-fold improvement in placement accuracy.
It is worth noting that, on average, the EPA correctly places almost all QS when they contain less than 50\% gaps, even on this 'hard' subset of inner QS.
This means that, for the single-gene case, QS with virtual read lengths of about 400--500 base pairs are placed into the branch they
were pruned from. This virtual read length range, which allows for sufficiently accurate placement of QS,
is in the same range as the length of reads from 454 sequencing runs.
In addition, there is a trend for 454 reads to become longer as the technology matures.
The placements on the 'hard' subset are especially encouraging as they show that, in contrast to SEQ-NN,
the good overall results of the EPA are not merely caused by the presence of a tip as direct neighbor with a high sequence similarity.
The results on this subset are indicative of the performance on data sets with a sparse or inadequate taxon sampling.
Since it is hard to determine an adequate taxon sampling a priori for an unknown microbial community, our approach
can actually be used to appropriately adapt the taxon sampling.
The comparison of the accuracy graphs for the EPA with and without bootstrapping helps to understand the impact of using bootstrapping for evolutionary placement.
One of the consequences of the phylogenetic bootstrapping procedure is that, for each bootstrap replicate, only a fraction (fewer distinct alignment sites)
of the input data is used.
The probability for each of the $n$ alignment columns to form part of a bootstrap replicate is $1-(1-\frac{1}{n})^n \approx 1 - e^{-1} \approx 0.632$.
Thus, only 63.2\% of the available characters in each QS will be used on average.
In practice this has a similar effect as using shorter QS with less signal for computing the insertion position, and partly explains why the
EPA placements with bootstrapping enabled are generally worse than those without bootstrapping.
For QS with a very low proportion of non-gap characters, the aforementioned property of the bootstrap method becomes
particularly noticeable and results in an inferior placement accuracy compared to the simple approach used in SEQ-NN.
In accordance with this, the accuracy of bootstrap placements improves significantly with increasing QS lengths for which it
clearly outperforms SEQ-NN.
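The column-inclusion probability used in this argument is easy to verify numerically; a small sketch (the function name is ours):

```python
import math

def inclusion_prob(n):
    """Probability that a given column of an n-column alignment
    appears at least once in a bootstrap replicate of n columns
    drawn with replacement."""
    return 1.0 - (1.0 - 1.0 / n) ** n

for n in (100, 1000, 10000):
    print(n, round(inclusion_prob(n), 4))
# For large n this approaches 1 - exp(-1), i.e. about 0.6321.
print(round(1.0 - math.exp(-1.0), 4))
```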
\subsection*{Placement Accuracy for Short Contiguous Sequence Reads}
Table \ref{table_pe_100} provides the overall results of the experiments with virtual paired-end reads of length 2x100bp
(the results for 2x50bp reads are provided in the supplementary material).
EPA accuracy is compared against SEQ-NN with the aforementioned appropriately adapted distance function as well as against BLAST.
We specifically report results for two sequence-based methods to assess to which extent the exclusion of gap positions from the
original multiple alignment has a negative impact on the results derived from BLAST hits. Conversely, we also assess to which extent
the availability of the additional information provided by the multiple sequence alignment benefits the alignment-based SEQ-NN approach.
The placements returned by BLAST are based on a local pair-wise alignment of sequences.
In our case this means that only one half of the paired-end reads (100bp) is actually used for the placement.
As a consequence, the simple approach used in SEQ-NN (with gap penalties) consistently places the QS closer to their
original position than BLAST.
Nonetheless, the accuracy of the EPA placements is 1.58--3.55 times better than that of SEQ-NN,
which also uses the information contained in the multiple sequence alignment.
Figure \ref{pe_hist_nd_855_100} provides a histogram for the distribution of individual placements computed by the EPA and BLAST for 2x100bp
paired-end reads on data set D855.
Respective histograms for all data sets on 2x100bp and 2x50bp reads are available in the supplementary material.
The histograms show that placements obtained via the EPA are closer to the reference position on average
and yield smaller maximum placement errors than BLAST.
Table \ref{table_pe_100} highlights that the phylogeny-aware EPA consistently outperforms sequence comparison based methods
and that placements are approximately twice as accurate on average.
Generally, the placement accuracy for contiguous short QS is
consistent with the results obtained for candidate QS with random gaps.
\subsection*{Impact of Placement Algorithms and Substitution Models on Accuracy}
The preceding computational experiments have been carried out using the most {\em thorough} version of the EPA
under the GTR+$\Gamma$ and WAG+$\Gamma$ (AA) models.
In addition, we used the slow branch length optimization option for every possible insertion branch on the original alignment.
As previously mentioned, we also devised a {\em fast} version of the EPA where the Newton-Raphson based branch length optimization
is deactivated for QS insertions. These heuristics can speed up the EPA by one order of magnitude when a large number of QS are being
placed into a reference tree.
An additional speedup of a factor of 3 to 4 can be achieved by using the GTR+CAT or PROT+CAT approximations (\cite{stamatakis2006cat})
of rate heterogeneity.
Figure \ref{gappy_methods_all} shows the impact of the EPA placement heuristics and rate heterogeneity model
on the accuracy for all QS over all data sets (analogous plots for the individual data sets are available in the supplementary material).
For the thorough insertion method there is practically no difference in placement accuracy between the $\Gamma$ model and CAT approximation.
At the same time we obtained three to four-fold run time improvements (a detailed analysis of execution times is provided in the following Section),
which is in accordance with previous results on the CAT approximation (\cite{stamatak:IPDPS}).
For the fast insertion method, there is a noticeable decrease in placement accuracy for the CAT as well as the $\Gamma$ models.
In particular, the {\em slow} QS insertion method performs better for long QS that contain more than 70\% non-gap characters.
However, the differences in placement accuracy between the distinct EPA models and heuristics
are very small compared to the much larger errors returned by the sequence comparison based approaches (see Fig. \ref{gappy_all} and \ref{gappy_hard}).
\subsection*{Run Time Analysis and Heuristics for Slow Insertions}
As shown in the previous Section the loss of accuracy induced by the {\em fast} insertion method is minimal.
Nonetheless, a slight accuracy improvement can be attained with the {\em slow} insertion method.
Using the rapid pre-scoring heuristics we have already described, it is possible to significantly accelerate the {\em slow}
insertion algorithm with little to no impact on placement accuracy.
Here we evaluate the run time and accuracy trade offs associated with those heuristics.
We also provide run time measurements for the standard insertion methods and BLAST.
In contrast to the previous accuracy assessments, we do not test the placement of one QS at a time into an existing RT from which the QS has been previously pruned.
Instead, we randomly split the alignments into two subsets that each comprise 50\% of the taxa.
The first subset is used to infer a best-known ML tree with RAxML
into which the remaining taxa (of the second subset) are placed via the EPA.
For run time measurements this experimental setting better corresponds to a typical application scenario of the EPA,
where a large number of QS is placed into a reference tree.
In contrast to the previous experiments, we can not use the position in the RT from which the candidate QS has been pruned as a reference for accuracy measurement.
Instead, we compare the placements obtained by the various heuristics to the placements inferred by the slowest and most thorough EPA version under
the GTR+$\Gamma$ model. This {\em slow} EPA version has been shown to be the most accurate placement algorithm
in the previous experiments.
Here, we assume the {\em slow} EPA placements to be the true placements.
In this test we reduce the length of the QS to 50\% non-gap characters.
The non-gap characters are a contiguous sequence fragment that starts at the beginning of the respective sequence, i.e., the QS
represent roughly the first half of the gene.
All performance tests were carried out on a typical current desktop computer with an Intel Core2 Quad CPU Q9550 running at 2.83GHz with 8GB
of main memory and Ubuntu Linux 8.10.
All programs were compiled as optimized 64bit binaries with the gcc compiler (version 4.3.2), and only one core of the CPU was used.
The EPA uses SSE3 instructions to accelerate the likelihood computations (introduced with RAxML version 7.2.0).
Running a BLAST search of all QS against a database comprising the remaining sequences takes 216 seconds for the largest data set in this test (D1604).
We use BLAST with the default settings without affine gap penalties.
As already mentioned, affine gap penalties did not improve the accuracy, but resulted in much higher run times, therefore we kept them disabled.
On D1604 the run time for the {\em slow} insertion method is 7409 seconds under GTR+$\Gamma$ and 1846 seconds under GTR+CAT.
With fast insertions the run time amounts to 251 seconds under GTR+$\Gamma$ and only 172 seconds under GTR+CAT.
Thus, the EPA with the {\em fast} insertion method under CAT is faster than a simple BLAST search.
For the pre-scoring heuristics, the run times depend on the parameter $fh$ that determines the fraction of pre-scored insertion branches that will
subsequently be scored using the {\em slow} insertion method.
In Figure \ref{plot_heuristics}a the run times of the different pre-scoring heuristics as well as fast insertions without heuristics relative to
BLAST are shown for data set D1604.
The behavior of the heuristics as a function of the parameter $fh$ is as expected:
they produce a constant initial overhead and scale linearly with the fraction of branches selected for {\em slow} insertions.
The initial overhead is smaller for the MP heuristics, while the run time of the thorough insertion phase only depends on $fh$.
Therefore, the run time graphs of the ML- and MP-based heuristics are parallel in the plots.
Figure \ref{plot_heuristics}b shows the accuracy on the largest data set D1604 (placement of 802 QS into a reference tree with 802 RS).
The fraction of insertion branches considered for the slow insertion phase is controlled by the parameter $fh$.
In the plot, the accuracy of the heuristics is shown for values of $fh = 1/n$ with $n \in \{4, 8, 16, 32, 64, 128, 256\}$.
The results suggest that on this data set it is sufficient to more thoroughly analyze only 50 out of 1601 ($fh = \frac{1}{32}$)
candidate insertion branches proposed by the heuristics to gain the best possible accuracy
(even for $fh = \frac{1}{64}$ there is only a very small deviation from the reference results).
Another important result is that the MP heuristics produce placements that are as accurate as those of the ML heuristics
for all but the smallest values of $fh$.
This is particularly promising since the MP implementation in RAxML can be significantly accelerated by SSE3 vectorization and other
low-level code optimizations.
We conclude that the MP heuristics with a parameter setting of $fh:=1/32$ (using the $\Gamma$ model for {\em slow} insertions)
is sufficient for achieving placement accuracy comparable to the reference placement, but with computational requirements
(290 seconds) that are in the same order of magnitude as a simple and significantly less accurate BLAST search.
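The two-phase pre-scoring idea described above can be sketched as follows; the function name and scoring convention are illustrative, not RAxML's actual interface:

```python
def select_candidates(prescores, fh):
    """Sketch of the two-phase heuristic: rapidly pre-score all
    candidate insertion branches (here: lower pre-score = better),
    then keep only a fraction fh of them for the expensive slow
    insertion step.  'prescores' maps branch id -> MP or fast-ML
    score; names are ours for illustration."""
    k = max(1, round(len(prescores) * fh))
    ranked = sorted(prescores, key=prescores.get)
    return ranked[:k]

# 1601 candidate branches, fh = 1/32 -> ~50 branches survive
# pre-scoring, as in the experiment described above.
prescores = {f"b{i}": float(i) for i in range(1601)}
kept = select_candidates(prescores, 1.0 / 32)
print(len(kept))  # 50
```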
Even lower run times (113 seconds) can be achieved by using the CAT model for {\em slow} insertions, at the expense
of a slight loss in accuracy.
Based on the results in the previous Section, we expect the accuracy difference
between the CAT approximation and the $\Gamma$ model to be negligible in a real world scenario.
In the execution time tests the differences in accuracy between the {\em fast} and {\em slow} insertion methods as well as between the $\Gamma$ and CAT
models are generally larger than in the previous Sections.
This is not surprising, given the setup of this experiment that was not designed to measure the insertion accuracy relative to
an assumed correct position, but the deviation between our best, yet slowest, method and less accurate, accelerated methods.
Here, we do not constrain the experiment to QS with high support values in the reference tree, but choose QS at random,
which may introduce a certain degree of imprecision into this evaluation.
In addition, the RT (comprising 50\% of the taxa in the original RA) is smaller than in the previous evaluations and thus more sparsely sampled.
Nonetheless, the deviation between the {\em fast} and {\em slow} EPA versions amounts to less than half a node on average
and the general finding that {\em slow} insertions under CAT are more accurate than {\em fast} insertions
under $\Gamma$ is consistent with previous experiments.
\section*{Conclusion}
We have presented an accurate and scalable approach for phylogeny-aware sequence comparison
and compared its accuracy and run times to alignment-based as well as alignment-free
sequence comparison based methods. A phylogeny-aware approach has methodological advantages over standard
sequence based approaches and the Evolutionary Placement Algorithm is freely available for
download as open source code.
We demonstrate that our approach, which can, e.g.,
be used for analyses of microbial communities, is at least twice as accurate as standard techniques.
More importantly, we demonstrate that achieving significantly better accuracy does not require longer inference times
and that our approach is as fast as a simple BLAST based search when using additional heuristics.
The algorithm is also relatively straight-forward to parallelize (the parallelization of the EPA will be covered
elsewhere) by applying a multi-grain parallelization technique. On a multi-core system with 32 cores and 64GB
of main memory we were able to classify 100,000 QS in parallel into a reference tree with 4,000 taxa within 1.5 hours.
A major challenge that remains to be solved consists in aligning the QS to a given reference alignment.
Throughout this paper we have assumed that such an alignment was given. Ideally, one would like to simultaneously
place and align the QS to the respective insertion branch. We have already implemented a simplistic version
of such an alignment method under ML in the EPA. Our alignment procedure still lacks an appropriate indel model,
since gaps are treated as undetermined characters in most standard ML implementations. Nonetheless, our method works
surprisingly well on QS with approximately 50\% gaps and is more accurate than BLAST with an average placement distance of one node
(as in Fig. \ref{plot_heuristics}b), but less accurate and significantly slower than the alignment-based EPA insertions.
Therefore, future work will focus on the development of rapid methods for simultaneous QS placement and alignment.
\newpage
\section*{Acknowledgments}
The authors would like to thank Rob Knight, Steven Kembel, Micah Hamady, Christian von Mehring and
Manuel Stark for useful discussions on algorithm design and for providing test data sets.
\bibliographystyle{sysbio}
\section{\label{sec:Int}Introduction}
Mercury telluride based heterostructures belong to the most widely used materials for sensitive and fast infrared/terahertz (IR/THz) detectors~\cite{Capper1997,Norton2002,Henini2002,Rogalski2005,Dvoretsky2010,Downs2013,Rogalski2018,Vanamala2019} and are among the most promising materials for realizing high-quality topological insulators (TIs)~\cite{Moore2010,Hasan2010,Zhang2011}. The reason for this is the wide tunability of the energy spectrum of these materials, including the possibility of realizing an inverted band structure in HgTe. The latter is a crucial condition for the formation of helical edge and surface states~\cite{Moore2010,Hasan2010,Zhang2011}. This is also supported by the well-developed technological processes originally motivated by the fabrication of detectors, which have been adapted for the growth of high-quality TI materials. This includes the possibility to obtain high carrier mobility while, at the same time, contributions from three-dimensional carriers in the bulk can be largely suppressed~\cite{Becker2003,Dvoretsky2010,Koenig2007,Roth2009,Dantscher2015,Dantscher2017}. Thus, HgTe systems allow one to combine the excellent performance of HgTe-based IR/THz sensors with the advantages of topological systems, in particular obtaining photon-helicity-sensitive photoresponses~\cite{Wittmann2010,Dantscher2017}. In the last decade, it has been demonstrated that ratchet effects in two-dimensional electron systems with lateral superlattices can be used for efficient detection of THz radiation and may even provide new functionalities, such as all-electric detection of the radiation Stokes parameters operating up to room temperature~\cite{Danilov2009}.
The ratchet effect, demonstrating a strong photoresponse and considered a candidate for efficient detection of THz radiation, has been observed and investigated in various 2D semiconductor structures with parabolic energy dispersion~\cite{Olbrich2009,Olbrich2011,Ivchenko2011,Popov2011,Kannan2011,Otsuji2013,Boubanga2014,Faltermeier2015,Rupper2018,Yu2018,Notario2020,Sai2021} and in monolayer graphene~\cite{Olbrich2016}, but has so far not been detected in HgTe-based systems. Such a study, however, allows one to combine the advantages of the ratchet effect with the superior properties of HgTe-based materials for infrared/terahertz radiation detection, as well as to explore the physical properties of HgTe-based QWs.
Here we report on the observation and study of polarization-dependent ratchet effects in HgTe-based quantum wells of different thicknesses. The effect was studied in QWs with superimposed dual-grating-gate (DGG) lateral asymmetric superlattices excited by normally incident terahertz laser radiation. The magnitude and direction of the ratchet current are shown to depend both on the polarization state of the radiation and on the lateral asymmetry determined by the gate voltages applied to the two subgates. By varying the radiation polarization we observed that the THz ratchet effect has three current contributions: a polarization-insensitive, a linear, and a circular ratchet one. While the second can be excited by linearly polarized radiation and is sensitive to the relative orientation of the electric field vector and the source-drain direction, the third is driven by the radiation helicity and has opposite signs for right- and left-handed circularly polarized radiation. Measurements of the ratchet currents in HgTe QWs of different thicknesses allowed us to study ratchet effects in HgTe-based QWs featuring different band structure properties. Notably, the highest helicity-driven ratchet current was detected in a system with Dirac fermions. Studying the gate voltage dependence of the ratchet effect, we unexpectedly observed the emergence of strong sign-alternating oscillations of the photoresponse at high negative gate voltages corresponding to $p$-type conductance. Our theoretical analysis demonstrated that the oscillations are caused by shifting the Fermi energy across the well-separated multiple valence subbands, which in turn results in oscillations of the density of states. The developed theoretical model, which takes into account the Seebeck ratchet mechanism and considers the fine details of the energy dispersion, describes the experimental results well.
\begin{table*}[t]
\centering
\begin{tabular}{ccccccccc}
\hline
Devices & $d_{QW}$ (nm) & Barrier composition & $L\times W$ ($\mu m^2$) & $d_1$ ($\mu m$) & $d_2$ ($\mu m$) & $a_1$ ($\mu m$) & $a_2$ ($\mu m$) & $d$ ($\mu m$)\\
\hline
\hline
D1 &8.0 & Hg$_{0.23}$Cd$_{0.77}$Te & 72$\times$31 & 0.75 & 2.25 & 1.0 & 3.5 & 7.5\\
D2 &7.0 & Hg$_{0.28}$Cd$_{0.72}$Te & 85$\times$20 & 0.5 & 1.5 & 0.5 & 2.5 & 5.0\\
D3 &6.3 & Hg$_{0.42}$Cd$_{0.58}$Te & 50$\times$19 & 0.5 & 1.5 & 0.5 & 2.5 & 5.0\\
D4 &7.0 & Hg$_{0.28}$Cd$_{0.72}$Te & 75$\times$15 & 1.25 & 3.0 & 0.75 & 3.0 & 8.0\\
D5 &8.0 & Hg$_{0.23}$Cd$_{0.77}$Te & 75$\times$15 & 1.25 & 3.0 & 0.75 & 3.0 & 8.0\\
D6 &8.0 & Hg$_{0.23}$Cd$_{0.77}$Te & 50$\times$7 & 0.75 & 2.0 & 0.5 & 2.0 & 5.25\\
D7 &7.0 & Hg$_{0.28}$Cd$_{0.72}$Te & 72$\times$31 & 0.75 & 2.25 & 1.0 & 3.5 & 7.5\\
\hline
\end{tabular}
\caption{\label{tab:1} Basic parameters of the structures investigated.}
\end{table*}
\begin{figure*}[ht]
\includegraphics [width=0.65\linewidth, keepaspectratio] {Fig_1.jpg}
\caption{\label{Fig:1} (a)~Picture of the interdigitated gate electrodes deposited on the HgTe/HgCdTe heterostructure. All devices have four terminals: source(S), two gates(G1, G2), and drain (D). (b)~A cross-section of the asymmetric finger gate structure consisting of stripes of widths $d_1$ and $d_2$ and spacing $a_1$ and $a_2$ forming a superlattice with periodicity $d = d_1 + a_1 + d_2 + a_2$. (c)~Sketch of the HgTe/HgCdTe heterostructure under incident THz radiation indicating also the QW layer sequence.}
\end{figure*}
\subsection{Experimental technique}
Our devices are made of HgTe/CdHgTe quantum well (QW) structures grown by molecular beam epitaxy on (013)-oriented GaAs substrates. The heterostructure parameters, such as the QW thickness ($d_{QW}$) and barrier composition, are presented in Table~\ref{tab:1}. The asymmetric interdigitated DGG structures were fabricated by electron beam lithography (EBL). Wet Br-based etching was used to define the channel geometries, with channel lengths ranging from 50 to 75~$\mu$m and channel widths between 7 and 31~$\mu$m. Plasma-enhanced chemical vapor deposition was used to deposit a 140~nm \rm{Si(ON)} insulation layer separating the HgTe/CdHgTe heterostructure from the Ti/Au finger gates. A full description of the growth, characterization, and device preparation can be found in Refs.~\cite{Majewicz2014,Majewicz2019}. Figures~\ref{Fig:1}(a) and~\ref{Fig:1}(b) show an optical image of one of the DGG structures and a schematic illustration of the cross-section of the device. The grating consists of periodically repeating asymmetric supercells in which the stripes are separated by spacings $a_1$ and $a_2$. The asymmetry of the DGG structure, stemming from the inequality of the stripe widths ($d_1 < d_2$) and stripe separations ($a_1 < a_2$), is crucial for the generation of the ratchet effect. Below we denote the direction perpendicular to the metal fingers as the $x$-direction.
All thin stripes are connected by a metal film forming the gate G1, and all interconnected wide stripes form the gate G2. This allows us to create an asymmetric periodic electrostatic potential acting on the 2DEG by applying different voltages to the subgates G1 ($V_{\rm G1}$) and G2 ($V_{\rm G2}$). Table~\ref{tab:1} presents the geometrical parameters of the DGG and the period of the superlattice, $d = d_1 + a_1 + d_2 + a_2$. Ohmic contacts to source, drain, and gates were fabricated by In soldering. To characterize the structures we performed transport and magnetotransport measurements. To measure the electrical resistance $R_{SD}$ we used SR830 lock-in amplifiers with a low modulation frequency (13~Hz) and a current amplitude of $0.1~\upmu$A. All transport studies were carried out at $T=4.2$~K. The source-drain resistance as a function of the gate voltage exhibits a clear maximum at negative values of the gate voltages $V_{\rm G1}$ or $V_{\rm G2}$. This demonstrates that, besides the controllable modification of the lateral asymmetry, sweeping the gate voltages allows us to change the type of conductivity beneath the gates. The variation of the carrier density of the 2DEG in the HgTe QWs from $n$- to $p$-type occurs for a Fermi energy position in the insulating band gap. In this case, the resistance $R_{SD}$ shows a maximum which corresponds to a change of the sign of the Hall coefficient and identifies the charge neutrality point (CNP). For the investigated structures with 6.3, 7.0, and 8.0~nm QW thickness at zero gate voltage, the carrier concentrations are in the range of $(1.3 \div 7.4) \times 10^{11}$~cm$^{-2}$, and the mobilities are about $(5.0 \div 8.5) \times 10^4$~cm$^2$/Vs.
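As a quick consistency check of Table~\ref{tab:1} (ours, not part of the original analysis), the period of every device should equal the sum of its stripe widths and spacings:

```python
# Verify d = d1 + a1 + d2 + a2 for every device in Table 1.
# Tuples: (d1, a1, d2, a2, d), all in micrometers.
devices = {
    "D1": (0.75, 1.0, 2.25, 3.5, 7.5),
    "D2": (0.5, 0.5, 1.5, 2.5, 5.0),
    "D3": (0.5, 0.5, 1.5, 2.5, 5.0),
    "D4": (1.25, 0.75, 3.0, 3.0, 8.0),
    "D5": (1.25, 0.75, 3.0, 3.0, 8.0),
    "D6": (0.75, 0.5, 2.0, 2.0, 5.25),
    "D7": (0.75, 1.0, 2.25, 3.5, 7.5),
}
for name, (d1, a1, d2, a2, d) in devices.items():
    assert abs((d1 + a1 + d2 + a2) - d) < 1e-9, name
print("all superlattice periods are consistent")
```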
To excite the ratchet effects we applied normally incident terahertz radiation from a continuous-wave (cw) optically pumped molecular laser, see Fig.~\ref{Fig:1}(c). The laser operated at a frequency of $f= 2.54$\,THz (wavelength $\lambda = 118~\upmu$m, photon energy $\hbar\omega = 10.5$~meV). The samples were placed in an optical cryostat with $z$-cut quartz and TPX (4-methyl-1-pentene) windows. In order to block visible and near-infrared radiation, the windows were additionally covered with a black polyethylene film. The laser beam was focused using off-axis parabolic mirrors. The radiation power at the sample position, $P$, was about 20~mW and was monitored during the measurements by a pyroelectric detector. The beam had an almost Gaussian shape with a spot diameter of around 1.5~mm, which is much larger than the area of the DGG structure. The spatial distribution of the incoming THz beam was monitored by a pyroelectric camera. More details on the system can be found in Refs.~\cite{Dantscher2015,Dantscher2017,Shalygin2007,Plank2016}. The radiation was modulated by a chopper at a frequency of 35~Hz, and a standard lock-in technique was used to detect the photoresponse, measured as the voltage drop $V_{ph}$ across the sample resistor $R_S$. All measurements were performed at liquid-helium temperature, $T = 4.2$~K.
To explore the polarization dependence of the THz radiation induced signal we controllably varied the radiation Stokes parameters. The polarization was modified by placing a crystal quartz $\lambda/4$-plate in front of the sample, which was rotated by a phase angle $\varphi$ between the $c$-axis of the plate and the electric field vector of the laser radiation. This allowed us to vary the degree of circular polarization $P_{\rm circ} = \sin 2\varphi$, which corresponds to the Stokes parameter $s_3$, as well as the degrees of linear polarization (the corresponding Stokes parameters are $s_1/s_0 = \sin 4\varphi /2$ and $s_2/s_0 = (1+\cos 4\varphi)/2$)~\cite{Belkov2005}. Additionally, we performed measurements in which the vector of the electric field $\bm E$ was rotated by a $\lambda$/2-plate. In this case, the azimuth angle $\alpha$ between $\bm E$ and the $x$-direction determines the orientation of the incident radiation with respect to the gate stripes, see Fig.~\ref{Fig:1}(c), and the Stokes parameters are described by $s_1/s_0 = \sin 2\alpha$ and $s_2/s_0 = \cos 2\alpha$~\cite{Belkov2005}.
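The quoted $\lambda/4$-plate polarization dependences can be summarized in a few lines; this is a direct transcription of the formulas above for illustration, not analysis code from the experiment:

```python
import math

def stokes_quarter_wave(phi_deg):
    """Normalized Stokes parameters (s1/s0, s2/s0, s3/s0) behind a
    lambda/4 plate rotated by the phase angle phi, as given in the
    text."""
    p = math.radians(phi_deg)
    s1 = math.sin(4 * p) / 2
    s2 = (1 + math.cos(4 * p)) / 2
    s3 = math.sin(2 * p)          # degree of circular polarization
    return s1, s2, s3

# phi = 45 deg gives pure right-handed circular light (s3 = +1,
# s1 = s2 = 0); phi = 135 deg gives left-handed light (s3 = -1).
print([round(v, 6) for v in stokes_quarter_wave(45)])
print([round(v, 6) for v in stokes_quarter_wave(135)])
```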
\begin{figure}[h]
\includegraphics [width=\columnwidth, keepaspectratio] {Fig_2.jpg}
\caption{\label{Fig:4} Helicity dependence of the photovoltage of sample D7 normalized to power under normal incident radiation with frequency $f=2.54$~THz. Measurements are carried out at $T=4.2$~K and zero gate voltages. Arrows correspond to right-handed ($\sigma^+$) and left-handed~($\sigma^-$) circular polarizations. The orange line is a fit according to Eq.~\eqref{Helicity}. The inset indicates the circular photoresponse as a function of the QW thickness for different devices (D1~-~D7).}
\end{figure}
\begin{figure}[h]
\includegraphics [width=1.0\columnwidth, keepaspectratio] {Fig_3.jpg}
\caption{\label{Fig:8} Photovoltage normalized to power as a function of the gate voltage for device~D6. Red and blue lines correspond to gate voltages applied to electrodes with thin ($G1$) and thick ($G2$) fingers, respectively. Measurements are performed under normal incident right-handed circularly polarized radiation ($\sigma^+$) with frequency $f=2.54$~THz and $T=4.2$~K. The inset shows the resistance $R_{SD}$ vs gate voltage applied to thin stripes $V_{\rm G1}$ ($V_{\rm G2}=0$, red symbols) and to thick stripes $V_{\rm G2}$ ($V_{\rm G1}=0$,~blue symbols).}
\end{figure}
\begin{figure}[h]
\includegraphics [width=\columnwidth, keepaspectratio] {Fig_4.jpg}
\caption{\label{Fig:6} (a-c)~Normalized photovoltages in devices D3, D2, and D5 versus gate voltage $V_{\rm G1}$. All data are obtained at normal incident of linearly polarized radiation frequency $f = 2.54$~THz at $\alpha_{max}$, and $T = 4.2$~K. Here $\alpha_{max}$ is the azimuth angle at which the signal achieves its maximum. The solid lines are smoothed curves. Insets show results of transport measurements in HgTe QW devices D3, D2, and D5. The resistances were measured between source and drain as a function of the gate voltage $V_{\rm G1}$.
}
\end{figure}
\subsection{Results}
Figure~\ref{Fig:4} shows the dependence of the photosignal excited in sample D7 (QW thickness $d_{QW}=7$~nm) on the phase angle $\varphi$. The variation of the signal $V_{ph}$ with the polarization is characteristic of ratchet effects and can be well described by the function~\cite{Olbrich2016,Ivchenko2011}
\begin{equation}
\label{Helicity}
V_{ph}(\varphi) = V_0 + V_\mathrm{L1}\frac{\sin 4\varphi}{2} + V_\mathrm{L2}\frac{1 + \cos 4\varphi}{2} + V_\mathrm{circ} \sin 2\varphi \,\,,
\end{equation}
where the fitting parameters $V_0$, $V_\mathrm{L1}$, $V_\mathrm{L2}$, and $V_\mathrm{circ}$ describe the polarization-independent ratchet contribution ($V_0$, also called the Seebeck ratchet), the linear ratchet effect~($V_\mathrm{L1}, V_\mathrm{L2}$), and the circular ratchet effect~($V_\mathrm{circ}$), respectively. This characteristic polarization dependence has been detected for all samples and gives a first hint that we are dealing with a ratchet effect. Note that the above equation reflects the linear combination of the radiation Stokes parameters $s_0, s_1, s_2$, and $s_3$ with different weights. For right- ($\sigma^+$) and left-handed ($\sigma^-$) circularly polarized radiation the Stokes parameter $s_3$ changes its sign, whereas $s_1$ and $s_2$ vanish and the first, polarization-independent term in Eq.~(\ref{Helicity}) remains constant. Consequently, the circular photoresponse can be calculated as the odd part of the voltage signal with respect to the radiation helicity
\begin{equation}
\label{circ}
V_\mathrm{circ} = \frac{V^\mathrm{\sigma^+} - V^{\sigma^-}}{2} \,\,,
\end{equation}
where $V^\mathrm{\sigma^+}$ is the measured photosignal at right-handed circular polarization ($\varphi =45^{\circ}$), and $V^\mathrm{\sigma^-}$ corresponds to the measured photosignal at left-handed circular polarization ($\varphi =135^{\circ}$).
The inset in Fig.~\ref{Fig:4} compares the circular contributions for the studied devices having different quantum well thickness. One can see that the most pronounced feature of the ratchet effect is observed in device D3 ($d_{QW}=6.3$~nm) with a quantum well thickness close to the critical one, i.e. hosting two-dimensional massless fermions.
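As a cross-check of this decomposition, the four fit parameters of Eq.~(\ref{Helicity}) can be recovered in closed form from photovoltages sampled at four phase angles. The following Python sketch uses made-up parameter values and is not the fitting code used for the experiments:

```python
import math

def model(phi_deg, V0, VL1, VL2, Vc):
    """Eq. (Helicity): the fitted photovoltage vs. phase angle phi."""
    p = math.radians(phi_deg)
    return (V0 + VL1 * math.sin(4 * p) / 2
            + VL2 * (1 + math.cos(4 * p)) / 2 + Vc * math.sin(2 * p))

def extract(signal):
    """Recover (V0, VL1, VL2, Vcirc) from measurements at phi = 0,
    22.5, 45, and 135 degrees (signal: dict angle_deg -> voltage)."""
    Vc = (signal[45] - signal[135]) / 2      # odd part in helicity
    V0 = (signal[45] + signal[135]) / 2      # even part; s1 = s2 = 0
    VL2 = signal[0] - V0                     # phi = 0: V = V0 + VL2
    VL1 = 2 * (signal[22.5] - V0 - 0.5 * VL2
               - Vc * math.sin(math.radians(45)))
    return V0, VL1, VL2, Vc

true = (0.3, -0.1, 0.25, 0.8)                # illustrative values
data = {a: model(a, *true) for a in (0, 22.5, 45, 135)}
print(tuple(round(v, 6) for v in extract(data)))
```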
For linearly polarized radiation the last contribution in Eq.~(\ref{Helicity}) vanishes and the polarization dependences of the second and third terms are given by $s_1/s_0 = \sin 2\alpha$ and $s_2/s_0 = \cos 2\alpha$ with amplitudes $V_\mathrm{L1}$, and $V_\mathrm{L2}$, respectively (not shown).
To ensure that the detected signal is caused by the ratchet effects we studied the photoresponse as a function of the top gate voltages. According to theory~\cite{Ivchenko2011} the ratchet contributions should reverse sign upon inversion of the lateral asymmetry parameter given by~\cite{Ivchenko2011}
\begin{equation}
\label{Xi}
\Xi = \overline{|E(x)|^2\,\frac{dV(x)}{dx}}\,.
\end{equation}
Here the overline denotes an average over the coordinate perpendicular to the DGG stripes, $V(x)$ is the electrostatic potential induced by the lateral superlattice, and $E(x)$ is the spatially modulated near field of the radiation acting on the charge carriers in the QW. Consequently, $\Xi$ is controlled by the potential variation $dV(x)/dx$, which is determined by the voltages applied to the individual gates, $V_{\rm G1}$ and $V_{\rm G2}$. In order to tune the lateral asymmetry, we hold one of the gates at zero bias and vary the gate voltage on the other. Figure~\ref{Fig:8} demonstrates that the photosignals obtained upon variation of $V_{\rm G1}$ and $V_{\rm G2}$, i.e., by the interchange of the gate voltage polarities at the narrow and wide gates, have consistently opposite signs. This observation, presented exemplarily for device D6 and right-handed circularly polarized radiation, agrees well with the signature of ratchet photovoltages, $V_{ph} \propto \Xi$, and proves that ratchet effects are responsible for the signal.
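The sign reversal of the ratchet response upon interchanging the gate voltages can be illustrated by evaluating the asymmetry parameter for a toy single-harmonic model. The potential shape, field modulation, and numbers below are ours for illustration only:

```python
import math

def xi(V0, h=0.5, psi=1.0, d=1.0, n=20000):
    """Numerical estimate of Xi = <|E(x)|^2 dV/dx> for the toy model
    V(x) = V0 cos(2 pi x / d),  |E(x)|^2 = 1 + h cos(2 pi x / d + psi).
    The phase shift psi between near field and potential is what makes
    Xi nonzero (analytically, Xi = h V0 pi sin(psi) / d)."""
    k = 2 * math.pi / d
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * d / n
        e2 = 1 + h * math.cos(k * x + psi)
        dVdx = -V0 * k * math.sin(k * x)
        total += e2 * dVdx
    return total / n

# Interchanging the gate voltages inverts the potential modulation
# (V0 -> -V0) and hence the sign of Xi, as the ratchet signal does:
print(round(xi(+1.0), 4), round(xi(-1.0), 4))
```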
Strikingly, for high negative potentials applied to the top gates ($V_{\rm G1}$ or $V_{\rm G2}$), the photosignal exhibits sign-alternating oscillations as the top gate voltage is varied. The number of oscillations in the ratchet photosignal depends on the QW thickness, see Fig.~\ref{Fig:6}. The photosignal oscillations are detected for both the linear and circular ratchet effects. Note that the sample resistance varies smoothly with the gate voltage and only exhibits a peak at the CNP, corresponding to the transition from $n$- to $p$-type conductivity, see insets in Fig.~\ref{Fig:6}. We also emphasize that the oscillations appear for gate voltages corresponding to $p$-type conductivity.
The overall features of the observed THz photoresponse, in particular its proportionality to the lateral asymmetry parameter (see Fig.~\ref{Fig:8}), clearly indicate that it is caused by the ratchet effect. Measurements on QWs with different thicknesses reveal that the magnitude of the ratchet effect in HgTe-based structures increases substantially (by more than an order of magnitude) with decreasing QW thickness and becomes maximal for the QW with critical thickness, see the inset in Fig.~\ref{Fig:4}. We ascribe this to the qualitative change of the energy dispersion, which evolves from a parabolic dispersion with inverted band structure to a linear one at the critical thickness (6.4~nm).
The key result of the present work is the observation of unexpected ratchet current oscillations when varying the top gate voltages, see Figs.~\ref{Fig:8} and~\ref{Fig:6}. While the sample resistance shows the behavior conventional for narrow-band HgTe systems, with a resistance maximum at $V_{CNP}$ (see insets in Figs.~\ref{Fig:8} and~\ref{Fig:6}), the ratchet responses in all samples exhibit sign-alternating oscillations for high negative top gate voltages. Furthermore, the amplitude of the ratchet effects drastically increases, by more than two orders of magnitude, as compared to that detected for lower gate voltages at which the samples have $n$-type conductivity, see Figs.~\ref{Fig:8} and~\ref{Fig:6}. While the oscillations and the increase of the ratchet current magnitude are detected in all samples, the number of oscillations is specific to each sample and is maximal for the quantum well structures with critical $d_{QW}$, in which the energy gap is very small or vanishing, see Fig.~\ref{Fig:6}(a).
Whereas giant oscillations have previously been detected in DGG structures in an external magnetic field~\cite{Sai2021,Faltermeier2017,Budkin2016,Faltermeier2018,Hubmann2020}, no oscillations in the ratchet voltage have so far been reported at zero magnetic field. In the next section, we discuss the origin of these photocurrent oscillations in HgTe-based DGG structures, thereby revealing peculiar features of the valence band energy spectrum in HgTe quantum wells of different thicknesses.
\section{DISCUSSION}
To explore the origin of the observed oscillations, we develop a theory for ratchet currents in HgTe quantum wells with $p$-type conductivity. To be specific, we consider the polarization-independent contribution to the photocurrent, which is generated via energy relaxation of the radiation-heated holes (the Seebeck ratchet). A detailed description of this mechanism can be found in Refs.~\cite{Ivchenko2011,Faltermeier2017}. The theory of polarization-dependent ratchet currents is beyond the scope of the present manuscript. However, it can be developed using the kinetic theory of Refs.~\cite{Durnev2021,Nalitov2012}, taking into account the complicated band structure of the QW.
Knowledge of the band structure is crucial for the theory of electric current generation and for understanding the origin of the oscillations. The energy dispersions of quantum wells of different widths, shown in Fig.~\ref{fig_spectrum}, are calculated within the framework of an 8-band $\bm{k} \cdot \bm{p}$~model described in detail in Ref.~\cite{Dantscher2015}. The energy dispersion of the hole subbands varies greatly with the quantum well width. The $6.3$~nm quantum well has a band gap close to zero; for $7$~nm the gap between the hole and electron subbands is increased; at $8$~nm the bottom electron and top hole subbands are pushed even further apart at the $\Gamma$-point, but the quantum well has an indirect band gap due to the anticrossing between the first and second hole subbands. We consider the case of low temperatures, when free charge carriers are described by a nearly degenerate distribution function. In this case, the electronic properties are determined by carriers near the Fermi energy. For certain values, the Fermi level crosses the spectrum at several wave vectors, so that there are essentially multiple types of carriers, each with its own Fermi velocity and density, all of which contribute to both the conductivity and the ratchet current. Due to the anticrossing in the valence band, multiple Van Hove singularities appear in the density of states (DoS). This is shown on the left-hand sides of Figs.~\ref{fig_spectrum}(a)--\ref{fig_spectrum}(c). Both the momentum relaxation time and the charge carrier density change significantly when the Fermi energy is close to these singularities. As will be shown below, this is the reason for the drastic changes of the ratchet current magnitude and direction.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.7\linewidth]{Fig_5.jpg}
\end{center}
\caption{The calculated energy spectra of the quantum wells of width $6.3$~nm (panel a), $7$~nm (panel b), and $8$~nm (panel c) shown by colored lines. Black lines on the left side of the panels show the DoS for the corresponding energy spectra obtained in isotropic approximation.}
\label{fig_spectrum}
\end{figure*}
Now we turn to the microscopic theory of the Seebeck ratchet current, which is based on models previously developed in Refs.~\cite{Budkin2016,Olbrich2016}. The Seebeck ratchet current emerges when there is a phase difference between the in-plane component of the static electric field $\partial V(x)/\partial x$, caused by the electric potential induced by the DGG, and the spatially modulated free-carrier heating caused by the electric near field of the radiation. To calculate the ratchet current, we first split the time-independent part of the distribution function as ${f_{\bm{p},\nu}=f^+_{\bm{p},\nu}+f^-_{\bm{p},\nu}}$, where $\bm{p}$ is the charge carrier momentum, $\nu$ is the subband index, and $f^+_{\bm{p},\nu}$ and $f^-_{\bm{p},\nu}$ are the parts of the distribution function that are even and odd in momentum, respectively. The Boltzmann kinetic equation leads to the following relation
\begin{equation}
-\dfrac{f^-_{\bm{p},\nu}}{\tau_p}=v_x\dfrac{\partial f^+_{\bm{p},\nu}}{\partial x}- \dfrac{\partial V}{\partial x} \dfrac{\partial f^+_{\bm{p},\nu}}{\partial p_x}\:,
\end{equation}
where $\tau_p$ is the momentum relaxation time, $\bm{v}$ is the velocity defined as $\partial \varepsilon_{\bm{p},\nu}/\partial \bm{p}$, and $\varepsilon_{\bm{p},\nu}$ is the charge carrier energy. The Seebeck ratchet current is obtained by summing over all momenta and subband indices, $ j_x=2e \sum_{\bm{p},\nu} v_x f^-_{\bm{p},\nu}$, where the factor $2$ accounts for spin degeneracy. The ratchet current linear in $V(x)$ and in the radiation intensity is given by
\begin{equation}
j_x=-\dfrac{1}{e} \sigma \dfrac{\partial V}{\partial x}-\dfrac{1}{e} \dfrac{\partial}{\partial x} S \:,
\end{equation}
where the conductivity $\sigma$ and the quantity $S$ are given by
\begin{equation}
\label{conducivity_diff}
\sigma=e^2 \sum_{\bm{p},\nu} v^2 \tau_p \left( -\dfrac{\partial f^+_{\bm{p},\nu}}{\partial \varepsilon_{\bm{p},\nu}}\right)\:,
\quad\quad S=e^2 \sum_{\bm{p},\nu} v^2 \tau_p f^+_{\bm{p},\nu}\:.
\end{equation}
Due to the fast electron-electron interaction the electron gas is locally thermalized and $f^+_{\bm{p},\nu}$ is described by the Fermi-Dirac distribution function
\begin{equation}
f^+_{\bm{p},\nu }=\left[\exp\left(\dfrac{\varepsilon_{\bm{p},\nu}+V(x)-\varepsilon_F-\delta \varepsilon_F(x)}{T+\delta T(x)}\right)+1\right]^{-1}\:,
\end{equation}
where $\delta T(x)$ and $\delta \varepsilon_F(x)$ are small nonequilibrium corrections of the temperature and Fermi energy induced by radiation. Since no current flows without radiation or at $V(x)=0$, these conditions yield
\begin{equation}
\label{zero_current}
\sigma_0=\dfrac{\partial S_0}{\partial \varepsilon_F}\:,\quad\quad 0=
\dfrac{\partial S_0}{\partial \varepsilon_F} \dfrac{\partial \delta\varepsilon_F (x)}{\partial x}+
\dfrac{\partial S_0}{\partial T} \dfrac{\partial \delta T(x)}{\partial x}\:,
\end{equation} where $\sigma_0$ and $S_0$ are values obtained from Eq.~\eqref{conducivity_diff}
at thermal equilibrium for $V(x)=0$.
Equation~\eqref{zero_current} gives us an expression for the correction to the Fermi energy
\begin{equation}
\delta \varepsilon_F=
-\dfrac{\partial S_0/\partial T}{\sigma_0}\delta T+\delta \tilde{\varepsilon}\:.
\end{equation}
Here, $\delta \tilde{\varepsilon}$ is a constant independent of $x$,
which takes into account that the radiation does not change the average value of the carrier density.
Spatial temperature oscillations can be found using the energy balance
\[
|E(x)|^2 \dfrac{\sigma_0}{1+\omega^2 \tau_p^2}=\dfrac{\delta T(x)}{\tau_T}\:,
\]
where $\tau_T$ is the temperature relaxation time.
Finally using the relation for the equilibrium distribution function
${\partial f^{(0)}_{\bm{p},\nu }/\partial T=(\partial f^{(0)}_{\bm{p},\nu }/\partial \varepsilon_{\bm{p},\nu})(\varepsilon_{\bm{p},\nu}-\varepsilon_F)/T } $
we obtain the Seebeck ratchet current
\begin{equation}
\label{final_current}
j_x=-\dfrac{1}{e} \dfrac{\sigma_0 \tau_T}{1+\omega^2 \tau_p^2} \left(\dfrac{\partial \sigma_0}{\partial T}-\dfrac{\pi^2}{6}T\left(\dfrac{\partial \sigma_0}{\partial \varepsilon_F}\right)^2\dfrac{1}{\sigma_0}\right) \overline{|E(x)|^2 \dfrac{\partial V}{\partial x}}\:.
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[width=1.0\linewidth]{Fig_6.jpg}
\end{center}
\caption{Dependencies of the Seebeck ratchet current on the Fermi energy for HgTe quantum wells of different widths.
Photocurrents are calculated from Eq.~\eqref{final_current} using the DoS and electron energy spectra of Fig.~\ref{fig_spectrum}.}
\label{fig_ratchet_current}
\end{figure}
To estimate the photocurrent, we make some simplifications: we assume that the momentum relaxation rate is proportional to the number of states into which charge carriers can scatter. A similar assumption can be made about $\tau_T$, so that both $\tau_T^{-1}$ and $\tau_p^{-1}$ are proportional to $g(\varepsilon_{\bm{p},\nu})$, where $g(\varepsilon_{\bm{p},\nu})$ is the density of states. We also ignore the anisotropy of the charge carrier spectrum in HgTe quantum wells and take into account only its isotropic part. Figure~\ref{fig_ratchet_current} shows the dependencies of the ratchet current on the Fermi energy for the different quantum wells at $T=4.2$~K. It is clearly seen that the ratchet current is highly sensitive to the details of the energy spectrum. The dependence of the ratchet current on the Fermi energy exhibits a complex structure that follows the DoS. When the Fermi energy is close to an abrupt change in the density of states or to one of the Van Hove singularities, the Seebeck ratchet current is drastically enhanced or changes its direction. The latter is caused by the large changes of the derivatives of the conductivity with respect to the Fermi energy and temperature at these energies.
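The sign change near a sharp DoS feature can be illustrated with a minimal numerical toy model. Assuming a Sommerfeld-type temperature dependence of the conductivity, $\partial\sigma_0/\partial T \approx (\pi^2/3)\,T\,\partial^2\sigma_0/\partial\varepsilon_F^2$, the bracket in Eq.~\eqref{final_current} changes sign where the curvature of $\sigma_0(\varepsilon_F)$ is overwhelmed by the $(\partial\sigma_0/\partial\varepsilon_F)^2$ term. The smooth conductivity step and the dimensionless parameters below are purely illustrative, not fitted to the HgTe band structure:

```python
import numpy as np

# Toy zero-temperature conductivity with a smooth step (mimicking an abrupt
# change of the DoS); A, w, eps0 and T are illustrative, dimensionless values.
def bracket(eps, T=1.0, A=0.9, eps0=0.0, w=0.05):
    u = (eps - eps0) / w
    sigma = 1.0 + A * 0.5 * (1.0 + np.tanh(u))        # sigma_0(eps_F)
    d1 = A / (2.0 * w) / np.cosh(u)**2                # d(sigma_0)/d(eps_F)
    d2 = -A / w**2 * np.tanh(u) / np.cosh(u)**2       # second derivative
    # sign-determining factor of Eq. (final_current), with the Sommerfeld
    # estimate d(sigma_0)/dT ~ (pi^2/3) T * d2
    return (np.pi**2 / 3) * T * d2 - (np.pi**2 / 6) * T * d1**2 / sigma

print(bracket(-0.05), bracket(0.0))   # opposite signs across the step
```

Below the step the curvature term dominates, while at the step the squared first derivative takes over, reversing the sign of the current, in qualitative agreement with Fig.~\ref{fig_ratchet_current}.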
The developed theory describes the main features of the experimental results well. Indeed, the theoretical model shows both (i) singularities and (ii) sign changes across the voltage/energy range, in rough agreement with those observed in the experiments (see Figs.~\ref{Fig:8} and~\ref{Fig:6}). Therefore, we emphasize that, despite several simplifications, the presented model identifies the main responsible physical phenomena. Quantitative comparisons, however, require several model refinements. First of all, the electrostatic problem for the structure should be solved to calculate the dependence of the Fermi energy and the potential $V(x)$ on the two gate voltages applied to the DGG. Moreover, $\tau_p$ and $\tau_T$ should be accurately calculated for different wave vectors and band indices, for scattering by both impurities and phonons, taking into account the spectrum anisotropy and the fact that the wave function of carriers in the QW has contributions from the $\Gamma_7$, $\Gamma_8$, and $\Gamma_6$ bands, each with its own envelope function along the $z$-axis. An accurate treatment of the relaxation times will significantly alter the ratchet current dependencies shown in Fig.~\ref{fig_ratchet_current}. Lastly, when the Fermi energy exactly matches the energy of one of the Van Hove singularities, the theory loses its applicability, since the difference between the electron kinetic energy and the local extremum of the energy spectrum goes to zero, and $V(x)$ cannot be considered small. As a result, the ratchet current should be calculated to all orders in $V(x)$. These more precise results can be obtained numerically, but they would not bring deeper physical understanding.
\section{Conclusions}
To summarize, different kinds of THz ratchet effects, including the circular, linear, and polarization-independent ratchets, are observed in HgTe-based DGG structures. In comparison with common semiconductors, HgTe/HgCdTe QW heterostructures exhibit unconventional band structure properties that correlate with the quantum well thickness. Measurements on devices with different QW widths demonstrated that the magnitude of the circular ratchet effect increases when reducing the QW width and is maximal in structures with the critical QW thickness, where the energy gap is close to zero. A further drastic increase of the ratchet current magnitude is obtained by applying high negative top gate voltages, resulting in $p$-type conductivity beneath the gates. The enhancement of the photoresponse as well as the observed sign-alternating oscillations with gate voltage are shown to be caused by the complex valence band structure of HgTe-based QWs. Our study of ratchet currents in HgTe DGG devices paves the way to THz detectors with substantially increased responsivity and opens up access to the analysis of energy dispersion details, including the properties of the valence band.
\section*{Acknowledgments}
The authors thank L.E.~Golub and M.V.~Durnev for valuable discussions. The support of the DFG-RFBR project (Ga501/18, RFBR project 21-52-12015), the Elite Network of Bavaria (K-NW-2013-247), and the Volkswagen Stiftung Program (97738) is gratefully acknowledged. I.Y., W.K. and S.D.G. thank the IRAP Programme of the Foundation for Polish Science (grant MAB/2018/9, project CENTERA). A.K. and T.W. thank the IRAP Programme of the Foundation for Polish Science (grant MAB/2017/1, project MagTop). G.V.B. acknowledges the support from the ``BASIS'' foundation. D.W. acknowledges funding by the European Research Council under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 787515, Pro-Motion).
\section{Introduction}
Neutrinos from astrophysical sources and long-baseline experiments
are powerful probes of potential new physics. They have already been
used to discover and measure the novel phenomena of neutrino
oscillations, thereby establishing that neutrinos have masses
\cite{Strumia:2006db, Totsuka:1991dm}. Long-baseline neutrino
experiments have also been used to set limits on quantum decoherence
effects that might be induced by foamy fluctuations in the
space-time background in some models of quantum gravity
(QG)~\cite{Barenboim:2006xt, CNGS-T2K, Morgan:2004vv,
Hooper:2004xr}. It has also been suggested that the space-time foam
due to QG fluctuations might cause energetic particles to propagate
at speeds different from the velocity of light, which would be
approached only by low-energy massless particles~\cite{foam,
gambini}. Any deviation from the velocity of light at high energies
might be either linear or quadratic, $\delta v/c = (E/M_{QG1})$ or
$(E/M_{QG2})^2$, and might be either subluminal or superluminal.
Such effects are, in principle, easily distinguishable from the
effects of neutrino masses, since they depend differently on the
energy $E$.
There have been many tests of such Lorentz-violating effects on
photon propagation from distant astrophysical objects such as
gamma-ray bursters~\cite{amellis}, pulsars~\cite{pulsar} and active
galactic nuclei~\cite{Albert:2007qk}. These tests have looked for
delays in the arrival times of energetic photons relative to
low-energy photons, and their sensitivities improve with the
distance of the source, the energies of the photons, the accuracy
with which the arrival times of photons can be measured, and the
fineness of the time structure of emissions at the astrophysical
source. The sensitivities of these tests have reached $M_{\gamma QG1}
\sim 2 \times 10^{17}$~GeV and $M_{\gamma QG2} \sim 4 \times
10^{10}$~GeV for linear and quadratic violations of Lorentz
invariance, respectively.
At least one QG model of space-time foam~\cite{equiv,refract_last}
suggests that Lorentz violation should be present only for particles
without conserved internal quantum numbers, such as photons, and
should be absent for particles with electric charges, such as
electrons. Indeed, astrophysical data have been used to set very
stringent limits on any Lorentz violation in electron propagation.
However, these arguments do not apply to neutrinos, since they are
known to oscillate, implying that lepton flavour quantum numbers are
not conserved. Moreover, neutrinos are often thought to be Majorana
particles, implying that the overall lepton number is also not
conserved, in which case QG effects might also be present in
neutrino propagation~\cite{LeptonNumber}. It therefore becomes
interesting to study experimentally the possibility of Lorentz
violation in neutrino propagation.
Experimental probes of Lorentz violation in neutrino propagation are
hindered by the relative paucity of neutrino data from distant
astrophysical sources, and require the observation of narrow time
structures in neutrino emissions. However, there has been one
pioneering experimental study of possible Lorentz violation using
the long-baseline MINOS experiment exposed to the NuMI neutrino beam
from Fermilab, which found a range of neutrino velocities $-2.4
\times 10^{-5} < (v - c)/c < 12.6 \times 10^{-5}$ allowed at the
99\% C.L.~\cite{Rebel:2008th}. Assuming an average neutrino energy
of 3~GeV, and allowing for either linear or quadratic Lorentz
violation: $v/c = [1 \pm (E/M_{\nu QG1})]$ or $[1 \pm (E/M_{\nu
QG2})^2]$, the MINOS result~\cite{Rebel:2008th} corresponds in the
case of linear Lorentz violation to $M_{\nu QG1}
> 1 (4) \times 10^5$~GeV in the case of subluminal (superluminal)
propagation, and in the case of quadratic Lorentz violation to
$M_{\nu QG2} > 600 (250)$~GeV.
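As a rough consistency check (assuming the 3~GeV average energy acts as a single effective energy, and ignoring the details of the full fit), the subluminal bounds follow from $M_{\nu QG1} = E/\delta$ and $M_{\nu QG2} = E/\sqrt{\delta}$ with $\delta = |v-c|/c$:

```python
# Back-of-the-envelope check of the scaling behind the MINOS bounds.
# A velocity offset |v - c|/c = delta at energy E corresponds to
# M_QG1 = E / delta (linear) and M_QG2 = E / sqrt(delta) (quadratic).
E = 3.0              # GeV, average NuMI neutrino energy
delta = 2.4e-5       # subluminal edge of the 99% C.L. velocity range
M_qg1 = E / delta        # ~1.3e5 GeV, cf. the quoted 1e5 GeV
M_qg2 = E / delta**0.5   # ~6.1e2 GeV, cf. the quoted 600 GeV
print(M_qg1, M_qg2)
```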
In this paper we first establish limits on Lorentz violation using
neutrino data from supernova 1987a, using data from the Kamioka II
(KII) \cite{k2sn1987a}, Irvine-Michigan-Brookhaven (IMB)
\cite{imbsn1987a} and Baksan detectors \cite{baksan1987a}. We find
$M_{\nu QG1} > 2.7 (2.5) \times 10^{10}$~GeV for subluminal
(superluminal) propagation, respectively, and $M_{\nu QG2} > 4.6
(4.1) \times 10^{4}$~GeV at the 95~\% confidence level. These limits
are already much more stringent than those established using the
MINOS detector. We then assess the improved sensitivity to Lorentz
violation that could be obtained if a galactic supernova at a
distance of 10~kpc is observed using the Super-Kamiokande detector,
estimating sensitivities to $M_{\nu QG1}
> 2 (4) \times 10^{11}$~GeV for subluminal (superluminal)
propagation, respectively, and $M_{\nu QG2} > 2 (4) \times
10^{5}$~GeV. All these results are obtained taking neutrino
oscillation effects into account, and assuming that any Lorentz
violation is flavour-independent~\footnote{This is a strong
condition on any model of Lorentz violation, that is imposed by the
success of conventional neutrino oscillation phenomenology, which
implies that flavour-dependent dispersion effects can be neglected
in the analyses of MINOS and OPERA data. Such effects could in
principle appear in neutrinos from supernovae, but would not affect
the results presented below, which are essentially independent of
oscillation hypotheses.}.
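The scale of these limits is easy to understand: over the distance to the LMC, a linear velocity deviation accumulates a delay $\Delta t = (L/c)(E/M_{\nu QG1})$, which must not exceed the few-second spread of the observed burst. In the estimate below, the distance and typical energy are round illustrative numbers, not the values used in the fit:

```python
# Order-of-magnitude delay accumulated over the distance to the LMC
# for the linear case, Delta_t = (L/c) * (E / M_QG1).
kpc = 3.086e19        # m
L = 50.0 * kpc        # ~distance to SN1987a in the LMC
c = 2.998e8           # m/s
E = 20e-3             # GeV, typical detected antineutrino energy
M = 2.7e10            # GeV, the linear bound derived in this paper
dt = (L / c) * (E / M)
print(dt)             # a few seconds, comparable to the ~10 s burst duration
```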
We also discuss the sensitivity to Lorentz violation of the OPERA
experiment at the CNGS neutrino beam from
CERN~\footnote{For previous discussions of searches for Lorentz violation in
neutrino data, see~\cite{LeptonNumber,volkov}.}. We recall that the
CNGS beam cycle provides two fast-extracted proton spills lasting
10.5~$\mu$s each and separated by 50~ms, each containing 2100
bunches with standard deviation 0.25~ns, separated from each other
by the CERN SPS RF bucket structure of 5~ns~\cite{Meddahi:2007zz}.
The OPERA data-acquisition (DAQ) system is organized in such a way
that each subdetector provides its data with a distributed
time-stamp with a granularity of 10~ns. If a time-synchronization
method conceptually similar to that of MINOS between the CERN
neutrino extraction-magnet signal and the OPERA time-stamp were
implemented, the sensitivity would be greater than that of MINOS.
This is because, even though the baseline between the source and the detector is the same and the spill lengths are similar, the
neutrinos in the CNGS beam typically have higher energies than those
in the NuMI beam. Exploiting this feature, on the basis of an
optimized analysis we estimate that after 5 years of running
sensitivities using OPERA could reach
$M_{\nu QG1} \sim 7 \times 10^{5}$~GeV ($M_{\nu QG2} \sim 8 \times 10^{3}$~GeV) for the linear (quadratic) case.
Further improvements in sensitivity would result if one could
exploit the RF bucket structure of the spill. Assuming that the
arrival time of the neutrinos would be correlated with the RF bunch
structure with a timing accuracy of, say, 1~ns, the sensitivity to
Lorentz violation could be improved to $M_{\nu QG1}\sim 5 \times
10^{7}$~GeV ($M_{\nu QG2} \sim 4 \times 10^{4}$~GeV) for
the linear and quadratic cases, respectively. These results could be
improved significantly if neutrino events occurring in the rock
upstream from OPERA could be included in the analysis. In this case,
the sensitivities would become $M_{\nu QG1}\sim 4 \times 10^{8}$~GeV
and $M_{\nu QG2} \sim 7 \times 10^{5}$~GeV. In the case of quadratic
Lorentz violation, this sensitivity is better than that obtained
from supernova 1987a, and even improves on the sensitivity possible
with a future galactic supernova.
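The quoted sensitivities follow, to within factors of order unity, from the simple scaling $M_{\nu QG1} \sim (L/c)E/\delta t$ and $M_{\nu QG2} \sim E\sqrt{(L/c)/\delta t}$. A sketch with an assumed 17~GeV mean CNGS energy (a typical value for this beam, not stated above) and the 1~ns timing accuracy:

```python
# Scaling estimate for the OPERA sensitivity with 1 ns timing:
# M_QG1 ~ (L/c) * E / dt  and  M_QG2 ~ E * sqrt((L/c) / dt).
# Baseline, mean energy and timing resolution are assumed round values;
# the sensitivities quoted in the text come from a full analysis.
L = 730e3             # m, CERN--Gran Sasso baseline
c = 2.998e8           # m/s
E = 17.0              # GeV, assumed mean CNGS neutrino energy
dt = 1e-9             # s, assumed RF-bunch timing accuracy
M1 = (L / c) * E / dt           # ~4e7 GeV, cf. ~5e7 GeV quoted
M2 = E * ((L / c) / dt)**0.5    # ~3e4 GeV, cf. ~4e4 GeV quoted
print(M1, M2)
```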
\section{Limits on Lorentz Violation from Supernovae}
In this Section we discuss the supernova mechanism and the ability to
test Lorentz violation via the detection of neutrinos created in
this process. We then analyze the data from the supernova SN1987a,
the first supernova from which neutrinos have been detected, giving
bounds at the $95\%$ C.L.. Then we simulate a possible future
galactic supernova and discuss the potential of the next generation
of neutrino detectors, represented by Super-Kamiokande (SK), to
improve this bound.
\subsection{Review of Neutrino Emissions from Supernovae}
The detection of neutrinos from SN1987a in the Large Magellanic
Cloud (LMC) remains a landmark in neutrino physics and astrophysics.
Although only a handful of neutrinos were detected by the
Kamiokande-II (KII)~\cite{k2sn1987a}, Irvine-Michigan-Brookhaven
(IMB)~\cite{imbsn1987a} and Baksan~\cite{baksan1987a} detectors,
they provided direct evidence of the mechanism by which a star
collapses, and the role played by neutrinos in this mechanism
\cite{Totsuka:1991dm}. The numbers and energies of the neutrinos
observed were consistent with the expected supernova energy release
of a few times $10^{53}$~ergs via neutrinos with typical energies of
tens of MeV. A future galactic supernova is expected to generate up
to tens of thousands of events in a water-{\v C}erenkov detector
such as SK, which will clarify further theories of the supernova
mechanism and of particle physics~\cite{Ikeda:2007sa}.
Current simulations reveal several distinct stages of neutrino
emission \cite{Keil:2002in,snnew, Totani:1997vj}. During the early
stage with a typical timescale of a few milliseconds, huge numbers
of $\nu_{e}$ are produced via $pe\rightarrow n\nu_{e}$, known as the
neutronization peak. Despite the huge numbers of neutrinos
produced, these are difficult to detect in water-{\v C}erenkov
detectors, because the neutrinos produced in this process are
detected via scattering on electrons and (in the case of the
electron neutrino) via interactions with oxygen nuclei. At the
energies of interest the cross section for detection is dominated by
the charged-current interaction $\bar{{\nu}}_{e}p\rightarrow n
e^{+}$, which detects anti-electron neutrinos. During the later
stages of the supernova explosion, all flavours of neutrinos and
antineutrinos are produced with approximate Fermi-Dirac spectra,
that are characterized by different average energies for different
neutrino species: $\langle E_{\nu_{e}}\rangle=(10-12)$~MeV, $\langle
E_{\bar{\nu}_{e}}\rangle=(12-18)$~MeV and $\langle
E_{\nu_{x}}\rangle=(15-28)$~MeV (where $\nu_{x}$ denotes
$\nu_{\mu}$, $\nu_{\tau}$ and their respective antiparticles), with
total emitted energy fractions $\varepsilon_{\nu_{e}}=(10-30)\%$,
$\varepsilon_{\bar{\nu}_{e}}=(10-30)\%$,
$\varepsilon_{\nu_{x}}=(10-20)\%$~\cite{Keil:2002in,snnew}.
The neutrinos produced in the supernova pass from densities close to
nuclear density in the core through to the approximate vacuum of
interstellar space, and the interactions with this matter dominate
the neutrino oscillations. The neutrinos become maximally mixed at
Mikheev-Smirnov-Wolfenstein (MSW) resonances and to first
approximation the nature of the oscillations can be determined by
the properties of these resonances. The resonance condition is
$A=\Delta m^{2}\cos2\theta$, where $A$ is the matter potential,
$\Delta m^{2}$ is the difference in mass squared and $\theta$ is the
mixing angle. For a typical density profile and composition of the
supernova medium, and typical neutrino energies, the matter
potential is positive (negative) for neutrinos (antineutrinos).
Assuming just the three Standard Model neutrinos, there are two
possible MSW resonances, corresponding to the solar and atmospheric
mass-squared splittings~\cite{msw1,msw2,msw3}. We know from the
solar and KamLAND data that $\Delta m_{21}^{2}\equiv
m_{2}^{2}-m_1^{2}$ is positive, and therefore the corresponding MSW
resonance is in the neutrino sector~\cite{solglobal}.
However, the sign of $\Delta m^{2}_{32}$ is undetermined and
therefore the corresponding resonance could be in either the
neutrino or the antineutrino sector, corresponding to the two
possible mass hierarchies, the normal (inverted) for a positive
(negative) $\Delta m^{2}_{32}$. At the resonance there is a
probability of transitions between the mass eigenstates, known as
`level crossing'. If the width of the resonance is large compared to
the neutrino oscillation length at the resonance then the level
crossing probability is small and the resonance is adiabatic. On the
other hand, if the width of the resonance is small compared to the
neutrino oscillation length scale, then transitions between the mass
eigenstates occur and the resonance is said to be non-adiabatic.
Combining current simulations of the supernova and the value of the
solar mixing angle we can determine that the solar resonance is
adiabatic \cite{Strumia:2006db}. However, the current limit on
$\theta_{13}$ is insufficient to determine whether the atmospheric
resonance is adiabatic or not: simulations indicate that if
$\sin^{2}2\theta_{13}\gtrsim10^{-3}$ the resonance is adiabatic and
if $\sin^{2}2\theta_{13}\lesssim10^{-5}$ the resonance is
non-adiabatic. The oscillation probabilities for both hierarchies
are given in Table \ref{tab:probs}.
In addition to these effects, recent work has shown that neutrino
self-interactions can induce large, non-MSW flavour
oscillations~\cite{Spectral_split}. These occur at large neutrino
densities, just outside the neutrinosphere. For the normal hierarchy
these effects have little impact on the flavour oscillations, but
for the inverted hierarchy with non-zero $\theta_{13}$ significant
flavour changes can occur. These effects result in a `spectral
split', in which the $\nu_e$ and $\nu_x$ spectra are simply swapped
above a critical energy, while the entire spectra of the ${\bar
\nu}_e$ and ${\bar \nu}_x$ are swapped. For the case where the
flavour transformations have occurred before the MSW resonances the
flavour transformations can be thought of as changing the initial
spectra, whereas in the case of shallow density profiles this
becomes more complicated.
We note in addition that, as the shock wave inside the supernova
passes through the atmospheric resonance, it can change it from
adiabatic to non-adiabatic, resulting in a time dependence in the
signal that we do not consider in this
paper~\cite{raffeltforplusrev}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Hierarchy & $\sin^{2}\theta_{13}$ & \multicolumn{2}{|c|}{p} & $\bar{p}$ \\
\cline{3-4}
& & $E<E_{c}$ & $E>E_{c}$ & \\
\hline
Normal & $\gtrsim 10^{-3}$ & 0 & 0 & $\cos^{2}\theta_{\odot}$ \\
Inverted & $\gtrsim 10^{-3}$ & $\sin^{2}\theta_{\odot}$ & $\sin^{2}\theta_{\odot}$ & 0 \\
Normal or Inverted & $\lesssim 10^{-5}$ & $\sin^{2}\theta_{\odot}$ & $\sin^{2}\theta_{\odot}$ & $\cos^{2}\theta_{\odot}$ \\
Inverted SS & $\gtrsim 10^{-3}$ & $\sin^{2}\theta_{\odot}$ & $\cos^{2}\theta_{\odot}$ & 1 \\
Inverted SS & $\lesssim 10^{-5}$ & $\sin^{2}\theta_{\odot}$ & $\cos^{2}\theta_{\odot}$ & $\sin^{2}\theta_{\odot}$ \\
\hline
\end{tabular}
\end{center}
\caption{\it The oscillation probabilities for the normal and
inverse hierarchies, including the effect of the spectral split
(SS), where the resulting $\nu_{e}$ and $\bar{\nu}_{e}$ fluxes are
$F_{\nu_{e}}=pF_{\nu_{e}}^{0}+(1-p)F_{\nu_{x}}^{0}$ and
$F_{\bar{\nu}_{e}}=\bar{p}F_{\bar{\nu}_{e}}^{0}+(1-\bar{p})F_{\bar{\nu}_{x}}^{0}$
respectively.}\label{tab:probs}
\end{table}
\subsection{Analysis Techniques}
As previously discussed, it has been suggested that QG effects may
lead to Lorentz-violating modifications in the propagation of
energetic particles, and hence to dispersive effects, specifically a
non-trivial refractive index. These dispersive properties of the
vacuum would lead to an energy dependence in the arrival times of
neutrinos.
{ Even in the absence of any detailed, analytic understanding of the time structure
of a neutrino signal from a supernova, one can exploit the observation that, since the
neutrino events have a range of energies, an energy dependence of the neutrino
velocity would spread out the arrival times, compared to the signal if
there were no dispersive properties of the vacuum. Any data set comprising
both the time and energy of each neutrino event can be
analyzed by inverting the dispersion that would be caused by any
hypothesized QG effect. The preferred value of the energy-dependence parameter
would minimize the duration (time spread) of the supernova neutrino signal.}
Assuming either a linear or a quadratic
form of Lorentz violation: $v/c = [1 \pm (E/M_{\nu QG1})]$ or $[1
\pm (E/M_{\nu QG2})^2]$, a lower limit on $M_{\nu QG1}$ and $M_{\nu
QG2}$ may be obtained by requiring that the emission peak not be
broadened significantly. A non-zero value of $M_{\nu QG1}^{-1}$ or
$M_{\nu QG2}^{-1}$ might be indicated if it reduced significantly
the duration (time spread) of the neutrino signal. The duration
(time spread) of the neutrino signal can be quantified using different
estimators, depending mostly on the available statistics and, where
applicable, on the time profile of the data set~\footnote{Statistically poor
event lists, such as that for SN1987a, the only one currently available in
supernova neutrino astronomy, do not allow the time profile to be classified,
because time binning is impractical and one cannot apply nonparametric
statistical tests to unbinned data.}. In the following, we outline two
estimators for analyzing
neutrino signals, which we use first to
quantify the limits obtainable from the SN1987a neutrino data and then
the sensitivities that would be provided by a possible future
galactic supernova signal.
\subsubsection{Minimal Dispersion (MD) Method}
{ We assume that the data set consists of a list of neutrino events
with measured energies $E$ and arrival times $t$ such as that in Table~2.
In the first method, we consider event lists
with a relatively low number of events, which do not allow a
reasonable time profile to be extracted. In this case} we consider
the time
dispersion of the data set, quantified by
\begin{equation}
\label{MD_base}
\sigma_{t}^{2}\equiv\langle\left(t-\langle
t\rangle\right)^{2}\rangle,
\end{equation}
where $t$ is the time of each detected event. We then apply an
energy-dependent time shift $\Delta t=\tau_{l} E^{l}$, where
$\tau_{l}=L/(c\,M_{\nu QGl}^{l})$, varying $M_{\nu QGl}$ so as to remove
any assumed dispersive effects.
{ The `correct' value of the time shift $\tau_{l}$
should always compress the arrival times of the neutrino
events, whereas any other (`incorrect') value of $\tau_{l}$ would spread the
events in time relative to the `correct' shift. Therefore, the dispersion~(\ref{MD_base}) can be considered
as an estimator of the degree of `compression' of the neutrino events in
time. In the following, we first apply this MD method in a warm-up exercise to
the data from SN 1987a, and later we exhibit in subsection~\ref{future_g_SN}
the typical behaviour of this estimator versus
$\tau_l$ for hypothetical data from a possible future galactic supernova.
Evaluating the dispersion~(\ref{MD_base}) one obtains}
\begin{eqnarray}
\label{disp_tau}
\sigma_{t}^{2}&=&\langle \left(t - \tau_{l} E^{l} - \langle t -
\tau_{l} E^{l}\rangle\right)^{2}\rangle \\ & = & \langle t^{2}
\rangle- \langle t \rangle^{2} - 2 \tau_{l} \left(\langle t
E^{l}\rangle - \langle t\rangle \langle E^{l}\rangle
\right)+\tau_{l}^{2}\left(\langle E^{2l}\rangle-\langle
E^{l}\rangle^{2}\right).
\end{eqnarray}
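Since (\ref{disp_tau}) is quadratic in $\tau_{l}$ with a positive coefficient
of $\tau_{l}^{2}$, its minimum follows from setting the derivative with
respect to $\tau_{l}$ to zero:
\[
\frac{\partial \sigma_{t}^{2}}{\partial \tau_{l}}
= -2\left(\langle t E^{l}\rangle-\langle t\rangle\langle E^{l}\rangle\right)
+ 2\,\tau_{l}\left(\langle E^{2l}\rangle-\langle E^{l}\rangle^{2}\right)=0 .
\]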
Therefore, the dispersion of the `de-refracted' time distribution
is minimized by the parameter
$\tau_{l}^{min}$, defined by
\begin{equation}
\tau_{l}^{min} \equiv \frac{\langle t E^{l}\rangle- \langle t\rangle
\langle E^{l}\rangle}{\langle E^{2l}\rangle-\langle
E^{l}\rangle^{2}} . \label{eq:taumin}
\end{equation}
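In practice, the estimator (\ref{eq:taumin}) is simply the least-squares slope
of arrival time against $E^{l}$. A minimal sketch (with illustrative energies,
not the analysis code actually used):

```python
import numpy as np

def tau_min(t, E, l=1):
    # Eq. (taumin): tau_min = Cov(t, E^l) / Var(E^l), the shift that
    # minimizes the 'de-refracted' time dispersion.
    t = np.asarray(t, dtype=float)
    El = np.asarray(E, dtype=float) ** l
    return ((t * El).mean() - t.mean() * El.mean()) / \
           ((El ** 2).mean() - El.mean() ** 2)

# If a pure linear delay t = t0 + tau * E is encoded, the estimator
# recovers tau exactly (here tau = 0.05 s/MeV):
E = np.array([38.0, 37.0, 28.0, 39.0, 36.0, 19.0, 22.0])   # MeV
t = 1.0 + 0.05 * E                                         # s
print(tau_min(t, E))
```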
We can use (\ref{eq:taumin}) for any data set to estimate the scale
$M_{\nu QGl}$ at which Lorentz violation is manifest. However, there
are uncertainties in the energy and time measurements, as well as
statistical uncertainties in the estimation of the observables
calculated from any given data set, compared to their true values.
We estimate the statistical uncertainties of an observable $x$ as
\begin{equation}
\sigma_{x}^{stat}=\sqrt{\frac{\langle x^{2}\rangle-\langle
x\rangle^{2}}{N}}, \label{eq:tauuncert}
\end{equation}
where $N$ is the number of events, and $x=E, t$ or some combination of
both. In order to estimate the uncertainties in $\tau_{l}^{min}$, we
use a Monte Carlo simulation to repeat the calculation of
$\tau_{l}^{min}$ including the energy and statistical uncertainties.
We then make a Gaussian fit and use it to quote best-fit parameters
and errors.
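The smearing step can be sketched as follows, a toy version using the KII
energies and errors from Table~2 with assumed Gaussian smearing; the actual
analysis also folds in the statistical uncertainties (\ref{eq:tauuncert}) and
fits the resulting distribution with a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_min(t, E, l=1):
    # Covariance-over-variance estimator of the minimizing time shift.
    El = np.asarray(E, dtype=float) ** l
    return ((t * El).mean() - t.mean() * El.mean()) / \
           ((El ** 2).mean() - El.mean() ** 2)

# KII events from Table 2: times (s), energies and errors (MeV).
t  = np.array([0.0, 0.107, 0.303, 0.324, 0.507, 1.541,
               1.728, 1.915, 9.219, 10.433, 12.439])
E  = np.array([20.0, 13.5, 7.5, 9.2, 12.8, 35.4,
               21.0, 19.8, 8.6, 13.0, 8.9])
sE = np.array([2.9, 3.2, 2.0, 2.7, 2.9, 8.0,
               4.2, 3.2, 2.7, 2.6, 1.9])

# Recompute tau_min many times with Gaussian-smeared energies; the
# spread of the resulting distribution estimates its uncertainty.
taus = np.array([tau_min(t, rng.normal(E, sE)) for _ in range(1000)])
print(f"tau_1 = {taus.mean():.3f} +/- {taus.std():.3f} s/MeV")
```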
\subsubsection{Energy Cost Function (ECF) Method}
{ This second analysis technique is mostly applicable to event lists
that are statistically rich, meaning
that one can combine the neutrino events into a time profile exhibiting
pulse features that can be distinguished (parametrically or nonparametrically)
from a uniform distribution at a high confidence level.}
For the analysis we first
choose the most
active (transient) part of the signal $(t_{1};t_{2})$, as defined using a
Kolmogorov-Smirnov (KS) statistic. The KS statistic is calculated
using the difference between the cumulative distribution function
(CDF) of the unbinned data and that of a uniform distribution. { The
KS statistic is defined as the time that elapses between the minimum
and maximum of this difference~\footnote{ The most active part of the signal
can also be chosen by fitting the binned time profile, but the
nonparametric way we use to extract a feature is less dependent on the time profile.
In the case of a multipulse structure of the time profile, several windows may
be analyzed separately.}.} { Having chosen this window, we scan over its whole
support the time distribution
of all events, shifted by $\Delta t=\tau_{l} E^{l}$, and sum the energies
of events in the
window. This procedure is repeated for many values of $\tau_l$,
chosen so that the shifts $\Delta t$ match the precision of the arrival-time
measurements, thus defining the `energy cost function' (ECF).
The maximum of the ECF indicates the value of $\tau_l$ that best recovers
the signal, in the sense of maximizing its power (amount of energy in a
window of a given time width $t_{2}-t_{1}$). This
procedure is then repeated for many Monte-Carlo (MC)
data samples generated by
applying to the measured neutrino energies the estimated Gaussian
errors. A typical ECF for one particular MC realization, as well as the
typical distribution of the positions of the maxima of the ECFs for many
energy-smeared MC realizations, are illustrated in subsection~\ref{future_g_SN}
(see Fig.~\ref{ECF} and Fig.~\ref{ECF_position}). }
We perform this procedure for
different energy weightings $E^{n}$, where $n=0,1,2$, { summing up either
the numbers of events, the energies or the squares of the energies in
the time window selected}, so as to optimize the
errors placed on the scale of Lorentz violation.
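A schematic implementation of the window selection and the ECF scan might
look as follows; the toy data, grid choice and weighting $n$ are illustrative
assumptions, not the exact analysis code:

```python
import numpy as np

def ks_window(t):
    # KS-style window: the interval between the minimum and maximum of
    # CDF(data) - CDF(uniform), computed on the unbinned, sorted times.
    ts = np.sort(np.asarray(t, dtype=float))
    d = np.arange(1, ts.size + 1) / ts.size \
        - (ts - ts[0]) / (ts[-1] - ts[0])
    lo, hi = ts[np.argmin(d)], ts[np.argmax(d)]
    return min(lo, hi), max(lo, hi)

def ecf(t, E, tau, window, l=1, n=1):
    # Energy cost function: sum of E^n over events whose shifted times
    # t - tau * E^l fall inside the chosen window.
    t1, t2 = window
    shifted = t - tau * E ** l
    return np.sum(E[(shifted >= t1) & (shifted <= t2)] ** n)

# Toy signal: a pulse of events near t = 5 s on a uniform background.
rng = np.random.default_rng(1)
t = np.concatenate([rng.normal(5.0, 0.2, 300), rng.uniform(0.0, 10.0, 300)])
E = rng.uniform(5.0, 40.0, 600)   # MeV

window = ks_window(t)
tau_grid = np.linspace(-0.02, 0.02, 81)
best = tau_grid[np.argmax([ecf(t, E, tau, window) for tau in tau_grid])]
print(window, best)
```

The grid spacing of the $\tau$ scan would in practice be matched to the
precision of the arrival-time measurements, as described above.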
\subsection{Data Analysis}
For the analysis of SN1987a we use the uncertainties in Table~2,
which were taken from~\cite{Loredo:2001rx}. In the case of a
possible galactic supernova, we consider the Super-Kamiokande (SK)
water Cerenkov detector, and we use the detector properties given
in~\cite{raffeltforplusrev,Tomas:2003xn}, where the energy
uncertainties are modelled as $\sigma_{E}^{det}=\sqrt{E_{0}E}$,
where $E_{0}=0.22$~MeV. We note that the uncertainties in the time
measurements are in general much less than the statistical and
energy uncertainties, and we therefore neglect them in our analysis.
\begin{table}[h]
\begin{center}
\begin{tabular}{c c}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{IMB}\\
\hline
t (s) & E (MeV) & $\sigma_{E}$ (MeV) \\
\hline
$t\equiv0.0$ & 38 & 7 \\
0.412 & 37 & 7 \\
0.650 & 28 & 6 \\
1.141 & 39 & 7 \\
1.562 & 36 & 9 \\
2.684 & 36 & 6 \\
5.010 & 19 & 5 \\
5.582 & 22 & 5 \\
\hline
\multicolumn{3}{|c|}{Baksan}\\
\hline
t (s) & E (MeV) & $\sigma_{E}$ (MeV) \\
\hline
$t\equiv0.0$ & 12.0 & 2.4 \\
0.435 & 17.9 & 3.6 \\
1.710 & 23.5 & 4.7 \\
7.687 & 17.6 & 3.5 \\
9.099 & 10.3 & 4.1 \\
\hline
\end{tabular}
&
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|}{Kamiokande II}\\
\hline
t (s) & E (MeV) & $\sigma_{E}$ (MeV) \\
\hline
$t\equiv0.0$ & 20.0 & 2.9 \\
0.107 & 13.5 & 3.2 \\
0.303 & 7.5 & 2.0 \\
0.324 & 9.2 & 2.7 \\
0.507 & 12.8 & 2.9 \\
1.541 & 35.4 & 8.0 \\
1.728 & 21.0 & 4.2 \\
1.915 & 19.8 & 3.2 \\
9.219 & 8.6 & 2.7 \\
10.433 & 13.0& 2.6 \\
12.439 & 8.9 & 1.9 \\
\hline
\multicolumn{3}{c}{}\\
\multicolumn{3}{c}{}\\
\multicolumn{3}{c}{}\\
\multicolumn{3}{c}{}\\
\end{tabular}
\\
\end{tabular}
\end{center}\label{tab:data}
\caption{\it The measured neutrino data from SN1987a, where we have
omitted the events identified previously as background, and in each
data set we define $t\equiv0.0$~s for the first event.}
\end{table}
\subsubsection{SN1987a}\label{subsection:sn1987}
Neutrinos from SN1987a were detected in three detectors, KII, IMB
and Baksan. The times and energies of the events are given in
Table~2. The minimum dispersion was calculated 1000 times for each
data set to include the smearing from uncertainties. As an example,
Fig.~\ref{fig:SN1987aKII} shows this smearing for the KII data set.
From these distributions we can determine the best fit and the
error, which are summarized in Table \ref{tab:SN1987a}. We analyze
similarly the data from IMB and Baksan. As there is an uncertainty
in the relative time measurements of each detector, we analyze each
data set independently using the minimal dispersion method and then
combine them to quote the final best fit and error, as shown in
Table~\ref{tab:SN1987a}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{KIIplot.eps}
\end{center}
\caption{\it The distribution of $\tau_{min}$ for 1000 Monte Carlo
simulations of the KII data on neutrinos from SN1987a, including the
smearing due to energy uncertainties.} \label{fig:SN1987aKII}
\end{figure}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Data set & \multicolumn{2}{|c|}{$\tau_{1}({\rm s\cdot MeV^{-1})}$} &
\multicolumn{2}{|c|}{$\tau_{2}(10^{-3}{\rm s\cdot MeV^{-2}})$} \\
\cline{2-5}
& Best fit & Error & Best fit & Error \\
\hline
Kamiokande II & -0.0233307 & 0.197601 & -0.685 & 2.935 \\
IMB & -0.00417622 & 0.121513 & -0.308 & 1.601 \\
Baksan & 0.0574167 & 0.47789 & 2.704 & 8.105 \\
\hline
Combined & -0.00643648 & 0.101162 & -0.304 & 1.385 \\
\hline
\end{tabular}
\end{center}
\caption{\it The best fits to the SN1987a neutrino data obtained using the
minimal dispersion method.}\label{tab:SN1987a}
\end{table}
On the basis of this combined analysis, Fig.~\ref{fig:SN1987aCL}
shows the region which is excluded by the SN1987a data. Taking the
distance to the supernova as $L=(51.3\pm1.2)$~kpc, the scale at
which Lorentz violation may enter the neutrino sector is constrained
to be
\begin{equation}
M_{\nu QG1} > 2.7 \times 10^{10}~{\rm GeV~or}~M_{\nu QG1}
> 2.5 \times 10^{10}~{\rm GeV}
\end{equation}
at the $95\%$ C.L. for the linear subluminal and superluminal models
respectively. The corresponding limits for the quadratic models are
\begin{equation}
M_{\nu QG2} > 4.6 \times 10^{4}~{\rm GeV~or}~M_{\nu QG2} > 4.1
\times 10^{4}~{\rm GeV}
\end{equation}
at the $95\%$ C.L. for the subluminal and superluminal versions,
respectively.
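For reference, these bounds can be checked against the combined fit in
Table~\ref{tab:SN1987a}: with $L=51.3$~kpc the light travel time is
$L/c\simeq 5.3\times 10^{12}$~s, and a 95\% C.L. limit
$\tau_{1}^{95\%}\simeq 0.197~{\rm s\cdot MeV^{-1}}$, roughly $1.95\sigma$ of
the combined error, corresponds to
\[
M_{\nu QG1}=\frac{L/c}{\tau_{1}^{95\%}}
\simeq\frac{5.3\times 10^{12}~{\rm s}}{0.197~{\rm s\cdot MeV^{-1}}}
\simeq 2.7\times 10^{13}~{\rm MeV}=2.7\times 10^{10}~{\rm GeV},
\]
in agreement with the quoted linear bounds.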
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\epsfig{file=SN1987MdiffCL.eps,width=0.4\linewidth} &
\epsfig{file=SN1987MdiffCLQuad.eps,width=0.4\linewidth} \\
\end{tabular}
\caption{\it The regions of parameter space excluded by SN1987a, for
subluminal (dashing) and superluminal (black) linear (left) and
quadratic (right) models.} \label{fig:SN1987aCL}
\end{figure}
\subsection{A Possible Future Galactic Supernova} \label{future_g_SN}
\begin{figure}[h]
\centering \epsfig{file=numberevents.eps,width=0.8\linewidth}
\caption{\it The time distribution of events predicted by our Monte
Carlo simulation for the case of subluminal Lorentz violation at the
mass scales $M=10^{10}$~GeV and $M=10^{11}$~GeV.} \label{fig:events}
\end{figure}
The detection of a galactic supernova would provide improved
sensitivity to the scale at which Lorentz violation might enter the
neutrino sector, due to an increase in the number of neutrinos which
would be detected: the current neutrino detectors are larger than those
used to detect neutrinos from SN1987a, and the next supernova is
expected to be closer to the Earth than SN1987a. However, these effects
would be partially offset because $\tau_{l}\propto L$, and therefore the
time-energy shift will be reduced if, as expected, the supernova takes
place within the galactic disc at a distance $\sim 10$~kpc, compared to
SN1987a in the LMC at a distance of $\sim 51$~kpc. For definiteness, we
use here a Monte Carlo simulation
of the Super-Kamiokande (SK) neutrino detector, but note that other
neutrino detectors could also probe this
physics~\cite{NeutrinoDetectors}. Simulations estimate that the
number of events detected in SK from a supernova at 10~kpc would be
of the order of 10,000 \cite{Ikeda:2007sa}. In order to analyze at
what scales Lorentz violation could be probed by the detection of
galactic supernova neutrinos, we made Monte Carlo simulations with
various levels of linear and quadratic Lorentz violation. We used
the energy spectra of neutrinos from the Livermore
simulation~\cite{Totani:1997vj}, which are shown in
Fig.~\ref{fig:SNSpectrum}, and the detector properties given
in~\cite{Tomas:2003xn}.
\begin{figure} [htb]
\begin{center}
\epsfig{file=SNSpectrum.eps,width=0.7\textwidth}
\end{center}
\caption{\it The neutrino energy spectra from the Livermore
simulation~\cite{Totani:1997vj}.} \label{fig:SNSpectrum}
\end{figure}
We show in Fig.~\ref{fig:events} results from our Monte Carlo
simulation including both charged-current and neutral-current events
for linear subluminal Lorentz violation at the energy scales $M_{\nu
QG1}=10^{10}$~GeV and $M_{\nu QG1}=10^{11}$~GeV, including
oscillations corresponding to the normal hierarchy and assuming that
the atmospheric resonance is adiabatic. The signal is spread out
and shifted in time, as we would expect. The overall time shift is
unobservable, since it is defined relative to the signal in the
absence of Lorentz violation, which in practice cannot be measured.
We have applied the MD and the maximal ECF methods with various
energy weightings to the Monte Carlo data with $M_{QG1}=10^{10}$~GeV
in order to estimate the level of Lorentz violation.
Fig.~\ref{ECF} shows the
ECF for one realization of the energy-smeared sample obtained applying MC to the
measured neutrino energies with the Gaussian errors expected from SK.
It exhibits a clear maximum, whose position may be estimated by fitting it
with a Gaussian profile in the peak vicinity. Fig.~\ref{ECF_position} shows the
results of such fits to the
ECFs constructed for the 1000
energy-smeared realizations. From this distribution we can derive
the preferred value of $\tau_l$.
\begin{figure} [htb]
\begin{center}
\epsfig{file=ECF01splitzoom.eps,width=0.7\textwidth}
\end{center}
\caption{\it The ECF linearly weighted with energy for one realization of the
simulated time profile of Fig.~\ref{fig:events}, with the neutrino energies
smeared by MC according to the expected energy resolution of SK, for the case
of a linearly energy-dependent neutrino velocity.} \label{ECF}
\end{figure}
\begin{figure} [htb]
\begin{center}
\epsfig{file=ECF01.eps,width=0.7\textwidth}
\end{center}
\caption{\it The distribution of the positions of the maxima from fits of
ECFs like that in~Fig.~\ref{ECF}, for 1000 realizations of the simulated time
profile of Fig.~\ref{fig:events} with the neutrino energies smeared by MC.} \label{ECF_position}
\end{figure}
\begin{figure} [htb]
\begin{center}
\epsfig{file=Events600.eps,width=0.7\textwidth}
\end{center}
\caption{\it The time profile of 600 events, as could be detected by SK from a
future extragalactic supernova occurring at a distance of 40~kpc from the
Earth. The events are simulated according to the energy spectrum given
in~Fig.~\ref{fig:SNSpectrum}, with a linearly energy-dependent propagation
effect encoded at the level $\tau_1 = 5.5\ s\cdot MeV^{-1}$. } \label{600}
\end{figure}
\begin{figure} [htb]
\begin{center}
\epsfig{file=MinDisp600.eps,width=0.7\textwidth}
\end{center}
\caption{\it The dispersion (\ref{disp_tau}) versus $\tau$ for
one realization of the simulated time profile of Fig.~\ref{600}, with the
neutrino energies smeared by
MC according to the expected energy resolution of SK, for the case of
a linearly energy-dependent neutrino velocity.}
\label{disp600}
\end{figure}
The results are
summarized in Table~\ref{tab:MCmethod}, where we have defined ${\hat
m_{l}} \equiv M_{\nu QGl}/M_{\nu QGl}^{true}$, where $M_{\nu
QGl}^{true}$ is the true scale of Lorentz violation and $M_{\nu
QGl}$ is that deduced from the analysis method. Comparing these
results, we find that the maximal ECF technique has greater
sensitivity than the MD method, and that the linear energy weighting
has the greatest sensitivity among the ECF analyses. We therefore
use this in the following.
We have performed simulations for both the normal and inverted mass
hierarchies, with and without the spectral splits caused by neutrino
self-interactions, for the extreme cases $P_{H}=0.0$ and
$P_{H}=1.0$, and analysed them using the ECF method. The
corresponding results are summarized in
Table~\ref{tab:MChierarchies}, where we see that Lorentz violation
can be probed with similar sensitivity for all mass hierarchies.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Method & $95\%$ C.L. \\
\hline
Minimal Dispersion (MD) & $0.60 < {\hat m_{1}} < 2.37$ \\
ECF 0th order & $0.90 < {\hat m_{1}} < 1.29$ \\
ECF 1st order & $0.88 < {\hat m_{1}} < 1.26$ \\
ECF 2nd order & $0.87 < {\hat m_{1}} < 1.27$ \\
\hline
\end{tabular}
\caption{\it The $95\%$ C.L. ranges of ${\hat m_{l}} \equiv M_{\nu
QGl}/M_{\nu QGl}^{true}$ obtained using the different dispersion
methods and various energy weights for a Monte Carlo simulation of a
possible future galactic supernova for $P=0.0$, assuming the normal
mass hierarchy and $M_{\nu QG1}=10^{10}$~GeV.} \label{tab:MCmethod}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Mass hierarchy & $95\%$ C.L. \\
\hline
NH $P=0.0$ & $ 0.90 < {\hat m_{1}} < 1.29$ \\
NH $P=1.0$ & $ 0.90 < {\hat m_{1}} < 1.28$ \\
IH $P=0.0$ & $ 0.91 < {\hat m_{1}} < 1.26$ \\
IH SS $P=0.0$ & $ 0.90 < {\hat m_{1}} < 1.27$ \\
IH SS $P=1.0$ & $ 0.91 < {\hat m_{1}} < 1.28$ \\
\hline
\end{tabular}
\caption{\it The $95\%$ C.L. for ${\hat m_{l}} \equiv M_{\nu
QGl}/M_{\nu QGl}^{true}$ obtained using the ECF method for a Monte
Carlo simulation of a possible future galactic supernova, for the
normal (NH) and inverted hierarchies (IH), and including the effect
of a spectral split (SS), where $P$ is the level-crossing
probability, and NH $P=1.0$ is equivalent to IH $P=1.0$.}
\label{tab:MChierarchies}
\end{center}
\end{table}
The top three rows of Table~\ref{tab:MClinearQuad} show the results
of our analysis for the linear cases $M_{\nu QG1}=(10^{10}, 10^{11},
10^{12})$~GeV, using the maximal ECF method with no energy
weighting, and making linear and quadratic fits. We see that data
from a future galactic supernova could place strong $95\%$ C.L.
limits on the range of $M_{\nu QG1}$ if it is lower than
$10^{11}$~GeV. In the limit of negligible Lorentz violation ($M_{\nu
QG1} \ge 10^{12}$~GeV), we find the lower limits $M_{\nu QG1}> 2.2
\times 10^{11}$~GeV and $M_{\nu QG1}> 4.2 \times 10^{11}$~GeV at the
$95\%$ C.L. for subluminal and superluminal models, respectively.
The bottom three rows of Table~\ref{tab:MClinearQuad} show the
corresponding results for the quadratic cases $M_{\nu
QG2}=(10^{4.5}, 10^{5}, 10^{5.5})$~GeV, again using the maximal ECF
method with no energy weighting. We see that data from a future
galactic supernova could place strong $95\%$ C.L. limits on the
range of $M_{\nu QG2}$ if it is lower than $10^{5}$~GeV. In the case
of large $M_{\nu QG2}$, we find the lower limits $M_{\nu QG2}> 2.3
\times 10^{5}$~GeV and $M_{\nu QG2}> 3.9 \times 10^{5}$~GeV at the
$95\%$ C.L. for subluminal and superluminal models, respectively, in
the quadratic case.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Model & $95\%$ C.L. \\
\hline
$M_{\nu QG1}=10^{10}{\rm GeV}$ & $ 0.90 < {\hat m_{1}} < 1.29$ \\
$M_{\nu QG1}=10^{11}{\rm GeV}$ & $ 0.64 < {\hat m_{1}} < 1.93$ \\
$M_{\nu QG1}=10^{12}{\rm GeV}$ & $ 0.22 < {\hat m_{1}},~ 0.42 < {\hat m_{1}^{super}}$
\\
$M_{\nu QG2}=10^{4.5}{\rm GeV}$ & $0.93 < {\hat m_{2}} < 1.23$\\
$M_{\nu QG2}=10^{5}{\rm GeV}$ & $ 0.65 > {\hat m_{2}},~ 2.3 < {\hat m_{2}^{super}}$\\
$M_{\nu QG2}=10^{5.5}{\rm GeV}$ & $ 0.19 > {\hat m_{2}},~ 0.72 < {\hat m_{2}^{super}}$\\
\hline
\end{tabular}
\caption{ \it
The $95\%$ C.L. limits on $M_{\nu QG1}$ and $M_{\nu QG2}$ obtained
using the KS statistic and the ECF method, for subluminal Lorentz
violation with certain input choices of $M_{\nu QG1}$ (top three
rows) and $M_{\nu QG2}$ (bottom three rows). We give the $95\%$ C.L.
limits for subluminal (superluminal) propagation as ${\hat m_{1}}$
(${\hat m_{1}^{super}}$); if a limit for ${\hat m_{1}^{super}}$ is
not given then superluminal propagation has been ruled out at the
$95\%$ C.L..} \label{tab:MClinearQuad}
\end{center}
\end{table}
{ Although the ECF
method is more sensitive than the
MD method, it is not applicable to a statistically poor data set.
The ECF method is best for the analysis of a feature
in a distribution superposed on a uniform background, and the
extraction procedure is possible only with a relatively
representative (i.e., large) sample of events. This is demonstrated by
simulating a possible future extragalactic supernova which might take place
at a distance similar to that of SN 1987a. The simulation has been
performed in such a way as to have a sample with sufficient statistics to claim at least a
3-$\sigma$ detection of Lorentz violation in
neutrino propagation. This would need about 600 events
for the linear case, corresponding, assuming the sensitivity of SK,
to a supernova at a distance of about 40~kpc from the
Earth. An expected time profile is presented in Fig.~\ref{600}. The
signal Fig.~\ref{600} contains 600 events and the time distribution encodes
a linearly energy-dependent propagation effect at the level of $\tau_1 = 5.5\
s\cdot MeV^{-1}$, corresponding to $M_{\nu QG1}=7 \times 10^{9}\ GeV$. This
distribution does not demonstrate any significant feature that one could extract in a
time window to be analyzed using the ECF. Therefore, we apply the MD method, which
is better for a signal with poor statistics. The typical behaviour of the
dispersion~(\ref{disp_tau}) versus $\tau$ for one realization of the
energy-smeared sample of the 600 simulated events is presented
in Fig.~\ref{disp600}. The distribution of $\tau_{min}$ given by
(\ref{eq:taumin}) of 1000 MC simulations similar to
Fig.~\ref{fig:SN1987aKII} recovers, in this case, the encoded signal
$\tau_1 = 5.5\ s\cdot MeV^{-1}$ ($M_{\nu QG1}=7 \times 10^{9}\ GeV$) at the
3-$\sigma$ level. A similar simulation for
the quadratic case would require about 400 events, which would
correspond to a SN at a distance of about 50~kpc from the Earth for the
SK efficiency. A 3-$\sigma$ signal could be recovered if the dispersion effect
was at the level $\tau_2 = 0.1\ s\cdot MeV^{-2}$, which corresponds to $M_{\nu
QG2}=7 \times 10^{3}\ GeV$.
The minimal 3-$\sigma$ discovery statistics, which amount to
600 (400) events for a linear (quadratic) energy dependence, are set for the MD
method by the uncertainty in the denominator of (\ref{eq:taumin}), which reads
$\approx 5/\sqrt{N}$ ($\approx 4/\sqrt{N}$)
for either the simulated events or events from SN 1987a, where $N$
is the number of detected events. This means that, for the statistics of
SN 1987a, a Lorentz-violating signal could be detected only at about
the 1-$\sigma$ C.L., corresponding to the bounds obtained in the previous
subsection. In the case of limited statistics like SN 1987a,
it is possible to estimate similar limits on Lorentz violation without the full
MD machinery used
in~\ref{subsection:sn1987}. However, such an estimate would implicitly
assume that the dispersion of the initial signal at the source is known. One
could rely on computer simulations of a supernova
explosion~\cite{Totani:1997vj}, but this would introduce an element of
model-dependent information into the analysis. The methods considered here do not
assume any knowledge of the true profile (spread) of the neutrino signal at the
source: instead, they remove any propagation effect that may be encoded in the
time profile.}
\section{CNGS and the OPERA Experiment}
In this Section we discuss the sensitivities to Lorentz violation in
neutrino propagation that could be provided by the OPERA experiment
in the CNGS neutrino beam. We first discuss the sensitivity to
Lorentz violation that could be obtained using the spill structure
alone, without taking into account its bunch substructure. In a
second step, we consider how this bunch substructure could be
exploited to improve the sensitivity, which could be possible if the
timing resolution currently expected for the OPERA detector could be
improved significantly.
We first recall some of the details of the pioneering analysis of
the neutrino velocity in a long-baseline neutrino beam that has been
published by the MINOS collaboration using the NuMI
beam~\cite{Rebel:2008th}. This analysis compared the absolute
timings of the detected neutrino events in the near and far
detectors. The arrival times in the near detector provide a direct
measurement of the neutrino intensity time-profile, consisting of
either 5 or 6 batches separated by short gaps within a 9.7~$\mu s$
long spill. The near and far clocks were synchronized absolutely by
means of Global Positioning Satellite (GPS) receivers. The resulting
systematic error of $\pm$64~ns was dominated by uncertainties in the
delays in the optical fibres that ran between the surface antennae
and the underground detectors. Including the jitter of the two GPS
clocks, the total relative time uncertainty was $\sigma=$150~ns.
This analysis measured $(v-c)/c = (5.1 \pm 2.9) \times 10^{-5}$ at
the 68\% C.L., or $-2.4 \times 10^{-5} < (v - c)/c < 12.6 \times
10^{-5}$ at the 99\% C.L., at an average neutrino energy of 3~GeV
\cite{Rebel:2008th}. In the case of linear Lorentz violation, this
would correspond approximately to $M_{\nu QG1} > 1.2 (4.2) \times
10^5$~GeV in the case of subluminal (superluminal) propagation.
\subsection{CNGS Beam Characteristics}
The energy spectrum of the calculated CNGS $\nu_\mu$ flux is
reproduced in Fig.~\ref{fig:spectrum}. Its average neutrino energy
is $\sim 17$~GeV, significantly higher than that of the NuMI beam.
Since the CNGS baseline is almost identical to that of the NuMI beam,
this gives some advantage to OPERA, assuming that it can attain
similar or better timing properties. We also recall that the CNGS
beam is produced by extracting the SPS beam during spills of length
$10.5~\mu$s (10500~ns). Within each spill, the beam is extracted in
2100 bunches separated by 5~ns. Each individual bunch has a
$4-\sigma$ duration of 2~ns, corresponding to a Gaussian RMS width
of 0.25~ns \cite{Meddahi:2007zz}.
\begin{figure} [htb]
\begin{center}
\epsfig{file=nu_spectrum.eps,width=0.8\textwidth}
\end{center}
\caption{\it The expected CNGS neutrino beam energy
spectrum~\cite{Meddahi:2007zz}.} \label{fig:spectrum}
\end{figure}
\subsection{Spill Analysis}
We introduce a `slicing estimator', based on the fact that if some
energy-dependent time delay is encoded into the time structure of
the spill by propagation of the neutrinos before detection, one
should observe a systematic increase in the overall time delay of
events as their energies grow. Therefore, we propose cutting the
energy spectrum of the neutrino beam into a number of energy slices,
and searching for a systematic delay in the mean arrival times of
the events belonging to different energy slices that increases with
the average energy of the slice.
In order to illustrate this idea, we perform a simple exercise
simulating the sensitivity of the slicing estimator for a time delay
depending linearly on the neutrino energy: $\Delta t=\tau E$,
assuming $\approx 2 \times 10^4$ charged-current events, as are
expected to be observed in the 1.8 kton OPERA detector over 5 years of
exposure time to the CNGS beam. We envisage superposing all the CNGS
spills with a relative timing error $\delta t$. Since each spill has
2100 bunches, we expect about 10 events on average due to each set
of superposed bunches. As a starting-point, before incorporating the
relative timing error, the timing of each event has been smeared
using a Gaussian distribution with standard deviation 0.25~ns,
reflecting the bunch spread. We display in Fig.~\ref{fig:comb} a
sample of events in our simulation, before incorporating the
relative timing error and any delay in propagation. The 5~ns
internal time structure of the spill is clearly visible.
\begin{figure} [htb]
\begin{center}
\epsfig{file=zoom_bunch_str.eps,width=0.8\textwidth}
\end{center}
\caption{\it A superposition of the production times of neutrinos in
CNGS spills reflects the bunch structure of the CNGS
beam~\cite{Meddahi:2007zz}.} \label{fig:comb}
\end{figure}
We now incorporate the uncertainty in the relative timing of the
bunch extraction and the detection of an event in the detector. The
overall uncertainty has three components: an uncertainty in the
extraction time relative to a standard clock at CERN, an uncertainty
in the relative timing of clocks at CERN and the LNGS provided by
the GPS system, and the uncertainty in the detector timing relative
to a standard clock in the LNGS. With the current beam
instrumentation, implementation of GPS and detector resolution, it
is expected that this will be similar to that achieved by MINOS in
the NuMI beam, namely $\sim 100$~ns. Such a timing error renders
essentially invisible the internal bunch structure of the CNGS
spill, which looks indistinguishable from a uniform distribution
generated with the same statistics, as shown in the upper panel of
Fig.~\ref{fig:smear}.
\begin{figure} [thb]
\begin{center}
\epsfig{file=batch_shape.eps,width=0.8\textwidth}
\end{center}
\caption{\it The time structure of events in the CNGS beam,
including a 100~ns timing uncertainty without (upper panel) Lorentz
violation in neutrino propagation, and (lower panel) with a linearly
energy-dependent time delay during neutrino propagation at the level
of $\tau =5$~ns/GeV.} \label{fig:smear}
\end{figure}
We next demonstrate in the lower panel of Fig.~\ref{fig:smear} the
effect of a time delay during neutrino propagation at the level of
$\tau_l =5$~ns/GeV, as would occur if $M_{\nu QG1} = 4.8\times 10^5\ {\rm GeV}$.
This would correspond to a total delay $\sim 100$~ns at
the average energy of the CNGS neutrino beam. We see clearly its
smearing effect at the beginning and end of the spill, due to the
later arrivals of the more energetic neutrinos. Our `slicing
estimator' aims to quantify this effect.
We smear the events with an energy resolution of 20\%, and then cut
the sample into slices of about 1000 events each with increasing
energies. The asterisks in Fig.~\ref{fig:slice} show the mean
arrival times of each slice, relative to the mean time of the
superposed spills, using one particular smearing of the timing with
a Gaussian error $\delta t = 100$~ns. The triangles in
Fig.~\ref{fig:slice}, on the other hand, show the mean arrival times
of events in each energy slice if the propagation delays caused by an
assumed value of $\tau =5$~ns/GeV are included. We see clear
differences between the asterisks and the triangles, which increase
with the energies of the slices.
\begin{figure} [thb]
\begin{center}
\epsfig{file=distr_orig_shift.eps,width=0.8\textwidth}
\end{center}
\caption{\it The mean arrival times of 1000-event slices with
increasing energies without Lorentz violation in the neutrino
propagation (asterisks) and with the effect of a time delay during
neutrino propagation at the level of $\tau =5$~ns/GeV (triangles). The latter
corresponds to $M_{\nu QG1} = 4.8\times 10^5\ {\rm GeV}$. One particular simulation
of the OPERA experiment is shown: others are similar, exhibiting the
expected statistical fluctuations.}
\label{fig:slice}
\end{figure}
By making many realizations of the event sample with the Gaussian
$\delta t = 100$~ns smearing, one can understand the significance of
the shifts in the mean positions of the slices. Fig.~\ref{fig:shift}
shows the energy dependence of the shifts in the mean timings of the
slices of 1000 events with a delay $\tau_l =5$~ns/GeV encoded. These
points may be fitted to a straight line
\begin{equation}
\label{linfit} \Delta\langle t\rangle =\tau_l\langle E\rangle +b.
\end{equation}
In general, when choosing the number of events for each slice, one
has to strike a balance between the statistics of each subsample
(which determines the precision of the determination of the mean
arrival time of each slice), and the number of subsamples to be
included in the fit. We choose the statistics of each slice so as to
give comparable error bars for each energy bin. The propagation
effect of interest to us is reflected in the slope $\tau_l$, while
the intercept is an overall shift that has no physical significance.
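As a concrete illustration, the slicing fit~(\ref{linfit}) can be sketched in a few lines. The event count, energy spectrum and resolutions below are toy assumptions loosely mimicking the numbers quoted above, not the actual OPERA simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: arrival times carry an encoded linear delay tau_l * E plus
# the 100 ns Gaussian timing smearing; energies have a 20% resolution.
TAU_L_TRUE, N_EVENTS = 5.0, 20_000            # ns/GeV, events (assumed)
E_true = rng.exponential(17.0, N_EVENTS)      # GeV, stand-in spectrum
E_meas = E_true * (1.0 + 0.20 * rng.standard_normal(N_EVENTS))
t = TAU_L_TRUE * E_true + 100.0 * rng.standard_normal(N_EVENTS)  # ns

# Slice into ~1000-event bins of increasing measured energy and take the
# mean arrival time of each slice.
order = np.argsort(E_meas)
slices = np.array_split(order, N_EVENTS // 1000)
mean_E = np.array([E_meas[s].mean() for s in slices])
mean_t = np.array([t[s].mean() for s in slices])
err_t = np.array([t[s].std(ddof=1) / np.sqrt(len(s)) for s in slices])

# Weighted straight-line fit Delta<t> = tau_l * <E> + b.
(tau_l_fit, b_fit), cov = np.polyfit(mean_E, mean_t, 1, w=1.0 / err_t, cov=True)
```

With these toy inputs the fitted slope comes out close to the encoded 5~ns/GeV, biased slightly low because the slices are defined on the smeared rather than the true energies.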
The sensitivity of the experiment to linear Lorentz violation at,
say, the 95\% confidence level (C.L.) may be estimated by finding
the value of the parameter $\tau_l$ which yields a fitted slope
parameter that differs from a horizontal line ($\tau_l=0$) by
$1.95\sigma$ or more. We show in Fig.~\ref{fig:ellipses} the
confidence contours corresponding to 68\%, 95\% and 99\%
sensitivity levels in the $(b, \tau_l)$ plane. From the upper
(lower) edge of the corresponding ellipse one obtains $\tau_{\rm
l95\%}=4.9(2.6)\ {\rm ns/GeV}$ at the 95\% C.L. for the subluminal
(superluminal) propagation schemes, corresponding via
\begin{equation}
\label{linscale} M_{\nu QG1}=\frac{L_{\rm CNGS}}{c}\tau_{\rm
l}^{-1}=2.4\times 10^6\left(\frac{ns~GeV^{-1}}{\tau_{\rm l}}\right)
~{\rm GeV}
\end{equation}
to values of the linear Lorentz-violating scale $M_{\nu QG1} =
4.9(9.2)\times 10^5\ {\rm GeV}$ for the subluminal (superluminal)
case, yielding a mean sensitivity~\footnote{Since the CNGS spill is
in principle time symmetric, the estimated sensitivities for sub-
and superluminal propagation should be the same. The difference
between these numbers reflects the finite size of the simulated
sample. Here and subsequently we quote the means of our subluminal and
superluminal limits as estimates of the CNGS sensitivity.} to
$M_{\nu QG1}\simeq 7\times 10^5\ {\rm GeV}$.
\begin{figure} [thb]
\begin{center}
\epsfig{file=fit.eps,width=0.8\textwidth}
\end{center}
\caption{\it The measured shifts in the average arrival times of
neutrinos in 1000-event slices with increasing energies, assuming a
time delay during neutrino propagation at the level of $\tau
=5$~ns/GeV.} \label{fig:shift}
\end{figure}
It is important to note that the slope and intercept are
anticorrelated in such a fit, as shown in Fig.~\ref{fig:ellipses}.
Our conservative estimate of the limits corresponds to the upper
(lower) edges of the ellipse.
\begin{figure} [thb]
\begin{center}
\epsfig{file=ellipse_100_100.eps,width=0.8\textwidth}
\end{center}
\caption{\it The 68\% (dashed dotted line), 95\% (dashed line) and
99\% (solid line) sensitivity contours for the case of linear
energy-dependent fit~(\ref{linfit}).} \label{fig:ellipses}
\end{figure}
If the velocity of the neutrino depends quadratically on the energy
of the neutrino, the slices should obey a parabolic fit
\begin{equation}
\label{quadrfit} \Delta\langle t\rangle =\tau_q\langle E\rangle^2
+c.
\end{equation}
Here the propagation effect of interest is parameterized by
$\tau_q$, while an overall shift is reflected in the constant $c$.
\begin{figure} [thb]
\begin{center}
\epsfig{file=ellipse_quadratic.eps,width=0.8\textwidth}
\end{center}
\caption{\it The same as in Fig.~\ref{fig:ellipses} calculated for
the sensitivity of the quadratically energy-dependent
fit~(\ref{quadrfit}).} \label{fig:ellipses2}
\end{figure}
The sensitivity contours at 68\%, 95\% and 99\% C.L. are presented in
Fig.~\ref{fig:ellipses2}. According to the formula
\begin{equation}
\label{quadscale} M_{\nu QG2}=\sqrt{\frac{L_{\rm CNGS}}{c}\tau_{\rm
q}^{-1}}=1.6\times 10^3\sqrt{\frac{ns~GeV^{-2}}{\tau_{\rm q}}}~{\rm
GeV},
\end{equation}
after substituting $\tau_{\rm q95\%}=0.066(0.022)\ {\rm ns/GeV^2}$, we obtain
$M_{\nu QG2}= 6.2(11)\times 10^3\ {\rm GeV}\simeq 8\times 10^3\ {\rm
GeV}$.
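The conversions~(\ref{linscale}) and~(\ref{quadscale}) are simple enough to sketch numerically; the 730~km CERN--LNGS baseline used here is approximate:

```python
# Convert fitted delay parameters into Lorentz-violation scales, following
# the two conversion formulae above; L_CNGS = 730 km is approximate.
L_CNGS_KM = 730.0
C_KM_PER_S = 2.998e5
TOF_NS = L_CNGS_KM / C_KM_PER_S * 1e9      # time of flight, ~2.4e6 ns

def m_qg1_gev(tau_l):
    """Linear scale M (GeV) from a delay tau_l in ns/GeV."""
    return TOF_NS / tau_l

def m_qg2_gev(tau_q):
    """Quadratic scale M (GeV) from a delay tau_q in ns/GeV^2."""
    return (TOF_NS / tau_q) ** 0.5
```

For instance, $\tau_{\rm l}=4.9$~ns/GeV maps to $M_{\nu QG1}\approx 4.9\times 10^5$~GeV and $\tau_{\rm q}=0.066$~ns/GeV$^2$ to $M_{\nu QG2}\approx 6.2\times 10^3$~GeV, reproducing the numbers quoted above.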
The stability of the slicing estimator has been checked by
generating several data sets that have linear or quadratic
dispersion effects artificially encoded. To test our level of
sensitivity, we chose the Lorentz-violating parameters to be close
to our estimated sensitivity levels for the case where dispersion
effects are absent. The encoded values have been
recovered for the linear (\ref{linfit}) and quadratic
(\ref{quadrfit}) fits to slices containing the same numbers of
events. Slight variations in the numbers of events in the individual
slices do not change substantially the levels of sensitivity
estimated for 1000-event bins. Another check has been performed
using the minimal dispersion method described in Section 2.2.1. This
method has been applied to the whole sample of about $2\times 10^5$
events expected to occur in the rock upstream of the OPERA detector,
and results very similar to those of the slicing estimator have been
obtained. Although the whole data sample is very rich statistically,
the time distribution, given the 100~ns time uncertainty assumed in
the current analysis, is still featureless apart from the edges of the
spill~\footnote{For this reason, the ECF technique described in
Section 2 is inapplicable.}.
Another check has been made by analyzing the distortion of the shape of
the spill at its edges. For this purpose, we analyze two independent
histograms of the type shown in Fig.~\ref{fig:smear}. One of the
histograms is treated as a reference, while the other is shifted by
introducing a time delay $\tau_{\rm l(q)}$ for every event,
corresponding to the linear (quadratic) propagation scheme. The
shifted histogram is then compared to the reference (unshifted)
histogram, and the parameter $\tau_{\rm l(q)}$ increased until the
difference between the two histograms reaches the 95\% C.L.
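A minimal sketch of this histogram comparison, using a Pearson chi-square between the two histograms as the difference measure; the flat spill shape, energy spectrum and statistics here are illustrative assumptions, whereas the actual analysis uses the simulated spill profile:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent realizations of a flat 10.5-us spill are histogrammed;
# one is shifted event by event by tau_l * E (linear scheme).
BINS = np.arange(0.0, 10501.0, 50.0)          # 50 ns bins over the spill

def spill_hist(n_events, tau_l=0.0):
    t = rng.uniform(0.0, 10500.0, n_events)   # ns, idealized flat spill
    E = rng.exponential(17.0, n_events)       # GeV, toy energy spectrum
    return np.histogram(t + tau_l * E, BINS)[0]

ref = spill_hist(20000)                       # reference (unshifted)

def chi2_to_ref(tau_l):
    """Pearson chi-square between shifted and reference histograms."""
    shifted = spill_hist(20000, tau_l)
    mask = (ref + shifted) > 0
    return np.sum((ref[mask] - shifted[mask]) ** 2 / (ref + shifted)[mask])

# A large encoded delay distorts the spill edges and drives up chi^2;
# tau_l would be raised until chi^2 crosses the 95% C.L. threshold.
```

Only the edge bins of a flat spill are sensitive to the shift, which is why this method loses sensitivity relative to the slicing estimator.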
\begin{figure}[thb]
\begin{center}
\epsfig{file=edges.eps,width=1.0\textwidth}
\end{center}
\caption{\it The left (right) spill edges fitted using 20000
detector events for the scenario with a linear energy dependence of the
neutrino velocity. The solid red line is the reference histogram,
while the points represent the shifted data. The solid black line
represents the probability distribution function.} \label{corr_cngs}
\end{figure}
We find that this edge-fitting method has a factor of 5 less
sensitivity than that obtained earlier with the slicing estimator or
the MD method.
We recall that the OPERA detector may also be used to measure the
arrival times of muons from $2 \times 10^5$ neutrino events in the
rock upstream of the detector. Information on the neutrino energy is
missing in this measurement. Therefore, one cannot employ methods
involving time-energy correlation information such as the slicing
estimator. Methods requiring an energy-dependent time shift of the
data, like the MD method, are also not applicable in this case,
again because events in the rock do not have measured energies.
Nevertheless, one can use methods that compare overall the time
shift of the simulated data to the measured time distribution of the
rock events. In this spirit, applying to the $2\times 10^5$ expected
rock events the edge-fitting procedure described in the previous
paragraph, we find a sensitivity to $M_{\nu QG1} \approx 2.4\times
10^6\ {\rm GeV}$, about three times better than previously, for the
sensitivity to linear energy dependence, and the same level of
sensitivity for the quadratic energy dependence.
One can also modify the MD method for analyzing rock events. Namely,
one could generate the reference spill and introduce an
energy-dependent shift via the parameter $\tau_{\rm l(q)}$ so as to
make the dispersion of the shifted reference spill match as closely
as possible the dispersion of the events measured in the rock.
However, due to statistical uncertainties the dispersion of each
reference spill will differ from the dispersion of the rock events.
If this uncertainty is much smaller than the increase in the dispersion
of the rock events due to Lorentz violation, then this method can be
used. However, this limits the sensitivity to $M_{\nu QG1} \simeq
3\times 10^5\ {\rm GeV}$ for the linear propagation scheme, which is
not as sensitive as the other methods we have described above. On the
other hand, the sensitivity of this modified MD method approaches
$M_{\nu QG2} \simeq 7\times 10^3\ {\rm GeV}$, which is similar to
that of the slicing estimator.
\subsection{Bunch Analysis}
We now explore the additional sensitivity that OPERA could obtain if
it could achieve a correlation between the SPS RF bunch structure
and the detector at the nanosecond level. Sub-ns resolution could be
obtained in OPERA with the help of additional specialized timing
detectors such as TOF hodoscopes~\footnote{We point out that it is
sufficient to refer all measured far times to a well-defined plane
perpendicular to the beam axis.}. However, synchronizing the SPS and
OPERA clocks with such a precision over a period of 5~years is a
challenging task. With the new IEEE Standard Precision Time Protocol
(PTP) IEEE1588~\cite{IEEE1588} it is possible to achieve time
synchronization in the range of 100~ns on an Ethernet network but
not better; GPS clock synchronization at the ns level is also highly
demanding. Standard `One-Way' GPS techniques~\cite{OWGPS} can reach
a precision of $\sim 20$~ns at best. Devices known as GPS
disciplined oscillators (GPSDO)~\cite{GPSDO}, containing
high-quality temperature-controlled local oscillators, steered to
agree with the onboard oscillators of the GPS satellites, can
provide ultra-precise standard frequencies that could reproduce the
CERN RF frequency. A more elegant but less standard method is called
`Common-View' GPS~\cite{OWGPS}: in this case two clocks (e.g., one
at CERN and the other at LNGS) view simultaneously the same GPS
satellite, thereby cancelling out the common errors (e.g., the
satellite's local clock). It has been shown that the data recorded
by the two GPS receivers can be processed offline to provide a
timing uncertainty $\lesssim 5$~ns. Finally it has been shown that
`Carrier-Phase' GPS measurements~\cite{CPGPS}, which use the carrier
frequencies instead of the codes transmitted by the satellites, can
achieve synchronization of clocks with uncertainties $\sim 0.5$~ns
at the cost of extensive post processing. Turning to ground based
solutions, the most precise atomic clock (the NIST-F1 used to define
the UTC) has a long-term accuracy of $5\times 10^{-16}$ or $\sim
75$~ns over 5 years. It would therefore not be sufficient to bring
two {\it a priori} synchronized clocks to the near and far locations
to define the arrival times with the required long-term stability.
Alternatively, next-generation accelerators, e.g., free electron
lasers such as XFELs that aim to generate X-ray pulses with pulse
durations down to tens of femtoseconds, will meet the challenge of
finding new methods of ultra-stable timing stabilization,
synchronization and distribution over several kilometres. These
systems will most likely rely on optical timing synchronization. We
can therefore imagine a phase-locked loop RF oscillator located at
the far location remotely locked to the SPS RF system. These two
systems would be connected and locked via stabilized optical fibre
links~\footnote{We note that the temperature dependence of the
refractive index of an optical fibre is typically $10^{-6}$/K, which
corresponds to a drift of 5~ns for 1000~km and a temperature
stability of 1$^\circ$C.}. To conclude, a combination of space- or
ground-based solutions could probably provide the possible
synchronization of the CNGS and OPERA clocks, and allow for
systematic cross-checks to be performed.
We now discuss how the sensitivity of the previous analysis could be
improved by taking into account the 5-ns bunch structure of the CNGS
spills. In Fig.~\ref{fig:batch_smeared} we present one particular
realization of a sample of simulated events which incorporates a
relative timing error of 1~ns.
\begin{figure} [thb]
\begin{center}
\epsfig{file=zoom_bunch_smear.eps,width=0.8\textwidth}
\end{center}
\caption{\it A particular realization of the bunch structure with
$\approx 1$~ns relative time uncertainty incorporated. The histogram
is binned with a resolution suitable for resolving the bunch
structure.} \label{fig:batch_smeared}
\end{figure}
Although the periodic bunch structure survives, the signal itself
represents a time series with a relatively low signal-to-noise
ratio. The latter implies that the proper deconvolution to extract
isolated features cannot be made. In other words, there is a
problem in fitting the fine structure of the signal with an
analytical function. Such a situation has been widely investigated
and applied to the temporal profiles of gamma-ray bursters
(GRBs)~\cite{ccf_grb}. We therefore apply a cross-correlation
function (CCF) method similar to that described in~\cite{ccf_grb}
but differing only in details of its adaptation. Namely, we
introduce the temporal correlation of two time series $A(t)$ and
$B(t-\tau_{l}E^{l})$
\begin{equation}
\label{ccf} {\rm CCF(\tau_{l(q)})}=\frac{\langle (A(t)-\bar
A(t))(B(t-\tau_{l}E^{l})-\bar B(t-\tau_{l}E^{l}))
\rangle}{\sigma_{A(t)}\sigma_{B(t-\tau_{l}E^{l})}},
\end{equation}
where $A(t)$ is a Monte Carlo simulation of the events with no
dispersion effects, $B(t-\tau_{l}E^{l})$ is the simulated data which
has the time shift required to invert the effect of the
energy-dependent dispersion, $\bar A(t)$ and $\bar
B(t-\tau_{l}E^{l})$ are the mean values of the corresponding time
series, and $\sigma_{A(t)}$ and $\sigma_{B(t-\tau_{l}E^{l})}$ are
the standard deviations from these mean values. We average over
several Monte Carlo simulations to include the statistical
uncertainties as well as performing time and energy smearing due to
the uncertainty in these measurements.
We calculate the ${\rm CCF(\tau_{l(q)})}$ as a function of
$\tau_{l}$ and find its maximum value. The value of $\tau_{l}$ which
maximizes the CCF is an estimate of the true value of $\tau_{l}$. To
find this estimate we fit a Gaussian to the peak of the resulting
CCF function shown in Fig.~\ref{fig:ccf_lin}.
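A toy version of this scan, assuming a 5~ns bunch period, $\approx 1$~ns time resolution and an illustrative encoded delay; the sample size and energy spectrum are stand-ins for the simulation described above:

```python
import numpy as np

rng = np.random.default_rng(2)

T_BUNCH, N_EV, TAU_TRUE = 5.0, 50_000, 0.05   # ns, events, ns/GeV (assumed)

def sample(tau=0.0):
    """One realization of bunch-structured arrival times, delayed by tau * E."""
    E = rng.exponential(17.0, N_EV)                  # GeV, toy spectrum
    t = T_BUNCH * rng.integers(0, 2000, N_EV)        # bunch centres
    t = t + rng.normal(0.0, 0.3, N_EV)               # intrinsic bunch width
    t = t + rng.normal(0.0, 1.0, N_EV)               # ~1 ns time resolution
    return t + tau * E, E

bins = np.arange(0.0, T_BUNCH * 2000 + 1.0, 1.0)     # 1-ns binning
A = np.histogram(sample()[0], bins)[0].astype(float) # undispersed reference
t_dat, E_dat = sample(TAU_TRUE)                      # "data" with encoded delay

def ccf(tau):
    """Pearson correlation of the reference with the back-shifted data."""
    B = np.histogram(t_dat - tau * E_dat, bins)[0].astype(float)
    return np.corrcoef(A, B)[0, 1]

# Scan the shift parameter and take the location of the CCF maximum as
# the estimate of the encoded delay.
grid = np.arange(0.0, 0.21, 0.01)
tau_hat = grid[np.argmax([ccf(x) for x in grid])]
```

When the residual dispersion $(\tau-\tau_{\rm true})E$ becomes comparable to the bunch period, the periodic modulation washes out and the correlation drops, which is what localizes the maximum.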
\begin{figure} [thb]
\begin{center}
\epsfig{file=ccf1.eps,width=0.8\textwidth}
\end{center}
\caption{\it The Gaussian fit to the CCF calculated for the case of
a linear energy dependence with time smearing $\approx 1$~ns. }
\label{fig:ccf_lin}
\end{figure}
Each realization produced an independent measurement of the CCF at a
given value of the shift parameter. The process of iteration for
every value of the shift parameter in Fig.~\ref{fig:ccf_lin} was
repeated until the resulting distribution approached a normal
distribution, which typically took about 100 runs. Using these
normal distributions the values and the standard deviations (error
bars) presented in Fig.~\ref{fig:ccf_lin} have been calculated.
The sensitivity of the CCF can then be estimated by the precision of
the position of the maximum for the Gaussian fit in
Fig.~\ref{fig:ccf_lin}. For the case of linear energy dispersion,
the maximum of the CCF is found at $\tau_{\rm l}^{\rm max}=-0.033\pm
0.036\ {\rm ns/GeV}$ if no time shift is encoded in the simulated data.
For superluminal propagation, when $\tau_{\rm l}>0$, one can
estimate $\tau_{\rm l95\%}^{\rm su}=0.037\ {\rm ns/GeV}$, which
corresponds via (\ref{linscale}) to $M_{\nu QG1} \approx 6.6\times
10^7\ {\rm GeV}$. For the subluminal case, one obtains $\tau_{\rm
l95\%}^{\rm sb}=0.1\ {\rm ns/GeV}$, which corresponds to $M_{\nu QG1}
\approx 2.4\times 10^7\ {\rm GeV}$. The same CCF procedure may also
be applied to the quadratic case, as shown in
Fig.~\ref{fig:ccf_quadr}.
\begin{figure} [thb]
\begin{center}
\epsfig{file=ccf_qudratic1_gauss.eps,width=0.8\textwidth}
\end{center}
\caption{\it The same as in Fig.~\ref{fig:ccf_lin} for the quadratic
energy dependence.} \label{fig:ccf_quadr}
\end{figure}
The limits deduced from the fit in Fig.~\ref{fig:ccf_quadr} are $M_{\nu
QG2}= 3.6(4.9) \times 10^{4}\ {\rm GeV}\simeq 4\times 10^{4}\ {\rm
GeV}$.
Repeating the CCF procedure for a time resolution above 2~ns, one
observes no maximum correlation in a reasonable range of the shift
parameter, as seen in Fig.~\ref{fig:ccf_flat}.
\begin{figure} [thb]
\begin{center}
\epsfig{file=ccf_flat.eps,width=0.8\textwidth}
\end{center}
\caption{\it The profile of the CCF calculated with a 2~ns time
resolution for the case of linear energy dependence in neutrino
propagation.} \label{fig:ccf_flat}
\end{figure}
From this one can conclude that the bunch structure degenerates into
an essentially uniform distribution as soon as the time resolution
becomes bigger than $\approx 2$~ns, in which case the slicing
estimator described in the previous subsection should be applied.
If the same time resolution $\sim 1$~ns can be attained
for events occurring in the rock upstream from the OPERA detector,
the CCF method can also be used to analyze these data, which should
amount to some $2 \times 10^5$ events.
\begin{figure} [thb]
\begin{center}
\epsfig{file=zoom_bunch_smear1.eps,width=0.8\textwidth}
\end{center}
\caption{\it A simulated realization of the bunch structure for rock
events, incorporating a timing uncertainty $\approx 1$~ns. The
histogram is binned with a resolution suitable for resolving the
bunch structure.} \label{fig:batch_smeared1}
\end{figure}
We see in Fig.~\ref{fig:batch_smeared1} that the bunch structure of
the rock events is clearly visible if a time resolution $\approx
1$~ns is achieved, despite the fact that the energies of the
neutrinos colliding in the rock cannot be determined.
\begin{figure} [thb]
\begin{center}
\epsfig{file=ccf_rock.eps,width=0.8\textwidth}
\end{center}
\caption{\it The CCF for rock events with time resolution $\approx
1$~ns in the case of linear energy dependence, compared with a
Gaussian fit.} \label{fig:ccf_rock}
\end{figure}
The CCF calculated for the rock events is presented in
Fig.~\ref{fig:ccf_rock}, together with a Gaussian fit. The
sensitivities to Lorentz violation now attain the levels of $M_{\nu
QG1}= 4.3(3.2)\times 10^8\ {\rm GeV}\simeq 4\times 10^8\ {\rm GeV}$
for the linear case, and $M_{\nu QG2}= 8.8(4.3) \times 10^{5}\ {\rm
GeV}\simeq 7 \times 10^{5}\ {\rm GeV}$ for the quadratic case. The
sensitivity in the quadratic case is significantly better than the
sensitivity estimated for a possible future galactic supernova.
\section{Conclusions}
We find from the SN1987a data lower limits on the scale of linear
Lorentz violation in the neutrino sector, namely $M_{\nu QG1}
> 2.68 \times 10^{10}$~GeV and $M_{\nu QG1}
> 2.51 \times 10^{10}$~GeV at the $95\%$ C.L. in the subluminal and
superluminal cases respectively. The corresponding limits for the
quadratic models are $M_{\nu QG2} > 4.62 \times 10^{4}$~GeV and
$M_{\nu QG2} > 4.13 \times 10^{4}$~GeV at the $95\%$ C.L. in the
subluminal and superluminal cases, respectively. We have also used a
Monte Carlo simulation of a galactic supernova at 10~kpc to estimate
how accurately Lorentz violation could be probed in the future. In
such a case one would observe more events because of the larger
fiducial volume of the SK detector compared to the previous
generation of detectors, and because the next observable supernova
is likely to be inside the galaxy and hence closer than SN1987a. On
the other hand, if the next supernova is closer than SN1987a then
the energy-dependent time shift due to Lorentz violation will be
reduced, reducing also the expected sensitivity. We performed
simulations for both the normal and inverted mass hierarchies and
for both an adiabatic and a non-adiabatic atmospheric resonance. In
all scenarios it would be possible to probe Lorentz violation using
the methods described in this paper. We used the minimal dispersion
(MD) method and the maximal ECF method with several energy
weightings, and have shown that using the latter with a linear energy
weighting has the greatest sensitivity. Using this method we have
shown that we could place limits up to $M_{\nu QG1}> 2.2 \times
10^{11}$~GeV and $M_{\nu QG1}> 4.2 \times 10^{11}$~GeV at the $95\%$
C.L. for the subluminal and superluminal cases, respectively, for
linear models of Lorentz violation, and $M_{\nu QG2}> 2.3 \times
10^{5}$~GeV and $M_{\nu QG2}> 3.9 \times 10^{5}$~GeV at the $95\%$
C.L. for the subluminal and superluminal cases, respectively, for
quadratic models of Lorentz violation.
We have then explored the sensitivity to Lorentz
violation in neutrino propagation that could be obtained using data
from the OPERA detector in the CNGS beam. By comparison with the
result already obtained by MINOS in the NuMI beam, OPERA would
benefit from the higher energy of the CNGS beam, the larger
statistics we assume, and the possibility of exploiting the bunch
structure of the CNGS beam that we have explored. We find that,
using standard clock synchronization techniques, the sensitivity of
the OPERA experiment would reach $M_{\nu QG1} \sim 7 \times
10^{5}$~GeV ($M_{\nu QG2} \sim 8 \times 10^{3}$~GeV) after 5 years
of nominal running. If the time structure of the SPS RF bunches
within the extracted CNGS spills of 10.5~$\mu$s could be exploited,
which would require reducing the timing uncertainty to $\sim 1$~ns,
these figures would be significantly improved to $M_{\nu QG1}\sim 5
\times 10^{7}$~GeV ($M_{\nu QG2} \sim 4 \times 10^{4}$~GeV). Using
events in the rock upstream of OPERA, and again assuming a time
resolution $\sim 1$~ns, the sensitivities to Lorentz violation could
be further improved to $M_{\nu QG1} \simeq 4\times 10^8\ {\rm GeV}$
for the linear case and $M_{\nu QG2} \simeq 7 \times 10^{5}\ {\rm
GeV}$ for the quadratic case. While still inferior to the
sensitivity of the supernova limits in the linear case, the OPERA
rock sensitivity in the quadratic case would exceed even that
possible using data from a future galactic supernova. This and the
fact that any accelerator limit benefits from better-understood
experimental conditions would motivate the effort that would be
required to achieve nanosecond time resolution for the OPERA/CNGS
combination.
\section*{Acknowledgements}
We thank N.~E.~Mavromatos, D.~V.~Nanopoulos and E.~K.~G.~Sarkisyan for
discussions on related subjects. N.~H. and A.~S.~S. thank the CERN Theory
Division for its kind hospitality. N.~H. also thanks STFC for the Studentship
Award PPA/S/S/2004/03926, and the UniverseNet network for supporting
this research project by a Marie Curie Early Stage Research Training
Fellowship of the European Community's Sixth Framework Programme
under contract (MRTN-CT-2006-0355863-UniverseNet).
% arXiv:0805.1718
\section{Introduction}
The Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory (BNL) in Upton, NY has just finished its 8$^{th}$ year of operation. The PHENIX Collaboration, with to date 476 scientists from 67 institutions and 14 nations, has collected in the recent d--Au and p--p run a record 577 TB of data and 275 billion events. The Run--8 d--Au sample represents a 30-fold increase over the Run--3 data set, despite the addition of new detectors which are described in Sect.~\ref{sec:newdet} in more detail. These large data samples allow us to probe the properties of the new matter with precision measurements of the distributions and systematic studies of their dependence on colliding system, centrality, rapidity or even the reaction plane. RHIC also increased its luminosity through a better understanding of the machine and new techniques like stochastic cooling.
RHIC is likely the most versatile heavy ion collider in the world and has collided 4 different species at 6 different beam energies in its first 8 years.
Table~\ref{tab:runs} shows a summary of these first 8 years of PHENIX data taking.
\begin{table}[h]
\caption{Summary of the first 8 years of RHIC running for the PHENIX experiment}
\begin{center}
\begin{tabular}{|c|c|c|r|r|r|r|}\hline
\multirow{2}{*}{} &\multirow{2}{*}{Year} &\multirow{2}{*}{Species} & \multirow{2}{*}{$\sqrt{s}$ [GeV ]} & \multirow{2}{*}{$\int$Ldt}& N$_{tot}$&\multirow{2}{*}{Data Size} \\
& & & & & (sampled) & \\ \hline
Run--1 & 2000 &Au--Au & 130 & 1 $\mu$b$^{-1}$ & 10 M & 3 TB \\ \hline
\multirow{3}{*}
{Run--2} & \multirow{3}{*} {2001/02} & Au--Au & 200 & 24 $\mu$b$^{-1}$ & 170 M & 10 TB \\ \cline{3-7}
& & Au--Au & 19 & & $<$ 1 M & \\ \cline{3-7}
& & p--p & 200 & 0.15 pb$^{-1}$ & 3.7 B & 20 TB \\ \hline
\multirow{2}{*}
{Run--3} & \multirow{2}{*}{2002/03} & d--Au & 200 & 2.74 nb$^{-1}$ & 5.5 B & 46 TB \\ \cline{3-7}
& & p--p & 200 & 0.35 pb$^{-1}$ & 6.6 B & 35 TB \\ \hline
\multirow{2}{*}
{Run--4} & \multirow{2}{*} {2003/04} & Au--Au & 200 & 241 $\mu$b$^{-1}$ & 1.5 B & 270 TB \\ \cline{3-7}
& & Au--Au & 62.4 & 9 $\mu$b$^{-1}$ & 58 M & 10 TB \\ \hline
\multirow{4}{*}
{Run--5} & \multirow{4}{*}{2005} & Cu--Cu & 200 & 3 nb$^{-1}$ & 8.6 B & 173 TB \\ \cline{3-7}
& & Cu--Cu & 62.4 & 0.19 nb$^{-1}$ & 0.4 B & 48 TB \\ \cline{3-7}
& & Cu--Cu & 22.4 & 2.7 $\mu$b$^{-1}$ & 9 M & 1 TB \\ \cline{3-7}
& & p--p & 200 & 3.8 pb$^{-1}$ & 85 B & 262 TB \\ \hline
\multirow{2}{*}
{Run--6} & \multirow{2}{*}{2006} & p--p & 200 & 10.7 pb$^{-1}$ & 233 B & 310 TB \\ \cline{3-7}
& & p--p & 62.4 & 0.1 pb$^{-1}$ & 28 B & 25 TB \\ \hline
Run--7 & 2007 & Au--Au & 200 & 813 $\mu$b$^{-1}$ & 5.1 B & 650 TB \\ \hline
\multirow{3}{*}
{Run--8} & \multirow{3}{*}{2007/08} & d--Au & 200 & 80 nb$^{-1}$ & 160 B & 437 TB \\ \cline{3-7}
& & p--p & 200 & 5.2 pb$^{-1}$ & 115 B & 118 TB \\ \cline{3-7}
& & Au--Au & 9.2 & & & few k \\ \hline
\end{tabular}
\end{center}
\label{tab:runs}
\end{table}
The collision systems vary from the simple p--p and d--Au, where cold nuclear effects should be visible, which serve as a baseline and, via proper scaling, as a comparison to collisions of heavier ions, e.g. Cu--Cu and Au--Au. These comparisons should highlight the differences between scaled p--p collisions and the properties of the produced dense medium. In 2003 all four RHIC experiments published white papers \cite{whitepaper} to summarize their findings, which led to the announcement that a new phase of matter had been found \cite{perfectliquid}.
\section{New PHENIX detector subsystems} \label{sec:newdet}
\Fref{fig:beamview} and \ref{fig:sideview}
show the PHENIX detector in the 2007/2008 configuration. It consists of 4 spectrometer arms with 3 main magnets: two arms at mid-rapidity (East and West) with tracking, particle identification (PID) detectors and calorimeters for hadron, electron and photon detection, and two muon arms at forward angles (North and South). Details can be found in \cite{phenixnim}.
Several new detectors have been added over the past years to improve PID, the measurement of the reaction plane (RP) and $\pi^{0}$ identification at forward angles.
\begin{figure}[h]
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig1.pdf}
\caption{{\bf View of the PHENIX detector in beam direction.}}
\label{fig:beamview}
\end{minipage}%
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig2.pdf}
\caption{{\bf View of the PHENIX detector in side--view.}}
\label{fig:sideview}
\end{minipage}
\end{figure}
A time-of-flight (TOF-W) detector, based on multi-gap resistive plate chambers (MRPC) \cite{tofw}, was added to the PHENIX West central arm in 2007 to extend the PID to higher momenta, i.e. above 2--3 GeV/c. Previously, PID in the PHENIX West arm relied on a combination of a gas Ring Imaging Cherenkov (RICH) vessel, an aerogel detector (n=1.0114), and the electromagnetic calorimeter (EMCal), which left a gap in the pion-to-kaon separation between 3 and 5 GeV/c. One octant of these MRPCs with pad readout was installed, achieving a 75~ps time resolution, or 85~ps overall with the resolution of the Beam--Beam Counter (BBC), the interaction and TOF start detector, folded in. Several new results based on this detector have been presented at this conference \cite{Shengli,Valle}.
For the next few years PHENIX will use the new Reaction Plane Detector (RxNP) \cite{rxnp} to further improve the RP measurement and to improve triggering at lower energies, where the BBC and Zero--Degree Calorimeters (ZDC) are not efficient enough. The RxNP consists of 2$\times$2 rings with 12 scintillator
counters each, read out by 2$\times$24 photomultipliers. It covers the pseudorapidity windows $\eta=1.0\rightarrow1.5$ and $1.5\rightarrow2.8$.
Seeking to extend the detector coverage into the forward direction, PHENIX installed 412~PbWO$_{4}$ crystals into the forward tips, $3.1<|\eta|<3.7$, of each magnet piston in the North and South muon magnets. The main goal for the Muon Piston Calorimeter (MPC) \cite{mpc}, as it is called, is the reconstruction of $\pi^{0}$ and the search for spin asymmetries in p--p collisions. In heavy ion running, when the overall multiplicity is too high, it improves the measurement of the event reaction plane.
A detector which had a first engineering run in 2007 is the Hadron Blind Detector (HBD) \cite{hbd1,hbd2}. It is a windowless Cherenkov detector using pure $CF_{4}$ with a triple-GEM readout, where the topmost layer is coated with Cesium Iodide (CsI) to convert the Cherenkov photons into photo-electrons, which are in turn amplified by the GEMs with a gain of $\sim5\cdot10^{3}$. The HBD will be important in coming years for the measurement of low-mass electron pairs from the decay of light vector mesons ($\rho$, $\omega$, and $\phi$).
In the coming years PHENIX plans to install a silicon vertex tracking system, see \cite{Hubert}, a forward silicon--tungsten calorimeter, and a muon trigger based on resistive plate chambers.
\section{Global Observables}\label{sec:global}
Presentations at Quark Matter 1987 in Nordkirchen \cite{nordkirchen}, more than 20 years ago, concentrated on the first measurements of global observables like event multiplicity, transverse momentum and energy distributions, to understand if the energy densities reached were sufficient to form a QGP. More than ten years later, in 2000, further detailed measurements led to the announcement \cite{cern} that a new state of matter had been observed.
\begin{figure}[h]
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig3.pdf}
\caption{{\bf Transverse energy versus number of participating nucleons in Au~-~Au collisions.}}
\label{fig:etauau}
\end{minipage}%
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig4.pdf}
\caption{{\bf Transverse energy versus $\sqrt{s}$ for central Au~-~Au collisions.}}
\label{fig:etcent}
\end{minipage}
\end{figure}
PHENIX has previously published measurements of the total transverse energy, $E_{T}$, at $\sqrt{s}$~=~200, 130 and 13.9~GeV \cite{et1, et2}.
\Fref{fig:etauau} summarizes these measurements for Au--Au collisions as a function of the number of participating nucleons and adds the distribution for the fourth beam energy. $E_{T}$ increases with the number of participants, and more strongly at higher beam energy. Concentrating on the most central collisions,
\Fref{fig:etcent} shows $dE_{T}/(d\eta\ 0.5N_{p})$ as a function of $\sqrt{s}$ for several measurements, including the new PHENIX data point at $\sqrt{s}$~=~62.4~GeV. The measurement falls well in line with the previously observed linear dependence of the scaled $E_{T}$ on the log of $\sqrt{s}$.
Detailed studies of charged particle multiplicities in smaller and smaller rapidity windows are accessible with the large datasets. In this volume PHENIX presents fluctuation studies in overall multiplicities and particle ratios \cite{Homma}. Deviations from a monotonic behavior in these ratios should indicate a possible phase transition or critical change in the medium. In the current data nothing unusual was found.
A long-proven approach to studying the dynamics of heavy ion collisions is a Hanbury-Brown and Twiss (HBT) analysis \cite{hbt1, hbt2}. PHENIX presented first results in reconstructing the shape and dynamical evolution of the particle-emitting source by a 3--dimensional source imaging technique \cite{Lacey}.
Another method to estimate the source size is to measure the coalescence parameter, $B_2$, for deuterons \cite{deuterons, Valle}. Using the above-mentioned TOF-W detector, PHENIX could extend the existing (anti-)deuteron measurements to higher $p_T$ and multiple centrality bins. The inverse of the coalescence probability, $B_{2}^{-1}$, is a measure of the source size. It increases linearly with the number of participating nucleons in the collision, and the extracted radius parameter is compatible with HBT results on pion pairs.
\section{Flow}
A most surprising observation at RHIC was the strong elliptic flow, which led to the conclusion that the medium we are studying does not behave like a hot gas but rather like a strongly coupled liquid. In collisions at intermediate impact parameters the overlap region between the two nuclei is elliptically shaped in the transverse plane. This spatial anisotropy creates a pressure gradient which translates into a momentum anisotropy of the final-state particles. Experimentally this is measured, via a Fourier decomposition, from the distribution of the particle azimuthal angles $\phi$ with respect to the reaction-plane angle, $\Psi_{R}$, of the event, which is defined by the beam direction and the vector connecting the centers of the two nuclei \cite{flowintro}.
\begin{equation}
E\frac{d^{3}N}{dp^{3}}=\frac{1}{2\pi }\frac{dN}{p_{T}dp_{T}dy}\left[%
1+\sum_{n=1}^{\infty }2v_{n}(p_{T},y)\cos (n\phi )\right] \label{dndphi}
\end{equation}%
Because of the symmetry $\phi \leftrightarrow -\phi $ in the collision
geometry, sine terms do not appear in above expansion. Also the
odd-order anisotropic flows of particles at midrapidity vanish in collisions
with equal mass nuclei as a result of the additional symmetry $\phi
\leftrightarrow\phi +\pi $.
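In practice (a standard relation, quoted here for orientation), the coefficients in Eq. (\ref{dndphi}) are extracted as azimuthal averages with respect to the measured reaction plane,
\begin{equation}
v_n(p_T,y)=\left\langle\cos\left[n\left(\phi-\Psi_R\right)\right]\right\rangle\,,
\end{equation}
where the average runs over the particles in a given $(p_T,y)$ bin and the finite reaction-plane resolution has to be corrected for.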
\begin{figure}[h]
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig5.jpg}
\caption{{\bf $v_4$ as a function of $p_T$, $KE_T$ and $KE_{T}/n_{q}$}}
\label{fig:v4ket}
\end{minipage}
\end{figure}
The second coefficient in the Fourier expansion, $v_{2}$, is usually the largest and has been studied by all RHIC experiments. It has been observed that $v_{2}/n_{q}$ is a universal function of $KE_{T}/n_{q}$ for all studied particles, where $n_{q}$ is the number of constituent quarks in the particle and $KE_{T}=m_{T}-m_{0}$ is the transverse kinetic energy. This scaling was observed up to $KE_{T}\approx1$~GeV, an indication that a hydrodynamical description of the data was valid \cite{Taranenko}. New data from PHENIX \cite{Shengli} indicate that the scaling does not hold above 1~GeV, which corresponds to $p_{T} \approx 3$~GeV/c for a proton, the region where hard scattering becomes important.
The next higher term, $v_{4}$, is an important measure of whether an ideal hydrodynamical description is applicable in this momentum range. If it is, $v_{4}$ should follow the same scaling in $KE_{T}$ as $v_{2}$, but scaled with $n_{q}^{2}$, and, more importantly, should be equivalent to $v_{2}^{2} n_{q}^{-2}$, as demonstrated in the right panel of \Fref{fig:v4ket}.
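The two scaling statements above can be summarized compactly (our restatement of the relations discussed in the text),
\begin{equation}
\frac{v_2}{n_q}=f\!\left(\frac{KE_T}{n_q}\right)\,,\qquad
\frac{v_4}{n_q^2}=g\!\left(\frac{KE_T}{n_q}\right)\,,
\end{equation}
where in the ideal-hydrodynamic case $v_4$ is furthermore tied to $v_2^2$, so that the scaled $g$ carries no information independent of $f$.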
\begin{figure}[h]
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig6.pdf}
\caption{{\bf $v_2$ for $J/\Psi \rightarrow e^+e^-$} measured at mid-rapidity in Au--Au collisions with the PHENIX central arm detectors.
The lines indicate theoretical predictions as indicated in the figure.}
\label{fig:v2jpsi}
\end{minipage}
\end{figure}
\Fref{fig:v2jpsi} shows the first data on the $v_2$ of $J/\Psi \rightarrow e^+e^-$ in heavy ion collisions. The preliminary result of $v_2=-0.10\pm0.10\pm0.02$ is compatible with zero, but only 42\% of the data have been analyzed so far. The lines in \Fref{fig:v2jpsi} represent theoretical predictions as indicated. A PHENIX result on $J/\Psi \rightarrow \mu^+\mu^-$ is to be presented soon; see \cite{Sylvestre} for more details.
Instead of a Fourier transform of the angular particle distribution with respect to the reaction plane, a cumulant technique \cite{cumulants, Issah} can also be used to extract the anisotropic flow strength. The detectors used for the RP determination in PHENIX have a large enough rapidity gap to the tracking and PID detectors that non-flow effects are not important. A direct comparison should then indicate where non-flow effects set in. The cumulant $v_{2}$ starts to diverge from the RP $v_2$ at $p_{T} \approx 3.5$~GeV/c, indicating that non-flow effects, e.g. jets from hard scattering, become important. Incidentally this is also the region where the $KE_{T}/n_{q}$ scaling starts to fail.
\section{Jets}
PHENIX has studied the properties of jets at 200 and 62.4~GeV for p--p, d--Au, Cu--Cu, and Au--Au collisions \cite{jets0, jets1, jets2, jets3, Adare, McCumber, Pei}.
\begin{figure}[h]
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=0.6\linewidth]{AFranzFig7.jpg}
\caption{{\bf Near-side (squares) and away-side (circles) transverse momentum spectra for Au--Au (filled) and p--p (open) collisions
in different rapidity regions of the near- and away-side jets.
The lines represent fits to the data points, with the solid line indicating the fit to an inclusive $p_T$ distribution.}}
\label{fig:nearaway}
\end{minipage}
\end{figure}
Jets resulting from a hard scattering of partons are impossible to reconstruct directly in heavy-ion collisions because of the large background. These studies are therefore done via two-particle and two-plus-one-particle azimuthal correlations between a high-$p_T$ particle, assumed to be the leading particle of one jet arm, and all other particles, assumed to come from the same jet or from the recoil jet. The correlation functions have to be corrected for background and for flow, which is itself an angular correlation.
It has been observed in p--p and peripheral A--A collisions that, opposite to the trigger-particle jet (the near side), a slightly wider correlated distribution emerges (the away side). In central A--A collisions the momentum spectrum on the away side softens and the angular distribution widens even more. Several explanations for these effects have been presented at this conference. \Fref{fig:nearaway} shows a PHENIX comparison of the momentum distributions in two pseudo-rapidity, $\eta$, regions of the near and away sides in p--p and Au--Au collisions. The momentum spectra for Au--Au collisions on the near side but away from the main jet (upper, blue, solid points) and on the away side (lower, red, solid points) are softer than in p--p (open points) and close to the inclusive spectra (lines). This indicates that the momentum distributions of these particles have been softened by passage through the medium.
If the particle distributions and momenta are affected by their passage through the collision medium, then the distributions measured along the long versus the short axis of the collision ellipsoid should differ. \Fref{fig:rpcorrelations} shows preliminary PHENIX results on a two-particle correlation function where the data are binned in angular regions with respect to the reaction plane. Even though the $v_2$-dominated systematic error is large, a clear change in the shape of the distributions is visible.
\begin{figure}[ht]
\begin{minipage}[b]{\linewidth}
\centering
\includegraphics[width=\linewidth]{AFranzFig8.jpg}
\caption{{\bf Jet correlation functions for Au--Au collisions; the panels represent 15-degree slices from 0 to 90 degrees away from the reaction plane.}}
\label{fig:rpcorrelations}
\end{minipage}
\end{figure}
\section{Summary}
PHENIX has collected a vast sample of data from p--p to Au--Au collisions at various energies. The data show that we have created a dense medium which modifies the momenta and angular distributions of the produced particles. The medium also exhibits strongly coupled flow which affects all produced particles, even heavy quarks. PHENIX has presented multiple new and more detailed results at this conference and, with its current and future detector subsystems, will continue to uncover the details of this `perfect liquid'.
\section*{References}
\section{Introduction}
A large variety of physical systems can be described in terms of an
elastic string interacting with a quenched random potential. The role of
such a string can be played by a domain wall in a two-dimensional magnet,
a vortex line in a superconductor, a dislocation in a crystal, and so on;
however following Ref. \onlinecite{KZ} the systems of such a kind are
usually discussed under the generic name of a directed polymer in a random
medium. The unfading interest in this problem is additionally supported by
its resemblance to more complex systems with quenched disorder (e.g., spin
glasses), as well as by its close relation to the dynamics of a randomly
stirred fluid and to the problem of a stochastic growth (see Refs.
\onlinecite{Kardar-R} and \onlinecite{HHZ} for reviews).
One of the main objects of interest in the directed polymer problem is
$P_L(F)$, the free-energy distribution function for large polymer length
$L$. In particular, knowledge of this distribution function allows one
to draw conclusions about the distribution of displacements. The first
important step in the analysis of $P_L(F)$ was made twenty years ago by
Kardar, \cite{Kardar} who suggested that all moments of $P_L(F)$ can be
found by calculating the moments $Z_n\equiv \overline{Z^n}$ of the
distribution of the partition function $Z$ and proposed an asymptotically
exact method for the calculation of $Z_n$ in a $(1+1)\,$-dimensional
system (a string confined to a plane) with a $\delta$-correlated random
potential. However, soon after that Medina and Kardar \cite{MK} understood
(see also Ref. \onlinecite{DIGKB}) that the information provided by the
approach introduced in Ref. \onlinecite{Kardar} is insufficient for
finding any of the moments of $P_L(F)$. However, it allows one to find
\cite{Zhang} the tail of $P_L(F)$ at large negative $F$ (the left tail).
In such a situation the conclusions on the width of the distribution
function have to rely on the assumption that at large $L$ it acquires a
universal form,
\begin{equation} \label{P*}
P_L(F)=\frac{P_*(F/F_*)}{F_*}\;,
\end{equation}
incorporating the dependence on all parameters
through a single characteristic free-energy scale $F_*(L)\propto
L^\omega$, which therefore can be extracted from the known form of the
tail. The form of Eq. (\ref{P*}) assumes that $F$, the free energy of a
directed polymer in a given realization of the disorder, {is counted off
from its average, $\bar{F}(L)$ or, more precisely, from the linear in $L$
contribution to $\bar F$ (that is, $L\lim_{L\rightarrow\infty}[\bar
F(L)/L]$). The same convention is implied below.}
Only recently it has been understood \cite{KK} that the form of the tail
following from Zhang's analysis \cite{Zhang} is applicable only to the
description of the most distant part of the left tail and therefore
has no direct relation to the universal form of the distribution function
which is achieved in the limit of $L\rightarrow\infty$. At large but
finite $L$ the form of $P_L(F)$ given by Eq. (\ref{P*}) can be expected to
be achieved only for not too large fluctuations of $F$ [that is, for
$|F|\ll F_c(L)$ with $F_c(L)/F_*(L)$ tending to infinity with the increase
of $L$], whereas the behavior of $P_L(F)$ at $|F|\gg F_c(L)$ remains
nonuniversal and is not obliged to have anything in common with $P_*(F/F_*)$.
In particular, it can incorporate quite different characteristic
free-energy scales. Thus, the fact that Zhang's approach \cite{Zhang}
reproduces both the correct form of the left tail of $P_*(F/F_*)$ and the
correct estimate of the universal free energy scale $F_*(L)$ (which is the
only relevant free-energy scale inside the universal region), is no more
than a happy coincidence. In contrast to that, the behavior of the right
tail of $P_L(F)$ inside the universal region is qualitatively different
from its behavior in the nonuniversal part of the tail. \cite{KK}
In this article the analysis of the universal and non-universal tails of
$P_L(F)$ developed in Ref. \onlinecite{KK} is presented in more detail and
is also extended to the investigation of $(1+d)\,$-dimensional systems, in
which the polymer's displacement can be treated as a $d$-dimensional vector.
The article is organized as follows.
In Sec. \ref{II} we formulate the continuous model which is traditionally
applied for the description of the directed polymer problem and briefly
review its relation to the Kardar-Parisi-Zhang (KPZ) model \cite{KPZ}
of a stochastic growth, as well as to the Burgers turbulence problem.
Sec. \ref{OF} provides a short introduction to the optimal fluctuation
approach, which can be used for the description of the most distant
(non-universal) parts of the tails of $P_L(F)$. In Refs. \onlinecite{BFKL}
an analogous approach has been used to investigate the distribution of
velocity and its derivatives in the Burgers turbulence problem, which
however requires one to consider optimal fluctuations with completely
different structures than those studied here.
In Sec. \ref{fl} the optimal fluctuation approach is applied for the
analysis of the far-left tail of $P_L(F)$, and in Sec. \ref{fr} of the
far-right tail. Our main attention is focused on the systems with a
$\delta$-correlated random potential; however for $d\geq 2$ the problem
with purely $\delta$-functional correlations becomes ill-defined, so we also
consider the case when the random-potential correlations can be
characterized by a finite correlation radius.
For finding the universal parts of both tails one also has to look for
optimal fluctuations, but taking into account that in this regime the
parameters of the system have to be considered as scale dependent due to
their renormalization by fluctuations. This is done in Sec. \ref{ut}. The
validity of this approach is confirmed by the consistency of its
predictions with the results of the exact solution \cite{PS} of the
$(1+1)\,$-dimensional polynuclear growth (PNG) model, as well as by
obtaining identical estimates for $F_*(L)$ in the left and right tails.
The concluding Sec. \ref{Concl} is devoted to summarizing the results and
comparing them with some results of other authors, whereas in Appendix
\ref{RA} we discuss how some of the results of this work can be derived in
terms of the Kardar-Zhang replica approach. \cite{Kardar,Zhang}
Our main attention throughout this work is focused on a system with free
initial condition, that is, we assume that only one end of a string is
fixed, whereas the other one is free to fluctuate. In terms of the KPZ
problem \cite{KPZ} the same distribution function describes the
distribution of heights in the regime of a nonstationary growth in the
situation when an interface starts to grow from a flat configuration ($L$
being the total time of the growth). One only has to bear in mind that the
height (as defined in the standard form of the KPZ equation) and the free
energy of the directed polymer problem differ from each other by the sign.
Therefore, what we call here the left (right) tail of $P_L(F)$ in terms
of the KPZ problem corresponds to the right (left) tail of the height
distribution function.
Finally, Appendix \ref{FIC} is devoted to demonstrating that when
both end points of a directed polymer are fixed, the form of the left tail
of $P_L(F)$ remains basically the same as for free initial condition.
\section{The model \label{II}}
We consider an elastic string in a $(1+d)\,$-dimensional space
interacting with a random potential $V(t,{\bf x})$.
The coordinate along the average direction of the string is denoted $t$
for reasons which will become evident a few lines below.
Such a string can be described by the Hamiltonian,
\begin{equation} \label{H}
H
=\int_{0}^{t}dt' \left\{\frac{J}{2}\left[\frac{d{\bf x}(t')}{dt'}\right]^2
+V[t',{\bf x}(t')]\right\} \;,
\end{equation}
where the first term describes the elastic energy and the second one
the interaction with a random potential. Note that the form of the first
term in Eq. (\ref{H}) relies on the smallness of the angle between
the string and its preferred direction.
The partition function of a string which starts at $t=0$ and ends
at the point $(t,{\bf x})$ is then given by the functional integral,
\begin{equation} \label{z(t,x)}
z(t,{\bf x})=\int_{-\infty}^{+\infty} d{\bf x}' \,z(0,{\bf x}')
\int_{{\bf x}(0)={\bf x}'}^{{\bf x}(t)={\bf x}}{\cal
D}{\bf x}(t')\exp\left(-H/T\right)\,,
\end{equation}
where $T$ is the temperature.
Naturally, $z(t,{\bf x})$ depends on the initial condition at $t=0$.
The fixed initial condition, ${\bf x}(t=0)={\bf x}_0$, corresponds to
$z(0,{\bf x})=\delta({\bf x}-{\bf x}_0)$,
whereas the free initial condition (which implies the absence of any
restrictions on ${\bf x}$ at $t=0$) to
\begin{equation} \label{FBC}
z(0,{\bf x})=\mbox{const}\,.
\end{equation}
Since Eq. (\ref{z(t,x)}) has exactly the same form as the Euclidean
functional integral describing the motion of a quantum particle whose mass
is given by $J$ in a time-dependent random potential $V(t,{\bf x})$ (with
$t$ playing the role of imaginary time and $T$ that of Planck's constant
$\hbar$), the evolution of $z(t,{\bf x})$ with the increase in $t$ has to
be governed by the imaginary-time Schr\"{o}dinger equation
\begin{equation} \label{dz/dt}
-T\frac{\partial z}{\partial t}
=\left[-\frac{T^2}{2J}\nabla^2+V(t,{\bf x})\right]z(t,{\bf x})\,.
\end{equation}
As a consequence of this, the evolution of
the free energy corresponding to $z(t,{\bf x})$,
\begin{equation} \label{}
f(t,{\bf x})=-T\ln\left[z(t,{\bf x})\right]\,,
\end{equation}
is governed \cite{HHF} by
the Kardar-Parisi-Zhang (KPZ) equation, \cite{KPZ}
\begin{equation} \label{KPZ}
\frac{\partial f}{\partial t}+\frac{1}{2J}(\nabla f)^2-\nu \nabla^2 f
=V(t,{\bf x})\,,
\end{equation}
with the inverted sign of $f$, where $t$ plays the role of time
and $\nu\equiv{T}/{2J}$ of viscosity.
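The step from Eq. (\ref{dz/dt}) to Eq. (\ref{KPZ}) is the Cole--Hopf substitution; writing $z=e^{-f/T}$ one has
\begin{equation}
\nabla z=-\frac{z}{T}\nabla f\,,\qquad
\nabla^2 z=\frac{z}{T^2}(\nabla f)^2-\frac{z}{T}\nabla^2 f\,,\qquad
-T\frac{\partial z}{\partial t}=z\frac{\partial f}{\partial t}\,,
\end{equation}
so that substitution into Eq. (\ref{dz/dt}) and division by $z$ indeed yields Eq. (\ref{KPZ}) with $\nu=T/2J$.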
On the other hand, differentiation of Eq. (\ref{KPZ}) with respect
to ${\bf x}$ allows one to establish the equivalence \cite{HHF} between
the directed polymer problem and the Burgers equation \cite{Burgers}
with random potential force,
\begin{equation} \label{Burg}
\frac{\partial {\bf u}}{\partial t}+\frac{1}{2}\nabla {\bf u}^2
-\nu\nabla^2{\bf u}=\frac{1}{J}\nabla V(t,{\bf x})\,,
\end{equation}
where the vector
\begin{equation} \label{}
{\bf u}(t,{\bf x})\equiv\frac{1}{J}\nabla f(t,{\bf x})
\end{equation}
plays the role of velocity.
Note that in terms of the KPZ problem
the free initial condition (\ref{FBC})
corresponds to starting the growth from a flat
interface, $f(0,{\bf x})=\mbox{const}$, and in terms of the Burgers problem
to starting the evolution from a liquid at rest, ${\bf u}(0,{\bf x})=0$.
To simplify the analytical treatment, the statistics of the random potential
is usually assumed to be Gaussian with
\begin{equation} \label{VV}
\overline{V(t,{\bf x})}=0\,,~~~
\overline{V(t,{\bf x})V(t',{\bf x}')}=\delta(t-t')U({\bf x}-{\bf x}')\,,
\end{equation}
where an overbar denotes the average with respect to disorder.
{Our main attention below is focused on the case of purely
$\delta$-functional correlations, \makebox{$U({\bf x})=U_0\delta({\bf
x})$}. However, for $d\geq 2$ the problem with such a form of correlations
is ill-defined and needs a regularization, so we also consider the case when
$U({\bf x})$ can be characterized by a finite correlation radius $\xi$.
On the other hand, we always assume that the correlations in the $t$ direction
are $\delta$-functional, because in almost all situations considered below
the finiteness of the correlation radius in the $t$ direction can be ignored
as soon as it is small in comparison with the total length of
a string.}
\section{Optimal-fluctuation approach\label{OF}}
When the distribution of $V(t,{\bf x})$ is Gaussian and satisfies Eqs.
(\ref{VV}), the probability of any realization of $V(t,{\bf x})$ is
proportional to $\exp[-S\{V\}]$, where the action $S\{V\}$ is given by the
functional
\begin{equation} \label{S(V)}
S\{V\}
= \frac{1}{2}\int_{0}^{L} dt
\int\hspace*{-2mm}\int d{\bf x}\,d{\bf x}'\,
V(t,{\bf x})U^{-1}({\bf x}-{\bf x}')V(t,{\bf x}')\,.
\end{equation}
Here $U^{-1}({\bf x})$ denotes the function whose
convolution with $U({\bf x})$ is equal to $\delta({\bf x})$.
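Equivalently (a standard remark, added for clarity), in Fourier space the defining convolution becomes a product,
\begin{equation}
\int d{\bf x}''\,U^{-1}({\bf x}-{\bf x}'')\,U({\bf x}''-{\bf x}')=\delta({\bf x}-{\bf x}')
\;\;\Longleftrightarrow\;\;
\tilde{U}^{-1}({\bf k})=\frac{1}{\tilde{U}({\bf k})}\,;
\end{equation}
in particular, for $U({\bf x})=U_0\delta({\bf x})$ one simply has $U^{-1}({\bf x})=\delta({\bf x})/U_0$.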
Accordingly, the probability of any time evolution of $f(t,{\bf x})$ is
determined by the action $S\{f\}$,
which is obtained by replacing $V(t,{\bf x})$ in Eq. (\ref{S(V)})
by the left-hand side of the KPZ equation (\ref{KPZ}).
To find the optimal fluctuation, i.e., the one with the largest probability
(in the literature it is often called an ``instanton''),
one has to minimize $S\{f\}$ for the given boundary conditions at $t=0$
and $t=L$.
A convenient way to perform such a minimization consists in replacing
$S\{f\}$ by
\begin{eqnarray} \label{S(f,mu)}
S\{f,\mu\} \hspace*{-2mm} & = &\hspace*{-2mm} \int_{0}^{L} dt\left\{\int d{\bf x}\,
\left[\frac{\partial f}{\partial t}+\frac{1}{2J}(\nabla f)^2
-\nu \nabla^2 f\right]\mu(t,{\bf x})\right. \nonumber\\
\hspace*{-2mm} &-&\hspace*{-2mm} \left.\frac{1}{2}\int\hspace*{-2mm}\int d{\bf x}\,d{\bf x}'\,
\mu(t,{\bf x})U({\bf x}-{\bf x}')\mu(t,{\bf x}')\right\}
\end{eqnarray}
where $\mu(t,{\bf x})$ is an auxiliary field with respect to which
$S\{f,\mu\}$ also has to be extremized.
Variation of Eq. (\ref{S(f,mu)}) with respect to $\mu(t,{\bf x})$
reproduces the KPZ equation (\ref{KPZ}) with
\begin{equation} \label{V(mu)}
V(t,{\bf x})=\int d{\bf x}'\,U({\bf x}-{\bf x}')\mu(t,{\bf x}')\,,
\end{equation}
whereas its variation with respect to $f(t,{\bf x})$ leads to
\begin{equation} \label{dmu/dt}
{\partial \mu}/{\partial t}+\mbox{div}({\bf u}\mu)+\nu\nabla^2\mu = 0\,, \label{mu_t}
\end{equation}
where ${\bf u}(t,{\bf x})\equiv \nabla f(t,{\bf x})/J$ is the ``velocity''
entering the Burgers equation (\ref{Burg}).
The form of Eq. (\ref{mu_t}) implies that the integral of $\mu(t,{\bf x})$
over the whole space is a conserved quantity, whereas substitution
of Eq. (\ref{V(mu)}) into Eq. (\ref{S(V)})
shows that in terms of $\mu(t,{\bf x})$ the action can be rewritten as
\begin{equation} \label{S(mu)}
S\{\mu\}=\frac{1}{2}\int_{0}^{L} dt \int
\int d{\bf x}\,d{\bf x}'\,\mu(t,{\bf x})U({\bf x}-{\bf x}')\mu(t,{\bf x}')\,.
\end{equation}
In a system with $\delta$-functional correlations,
\makebox{$U(x)=U_0\delta({\bf x})$}, $V$ and $\mu$ differ from each other
only by a constant factor $U_0$, and accordingly,
Eq. (\ref{mu_t}) can be replaced by
\begin{equation} \label{V_t}
{\partial V}/{\partial t}+\mbox{div}({\bf u} V)+\nu \nabla^2 V=0\,.
\end{equation}
If the beginning of a polymer (at $t=0$) is not fastened to a particular
point and is free to fluctuate, the initial condition for the partition
function $z(t,{\bf x})$ has to be chosen in the form
$z(0,{\bf x})=\mbox{const}$.
In such a case to find the tails of $P_L(F)$
one has to find the solution of Eqs. (\ref{KPZ}) and (\ref{mu_t})
which satisfies the initial condition
\begin{equation} \label{ini}
f(0,{\bf x})=0\,,
\end{equation}
and the final condition
\begin{equation} \label{fin}
f(L,0)=F\,,
\end{equation}
where for the left tail $F<0$ and for the right tail $F>0$. Alternatively,
condition (\ref{fin}) can be imposed by the inclusion of the
$\delta$-functional factor,
\begin{equation} \label{}
\int d\lambda\exp\{ i\lambda[f(L,{\bf x}=0)-F]\}\,,
\end{equation}
into the functional integral defining the probability of a fluctuation.
In such a case condition (\ref{fin}) for $f(L,{\bf x})$
should be replaced by the condition for $\mu(L,{\bf x})$,
\begin{equation} \label{fin-2}
\mu(L,{\bf x})=\mu_0\delta({\bf x})\,,
\end{equation}
where, however, the value of
$\mu_0\propto\lambda$ has to be chosen to satisfy Eq. (\ref{fin}).
\section{Far-left tail \label{fl}}
It turns out that in the case of the left tail
the solution of Eqs. (\ref{KPZ}) and (\ref{mu_t})
which satisfies boundary conditions (\ref{ini}) and (\ref{fin})
can be constructed on the basis of the solution of these equations
in which the potential $V$ and all derivatives of $f$ do not depend on $t$,
which means that the time dependence of $f(t,{\bf x})$
is decoupled from its spatial dependence and is as trivial as possible,
\begin{equation} \label{f(x)0}
f(t,{\bf x})={E}(t-t_1)+f({\bf x})\,,
\end{equation}
where $t_1=\mbox{const}$ and ${E}=\mbox{const}<0$.
Below we for brevity call such solutions stationary.
For $f(t,{\bf x})$ of form (\ref{f(x)0})
the replacement
\begin{equation} \label{Psi(f)}
f({\bf x})=-T\ln\Psi({\bf x})\,
\end{equation}
transforms the KPZ equation (\ref{KPZ}) into a
stationary Schr\"{o}dinger equation:
\begin{equation} \label{Schr}
{E}\Psi=\hat{H}
\Psi\,,
\end{equation}
for a single-particle quantum-mechanical problem defined by the Hamiltonian
\begin{equation} \label{Hq}
\hat{H}=-\frac{T^2}{2J}\nabla^2+V({\bf x})\,,
\end{equation}
where $J$ plays the role of mass and $T$ of Planck's constant $\hbar$
[compare with (\ref{dz/dt})].
On the other hand, when both \makebox{${\bf u}=-(T/J)\nabla\Psi/\Psi$}
and $\mu$ do not depend on $t$,
Eq. (\ref{mu_t}) is automatically fulfilled as soon as
\begin{equation} \label{mu(psi)}
\mu({\bf x})\propto \Psi^2({\bf x})\,,
\end{equation}
which implies
\begin{equation} \label{V(mu)2}
V({\bf x})=
-\Lambda\int_{}^{}d{\bf x}'\,U({\bf x}-{\bf x}')\Psi^2({\bf x}')\,,
\end{equation}
where $\Lambda$ is an arbitrary constant.
Substitution of Eq. (\ref{V(mu)2}) into Eq. (\ref{Schr}) allows one
to replace them by a single nonlinear Schr\"{o}dinger equation,
\begin{equation}
{E}\Psi=-\frac{T^2}{2J}\nabla^2\Psi-\Lambda\Psi({\bf x})
\int_{-\infty}^{+\infty}d{\bf x}'\,U({\bf x}-{\bf x}')\Psi^2({\bf x}')\,.
\end{equation}
Equation (\ref{V(mu)2}) has been derived by Halperin and Lax \cite{HL}
when looking for the optimal fluctuation of the potential $V({\bf x})$,
which for the given value of the ground state energy \makebox{$E<0$}
of the quantum-mechanical Hamiltonian (\ref{Hq}) minimizes the functional
\begin{equation} \label{s(V)}
s\{V\}
= \frac{1}{2}\int_{}^{}\int_{}^{}d{\bf x}\,d{\bf x}'\,
V({\bf x})U^{-1}({\bf x}-{\bf x}')V({\bf x}')\,,
\end{equation}
determining the probability of $V({\bf x})$
(or, equivalently,
minimizes $E$ for the given value of $s\{V\}$).
Evidently, in terms of our problem $s\{V\}$ is related to the action
$S\{V\}$ defined by Eq. (\ref{S(V)}) as $S=Ls$.
In the case of $\delta$-functional correlations,
$U({\bf x})=U_0\delta({\bf x})$,
and $t$-independent potential $V({\bf x})$,
functional (\ref{S(V)}) is reduced to
\begin{equation} \label{S(V)2}
S\{V\}
= \frac{L}{2U_0}
\int_{}^{}d{\bf x}\,V^2({\bf x})\,.
\end{equation}
\subsection{$\delta$-functional correlations, $\bf d=1$\label{d=1}}
In a $(1+1)\,$-dimensional system with a $\delta$-correlated random potential,
$U(x)=U_0\delta(x)$,
the localized solution of Eqs. (\ref{Schr}) and (\ref{V(mu)2}) (the soliton)
exists for any ${E}<0$ and can be found exactly, \cite{HL}
\begin{eqnarray} \label{Psi}
\Psi(x) & = &
\left(\frac{-2E}{\Lambda U_0}\right)^{1/2}\frac{1}{\cosh(x/\Delta)}\;, \\
V(x) & = & \frac{2E}{\cosh^2({x}/{\Delta})}\;,
\label{V(x)}
\end{eqnarray}
where the length-scale
\begin{equation} \label{}
\Delta=\frac{T}{(-2J{E})^{1/2}}
\end{equation}
can be called the soliton width.
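One can verify Eqs. (\ref{Psi}) and (\ref{V(x)}) directly (a check not spelled out in the original): for $\Psi\propto\cosh^{-1}(x/\Delta)$,
\begin{equation}
\Psi''=\frac{1}{\Delta^2}\left[1-\frac{2}{\cosh^2(x/\Delta)}\right]\Psi\,,
\end{equation}
so that $-({T^2}/{2J})\Psi''+V\Psi=E\Psi$ holds term by term once $\Delta^2=T^2/(-2J{E})$ and $V(x)=2E\cosh^{-2}(x/\Delta)$ are inserted.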
This allows one to conclude that the stationary solution of
Eqs. (\ref{KPZ}) and (\ref{V_t}) is given by Eq. (\ref{V(x)}) and
\begin{eqnarray} \label{f(x)}
f(t,x) & = & E(t-t_1)+T\ln\left(2\cosh\frac{x}{\Delta}\right) \;,
\end{eqnarray}
which follows from the substitution of Eq. (\ref{Psi})
into Eqs. (\ref{f(x)0}) and (\ref{Psi(f)}). Note that in Eq. (\ref{f(x)})
the constant $t_1$ has been redefined in order to absorb $\Lambda$.
Differentiation of Eq. (\ref{f(x)}) with respect to $x$ gives
a stationary profile of $u(x)$,
\begin{equation} \label{u(x)}
u(x)=v\tanh\frac{x}{\Delta} \;,
\end{equation}
schematically shown in Fig. \ref{fig1}(a).
Here
\begin{equation} \label{}
v={T}/{J\Delta}
\end{equation}
is the velocity of the outward flow created by the forces
acting inside the soliton. The profile (\ref{u(x)})
up to a sign coincides with the one
in a stationary shock wave with the same amplitude $v$.
Solitons of this kind (both stationary and moving) have been
discussed in a number of works by Fogedby.\cite{Fogedby}
\begin{figure}[bt]
\begin{center}
\includegraphics[width=50mm]{tails-fig1.eps}
\caption[Fig. 1] {The spatial dependence of $u$ and $f$
in the stationary solution of Eqs. (\ref{KPZ})
and (\ref{V_t}).}\label{fig1}
\end{center}
\end{figure}
The stationary profile of $f$ described by Eq. (\ref{f(x)})
is schematically shown in Fig. \ref{fig1}(b).
With increasing time it moves downward as a whole with the constant
velocity ${\partial f}/{\partial t}=E$.
Away from the soliton's core, that is at $|x|\gg \Delta$,
the dependence described by Eq. (\ref{f(x)}) can be approximated as
\begin{equation} \label{f(x)-appr}
f(t,x) \approx E(t-t_1)+(-2JE)^{1/2}|x|\,.
\end{equation}
Since Eq. (\ref{f(x)-appr}) describes a solution of the noiseless KPZ
equation, its form does not depend on the form (or amplitude)
of the random potential correlator $U(x)$.
The stationary solution minimizes the action
for the given negative value of ${\partial f}/{\partial t}=E$.
Therefore, it allows one to find the optimal value of $S$ in situations
when it is not influenced by the initial condition.
Substitution of Eq. (\ref{V(x)}) into Eq. (\ref{S(V)2}) then gives
\begin{equation} \label{}
S(\Delta)=\frac{2}{3}\frac{T^4L}{U_0J^2\Delta^3}\,.
\end{equation}
Evidently, the condition $f(L,0)-f(0,0)=F$ is fulfilled when
$E=F/L$, which corresponds to
\begin{equation} \label{F(Delta)}
F=-\frac{T^2}{2J\Delta^2}L
\end{equation}
and
\begin{equation} \label{S(F)}
S(F)=\frac{4\sqrt{2}
}{3}\frac{\;T{(-F)^{3/2}}}{U_0J^{1/2}{L^{1/2}}}\;.
\end{equation}
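Equation (\ref{S(F)}) follows by eliminating $\Delta$ between Eq. (\ref{F(Delta)}) and the expression for $S(\Delta)$ above (a one-line check):
\begin{equation}
\Delta=T\left[\frac{L}{2J(-F)}\right]^{1/2}
\;\;\Longrightarrow\;\;
S=\frac{2}{3}\frac{T^4L}{U_0J^2}\left[\frac{2J(-F)}{T^2L}\right]^{3/2}
=\frac{4\sqrt{2}}{3}\frac{T(-F)^{3/2}}{U_0J^{1/2}L^{1/2}}\,.
\end{equation}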
However, the real optimal fluctuation also
has to respect the initial condition and it is clear that
the spatial dependence of $f$ in Eq. (\ref{f(x)})
in no way resembles the initial condition (\ref{ini}).
In terms of the quantum-mechanical problem with time-independent
potential $V({\bf x})$ it is clear that the applicability of the relation
$F\approx E L$ requires $(\delta E)L \gg T$,
where $\delta E$ is the energy gap separating the ground state
of the Hamiltonian (\ref{Hq}) from the first excited state.
Since in potential (\ref{V(x)}) there exists only one bound level with
a negative energy, \cite{LL-QM} whereas excited states can have any
non-negative energy, this condition is equivalent to $-F \gg T$.
\begin{figure}[bt]
\begin{center}
\includegraphics[width=50mm]{tails-fig2.eps}
\caption[Fig. 2] {The spatial dependence of $u$ and $f$ in the solution of
Eqs. (\ref{KPZ}) and (\ref{V_t}) corresponding to the left tail of
$P_L(F)$. The arrows show the directions of motion of the two shock waves.
}\label{fig2}
\end{center}
\end{figure}
For constructing a non-stationary solution which eliminates the inconsistency
between the forms of the stationary solution and of the initial condition
({without increasing the action}),
one has to complement the soliton shown
in Fig. \ref{fig1}(a) by two traveling shock waves
[as shown in Fig. \ref{fig2}(a)], whose existence does not require
any additional pumping. Both these shock waves will be moving outwards
with velocity $v/2$.
Their presence will change the profile of $f(t,x)$ to the one shown in Fig.
\ref{fig2}(b), so that $f(x)$ will be given by Eq. (\ref{f(x)}) only
in the interval where $f(t,x)<0$, whereas outside of this region
it will coincide with the initial condition (\ref{ini}) (with a smooth
crossover between the two solutions).
This means that if a potential localized in the vicinity of
$x=0$ is switched on at $t=0$, its influence on $f(t,x)$ at $t>0$ extends
only to a finite (but growing with $t$) region,
which is perfectly logical.
In such a situation the constant $t_1$ in Eq. (\ref{f(x)}) [or in Eq.
(\ref{f(x)-appr})] will have the meaning of an effective time required for
the formation of the non-stationary solution shown in Fig. \ref{fig2}. At
the initial stage, that is, at $t\lesssim t_1$, the spatial distribution
of $V(x)$ will substantially differ from the one given by Eq. (\ref{V(x)}).
The value of $t_1$ can be estimated from the comparison of the soliton
width $\Delta=2\nu/v$ with the velocity $v$ of the flow it creates, which
gives $t_1\sim\nu/v^2$. This allows one to expect that for $L\gg t_1$ the
main contribution to the action is coming from the region in $t$ where Eq.
(\ref{V(x)}) gives a sufficiently accurate description of the solution,
and therefore the value of $S(F)$ is given by Eq. (\ref{S(F)}). In terms
of $F$ the constraint $L\gg t_1$ corresponds to the condition $-F\gg T$
(which was already derived above in different terms).
The same condition allows one to neglect the final stage of the
optimal-fluctuation evolution. At this stage the potential has to shrink
from the form given by Eq. (\ref{V(x)}) to a $\delta$-function as suggested
by Eq. (\ref{fin-2}). Simultaneously, the downward tip of $f(x)$ has to
change its shape from rounded to more sharp.
The decrease in $f(x=0)$ related to this process can be expected
to be comparable with the change in $f$ induced by the rounding of the tip,
which according to Eq. (\ref{f(x)}) is of the order of $T$, and therefore
for $-F\gg T$ can be ignored.
Note that the same answer for $S(F)$, Eq. (\ref{S(F)}), can be also obtained
in the framework of the Kardar-Zhang replica approach
based on mapping a system to a set of interacting bosons and keeping only
the ground state contribution to the partition function of these bosons
(see Appendix \ref{RA} for more details).
In Appendix \ref{FIC} we demonstrate that the change of the initial
condition from free to fixed does not change the form
of the main contribution to $S(F)$. The same conclusion is even more
easily attained in terms of the replica approach (see Appendix
\ref{RA}).
In the remaining part of this section, we analyze the systems with
an arbitrary dimension and/or finite-range correlations assuming that
the main features of the optimal fluctuation determining the far-left tail
are the same.
Namely, we expect that in a growing region around \makebox{${\bf x}=0$},
the solution is close to the stationary solution, whereas outside of this
region it is close to the initial condition $f(t,{\bf x})=0$, the crossover
between the two regions being described by a corresponding solution of
the noiseless KPZ equation. In such a situation the action of the optimal
fluctuation is determined by the form of the stationary solution.
\subsection{Generalization to $\bf d\neq 1$\label{fl-d}}
If the dimension of the transverse space $d$ is not equal to 1,
the joint solution of Eqs. (\ref{Schr}) and (\ref{V(mu)2}),
that is, the wave function $\Psi({\bf x})$
which minimizes the sum of a positive kinetic energy,
\begin{equation} \label{}
{\cal K}\equiv\frac{T^2}{2J}
\frac{\int_{}^{}d^d{\bf x}|\nabla\Psi({\bf x})|^2}
{\int_{}^{}d^d{\bf x}|\Psi({\bf x})|^2}\;,
\end{equation}
and a negative potential energy,
\begin{equation} \label{}
{\cal V}\equiv \frac{\int_{}^{}d^d{\bf x}V({\bf x})|\Psi({\bf x})|^2}
{\int_{}^{}d^d{\bf x}|\Psi({\bf x})|^2}\;,
\end{equation}
for a given value of the functional $S\{V\}$ defined by Eq. (\ref{S(V)2}),
cannot be found exactly.
However, in a situation when this wave function
and, therefore, the potential $V({\bf x})\propto -U_0\Psi^2({\bf x})$
are well localized at some length scale $\Delta$,
an estimate for $\Delta$ and a qualitative relation between $S$ and $F$
can be obtained without finding the exact form of $\Psi({\bf x})$.
When $\Psi({\bf x})$
can be characterized by a single relevant length-scale $\Delta$, one has
\begin{equation} \label{cal-K}
{\cal K}(\Delta)\sim\frac{T^2}{J\Delta^2}\;,
\end{equation}
whereas
the absolute value of \makebox{${\cal V}\sim V(0)$} at a given $S$
can be estimated with the help of Eq. (\ref{S(V)2}), which gives
\begin{equation} \label{S(cal-V)}
S
\sim \frac{L}{U_0}\Delta^d {\cal V}^2\,,
\end{equation}
and therefore
\begin{equation} \label{V(S)}
{\cal V}(\Delta)
\sim -\left(\frac{SU_0}{L\Delta^d}\right)^{1/2}\,.
\end{equation}
For $0<d<4$ the sum ${\cal K}(\Delta)+{\cal V}(\Delta)$ has a minimum
with respect to $\Delta$ when
\makebox{${\cal K}(\Delta)\sim -{\cal V}(\Delta)$} and therefore both
${\cal K}$ and $-{\cal V}$ have to be of the same order
as $-E=-F/L$.
Substitution of ${\cal V}\sim F/L$ into Eq. (\ref{S(cal-V)})
allows one to rewrite this relation as
\begin{equation} \label{S(Delta,F)}
S\sim \frac{\;\,\Delta^dF^2}{U_0L}\;.
\end{equation}
On the other hand, an estimate for $\Delta$ in terms of $F$
can be obtained from the relation ${\cal K}\sim -E$, which gives
\begin{equation} \label{Delta(F)}
\Delta(F)\sim \frac{T}{2J}\left[\frac{JL}{-F}\right]^{1/2}\,.
\end{equation}
After that to obtain an estimate for $S(F)$
one needs only to substitute Eq. (\ref{Delta(F)}) into Eq.
(\ref{S(Delta,F)}), which leads to
\begin{equation} \label{S(F)d}
S(F)\sim \frac{\;\;T^d{\left(-F\right)^{2-d/2}}}{U_0J^{d/2}{L^{1-d/2}}}\;.
\end{equation}
Naturally, for $d=1$ Eq. (\ref{S(F)d}) is consistent with Eq. (\ref{S(F)})
derived in the Sec. \ref{d=1} on the basis of the exact solution of
Eqs. (\ref{Schr}) and (\ref{V(mu)2}).
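The scaling steps above can be verified numerically. The following sketch (with arbitrarily chosen parameter values, purely as a sanity check and not part of the derivation) confirms that substituting the estimate (\ref{Delta(F)}) into Eq. (\ref{S(Delta,F)}) reproduces Eq. (\ref{S(F)d}) up to the order-one factor $2^{-d}$ absorbed by the $\sim$ signs:

```python
import math

# Arbitrary positive values; mF stands for -F > 0 in the left tail.
T, J, U0, L, mF, d = 2.0, 3.0, 5.0, 7.0, 11.0, 1.5

# Eq. (Delta(F)): Delta ~ (T/2J) * sqrt(J*L/(-F))
Delta = (T / (2.0*J)) * math.sqrt(J*L/mF)

# Eq. (S(Delta,F)): S ~ Delta^d * F^2 / (U0*L)
S_from_Delta = Delta**d * mF**2 / (U0*L)

# Eq. (S(F)d): S ~ T^d * (-F)^(2-d/2) / (U0 * J^(d/2) * L^(1-d/2))
S_left_tail = T**d * mF**(2.0 - d/2) / (U0 * J**(d/2) * L**(1.0 - d/2))

# identical up to the constant 2^-d hidden in the '~' signs
assert abs(S_from_Delta / S_left_tail - 2.0**(-d)) < 1e-12
```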
For $d>4$ the sum of ${\cal K}(\Delta)$ and ${\cal V}(\Delta)$ at a given $S$
is not bounded from below and tends to $-\infty$ when $\Delta\rightarrow 0$.
Accordingly, for any $F<0$ it becomes possible to find
a stationary fluctuation with an arbitrarily low action,
so the method of optimal fluctuation is no longer applicable.
However, it turns out that the range of the applicability of Eq. (\ref{S(F)d})
is even more narrow than the interval $0<d<4$, where the action of
stationary fluctuations has a well-defined positive minimum.
The point is that $L$ enters Eq. (\ref{S(F)d}) as the total time
of the development of the optimal fluctuation of $f(t,{\bf x})$.
From this it is clear that Eq. (\ref{S(F)d}) can be expected to be valid
only if $S$ decreases with the increase in $L$,
which forces the time of the development of the optimal fluctuation
to coincide with $L$.
In the opposite case (when $S$ {decreases} with the {\em decrease} of $L$)
there appears a possibility to decrease the action of the fluctuation
we are considering by making the time of its development smaller than $L$.
Namely, if one makes in Eq. (\ref{S(Delta,F)}) a replacement
\begin{equation} \label{}
L\Rightarrow\gamma^2L\,,~~~
\Delta\Rightarrow\gamma\Delta
\end{equation}
conserving relation (\ref{Delta(F)}), this leads to
$S\Rightarrow\gamma^{d-2}S$. Therefore, for $d>2$ a consistent decrease
in the size of the fluctuation and in the time of its development allows
one to make $S(F)$ {arbitrarily small}
by choosing a sufficiently small $\gamma$.
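The effect of this rescaling on Eq. (\ref{S(Delta,F)}) can be checked in a few lines (the parameter values below are arbitrary and serve only as an illustration):

```python
def S(Delta, L, F=11.0, U0=5.0, d=3.0):
    """Eq. (S(Delta,F)): S ~ Delta^d * F^2 / (U0 * L)."""
    return Delta**d * F**2 / (U0 * L)

Delta0, L0, d, gamma = 1.7, 7.0, 3.0, 0.25
# rescaling L -> gamma^2 * L, Delta -> gamma * Delta, which
# conserves the relation Delta ~ sqrt(L), Eq. (Delta(F))
ratio = S(gamma*Delta0, gamma**2 * L0, d=d) / S(Delta0, L0, d=d)
assert abs(ratio - gamma**(d - 2)) < 1e-12   # S -> gamma^(d-2) * S
```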
This suggests that the result (\ref{S(F)d}) can be expected
to be applicable only at $0<d<2$, whereas at $d>2$
the optimal fluctuation corresponding to the most
distant part of the left tail has to be localized at small scales and its
form has to be determined by the form of a cutoff.
Without a cutoff the problem with $d\geq 2$ and $\delta$-functional
correlations is ill-defined.
Note that at $d\geq 2$, the problem with $\delta$-functional
correlations is ill-defined also for another reason.
Namely, at $d\geq 2$ the perturbative corrections to the viscosity
$\nu$ and other quantities acquire ultraviolet divergencies which at $d<2$
are absent.
Apparently, this is not a coincidence but another manifestation
of the same phenomenon. Therefore, for $d\geq 2$ some ultraviolet cutoff
must be introduced into the problem.
One of the most natural ways to do it consists in assuming that
the correlations of a random potential are characterized by
a finite correlation radius.
\subsection{Finite-range correlations \label{frc}}
When the random-potential correlator $U({\bf x})$ [which we assume to be
spherically symmetric, $U({\bf x})\equiv U({|\bf x}|)$] is characterized
by a finite correlation radius $\xi$,
the stationary solution of Eqs. (\ref{KPZ}) and (\ref{mu_t})
cannot be found exactly even at $d=1$.
However, it is clear from the form of Eq. (\ref{V(mu)}) relating $V$ and
$\mu$ that when the soliton width $\Delta$ is much larger than
$\xi$, the actual solution has to be rather close to the solution
for $\xi=0$, the same being true also for the value of $S(F)$.
It follows from Eq. (\ref{Delta(F)}) that in terms of $F$
the condition $\Delta\gg\xi$ corresponds to
\begin{equation} \label{}
-F\ll F_\xi\sim \frac{T^2L}{J\xi^2}\;.
\end{equation}
It turns out that for the opposite relation between the parameters,
$-F\gg F_\xi$,
the stationary solution of Eqs. (\ref{KPZ}) and (\ref{mu_t})
can also be found rather accurately.
As is shown below, in such a case $\mu$
is localized in a region which is much narrower than $\xi$,
whereas both $f$ and $V$ change at the scales of the order of $\xi$.
In particular, it follows from Eq. (\ref{V(mu)}) that in such a situation
the spatial dependence of the potential $V({\bf x})$ just repeats that of
$U({\bf x})$,
\begin{equation} \label{V(e)}
V({\bf x})\approx -U({\bf x}){\varepsilon}\,,
\end{equation}
whereas the amplitude of $V({\bf x})$ is determined by
\begin{equation} \label{}
\varepsilon\equiv -\int_{}^{}d{\bf x}\,\mu({\bf x})\,,
\end{equation}
the overall strength of the negative potential source $\mu({\bf x})$.
For $-F\gg F_\xi$ the viscous term in Eq. (\ref{KPZ})
can be neglected, which immediately gives that in the
spherically symmetric stationary solution
\begin{equation} \label{}
\partial f/\partial t \approx -U(0)\varepsilon
\end{equation}
and
\begin{equation} \label{}
\left(\frac{\partial f}{\partial r}\right)^2
={2J[U(0)-U(r)]\varepsilon}\,,
\end{equation}
where $r=|{\bf x}|$, so that
\begin{equation} \label{f(r)}
f({\bf x})=f(0)+\sqrt{2J\varepsilon}\int_{0}^{|{\bf x}|}dr\,
\sqrt{U(0)-U(r)}\;.
\end{equation}
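For illustration, Eq. (\ref{f(r)}) can be evaluated numerically for an assumed Gaussian correlator $U(r)=U(0)\exp(-r^2/2\xi^2)$; this specific form and all parameter values are illustrative choices, not taken from the text. Far from the origin, where $U(r)\to 0$, the profile grows linearly with slope $\sqrt{2J\varepsilon U(0)}$:

```python
import math

def U(r, U0=1.0, xi=1.0):
    """An assumed Gaussian correlator U(r) = U(0)*exp(-r^2/2 xi^2)."""
    return U0 * math.exp(-r**2 / (2.0*xi**2))

def f_profile(r, J=1.0, eps=1.0, n=20000):
    """Eq. (f(r)): f(r) - f(0) = sqrt(2*J*eps) * int_0^r sqrt(U(0)-U(r')) dr'."""
    h, s = r/n, 0.0
    for k in range(n + 1):                      # trapezoidal quadrature
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.sqrt(max(U(0.0) - U(k*h), 0.0))
    return math.sqrt(2.0) * s * h               # sqrt(2*J*eps) with J = eps = 1

# at r >> xi the integrand tends to sqrt(U(0)), so the slope is sqrt(2*J*eps*U(0))
slope = f_profile(10.0) - f_profile(9.0)
assert abs(slope - math.sqrt(2.0)) < 1e-3
```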
In terms of the Schr\"{o}dinger equation (\ref{Schr}) the neglect of the
viscous term in the stationary KPZ equation corresponds to nothing else
but using the semiclassical approximation
for the calculation of the ground-state wave function.
Substitution of $\Psi({\bf x})=\exp[-f({\bf x})/T]$ with $f({\bf x})$
given by Eq. (\ref{f(r)})
into Eq. (\ref{mu(psi)}) demonstrates that at $|{\bf x}|\ll\xi$,
\begin{equation} \label{mu(x)}
\mu({\bf x})\propto\exp\left[-\frac{{\bf x}^2}{2\Delta^2}\right]\,,
\end{equation}
where $\Delta$, the width of the region where the
potential source $\mu({\bf x})$ is localized, is given by
\begin{equation} \label{x_mu}
\Delta(F)
\sim\left[\frac{T^2U(0)}{4U_{rr}(0)JE}\right]^{1/4}
\sim \left(\frac{F_\xi}{-F}\right)^{1/4}\xi\,.
\end{equation}
When deriving this estimate we have replaced
$-U_{rr}(0)$ by $U(0)/\xi^2$ and $E$ by $F/L$.
The result shows that the assumption $\Delta(F)\ll\xi$,
which has been used above to obtain Eq. (\ref{V(e)}),
is indeed self-consistent as soon as \makebox{$-F\gg F_\xi$}.
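The replacements quoted above can be checked numerically (the parameter values are arbitrary, chosen so that $-F\gg F_\xi$): with $-U_{rr}(0)\to U(0)/\xi^2$ and $E\to F/L$, $U(0)$ cancels and the bracket in Eq. (\ref{x_mu}) reduces to $(F_\xi/(-F))^{1/4}\,\xi$ up to a factor $4^{-1/4}$:

```python
T, J, L, xi, mF = 2.0, 3.0, 7.0, 0.1, 5000.0   # mF = -F, chosen with mF >> F_xi
F_xi = T**2 * L / (J * xi**2)
assert mF > F_xi

# Eq. (x_mu) after the replacements -U_rr(0) -> U(0)/xi^2, E -> F/L
Delta = (T**2 * xi**2 * L / (4.0 * J * mF))**0.25
assert abs(Delta - (F_xi/mF)**0.25 * xi / 4.0**0.25) < 1e-12
assert Delta < xi   # self-consistency: the source is narrower than xi
```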
Substitution of Eq. (\ref{V(e)}) into Eq. (\ref{S(mu)}) reduces
the expression for the action to a very simple form,
\begin{equation} \label{S(e)}
S=\frac{U(0)}{2}{L}\varepsilon^2
=\frac{LE^2}{2U(0)} \;,
\end{equation}
which is easily recognized by those familiar with the application of
the optimal-fluctuation approach to a quantum-mechanical problem
with finite-range correlations of a random potential \cite{ShEf}
and after substitution of $E=F/L$ gives
\begin{equation} \label{S(F)xi}
S(F)=\frac{F^2}{2U(0)L}\;.
\end{equation}
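The arithmetic leading from Eq. (\ref{S(e)}) to Eq. (\ref{S(F)xi}) can be traced explicitly (the parameter values below are arbitrary): $\partial f/\partial t\approx -U(0)\varepsilon=E$ fixes $\varepsilon=-E/U(0)$, and $E=F/L$ then yields the Gaussian tail:

```python
Uzero, L, F = 3.0, 7.0, -20.0      # Uzero stands for U(0); all values arbitrary
E = F / L                          # free energy per unit length, E = F/L
eps = -E / Uzero                   # from df/dt = -U(0)*eps = E
S = 0.5 * Uzero * L * eps**2       # Eq. (S(e)), first form
assert abs(S - L * E**2 / (2.0*Uzero)) < 1e-12   # Eq. (S(e)), second form
assert abs(S - F**2 / (2.0*Uzero*L)) < 1e-12     # Eq. (S(F)xi)
```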
The same temperature-independent answer can be also reproduced in terms of
the Kardar-Zhang replica approach (see Appendix \ref{RA}).
Thus we have demonstrated that for $\xi>0$ the most distant part of the
left tail is Gaussian independently of the dimension. Since the width of
the region where $\mu$ is localized grows with the decrease in $-F$, a
crossover to some other regime must occur when this width becomes
comparable with $\xi$. In particular, for $d<2$ and \makebox{$\xi\ll x_0$}
the dependence of $S$ on $F$ at $-F\ll F_\xi$ has to be described by Eq.
(\ref{S(F)d}) with a subsequent crossover to the universal regime
discussed in Sec. \ref{ul}. Naturally, the increase in $\xi$ (or in $d$)
leads to shrinking and subsequent vanishing of the region where $S(F)$ can
be described by Eq. (\ref{S(F)d}). On the other hand, when $\xi$ is taken
to zero $F_\xi$ goes to infinity, which leads to the disappearance of the
region with Gaussian behavior.
\subsection{A boundary from below}
It is worthwhile to emphasize that expression (\ref{S(F)xi}) gives an exact
boundary from below for the value of $S(F)$ in the optimal fluctuation.
This is so because the potential of the form
\begin{equation} \label{V(V0)}
V({\bf x})=\frac{U({\bf x})}{U(0)}{V({\bf x}=0)}
\end{equation}
minimizes functional (\ref{s(V)}) for the given value of $V({\bf x}=0)$,
from where
\begin{equation} \label{}
S(F)\geq \frac{1}{2U(0)}\int_{0}^{L}dt\,[V(t,{0})]^2\,.
\end{equation}
On the other hand, in a growing fluctuation of $f(t,{\bf x})$
which has a spherically symmetric shape and an extremum at ${\bf x}=0$,
the absolute value of ${\partial f(t,{0})}/{\partial t}$ is bounded
from above by $|V(t,{0})|$ because at the point of extremum
the second term in the left-hand side of the KPZ equation (\ref{KPZ})
vanishes, whereas the third term, $-\nu \nabla^2 f$, has to have
the same sign as ${\partial f(t,{0})}/{\partial t}$.
This allows one to conclude that
\begin{eqnarray} \nonumber
S(F) & \geq & \frac{1}{2U(0)}\int_{0}^{L}dt\, \left[\frac{\partial f(t,{0})}
{\partial t}\right]^2 \\
& \geq & \frac{F^2}{2U(0)L}\,. \label{Smin}
\end{eqnarray}
Apparently, this inequality is reduced to equality only if
(i) $V(t,{\bf x})$ is of form (\ref{V(V0)}),
(ii) the viscous term in the KPZ equation can be neglected, and
(iii) ${\partial f(t,{0})}/{\partial t}$ does not depend on time.
Since in the negative fluctuation of $f$ considered in Sec. \ref{frc}
all these conditions are satisfied rather accurately,
the action of this fluctuation is approximately equal to the boundary
from below given by Eq. (\ref{Smin}).
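The second inequality in Eq. (\ref{Smin}) is the Cauchy--Schwarz inequality applied to $\partial f(t,{0})/\partial t$, whose time integral is fixed to $F$. A small numerical illustration (using an arbitrary random rate profile, not a solution of the KPZ equation):

```python
import random

random.seed(0)
L, F, n = 7.0, -20.0, 1000
h = L / n

# a random rate profile df/dt, shifted so that its time integral equals F
rates = [random.uniform(-3.0, 1.0) for _ in range(n)]
shift = F/L - sum(rates)/n
rates = [r + shift for r in rates]
assert abs(sum(rates)*h - F) < 1e-9

# Cauchy-Schwarz: int_0^L (df/dt)^2 dt >= (int_0^L df/dt dt)^2 / L = F^2/L
integral = sum(r*r for r in rates) * h
assert integral >= F*F/L - 1e-9
```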
Note that the argument leading to the derivation of
Eq. (\ref{Smin}) is valid for {both signs} of $F$. Therefore
inequality (\ref{Smin}) has to be satisfied also in the far-right tail.
\section{Far-right tail \label{fr}}
Our analysis has established that the optimal fluctuation corresponding to
the left tail of $P_L(F)$ has a very special shape which can be
characterized by two different scales. Namely, the size of the area where
the potential $V$ is localized, $\Delta(F)$, is much smaller than the total
size of the fluctuation
$\tilde{\Delta}(F)\sim\left({-FL}/{J}\right)^{1/2}$, that is,
the width of the area where $f$ and ${\bf u}$ essentially deviate from zero.
Apparently this property is closely related to the fact that inside a
growing negative fluctuation of $f$ the terms ${\partial f}/{\partial t}$
and $(1/2J)(\nabla f)^2$ in the functional,
\begin{equation} \label{S(f)0}
S\{f\}=\frac{1}{2U_0}\int_{0}^{L}\hspace*{-2mm} dt \int d{\bf x}\,
\left[\frac{\partial f}{\partial t}+\frac{1}{2J}(\nabla f)^2
-\nu \nabla^2 f\right]^2\,
\end{equation}
defining the probability of a fluctuation in a system with a
$\delta$-correlated potential have to be of the opposite signs.
This provides a possibility for their mutual compensation
in almost the whole volume of the fluctuation.
It is clear that in the case of the right tail such a cancellation
is impossible because in the substantial part of the optimal fluctuation
${\partial f}/{\partial t}$ has to be of the same sign as
$(1/2J)(\nabla f)^2$.
As a consequence, the optimal fluctuation corresponding to the right tail
must have a shape which can be characterized
by a single relevant length-scale, $\Delta_+(F)$.
This length scale can be estimated from the comparison
of ${\partial f}/{\partial t}\sim F/L$ with
\makebox{$(1/2J)(\nabla f)^2\sim F^2/J\Delta_+^2$},
which shows that $\Delta_+$ has to be of the same order as the total size
of the optimal fluctuation with $F<0$:
\begin{equation} \label{Delta(F)+}
\Delta_+(F)\sim \tilde{\Delta}(-F)\sim\left(\frac{LF}{J}\right)^{1/2}\!\!.
\end{equation}
Note that for $\Delta_+(F)$ given by Eq. (\ref{Delta(F)+})
the viscous term in the integrand of functional (\ref{S(f)0})
can be neglected if $F$ is large enough.
This is precisely the reason why an estimate for $\Delta_+$ can be obtained
by matching the two other terms in this integrand.
A comparison of $\nu\nabla^2f\sim \nu F/\Delta_+^2$
with $(1/2J)(\nabla f)^2\sim F^2/J\Delta_+^2$ shows that the condition
which allows one to neglect the viscous term can be written as
$F\gg 2J\nu=T$. Apparently this constraint is automatically fulfilled as
soon as one considers the most distant part of the tail.
Substitution of Eq. (\ref{Delta(F)+}) into the relation
\begin{equation} \label{}
S\sim\frac{L\Delta_+^d}{U_0}\left(\frac{F}{L}\right)^2\,,
\end{equation}
following from the assumption that $\Delta_+(F)$ is the only relevant
length-scale in the problem,
gives then an estimate for the action determining the form of
the far-right tail of $P_L(F)$,
\begin{equation} \label{S(F)+}
S(F)\sim \frac{F^{2+d/2}}{U_0J^{d/2}L^{1-d/2}}\;\,,
\end{equation}
which naturally is independent of temperature.
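Taking the stated forms at face value (the $\sim$ signs hide no further constants in this substitution), the step can be checked directly with arbitrary parameter values:

```python
F, L, J, U0, d = 13.0, 7.0, 2.0, 5.0, 1.0   # arbitrary positive values, F > 0

Dplus = (L*F/J)**0.5                         # Eq. (Delta(F)+)
S_sub = L * Dplus**d * (F/L)**2 / U0         # S ~ L*Delta_+^d*(F/L)^2 / U0
S_right = F**(2 + d/2) / (U0 * J**(d/2) * L**(1 - d/2))   # Eq. (S(F)+)
assert abs(S_sub - S_right) < 1e-9
```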
On a more formal level, the same relation can be obtained as a variational
estimate from above. If one assumes, for example, that
\begin{equation} \label{f-trial+}
f(t,{\bf x})=\frac{Ft}{L}
\exp\left(-\frac{{\bf x}^2}{2\Delta_+^2}\right)
\end{equation}
and substitutes Eq. (\ref{f-trial+}) into Eq. (\ref{S(f)0}), then
for $0<d<4$ the result of this substitution $S_{\rm var}(\Delta_+)$ (which
for $F\gg T$ is insensitive to the presence of the viscous term in the
integrand) has a minimum with respect to the variational parameter $\Delta_+$.
This minimum is situated at $\Delta_+(F)$ satisfying relation (\ref{Delta(F)+}),
whereas the value of $S_{\rm var}[\Delta_+(F)]$
satisfies relation (\ref{S(F)+}).
The important difference between the far-left and far-right tails is that
in the far-right tail, the width of the region where the fluctuation of a
random potential is localized grows with the increase in $|F|$.
In such a situation one can expect that the shape of the optimal
fluctuation in the most distant part of the tail at finite $\xi$
will be the same as for a $\delta$-correlated potential.
This requires the fulfillment of the condition $\Delta_+\gg \xi$,
that is, $F\gg J\xi^2/L\,.$
Therefore, for a given $\xi$ and sufficiently large $L$
the region of the applicability of Eq. (\ref{S(F)+}) will be extended
to the whole non-universal part of the right tail.
Although the minimum of $S_{\rm var}(\Delta_+)$ with respect to $\Delta_+$
exists for any $d$ in the interval $0<d<4$, it follows from the form of
Eq. (\ref{S(F)+}) that this equation can be expected to be directly
applicable only at $d<2$, exactly like in the case of the analogous
expression for the far-left tail, Eq. (\ref{S(F)d}).
For $d>2$ Eq. (\ref{S(F)+})
(where $L$ enters as the total time of the development of the fluctuation)
predicts that the action
can be decreased by making the time of the development of this fluctuation
much smaller than $L$. According to Eq. (\ref{Delta(F)+}) this
will be accompanied by the decrease in the size of the fluctuation. This
suggests that at $d>2$ the optimal fluctuation must have a different
structure, which has to be sensitive to the form of a random potential
correlator at small lengths.
If the first factor in the right-hand side of Eq. (\ref{f-trial+}) is
replaced by $$\frac{F\sinh(t/L_+)}{\sinh(L/L_+)}\,,$$
which allows one to vary not only the characteristic size of a fluctuation
$\Delta_+$ but also the time of its development $L_+$, then for
$U({\bf x})\propto\exp(-{\bf x}^2/2\xi^2)$ and $d>2$ the minimum of
the action is achieved at $\Delta_+\sim\xi$ and $L_+\sim J\xi^2/F$,
which corresponds to
\begin{equation} \label{S(F)+f}
S(F)\sim \frac{\xi^{d-2}F^3}{U_0 J}\;.
\end{equation}
Note that at $d=2$, the estimates given by Eqs. (\ref{S(F)+}) and (\ref{S(F)+f}) coincide with each other.
Naturally, at the marginal dimension of $d=2$ (where algebraic divergences
are replaced by logarithmic ones) some logarithmic factors may appear in the
expression for the action.
\section{Modification of tails by the renormalization effects \label{ut}}
In terms of the Burgers equation parameters (the viscosity $\nu=T/2J$ and
the pumping force intensity \makebox{$D=U_0/2J^2$}),
Eq. (\ref{S(F)d}) can be rewritten as
\begin{equation} \label{S(F)d2}
S(F)\sim\frac{\nu^d}{D}
\frac{(-F/J)^{2-d/2}}{L^{1-d/2}}\;.
\end{equation}
This estimate has been derived at $d<2$ and $\xi=0$ and
is applicable also at $\xi>0$ as soon as $\xi\ll\Delta$.
However, from the nature of the optimal-fluctuation approach it is clear
that the range of the applicability of Eq. (\ref{S(F)d2}) is
restricted also from the other side because in order to disregard
the renormalization of any parameters by the nonlinearity
the soliton has to be sufficiently narrow:
$\Delta\ll x_0$, where $x_0$ is defined by the relation
\begin{equation} \label{x_0}
x_0^{2-d}\sim\frac{\nu^3}{D}
\sim\frac{T^3}{JU_0}\;.
\end{equation}
At any $d\neq 2$, $x_0$ is the only parameter with the dimension of
${\bf x}$ which can be constructed from $T$, $J$, and $U_0$. In particular,
in the case of $d<2$ and small $\xi$ we are discussing now, $x_0$ is the
length scale at which the perturbative corrections to $\nu$ and $D$ become
comparable with the bare values of these parameters.
Thus, at $\Delta\gg x_0$ the renormalization effects become important.
In such a regime the probability of a large negative fluctuation of $F$
is determined not by a single fluctuation (and small deviations from it)
but by a relatively wide class of fluctuations,
the summation over which can be taken into account by analyzing an
optimal fluctuation in a system with renormalized parameters.
Since in all the cases we consider the optimal fluctuations are
quasi-stationary (see below) and well localized at a particular length
scale, this can be done by replacing all parameters in Eq. (\ref{S(F)d2})
by their effective values at the corresponding length scale
and zero frequency. \cite{KK}
However, it is well known that only $\nu$ and $D$ are subject to
renormalization, whereas the amplitude of the nonlinear term in the KPZ
equation (\ref{KPZ}) (and, therefore, the coefficient $J$) cannot
be renormalized as a consequence of the Galilean invariance. \cite{MHKZ}
From the continuity it is clear that when the instanton is not too narrow,
the approach based on Eq. (\ref{S(F)d2}) with renormalized
parameters can be also expected to work even at $d\geq 2$ [where Eq.
(\ref{S(F)d2}) has no region of the direct applicability]
as soon as the parameters of the system correspond to the same phase
as at $d<2$ [namely, the strong-coupling phase in which the fluctuations
of $f(t,{\bf x})$ in a stationary situation are divergent,
see Eq. (\ref{ff'}) below]. At $d>2$ this requires having
$x_0/\xi>\kappa(d)$, that is, the temperature $T$ should be lower than some
critical value $T_c(d)$, \cite{IS} which tends to infinity when
$d\rightarrow 2+0$. In the weak-coupling phase, that is at $T>T_c(d)$,
typical fluctuations of $f(t,{\bf x})$ in the stationary situation can be
described by neglecting the non-linear term in the KPZ equation (\ref{KPZ}).
However, the form of the most distant parts of the tails of $P_L(F)$ is
insensitive to the relation between $T$ and $T_c(d)$ and in both phases has
to be given by Eqs. (\ref{S(F)xi}) and (\ref{S(F)+f}).
To describe how the renormalization effects change the shape of the tails
of $P_L(F)$ in the regime when they are important (which corresponds
to the universal parts of the tails in the strong-coupling phase),
we first have to review some known properties of the stationary solution
of the KPZ model in the strong-coupling regime.
\subsection{Stationary solution of the KPZ model
\label{ss}}
In a stationary situation the divergence of fluctuations
in the strong-coupling phase of a KPZ system is algebraic.
Their behavior at large scales in space-time can be
described by two fundamental exponents, \cite{MHKZ,HH}
\begin{equation} \label{ff'}
\langle[f(t,{\bf x})-f(t',{\bf x}')]^2\rangle
\propto{|{\bf x}-{\bf x}'|^{2\chi}}
g\left(\frac{|t-t'|}{|{\bf x}-{\bf x}'|^z}\right)\,.
\end{equation}
Here $\chi\equiv\chi(d)$ is the roughening exponent characterizing the
equal-time interface fluctuations, $z\equiv z(d)$ is the dynamic exponent
describing the scaling of the relaxation time with the length-scale,
whereas the function $g(\alpha)$ has a finite limit at $\alpha\rightarrow
0$ and diverges as $\alpha^{2\chi/z}$ when $\alpha\rightarrow\infty$. It
is well known \cite{MHKZ} that the existence of the Galilean invariance
imposes
\begin{equation} \label{z+chi}
z+\chi=2\,.
\end{equation}
At $d=1$ the value of the exponent $\chi=1/2$ is known exactly because
the equal-time correlator of $f(t,{\bf x)}$ in a system with
$\delta$-functional correlations of a random potential has to be exactly
the same as in the absence of the non-linearity. \cite{HHF}
This property is a consequence of the fluctuation-dissipation
theorem, \cite{DH} which is obeyed by Eq. (\ref{KPZ}) only at $d=1$.
At $d\neq 1$ the values of the exponents $z$ and $\chi$ are known only
from approximate or numerical calculations.
In terms of the directed polymer problem the dependence (\ref{ff'})
corresponds to
\begin{equation} \label{x-x'}
\langle[{\bf x}(t)-{\bf x}(t')]^2\rangle\propto (t-t')^{2/z}\,,
\end{equation}
which shows that $\zeta=1/z$ plays the role of the roughening exponent
for the transverse displacements inside an infinite polymer and therefore
cannot be smaller than 1/2, \cite{comm-zeta} from where $z\leq 2$.
A natural way to describe the effective renormalization of $\nu$ and $D$
by the nonlinearity consists in introducing \cite{HF}
a generalized viscosity $\nu(\omega,{\bf q})$ and a generalized pumping
intensity $D(\omega,{\bf q})$ defined by the relations
\begin{eqnarray}
\label{G}
G(\omega,{\bf q}) & = & [-i\omega+\nu(\omega,{\bf q})q^2]^{-1}\,,
\\ C(\omega,{\bf q}) & = & 2J^2|G(\omega,{\bf q})|^2D(\omega,{\bf q})\,,
\label{C}
\end{eqnarray}
where $G(\omega,{\bf q})$ and $C(\omega,{\bf q})$ are, respectively,
the Fourier transforms of the response function and of the two-point
correlation function of $f(t,{\bf x})$.
The form of Eqs. (\ref{G}) and (\ref{C}) corresponds to the replacement
of the considered non-linear system by a linear system with the same
form of $G(\omega,{\bf q})$ and $C(\omega,{\bf q})$.
The compatibility with the behavior described by Eq. (\ref{ff'})
requires then that at small enough $q$,
\[
\lim_{\omega\rightarrow 0}\nu(\omega,{\bf q})\propto q^{-(2-z)}\,,~~~
\lim_{\omega\rightarrow 0} D(\omega,{\bf q})\propto q^{-(d+2\chi-z)}\,.
\]
This suggests that the behavior of low-frequency fluctuations
with typical or smaller amplitude can be
qualitatively described by using an effective viscosity
$\nu_{\rm eff}(R)$ and an effective pumping intensity $D_{\rm eff}(R)$
which algebraically depend on a length scale $R$,
\begin{equation} \label{nu_eff}
\nu_{\rm eff}(R)\sim\nu\left(\frac{R}{a_\nu}\right)^{2-z}\hspace*{-4mm},~~~
D_{\rm eff}(R)\sim D\left(\frac{R}{a_D}\right)^{4+d-3z}\hspace*{-8mm},
\end{equation}
where in accordance with Eq. (\ref{z+chi}) we have replaced $\chi$ by $2-z$.
As a convenient way of describing the amplitudes of $\nu_{\rm eff}(R)$
and $D_{\rm eff}(R)$, we have introduced in Eq. (\ref{nu_eff})
two new length scales, $a_\nu$ and $a_D$.
For $d=1$ and $\xi\lesssim x_0$ both $a_\nu$ and $a_D$ can be expected
to be of the order of $x_0$, because in such a situation
$x_0$ is the only relevant length in the problem.
However, for $\xi\gg x_0$ and/or $d>1$ these two length-scales do not have
to be of the same order.
Since both $\nu$ and $D$ increase under the renormalization,
Eqs. (\ref{nu_eff}) can be expected to be applicable only for
\makebox{$R\gg a_\nu,a_D$}.
In the scaling regime, when $\nu_{\rm eff}(R)$ and $D_{\rm eff}(R)$
behave in accordance with Eqs. (\ref{nu_eff}), both these
quantities have no direct relation to their bare values, $\nu$ and $D$.
Their origin can be traced to the effect of fluctuations with
shorter wavelengths than the given length scale $R$. In particular, it
follows from the structure of the KPZ equation (\ref{KPZ}) that at the
length-scale $R$ the role of the effective random potential is played by
the deviation of $-(J/2)\langle{\bf u}^2\rangle_R$ from its average value,
$-(J/2)\overline{\langle{\bf u}^2\rangle_R}$, where $\langle\ldots\rangle_R$
denotes spatial averaging over a region with a linear size of the order of
$R$. From this the value of $D_{\rm eff}(R)$ can be estimated as
\[
D_{\rm eff}(R) \sim \int_{|{\bf r}|<R}d{\bf r}\int_{-\infty}^{+\infty}
d\tau\left[\overline{u^a(t,{\bf x})u^b(t+\tau,{\bf x+r})}\right]^2
\]
\begin{equation} \label{D_eff-2}
\sim \frac{R^{2+d} u_{\rm typ}^4(R)}{\nu_{\rm eff}(R)}\;.\hspace*{32mm}
\end{equation}
In Eq. (\ref{D_eff-2}) we have assumed that the integration over $d\tau$
can be replaced by the multiplication by the factor $\sim\tau(r)$, where
$\tau(r)\sim r^2/\nu_{\rm eff}(r)$ is the characteristic relaxation time
which can be associated with the length scale $r$, whereas the result of
the integration over $d{\bf r}$ has been estimated assuming that as
a consequence of the universality, for any length scale there exists
only one characteristic velocity scale which can be associated with this
length scale (in other terms, there is no anomalous scaling).
We have chosen as such a velocity scale the typical velocity,
$u_{\rm typ}(R)$, defined by the relation
\begin{equation} \label{u_typ}
u_{\rm typ}^2(R)\equiv\overline{\langle{\bf u}\rangle_R^2}
\sim \frac{D_{\rm eff}(R)}{\nu_{\rm eff}(R)R^d}\;.
\end{equation}
Substitution of Eq. (\ref{u_typ}) into Eq. (\ref{D_eff-2}) then gives the
relation
\begin{equation} \label{nu^3/D}
\frac{\nu_{\rm eff}^3(R)}{D_{\rm eff}(R)}\sim R^{2-d}\,,
\end{equation}
whose structure is analogous to that of Eq. (\ref{x_0}). The consistency
between Eqs. (\ref{nu^3/D}) and (\ref{z+chi}) confirms the correctness
of assumptions which have been used for the derivation of Eq. (\ref{nu^3/D}).
In terms of the length scales $a_\nu$ and $a_D$ introduced above,
see Eqs. (\ref{nu_eff}), relation (\ref{nu^3/D}) can be rewritten as
\begin{equation} \label{a_nu(a_D)}
a_\nu\sim\left(\frac{x_0}{a_D}\right)^\frac{2-d}{3(2-z)}a_D\,,
\end{equation}
which for $d=1$ (when $z=3/2$) is reduced to
\begin{equation} \label{a_nu(a_D)-2}
a_\nu\sim(x_0^2a_D)^{1/3}\,.
\end{equation}
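The exponent count reducing Eq. (\ref{a_nu(a_D)}) to Eq. (\ref{a_nu(a_D)-2}) at $d=1$, $z=3/2$ can be confirmed numerically (the values of $x_0$ and $a_D$ below are arbitrary):

```python
x0, aD = 3.0, 11.0
d, z = 1.0, 1.5                                  # at d = 1, z = 3/2 exactly
# Eq. (a_nu(a_D)): a_nu ~ (x0/a_D)^((2-d)/(3(2-z))) * a_D
a_nu = (x0/aD)**((2 - d) / (3*(2 - z))) * aD
# Eq. (a_nu(a_D)-2): a_nu ~ (x0^2 * a_D)^(1/3)
assert abs(a_nu - (x0**2 * aD)**(1.0/3.0)) < 1e-12
```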
When the dynamics of fluctuations at $R\sim\xi$ is dominated by wave
breaking, the value of $u_{\rm typ}(\xi)$ can be estimated
as a characteristic velocity
\makebox{$u_\xi\sim(D\tau_{\xi}/\xi^3)^{1/2}$},
which is created by a random force with characteristic length-scale $\xi$
during the time $\tau_\xi\sim\xi/u_\xi$ required for breaking
of such a fluctuation, which gives
\makebox{$u_\xi\sim(D/\xi^{1+d})^{1/3}\,.$}
A comparison of this estimate with Eqs. (\ref{D_eff-2}) and (\ref{u_typ})
suggests that in such a regime $D_{\rm eff}(\xi)\sim D$, that is, $a_D\sim\xi$.
At $d<2$ we expect this conclusion to be applicable when $\xi\gtrsim x_0$,
whereas at $d>2$ it should apply in the whole region of the existence of the
strong-coupling phase.
It follows from the definition of $u_{\rm typ}(R)$
that with the increase in $R$ the value of $u_{\rm typ}(R)$ has to decrease.
A comparison of Eq. (\ref{u_typ}) with Eqs. (\ref{nu_eff}) allows one then
to conclude that $z$ has to be larger than 1.
\subsection{Universal part of the left tail} \label{ul}
After replacing in Eq. (\ref{Delta(F)}) $T/2J\equiv\nu$ by
$\nu_{\rm eff}(\Delta)$, one obtains a relation which allows one to find
that in the regime when the renormalization effects are important
the estimate for the soliton width $\Delta$ acquires a form
\begin{equation} \label{Delta(F)d}
{\Delta}\equiv\Delta(F)\sim {a_\nu}
\left(\frac{L\nu^2J}{-Fa_\nu^2}\right)^{\frac{1}{2(z-1)}}\,.
\end{equation}
A substantial change of $\Delta$ in comparison with what is given by Eq.
(\ref{Delta(F)}) means that in the regime we consider now the probability
of a large negative fluctuation of $F$ is determined not by a narrow
vicinity of the fluctuation which minimizes the original action (like it
happens in the more distant part of a tail), but by a wide vicinity of an
essentially different fluctuation whose dominance is ensured by a factor
related to the integration over its vicinity.
In the framework of a renormalization group approach this factor is
effectively taken into account when one is replacing different parameters
by their renormalized values.
An estimate for the action can then be obtained by making
in Eq. (\ref{S(F)d2}) a replacement
\begin{equation} \label{nu_eff(Delta)}
\nu\rightarrow\nu_{\rm eff}(\Delta)\,,~~~
D\rightarrow D_{\rm eff}(\Delta)
\end{equation}
with $\Delta$ given by relation (\ref{Delta(F)d}).
With the help of Eq. (\ref{nu^3/D})
the result of this substitution can be reduced to the form
\begin{equation} \label{S(F)u}
S(F) \sim \left(\frac{-F}{F_*}\right)^{\eta}\,,
\end{equation}
with exponent
\begin{equation} \label{eta_-}
\eta=\eta_-
\equiv\frac{z}{2(z-1)}\;
\end{equation}
which depends on $d$ only through the dynamic exponent $z\equiv z(d)$
but not explicitly.
Here
\begin{equation} \label{F*d}
F_{*}\sim J\nu
\left(\frac{\nu L}{a_\nu^2}\right)^\omega
\end{equation}
plays the role of a characteristic free-energy scale whose dependence
on $L$ is described by the exponent
\begin{equation} \label{omega}
\omega=1-\frac{1}{\eta_-}=\frac{2}{z}-1\;.
\end{equation}
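As a consistency check, the two expressions in Eq. (\ref{omega}) agree with
$\eta_-=z/[2(z-1)]$ from Eq. (\ref{eta_-}):
\begin{equation*}
1-\frac{1}{\eta_-}=1-\frac{2(z-1)}{z}=\frac{2-z}{z}=\frac{2}{z}-1\,.
\end{equation*}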
The universality hypothesis for the directed polymer problem \cite{IV}
(or, more generally, for the collective pinning problem \cite{Blatter})
suggests that $F_*$ has to be of the same order as a characteristic
elastic energy $E_{\rm el}\sim J(\delta x)^2/L$, where the dependence of
the characteristic transversal displacement between the two ends of a
polymer, \makebox{$\delta x\equiv |{\bf x}(t=L)-{\bf x}(t=0)|\propto
L^\zeta$}, on its total length $L$ is described by the roughening exponent
$\zeta$ so that
\begin{equation} \label{omega(zeta)}
\omega=2\zeta-1\,.
\end{equation}
A comparison of Eq. (\ref{omega(zeta)}) with Eq. (\ref{omega}) demonstrates
that the fluctuations of $\delta x$ are described by the same roughening
exponent $\zeta=1/z$ as fluctuations inside an infinite polymer, see Eq.
(\ref{x-x'}), in full agreement with what one expects from the
universality. This consistency can be considered as an additional
confirmation of the validity of the set of assumptions which have been
used for obtaining Eq. (\ref{S(F)u}).
Note that the list of these assumptions includes the conjecture that the
system evolves sufficiently slowly, so that at the relevant length scales it
can be considered as already equilibrated, which is a necessary condition for
using Eqs. (\ref{nu_eff}). For this the total evolution time $L$ has to be
much larger than the characteristic relaxation time
\makebox{$\tau(\Delta)\sim\Delta^2/\nu_{\rm eff}(\Delta)$} which can be
associated with the length scale $\Delta$. \cite{NT} Since in terms of
$\Delta(F)$ and $L$ relation (\ref{S(F)u}) can be rewritten as
\begin{equation} \label{S(tau)}
S(F)\sim \frac{\nu_{\rm eff}(\Delta)}{\Delta^2}L
\sim\frac{L}{\tau(\Delta)}\;,
\end{equation}
the constraint $L\gg\tau(\Delta)$ is equivalent to
$S(F)\gg 1$ and, accordingly, is automatically fulfilled
as soon as one is dealing with the tail.
It is also important that the effective viscosity and effective pumping
intensity given by Eqs. (\ref{nu_eff}) and following from the form
of the correlation function (\ref{ff'}) can be used for
the description of only typical (or weaker) fluctuations.
The comparison of the characteristic velocity of the flow
created around the instanton,
\begin{equation} \label{u_F}
u_F\sim \left(\frac{|F|}{JL}\right)^{1/2}\hspace*{-1mm},
\end{equation}
with $u_{\rm typ}(\Delta)$, the typical velocity of equilibrium
fluctuations at the length scale $\Delta$ [see Eq. (\ref{u_typ})],
demonstrates that in the considered case both quantities are of the same
order, and therefore, the approach based on using Eqs. (\ref{nu_eff}) with
$R\sim\Delta$ is indeed justified. This allows us to conclude that our
instanton is created by fluctuations of the effective random potential
whose amplitude is typical for their length scale. In such a situation the
only reason why the probability of the instanton is small is that the
signs of these typical fluctuations have to be the same in all
$L/\tau(\Delta)$ independent time intervals of the length $\tau(\Delta)$.
This provides a qualitative explanation why the expression for the action
can be reduced to a very simple form $S\sim L/\tau(\Delta)$.
At large values of $-F$ the range of the applicability of Eq.
(\ref{S(F)u}) is restricted by the constraint $\Delta(F)\gg a_\nu,a_D$,
which is required for making replacement (\ref{nu_eff(Delta)}). In
particular, when $d<2$ and $\xi\ll x_0$ (so that \makebox{$a_\nu\sim
a_D\sim x_0$}) one can expect that at $\Delta(F)\sim x_0$, that is, at
\begin{equation} \label{Fc}
-F\sim F_c\sim \frac{T^2L}{Jx_0^2}\,,
\end{equation}
a crossover takes place from dependence (\ref{S(F)u})
to dependence (\ref{S(F)d}).
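Up to a numerical factor, the scale $F_c$ follows directly from
Eq. (\ref{Delta(F)d}): setting $\Delta(F)\sim a_\nu\sim x_0$ there requires
$-F a_\nu^2\sim L\nu^2 J$, so that with $\nu\equiv T/2J$
\begin{equation*}
-F\sim\frac{L\nu^2 J}{x_0^2}\sim\frac{T^2 L}{J x_0^2}\,.
\end{equation*}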
{On the other hand, in situations when $\xi$ (or $d$) is too
large for dependence (\ref{S(F)d}) to have any range of applicability,
one could expect to have a direct crossover between dependences
(\ref{S(F)xi}) and (\ref{S(F)u}). However, the range of
the applicability of Eq. (\ref{S(F)xi}) describing the far-left tail
corresponds to $\Delta(F)\ll \mbox{min}[a_\xi,a_D]$
and of Eq. (\ref{S(F)u}) describing the universal regime
to $\Delta(F)\gg \mbox{max}[a_\xi,a_D]$.
Since we expect that in a general situation the two length scales, $a_\xi$
and $a_D$, are essentially different, we have to admit the existence
in such a case of an intermediate region in $F$, where the form of
the left tail of $P_L(F)$ cannot be established without
further investigation.}
As it has been already mentioned above, at $d=1$ the value of the exponent
$z$ is known exactly. Substitution of $z=3/2$ and $a_\nu\sim x_0$ into Eq.
(\ref{S(F)u}) and Eq. (\ref{eta_-}) then
reproduces an estimate for $S(F)$ which up to unknown numerical factor
coincides with Eq. (\ref{S(F)}) for $S(F)$ in the far-left tail.
This shows that for $d=1$ and $\xi\ll x_0$ the dependence of $S$
on all parameters in the universal part of the left tail
is exactly the same as in its non-universal part at $F_c\ll -F\ll F_\xi$.
In this particular case at $-F\sim F_c$ only a numerical coefficient
in the dependence (\ref{S(F)u}) can experience a crossover.
For $d=1$ and $\xi\gg x_0$, substitution of Eq. (\ref{a_nu(a_D)-2})
with $a_D\sim\xi$
into Eq. (\ref{F*d}) reproduces an estimate for $F_*(L)$ which has been
obtained by Nattermann and Renz \cite{NR} from scaling arguments
complemented by the assumption that at low enough temperatures $F_*(L)$
{has} to be temperature independent, and follows also from the
replica-symmetry-breaking analysis of Ref. \onlinecite{KD}.
For $d\neq 1$ the value of the exponent $\eta$ in the universal part of
the left tail, $\eta_-=z/[2(z-1)]$, does not coincide with its value in
the far-left tail, where it is given by $2-d/2$ [see Eq. (\ref{S(F)d})].
Note that expression (\ref{S(F)u}) decreases with the increase in $L$
as long as $\eta_{-}>1$, that is, $1<z<2$. This means that in the universal
part of the left tail the condition which is necessary for the possibility
of having a macroscopic optimal fluctuation (whose size is much larger
than $\xi$) is changed from $d<2$ to \makebox{$1<z<2$}. On the other hand,
when the renormalization effects are taken into account, the condition
$0<d<4$ required for having a minimum of $S$ with respect to $\Delta$ (see
Sec. \ref{fl-d}) is replaced by $4/3<z<2$. Thus, the range of the
applicability of Eq. (\ref{S(F)u}) is not restricted to $0<d<2$ (as in the
case of the analogous expression for the far-left tail) but extends
to the whole region of parameters {where the strong-coupling phase
does exist and $z>4/3$ (the condition $z<2$ always has to be fulfilled,
see Sec. \ref{ss}). Note that for $d=1$ the value of
$\zeta\equiv 1/z$ is equal to 2/3 and according to numerical simulations
goes down with the increase in $d$. \cite{HHZ} This means that
the restriction $z>4/3$ is fulfilled for any physical dimension.}
\subsection{Universal part of the right tail} \label{ur}
One could expect the approach based on the application
of the replacements (\ref{nu_eff(Delta)}) to be applicable also for the
description of the universal part of the right tail.
However, it turns out that in this case the situation is more complex.
This can be understood by comparing the size of the optimal fluctuation
$\Delta_+(F)$,
given by Eq. (\ref{Delta(F)+}), with the length scale $R_*(F)$ at which
the typical velocity of equilibrium fluctuations $u_{\rm typ}(R_*)$,
given by Eq. (\ref{u_typ}), becomes comparable with
\begin{equation} \label{u_F+}
u_F\sim \frac{F}{J\Delta_+}\,
\sim\left(\frac{F}{JL}\right)^{1/2}\,,
\end{equation}
the characteristic velocity
inside the optimal fluctuation with the size $\Delta_+(F)$.
In Eq. (\ref{u_F+}) we have used the estimate for $\Delta_+(F)$
given by Eq. (\ref{Delta(F)+}), which
has led to exactly the same estimate for $u_F$ in terms of $|F|$
as in the left tail [see Eq. (\ref{u_F})].
This means that in both tails $R_*(F)$ has to be of the same order.
On the other hand, in Sec. \ref{ul} we have established that in the
left tail the relation $u_F\sim u_{\rm typ}(R_*)$ holds precisely when
$R_*\sim \Delta(F)$. This allows one to conclude that in the right tail,
\begin{equation} \label{}
R_*(F)\sim \Delta(-F)\,,
\end{equation}
where $\Delta(F)$ is the instanton width in the left tail given by
Eq. (\ref{Delta(F)d}).
Accordingly, for the creation of the optimal fluctuation whose
size $\Delta_+(F)$ is much larger than $R_*(F)$
(as it is required in the case of the right tail),
the fluctuations of the effective random potential with length-scale
$\Delta_+(F)$ should have amplitudes much larger than typical.
Naturally, the probability of such fluctuations is strongly suppressed and
cannot be estimated by using Eqs. (\ref{nu_eff(Delta)}).
The most effective way of formation of a fluctuation whose amplitude $u_F$
substantially exceeds the typical velocity of fluctuations at the
corresponding length-scale consists in formation of a set of fluctuations
with smaller length scales, such that for them the amplitudes
of the order of $u_F$ are typical.
This means that the length-scales of these fluctuations should be
of the order of $R_*(F)$, and, accordingly, the estimate for the action
should include an additional factor $(\Delta_+/R_*)^d$ which takes into
account the need for the spatial coherence of these fluctuations.
This leads to
\begin{equation} \label{S(F)+u}
S(F)\sim \frac{L}{\tau(R_*)}\left(\frac{\Delta_+}{R_*}\right)^d
\sim \left(\frac{F}{F_*}\right)^{\eta_+}\,,
\end{equation}
where $F_*$ is the same characteristic free-energy scale as in
the universal part of the left tail [see Eq. (\ref{F*d})], whereas
exponent $\eta_+$ is given by
\begin{equation} \label{eta+u}
\eta_+ = \frac{(1+d)z}{2(z-1)}\;\,.
\end{equation}
In terms of the renormalization approach exactly the same result
is obtained if the renormalization is stopped not at the scale
$\Delta_+(F)$, corresponding to the total size of the optimal fluctuation,
but at a smaller scale $R_*$ (at which the fluctuations cease to be strong
enough for inducing the renormalization), that is,
by using Eq. (\ref{S(F)+}) with the replacement
\begin{equation} \label{D_eff}
D\rightarrow D_{\rm eff}(R_*)\sim D\left(\frac{R_*}{a_D}\right)^{4+d-3z},
\end{equation}
where $R_*\sim \Delta(-F)$.
Since the value of $D_{\rm eff}(R_*)$ does not depend on $\Delta_+$,
the condition for the existence of a minimum of $S$ with respect to
$\Delta_+$ remains the same as has been found when deriving
Eq. (\ref{S(F)+}), $0<d<4$.
On the other hand, in the universal part of the right tail the condition
required for the possibility of having a macroscopic optimal fluctuation
(whose size is much larger than $\xi$) is changed from $d<2$ to $1<z<2$,
which in the strong-coupling phase anyway has to be fulfilled [see
Sec. \ref{ss}].
Therefore, the range of the applicability of Eq. (\ref{S(F)+u}) is
restricted from above not by $d=2$ (as in the case of the analogous
expression for the far-right tail) but by $d=4$.
Note that in contrast to exponent $\eta_-$ given by Eq. (\ref{eta_-}),
exponent $\eta_+$ depends both on $z$ and $d$.
However, the ratio of these two exponents does not depend on $z$,
\begin{equation} \label{}
\frac{\eta_+}{\eta_-}=1+d\;,
\end{equation}
and therefore is known exactly.
The fact that in the regime where the renormalization effects are important
both tails of the free energy distribution function incorporate
the same characteristic free-energy scale $F_*$ confirms that
this regime corresponds to studying the universal form of this
distribution function.
A comparison of Eq. (\ref{Delta(F)+}) with Eq. (\ref{Delta(F)d}) allows one
to verify that the condition \makebox{$\Delta_+(F)\gg R_*(F)$,} on which
we have relied when deriving Eq. (\ref{S(F)+u}), is equivalent to
$S(F)\gg 1$, and therefore is always satisfied as soon as we are
dealing with the tail.
Another condition whose fulfillment is required to justify
replacement (\ref{D_eff}) is related to the quasistationarity of the
problem. Namely, the total evolution time $L$ has to be much larger than
the characteristic relaxation time $\tau(R_*)\sim R_*^2/\nu_{\rm eff}(R_*)$
which can be associated with the length scale $R_*(F)$.
For $R_*(F)\sim\Delta(-F)$, this condition is also reduced to $S(F)\gg 1$.
From the side of large $F$ the range of the applicability of Eq.
(\ref{S(F)+u}) is restricted by {the constraint $R_*\gg a_D$, whose
fulfillment is also required for making replacement (\ref{D_eff}). In
particular, at $d<2$ and $\xi\lesssim x_0$ (when $a_D\sim x_0$), the
crossover between dependences (\ref{S(F)+u}) and (\ref{S(F)+}) can be
expected to occur at $F\sim F_c$, where $F_c$ is given by the same
expression [Eq. (\ref{Fc})] as in the left tail. On the other hand, at
$d>2$ the crossover between dependences (\ref{S(F)+u}) and
(\ref{S(F)+f}) has to take place while $R_*(F)$ is still much
larger than $\xi$. In this situation we expect that the two contributions
to $P_L(F)$ [one from the ``macroscopic" instanton, corresponding to
dependence (\ref{S(F)+u}) and the other from the ``microscopic" instanton
corresponding to dependence (\ref{S(F)+f})] can coexist with each
other and the crossover has to occur when they become comparable
with each other.}
\section{Conclusion \label{Concl}}
In the present work we have studied the form of the tails of the
free-energy distribution function $P_L(F)$ in the directed polymer problem
both for a $\delta$-correlated random potential and for the case of
a finite correlation length $\xi$.
In all regimes that we have investigated the tails have
a stretched-exponential form,
\begin{equation} \label{}
-\ln P_L(\pm F)\sim \left[\frac{F}{F_*(L)}\right]^{\eta_\pm}\,,
\end{equation}
with $F_*(L)\propto L^{\omega_\pm}$ and therefore can be characterized by
the two exponents whose values depend on the dimensionality
of the space in which the polymer is embedded. We use the letter $d$ to denote
the transverse dimensionality of this space, that is, the number of
components of the displacement vector ${\bf u}$.
For sufficiently large fluctuations of $F$
the form of the tails of $P_L(F)$ is
determined by the form of the optimal fluctuation of the random potential
which is sufficient for achieving a given value of $F$.
For a $\delta$-correlated random potential and $d<2$ the minimization
of the action corresponding to such a fluctuation
allows one to show that in the far-left tail
\begin{equation} \label{c-fl}
\eta_-=\frac{4-d}{2}\;,~~~\omega_-=\frac{2-d}{4-d}\;.
\end{equation}
The same values of $\eta_-$ and $\omega_-$ have been obtained by Monthus
and Garel \cite{MG} by constructing a generalization
of the Imry-Ma scaling argument (based on a disorder-dependent
Gaussian variational approach introduced in Ref. \onlinecite{GO}).
However, the approach of Ref. \onlinecite{MG} leaves one in doubt on
what is the range of its applicability (and if such a range exists at all),
whereas the methods used in this work allowed us to establish
that the exponents (\ref{c-fl}) are applicable in the most distant part of
the left tail corresponding to the nonuniversal regime.
At $d\geq 2$ the problem with strictly $\delta$-functional correlations
of a random potential becomes ill-defined,
so it becomes necessary to introduce some regularization.
The natural way of doing this is to assume that the random-potential
correlations are characterized by a finite correlation radius $\xi$.
In the case of $\xi>0$ one finds that in the most distant part of
the left tail the size of the optimal fluctuation of a random potential
has to be comparable with $\xi$ and the values of the exponents become
superuniversal, that is, not dependent on $d$,
\begin{equation} \label{c-fl-xi}
\eta_-=2\,,~~~\omega_-=1/2\,.
\end{equation}
For $d<2$ and not too large $\xi$ one can expect to have a
crossover from regime (\ref{c-fl-xi}) to regime (\ref{c-fl}).
The application of the optimal-fluctuation approach
to the analysis of the right tail shows that for $d<2$
the most distant part of this tail is described by
\begin{equation} \label{c-fr}
\eta_+=\frac{4+d}{2}\,,~~~\omega_+=\frac{2-d}{4+d}\,.
\end{equation}
In contrast to the case of the left tail, the form of the most distant
part of the right tail is insensitive to whether $\xi$ is zero or finite.
On the other hand, for $d>2$ the size of the optimal fluctuation again
becomes determined by $\xi$, which leads to the change of the exponents to
\begin{equation} \label{}
\eta_+=3\,,~~~\omega_+=0\,.
\end{equation}
Note that the value of $\omega_+$ given by Eq. (\ref{c-fr}) corresponds
to the value of the roughening exponent,
\begin{equation} \label{}
\zeta_{\rm F}
=\frac{3}{4+d}\;\;,
\end{equation}
which is known as ``Flory exponent'' \cite{Kard-JAP} and follows
from the simple scaling arguments of Ref. \onlinecite{Kard-JAP}, as well as
from the Gaussian variational calculation of Mezard and Parisi \cite{MP}
incorporating a hierarchical replica-symmetry breaking.
Our analysis has revealed that this scaling analysis (which so far has been
assumed to be of little relevance, because it cannot reproduce the exactly
known value of $\zeta=2/3$ at $d=1$) in reality is applicable for
the description of the most distant (non-universal) part of the right
tail of $P_L(F)$.
However, it still remains unclear whether the appearance of the same
exponent in the variational calculation of Ref. \onlinecite{MP} (based on
the {\em maximization} of the variational free energy of a system with
$L=\infty$) is not more than a coincidence.
If the parameters of the system correspond to the strong coupling phase,
the decrease in $|F|$ makes the optimal-fluctuation approach no
longer directly applicable because the size of the optimal fluctuation
becomes too large (or its amplitude becomes too small) to neglect
the renormalization of the parameters of the system by fluctuations.
In such a situation, a consistent inclusion of the renormalization effects
into account allows one to express the exponents in terms of the roughening
exponent $\zeta=1/z$ describing the behavior of displacement
fluctuations inside an infinite polymer [see Eq. (\ref{x-x'})].
For universal parts of left and right tails, one obtains, respectively,
\begin{equation} \label{c-eta-pm}
\eta_-=\frac{1}{2(1-\zeta)}\;,~~~\eta_+=\frac{1+d}{2(1-\zeta)}\;\,.
\end{equation}
Not unexpectedly, one finds that the value of $\omega$ is the same for
both tails and is equal to $2\zeta-1$,
as it could be expected from the universality.
Quite remarkably, the ratio $\eta_+/\eta_-=1+d$ does not depend on $\zeta$.
The value of $\zeta$ is known exactly only at $d=1$, where $\zeta=2/3$.
In this case the values of $\eta_-=3/2$ and $\eta_+=3$ which
follow from Eqs. (\ref{c-eta-pm}) are in perfect agreement with
the exact solution \cite{PS} of the polynuclear growth (PNG) model,
which is accepted to belong to the same universality class.
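Explicitly, substituting $\zeta=2/3$ (that is, $z=3/2$) into
Eqs. (\ref{c-eta-pm}) gives
\begin{equation*}
\eta_-=\frac{1}{2(1-2/3)}=\frac{3}{2}\,,\qquad
\eta_+=\frac{1+1}{2(1-2/3)}=3\,.
\end{equation*}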
In terms of the directed polymer problem the PNG model
corresponds to the Poisson distribution of identical pointlike impurities
and a rather peculiar limit of vanishing elasticity, $J=0$, and
zero temperature. \cite{PS,Johan}
For this model the form of the distribution function $P_L(F)$ in the universal
regime, as well as the scaling function $g(\alpha)$ entering Eq. (\ref{ff'}),
is known exactly. \cite{PS,PS-04}
The consistency between our results and that of Ref. \onlinecite{PS}
confirms that the directed polymer problem defined by Eq. (\ref{H}) and
the PNG model indeed belong to the same universality class.
The nonuniversal tails in the PNG model have been analyzed in
Ref. \onlinecite{DZ}. Naturally, in the nonuniversal regime even the models
belonging to the same universality class can have different tails.
The difference is especially evident in the case of what we call
the far-right tail because in the PNG model the energy is by definition
bounded from above and therefore its distribution function has to vanish
for large enough positive fluctuations.
On the other hand, it follows from Ref. \onlinecite{DZ}
that in the PNG model the far-left tail is described by
$S(F)\propto F\ln(-F/L)$ and, thus, also has nothing in common with
the far-left tail of the model considered in this work.
In terms of the exponent $\omega=2\zeta-1$ Eqs. (\ref{c-eta-pm})
can be rewritten as
\begin{equation} \label{c-eta-pm2}
\eta_-=\frac{1}{1-\omega}\;,~~~\eta_+=\frac{1+d}{1-\omega}\;\,.
\end{equation}
Our results demonstrate that in model (\ref{H})
the analogous relations are fulfilled also
in non-universal regimes (where $\omega$ need not coincide
with $2\zeta-1$ or be the same in both tails)
as soon as the size of the optimal fluctuation
is comparable with the total length of a string.
For the far-left tail this has been known \cite{HHZ,MG-07}
from the Kardar-Zhang replica approach.
Recently both relations (\ref{c-eta-pm2}) have been derived by
Monthus and Garel \cite{MG-07} with the help of a recursive procedure
for the zero-temperature problem on a hierarchical diamond
lattice whose effective dimension is equal to $d$.
These authors have also suggested that the same relations can be
expected to hold on all hypercubic lattices.
Although, in our opinion, the argument accompanying this proposal
does not take into account some important differences between
hypercubic and hierarchical lattices, the results derived in this
work confirm its correctness.
\section*{ACKNOWLEDGEMENTS}
The authors are grateful to G. Blatter, T. Garel, V.B. Geshkenbein,
A.I. Larkin and V.V. Lebedev for useful discussions.
The work of I.V.K. was supported by RFBR under Grant No. 06-02-17408-a.
\section{Introduction} \label{sec:intro}
Clustering is a fundamental problem with many applications in machine learning, statistics, and data analysis. Although many formulations of clustering are NP-hard in the worst case, many heuristics and approximation algorithms exist and are widely deployed in practice. Unfortunately, many of those algorithms suffer from large running times, especially if the input data sets are high-dimensional.
In order to improve the performance of clustering algorithms in high-dimensional spaces, a popular approach is to project the input point set into a lower-dimensional space and perform the clustering in the projected space. Reducing the dimension (say, from $m$ to $d \ll m$) has multiple practical and theoretical advantages, including (i) lower storage space, which is linear in $d$ as opposed to $m$; (ii) lower running time of the clustering procedure, since running times are often dominated by distance computations, which take time linear in the dimension; and (iii) versatility: one can use {\em any} algorithm or its implementation to cluster the data in the reduced dimension.
Because of its numerous benefits, dimensionality reduction as a tool for improving algorithm performance has been studied extensively, leading to many theoretical tradeoffs between the projected dimension and the solution quality. A classic result in this area is the Johnson-Lindenstrauss (JL) lemma \yrcite{originalJL} which (roughly) states that a random projection of a dataset $X \subseteq \mathbb{R}^m$ of size $n$ onto a dimension of size $O(\log n)$ approximately preserves all pairwise distances. This tool has been subsequently applied to many clustering and other problems (see \cite{dim_reductionsurvey} and references therein).
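The JL guarantee is easy to probe numerically. The following sketch (our own illustration, with arbitrary sizes, not taken from the paper) projects a random dataset with a Gaussian map and checks how pairwise distances are distorted:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n, m, d = 100, 10_000, 200     # d ~ C log n for a modest distortion
X = rng.standard_normal((n, m))

# A Gaussian matrix with entry variance 1/d is a standard JL map:
# it preserves squared norms in expectation.
G = rng.standard_normal((m, d)) / np.sqrt(d)
Y = X @ G

# Ratio of every pairwise distance after vs. before projection.
ratios = pdist(Y) / pdist(X)
print(f"distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```

With these parameters all $\binom{n}{2}$ distance ratios typically land within a few percent of $1$, consistent with the $O(\log n)$ dimension bound.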
Although the JL lemma is known to be tight~\cite{larsen2017optimality} in general, better tradeoffs are possible for {\em specific} clustering problems. Over the last few years, several works \cite{kmeans1, kmeans2, becchetti2019oblivious, ilyapaper} have shown that combining random dimensionality reduction with $k$-means leads to better guarantees than implied by the JL lemma. In particular, a recent paper by Makarychev, Makarychev, and Razenshteyn \yrcite{ilyapaper} shows that to preserve the $k$-means cost up to an arbitrary accuracy, it suffices to project the input set $X$ onto a dimension of size $O(\log k)$, as opposed to $O(\log n)$ guaranteed by the JL lemma. Since $k$ can be much smaller than $n$, the improvement to the dimension bound can be substantial.
However, when $k$ is comparable to $n$, the improvement is limited. This issue is particularly salient for clustering problems with a variable number of clusters, where no a priori bound on the number of clusters exists.
In this paper we study randomized dimensionality reduction over Euclidean space $\mathbb{R}^m$ in the context of two fundamental clustering problems with a variable number of clusters. In particular:
\begin{itemize}
\item Facility location (FL): given a set of points $X \subset \mathbb{R}^m$ and a facility opening cost, the goal is to open a subset $\mathcal{F} \subseteq X$ of facilities in order to minimize the total cost of opening the facilities plus the sum of distances from points in $X$ to their nearest facilities (see Section \ref{sec:prelim} for a formal definition). Such cost functions are often used when the ``true'' number of clusters $k$ is not known, see e.g., \cite{cambridge2009online}, section 16.4.1.
\item Single-linkage clustering, or (equivalently) Minimum Spanning Tree (MST): given a set of points $X \subset \mathbb{R}^m$, the goal is to connect them into a tree in order to minimize the total cost of the tree edges. This is a popular variant of Hierarchical Agglomerative Clustering (HAC) that creates a hierarchy of clusters, see e.g., \cite{cambridge2009online}, section 17.2.
\end{itemize}
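The interaction between the MST objective and random projections can be illustrated with a small experiment (a sketch with arbitrary parameter choices, not the paper's experimental setup):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

def mst_cost(points):
    # Total edge weight of the Euclidean minimum spanning tree,
    # computed on the complete pairwise-distance graph.
    dists = squareform(pdist(points))
    return minimum_spanning_tree(dists).sum()

n, m, d = 200, 1000, 50
X = rng.standard_normal((n, m))

# Gaussian random projection, scaled so norms are preserved in expectation.
G = rng.standard_normal((m, d)) / np.sqrt(d)
Y = X @ G

ratio = mst_cost(Y) / mst_cost(X)
print(f"MST cost ratio (projected / original): {ratio:.3f}")
```

The projected MST cost stays close to the original while the distance computations run on $d \ll m$ coordinates.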
We remark that some papers, e.g., \cite{abboud2019hac} define approximate HAC operationally, by postulating that each step of the clustering algorithm must be approximately correct. However, there are other theoretical formulations of approximate HAC as well, e.g., \cite{dasgupta2016hac, moseley2017hac}.
Since single-linkage clustering has a natural objective function induced by MST, defining approximate single-linkage clustering as approximate MST is a natural, even if not unique, choice.
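The facility-location objective from the first bullet above can be written out directly; the following is a minimal sketch (the function name, the uniform opening cost, and the example points are our own):

```python
import numpy as np

def fl_cost(X, open_idx, opening_cost):
    """Uniform facility-location objective: total opening cost plus
    the sum of distances from each point to its nearest opened facility."""
    F = X[list(open_idx)]
    # (n, |F|) matrix of point-to-facility distances; connect to the nearest.
    conn = np.linalg.norm(X[:, None, :] - F[None, :, :], axis=2).min(axis=1)
    return opening_cost * len(open_idx) + conn.sum()

# Two well-separated pairs of points: opening one facility per pair
# costs 2 * 2.0 (opening) + 1 + 1 (connection) = 6.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
print(fl_cost(X, [0, 2], opening_cost=2.0))   # prints 6.0
```

Raising the opening cost pushes the minimizer toward fewer open facilities, which is why this objective is used when the number of clusters is not fixed in advance.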
\paragraph{Our Results} Our main results show that, for both FL and MST, it is possible to project input point sets into low (sometimes even constant) dimension while provably preserving the quality of the solutions. Specifically, our theorems incorporate the \emph{doubling dimension} $d_X$ of the input datasets $X$. This parameter\footnote{We formally define it in Section \ref{sec:prelim}.} measures the ``intrinsic dimensionality'' of $X$ and can be much lower than its ambient dimension $m$. If $X$ has size $n$, the doubling dimension $d_X$ is always at most $\log n$, and is often much smaller. We show that random projections into dimension roughly proportional to $d_X$ suffice in order to approximately preserve the solution quality. The specific bounds are listed in Table \ref{table}.
\begin{figure*}[t]
\begin{center}
\caption{Number of dimensions $d$ required for a random projection to provide a good clustering approximation}
\vskip 0.1in
\begin{tabular}{|l|l|l|l|l|}
\hline
Problem & Proj. dimension $d$ & Approx. & Cost/Solution & Reference \\
\hline
FL & $O(d_X)$ & $O(1)$ & Cost & Theorem \ref{thm:constant}\\
FL & $O(d_X)$ & $O(1)$ & Locally optimal solution & Theorem \ref{cor:main}\\
\hline
MST & $O(1/\epsilon^2 \cdot (d_X \log(1/\epsilon) +\log \log n))$ & $1+\epsilon$ & Cost & Theorem~\ref{thm:MainMST}\\
MST & $O(1/\epsilon^2 \cdot (d_X \log(1/\epsilon) +\log \log n))$ & $1+\epsilon$ & Optimal solution & Theorem~\ref{thm:MainMST}\\
\hline
\end{tabular}
\label{table}
\end{center}
\vskip -0.1in
\end{figure*}
We distinguish between two types of guarantees. The first type states that the minimum {\em cost} of FL or MST is preserved by a random projection (with high probability) up to the specified factor. This guarantee is useful if the goal is to quickly estimate the optimal value. The second type states that a {\em solution} computed in the projected space induces a solution in the original space which approximates the best solution (in the original space) up to the specified approximation factor. This guarantee implies that one can find an approximately optimal clustering by mapping the data into low dimensions and clustering the projected data. To obtain the second guarantee, we need to assume that the solution in the projected space is either globally optimal (for MST) or locally optimal\footnote{Informally, a solution is locally optimal if opening any new facility does not decrease its cost. The formal definition is slightly more general, and is given in Section~\ref{sec:local}. Note that any solution found by local search algorithms such as that in~\cite{MP_alg} satisfies this condition.} (for FL). We note that these two types of guarantees are incomparable. In fact, for FL, our proofs of the cost and of the solution guarantees are substantially different. We also prove analogous theorems for the ``squared'' version of FL, where the distance between points is defined as the {\em square} of the Euclidean distance between them.
We complement the above results by showing that the conditions and assumptions in our theorems cannot be substantially weakened or eliminated. Specifically, for both FL and MST, we show that:
\begin{itemize}
\item The bounds on the projected dimension $d$ in the theorems specified in the table must be at least $\Omega(d_X)$, as otherwise the approximation factors for both the cost and the solution become super-constant (Theorems \ref{thm:lb_fac}, \ref{thm:lb_mst_cost}, \ref{thm:lb_mst_pullback}).
\item The assumptions that the solution in the projected space is (locally) optimal cannot be relaxed to ``approximately optimal'' (Lemmas \ref{lem:opt_nec}, \ref{lem:opt_nec_mst}).
\end{itemize}
Also, we show that, in contrast to facility location and MST, one must project to $\Omega(\log k)$ dimensions for preserving both the cost and solution for $k$-means and $k$-medians clustering, even if the doubling dimension $d_X$ is $O(1)$.
Finally, we present an experimental evaluation of the algorithms suggested by our results. Specifically, we show that, for both FL and MST, solving these problems in reduced dimension can reduce the running time by 1--2 orders of magnitude while increasing the solution cost only slightly. We also give empirical evidence that the doubling dimension of the input point set affects the quality of the approximate solutions. Specifically, we study two simple point sets of size $n$ that have similar structure but very different doubling dimension values ($O(1)$ and $O(\log n)$, respectively). We empirically show that a good approximation of the MST can be found for the former point set by projecting it into much fewer dimensions than the latter point set.
\paragraph{Related Work}
There is a long line of existing work on approximating the solution of various clustering problems in metric spaces with small doubling dimensions (see \cite{db_dim1, db_dim2, db_dim3, db_dim4}). The state-of-the-art result is given in \cite{focspaper}, where a near-linear $(1+\epsilon)$-approximation algorithm is given for a variety of clustering problems. However, these runtimes have a doubly-exponential dependence on the doubling dimension, which is proven to be unavoidable unless P = NP \cite{focspaper}.
For MST in spaces of doubling dimension $d_X$, it is known that a $(1+\epsilon)$-approximate solution can be computed in time $2^{O(d_X)} n \log n + \epsilon^{-O(d_X)} n$ \cite{gottlieb2013proximity}.
To the best of our knowledge, none of these algorithms have been implemented.
In addition, the notion of doubling dimension has also been previously used to study algorithms for high dimensional problems such as the nearest neighbor search, see e.g., \cite{indyknaor, nn1, nn2}. The paper~\cite{indyknaor} is closest in spirit to our work, as it shows that, for a fixed point $q$ and a data set $X$, a random projection into $O(d_X)$ dimensions approximately preserves the distance from $q$ to its nearest neighbor in $X$ with a ``good'' probability. If the probability of success was of the form $1-1/{2n}$, we could apply this statement to all (up to $n$) facilities in the solution simultaneously, which would prove our results. Unfortunately, the probability of failure is much higher than $1/n$, and therefore this approach fails. Nevertheless, our proofs use some of the lemmas developed in that work, as discussed in Section~\ref{sec:prelim}.
\section{Preliminaries} \label{sec:prelim}
\paragraph{Problem Definitions}
The \emph{Euclidean Facility Location} problem is defined as follows: We are given a dataset $X \subset \mathbb{R}^m$ of $n$ points and a nonnegative function $c: X \rightarrow \mathbb{R}$ that represents the cost of \emph{opening} a facility at a particular point. The goal is to find a subset $\mathcal{F} \subseteq X$ that minimizes the objective $\cost(\mathcal{F}) = \sum_{f \in \mathcal{F}} c(f) + \sum_{x \in X} D(x, \mathcal{F}),$ where $D(x, \mathcal{F}) = \min_{f \in \mathcal{F}} \|x-f\|$. In this work we restrict our attention to the case that $\| \cdot \|$ is the Euclidean $(\ell_2)$ metric. The first term $\sum_{f \in \mathcal{F}} c(f)$ is referred to as the opening costs and the second term $ \sum_{x \in X} D(x, \mathcal{F})$ is referred to as the connection costs. In this work, we also focus on the \emph{uniform} version of facility location where
all opening costs are the same. By re-scaling the points, we can further assume that $c(x) = 1$ for all $x \in X$. Therefore, throughout the paper, we focus on minimizing the following objective function:
\begin{equation}\label{eq:objective}
\cost(\mathcal{F}) =| \mathcal{F}| + \sum_{x \in X} \, \min_{f \in \mathcal{F}}\|x-f\|.
\end{equation}
A set $\mathcal{F}$ of facilities is also referred to as a \emph{solution} to the facility location problem.
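Since Eq.\@ \eqref{eq:objective} is central to everything that follows, it may help to see it evaluated directly. The sketch below is ours (function names and data are illustrative, not from the paper):

```python
import math

def fl_cost(X, F):
    """Uniform facility location cost: |F| plus each point's distance
    to its nearest open facility, as in the objective above."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return len(F) + sum(min(dist(x, f) for f in F) for x in X)

# two tight clusters on a line, one facility opened in each
X = [(0.0,), (0.1,), (10.0,), (10.1,)]
F = [(0.0,), (10.0,)]
print(fl_cost(X, F))  # 2 facilities + connection cost 0.2 = 2.2
```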
The \emph{Euclidean Minimum Spanning Tree} problem is defined as follows. Given a dataset $X \subset \mathbb{R}^m$ of $n$ points, we wish to find a set $\mathcal{M}$ of edges $(x, y)$ that forms a spanning tree of $X$ and minimizes the following objective function:
\begin{equation}\label{eq:objective_mst}
\cost(\mathcal{M}) = \sum_{(x, y) \in \mathcal{M}}\|x-y\|.
\end{equation}
\paragraph{Properties of Doubling Dimension}
We parameterize our dimensionality reduction using \emph{doubling dimension}, a measure of the intrinsic dimensionality of the dataset. The notion of doubling dimension holds for a general metric space $X$ and is defined as follows. Let $B(x,r)$ denote the ball of radius $r$ centered at $x \in X$, intersected with the points in $X$. Then the doubling \emph{constant} $\lambda_X$ is the smallest constant $\lambda$ such that for all $x \in X$ and for all $r > 0$, there exists $S\subseteq X$ with $|S| \le \lambda$ such that $B(x,r) \subseteq \bigcup_{s \in S} B(s, r/2)$. The doubling \emph{dimension} of $X$ is defined as $d_X := \log \lambda_X$. One can see that $\lambda_X \le |X|,$ so $d_X \le \log |X|$.
In this paper, we focus on the case that $X$ is a subset of Euclidean space $\mathbb{R}^m$.
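One way to build intuition for this definition is to greedily cover a ball $B(x,r)$ with half-radius balls centered at data points: for each ball the greedy count upper-bounds the minimum cover size, so its maximum over $x$ and $r$ upper-bounds $\lambda_X$. A sketch (names ours; this gives an upper bound, not the exact doubling constant):

```python
import math

def greedy_cover(X, x, r):
    """Greedily cover B(x, r) by radius-r/2 balls centered at points of X.
    The returned count upper-bounds the minimum number needed."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    uncovered = [p for p in X if dist(p, x) <= r]
    k = 0
    while uncovered:
        c = uncovered[0]          # open a half-radius ball at c
        k += 1
        uncovered = [p for p in uncovered if dist(p, c) > r / 2]
    return k

# points on a line: a constant number of half-radius balls suffices
line = [(float(i),) for i in range(16)]
print(greedy_cover(line, (8.0,), 8.0))   # 4

# vertices of a simplex (pairwise distance sqrt(2)): each half-radius
# ball covers only its own center, so the count grows with |X|
simplex = [tuple(1.0 if j == i else 0.0 for j in range(16)) for i in range(16)]
print(greedy_cover(simplex, simplex[0], 1.5))  # 16
```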
\paragraph{Dimension Reduction}
In this paper we define a random projection as follows.
\begin{definition}\label{def:proj}
A random projection from $\mathbb{R}^m$ to $\mathbb{R}^d$ is a linear map $G$ with i.i.d.\@ entries drawn from $\mathcal{N}(0, 1/d)$.
\end{definition}
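Definition \ref{def:proj} can be instantiated in a few lines; the sketch below uses pure Python for self-containment (in practice one would use a numerical library):

```python
import random

def random_projection(m, d, seed=None):
    """A d x m matrix G with i.i.d. N(0, 1/d) entries, per the
    definition above."""
    rng = random.Random(seed)
    s = (1.0 / d) ** 0.5
    return [[rng.gauss(0.0, s) for _ in range(m)] for _ in range(d)]

def apply_projection(G, x):
    # Gx, computed row by row
    return [sum(g * xi for g, xi in zip(row, x)) for row in G]

G = random_projection(m=1000, d=20, seed=1)
y = apply_projection(G, [1.0] * 1000)
# the 1/d scaling makes E[||Gx||^2] = ||x||^2, so this is near 1000
print(sum(v * v for v in y))
```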
The following dimensionality reduction result related to doubling dimension was proven in \cite{indyknaor}. Informally, the lemma below states that a random projection of $X$ onto a dimension $O(d_X)$ subspace does not `expand' $X$ very much.
\begin{lemma}[Lemma $4.2$ in \cite{indyknaor}]\label{lem:indyk} Let $X \subseteq B(0,1)$ be a subset of the $m$-dimensional Euclidean unit ball, and let $G$ be a random projection from $\mathbb{R}^m$ to $\mathbb{R}^d$. Then there exist universal constants $c, C > 0$ such that for $d \ge C \cdot d_X + 1$ and $t > 2$, $\Pr(\exists x \in X, \, \|Gx\| \ge t) \le \exp(-cdt^2)$.
\end{lemma}
For our proofs, we will need some additional preliminary results on random projections, which are deferred to Appendix \ref{sec:app_prelim}.
\section{Local Optimality for Facility Location}
\label{sec:local}
We now define the notion of a locally optimal solution for facility location. As stated in the introduction, this notion plays a key role in our approximation guarantees. Before we present our criterion for local optimality, we begin by discussing the Mettu-Plaxton (MP) algorithm, an approximation algorithm for the facility location problem. The MP algorithm yields a useful geometric quantity for understanding the facility location problem.
\subsection{Approximating the Cost of Facility Location}
For each $p \in X$, we associate with it a radius $r_p > 0$ which satisfies the relation
\begin{equation}\label{eq:rdef}
\sum_{q \in B(p, r_p) \cap X} (r_p - \|p-q\|) = 1.
\end{equation}
It can be checked that a unique value $r_p$ satisfying $1/n \le r_p \le 1$ exists for every $p$. The geometric interpretation of $r_p$ is shown in Figure~\ref{fig:circle_radii}. This quantity was first defined by Mettu and Plaxton \yrcite{MP_alg},
who proved that a simple greedy procedure of iteratively selecting facilities that lie in balls of radii $2r_p$ gives a $3$ factor approximation algorithm for the facility location problem.
For completeness, their algorithm is given in Appendix \ref{sec:app_local}.
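Since the left-hand side of Eq.\@ \eqref{eq:rdef} is continuous and strictly increasing in $r_p$ ($p$ itself always contributes a term $r_p$), each radius can be computed by binary search. A sketch with names of our own choosing:

```python
import math

def radius(p, X, iters=60):
    """Binary search for r_p solving sum_{q in B(p,r)} (r - |p-q|) = 1."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    dists = [dist(p, q) for q in X]  # p itself is in X and contributes r
    def mass(r):
        return sum(r - dq for dq in dists if dq <= r)
    lo, hi = 0.0, max(dists) + 1.0   # mass(hi) >= hi >= 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mass(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(radius((0.0,), [(0.0,)]))           # isolated point: r_p = 1
print(radius((0.0,), [(0.0,), (0.0,)]))   # duplicated point: r_p = 1/2
```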
\begin{figure}[h!]
\centering
\includegraphics[width=0.35\linewidth]{images/radii.png}
\caption{$r_p$ is defined such that the dotted lines add to $1$.}
\label{fig:circle_radii}
\end{figure}
One of the main insights from
Mettu and Plaxton's algorithm
is that the sum of the radii $r_p$ is a constant factor approximation to the cost of the optimal solution. This insight was first stated in \cite{sublin_MP} where it was used to design a sublinear time algorithm to approximate the cost of the facility location problem. In particular, we have the following result from \cite{sublin_MP} about the approximation property of the radii values.
\begin{lemma}[Lemma $2$ in \cite{sublin_MP}]\label{lem:appx}
Let $C_{OPT}$ denote the cost of the optimal facility location solution. Then
$\frac{1}{4} \cdot C_{OPT} \le \sum_{p \in X} r_p \le 6 \cdot C_{OPT}$.
\end{lemma}
For our purposes, we use the radii values to define a local optimality criterion for a solution to the facility location problem. Our local optimality criterion states that each point $p$ must have a facility that is within distance $3r_p$.
\begin{definition}\label{def:local}
A solution $\mathcal{F}$ to the facility location problem is \emph{locally optimal} if for all $p \in X$,
$B(p, 3r_p) \cap \mathcal{F} \ne \emptyset$.
\end{definition}
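Given the radii, Definition \ref{def:local} is straightforward to check, and a locally optimal solution can even be produced greedily: open a facility at any point whose $3r_p$ ball contains none. The sketch below (our own, not the MP algorithm, and with hand-picked radii rather than ones solving Eq.\@ \eqref{eq:rdef}) illustrates both:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_locally_optimal(X, radii, F):
    # Definition: B(p, 3 r_p) must contain a facility for every p in X
    return all(any(dist(p, f) <= 3 * r for f in F) for p, r in zip(X, radii))

def greedy_local_solution(X, radii):
    # not the MP algorithm: simply open a facility wherever the local
    # optimality condition fails; the result satisfies the Definition
    F = []
    for p, r in zip(X, radii):
        if not any(dist(p, f) <= 3 * r for f in F):
            F.append(p)
    return F

X = [(0.0,), (0.2,), (5.0,)]
radii = [0.3, 0.3, 1.0]       # illustrative values only
F = greedy_local_solution(X, radii)
print(F, is_locally_optimal(X, radii, F))  # [(0.0,), (5.0,)] True
```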
We show in Lemma \ref{lem:localopt} that a solution that is not locally optimal can be improved, i.e., the objective function given in Eq.\@ \eqref{eq:objective} can be decreased by adding $p$ to the set of facilities. This implies that any global optimal solution must also be locally optimal, so requiring a solution of the facility location problem to be locally optimal is a \emph{less restrictive} condition than requiring a solution to be globally optimal.
\begin{lemma} \label{lem:localopt}
Let $\mathcal{F}$ be any collection of facilities. If there exists a $p \in X$ such that $B(p, 3r_p) \cap \mathcal{F} = \emptyset$ then $\textup{cost}(\mathcal{F} \cup \{p\}) < \textup{cost}(\mathcal{F})$,
i.e., we can improve the solution.
\end{lemma}
The proof of Lemma \ref{lem:localopt} is deferred to Appendix \ref{sec:app_local}.
\section{Dimension Reduction for Facility Location}
\subsection{Approximating the Optimal Facility Location Cost}\label{subsec:approx}
In this subsection we show that we can \emph{estimate} the cost of the global optimal solution for a point set $X$ by computing the value of the radii after a random projection onto dimension $d = O(d_X)$. We do this by showing that for each $p$, the value of $r_p$ can be approximated up to a constant multiplicative factor in $\mathbb{R}^d$, the lower dimension.
For each $p \in X$, let $r_p$ and $\tilde{r}_p$ be the radius of $p$ and $Gp$ in $\mathbb{R}^m$ and $\mathbb{R}^d$, respectively, computed according to Eq.\@ \eqref{eq:rdef}. Then we prove that $\E[\tilde{r}_p] = \Theta(r_p)$, where the expectation is over the randomness of the projection $G$.
This proof can be divided into showing $\E[\tilde{r}_p] = O(r_p)$ and $\E[\tilde{r}_p] = \Omega(r_p).$
Our proof strategy for the former is to use the concentration inequality in Lemma \ref{lem:indyk} to roughly say that points in $B(p, r_p) \cap X$ cannot get `very far' away from $p$ after a random projection. In particular, they must all still be at a distance $O(r_p)$ of $p$ after the random projection. Then using the geometric definition of $r_p$ given in \eqref{eq:rdef} and Figure \ref{fig:circle_radii}, we can say that the corresponding radius of $Gp$ in $\mathbb{R}^d$, denoted $\tilde{r}_p$, must then be upper bounded by $O(r_p)$.
Our proof strategy for the latter is different in that our challenge is to show that points do not `collapse' closer together. In more detail, for a fixed point $p$, we need to show that after a dimension reduction, many \emph{new} points do not come inside a ball of radius $O(r_p)$ around the point $Gp$. An application of Theorem \ref{thm:indyk} in Appendix \ref{sec:app_prelim},
due to Indyk and Naor \yrcite{indyknaor}, deals with this event.
By adding these expectations over each point $p$ and applying Lemma \ref{lem:appx}, we can prove that the facility location cost is preserved under a random projection. Formally, we obtain the following theorem:
\begin{theorem}\label{thm:constant}
Let $X \subseteq \mathbb{R}^m$ and let $G$ be a random projection from $\mathbb{R}^m$ to $\mathbb{R}^d$ for $d = O(d_X)$. Let $\mathcal{F}_m$ be the optimal solution in $\mathbb{R}^m$ and let $\mathcal{F}_d$ be the optimal solution for the dataset $GX \subseteq \mathbb{R}^d$. Then there exist constants $c, C >0$ such that
$ c \cdot \textup{cost}(\mathcal{F}_m) \le \E[\textup{cost}(\mathcal{F}_d)] \le C \cdot \textup{cost}(\mathcal{F}_m)$.
\end{theorem}
The full proof of Theorem \ref{thm:constant} and the lemmas bounding $\E[\tilde{r}_p]$ are deferred to Appendix \ref{sec:app_fl}.
\subsection{Obtaining Facility Location Solution in Larger Dimension} \label{subsec:pullback}
As discussed in the introduction, for many applications it is not enough to approximate the \emph{cost} of the optimal solution; one must also \emph{obtain} a good solution.
In particular, we would like to perform dimensionality reduction on a dataset $X$, use some algorithm to solve facility location, and then have the guarantee that the quality of the solution we found is a good indicator of the quality of the solution in the
original
dimension. Furthermore, since optimally solving facility location in the smaller dimension might still be a challenging task, it is desirable to have a guarantee that a good solution (not necessarily the global optimum) will be a good solution in the larger dimension. We show in this section that this is indeed the case for \emph{locally optimal} solutions.
Specifically, we show that the cost of a locally optimal solution found in $\mathbb{R}^d$ does not increase substantially when evaluated in the larger dimension. More formally, we prove the following theorem:
\begin{theorem}\label{cor:main}
Let $X \subset \mathbb{R}^m$ and $G$ be a random projection from $\mathbb{R}^m$ to $\mathbb{R}^d$ for $d = O(d_X \cdot \log(1/\epsilon)/\epsilon^2)$. Let $\mathcal{F}_d$ be a locally optimal solution for the dataset $GX$. Then, the cost of $\mathcal{F}_d$ evaluated in $\mathbb{R}^m$, denoted as $\textup{cost}_m(\mathcal{F}_d)$, satisfies
$$ \E[\textup{cost}_m(\mathcal{F}_d)] \le |\mathcal{F}_d| + O\bigg(\sum_{p \in X} r_p\bigg) \le \textup{cost}_d(\F_d) + O(F),$$
where $F$ is the optimal facility location cost of $X$ in $\bbR^m$.
\end{theorem}
To describe the proof intuition, first note that the cost function defined in Eq.\@ \eqref{eq:objective} has two components. One is the number of facilities opened, and the other is the connection cost. The first term is automatically preserved in the larger dimension since the number of facilities stays the same. Therefore, the main technical challenge is to show that if a facility is within distance $O(\tilde{r}_p)$ of a fixed point $p$ in $\mathbb{R}^d$ (note that $\tilde{r}_p$ is calculated according to Eq.\@ \eqref{eq:rdef} in $\mathbb{R}^d$), then the facility must be within distance $O(r_p)$ in $\mathbb{R}^m$, the larger dimension. From here, one can use Lemma \ref{lem:appx} to bound $\sum_{p \in X} r_p$ by $O(F)$, and the simple fact that $|\F_d| \le \cost_d(\F_d)$.
The proof of our main technical challenge relies on the careful balancing of the following two events. First, we control the value of the radius $\tilde{r}_p$ and show that $\tilde{r}_p \approx r_p$. In particular, we show that the probability of $\tilde{r}_p \ge k r_p$ for any constant $k$ is exponentially decreasing in $k$.
Next, we need to bound the probability that a `far' point comes `close' to $p$ after the dimensionality reduction.
While there exists a known result on this (e.g., Theorem \ref{thm:indyk} in Appendix \ref{sec:app_prelim}),
we need a novel, more detailed result to quantify how close far points can come after the dimension reduction.
To study this in a more refined manner, we bucket the points in $X \setminus \{p\}$ according to their distance from $p$, with the $i$th level representing distance approximately $i$ from $p$.
We show that points in $X\setminus \{p\}$ that are in `level' $i$ do not shrink to a `level' smaller than $O(\sqrt{i})$. Note that we need to control this even across all levels. To do this requires a chaining type argument which crucially depends on the doubling dimension of $X$. Finally, a careful combination of probabilities gives us our result.
The proof of Theorem \ref{cor:main} is deferred to Appendix \ref{sec:app_fl}.
\begin{remark}\label{rem:generalization}
Our proof of Theorem \ref{cor:main} generalizes to the case of \emph{arbitrary} opening costs $c_p$ by changing the definition of $r_p$ to be $ \sum_{q \in B(p, r_p)} (r_p - \|p-q\|) = c_p$.
\end{remark}
\subsection{Facility Location with Squared Costs}
The facility location problem with squared costs is the following variant of facility location. Given a dataset $X \subset \mathbb{R}^m$, our goal is to find a subset $\mathcal{F} \subseteq X$ that minimizes the objective
\begin{equation}
\cost(\mathcal{F}) =| \mathcal{F}| + \sum_{x \in X} \, \min_{f \in \mathcal{F}}\|x-f\|^2.
\end{equation}
In contrast to \eqref{eq:objective}, we are adding the \emph{squared} distance from each point to its nearest facility in $\mathcal{F}$, rather than just the distance. This is comparable to $k$-means, whereas standard facility location is comparable to $k$-medians.
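The only change relative to Eq.\@ \eqref{eq:objective} is squaring the connection term; adapting the evaluation is a one-line edit (sketch with our own names):

```python
def fl_cost_squared(X, F):
    """Squared-cost facility location: |F| plus each point's squared
    Euclidean distance to its nearest facility."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return len(F) + sum(min(d2(x, f) for f in F) for x in X)

print(fl_cost_squared([(0.0,), (2.0,)], [(0.0,)]))  # 1 + 0 + 4 = 5.0
```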
For the facility location problem with squared costs, we are again able to show that a random projection of $X$ into $O(d_X)$ dimensions preserves the optimal cost up to an $O(1)$ factor, and that any locally optimal solution in the reduced dimension has its cost preserved in the original dimension. The formal statements and proofs are very similar to those of the standard facility location problem, and are deferred to Appendix \ref{sec:fl_squared}.
\section{Dimension Reduction for MST} \label{sec:mst}
In this section we demonstrate the effectiveness of dimensionality reduction for the minimum spanning tree (MST) problem. As in the case of facility location, we show that we can \emph{estimate} the cost of the optimum MST solution by computing the MST in a lower dimension, and that the minimum spanning tree in the lower dimension is an \emph{approximate} solution to the high-dimensional MST problem.
This time our approximations, both to the optimum cost and the optimum solution, can be $(1 + \epsilon)$-approximations for any $\epsilon > 0$, as opposed to the constant factor approximations that we could guarantee for facility location. To formally state our theorem, for some spanning tree $T$ of $X$, let $\cost_X(T)$ be the sum of the lengths of the edges in $T$. Likewise, let $\cost_{GX}(T)$ be the sum of the lengths of the edges in $T$, where distances are measured in the projected tree $GX$. Our main result is the following theorem:
\begin{theorem} \label{thm:MainMST}
For some positive integers $m, d$, let $X \subset \mathbb{R}^m$ be a point set of size $n$ and let $G: \mathbb{R}^m \to \mathbb{R}^d$ be a random projection. Let $\mathcal{M}$ represent the minimum spanning tree of $X$, with $M = \cost_X(\mathcal{M})$ and $\widetilde{\mathcal{M}}$ represent the minimum spanning tree of $GX$, with $\widetilde{M} = \cost_{GX}(\widetilde{\mathcal{M}})$. Then, for some sufficiently large constant $C_6,$ if $d \ge C_6 \cdot \epsilon^{-2} \cdot (\log \epsilon^{-1} \cdot d_X + \log \log n)$, the following are true:
\begin{enumerate}
\item The cost of the MST is preserved under projection with probability at least $\frac{9}{10}.$ In other words, $\widetilde{M} \in [1-\epsilon, 1+\epsilon] \cdot M$.
\item The optimal projected MST $\widetilde{\mathcal{M}}$ is still an approximate MST in the original dimension with probability at least $\frac{9}{10}.$ In other words, $\cost_{X}(\widetilde{\mathcal{M}}) \in [1, 1+\epsilon] \cdot M$.
\end{enumerate}
\end{theorem}
Hence, we obtain a significantly stronger theoretical guarantee for preserving the MST than $d = \Theta(\epsilon^{-2} \log n)$, which is promised by the Johnson-Lindenstrauss Lemma, assuming that $d_X$ and $\epsilon^{-1}$ are constant or very small.
Our main technical result in establishing Theorem \ref{thm:MainMST} is the following crucial lemma, which will in fact allow us to prove both parts of the above theorem simultaneously.
\begin{lemma} \label{lem:MST}
For all notation as in Theorem \ref{thm:MainMST}, $\E[\cost_{X}(\widetilde{\mathcal{M}}) - \cost_{GX}(\widetilde{\mathcal{M}})] \le O(\epsilon) \cdot M.$
\end{lemma}
The proof strategy for Lemma \ref{lem:MST} involves first dividing the edges of $\widetilde{\mathcal{M}}$ into levels based on their lengths, and bounding the difference between edge lengths (pre- and post- projection) in each level separately. To analyze a level consisting of the edges of length approximately $t$, we first partition the point set $X$ (in the original dimension $\mathbb{R}^m$) into balls $B_1, \dots, B_r$ of radius $c \cdot t$ for a small constant $c$, and show via chaining-type arguments that not too many pairs of balls that were originally far apart come close together after the random projection. Moreover, using Lemma \ref{lem:indyk}, we show that almost all of the balls do not expand by much.
Therefore, there are not many \emph{bad} pairs of balls $(B_i, B_j)$, where $(B_i, B_j)$ is bad if there exists $p \in B_i, q \in B_j$ where $\|p-q\|$ is much bigger than $t$ but $\|Gp-Gq\|$ is approximately $t$. Now, assuming that none of the balls expand by much in the random projection, for any bad pair $(B_i, B_j)$ and edges $(p, q)$ and $(p', q')$ with $p, p' \in B_i$ and $q, q' \in B_j,$ we cannot have both edges in the minimum spanning tree of $GX$. This is because $\|Gp-Gq\|, \|Gp'-Gq'\| \approx t,$ but since $B_i$ and $B_j$ have radius $c \cdot t$ and do not expand by much, we can improve the spanning tree by replacing $(Gp, Gq)$ with either $(Gp, Gp')$ or $(Gq, Gq')$. So, each bad pair can have at most $1$ edge in $\widetilde{\mathcal{M}}$, the MST of $GX$. Overall, in each level, not too many edges in $\widetilde{\mathcal{M}}$ can shrink by much after the projection.
The full proofs of Lemma \ref{lem:MST} and Theorem \ref{thm:MainMST} are given in Appendix \ref{sec:app_mst}.
\section{Lower Bounds for Projection Dimension} \label{sec:lower}
In this section, we state various lower bounds for the projection dimension $d$ for both facility location clustering and minimum spanning tree. We also show that, in contrast to facility location, low doubling dimension does not actually help with dimensionality reduction for $k$-means or $k$-medians clustering.
All proofs are deferred to Appendix \ref{sec:app_lb}.
In all results of this section, we think of $X$ as a point set of size $n$ in Euclidean space $\mathbb{R}^m$, and $G$ as a random projection sending $X$ to $GX \subset \mathbb{R}^d$. In this section, for FL, we always let $\F$ be the optimal set of facilities in $X$, with cost $F$, and $\widetilde{\F}$ be the optimal set of facilities in $GX$, with cost $\widetilde{F}$. We define $\mathcal{M}, M, \widetilde{\mathcal{M}}, \widetilde{M}$ analogously for MST. We use $o(1)$ to denote functions going to $0$ as $n \to \infty,$ and $\omega(1)$ to denote functions going to $\infty$ as $n \to \infty$, where $n = |X|$.
First, we show that the dependence of the projected dimension $d$ on the doubling dimension $d_X$ in Theorems
\ref{thm:constant}, \ref{cor:main}, and \ref{thm:MainMST}
are all required to obtain constant factor approximations for either the cost or the pullback solution. Namely, we show the following three theorems:
\begin{theorem}[FL] \label{thm:lb_fac}
Let $d = o(\log n)$. There exists $X$ with doubling dimension $\Theta(\log n)$, such that with at least $2/3$ probability over $G: \mathbb{R}^m \to \mathbb{R}^d$, $\widetilde{F} = o(1) \cdot F$. Moreover, with probability at least $2/3,$ $\widetilde{\F}$, when pulled back to $X$, has cost $\omega(1) \cdot F$ in the original dimension.
\end{theorem}
\begin{theorem}[MST] \label{thm:lb_mst_cost}
Let $d = o(\log n).$ There exists $X$ with doubling dimension $\Theta(\log n)$, such that with probability at least $2/3,$ $\widetilde{M} = o(1) \cdot M$.
\end{theorem}
\begin{theorem}[MST] \label{thm:lb_mst_pullback}
Let $d = o(\log n).$ There exists $X$ with doubling dimension $\Theta(\log n)$, such that with probability at least $2/3,$ $\widetilde{\mathcal{M}},$ when pulled back to $X$, will have cost $\omega(1) \cdot M$.
\end{theorem}
Next, we show that (local) optimality is required for Theorems \ref{cor:main} and \ref{thm:MainMST}, and cannot be replaced with \emph{approximate} optimality. In other words, random projections to $o(\log n)$ dimensions do not necessarily preserve the set of approximate solutions for either facility location or MST, even for point sets of low doubling dimension. Namely, we show
the following two lemmas:
\begin{lemma}[FL] \label{lem:opt_nec}
Let $d = o(\log n)$. There exists $X$ with constant doubling dimension, such that with at least $2/3$ probability, there exists a $(1+O(m^{-1/2d})) = (1+o(1))$-approximate solution $\F'$ for $GX$ whose total cost when pulled back to $X$ is at least $\Omega(m^{1/2d}) \cdot F = \omega(1) \cdot F$.
\end{lemma}
\begin{lemma}[MST] \label{lem:opt_nec_mst}
Let $d = o(\log n)$ but $d = \omega(\log \log n)$. There exists $X$ with constant doubling dimension, such that with at least $2/3$ probability, there exists a $(1+o(1))$-approximate MST $\mathcal{M}'$ for $GX$ whose total cost when pulled back to $X$ is at least $\omega(1) \cdot M$.
\end{lemma}
Finally, we show that the guarantees of facility location are in fact not maintained for $k$-means and $k$-medians clustering. In other words, the bound of $O(\log k)$ by \cite{ilyapaper} is optimal even for sets of doubling dimension $O(1)$.
\begin{theorem}[$k$-means/$k$-medians] \label{thm:lb_kmeans}
Let $k < n$ and $d = o(\log k)$. Then, there exists $X$ with constant doubling dimension, such that with probability at least $2/3$, the $k$-means (resp., medians) cost of $GX$ is $o(1)$ times the $k$-means (resp., medians) cost of $X$. Moreover, the optimal choice of $k$ centers in $GX$, when pulled back to $X$, will be an $\omega(1)$-approximate solution in the original dimension $\mathbb{R}^m$.
\end{theorem}
At a first glance, Theorem \ref{thm:lb_kmeans} may appear to contradict our upper bounds for facility location. However, in our counterexamples for $k$-means and $k$-medians, the cost (both initially and after projection) is substantially smaller than $k$. Facility location adds a cost of $k$ for the $k$ facilities that are created, and since these facilities now make up the bulk of the cost, the facility location cost is still approximately preserved under random projection.
\section{Experiments} \label{sec:experiments}
We use the following datasets in our experiments for Subsections \ref{subsec:exp_FL} and \ref{subsec:exp_MST}.
\begin{itemize}
\item \textbf{Faces Dataset}: This dataset is used in the influential ISOMAP paper and consists of $698$ images of faces in dimension $4096$ \cite{isomap}. From \cite{faces}, we can estimate that the doubling dimension of this dataset is a small constant.
\item \textbf{MNIST `2' Dataset}: $1000$ randomly chosen images from the MNIST dataset (dimension 784) restricted to the digit $2$. We picked $2$ since it is considered in the original ISOMAP paper \cite{isomap}.
\end{itemize}
All of our experimental results are averaged over $20$ independent trials and the projection dimension $d$ ranges from $5$ to $20$ inclusive. All of our experiments were run on a 2.7 GHz dual-core Intel i5 CPU with 8 GB RAM.
\begin{figure*}[!ht]
\centering
\subcaptionbox{Faces Dataset\label{fig:FL_a} }[0.65\columnwidth][c]{%
\includegraphics[width=2.2in]{images/fac_loc_faces.pdf}}
\hspace{35mm}
\subcaptionbox{MNIST `2' Dataset\label{fig:FL_b}}[0.65\columnwidth][c]{%
\includegraphics[width=2.2in]{images/fac_loc_mnist.pdf}}
\caption{Facility Location Experiments. (a) \textbf{Blue:} Ratio of solution costs with/without dimensionality reduction, as a function of $d$. \textbf{Orange: }Running time (in secs) as a function of $d$. (b) Same plot as (a) but for MNIST `2' dataset.}
\label{fig:FL_experiments}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subcaptionbox{Faces Dataset\label{fig:MST_a} }[0.5\columnwidth][c]{%
\hspace*{-1cm}
\includegraphics[width=2.2in]{images/mst_faces.pdf}}\hspace{17mm}
\subcaptionbox{MNIST `2' Dataset\label{fig:MST_b}}[0.5\columnwidth][c]{%
\hspace*{-1cm}
\includegraphics[width=2.2in]{images/mst_mnist.pdf}}\hspace{17mm}
\subcaptionbox{Low vs High Doubling Dimension Comparison\label{fig:large_vs_small}}[0.5\columnwidth][c]{%
\hspace*{-1cm}
\includegraphics[width=2.2in]{images/MST_comparision.pdf}}
\caption{Minimum Spanning Tree Experiments. (a) \textbf{Blue:} Ratio of solution costs with/without dimensionality reduction, as a function of $d$. \textbf{Orange:} Running time (in secs) as a function of $d$. (b) Same plots as (a) but for MNIST `2' dataset. (c) Dataset $1$ (low doubling dimension) can be projected into a much smaller dimension than Dataset $2$ for MST computation.}
\label{fig:MST_experiments}
\end{figure*}
\subsection{Facility Location: Cost versus Accuracy Analysis} \label{subsec:exp_FL}
In this section we compare the accuracy of the MP algorithm with/without dimensionality reduction for various numbers of centers opened.
\paragraph{Experimental Setup }
We project our datasets and compute a facility location clustering with the opening costs scaled so that $n/2, n/5,$ and $n/10$ facilities are opened respectively. We then take this solution and evaluate its cost in the original dimension. We also perform a clustering in the original dimension with the same prescribed number of facilities opened and plot the ratio of the cost of the solution found in the lower dimension (but evaluated in the larger dimension) to the cost of the solution found in the larger dimension. We also plot the time taken for the clustering algorithm in the projected dimension. We use the MP algorithm to perform our clustering due to the intractability of finding the exact optimum and also because the MP algorithm is fast and quite practical to use.
\paragraph{Results}
Our results are plotted in Figures \ref{fig:FL_a}-\ref{fig:FL_b}.
Our experiments empirically demonstrate that the dimensionality reduction step does not significantly decrease the accuracy of the solution. Furthermore, we get a substantial reduction in the runtime since the average runtime was at least $20$ seconds for Faces and around $6.5$ seconds for MNIST `2' in the original dimension for all the values of $k$ tested, which is \textbf{1-2} orders of magnitude higher than the runtime when random projections are used. Note that the runtime includes the time taken to perform the random projection. Overall, our experiments demonstrate that the method of performing dimensionality reduction to perform facility location clustering is well-founded.
\subsection{MST: Cost versus Accuracy Analysis} \label{subsec:exp_MST}
We empirically show the benefits of using dimensionality reduction for minimum spanning tree computation.
\paragraph{Experimental Setup}
We project our datasets and compute an MST. We then take the tree found in the lower dimension and compare its cost in the higher dimension against the actual MST. Our MST algorithm is a variant of the Boruvka algorithm from \cite{MST_mlpack} that is suitable for point sets in large dimensions and is implemented in the popular `mlpack' machine learning library \cite{mlpack2018}.
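The evaluation loop can be sketched end-to-end: compute the MST in the projected space, then re-measure that tree's edges in the original dimension and compare with the true MST cost (by optimality of the true MST, the ratio is always $\ge 1$). This sketch uses our own $O(n^2)$ Prim implementation and synthetic Gaussian data, not the mlpack Boruvka variant or the paper's datasets:

```python
import math, random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mst_edges(X):
    # Prim's algorithm in O(n^2); returns the tree as (i, j) index pairs
    n = len(X)
    in_tree, best, parent = [False] * n, [math.inf] * n, [-1] * n
    best[0], edges = 0.0, []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v]:
                duv = dist(X[u], X[v])
                if duv < best[v]:
                    best[v], parent[v] = duv, u
    return edges

def project(X, d, seed=0):
    rng = random.Random(seed)
    m = len(X[0])
    G = [[rng.gauss(0.0, (1.0 / d) ** 0.5) for _ in range(m)] for _ in range(d)]
    return [[sum(g * xi for g, xi in zip(row, x)) for row in G] for x in X]

rng = random.Random(42)
X = [[rng.gauss(0.0, 1.0) for _ in range(50)] for _ in range(40)]
tree_low = mst_edges(project(X, d=15))                      # MST in projected space
pulled_back = sum(dist(X[i], X[j]) for i, j in tree_low)    # its cost upstairs
true_cost = sum(dist(X[i], X[j]) for i, j in mst_edges(X))  # actual MST cost
print(pulled_back / true_cost)  # always >= 1; closer to 1 for larger d
```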
\paragraph{Results}
Our results are plotted in Figures \ref{fig:MST_a}-\ref{fig:MST_b}. In the blue plots of these figures, the ratio of the cost of the tree found in the projected dimension, but evaluated in the original dimension, to the cost of the actual MST is shown. We see that indeed as projection dimension increases, the ratio approaches $1$. However, even for very low values of $d$, such as $10$, the tree found in the projected space serves as a good approximation of the actual MST. Conversely, we see that as $d$ increases, the cost of computing the MST also increases as shown in the orange plots of Figures \ref{fig:MST_a} and \ref{fig:MST_b}. Note that the time taken to perform the projection is also included. The time taken to compute the MST in the original dimension was approximately $3.2$ seconds for the Faces dataset and $7.1$ seconds for the MNIST `2' dataset. Therefore, projection to dimension $d=20$ gives us approximately \textbf{80x} improvement in speed for the Faces dataset and \textbf{30x} improvement in speed for the MNIST `2' dataset while having a low cost distortion.
\subsection{Large versus Small Doubling Dimension } \label{subsec:exp_MST_doubling}
In this section we present two datasets in $\mathbb{R}^n$ where one dataset has doubling dimension $O(1)$ and the other has doubling dimension $\Omega(\log n)$, which is asymptotically the largest doubling dimension of any set of size $n$. We empirically show that the second dataset requires larger projection dimension than the first to guarantee that the MST found in the projected space induces a good solution in the original space. Our two datasets are the following. Let $e_i$ denote the standard basis vectors in $\mathbb{R}^n$. We first draw $n$ standard Gaussians $g_1, \cdots, g_n \in \mathbb{R}$. Our datasets are:
\noindent \textbf{Dataset 1:} $\{ g_1 \cdot e_1, g_1 \cdot e_1 + g_2 \cdot e_2, \ldots, g_1 \cdot e_1 + \cdots + g_n \cdot e_n\}$. \\
\noindent \textbf{Dataset 2:} $\{g_1 \cdot e_1, g_2 \cdot e_2, \ldots, g_n \cdot e_n \}$.
Note that we use the same $g_i$'s for both datasets. The above datasets appear to be similar, but it can be shown that their respective doubling dimensions are $O(1)$ and $\Omega(\log n)$.
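Both point sets take only a few lines to construct; a sketch with a small $n$ (variable names ours):

```python
import random

n = 6
rng = random.Random(0)
g = [rng.gauss(0.0, 1.0) for _ in range(n)]   # the same g_i for both datasets

# Dataset 1: i-th point is the prefix sum g_1 e_1 + ... + g_i e_i
D1 = [[g[j] if j <= i else 0.0 for j in range(n)] for i in range(n)]
# Dataset 2: i-th point is g_i e_i
D2 = [[g[i] if j == i else 0.0 for j in range(n)] for i in range(n)]

print(D1[2])  # [g_1, g_2, g_3, 0, 0, 0]
print(D2[2])  # [0, 0, g_3, 0, 0, 0]
```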
\paragraph{Experimental Setup} We let $n = 1000$ and construct the two datasets. We project our datasets and find the MST for each dataset in the projected space. Then we evaluate the cost of this tree in the larger dimension and compare this cost to the cost of the actual MST for each dataset.
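The pipeline just described can be sketched as follows; this is a minimal illustration on synthetic Gaussian data rather than the datasets above, with sizes chosen arbitrarily, and it assumes `scipy` for the MST and distance routines:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n, orig_dim, proj_dim = 200, 500, 10   # illustrative sizes
X = rng.normal(size=(n, orig_dim))     # stand-in for the real point set

# Gaussian (Johnson-Lindenstrauss style) projection to proj_dim dimensions.
P = rng.normal(size=(orig_dim, proj_dim)) / np.sqrt(proj_dim)
Xp = X @ P

# MST computed in the projected space ...
mst_proj = minimum_spanning_tree(squareform(pdist(Xp))).tocoo()
# ... but its edges are priced in the ORIGINAL space.
proj_tree_cost = sum(np.linalg.norm(X[i] - X[j])
                     for i, j in zip(mst_proj.row, mst_proj.col))

true_cost = minimum_spanning_tree(squareform(pdist(X))).sum()
ratio = proj_tree_cost / true_cost   # the quantity shown in the blue plots
assert ratio >= 1.0 - 1e-9           # any spanning tree costs at least the MST
```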
\paragraph{Results} Figure \ref{fig:large_vs_small} demonstrates that we can find a high quality approximation of the MST by finding the MST in a much smaller dimension for Dataset $1$ compared to Dataset $2$. For example, Dataset $1$ required only $d = 10$ dimensions to approximate the true MST within $10\%$ relative error while Dataset $2$ needed $d = 38$ to get within $10\%$ relative error of the true MST.
\section*{Acknowledgments} This research was supported in part by the NSF TRIPODS program (awards CCF-1740751 and DMS-2022448); NSF award CCF-2006798; MIT-IBM Watson collaboration; Simons Investigator Award; and NSF Graduate Research Fellowship Program.
\section{Introduction}
Our object in this paper is to study a gradient pseudo-Ricci soliton on a real hypersurface of a non-flat complex space form under the assumption that the structure vector field of the real hypersurface is an eigenvector field of the Ricci tensor. The concept of a gradient pseudo-Ricci soliton is closely related to that of a pseudo-Einstein metric on real hypersurfaces.
A Riemannian manifold $M$ is Einstein if its Ricci tensor is a scalar multiple of the identity at each point. Cartan and Thomas \cite{Th} have shown that an Einstein hypersurface of Euclidean space is a hypersphere if its scalar curvature is positive, and Fialkow \cite{Fi} classified Einstein hypersurfaces in spaces of constant curvature, while Einstein complex hypersurfaces of complex space forms were classified by Smyth \cite{Sm}.
On the other hand, it is well known that there are no Einstein real hypersurfaces of a non-flat complex space form. So we need the notion of pseudo-Einstein real hypersurfaces. Let $M$ be a real hypersurface of a Kaehlerian manifold $\tilde{M}$ and $(\phi, \xi, \eta, g)$ be an {\it almost contact metric structure} on $M$ (cf. Blair \cite{Bl}, Yano-Kon \cite{YK}). If the Ricci tensor $S$ of $M$ is of the form $SX = aX + b\eta(X)\xi$ for some constants $a$ and $b$, then $M$ is called a {\it pseudo-Einstein real hypersurface}. As for the definition of pseudo-Einstein real hypersurfaces, see also Kon \cite{MK3}. When $b = 0$, $M$ is Einstein. We remark that if $a$ and $b$ are functions, then they become constants. Any pseudo-Einstein real hypersurface has at most three constant principal curvatures and becomes a Hopf hypersurface. For pseudo-Einstein real hypersurfaces of a non-flat complex space form, see Cecil-Ryan \cite{CR}, Kon \cite{Ko} and Montiel \cite{Mo}. For the case ${\rm dim}\,M=3$, see also Ivey-Ryan \cite{IR} and Kim-Ryan \cite{KR}.
On the other hand, the geometry of Ricci solitons has been the focus of attention of many mathematicians. In particular, it has become more important after Perelman \cite{Pe} applied Ricci solitons to solve the Poincar\'e conjecture. A Ricci soliton is said to be trivial if the potential vector field is zero or Killing, in which case the metric is Einstein. A Ricci soliton is called a gradient Ricci soliton if its potential field is the gradient of some smooth function on $M$. Later, Ricci solitons were also studied in the theory of various submanifolds of Riemannian manifolds (cf. Chen \cite{Ch}). For example, in \cite{CK}, Cho-Kimura proved that a compact Hopf hypersurface of a non-flat complex space form does not admit a Ricci soliton, and that a ruled real hypersurface of a non-flat complex space form does not admit a gradient Ricci soliton.
For real hypersurfaces in a complex space form, the pseudo-Einstein condition plays an important role in place of the Einstein condition. As a corresponding extension, we define the notions of pseudo-Ricci solitons and gradient pseudo-Ricci solitons. First we show the existence of a non-trivial gradient pseudo-Ricci soliton on a $3$-dimensional ruled real hypersurface of a complex hyperbolic space. For important results on ruled real hypersurfaces related to ours, refer to Berndt and D\'iaz-Ramos \cite{BD}, Lohnherr and Reckziegel \cite{LR}.
In the study of real hypersurfaces in a complex space form, a popular condition is that the real hypersurface is Hopf, that is, the structure vector field $\xi$ is a principal vector of the shape operator. However, if we assume that a real hypersurface is Hopf, gradient Ricci solitons and gradient pseudo-Ricci solitons turn out to be trivial. So, in our study, we use the condition $S\xi=\beta\xi$, $\beta$ being a function. We remark that any Hopf hypersurface satisfies this condition, and ruled real hypersurfaces are non-Hopf examples which satisfy $S\xi=\beta\xi$ (see Kon \cite{MK}, \cite{MK2}).
Our main theorem is the following
\begin{theorem} Let $M$ be a real hypersurface of a non-flat complex space form $M^n(c)$. Suppose the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$. If $M$ admits a gradient pseudo-Ricci soliton, then $M$ is a pseudo-Einstein real hypersurface of $M^n(c)$ or a $3$-dimensional ruled real hypersurface of $M^2(c), c<0$.
\end{theorem}
Moreover, in view of the proof of Theorem 1.1, we see that there does not exist a real hypersurface of a non-flat complex space form $M^n(c)$ admitting a gradient Ricci soliton. Thus, we obtain
\begin{theorem}
Let $M$ be a compact real hypersurface of a non-flat complex space form $M^n(c)$ and suppose that the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$. Then $M$ does not admit a Ricci soliton.
\end{theorem}
\section{Real hypersurfaces}
Let $M^n(c)$ denote a non-flat complex space form of complex dimension $n$ (real dimension $2n$) with constant holomorphic sectional curvature $4c$, that is, the complex projective space when $c>0$ and the complex hyperbolic space when $c<0$. We denote by $J$ the almost complex structure of $M^n(c)$. The Hermitian metric of $M^n(c)$ will be denoted by $G$.
Let $M$ be a real $(2n-1)$-dimensional hypersurface immersed in $M^n(c)$. Throughout this paper, we suppose that $M$ is connected. We denote by $g$ the Riemannian metric induced on $M$ from $G$. We take the unit normal vector field $N$ of $M$ in $M^n(c)$. For any vector field $X$ tangent to $M$, we define $\phi$, $\eta$ and $\xi$ by
$$JX=\phi X+\eta(X)N, \hspace{1cm} JN=-\xi,$$
where $\phi X$ is the tangential part of $JX$, $\phi$ is a tensor field of type (1,1), $\eta$ is a 1-form, and $\xi$ is the unit vector field on $M$. We call $\xi$ the {\textit{structure vector field}}. Then they satisfy
$$\phi^2X=-X + \eta(X)\xi,\quad \phi\xi = 0, \quad \eta(\phi X)=0$$
for any vector field $X$ tangent to $M$. Moreover, we have
\begin{eqnarray*}
& &g(\phi X,Y)+g(X,\phi Y) = 0, \quad \eta(X)=g(X,\xi),\\
& &g(\phi X,\phi Y)=g(X,Y)-\eta(X)\eta(Y).
\end{eqnarray*}
Thus $(\phi,\xi,\eta,g)$ defines an almost contact metric structure on $M$.
We denote by $\tilde{\nabla}$ the operator of covariant differentiation in $M^n(c)$, and by $\nabla$ the one in $M$ determined by the induced metric. Then the {\it Gauss and Weingarten formulas} are given respectively by
$$\tilde{\nabla}_XY={\nabla}_XY+g(AX,Y)N, \hspace{1cm} \tilde{\nabla}_XN = -AX,$$
for any vector fields $X$ and $Y$ tangent to $M$. We call $A$ the {\it shape operator} of $M$. For the almost contact metric structure on $M$, we have
$${\nabla}_X\xi=\phi AX, \hspace{1cm} ({\nabla}_X\phi)Y=\eta(Y)AX-g(AX,Y)\xi.$$
We denote by $R$ the Riemannian curvature tensor field of $M$. Then the {\it equation of Gauss} is given by
\begin{eqnarray*}
R(X,Y)Z&=&c\{g(Y,Z)X - g(X,Z)Y + g(\phi Y,Z)\phi X\\
& & - g(\phi X,Z)\phi Y - 2g(\phi X,Y)\phi Z\} \\
& & + g(AY,Z)AX - g(AX,Z)AY,
\end{eqnarray*}
and the {\it equation of Codazzi} by
$$(\nabla_XA)Y-(\nabla_YA)X = c\{\eta(X)\phi Y - \eta(Y)\phi X - 2g(\phi X, Y)\xi\}.$$
From the equation of Gauss, the Ricci tensor $Ric$ of type (0,2) and $S$ of type (1,1) of $M$ are given by
\begin{eqnarray*}
{\it{Ric}}(X,Y)&=&g(SX,Y)\nonumber\\
&=&(2n+1)cg(X,Y)-3c\eta (X)\eta (Y) \\
& &\quad + {\rm tr}Ag(AX,Y) -g(AX,AY),\nonumber
\end{eqnarray*}
where ${\rm tr}A$ is the trace of $A$. The scalar curvature $Sc$ of $M$ is given by $Sc={\rm tr}S$ and
\begin{eqnarray*}
Sc=4(n^2 - 1)c+ ({\rm tr}A)^2 -{\rm tr}A^2.
\end{eqnarray*}
If the shape operator $A$ of $M$ satisfies $A\xi=\alpha \xi$ for some function $\alpha$, then $M$ is called a \textit{Hopf hypersurface}. By the equation of Codazzi, we have the following result (cf. \cite{Ma}).\\
\noindent{\textbf{Proposition A}.} \textit{Let $M$ be a Hopf hypersurface in $M^n(c)$, $n\geq 2$. If $X\perp \xi$ and $AX=\lambda X$, then $\alpha=g(A\xi,\xi)$ is constant and}
$$(2\lambda-\alpha)A\phi X=(\lambda\alpha +2c)\phi X.$$
From the equation of Codazzi, we have the following lemma (\cite{MK3}).
\begin{lem}
Let $M$ be a real hypersurface in a complex space form $M^n(c)$, $n\geq 3$, $c\neq 0$. If there exists an orthonormal frame $\{\xi, e_1,\cdots,e_{2n-2}\}$ on a sufficiently small neighborhood $\mathcal{N}$ of $x\in M$ such that the shape operator $A$ can be represented as
$$A\xi=\alpha\xi+he_1,\ \ Ae_1=a_1 e_1+h\xi,$$
$$ Ae_j=a_j e_j \ \ (j=2,\cdots, 2n-2),$$
then we have the following equations:\\
\begin{eqnarray}
& &(a_j-a_k)g(\nabla_{e_i}e_j,e_k)-(a_i-a_k)g(\nabla_{e_j}e_i,e_k)=0,\\
& &(a_j-a_1)g(\nabla_{e_i}e_j,e_1)-(a_i-a_1)g(\nabla_{e_j}e_i,e_1)\\
& &\quad +h(a_i+a_j)g(\phi e_i,e_j) =0,\nonumber\\
& &\{2c-2a_ia_j+\alpha (a_i+a_j)\}g(\phi e_i,e_j)-hg(\nabla_{e_i}e_j,e_1)\\
& &\quad +hg(\nabla_{e_j}e_i,e_1) =0,\nonumber\\
& &(a_j-a_i)g(\nabla_{e_i}e_j,e_i)-(e_ja_i)=0,\\
& &(a_1-a_i)g(\nabla_{e_i}e_1,e_i)-(e_1a_i)=0,\\
& &(a_1-a_j)g(\nabla_{e_i}e_1,e_j)+(a_j-a_i)g(\nabla_{e_1}e_i,e_j)\\
& &\quad +a_ihg(\phi e_i,e_j) =0,\nonumber\\
& &\{2c-2a_1a_i+\alpha (a_i+a_1)\}g(\phi e_i,e_1)+hg(\nabla_{e_1}e_i,e_1)\\
& &\quad +(e_i h) =0,\nonumber\\
& &h(2a_i+a_1)g(\phi e_i,e_1)+(a_1-a_i)g(\nabla_{e_1}e_i,e_1)+(e_ia_1)=0,\\
& &h g(\nabla_{e_i}e_1,e_i)-(\xi a_i)=0,\\
& &(c+a_i\alpha -a_ia_j)g(\phi e_i,e_j)+h g(\nabla_{e_i}e_1,e_j)\\
& &\quad +(a_j-a_i)g(\nabla_{\xi}e_i,e_j)=0,\nonumber\\
& &(c+a_i\alpha -a_1a_i+h^2)g(\phi e_i,e_1)+(a_1-a_i)g(\nabla_{\xi}e_i,e_1)\\
& &\quad +(e_i h)=0,\nonumber\\
& &h(\alpha -3a_i)g(\phi e_i,e_1)+hg(\nabla_{\xi}e_i,e_1)+(e_i\alpha)=0,\\
& &(e_1 h)-(\xi a_1)=0,\\
& &(e_1 \alpha)-(\xi h)=0,\\
& &(c+a_1\alpha -a_1a_i-h^2)g(\phi e_1,e_i)-(a_1-a_i)g(\nabla_{\xi}e_1,e_i)\\
& &\quad +hg(\nabla_{e_1}e_1,e_i)=0,\nonumber
\end{eqnarray}
for mutually distinct $i,j,k\geq 2$.
\end{lem}
We define the subspace $L_{x}\subset T_{x}(M)$ as the smallest subspace that contains $\xi$ and is invariant under the shape operator $A$. Then $M$ is Hopf if and only if $L_{x}$ is one-dimensional at each point $x$. We use the following lemma (see \cite{MK2}).
\begin{lem} Let $M$ be a real hypersurface of $M^n(c)$. If the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$, then ${\rm dim}L_{x} \leq 2$ at each point $x$ of $M$.\end{lem}
\begin{lem} Let $M$ be a non-Hopf hypersurface of $M^n(c)$. If the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$, then
\begin{eqnarray}
{\tr}A=\alpha + a_1, \hspace{1cm} a_2+\cdots+a_{2n-2}=0,
\end{eqnarray}
\begin{eqnarray}
Sc=4(n^2-1)c+2(\alpha a_1 - h^2)-\sum_{j=2}^{2n-2}a_j^2.
\end{eqnarray}
\end{lem}
\begin{proof}
From the assumptions, using Lemma 2.2, we can take an orthonormal frame, given in Lemma 2.1, $\{\xi,e_1,\cdots,e_{2n-2}\}$, locally, such that $A$ is of the form
$$A\xi=\alpha\xi+he_1,\ \ Ae_1=a_1 e_1+h\xi,$$
$$ Ae_j=a_j e_j, \ \ j=2,\cdots, 2n-2.$$
Then, we obtain
\begin{eqnarray*}
S\xi&=&(2n-2)c\xi +(\tr A)(he_1 +\alpha\xi)-A(he_1 +\alpha\xi)\\
&=&(\tr A-\alpha-a_1)he_1+\{(2n-2)c+(\tr A)\alpha -h^2-\alpha^2\}\xi\\
&=&\beta \xi.
\end{eqnarray*}
So we see that ${\tr}A=\alpha + a_1$ and $a_2+\cdots+a_{2n-2}=0$. Moreover, the Ricci tensor $S$ can be represented as
$$S\xi=\beta\xi,\ \ Se_j=\lambda_j e_j, \ \ j=1,\cdots,2n-2,$$
where $\beta$ and $\lambda_i$ are given by
\begin{eqnarray}
& &\beta=(2n-2)c+(\alpha a_1-h^2),\nonumber\\
& &\lambda_1=(2n+1)c+(\alpha a_1-h^2),\\
& &\lambda_j=(2n+1)c+ \tr A\cdot a_j-a_j^2, \quad j=2,\cdots,2n-2\nonumber.
\end{eqnarray}
Thus the scalar curvature $Sc$ is given by (17).
\end{proof}
Here, we state some basic properties of pseudo-Einstein real hypersurfaces.\\
{\bf Remark.} Let $M$ be a pseudo-Einstein real hypersurface with ${\rm dim}M>3$. Then the Ricci tensor $S$ of $M$ satisfies $SX=aX+b\eta(X)\xi$. Hence, $S\xi=\beta\xi$. Suppose $h\ne 0$, that is, $M$ is not Hopf. Then, by (18), we have $b=-3c$. Therefore, for any vector $X$, we obtain
$$A^2 X-({\rm tr}A)AX+(a-(2n+1)c)X=0.$$
Since $M$ is not totally umbilical, we see that $M$ has two principal curvatures $\lambda$ and $\mu$. Then $\lambda+\mu={\rm tr}A$ and $\lambda\mu=a-(2n+1)c$. On the other hand, ${\rm tr}A=p\lambda+q\mu$ with $p+q=2n-1$. Thus we have $(p-1)\lambda+(q-1)\mu=0$. From Lemma 2.3 of \cite{Ko}, we see ${\rm rank}A>1$. Hence $p, q>1$. Consequently, $\lambda$ and $\mu$ are constant.
Here, for the tangent space at $x$, we put
$$T_{x}(M)=L\oplus T_{\lambda}\oplus T_{\mu},$$
where $L$ is the subspace spanned by $\xi$ and $e_1$, $T_{\lambda}=\{X:AX=\lambda X, X\perp \xi, e_1\}, T_{\mu}=\{X:AX=\mu X, X\perp \xi, e_1\}$. By (16) and $h\neq 0$, we have ${\rm dim}T_{\lambda}, {\rm dim}T_{\mu}\geq 2$.
Let $X$ and $Y$ be unit vectors of $L$ such that $AX=\lambda X$ and $AY=\mu Y$. We note $\eta(X)\ne0$ and $\eta(Y)\ne0$. For any $Z\in T_{\lambda}$, by the equation of Codazzi,
$$0=g((\nabla_{X}A)Z,X)-g((\nabla_{Z}A)X,X) =-3c\eta(X)g(Z,\phi X).$$
This implies $g(Z,\phi X)=0$, and hence $\phi X\in T_{\mu}$. Similarly, we have $\phi Y\in T_{\lambda}$. On the other hand, $\phi X=\phi(\eta(X)\xi+g(X,e_1)e_1)=g(X,e_1)\phi e_1$. Since $g(X,e_1)\ne 0$, we have $\phi e_1\in T_{\mu}$. Similarly, $\phi Y=g(Y,e_1)\phi e_1$ and hence $\phi e_1\in T_{\lambda}$. This is a contradiction.
If ${\rm dim}M=3$, by (16), $a_2=0$. From this and (18), we have $\alpha a_1-h^2=0$ and hence $(e_2 \alpha)a_1+\alpha(e_2 a_1)-2h(e_2 h)=0$. Using the equation of Codazzi, we have $c=0$ (see (7), (8), (11) and (12)). This is a contradiction. Therefore, $h=0$ and $M$ is a Hopf hypersurface.\\
We introduce an important example of a non-Hopf hypersurface. A real hypersurface of a complex space form $M^n(c) (c\ne 0, n\geq 2)$ is called a {\it ruled real hypersurface} if the holomorphic distribution $H(M)=\cup_{x\in M}\{X\in T_{x}(M) : X\perp \xi\}$, a subbundle of the tangent bundle $T(M)$, is integrable and each maximal integral manifold is locally congruent to a totally geodesic complex hypersurface $M^{n-1}(c)$ of $M^n(c)$.
It is known that every ruled real hypersurface is constructed in the following manner.
Take a regular curve $\gamma$ in $M^n(c)$, defined on some interval, with tangent vector field $X$. At each point of $\gamma$ there is a unique totally geodesic complex hypersurface which is locally congruent to $M^{n-1}(c)$ of $M^n(c)$ cutting $\gamma$ so as to be orthogonal to $X$ and $JX$. The union of these hypersurfaces is called a {\textit{ruled real hypersurface}} (cf. \cite{Ki}, \cite{LR}, \cite{MAK}).
A ruled real hypersurface $M$ is characterized in terms of its shape operator by
$$A\xi=\alpha\xi+he_1, \ \ Ae_1=h\xi, \ \ AX=0 $$
for any $X$ orthogonal to $\xi$ and $e_1$, where $e_1$ is a unit vector field orthogonal to $\xi$. The Ricci tensor $S$ of a ruled real hypersurface $M$ satisfies $S\xi=\beta\xi$, where $\beta=(2n-2)c-h^2$.\\
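As a quick check of this last claim, substitute $A\xi=\alpha\xi+he_1$, $Ae_1=h\xi$ and $AX=0$ (so that ${\rm tr}A=\alpha$) into the expression for the Ricci tensor obtained from the equation of Gauss:
\begin{eqnarray*}
S\xi&=&(2n+1)c\,\xi-3c\,\xi+({\rm tr}A)A\xi-A^2\xi\\
&=&(2n-2)c\,\xi+\alpha(\alpha\xi+he_1)-\{(\alpha^2+h^2)\xi+\alpha h\, e_1\}\\
&=&\{(2n-2)c-h^2\}\xi,
\end{eqnarray*}
where we used $A^2\xi=A(\alpha\xi+he_1)=(\alpha^2+h^2)\xi+\alpha h\,e_1$.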
\section{Gradient pseudo-Ricci solitons}
We now state the definitions of a pseudo-Ricci soliton and a gradient pseudo-Ricci soliton.
A vector field $V$ on a Riemannian manifold $M$ is said to define a {\it Ricci soliton} if it satisfies
$$\frac{1}{2}L_{V}g + Ric = \lambda g,$$
where $L_{V}g$ is the Lie-derivative of the metric tensor $g$ with respect to $V$, $Ric$ is the Ricci tensor of type (0,2) and $\lambda$ is a constant.
We call the vector field $V$ the {\it potential vector field} of the Ricci soliton. A Ricci soliton $(M, g, V, \lambda)$ is called {\it shrinking}, {\it steady} or {\it expanding} according as $\lambda>0$, $\lambda=0$ or $\lambda<0$, respectively. A Ricci soliton is said to be {\it trivial} if the potential vector field $V$ is zero or Killing, in which case the metric is Einstein.
A Ricci soliton $(M, g, V, \lambda)$ is called a {\it gradient Ricci soliton} if its potential field is the gradient of some smooth function $-f$ on $M$, which is called the {\it potential function}:
$${\rm Hess}f = Ric - \lambda g.$$
A gradient Ricci soliton $(M, g, f, \lambda)$ is said to be {\it trivial} if its potential function $-f$ is a constant. Trivial gradient Ricci solitons are trivial Ricci solitons since $V= -Df$. It was proved by Perelman \cite{Pe} that if $(M, g, V, \lambda)$ is a compact Ricci soliton, then the potential field is the gradient of some smooth function $f$ up to the addition of a Killing field. Thus compact Ricci solitons are gradient Ricci solitons.\\
In this article, we study real hypersurfaces of a non-flat complex space form. But, it is well known that there are no Einstein real hypersurfaces of a non-flat complex space form. Therefore we define
\begin{definition} A vector field $V$ on a real hypersurface $M$ of a non-flat complex space form with an almost contact metric structure $(\phi,\xi,\eta,g)$ is said to define a {\it pseudo-Ricci soliton} if it satisfies
$$\frac{1}{2}L_{V}g + Ric = \lambda g + \mu\eta\otimes\eta,$$
where $\lambda$ and $\mu$ are constants.
\end{definition}
A trivial pseudo-Ricci soliton is one for which $V$ is zero or Killing, in which case the metric is pseudo-Einstein.\\
\begin{definition} A pseudo-Ricci soliton $(M, g, V, \eta, \lambda, \mu)$ is said to be {\it gradient} if its potential field is the gradient of a potential function $-f$ on $M$:
$${\rm Hess}f = Ric - \lambda g - \mu\eta\otimes\eta.$$
\end{definition}
A gradient pseudo-Ricci soliton $(M, g, f, \lambda,\mu)$ is called {\it trivial} if its potential function $-f$ is a constant. Then $M$ is a pseudo-Einstein real hypersurface.
For a Riemannian manifold $M$ we generally have
\begin{equation}\label{xx}
\begin{split}
&{\rm Hess}f(X,Y)={\rm Hess}f(Y,X),\\
&{\rm Hess}f(X,Y)=g(\nabla_{X}Df,Y)\\
&=\frac{1}{2}(g(\nabla_{X}Df,Y)+g(\nabla_{Y}Df,X))=\frac{1}{2}(L_{Df}g)(X,Y).
\end{split}
\end{equation}
Let $\{e_j\}$ be an orthonormal basis of $M$. Then the scalar curvature $Sc$ and the Ricci tensor $S$ satisfy
\begin{equation}
\nabla_X Sc=2\sum_j g((\nabla_{e_j}S)e_j,X).
\end{equation}
We now prepare the fundamental formulas for a gradient pseudo-Ricci soliton on a real hypersurface.
\begin{lem} Let $M$ be a real hypersurface of a complex space form $M^n(c), c\ne 0$. If $M$ admits a gradient pseudo-Ricci soliton, then
\begin{eqnarray}
g(\nabla_{X}Df,Y)=g(SX,Y)-\lambda g(X,Y)-\mu\eta(X)\eta(Y),
\end{eqnarray}
\begin{eqnarray}
g(R(X,Y)Df,Z)&=&g((\nabla_{X}S)Y,Z)-g((\nabla_{Y}S)X,Z)\\
& &-\mu g(\phi AX,Y)\eta(Z)-\mu\eta(Y)g(\phi AX,Z) \nonumber\\
& &+\mu g(\phi AY,X)\eta(Z)+\mu \eta(X)g(\phi AY,Z), \nonumber
\end{eqnarray}
\begin{eqnarray}
g(SX,Df)=-\frac{1}{2}\nabla_{X} Sc-\mu g(\phi A\xi,X).
\end{eqnarray}
\end{lem}
\begin{proof}
For any vector fields $X$ and $Y$, by the assumption and (19), we see
\begin{eqnarray*}
{\rm Hess}f(X,Y)&=&g(\nabla_{X}Df,Y)\\
&=&g(SX,Y)-\lambda g(X,Y)-\mu\eta(X)\eta(Y).
\end{eqnarray*}
This shows the first equation. Then, using the standard facts about covariant differentiation,
\begin{eqnarray*}
g(\nabla_X\nabla_Y Df,Z)&=&g(\nabla_X S)Y,Z)+g(S\nabla_X Y,Z)\\
& &\quad -\lambda g(\nabla_X Y,Z) - \mu g(\phi AX,Y)\eta(Z)\\
& &\quad -\mu\eta(\nabla_X Y)\eta(Z)-\mu\eta(Y) g(\phi AX,Z).
\end{eqnarray*}
We also have
\begin{eqnarray*}
g(\nabla_{[X,Y]}Df,Z)=g(S[X,Y],Z)-\lambda g([X,Y],Z)-\mu\eta([X,Y])\eta(Z).
\end{eqnarray*}
Using these equations and
\begin{eqnarray*}
& &g(R(X,Y)Df,Z)\\
& &=g(\nabla_X\nabla_Y Df,Z)-g(\nabla_Y\nabla_X Df,Z)-g(\nabla_{[X,Y]}Df,Z),
\end{eqnarray*}
we have equation (22).
Let $\{e_j\}$ be an orthonormal basis of $M$. By (22), we obtain
\begin{eqnarray*}
& &\sum^{2n-1}_{j=1} g(R(X,e_j)Df,e_j) \\
& &=\sum_j g((\nabla_{X}S)e_j,e_j)-g((\nabla_{e_j}S)X,e_j)+\mu g(\phi A\xi,X).
\end{eqnarray*}
From the equation above and (20), we have (23).
\end{proof}
Using Lemma 3.3, we have the following lemma.
\begin{lem}
Let $M$ be a non-Hopf hypersurface in a complex space form $M^n(c)$, $c\neq 0$, $n\geq 3$. If the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$, and $M$ admits a gradient pseudo-Ricci soliton, then we have
\begin{eqnarray}
& &-(e_1\beta)=c(e_1 f)+(a_1\alpha-h^2)(e_1f),\\
& &(\xi \lambda_1)=-c(\xi f) +(h^2 -a_1\alpha)(\xi f)\\
& &(\lambda_1-\lambda_j)g(\nabla_\xi e_1,e_j) + a_1(\lambda_j-\beta+\mu)g(\phi e_1,e_j)=0,\\
& &-(e_j\beta) +h(\beta-\lambda_j-\mu)g(\phi e_1,e_j)=c(e_j f)+a_j\alpha (e_jf),\\
& &a_j h(e_j f)=(\lambda_j-\lambda_1)g(\nabla_\xi e_j,e_1) + a_j(\lambda_1-\beta+\mu)g(\phi e_j,e_1),\\
& &(\xi \lambda_j)=-(c+a_j \alpha)(\xi f)- a_j h(e_1 f),\\
& &h a_j (e_j f)=\{a_1(\lambda_j -\beta)+a_j(\lambda_1-\beta)+ \mu(a_1+a_j)\}g(\phi e_j,e_1),\\
& &(\lambda_j - \lambda_1)g(\nabla_{e_1} e_j, e_1) - (e_j \lambda_1)\\
& &\quad =c\{(e_j f) -3(\phi e_1 f)g(\phi e_j,e_1)\} + a_1a_j(e_j f),\nonumber\\
& &(e_1\lambda_j) + (\lambda_j - \lambda_1)g(\nabla_{e_j} e_1,e_j)\\
& &\quad = c\{-(e_1 f)+3(\phi e_j f)g(\phi e_1,e_j)\}-a_1a_j(e_1f)-ha_j(\xi f)\nonumber\\
& &\{(a_i+ a_j)(\mu-\beta) + a_j\lambda_i + a_i\lambda_j\}g(\phi e_i,e_j)=0,\\
& &(\lambda_i-\lambda_1)g(\nabla_{e_j} e_i,e_1) + (\lambda_1 - \lambda_j)g(\nabla_{e_i} e_j,e_1)\\
& &\quad =c\{(\phi e_i f)g(\phi e_j, e_1)- (\phi e_j f)g(\phi e_i,e_1)\nonumber\\
& &\qquad -2g(\phi e_i,e_j)(\phi e_1 f)\},\nonumber\\
& &(\lambda_i - \lambda_j)g(\nabla_{e_j} e_i, e_j)-(e_i\lambda_j)\\
& &\quad =c\{(e_i f)-3(\phi e_j f)g(\phi e_i,e_j)\} + a_ia_j (e_i f),\nonumber\\
& &(\lambda_i-\lambda_k)g(\nabla_{e_j} e_i, e_k) -(\lambda_j - \lambda_k)g(\nabla_{e_i} e_j, e_k)\\
& &\quad =c\{(\phi e_i f)g(\phi e_j,e_k) - (\phi e_j f)g(\phi e_i, e_k)\nonumber\\
& &\qquad -2g(\phi e_j. e_i) g(\phi Df, e_k)\}.\nonumber
\end{eqnarray}
\end{lem}
\begin{proof}
From (22), we have
\begin{eqnarray*}
& &g(R(\xi, e_1)Df, \xi)\\
& &=g((\nabla_\xi S)e_1, \xi) - g((\nabla_{e_1} S)\xi, \xi)\\
& &=g(\nabla_\xi \lambda_1 e_1, \xi )- g(S\nabla_\xi e_1,\xi) - g(\nabla_{e_1}\beta\xi,\xi) + g(S\nabla_{e_1} \xi,\xi)\\
& &=-(e_1\beta).
\end{eqnarray*}
On the other hand, by the equation of Gauss,
\begin{eqnarray*}
g(R(\xi, e_1)Df, \xi)=c(e_1 f) + (a_1\alpha-h^2)(e_1 f).
\end{eqnarray*}
Thus we obtain (24). Similarly, substituting $e_1, e_j (j\geq 2), \xi$ into $X, Y, Z$ in (22), we have the other equations.
\end{proof}
Next we prepare the following
\begin{lem}
Let $M$ be a non-Hopf hypersurface in a complex space form $M^n(c)$, $c\neq 0$, $n\geq 3$. If the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$, and $M$ admits a gradient pseudo-Ricci soliton, then we have
\begin{eqnarray}
& &\xi(\xi f)-h(\phi e_1 f)=\beta-\lambda-\mu,\\
& &e_1(\xi f)-a_1(\phi e_1 f)=0,\\
& &e_j(\xi f)-a_j(\phi e_j f)=0.
\end{eqnarray}
\end{lem}
\begin{proof}
By (21), we have
$$g(\nabla_\xi Df,\xi)=\beta-\lambda-\mu.$$
On the other hand, calculating the left-hand side implies
$$g(\nabla_\xi Df, \xi)=\nabla_\xi (\xi f) - g(Df, \phi A\xi)=\xi(\xi f)-h(\phi e_1 f).$$
So we have (37). Similarly, substituting $e_1, e_j (j\geq 2), \xi$ into $X, Y$ in (21), we have the other equations.
\end{proof}
At the end of this section, we have the following lemma directly from (23).
\begin{lem}
Let $M$ be a non-Hopf hypersurface in a complex space form $M^n(c)$, $c\neq 0$, $n\geq 3$. If the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$, and $M$ admits a gradient pseudo-Ricci soliton, then we have
\begin{eqnarray}
& &\beta(\xi f)=-\frac{1}{2}\xi Sc,\\
& &\lambda_1(e_1 f)=-\frac{1}{2}e_1 Sc,\\
& &\lambda_j(e_j f)=-\frac{1}{2}e_j Sc -h\mu g(\phi e_1,e_j).
\end{eqnarray}
\end{lem}
\section{Non-trivial example}
To make the distinctive feature of this paper easier to see, we first show the existence of an important example of a 3-dimensional non-Hopf hypersurface.
Let $M$ be a 3-dimensional real hypersurface of a complex space form $M^2(c), c\ne 0$ with $S\xi = \beta\xi$. Then $M$ is a Hopf hypersurface or a non-Hopf hypersurface such that
$$A\xi = \alpha \xi +he_1, \hspace{0.5cm}Ae_1=h\xi, \hspace{0.5cm}Ae_2=0\ \ (\phi e_1=e_2),$$
where $\{\xi, e_1, e_2=\phi e_1\}$ is an orthonormal basis.
In the following, we suppose that $M$ is a non-Hopf hypersurface, that is, $h\ne 0$. By (18), we have
\begin{eqnarray*}
& &\beta=g(S\xi,\xi)=2c+\alpha a_1-h^2,\\
& &\lambda_1=g(Se_1,e_1)=5c+\alpha a_1-h^2, \ \ \lambda_2=g(Se_2,e_2)=5c
\end{eqnarray*}
and the scalar curvature $Sc$ is given by
$$Sc=12c+2(\alpha a_1-h^2).$$
We remark that the equations in Lemma 2.1 hold except for those in which three distinct indices appear.
\begin{theorem} Let $M$ be a 3-dimensional non-Hopf hypersurface of a non-flat complex space form $M^2(c)$ with $S\xi = \beta\xi$. If $M$ admits a gradient pseudo-Ricci soliton, then $M$ is a ruled real hypersurface with $h^2=-c$, and
$$Df=\frac{1}{2}he_2,\hspace{0.5cm} \lambda=5c,\hspace{0.5cm} \mu=-\frac{5}{2}c.$$
\end{theorem}
\begin{proof} If $M$ admits a gradient pseudo-Ricci soliton, then
$$g(R(\xi,e_2)Df,e_2)=g((\nabla_{\xi}S)e_2,e_2)-g((\nabla_{e_2}S)\xi,e_2)=0.$$
From the equation of Gauss, using $Ae_2=0$, we see $\xi f=0$. Similarly,
$$g(R(e_1,e_2)Df,e_2)=g((\nabla_{e_1}S)e_2,e_2)-g((\nabla_{e_2}S)e_1,e_2)=0$$
reduces to
$$-4cg(e_1,Df)=(\lambda_2-\lambda_1)g(\nabla_{e_2}e_1,e_2).$$
By (9) and $a_2=0$, $g(\nabla_{e_2}e_1,e_2)=0$, and hence $e_1 f=0$. If $e_2 f=0$, we conclude $Df=0$ and $M$ is a pseudo-Einstein real hypersurface. So we have $h=0$, which is a contradiction. Thus we can set $Df=me_2$, $m$ being a non-zero function. Then, from (21) we obtain
$$g(\nabla_{e_2}Df,e_2)=e_2 m = \lambda_2-\lambda=5c-\lambda.$$
We also have $g(\nabla_{e_1}Df,\xi)=-a_1 g(Df,e_2)=0$. Since $m=e_2f\neq 0$, we have $a_1=0$ and $M$ is a ruled real hypersurface.
By $g(\nabla_{\xi}Df,\xi)=-hg(Df,e_2)=-mh$ and (21), we obtain
$$h^2-mh=2c-\lambda-\mu.$$
Hence $h^2-mh$ is a constant. So we have
$$2h(e_2 h)-(e_2m)h-m(e_2 h)=0.$$
On the other hand, by (11), $e_2 h = c+h^2$. Using $e_2 m=5c-\lambda$,
$$2h^4-(mh)h^2+(-3c+\lambda)h^2-cmh=0.$$
Substituting $mh=-2c+h^2+\lambda+\mu$,
$$h^4-(2c+\mu)h^2+c(2c-\lambda-\mu)=0.$$
Consequently, we see $h$ is a constant, and hence $m$ is also a constant. Then we have $\lambda=5c$ and $c+h^2=0$, $c<0$. Moreover, by (21), we have $g(\nabla_{e_1}Df,e_1)=\lambda_1-\lambda=-h^2=c$. From $Df=me_2$, we see $mg(\nabla_{e_1}e_2,e_1)=c$. By (15), we also have $hg(\nabla_{e_1}e_2,e_1)=2c$. From these equations we obtain $m=\frac{1}{2}h$. From (23), we also have
$$g(Se_2,Df)=-\frac{1}{2}e_2 Sc-\mu h= -\mu h,$$
because $Sc$ is a constant. From these equations, we have $\mu=-\frac{5}{2}c$. This completes our assertion.\\
\end{proof}
In Lemma 1 and Lemma 2 of \cite{MAK}, Maeda, Adachi and Kim studied the shape operator of ruled real hypersurfaces. We consider the case $\nu^2=h^2=|c|$, $c<0$, in their Lemma 2. We show that this example satisfies the condition $S\xi=\beta\xi$ for some function $\beta$ and admits a gradient pseudo-Ricci soliton.
\begin{lem}
Let $M$ be a ruled real hypersurface in $M^2(c)$, $c<0$. If $h^2=-c$, then the Ricci tensor satisfies $S\xi=\beta \xi$ for some function $\beta$ and $M$ admits a gradient pseudo-Ricci soliton for
$$\lambda=5c,\ \mu=-\frac{5}{2}c,\ f(\rho(s))=\frac{h}{2}s,$$
where $\rho$ is a geodesic with $\dot{\rho}(0)=e_2(\rho(0))$, which is the integral curve of $e_2$ through the point $\rho(0)$.
\end{lem}
\begin{proof}
Using $A\xi=he_1+\alpha\xi$, $Ae_1=h\xi$ and $Ae_2=0$,
$$S\xi=(2c-h^2)\xi=3c\xi.$$
Next we have
\begin{eqnarray*}
g(Se_1,e_1)=5c-h^2=6c.
\end{eqnarray*}
On the other hand, we obtain
$${\rm{Hess}}f(e_1,e_1)=g(\nabla_{e_1} Df, e_1)=\frac{h}{2}g(\nabla_{e_1} e_2, e_1).$$
From (15), we have
$$h g(\nabla_{e_1} e_2, e_1)=c-h^2=-2h^2.$$
Thus we obtain ${\rm{Hess}}f(e_1,e_1)=-h^2=c$. Hence we have
$${\rm{Hess}}f(e_1,e_1)=g(Se_1,e_1)-5c g(e_1,e_1)+\frac{5c}{2}\eta(e_1)\eta(e_1).$$
Similarly we obtain
$${\rm{Hess}}f(X,Y)=g(SX,Y)-5c g(X,Y) +\frac{5c}{2} \eta (X)\eta(Y)$$
when $X$ and $Y$ are one of $e_1,e_2,\xi$, respectively.
\end{proof}
From these results, we have the following
\begin{theorem}
Let $M$ be a real hypersurface of $M^2(c)$. Then $M$ admits a gradient pseudo-Ricci soliton and its Ricci tensor $S$ satisfies $S\xi=\beta\xi$ for some function $\beta$ if and only if $M$ is one of the following:
\begin{itemize}
\item[(1)] a pseudo-Einstein real hypersurface,
\item[(2)] a ruled real hypersurface with a unit vector field $e_1$ orthogonal to $\xi$ whose shape operator satisfies
$$A\xi=\alpha\xi \pm \sqrt{|c|}e_1,\ Ae_1=\pm \sqrt{|c|}\xi,\ A\phi e_1=0$$
for some function $\alpha$.
\end{itemize}
\end{theorem}
\section{Non-Hopf hypersurfaces}
If the Ricci tensor $S$ of a real hypersurface $M$ of a non-flat complex space form satisfies $S\xi=\beta\xi$, by Lemma 2.2, $M$ is a Hopf hypersurface or a non-Hopf hypersurface which satisfies (16), (17) and (18).
In this section, under the assumptions $S\xi=\beta\xi$ and $n\geq 3$, we study gradient pseudo-Ricci solitons on a non-Hopf hypersurface $M$. The purpose of this section is to prove the following theorem.
\begin{theorem} Let $M$ be a real hypersurface of a non-flat complex space form $M^n(c)$, $n\geq 3$. Suppose the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$. If $M$ admits a gradient pseudo-Ricci soliton, then $M$ is a Hopf hypersurface.
\end{theorem}
The proof of this theorem will follow from a series of lemmas. In the following, we suppose that $M$ is a real hypersurface of $M^n(c), c\ne 0, n\geq 3$ with $S\xi=\beta \xi$, and $M$ admits a gradient pseudo-Ricci soliton
$${\rm Hess}f = Ric - \lambda g - \mu\eta\otimes\eta,$$
where $-f$ denotes the potential function on $M$.
We study the case that $M$ is non-Hopf, and work in an open set where $h\neq 0$. Here, in view of Lemma 2.2, we take an orthonormal basis
$$\{\xi,e_1,\cdots,e_{2n-2}\}$$
locally, such that
$$A\xi=\alpha\xi+he_1,\ \ Ae_1=a_1 e_1+h\xi,\ \ Ae_j=a_j e_j\ \ (j=2,\cdots,2n-2).$$
\begin{lem} $\xi f=0$.
\end{lem}
\begin{proof}
By (5) and (9), we have
$$(a_1-a_i)(\xi a_i)=h(e_1 a_i).$$
Thus we obtain
$$\frac{1}{2}\xi a_i^2=a_1(\xi a_i)-h(e_1 a_i).$$
Using (16), we have
$$\xi (\sum_{j=2}^{2n-2} a_j^2)=0.$$
On the other hand, (16), (18) and (29) imply
$$\xi \sum_{j=2}^{2n-2}a_j^2=(2n-3)c(\xi f)=0.$$
So we obtain $\xi f=0$.\\
\end{proof}
\begin{lem} If $a_1\neq 0$, then $a_1\alpha-h^2$ is constant.
\end{lem}
\begin{proof}
Using Lemma 5.2 and (37), we have
$$-h(\phi e_1 f)=(2n-2)c+a_1\alpha-h^2-\lambda - \mu.$$
By (38), we get
\begin{equation}
a_1(\phi e_1 f)=0.
\end{equation}
Since $a_1\neq 0$, we have $\phi e_1 f=0$. Thus we obtain
$$(2n-2)c + a_1\alpha - h^2 -\lambda-\mu=0.$$
Since $\lambda$, $\mu$ and $c$ are constant, $a_1\alpha- h^2$ is also constant.
\end{proof}
\begin{lem} $a_1=0$.
\end{lem}
\begin{proof}
We suppose $a_1\neq 0$. From Lemma 5.2 and (38), we obtain $\phi e_1 f=0$. By Lemma 5.2 and (39), we have
\begin{equation}
a_j(\phi e_j f)=0.
\end{equation}
If $a_j\neq 0$ for all $j\geq 2$, then $\phi e_j f=0$. By Lemma 5.2, we have $Df=0$. In this case $M$ is pseudo-Einstein. This contradicts the assumption that $M$ is not Hopf. So we see that there exists some $j$ such that $a_j=0$.
By Lemma 5.3 and (24), we obtain
$$(c+a_1\alpha-h^2)(e_1 f)=0.$$
We suppose $a_j=0$. Then, from (32) and (35), we have
\begin{eqnarray}
& &(\lambda_1-\lambda_j)g(\nabla_{e_j} e_1,e_j) =c\{(e_1 f)-3(\phi e_j f)g(\phi e_1,e_j)\},\nonumber\\
& &(\lambda_i - \lambda_j) g(\nabla_{e_j} e_i, e_j) =c\{(e_i f)-3(\phi e_j f)g(\phi e_i, e_j)\}.
\end{eqnarray}
By (9), we see that $g(\nabla_{e_j} e_1,e_j)=0$. So if $a_j=0$, then we have
$$c(e_1 f)=3c (\phi e_j f)g(\phi e_1,e_j).$$
We denote by $p$ the number of indices $j$ satisfying $a_j=0$. We remark that if $a_j\neq 0$, then $\phi e_j f=0$ from (44). Then we obtain
\begin{eqnarray*}
pc(e_1 f)&=&\sum_{a_j=0}3c (\phi e_j f)g(\phi e_1,e_j)\\
&=& \sum_{j=1}^{2n-2} 3c (\phi e_j f)g(\phi e_1,e_j)\\
&=& -3c(e_1 f).
\end{eqnarray*}
From this equation, $(p+3)c(e_1 f)=0$, and since $c\neq 0$, we obtain $(e_1 f)=0$.
Next, by (4), when $a_j=0$ and $i\neq j$, we have $a_ig(\nabla_{e_j} e_i, e_j)=0$. Since $\lambda_i-\lambda_j=(a_1+\alpha)a_i-a_i^2$, we see that
$$(\lambda_i- \lambda_j)g(\nabla_{e_j} e_i, e_j)=0.$$
From (45), we obtain
$$(e_i f)- 3(\phi e_j f)g(\phi e_i,e_j)=0.$$
Thus we have
\begin{eqnarray*}
p(e_i f)&=&\sum_{a_j=0} 3(\phi e_j f)g(\phi e_i,e_j)\\
&=& \sum_{j=1}^{2n-2} 3(\phi e_j f)g(\phi e_i,e_j)\\
&=& -3(e_i f).
\end{eqnarray*}
Here we used $\phi e_1 f=0$. From this equation, $(p+3)(e_i f)=0$, and hence $e_i f=0$ for $i\neq j$.
If $p\geq 2$, there exist $j\neq k$ such that $a_j=0$ and $a_k=0$. Then we have $e_i f=0$ whenever $i\neq j$ or $i\neq k$; since every $i$ satisfies at least one of these, we see that $e_i f=0$ for any $i\geq 2$, and hence $Df=0$. In this case, $M$ is pseudo-Einstein and Hopf, which is a contradiction.
Next we consider the case that $p=1$. If $a_j=0$, then we have $Df=m e_j$ for some function $m$. Since $a_1\neq 0$ and $a_i\neq 0$ for any $i\neq j$, (43) and (44) imply that $\phi e_1 f=0$ and $\phi e_i f=0\ (i\neq j)$. Moreover, from Lemma 5.1, we have $\xi f=0$. Since $e_j$ lies in the span of $\{\phi e_i : i\neq j\}$, we see that $e_j f=0$. Hence we have $Df=0$, which is a contradiction.
\end{proof}
From the proof of this lemma, we also have
\begin{lem} If there exists $j\geq 2$ such that $a_j=0$, then we have $e_1 f=0$.
\end{lem}
Next we prove the following
\begin{lem} There exists $j\geq 2$ such that $a_j=0$.
\end{lem}
\begin{proof}
We suppose $a_j\neq 0$ for any $j\geq 2$. By Lemma 5.2 and (39), we have $\phi e_j f=0$ for any $j\geq 2$ and $\xi f=0$. So we can represent $Df=m\phi e_1$ for some function $m$.
By Lemma 5.4, we have $a_1=0$. So (37) implies that
\begin{equation}
-h(\phi e_1 f)=\{(2n-2)c-h^2\}-\lambda -\mu.
\end{equation}
Since $Df=m\phi e_1$, we obtain
\begin{eqnarray}
g(\nabla_{e_1} Df, e_1)&=& g(\nabla_{e_1} m\phi e_1,e_1)\nonumber\\
&=& -mg(\phi e_1,\nabla_{e_1} e_1)\nonumber\\
&=& -\sum_{k\geq 2} m g(\phi e_1,e_k)g(e_k,\nabla_{e_1} e_1).\nonumber
\end{eqnarray}
Since $a_1=0$ and $a_j\neq 0$ for $j\geq 2$, (8) implies
$$g(\nabla_{e_1} e_1, e_j)=-2h g(\phi e_j, e_1).$$
From these equations, we obtain
\begin{eqnarray*}
g(\nabla_{e_1} Df,e_1)&=&\sum_{k\geq 2} 2mh g(\phi e_1,e_k)g(\phi e_k,e_1)\\
&=&-2mh.
\end{eqnarray*}
On the other hand, by (21), we have
$$g(\nabla_{e_1} Df, e_1) = \lambda_1-\lambda.$$
From these equations and (18), we obtain
\begin{equation}
-2mh=(2n+1)c - h^2 - \lambda.
\end{equation}
By (46) and (47), we have $mh=-3c-\mu$, and hence $mh$ is constant. Moreover, again using (46) and (47), we have
$$h^2=(2n-5)c-\lambda-2\mu.$$
So $h$ is constant, from which we see that $m$ is also constant.
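In detail, the elimination of $m$ from (46) and (47) runs as follows: since $Df=m\phi e_1$ gives $\phi e_1 f=m$, subtracting (46) from (47) yields
$$-mh=\{(2n+1)c-h^2-\lambda\}-\{(2n-2)c-h^2-\lambda-\mu\}=3c+\mu,$$
so that $mh=-3c-\mu$, and substituting this back into (47),
$$h^2=(2n+1)c-\lambda+2mh=(2n-5)c-\lambda-2\mu.$$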
From (7) and $a_1=0$, we obtain
\begin{equation}
(2c+a_j\alpha +2h^2)g(\phi e_j, e_1)=0.
\end{equation}
If there exist $e_i$ and $e_j$ such that $g(\phi e_j,e_1)\neq 0$ and $g(\phi e_i,e_1)\neq 0$, then we have
$$2c+a_j\alpha+2h^2 = 2c+a_i\alpha +2h^2=0,$$
and hence $(a_j-a_i)\alpha=0$. When $\alpha=0$, using (11), (12) and $c+h^2=0$, we have $g(\nabla_\xi e_i,e_1)=0$ and $g(e_1,\phi e_i)=0$ for any $i\geq 2$. This is a contradiction. So we see that $\alpha\neq 0$. Hence if $g(\phi e_j,e_1)\neq 0$ and $g(\phi e_i,e_1)\neq 0$, then $a_i=a_j$. Thus we can represent $A\phi e_1=a\phi e_1$ for some function $a$. Taking a suitable permutation, we can put $\phi e_1=e_2$ and $a_2=a$. By (48), we remark that
\begin{equation}
2c+a_2\alpha+2h^2=0.
\end{equation}
Thus we see that $a_2\alpha$ is constant.
By (21), we have
$$g(\nabla_{e_2} Df,e_2)=(2n+1)c+a_2\alpha -a_2^2 - \lambda.$$
Since $m$ is constant, we obtain
$$g(\nabla_{e_2} Df,e_2)=g(\nabla_{e_2} me_2,e_2)=0.$$
From these equations, we have
\begin{equation}
a_2^2=(2n+1)c+a_2\alpha - \lambda.
\end{equation}
Since $a_2\alpha$ is constant, we see that $a_2$ is constant and $\alpha$ is also constant since $a_2\neq 0$.
By (11) and (12), we have
\begin{eqnarray*}
& &(c+a_2\alpha + h^2) g(\phi e_2,e_1) - a_2g(\nabla_\xi e_2,e_1)=0,\\
& &h(\alpha-3a_2)g(\phi e_2,e_1) + hg(\nabla_\xi e_2,e_1)=0.
\end{eqnarray*}
From these equations, we obtain
$$c+2a_2\alpha-3a_2^2 + h^2=0.$$
Combining this with (49), we have
\begin{equation}
c+h^2+a_2^2=0.
\end{equation}
Subtracting (51) from this equation gives $2a_2\alpha-4a_2^2=0$; since $a_2\neq 0$, we see that $\alpha=2a_2$.
By (8), we have $g(\nabla_{e_1} e_2,e_1)=-2h$. Using (31), we obtain
$$(\lambda_2-\lambda_1)g(\nabla_{e_1}e_2,e_1)=4cm.$$
Hence we see that $2cm=-h(a_2\alpha - a_2^2 + h^2)$. Since $\alpha= 2a_2$, by (51), we have $h=2m$. By (47), we have $\lambda=(2n+1)c$. So (50) implies that $a_2=\alpha$, which is a contradiction.
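In detail, the final chain of implications is
$$a_2\alpha-a_2^2+h^2=a_2^2+h^2=-c \ \Longrightarrow\ 2cm=ch \ \Longrightarrow\ h=2m,$$
$$-2mh=-h^2=(2n+1)c-h^2-\lambda \ \Longrightarrow\ \lambda=(2n+1)c,$$
$$a_2^2=(2n+1)c+a_2\alpha-\lambda=a_2\alpha \ \Longrightarrow\ a_2=\alpha,$$
where the first line uses $\alpha=2a_2$ and (51), the second (47), and the third (50).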
\end{proof}
From this lemma, there exists $j\geq 2$ such that $a_j= 0$. For this integer $j$, using (9), we have $g(\nabla_{e_j} e_1, e_j)=0$. Besides, by Lemma 5.5 and (32), we have
$$3c(\phi e_j f)g(\phi e_1,e_j)=0.$$
Hence we see that if $a_j=0$, then we have $\phi e_j f=0$ or $g(\phi e_1,e_j)=0$. When $g(\phi e_1,e_j)=0$, by (11), we have $e_j h=0$. In this case, (7) implies that $g(\nabla_{e_1} e_j, e_1)=0$. Thus from (31), we have $e_j f=0$.
Taking a suitable permutation, we can take an orthonormal basis
$\{\xi,e_1,e_2,\cdots, e_{2n-2}\}$ such that $a_j=0$ and $\phi e_j f=0$ for $j=2,\cdots,q$, $a_j=0$ and $\phi e_j f\neq 0$ for $j=q+1,\cdots, r$, $a_j\neq 0$ for $j=r+1,\cdots, 2n-2$. We put $H_{01}=\langle e_2,\cdots, e_q\rangle$, $H_{02}=\langle e_{q+1},\cdots, e_r\rangle$, $H_0=\langle e_2,\cdots, e_r\rangle$, $H_1=\langle e_{r+1},\cdots, e_{2n-2}\rangle$, respectively. That is, for $j, t\geq 2$,
\begin{eqnarray*}
& &H_{01}={\rm Span}\{e_j : a_j=0, \phi e_j f=0\},\\
& &H_{02}={\rm Span}\{e_j : a_j=0, \phi e_j f\ne 0\},\\
& &H_1={\rm Span}\{e_t : a_t\ne 0, \phi e_t f =0\},\\
& &H_0=H_{01}\oplus H_{02}.
\end{eqnarray*}
If $e_j\in H_{02}$, then $g(\phi e_1,e_j)=0$. This implies that $\phi e_1 \in H_{01}\oplus H_1$.
We note that if $e_t\in H_1$, then (30) implies
\begin{equation}
h(e_t f)=(3c+\mu)g(\phi e_t,e_1).
\end{equation}
On the other hand, when $e_i\in H_0$ and $e_t\in H_1$, by (33),
\begin{equation}
(\mu+ 3c+ h^2)g(\phi e_i, e_t)=0.
\end{equation}
\begin{lem}
We have $\mu + 3c+h^2\neq 0$.
\end{lem}
\begin{proof}
We suppose $\mu + 3c+h^2=0$. We remark that $h$ is constant in this case. When $e_j\in H_0$, by (11),
$$(c+h^2)g(\phi e_j,e_1)=0.$$
First we consider the case $c+h^2\neq 0$. In this case we see that $\phi e_1\in H_1$. Using (7) and (8), we have
\begin{eqnarray*}
& &(2c+a_t\alpha)g(\phi e_t,e_1)+hg(\nabla_{e_1} e_t, e_1)=0,\\
& &2a_t hg(\phi e_t, e_1)-a_t g(\nabla_{e_1} e_t,e_1)=0.
\end{eqnarray*}
From these equations, using $a_t\neq 0$, we have
$$(2h^2+2c+a_t\alpha)g(\phi e_t,e_1)=0.$$
Thus we see that if $g(\phi e_1,e_t)\neq 0$, then $a_t\alpha=-2h^2-2c\neq 0$. So we can put $a_t=m$ where $m\alpha=-2c-2h^2$. Since $\phi e_1\in H_1$, we can represent $\phi e_1=\sum_{e_k\in H_1} \mu_k e_k$. We remark that if $\mu_k\neq 0$, then $A e_k=me_k$. Hence we obtain
$$A\phi e_1=\sum_{e_k\in H_1}\mu_k Ae_k=m\left(\sum_{e_k\in H_1} \mu_ke_k\right)=m\phi e_1.$$
Using (11), (26) and $A\phi e_1=m\phi e_1$, we obtain
\begin{eqnarray*}
& &(c+h^2)g(\phi e_t, e_1)-a_t g(\nabla_\xi e_t, e_1)=0,\\
& &(h^2+2c-m^2)g(\nabla_\xi e_1,e_t)=0.
\end{eqnarray*}
From these equations, we have $(c+h^2)(h^2+2c-m^2)=0$. Since $h^2+c\neq 0$, we have $m^2=h^2+2c$, from which we see that $m$ is constant. By $m\alpha=-2c-2h^2$, $\alpha$ is also constant. Thus, putting $e_t=\phi e_1$, (12) implies that
$$g(\nabla_{\xi}e_t,e_1)=\alpha-3m.$$
Using (8), we have $g(\nabla_{e_1} e_t ,e_1)=-2h$. From (28) we obtain $e_t f=h$. By (31), we have
$$(m\alpha - m^2 + h^2)g(\nabla_{e_1} e_t,e_1)=4ch.$$
From these equations, we have $h^2+c=0$, this is a contradiction.
Next we consider the case $h^2+c=0$. Again using (7) and (8), we have
$$a_t\alpha g(\phi e_t,e_1)=0.$$
If there exists $e_t\in H_1$ such that $g(\phi e_t,e_1)\neq 0$, then $a_t\alpha=0$. Since $a_t\neq 0$, we have $\alpha=0$. From (11) and $c+h^2=0$, we obtain $g(\nabla_\xi e_j,e_1)=0$ for any $j\geq 2$. From this and (12), we have
$$-3a_j hg(e_1,\phi e_j)=0$$
for any $j\geq 2$. This contradicts the assumption that there exists $e_t\in H_1$ such that $g(\phi e_t,e_1)\neq 0$. Hence this case does not occur. So we see that for any $e_t\in H_1$, we have $g(\phi e_t,e_1)=0$, that is, $\phi e_1\in H_0$. Taking a suitable permutation, we can put $e_2=\phi e_1$. By (27) and $c+h^2=0$, we have $e_2 f=0$. Thus (31) implies that $g(\nabla_{e_1} e_2, e_1)=0$. By (7), we have $cg(\phi e_2, e_1)=0$. This is a contradiction.
\end{proof}
From this lemma and (53), we see that $g(\phi e_i, e_t)=0$ for $e_i\in H_0$ and $e_t\in H_1$, that is, $\phi H_0\perp H_1$.
When $e_i\in H_{02}$, then $\phi e_i f\ne 0$ and hence $g(\phi e_i,e_1)=0$. So, we have $\phi H_{02}\subset H_0$. Moreover, when $e_i, e_j \in H_{02}$, since $a_i=a_j=0$, $e_if=0$ and $\phi e_j f\neq 0$, (35) implies that $g(\phi e_i, e_j)=0$. Hence we see that $\phi H_{02}\subset H_{01}$.
When $e_i,e_j\in H_{01}$ and $i\neq j$, since $a_i=a_j=0$ and $\phi e_i f=\phi e_j f=0$, (35) implies that $e_i f=0$ and $e_j f=0$. Thus we see that if ${\rm{dim}}H_{01}\geq 2$, then we have $e_if=0$ for any $e_i\in H_{01}$. On the other hand, since $\phi H_{02}\subset H_{01}$ and $\phi e_j f\neq 0$ for $e_j\in H_{02}$, we see that ${\rm{dim}}H_{02}=0$.
Next we consider the case ${\rm{dim}}H_{01}=1$, that is, $H_{01}=\langle e_2 \rangle$. Since $\phi H_{02}\subset H_{01}$, we see that ${\rm{dim}}H_{02}\leq 1$. If ${\rm{dim}}H_{02}=1$, then we have $\phi H_{02}=H_{01}$. On the other hand, when ${\rm{dim}}H_{02}=0$, since $\phi H_1\perp H_0$, we have $\phi e_2=\pm e_1$. We can suppose $\phi e_1=e_2$.
From these considerations, we have the following lemma.
\begin{lem} We have the following three cases:\\
{\rm Case 1}: ${\rm dim}H_{01}\geq 2$, $H_0=H_{01}$, $e_j f=0, \phi e_j f=0\ \ (e_j\in H_{01})$,\\
{\rm Case 2}: ${\rm dim}H_{01}=1$, $\phi H_{02}=H_{01}, e_j f\ne 0\ \ (e_j\in H_{01}), \phi e_1 \in H_1$,\\
{\rm Case 3}: ${\rm dim}H_{01}=1$, ${\rm{dim}}H_{02}=0$, $\phi H_1=H_1$, $H_0$ is spanned by $\phi e_1$.\\
\end{lem}
Next we show the following
\begin{lem} {\rm Case 1} and {\rm Case 2} do not occur.
\end{lem}
\begin{proof}
From (35), we have
\begin{eqnarray*}
& &(\lambda_t-\lambda_i)g(\nabla_{e_i} e_t,e_i) - (e_t \lambda_i)\\
& &=c\{(e_t f)-3(\phi e_i f)g(\phi e_t,e_i)\}+a_ia_t(e_t f).
\end{eqnarray*}
If $a_i=0$ and $a_t\neq 0$, by (4), we have $a_t g(\nabla_{e_i} e_t, e_i)=0$, and hence $g(\nabla_{e_i} e_t, e_i)=0$. Since $\lambda_i=(2n+1)c$ and $\lambda_t=(2n+1)c + a_t {\rm{tr}}A -a_t^2$, we obtain
\begin{eqnarray*}
& &(\lambda_t-\lambda_i)g(\nabla_{e_i} e_t,e_i) - (e_t \lambda_i)\\
& &\quad =(a_t{\rm{tr}}A-a_t^2)g(\nabla_{e_i} e_t, e_i)=0.
\end{eqnarray*}
On the other hand, since $\phi H_1\perp H_0$, we have $g(\phi e_t,e_i)=0$. From these equations, we see that $e_t f=0$ for any $e_t\in H_1$.
When the shape operator satisfies Case 1, by Lemma 5.4 and Lemma 5.8, we have $Df=0$ and $M$ is pseudo-Einstein. This contradicts the assumption that $h\neq 0$.
Next we consider the Case 2. We put $e_i\in H_{01}$, $e_j\in H_{02}$ and $\phi e_j=e_i$. Since $a_i=a_j=0$ and $\phi e_1\in H_1$, (10) and (34) imply
\begin{eqnarray*}
& &2cg(\phi e_i,e_j)=hg(\nabla_{e_i} e_j,e_1)-hg(\nabla_{e_j} e_i,e_1),\\
& &h^2 g(\nabla_{e_j} e_i,e_1)-h^2 g(\nabla_{e_i} e_j,e_1)\\
& &\quad =-2g(\phi e_i,e_j)(\phi e_1 f)c.
\end{eqnarray*}
From these equations, we have $\phi e_1 f=h\neq 0$. Since $\phi e_1\in H_1$, this is a contradiction.
\end{proof}
We consider the Case 3 and prove the following lemma.
\begin{lem} We have ${\rm{dim}}H_{01}=1$, ${\rm{dim}}H_{02}=0$, $\phi e_1=e_2\in H_{01}$ and $a_1=a_2=0, a_t \ne 0, t=3,\dots,2n-2$. Moreover, we have
$$ h^2=-c, \ \ Df=\frac{h}{2}\phi e_1, \ \ \lambda=(2n+1)c, \ \ \mu=-\frac{5c}{2}. $$
\end{lem}
\begin{proof}
By the same argument as in the proof of Lemma 5.9, we have $e_t f=0$ for any $e_t\in H_1$. So we obtain $Df=m\phi e_1=me_2$ for some function $m$. By (37), we obtain
\begin{equation}
-mh=\{(2n-2)c-h^2\}-\lambda-\mu.
\end{equation}
On the other hand, by (21),
$$mg(\nabla_{e_1} e_2,e_1)=\lambda_1-\lambda.$$
Using (15), we have
$$hg(\nabla_{e_1} e_1, e_2)=-c+h^2.$$
Combining these equations, we obtain
\begin{equation}
h\{(2n+1)c-h^2 -\lambda\}=m(c-h^2).
\end{equation}
From (54) and (55), we have
\begin{equation}
(2c+\mu) h^2 + c\{(2n-2)c-\lambda-\mu\}=0.
\end{equation}
First we consider the case that $2c+\mu=0$. Then we have $\lambda=2nc$. Moreover (54) implies that $mh=h^2$. Since $h\neq 0$, we have $m=h$. Since $Df=he_2$, we have $g(\nabla_{e_2} Df, e_2)=e_2 h$. Thus, from (21), we obtain
$$e_2 h= \lambda_2-\lambda=c.$$
Combining this with (7) and (15), we have $h^2=0$. This is a contradiction.
Next we suppose that $2c+\mu\neq 0$. From (54) and (56), we see that $h$ and $m$ are constant. Using (7) and (15), we have
\begin{eqnarray*}
& &hg(\nabla_{e_1} e_1,e_2)=-2c,\\
& &hg(\nabla_{e_1} e_1,e_2)=-c+h^2.
\end{eqnarray*}
So we get $h^2+c=0$.
Since $m$ is constant, we see that $g(\nabla_{e_2} Df, e_2)=0$. So we have
$$0=\lambda_2 - \lambda=(2n+1)c-\lambda.$$
Thus we have $\lambda=(2n+1)c$. Using (55), we obtain $h=2m$. So (54) implies that $\mu=-\frac{5c}{2}$.
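Explicitly, with $h^2=-c$ and $\lambda=(2n+1)c$, (55) reads $-h^3=m(c-h^2)=2cm=-2mh^2$, so $h=2m$; then $-mh=-\frac{1}{2}h^2=\frac{c}{2}$, and (54) gives
$$\frac{c}{2}=(2n-2)c-h^2-\lambda-\mu=(2n-2)c+c-(2n+1)c-\mu=-2c-\mu,$$
hence $\mu=-\frac{5c}{2}$.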
\end{proof}
\begin{lem} {\rm Case 3} does not occur.
\end{lem}
\begin{proof}
By (11), we have $g(\nabla_\xi e_s, e_1)=0$ when $e_s\in H_1$. Thus (12) implies that $e_s \alpha=0$. Next, by (4) and (35), we have
\begin{eqnarray}
& &(a_s - a_t) g(\nabla_{e_t} e_s, e_t) -(e_s a_t)=0,\\
& &(\lambda_s-\lambda_t) g(\nabla_{e_t} e_s, e_t) - (e_s \lambda_t)=0.
\end{eqnarray}
when $e_s,e_t\in H_1$ and $s\neq t$. Combining these equations, we have
\begin{eqnarray*}
0&=& (a_s \alpha - a_s^2 - a_t\alpha + a_t^2)g(\nabla_{e_t} e_s, e_t) - e_s (a_t \alpha -a_t^2)\\
&=& (\alpha - a_s - a_t)(a_s - a_t) g(\nabla_{e_t} e_s ,e_t) + (2a_t -\alpha)(e_s a_t)\\
&=&(\alpha - a_s - a_t)(e_s a_t) + (2a_t - \alpha)(e_s a_t)\\
&=& (a_t - a_s)(e_s a_t).
\end{eqnarray*}
Thus, when $a_s\neq a_t$, we have $(e_s a_t)=0 $. On the other hand, by (4), if $a_s=a_t$, then we have $(e_s a_t)=0$. So we see that if $e_s,e_t\in H_1$ and $s\neq t$, then $e_s a_t=0$. Since $\sum_{t\geq 2}a_t=0$, we have
$$0=e_s (\sum_{t\geq 2} a_t)=e_s a_s.$$
Next, by (5) and (32), we obtain
\begin{eqnarray*}
& &-a_t g(\nabla_{e_t} e_1, e_t) - (e_1 a_t)=0,\\
& &(e_1 \lambda_s) + (a_s \alpha - a_s^2 + h^2) g(\nabla_{e_s} e_1, e_s)=0.
\end{eqnarray*}
From these equations, we have
\begin{eqnarray*}
0&=&e_1 (a_s \alpha - a_s^2) + (a_s \alpha - a_s^2 + h^2) g(\nabla_{e_s} e_1, e_s)\\
&=&(\alpha - 2a_s) (e_1 a_s) + (a_s \alpha - a_s^2 + h^2) g(\nabla_{e_s} e_1, e_s)\\
&=& (a_s^2 + h^2)g(\nabla_{e_s} e_1, e_s).
\end{eqnarray*}
Here we note that (14) implies that $(e_1\alpha)=0$ since $h$ is constant by $h^2=-c$ in Lemma 5.10. Since $a_s^2 + h^2>0$, we have $g(\nabla_{e_s} e_1, e_s)=0$ for any $e_s \in H_1$. By (5) and (9), we obtain $e_1 a_s=0$, $\xi a_s=0$.
Next we consider $e_2 a_s$. From (4) and (35), we have
\begin{eqnarray*}
& & a_sg(\nabla_{e_s} e_2, e_s) = -e_2 a_s,\\
& &(a_s^2 - a_s\alpha)g(\nabla_{e_s} e_2 ,e_s) - (e_2 \lambda_s)=\frac{hc}{2},
\end{eqnarray*}
from which we obtain
\begin{equation}
a_s (e_2 a_s) - a_s (e_2\alpha)=\frac{hc}{2}.
\end{equation}
By (21), we obtain
$$g(\nabla_{e_s} Df, e_s)=\lambda_s - (2n+1)c = a_s\alpha - a_s^2.$$
Since $Df= \frac{h}{2} e_2$, we have
$$\frac{h}{2} g(\nabla_{e_s} e_2, e_s) = a_s\alpha -a_s^2.$$
By (4), we have
$$a_s g(\nabla_{e_s} e_2, e_s)+(e_2 a_s)=0.$$
From these equations, we have
\begin{equation}
h(e_2 a_s) = 2a_s(a_s^2 - a_s\alpha).
\end{equation}
Using (59), we obtain
\begin{equation}
2a_s^4 - 2a_s^3\alpha - \frac{1}{2}h^2 c - a_s h(e_2\alpha)=0.
\end{equation}
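Indeed, solving (59) for $e_2 a_s$ and substituting into (60), one finds
$$h(e_2 a_s)=h(e_2\alpha)+\frac{h^2c}{2a_s}=2a_s(a_s^2-a_s\alpha),$$
and multiplying by $a_s$ gives (61).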
Next we compute $e_2\alpha$. By (12) and (26), we have $g(\nabla_\xi e_1,e_2)=0$ and
\begin{equation}
(e_2\alpha) = h\alpha.
\end{equation}
From (61) and (62), we obtain
$$2a_s^4 - 2a_s^3\alpha - \frac{1}{2}h^2 c - a_s h^2 \alpha=0.$$
Substituting $h^2=-c$, this becomes $2a_s^4 - 2a_s^3\alpha + \frac{c^2}{2} + a_s c\alpha=0$, so we obtain
$$a_s\alpha = \frac{2a_s^4 + \frac{c^2}{2}}{2a_s^2-c}>0.$$
We remark that $c<0$ by $h^2+c=0$, so $2a_s^2-c>0$ and hence $a_s\alpha>0$ for every $e_s\in H_1$. Since $a_2=0$ and $\sum_{s\geq 2} a_s =0$, summing this inequality over $e_s\in H_1$ gives $0=\alpha\sum_{e_s\in H_1} a_s>0$, which is a contradiction.
\end{proof}
From these lemmas we have our Theorem 5.1.
\section{Hopf hypersurfaces}
In this section we consider the case that $M$ is a Hopf hypersurface of a non-flat complex space form $M^n(c)$, $n\geq 2$. We remark that a Hopf hypersurface satisfies the condition $S\xi =\beta\xi$ for some function $\beta$.
We take an orthonormal basis $\{\xi, e_1, \dots, e_{2n-2}\}$ such that
$$A\xi = \alpha \xi, \ \ Ae_j=a_j e_j, \ \ j=1,\dots, 2n-2.$$
We notice that $\alpha$ is a constant. We also obtain
$$S\xi = \beta \xi, \ \ Se_j=\lambda_j e_j, \ \ j=1,\dots, 2n-2,$$
where
\begin{eqnarray*}
\beta&=&(2n-2)c+\alpha\sum a_j,\\
\lambda_j&=&(2n+1)c+a_j{\rm tr}A-a_j^2,\\
Sc&=&4(n^2-1)c+({\rm tr}A)^2-\alpha^2-\sum a_j^2.
\end{eqnarray*}
By the equation of Codazzi, we have $\xi a_j=0$ for all $j=1,\dots, 2n-2$.
So we see that $\xi \beta =0, \xi \lambda_j=0, \xi( {\rm tr}A)=0$ and $\xi Sc=0.$
Moreover, from Proposition A, we have
\begin{eqnarray}
(2a_j-\alpha)\bar a_j=\alpha a_j+2c,
\end{eqnarray}
where we set $\bar a_j =g(A\phi e_j,e_j)$.
We show the following
\begin{theorem} Let $M$ be a Hopf hypersurface of a non-flat complex space form $M^n(c)$, $n\geq 2$. If $M$ admits a gradient pseudo-Ricci soliton, then $M$ is a pseudo-Einstein real hypersurface.
\end{theorem}
\begin{proof}
By (22), we have
$$g(R(\xi,e_j)e_j,Df)=0.$$
From the equation of Gauss,
$$g(R(\xi,e_j)e_j,Df)=(c+\alpha a_j)g(\xi, Df).$$
Hence we have
$$(c+\alpha a_j)g(\xi, Df)=0.$$
If $c+\alpha a_j=0$ for all $j$, then $\alpha\neq 0$ (otherwise $c=0$) and $a_j=-\frac{c}{\alpha}=a_i$ for all $i, j$. We put $a=a_j, j=1,\dots, 2n-2$. Then (63) implies $(2a-\alpha)a=c$. Hence we have
$$2a^2=c+\alpha a =0.$$
Thus we obtain $a=0$ and $c=0$. This is a contradiction.
So there exists $j$ such that $c+\alpha a_j\ne 0$, from which we obtain $g(\xi, Df)=0$. This shows $\xi f=0$. Therefore, by (21),
$${\rm Hess}f(e_j,\xi)=-g(Df,\phi Ae_j)=-a_j g(Df,\phi e_j)=0.$$
If $a_j\ne0$, then $g(Df,\phi e_j)=0$. When $a_j=0$, (63) implies $-\alpha \bar a_j=2c$. Thus we see that
$$\bar a_j=-\frac{2c}{\alpha}$$
is constant. Moreover, the equation of Gauss and (22) imply
\begin{eqnarray*}
g(R(\phi e_j,e_j)Df, e_j) &=& -4cg(\phi e_j, Df)\\
&=& -\bar a_j({\rm tr}A-\bar a_j)g(\nabla_{e_j}\phi e_j,e_j).\nonumber
\end{eqnarray*}
From the equation of Codazzi, we also have
$$\bar a_j g(\nabla_{e_j}\phi e_j, e_j)=0.$$
Hence we have $4cg(\phi e_j,Df)=0$. Consequently for all $j$, we have
$$g(\phi e_j,Df)=0.$$
Thus, we have $Df=0$, which proves our assertion.
\end{proof}
From Theorem 4.1, Theorem 5.3 and Theorem 6.1, we have Theorem 1.1. Regarding Ricci solitons, we state the following
\begin{theorem}
Let $M$ be a real hypersurface of a non-flat complex space form $M^n(c)$ and suppose that the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$. Then $M$ does not admit a gradient Ricci soliton.
\end{theorem}
\begin{proof}
When $M$ admits a gradient Ricci soliton, we have $\mu=0$. Suppose $M$ is not Hopf. Then, as in the proof of Lemma 5.10, we see $\mu=-\frac{5}{2}c$. This is a contradiction. Let $M$ be a Hopf hypersurface. Then, from Theorem 6.1 and $\mu=0$, $M$ is an Einstein real hypersurface. This is also a contradiction. Hence $M$ does not admit a gradient Ricci soliton.
\end{proof}
This implies Theorem 1.2. From Theorem 5.1 and Theorem 6.1, we also have the following result.
\begin{theorem} Let $M$ be a real hypersurface of a non-flat complex space form $M^n(c)$. Suppose $n\geq 3 $ and the Ricci tensor $S$ of $M$ satisfies $S\xi=\beta\xi$ for some function $\beta$. If $M$ admits a gradient pseudo-Ricci soliton, then $M$ is a pseudo-Einstein real hypersurface.
\end{theorem}
\section{Introduction}
Gravity is believed to be the only dominant force at large distances in the present epoch. In view of this, and of several puzzles concerning the evolution of the Universe (for instance dark energy, dark matter and so on), it is reasonable to suspect that we have not fully understood gravity on cosmological scales.
\\
The first attempts to modify Einstein's gravity go back to 1919, when Weyl \cite{weyl} added a quadratic term in the Weyl tensor to the Einstein-Hilbert Lagrangian. Later many authors turned their attention to modifications of the gravitational theory, for example Eddington \cite{eddington}, Lanczos, Bach, Schr\"odinger and then Buchdahl \cite{buchdal}, who analyzed such actions in the context of singularity-free oscillating cosmologies.
In the 1960s, in the context of quantum gravity, Einstein's Lagrangian was modified by introducing terms of higher order in the scalar curvature \cite{dewitt}.
\\
Very interesting classes of extended gravity are the so-called ``$f(R)$ theories'', which come from a direct generalization of Einstein's Lagrangian (for complete reviews see
\cite{{Amendola:2006we},{silvestri},{faraoni1},{capozziello1},{defelice1},{capozziello2},{sotiriou},{defelice},{nojiri0}} and references therein).
Among these, $f(R)$ gravity is an interesting model that is relatively simple and may have many applications in astrophysics, cosmology and high-energy phenomena \cite{{elizalde},{Capozziello:2007ec}}. The paradigm consists of adding higher-order curvature invariants.
The simplest modification of gravity which still preserves all the symmetries of General Relativity (GR) is the extension of the Einstein-Hilbert action
\begin{equation}
S_{{EH}}=-\frac{1}{16\pi\,G}\int_{\Omega} d^4x\,\sqrt{-g}\,R
\end{equation}
where instead of $R$, the Ricci scalar curvature, an arbitrary function $f(R)$ is present,
\begin{equation}
S=\int_{\Omega} d^4x\,\sqrt{-g}\,\;f(R) \,
\end{equation}
where $g$ is the determinant of the metric $g_{\mu \nu}$. From a conceptual point of view, we have no a priori reason to consider the gravitational Lagrangian as a linear function of the Ricci scalar $R$.
From the technical point of view, this way of proceeding directly allows us to write field equations and compare them with those of GR. Moreover, these theories are directly related to scalar-tensor theories by a peculiar conformal transformation of the metric involving a scalar field $\phi$ \cite{chiba}. This way of proceeding gives field equations which are also ghost free \cite{{Biswas:2011ar},{Biswas:2013cha}}.
\\
It has been established that our Universe is undergoing an accelerated phase. Indeed, a series of observations of Type Ia supernovae \cite{{riess},{perlmutter},{tonry}} can be explained by the accelerated expansion of the Universe. Within the mathematical framework of GR, and under the assumption that our cosmos is homogeneous and isotropic, this acceleration is attributed to some form of negative pressure due to ``dark energy''. Cosmologists have proposed many models of dark energy, but those models have many free parameters and are subject to constraints from observational data.
This discovery has revolutionized modern cosmology. There are many explanations and theoretical models in the literature, for example quintessence, k-essence, the Chaplygin gas, and so on.
The simplest explanation for the Universe's accelerated expansion is a cosmological constant, $\Lambda$, that is to say a nonzero vacuum energy that drives the acceleration of the Universe. From an observational viewpoint it is important to say that the $\Lambda$CDM model of the Universe is in agreement with the data coming from observations. But this model shows incongruences and is ``unnatural'', because it poses important theoretical questions: why should a nonzero vacuum energy drive the acceleration? Why is the cosmological constant so small? This is known as the ``cosmological constant problem'' and it is a fundamental problem in cosmology and physics. In other terms, the nature of dark energy as well as its cosmological origin remain unknown and a real mystery.
\\
For a resolution of this problem we invoke the class of $f(R)$ theories. To be sure, this is just one particular class of extended gravity theories; some different interesting approaches have been studied in \cite{Biswas:2011ar,Biswas:2013cha}.
Nevertheless, recent research has shown a plausible alternative to this picture: it has been shown that such theories lead to an effective dark energy
\cite{{capozziello2},{uno},{due},{tre},{quattro},{cinque},{sei},{sette},{otto},{nove},{dieci},{undici},{dodici},{tredici},{quattordici},{quindici},{sedici},{diciassette},{diciotto},{diciannove},{venti},{ventuno},{ventidue},{ventitre},{ventiquattro},{venticinque},{ventisei},{ventisette},{ventotto},{ventinove},{trenta},{trentuno},{trentadue},{trentatre},{trentaquattro},{trentacinque},{trentasei},{trentasette},{trentotto},{trentanove},{quaranta},{cento}}; in other terms these models mimic the accelerated expansion of the Universe by a modification of general relativity that converts the attractive gravity into a repulsive interaction on cosmological scales.
If we consider a small correction to the Einstein-Hilbert action, for example by adding a $1/R$ term, we obtain an acceleration of the Universe, because the $1/R$ term is able to dominate as the Hubble parameter decreases. It has been shown \cite{chiba} that this theory is equivalent to scalar-tensor gravity without a scalar kinetic term. It is important to say that this connection to scalar-tensor gravity is provided by a conformal transformation connecting the Einstein frame and the Jordan one, and is valid for all extended theories of gravity whose action contains a function $f(R)$ of the Ricci scalar with nonzero second derivative with respect to $R$. These $f(R)$ models contain higher-order gravity terms that may be the cause of the acceleration of the Universe.
\\
In other terms, modifying general relativity allows us to dispense with dark energy, but this approach does not explain the minuscule value of the vacuum energy. It is also important to stress that we do not know the exact functional form of the Lagrangian; therefore it is necessary to test theoretical considerations against observational data.
\\
This paper is organized as follows: in Sec. II we report the well-known derivation of the modified field equations in $f(R)$ gravity; in Sec. III we focus on the perturbed Lagrangian; in Sec. IV we consider the Hubble parameter; in Sec. V we study our model in relation to the apparent acceleration of the Universe. We summarize our conclusions in Sec. VI.
\vskip 1.3truecm
\section{Deriving field equations in $f( R)$ gravity}
In this Section we report the standard way \cite{Dyer:2008hb,Guarnizo:2010xr} to obtain the modified equations in $f(R)$ gravity. We start from a theory described by the Lagrangian density $\sqrt{-g} f(R)$ and we apply the variational principle $\delta \int d^4 x \sqrt{-g} f(R)=0$. This gives:
\begin{eqnarray}
\delta_{g^{\mu\nu}}S&=&\int_\Omega d^4x\left[ \delta(\sqrt{-g})f( R)+\sqrt{-g}\,\delta (f( R)) \right]
\nonumber\\
&=&\int_\Omega d^4x\left[ -\frac{1}{2}\sqrt{-g}\,g_{\mu \nu}f( R)\,\delta g^{\mu \nu}+\sqrt{-g}\,f'( R)\delta (g^{\mu \nu}R_{\mu \nu}) \right]\\
\label{eq:first_variation}
&=&\int_\Omega d^4x\sqrt{-g}\left[ -\frac{1}{2}\,g_{\mu \nu} f( R)\,\delta g^{\mu \nu}+f'( R)\delta g^{\mu \nu}R_{\mu \nu} +f'( R)g^{\mu \nu}\delta R_{\mu \nu} \right]\nonumber
\end{eqnarray}
where $f'( R)\equiv \frac{d f( R)}{dR}$ and
\begin{equation}
\delta \sqrt{-g}=-\frac{1}{2}\sqrt{-g}\,g_{\mu \nu}\delta g^{\mu \nu}.
\end{equation}
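This standard identity follows from Jacobi's formula for the variation of the determinant $g\equiv{\rm det}\, g_{\mu\nu}$:
$$\delta g=g\,g^{\mu\nu}\delta g_{\mu\nu}=-g\,g_{\mu\nu}\delta g^{\mu\nu},$$
so that
$$\delta\sqrt{-g}=-\frac{\delta g}{2\sqrt{-g}}=-\frac{1}{2}\sqrt{-g}\,g_{\mu\nu}\delta g^{\mu\nu}.$$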
The last term in eq.~\eqref{eq:first_variation} gives rise to boundary effects because $\delta R_{\mu \nu}$ contains $(\delta \partial g)_{\partial \Omega}$, which is nonzero.
First of all, let's notice that:
\begin{equation}
\delta R_{\mu \nu}=\nabla_\alpha\delta{\Gamma_{\mu \nu}}^\alpha-\nabla_\mu \delta{\Gamma_{\alpha \nu}}^\alpha.
\end{equation}
where $\Gamma_{\mu \nu}^{\alpha}$ are the usual Christoffel symbols constructed from $g_{\mu \nu}$.
Now, let's rewrite our relation in the local inertial frame where $\Gamma=0\Rightarrow \nabla\rightarrow \partial$ and the metricity condition becomes $\partial_\alpha g_{\mu \nu}=0$. In this way we obtain:
\begin{eqnarray}
\delta {\Gamma_{\mu \nu}}^\alpha&=&\frac{1}{2}\,g^{\alpha \rho}\left( \partial_\mu \delta g_{\nu \rho}+\partial_\nu \delta g_{\rho \mu}-\partial_\rho \delta g_{\mu \nu} \right)\\
\delta {\Gamma_{\alpha \nu}}^\alpha&=&\frac{1}{2}\,g^{\alpha \rho}\partial_\nu \delta g_{\rho \alpha}
\end{eqnarray}
and
\begin{equation}
\label{eq:second_variation}
\left( g^{\mu \nu}\delta R_{\mu \nu} \right)_{\Gamma=0}=\partial^\rho \partial^\nu \delta g_{\rho \nu}-g^{\alpha\rho}\partial_\mu \partial^\mu \delta g_{\alpha \rho}.
\end{equation}
If we release the inertial frame hypothesis, we have to replace $\partial$ with $\nabla$, so that eq.(\ref{eq:second_variation}) becomes:
\begin{eqnarray}
\label{eq:third_variation}
g^{\mu \nu}\delta R_{\mu \nu} &=& \nabla^\rho \nabla^\nu \delta g_{\rho \nu}-g^{\alpha\rho}\nabla_\mu \nabla^\mu \delta g_{\alpha \rho}\nonumber
\\
&=& \nabla_\mu\left[ g_{\alpha \beta} \nabla^\mu\delta g^{\alpha \beta}-\nabla_\nu\delta g^{\mu \nu} \right]
\end{eqnarray}
where we used $g^{\alpha \beta}\,\delta g_{\alpha \beta}=-g_{\alpha \beta}\,\delta g^{\alpha \beta}$, $\delta g_{\alpha \beta}=-g_{\alpha \rho}\,g_{\beta \sigma}\,\delta g^{\rho \sigma}$ and the metricity condition $\nabla g=0$.
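These identities follow from varying the relations $g^{\alpha\beta}g_{\alpha\beta}=4$ and $g^{\mu\alpha}g_{\alpha\nu}=\delta^\mu_\nu$:
$$0=\delta(g^{\alpha\beta}g_{\alpha\beta})=g_{\alpha\beta}\,\delta g^{\alpha\beta}+g^{\alpha\beta}\,\delta g_{\alpha\beta},$$
$$0=\delta(g^{\mu\alpha}g_{\alpha\nu})=g_{\alpha\nu}\,\delta g^{\mu\alpha}+g^{\mu\alpha}\,\delta g_{\alpha\nu}\ \Longrightarrow\ \delta g_{\alpha\beta}=-g_{\alpha\rho}\,g_{\beta\sigma}\,\delta g^{\rho\sigma}.$$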
In other words, the last term in (\ref{eq:first_variation}) becomes:
\begin{eqnarray}
\int_\Omega & d^4x&\,\sqrt{-g}\,f'( R) \nabla_\mu \left[g_{\alpha \beta} \nabla^\mu\delta g^{\alpha \beta}-\nabla_\nu\delta g^{\mu \nu}\right] \nonumber
\\
&=& \int_\Omega d^4x\,\sqrt{-g}\,\nabla_\mu\left[ f'( R)(g_{\alpha \beta} \nabla^\mu\delta g^{\alpha \beta}-\nabla_\nu\delta g^{\mu \nu})\right] \nonumber
\\
&-&\int_\Omega d^4x\sqrt{-g}\,\{\nabla_\mu\left[ \left( \nabla^\mu f'( R) \right)\,g_{\alpha \beta}\,\delta g^{\alpha \beta} \right] -\nabla_\alpha \nabla^\alpha f'( R)\,g_{\mu \nu}\,\delta g^{\mu \nu}\}
\nonumber
\\
&+&\int_\Omega d^4x\sqrt{-g}\,\{\nabla_\nu\left[ \left( \nabla_\mu f'( R) \right)\,\delta g^{\mu \nu} \right]-\nabla_\nu \nabla_\mu f'( R)\,\delta g^{\mu \nu}\}.
\end{eqnarray}
It is important to note that the second and fourth integrals do not contribute to the variation; in fact, they can be converted into flux integrals evaluated on the boundary $\partial \Omega$, where $\delta g=0$. On the other hand, the third and fifth integrals give a relevant contribution to the variation, which appears as follows:
\begin{equation}
\label{eq:variation1}
\int_\Omega d^4x\,\sqrt{-g}\,\delta g^{\mu \nu} \left[ R_{\mu \nu} f'( R)-\frac{1}{2}\,g_{\mu \nu}f( R)+ g_{\mu \nu}\Box f'( R)-\nabla_\mu \nabla_\nu f'( R) \right]
\end{equation}
where $\Box\equiv \nabla_\alpha \nabla^\alpha$. Here we may add the variation of the material action which gives the standard condition
\begin{equation}
\label{eq:variation2}
\int_\Omega d^4x\,\sqrt{-g}\,\frac{1}{2}\,T_{\mu \nu}\,\delta g^{\mu \nu},
\end{equation}
in order to obtain all the terms proportional to $\delta g^{\mu \nu}$. There is still another term that contributes to the variation; this one can be rewritten as a flux integral as follows:
\begin{equation}
\label{eq:border_variation}
\int_{\partial\Omega}dS^\mu\sqrt{-g}\,f'( R)\left[ g_{\alpha \beta} \nabla^\mu \delta g^{\alpha \beta}-\nabla_\nu\delta g^{\mu \nu} \right]\equiv \delta_g S_{b}.
\end{equation}
In this way, we have obtained a system of fourth-order differential bulk equations that must determine the 10 components of the symmetric tensor $g_{\mu\nu}$. Therefore, we are allowed to fix 40 initial conditions in order to have a well-defined solvable problem. We have already fixed 20 of them by requiring that $\delta g_{\mu\nu}|_{\partial\Omega}=0$, so we are still left with 20 conditions, which are not enough to completely eliminate the boundary terms. In fact, in eq.(\ref{eq:border_variation}), we have 80 degrees of freedom.
In GR ($f( R)=R$, so that $f'( R)=1$) the boundary contribution can be eliminated by adding the well-known York-Gibbons-Hawking action:
\begin{equation}
S_{YGH}=\int_\Omega d^4x\sqrt{-g}\,\nabla_\mu V^\mu_{YGH} =\int_{\partial \Omega}d^3\xi\sqrt{|h|}\,2K
\end{equation}
where $h_{\alpha\beta}\equiv g_{\alpha\beta}+\epsilon\,\eta_\alpha\eta_\beta$ is the induced metric on the boundary, and $K\equiv h^{\mu\nu}K_{\mu\nu}=2h^{\mu\nu}\nabla_\mu\eta_\nu$ is the extrinsic curvature of the boundary hypersurface.
Therefore, in our case a possible boundary term is:
\begin{equation}
S_{B}=\int_{\Omega}d^4x\sqrt{-g}\,\nabla_\mu\left[f'( R)\,V^\mu_{YGH}\right].
\end{equation}
By varying with respect to $g$, we obtain that:
\begin{equation}
\label{eq:total_boundary_variation}
\delta_gS_{B} = \int_{\partial\Omega}d^3\xi\sqrt{|h|}\,f'( R)\eta^\mu h^{\nu\alpha}\partial_\mu\delta g_{\nu\alpha} +\int_{\partial\Omega}dS_\mu\sqrt{-g}\,2K\,f''( R)\,g^{\mu\nu}\delta R_{\mu\nu}.
\end{equation}
The first integral of eq.~(\ref{eq:total_boundary_variation}) exactly cancels the contribution of eq.~(\ref{eq:border_variation}). In addition, $R_{\mu\nu}$ is a symmetric tensor, so the requirement $\delta R_{\mu\nu}=0$ on the boundary can be used to impose the additional 20 initial conditions, allowing us to completely solve the bulk equations:
\begin{equation}
\label{eq:fieldEq}
R_{\mu \nu} f'( R)-\frac{1}{2}\,g_{\mu \nu}f( R)+g_{\mu \nu}\Box f'( R)-\nabla_\mu \nabla_\nu f'( R)=-\frac{1}{2}T_{\mu\nu}.
\end{equation}
Before concluding this section, let us notice that $\left(g^{\mu\nu}\delta R_{\mu\nu}\right)_{\partial \Omega}=\left( \delta R\right)_{\partial\Omega}$; in other words, the total contributions on the boundary can be rewritten as a scalar degree of freedom, using the well-known equivalence between $f( R)$ gravity and scalar-tensor theory \cite{Dyer:2008hb}.
\vskip 2truecm
\section{Perturbed Lagrangian}
In this Section we investigate a perturbation of the general relativity solution in a purely matter-dominated Friedmann universe. In particular, we will obtain a modified expression for the expansion parameter $a$.
\\
To this end, let us consider a generic and unknown Lagrangian for a modified theory of gravity which depends only on the Ricci scalar $R=g^{\mu\nu}R_{\mu\nu}$, so we can write $\mathcal{L}=f( R)$. As we saw in the previous section the field equations are given by (\ref{eq:fieldEq}). Because we are restricting to Friedmann-Lema\^itre-Robertson-Walker (FLRW) models, with only one function to determine, we consider the trace of these equations:
\begin{equation}
\label{eq:Trace}
R \, f'( R)-2f( R)+3\,\Box f'( R)=-\frac{1}{2}\,T.
\end{equation}
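For completeness, the contraction can be checked in one line (our addition): using $g^{\mu\nu}g_{\mu\nu}=4$ and $g^{\mu\nu}\nabla_\mu\nabla_\nu=\Box$,
\[
g^{\mu\nu}\left[ R_{\mu \nu} f'( R)-\frac{1}{2}\,g_{\mu \nu}f( R)+g_{\mu \nu}\Box f'( R)-\nabla_\mu \nabla_\nu f'( R) \right]
= R\,f'( R)-2f( R)+(4-1)\,\Box f'( R),
\]
which is the left-hand side of eq.~(\ref{eq:Trace}).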
In general, $f( R)$ is not specified, therefore it is possible to consider its power series expansion instead of the exact form:
\begin{equation}
\label{eq:powSerExp}
f( R)=\sum_{n=0}^\infty c_n R^n \, .
\end{equation}
From (\ref{eq:powSerExp}), it directly follows that
\begin{equation}
f'( R)=\sum_{n=1}^\infty n\,c_n\,R^{n-1}.
\end{equation}
In this way, equation (\ref{eq:Trace}) can be rewritten as follows:
\begin{equation}
\left( \sum_{n=1}^\infty n\,c_n\,R^{n-1} \right)R -2 \sum_{n=0}^\infty c_n\,R^n+ 3\Box \left( \sum_{n=1}^\infty n\,c_n\,R^{n-1} \right) = -\frac{1}{2}\, T
\end{equation}
so we obtain:
\begin{equation}
\sum_{n=1}^\infty c_n\left[ (n-2)R^n+3\,n\,\Box R^{n-1} \right]=-\frac{1}{2}\,T+2\,c_0.
\end{equation}
At this point we assume that there is no cosmological constant term, i.e. that $c_0=0$, because our purpose is to explain the apparent acceleration of the universe as an effect of the Lagrangian's higher-order terms, without introducing any kind of dark energy. Furthermore, let us divide the whole equation by $c_1=-\frac{1}{2\chi}$ and define $C_n=\frac{c_n}{c_1}$, so that:
\begin{equation}
\label{eq:equations}
-R=\chi\,T, \;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\text{if } \;\;\;\;\;\;\;n=1
\end{equation}
\begin{equation}
-R+6\,C_2\,\Box\,R=\chi\,T, \;\;\;\;\text{if } \;\;\;\;\;\;\;n=2
\end{equation}
and so on. We stop our expansion here because we assume that all of the relevant corrections to general relativity can be treated as a first-order perturbative correction. With this in mind, let us consider a flat FLRW metric:
\begin{equation}
ds^2=dt^2-a^2(t)\left[ dx^2+dy^2+dz^2 \right].
\end{equation}
In such a way, we obtain $\Box R =\ddot R+3\,\frac{\dot a}{a} \dot R$; so eq.~(\ref{eq:equations}) becomes:
\begin{equation}
-R=\chi\,T, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\text{if } \;\;\;\; n=1
\end{equation}
\begin{equation}
-R+6\,C_2\,\left(\ddot R+3\,\frac{\dot a}{a} \dot R\right)=\chi\,T, \;\;\;\;\text{if } \;\;\;\;\; n=2.
\end{equation}
Now, because we are interested in finding the perturbative correction to a GR solution for a purely matter-dominated Friedmann model, let us expand the scale factor as $a(t)\approx a_0(t)+C_2\,a_1(t)$; taking into account that $T = \rho_0\,a^{-3}\approx \rho_0\, a_0^{-3}\left( 1-3\,C_2 \frac{a_1}{a_0} \right)$, we obtain:
\begin{eqnarray}
\label{eq:unperturbedFRW}
&-&R\left[a_0(t)\right]=\frac{\chi\,\rho_0}{a_0^3}\\
\label{eq:firstCorrectionFRW}
&-&R\left[ a(t) \right]+6\,C_2\,\left\{ \ddot R\left[ a(t) \right]+3\,\frac{\dot a(t)}{a(t)}\dot R\left[ a(t) \right] \right\}=\chi\,T.
\end{eqnarray}
The solution of eq.~(\ref{eq:unperturbedFRW}) is given by $a_0(t)=\left( 1+\frac{\sqrt{3\chi\rho_0}}{2}\,t \right)^{2/3}$. Now we want to find the expression of $a_1$ by expanding eq.~(\ref{eq:firstCorrectionFRW}) as follows:
\begin{equation}
\label{eq:solvenda}
-R\left[ a_0+C_2\,a_1 \right]+6\,C_2\,\left\{ \ddot R\left[ a_0+C_2\,a_1 \right]
+ 3\,\frac{\dot a_0+C_2\,\dot a_1}{a_0+C_2\,a_1}\,\dot R\left[ a_0+C_2\,a_1 \right] \right\}=\frac{\chi\,\rho_0}{a_0^3}\left( 1-3\,C_2\frac{a_1}{a_0} \right)
\end{equation}
we obtain
\begin{equation}
-R\left[ a_0 \right]-C_2\,a_1\,\frac{\partial R\left[ a_0 \right]}{\partial a}+6\,C_2\,\left\{ \right. \ddot R\left[ a_0\right]
+3\,\frac{\dot a_0}{a_0}\,\dot R\left[ a_0 \right] \left. \right\}=\frac{\chi\,\rho_0}{a_0^3}\left( 1-3\,C_2\frac{a_1}{a_0} \right) \, ,
\end{equation}
that is to say
\begin{equation}
a_1=\left\{6\,\ddot R\left[ a_0\right]+18\,\frac{\dot a_0}{a_0}\,\dot R\left[ a_0 \right]\right\}\left(\frac{\partial R\left[ a_0 \right]}{\partial a}-3\,\frac{\chi\rho_0}{a_0^4}\right)^{-1}.
\end{equation}
At this step, remembering that $R(t)=-6\left\{ \left( \dot a/a \right)^2+\ddot a/a \right\}$ and inserting the expression for $a_0$, we finally find:
\begin{equation}
a_1(t)=\frac{3\,\chi\rho_0}{2\left( 1+\frac{\sqrt{3\chi\rho_0}}{2}\,t \right)^{4/3}}.
\end{equation}
and then:
\begin{align}
\label{eq:solution1}
a(t)&\approx \left( 1+\frac{\sqrt{3\chi\rho_0}}{2}\,t \right)^{2/3}+C_2\frac{3\,\chi\rho_0}{2\left( 1+\frac{\sqrt{3\,\chi\rho_0}}{2}\,t \right)^{4/3}}\nonumber\\
&=\left( 1+\frac{3\,h}{2}\,t \right)^{2/3}+C_2\frac{9\,h^2}{2\left( 1+\frac{3\,h}{2}\,t \right)^{4/3}}
\end{align}
which is the most general expression of the expansion parameter in our model; it directly reduces to the general relativity solution for $C_2=0$. Here we defined $h\equiv\frac{\sqrt{3\,\chi\rho_0}}{3}$.
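As an independent check (ours, not part of the original derivation; we set $\chi\rho_0=3h^2$ and write $\epsilon$ in place of $C_2$ as the expansion parameter), one can verify symbolically that this solution satisfies the $n=2$ trace equation at zeroth and first order:

```python
import sympy as sp

t, h, eps = sp.symbols('t h epsilon')
u = 1 + sp.Rational(3, 2)*h*t
a0 = u**sp.Rational(2, 3)                          # GR background
a1 = sp.Rational(9, 2)*h**2*u**sp.Rational(-4, 3)  # first-order correction
a = a0 + eps*a1                                    # perturbed scale factor

# Ricci scalar of flat FLRW in the paper's convention
R = -6*((sp.diff(a, t)/a)**2 + sp.diff(a, t, 2)/a)
box_R = sp.diff(R, t, 2) + 3*(sp.diff(a, t)/a)*sp.diff(R, t)
E = -R + 6*eps*box_R - 3*h**2/a**3                 # residual of the n = 2 trace equation

E0 = E.subs(eps, 0)                  # zeroth-order residual
E1 = sp.diff(E, eps).subs(eps, 0)    # first-order residual
# both orders vanish; spot-check numerically at an arbitrary point
pt = {t: 0.37, h: 1.2}
assert abs(float(sp.N(E0.subs(pt)))) < 1e-12
assert abs(float(sp.N(E1.subs(pt)))) < 1e-9
```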
\\
Unlike in the usual GR normalization, the perturbed scale factor as written in eq.~\eqref{eq:solution1} is not equal to one at the present time. Nevertheless, we can recover the usual normalization by redefining $a(t)$ as $a(t)-\frac{9C_2 h^2}{2}$. This is possible by taking into account Eq.~(30). In fact, let us consider the differential equation for the first-order correction $a_1$ following from Eq.~(30). Using the explicit form of the zeroth-order solution, $a_0(t) =(1+ 3ht/2)^{2/3}$, and the definition $h^2= \chi \rho_0/3$, we find that all terms proportional to $a_1$ cancel each other, and we are left with an equation for $a_1$ containing only its time derivatives $\dot a_1$ and $\ddot a_1$. Hence, we can safely subtract a constant term from the solution (33), and the result is still a viable solution for $a_1(t)$.
\\
From now on, we will then refer to the following expression for the scale factor:
\begin{equation}
\label{eq:solution2a}
a(t)=\left( 1+\frac{3\,h}{2}\,t \right)^{2/3}+C_2\frac{9\,h^2}{2\left( 1+\frac{3\,h}{2}\,t \right)^{4/3}}-\frac{9\,C_2 h^2}{2},
\end{equation}
which is automatically normalized to 1 today (i.e., at $t=0$). Eq.~(35) shows a very interesting dependence of the scale factor $a(t)$ on the time $t$, and we note that, because of the normalization $a_0(0)=1$ and $a_1(0)=0$, the perturbative approach we are using is well grounded. It is true that, since $a_0$ decreases towards the past while $a_1$ increases, if we consider a small but finite $C_2$, then the perturbative correction $|C_2\frac{a_1}{a_0}|$ was larger in the past than it is today. However, thanks to the chosen normalization $C_2{a_1(0)}/{a_0(0)}=0$, we can still expect the perturbative approach to be valid in a given range of time, depending on the values of $h$ and $C_2$.
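These normalization statements can be verified directly (our check; $a_1$ here denotes the correction after the constant subtraction):

```python
import sympy as sp

t, h, C2 = sp.symbols('t h C_2')
u = 1 + sp.Rational(3, 2)*h*t
a0 = u**sp.Rational(2, 3)                                # GR background
a1 = sp.Rational(9, 2)*h**2*(u**sp.Rational(-4, 3) - 1)  # shifted first-order correction

assert a0.subs(t, 0) == 1                      # a0(0) = 1
assert sp.simplify(a1.subs(t, 0)) == 0         # a1(0) = 0
assert sp.simplify((a0 + C2*a1).subs(t, 0)) == 1   # a(0) = 1 for any C2
```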
\\
In the following sections we will calculate some fundamental parameters like the Hubble function and the acceleration. In particular, the last section is dedicated to the comparison with supernovae Ia Union2 data, in order to obtain an experimental estimate of $C_2$ by a best-fit procedure.
\vskip 2truecm
\section{Hubble parameter}
In the previous section we found the solution for $a(t)$, which contains two free parameters, $h$ and $C_2$. In general relativity $h$ is just the Hubble parameter evaluated today. However, in the perturbed approach this interpretation is no longer viable: in fact, remembering that the Hubble function is related to the expansion rate by
\begin{equation}
\label{eq:Hubble_function}
H(t)=\frac{\dot a(t)}{a(t)}\approx \frac{\dot a_0}{a_0}\left[ 1+C_2\,\left( \frac{\dot a_1}{\dot a_0}-\frac{a_1}{a_0} \right) \right]
\end{equation}
we obtain
\begin{equation}
\label{eqxx}
H(t)= \frac{2 h}{2+3 h t}+\frac{9\,C_2\,h^3}{2}\left[ \left( 1+\frac{3}{2}\,h\,t \right)^{-5/3} -3\left( 1+\frac{3}{2}\,h\,t \right)^{-3}\right].
\end{equation}
So the Hubble constant is:
\begin{equation}
\label{eq:Hubble_constrain}
H_0\equiv H(0)=h-9\,C_2\,h^3.
\end{equation}
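As an independent symbolic check (ours), the perturbative $H(t)$ above agrees with the exact $\dot a/a$ of the normalized scale factor through first order in $C_2$, and reproduces $H(0)=h-9C_2h^3$:

```python
import sympy as sp

t, h, C2 = sp.symbols('t h C_2')
u = 1 + sp.Rational(3, 2)*h*t
# normalized scale factor of the previous section
a = u**sp.Rational(2, 3) + C2*sp.Rational(9, 2)*h**2*(u**sp.Rational(-4, 3) - 1)

H_exact = sp.diff(a, t)/a
H_pert = 2*h/(2 + 3*h*t) \
    + sp.Rational(9, 2)*C2*h**3*(u**sp.Rational(-5, 3) - 3*u**(-3))

delta = H_exact - H_pert
# agreement at zeroth and first order in C2
assert sp.simplify(delta.subs(C2, 0)) == 0
assert sp.simplify(sp.diff(delta, C2).subs(C2, 0)) == 0
# Hubble constant: H(0) = h - 9 C2 h^3
assert sp.simplify(H_pert.subs(t, 0) - (h - 9*C2*h**3)) == 0
```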
At this point, using the observed value, say $H_0=67$ km/s Mpc$^{-1}$, we are able to determine a relation between $C_2$ and $h$ by inverting (\ref{eq:Hubble_constrain}), i.e.
\begin{equation}
\label{eq:C2ofh}
C_2=\frac{h-H_0}{9\,h^3}.
\end{equation}
This relates $C_2$ to $h$ through $H_0$ and allows us to rewrite eq.~(35) as
\begin{equation}
\label{eq:solution2}
a(t)= \left( 1+\frac{3}{2}\,h\,t \right)^{2/3}+\frac{h-H_0}{2\,h}\left[\left( 1+\frac{3}{2}\,h\,t \right)^{-4/3}-1\right].
\end{equation}
This expression contains the single parameter $h$, and we notice that, as $C_2$ approaches $0$, we have $h\rightarrow H_0$. We stress once again that the interpretation of $h$ as the Hubble constant breaks down in the perturbed model, where $h$ is just a parameter fixed by the initial condition $\dot a(0)=H_0$.
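As a quick consistency check (ours): substituting $C_2=(h-H_0)/(9h^3)$ back into the two-parameter normalized scale factor reproduces the single-parameter expression above, and the initial condition $\dot a(0)=H_0$ holds exactly:

```python
import sympy as sp

t, h, H0 = sp.symbols('t h H_0')
u = 1 + sp.Rational(3, 2)*h*t
C2 = (h - H0)/(9*h**3)

# two-parameter normalized form with C2 eliminated via the H0 constraint
a_two = u**sp.Rational(2, 3) + C2*sp.Rational(9, 2)*h**2*u**sp.Rational(-4, 3) \
    - sp.Rational(9, 2)*C2*h**2
# single-parameter form of the equation above
a_one = u**sp.Rational(2, 3) + (h - H0)/(2*h)*(u**sp.Rational(-4, 3) - 1)

assert sp.simplify(a_two - a_one) == 0
# initial condition fixing h: adot(0) = H0
assert sp.simplify(sp.diff(a_one, t).subs(t, 0) - H0) == 0
```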
\\
In the next section, we shall use our solution eq.~(40) in order to study the apparent acceleration of the universe.
\vskip 2truecm
\section{Acceleration and Luminosity distance}
In order to study the acceleration properties of our model, let us compute the second derivative of $a(t)$:
\begin{equation}
\label{eq:acceleration}
\ddot a(t)=-\frac{h^2}{2\,\left( 1+\frac{3}{2}\,h\,t \right)^{4/3}}+\frac{h-H_0}{2}\frac{7\,h}{\left( 1+\frac{3}{2}\,h\,t \right)^{10/3}}.
\end{equation}
The previous expression, when evaluated at the present time, becomes equal to $(\ddot a/a)_\text{today}$ because of our choice $a(0)=1$, so, from eq.~(\ref{eq:acceleration}) we have:
\begin{equation}
\label{eq:acceleration_today}
\left(\frac{\ddot a}{a}\right)_\text{today}=\frac{h}{2}\left( 6\,h-7\,H_0 \right).
\end{equation}
Therefore acceleration at the present epoch appears if eq.~(\ref{eq:acceleration_today}) is greater than zero, i.e.
\begin{equation}
h<0 \qquad \text{or}\qquad h>\frac {7} {6} \,H_0.
\end{equation}
This analysis shows a very important and interesting feature of this perturbative model: without introducing any kind of dark energy, an accelerated expansion of the Universe is possible even in the presence of a pure matter component, provided there is a small correction to the Lagrangian.
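This threshold can be verified directly (our check; the numerical values below are illustrative only):

```python
import sympy as sp

t, h, H0 = sp.symbols('t h H_0')
u = 1 + sp.Rational(3, 2)*h*t
a = u**sp.Rational(2, 3) + (h - H0)/(2*h)*(u**sp.Rational(-4, 3) - 1)

addot0 = sp.simplify(sp.diff(a, t, 2).subs(t, 0))
# matches (h/2)(6h - 7 H0) of eq. (acceleration_today), since a(0) = 1
assert sp.simplify(addot0 - sp.Rational(1, 2)*h*(6*h - 7*H0)) == 0

# e.g. h = 1.2 H0 > (7/6) H0 gives acceleration today
assert addot0.subs({h: sp.Rational(6, 5), H0: 1}) > 0
```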
\\
At this point, it is very interesting to check our model against the Supernovae Ia data, considering in particular the recent Union2 compilation \cite{union2}. To this end, let us evaluate the luminosity distance: it is well known that, in a Friedmann-Lema\^itre-Robertson-Walker spacetime, the luminosity distance $d_L$ is:
\begin{equation}
d_L(z)=\left( 1+z \right)\int_0^z\frac{dz'}{H(z')}
\end{equation}
where $H(z)$ is the Hubble function given in eq.~(\ref{eq:Hubble_function}) with the constraint eq.~(\ref{eq:Hubble_constrain}). The dependence on the redshift is obtained by considering the expression of $z$ for a stationary geodesic observer, i.e.
\begin{equation}
1+z=\frac{1}{a(t)}.
\end{equation}
Moreover, defining $y\equiv\left( 1+\frac{3}{2}\,h\,t \right)^{2/3}$, it is possible to rewrite the last equation as:
{\begin{align}
\frac{1}{1+z}=a(t)=y+\frac{h-H_0}{2h}\left( y^{-2}-1 \right)\Rightarrow
y^3-A_1(z)\,y^2+A_2=0
\end{align}}
where
\begin{eqnarray}
&A_1(z)=\frac{h-H_0}{2\,h}+\frac{1}{1+z}\nonumber\\
&A_2=\frac{h-H_0}{2\,h}.
\end{eqnarray}
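A quick numerical check (ours, with arbitrary illustrative values) that $y=(1+\tfrac{3}{2}ht)^{2/3}$ indeed solves this cubic:

```python
import numpy as np

h, H0 = 1.0, 0.9        # illustrative values, arbitrary units
B = (h - H0) / (2*h)    # this is A2

t = -0.3                             # a time in the past (t = 0 is today)
y = (1 + 1.5*h*t)**(2.0/3.0)
a = y + B*(y**-2 - 1)                # scale factor written in terms of y
z = 1.0/a - 1.0                      # redshift of a stationary geodesic observer

A1 = B + 1.0/(1.0 + z)
# y solves y^3 - A1 y^2 + A2 = 0 identically
assert abs(y**3 - A1*y**2 + B) < 1e-12
```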
Finally, by solving the third order polynomial, we find that
\begin{equation}
y(z)=\frac{1}{3}\left[ A_1+\frac{A_1^2}{A_3}+A_3 \right]
\end{equation}
or equivalently
\begin{equation}
t(z)=\frac{2}{3\,h}\left[y(z)^{3/2}-1\right]
\end{equation}
with
\begin{equation}
A_3^3(z)=A_1^3+\frac{3}{2}\left( \sqrt{81\,A_2^2-12\,A_2A_1^3}-9A_2 \right).
\end{equation}
Once this relation is found, we can insert it into the definition of $d_L$ and, by numerical integration, evaluate the so-called distance modulus
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{figure2.pdf}
\caption{The Hubble diagram of the Union 2 dataset. The plot shows the best-fit result with the single parameter $h$: we find $h=(13.9 \pm 0.8)$ km/s Mpc$^{-1}$ with ${\chi^2}/\text{d.o.f.} =1.1$. For the Hubble constant we used the value $H_0=67$ km/s Mpc$^{-1}$.}
\label{figure2}
\end{figure}
\begin{equation}
\label{mu}
\mu(z,h)=5\log_{10} \left[\frac{d_L(z, h)}{1\,\text{Mpc}}\right] +25\, .
\end{equation}
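As an illustration (ours, not part of the paper's analysis: function names, parameter values and the trapezoidal discretization are our own choices, and we restore the factor of $c$ so that $d_L$ comes out in Mpc), the pipeline $z \mapsto y \mapsto H \mapsto d_L \mapsto \mu$ can be sketched as:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def hubble(z, h, H0):
    """H(z) in km/s/Mpc, composing H(t) with t(z) via the cubic for y."""
    B = (h - H0) / (2.0*h)
    A1, A2 = B + 1.0/(1.0 + z), B
    roots = np.roots([1.0, -A1, 0.0, A2])
    # physical branch: the largest positive real root (y = 1 at z = 0);
    # this selection is only assumed valid for moderate z
    y = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    C2 = (h - H0) / (9.0*h**3)
    # H(t) of the previous section rewritten with u = 1 + 3ht/2 = y^{3/2}
    return h*y**-1.5 + 4.5*C2*h**3*(y**-2.5 - 3.0*y**-4.5)

def distance_modulus(z, h, H0=67.0, n=500):
    """mu(z) = 5 log10(d_L / 1 Mpc) + 25, with d_L by trapezoidal integration."""
    zs = np.linspace(0.0, z, n)
    f = C_KM_S / np.array([hubble(zi, h, H0) for zi in zs])
    d_L = (1.0 + z) * (zs[1] - zs[0]) * (f.sum() - 0.5*(f[0] + f[-1]))  # Mpc
    return 5.0*np.log10(d_L) + 25.0
```

For small $z$ this reproduces the Hubble law $d_L \approx cz/H_0$, which provides a convenient sanity check of the whole chain.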
We have in principle one free parameter, $h$, since we use for the Hubble constant the recent value $H_0= 67$ km/s Mpc$^{-1}$ given by the Planck data. It is then possible to fit the experimental data $\mu^\text{obs} (z_i) \pm \Delta \mu(z_i)$, where $\Delta \mu(z_i)$ is the error on the distance modulus of the $i$-th Supernova Ia, by means of a $\chi^2$ analysis with
\begin{equation}
\label{chi2}
\chi^2 = \sum_{i=1}^{557} \left[ \frac{\mu^\text{obs} (z_i) - \mu(z_i, h)} {\Delta \mu(z_i)} \right]^2\, .
\end{equation}
In Fig.~2 we plot the distance modulus vs.\ redshift in the Hubble diagram. Minimizing the $\chi^2$ expression we find the best-fit value $h=(13.9 \pm 0.8)$ km/s Mpc$^{-1}$ with $\chi^2/ \text{d.o.f.} = 1.1$. The best-fit red curve is superimposed on the Union 2 data set (with error bars). It is then interesting to see that the model considered here is able to reproduce the corresponding best-fit results of a homogeneous $\Lambda$CDM model in a Friedmann-Lema\^itre-Robertson-Walker metric, without introducing the cosmological constant.
Moreover, the value we found for $C_2$ is $-(2.2\pm0.4)\times10^{-3}$ (km/s Mpc$^{-1}$)$^{-2}$. It is important to stress that this best-fit value corresponds to a decelerated expansion of the Universe and that, with these values of $h$ and $C_2$, the condition required for the validity of our perturbative expansion is satisfied in the appropriate redshift range.
\section{Conclusion}
The discovery of cosmic acceleration in 1998 came as an intriguing shock to cosmologists. The cosmological constant, dark energy, and the backreaction of inhomogeneities have been invoked within General Relativity. It is also possible to fit supernovae data with different models, for example the LTB models of \cite{Cosmai:2013iga} or the LTB-anisotropic model of the Universe of \cite{Fanizza:2014tua}.
A theory of modified Einstein gravity could be an explanation.
In this research we have investigated a perturbative approach based on a FLRW metric in $f(R)$ gravity and we have studied a model which in general is able to describe data about the supernovae Ia. This model mimics a cosmological evolution consistent with observations.
\\
Among the many forms of the function $f(R)$ present in the literature, here we discuss an expansion of $f(R)$ in a power series in $R$. Capozziello {\it et al.} \cite{capofinale,capofinale2} have introduced an action with a term $f(R) \sim R^n$ and have shown that this leads to an accelerated expansion for $n \simeq 3/2$.
We consider a second-order expansion of the cosmological parameter $a(t)$. We have found an approximate expression, given by eq.~(\ref{eq:solution1}), that contains two parameters but, taking into account the observed value of the Hubble constant, it is possible to reduce ourselves to a single-parameter model, see eq.~(\ref{eq:solution2}). Notice that we used the Hubble parameter's value determined by Planck, in order to (possibly) reduce the tension in the $H_0$ determination between the CMB observations and the SNIa ones. In this context there is no need to introduce a dark energy component in order to explain supernovae data: in fact, fitting the data released by Union2, we have found that the experimental points in the Hubble diagram can be accurately described also by the model presented in this paper. \\
In conclusion, we want to stress that this work offers a possible explanation of the apparent acceleration of the Universe as a consequence of a dynamical approach, in which we consider a perturbed general relativity solution based on a modified gravitational theory, without introducing a cosmological constant and/or dark energy. It is therefore possible that the usually claimed acceleration effect is not the manifestation of an increase of the expansion velocity, but rather the first signal of a gravitational Lagrangian different from the Einstein-Hilbert one. From a philosophical point of view, the core of the problem is that we have a limited number of cosmological tests available to discriminate among the different theories candidate to explain the observed Universe.
\\
Keeping in mind that there are numerous possibilities for $f(R)$, we do not forget that our choice is one of the simplest to consider. No doubt, the details must be more complicated than in the model discussed here. The study of this model may also provide some specific effects that could discriminate among the other possibilities. This will be done in a future work.
\\
\\
\section*{Acknowledgements}
The authors would like to thank M. Gasperini for useful discussions. This work is supported by the research grant ``Theoretical Astroparticle Physics'' No. 2012CPPYP7 under the program PRIN 2012 funded by the Ministero dell'Istruzione, Universit\`a e della Ricerca (MIUR). This work is also supported by the Italian Istituto Nazionale di Fisica Nucleare (INFN) through the ``Theoretical Astroparticle Physics'' (TASP) project.
\section{Introduction}
Our purpose is to prove maximal inequalities for purely discontinuous
martingales taking values in $L_q$ spaces, with bounds expressed
in terms of predictable elements only. In particular, if $\mu$ is a
random measure with compensator $\nu$ and $\bar{\mu}:=\mu-\nu$, the main
result takes the form
\[
\Bigl( \mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}^p_{L_q} \Bigr)^{1/p}
\eqsim_{p,q} \norm[\big]{g}_{\mathcal{I}_{p,q}}
\qquad \forall p,\,q \in \mathopen]1,\infty\mathclose[,
\]
where the integrand $g$ takes values in an $L_q$ space, and the norm
on the right-hand side depends, roughly speaking, on integrals of
functionals of $g$ with respect to $\nu$ only.
Estimates of this type were obtained for the first time, assuming that
$\mu$ is a Poisson random measure, by Dirksen \cite{Dirksen}, using an
abstract (and very elegant) approach. Namely, he first obtains
suitable vector-valued generalizations of Rosenthal's inequality for
sums of independent random variables, and then deduces from them
inequalities for stochastic integrals of step processes with respect
to compensated Poissonian measures by means of decoupling
techniques. His proofs rely on several sophisticated arguments,
pertaining to the interplay between the geometry of Banach spaces and
estimates for series of random variables on them, as well as, as
already mentioned, to (vector-valued extensions of) decoupling
inequalities.
Our approach is completely different and uses essentially only
stochastic calculus for \emph{real} semimartingales. Our interest in
this problem arose while working on \cite{cm:EJP10}, where we derived
a very special case of the above maximal inequality (namely the case
$p=q\geq 2)$, by a rather elementary integration in space of a
corresponding inequality for real-valued processes (apparently) due to
Novikov \cite{Nov:75}. The latter inequality has been known for almost
40 years (cf.~\cite{cm:BJ-surv} for an extensive review and a brief
historical account). However, it seems to be generally believed that this naive approach is doomed to fail if
one wants to estimate the $p$-th moment of the $L_q$-norm, with $p
\neq q$. This is actually one of the main reasons, at least from the
point of view of someone interested in stochastic PDEs, why developing
stochastic integration and, more generally, stochastic calculus on
Banach spaces is a worthy endeavor (see e.g. \cite{vNVW:max} and
references therein for recent developments in this direction, when the
integrator is a Wiener process). One of the main messages of the
present work is that the common belief just described is exaggerated,
in the sense that one can indeed get very far using only pointwise
estimates, integration, and classical results of stochastic
calculus. In a figurative way, thinking of a scale of abstraction levels, one could say that Dirksen's results are obtained ``from
above'', and ours are obtained ``from below''.
\section{Main result}
Let $(X,\mathcal{A},n)$ be a measure space, and denote $L_q$ spaces on
$X$ simply by $L_q$, for any $q \geq 1$.
All random elements will be defined on a fixed filtered probability
space $(\Omega,\mathcal{F},\mathbb{F},\P)$,
$\mathbb{F}:=(\mathcal{F}_t)_{t \geq 0}$. We shall use the notation
$\norm{\cdot}_{\L_p}$ to denote
$\bigl(\mathbb{E}|\cdot|^p\bigr)^{1/p}$. Similarly, given a normed space $E$
and an $E$-valued random variable $\xi$, we shall use the notation
$\norm{\xi}_{\L_p(E)}:=\bigl(\mathbb{E}\norm{\xi}_E^p\bigr)^{1/p}$, sometimes
omitting the parentheses around $E$ if the meaning is clear. A
completely similar convention will be in place also for other
integrals on other spaces. Let $\mu$ be a random measure on $\mathbb{R}_+
\times Z$, $\nu$ be its dual predictable projection (compensator), and
$\bar{\mu}:=\mu-\nu$ be the corresponding compensated random
measure. Integrals over $\mathbb{R}_+ \times Z$ will be denoted simply by
an integration sign. Let $g: \Omega \times \mathbb{R}_+ \times Z \times X
\to \mathbb{R}$ be such that $(\omega,t,z) \mapsto g(\omega,t,z,x)$ is
predictable for each $x$ and $g(\omega,t,z,\cdot) \in L_q$ for all $(\omega,t,z)$.
For any $p_1,p_2,p_3 \in [1,\infty]$, introduce the spaces
\[
L_{p_1,p_2,p_3} := \L_{p_1}L_{p_2}(\nu)L_{p_3},
\qquad
\tilde{L}_{p_1,p_2} := \L_{p_1}L_{p_2}L_2(\nu),
\]
where $L_p(\nu):=L_p(\mathbb{R}_+ \times Z,\nu)$ and the above convention
about $L_p$ spaces with mixed norms is in place. In particular, this
means that, for instance,
\[
\xi \in L_{p_2}(\nu)L_{p_3} \quad \Leftrightarrow \quad
\norm[\big]{\xi}_{L_{p_2}(\nu)L_{p_3}} := \biggl(
\int_{\mathbb{R}_+ \times Z} \norm{\xi}^{p_2}_{L_{p_3}(X)} \,d\nu
\biggr)^{1/p_2} < \infty.
\]
The (proof of the) following theorem, whose original formulation is due
to Dirksen \cite{Dirksen}, is our main result.
\begin{thm} \label{thm:m}
Let $p$, $q \in \mathopen]1,\infty\mathclose[$. One has
\[
\norm[\Big]{\sup_{t \geq 0} \norm{(g \star \bar{\mu})_t}_{L_q}}_{\L_p}
\eqsim_{p,q} \norm{g}_{\mathcal{I}_{p,q}},
\]
where
\begin{equation} \label{eq:ipq}
\mathcal{I}_{p,q} :=
\begin{cases}
L_{p,p,q} + L_{p,q,q} + \tilde{L}_{p,q}, &\quad 1 < p \leq q \leq 2,\\
(L_{p,p,q} \cap L_{p,q,q}) + \tilde{L}_{p,q}, &\quad 1 < q \leq p \leq 2,\\
L_{p,p,q} \cap (L_{p,q,q} + \tilde{L}_{p,q}), &\quad 1 < q < 2 \leq p,\\
L_{p,p,q} + (L_{p,q,q} \cap \tilde{L}_{p,q}), &\quad 1 < p < 2 \leq q,\\
(L_{p,p,q} + L_{p,q,q}) \cap \tilde{L}_{p,q}, &\quad 2 \leq p \leq q,\\
L_{p,p,q} \cap L_{p,q,q} \cap \tilde{L}_{p,q}, &\quad 2 \leq q \leq p.
\end{cases}
\end{equation}
\end{thm}
The proof of the theorem, which follows from a series of lemmata and propositions, is given in Section \ref{sec:p} below.
An explicit description of the norms of the spaces appearing above is
as follows:
\begin{align*}
\norm[\big]{g}_{L_{p,p,q}} &=
\left( \mathbb{E}\int \norm[\big]{g}^p_{L_q}\,d\nu \right)^{1/p},\\
\norm[\big]{g}_{L_{p,q,q}} &=
\left( \mathbb{E}\biggl(\int \norm[\big]{g}^q_{L_q}\,d\nu\biggr)^{p/q} \right)^{1/p},\\
\norm[\big]{g}_{\tilde{L}_{p,q}} &=
\left( \mathbb{E}\norm[\bigg]{\biggl(\int |g|^2\,d\nu\biggr)^{1/2}}^p_{L_q}
\right)^{1/p}.
\end{align*}
\section{Preliminaries and auxiliary results}
The maximal inequalities in the following theorem are known, and their
proofs can be found e.g. in \cite{cm:SEE2,cm:JFA10,cm:BJ-surv}.
\begin{thm} \label{thm:H}
Let $g$ take values in a Hilbert space $H$. Then one has
\begin{align}
\label{eq:sqh}
\mathbb{E} \sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_H^p &\lesssim_p
\mathbb{E} \biggl( \int \norm{g}_H^2 \,d\nu \biggr)^{\frac12 p}
&\qquad \forall p \in \left]0,2\right],\\
\label{eq:mejeto}
\mathbb{E} \sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_H^p &\lesssim_p
\mathbb{E} \int \norm{g}_H^p \,d\nu
&\qquad \forall p \in \left[1,2\right],\\
\label{eq:mejo}
\mathbb{E} \sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_H^p &\lesssim_p
\mathbb{E} \int \norm{g}_H^p \,d\nu
+ \mathbb{E} \biggl(\int \norm{g}_H^2 \,d\nu\biggr)^{\frac12 p}
&\qquad \forall p \in \left[2,\infty\right[.
\end{align}
\end{thm}
\medskip
Throughout the rest of the paper, unless otherwise stated, we shall
use the notation $M:=g \ast \bar{\mu}$, as well as
\[
[M,M]_t := \int_0^t\!\!\!\int_Z |g|^2\,d\mu,
\qquad
\ip{M}{M}_t := \int_0^t\!\!\!\int_Z |g|^2\,d\nu.
\]
Moreover, for notational compactness, we shall write, for any
$L_q$-valued process $Y$,
\[
\norm[\big]{Y^*_\infty}_{\L_pL_q}
:= \Bigl( \mathbb{E}\sup_{t \geq 0} \norm{Y_t}_{L_q}^p \Bigr)^{1/p}.
\]
\medskip
The following estimate plays a crucial role throughout the paper.
\begin{thm} \label{thm:iBDG}
One has, for any $p$, $q \in \mathopen]1,\infty\mathclose[$,
\begin{equation}
\label{eq:iBDG}
\norm[\big]{[M,M]^{1/2}_\infty}_{\L_pL_q} \lesssim_{p,q}
\norm[\big]{M^*_\infty}_{\L_pL_q}
\lesssim_{p,q} \norm[\big]{[M,M]^{1/2}_\infty}_{\L_pL_q}.
\end{equation}
Moreover, the upper bound also holds for $p=q=1$.
\end{thm}
\begin{proof}
The map $M \mapsto [M,M]_\infty^{1/2}$ is sublinear and bounded on
$\L_pL_p$ and on $\L_pL_2$ for all $p \in
\mathopen]1,\infty\mathclose[$. The inequality on the left then
follows by the extension of Riesz-Thorin interpolation due to
Benedek and Panzone \cite{BenPan}, coupled with the linearization
theorem by Janson \cite{Jans:interp}.
To prove the inequality on the right, we use a duality argument: we
have
\[
\norm[\big]{M_\infty}_{\L_pL_q} =
\sup_{\zeta \in B_1(\L_{p'}L_{q'})} \mathbb{E}(M_\infty,\zeta),
\]
where $(\cdot,\cdot)$ stands for the duality form between $L_p$ and
$L_{p'}$. Now take a martingale $N$ with final value
$N_\infty=\zeta$, and note
\begin{align*}
\mathbb{E}(M_\infty,\zeta) &= \mathbb{E}(M_\infty,N_\infty) = \mathbb{E}[M,N]_\infty\\
&\leq \norm[\big]{[M,M]^{1/2}_\infty}_{\L_pL_q}
\norm[\big]{[N,N]^{1/2}_\infty}_{\L_{p'}L_{q'}}\\
&\lesssim_{p,q} \norm[\big]{[M,M]^{1/2}_\infty}_{\L_pL_q}
\norm[\big]{N_\infty}_{\L_{p'}L_{q'}}
\leq \norm[\big]{[M,M]^{1/2}_\infty}_{\L_pL_q},
\end{align*}
from which the second inequality follows immediately.
\end{proof}
\medskip
The following simple estimate will be used repeatedly.
\begin{lemma} \label{lm:stimette}
Let $(X,\mathcal{A},m)$ be a measure space and $p > r \geq 1$. If $f
\in L_r(X,m) \cap L_p(X,m)$, then $f \in L_q(X,m)$ for all $q \in
\mathopen]r,p\mathclose[$, and
\[
\norm{f}^\alpha_{L_q} \leq \norm{f}^\alpha_{L_r} + \norm{f}^\alpha_{L_p}
\qquad \forall \alpha>0.
\]
\end{lemma}
\begin{proof}
Let $\theta \in \mathopen]0,1\mathclose[$ be such that $\frac{1}{q}=\frac{\theta}{r}+\frac{1-\theta}{p}$. By
Lyapunov's inequality one has, after raising to the power $\alpha$,
\[
\norm{f}^\alpha_{L_q} \leq \norm{f}_{L_r}^{\alpha\theta} \,
\norm{f}_{L_p}^{\alpha(1-\theta)}.
\]
Young's inequality $ab \leq a^s/s + b^{s'}/s'$ with conjugate
exponents $s=1/\theta$ and $s'=1/(1-\theta)$ yields
\[
\norm{f}^\alpha_{L_q} \leq \theta \norm{f}_{L_r}^\alpha +
(1-\theta) \norm{f}_{L_p}^{\alpha}
\leq \norm{f}_{L_r}^\alpha + \norm{f}_{L_p}^{\alpha}.
\qedhere
\]
\end{proof}
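A quick numerical illustration of the lemma (ours; the weights simulate an arbitrary finite measure $m$ on finitely many atoms):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(200) * 2.0     # values of |f| on 200 atoms
w = rng.random(200) * 5.0     # atom masses of an arbitrary finite measure m

def lp_norm(f, weights, s):
    """Weighted L_s norm with respect to the discrete measure given by weights."""
    return float(np.sum(weights * f**s))**(1.0/s)

r, p, alpha = 1.5, 6.0, 0.8
for q in np.linspace(r, p, 7):
    assert lp_norm(x, w, q)**alpha <= lp_norm(x, w, r)**alpha + lp_norm(x, w, p)**alpha + 1e-12
```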
\medskip
We shall also use several times the following inequality between norms
of functions in $L_p$-spaces with mixed norms, which sometimes goes under
the name of H\"older-Minkowski's inequality:
\begin{equation} \label{eq:HM}
\norm[\big]{f}_{L_p(L_q)} \leq \norm[\big]{f}_{L_q(L_p)}
\qquad \forall p \geq q.
\end{equation}
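A finite (counting-measure) illustration of \eqref{eq:HM} (our sketch, on a random nonnegative matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((50, 40))   # f(i, j) >= 0 on a product of two finite sets
p, q = 4.0, 2.0            # p >= q

# ||f||_{L_p(L_q)}: inner L_q norm in j, outer L_p norm in i
lhs = np.sum(np.sum(f**q, axis=1)**(p/q))**(1.0/p)
# ||f||_{L_q(L_p)}: inner L_p norm in i, outer L_q norm in j
rhs = np.sum(np.sum(f**p, axis=0)**(q/p))**(1.0/q)
assert lhs <= rhs + 1e-12
```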
\section{Proof of the main result} \label{sec:p}
It is enough to prove only the upper bounds
\[
\norm[\big]{(g \ast \bar{\mu})_\infty^*}_{\L_pL_q}
\lesssim_{p,q} \norm[\big]{g}_{\mathcal{I}_{p,q}},
\]
as the lower bounds will follow by duality (in fact, the dual of
$\mathcal{I}_{p,q}$ is $\mathcal{I}_{p',q'}$ -- cf.~\cite{Dirksen}).
It should be noted that in many cases we prove in fact more general
results than those needed to obtain the above upper bound.
\subsection{Case $1 < p \leq q \leq 2$}
The upper bound in Theorem \ref{thm:m} with parameters $p$ and $q$
such that $1 < p \leq q \leq 2$ is a consequence of the next three
Propositions.
\begin{prop} \label{prop:uno}
Let $1 \leq q \leq 2$, $0 < p \leq q$. One has
\begin{equation*}
\mathbb{E}\sup_{t\geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\biggl( \int
\norm[\big]{g}_{L_q}^q\,d\nu \biggr)^{p/q}.
\end{equation*}
\end{prop}
\begin{proof}
Inequality \eqref{eq:mejeto} with exponent $1 \leq q \leq
2$ and $H=\mathbb{R}$ yields
\[
\mathbb{E}\sup_{t \geq 0} \big\lvert (g \star \bar{\mu})_t \big\rvert^q \lesssim_q
\mathbb{E}\int |g|^q\,d\nu,
\]
hence also, by Fatou's lemma and Tonelli's theorem,
\[
\mathbb{E}\sup_{t\geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^q \lesssim_q
\mathbb{E} \int \norm[\big]{g}_{L_q}^q\,d\nu.
\]
Let $T$ be any stopping time. Replacing $g$ with
$g\mathbf{1}_{[0,T]}(t)$, the previous inequality implies
\[
\mathbb{E}\sup_{t\leq T} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^q \lesssim_q
\mathbb{E} \int_0^T\!\!\!\int_Z \norm[\big]{g}_{L_q}^q\,d\nu.
\]
Lenglart's domination inequality finally gives
\[
\mathbb{E}\sup_{t\geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\biggl( \int \norm[\big]{g}_{L_q}^q\,d\nu \biggr)^{p/q}
\]
for any $0 < p < q$.
\end{proof}
\begin{rmk}
The localization step spelled out in the previous proof, which is
needed to apply Lenglart's domination inequality, will be implicitly
assumed in the proofs to come.
\end{rmk}
\begin{prop}
Let $0 < p \leq q \leq 2$. One has
\begin{equation*}
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\norm[\bigg]{\biggl(\int |g|^2\,d\nu\biggr)^{1/2}}_{L_q}^p.
\end{equation*}
\end{prop}
\begin{proof}
Inequality \eqref{eq:sqh} with exponent $q \in
\mathopen]0,2\mathclose]$ and $H=\mathbb{R}$ yields
\[
\mathbb{E} \sup_{t \geq 0} \big\lvert (g \star \bar{\mu})_t \big\rvert^q \lesssim_q
\mathbb{E}\Bigl( \int |g|^2\,d\nu \Bigr)^{q/2}.
\]
Integrating over $X$, taking into account Fatou's lemma and
Tonelli's theorem, one obtains
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^q
\lesssim_q
\mathbb{E}\norm[\bigg]{\biggl(\int |g|^2\,d\nu\biggr)^{1/2}}_{L_q}^q,
\]
which in turn yields, appealing to Lenglart's domination
inequality,
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\norm[\bigg]{\biggl(\int |g|^2\,d\nu\biggr)^{1/2}}_{L_q}^p.
\qedhere
\]
\end{proof}
\begin{prop} \label{prop:13}
Let $1 < p \leq q \leq 2$. One has
\begin{equation*}
\mathbb{E}\sup_{t\geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\int \norm[\big]{g}_{L_q}^p\,d\nu.
\end{equation*}
\end{prop}
\begin{proof}
By Theorem \ref{thm:iBDG}, applied to the martingale $M := g \star \bar{\mu}$, one has
\[
\norm[\big]{(g \star \bar{\mu})^*_\infty}_{\L_pL_q} \lesssim_{p,q}
\norm[\big]{[M,M]_\infty^{1/2}}_{\L_pL_q} =
\norm[\big]{\norm{\Delta M}_{\ell_2}}_{\L_pL_q} =
\norm[\big]{\Delta M}_{\L_pL_q\ell_2}.
\]
Since $p \leq 2$, one has $\norm{\Delta M}_{\ell_2} \leq
\norm{\Delta M}_{\ell_p}$, hence, by inequality \eqref{eq:HM},
\[
\norm[\big]{\Delta M}^p_{\L_pL_q\ell_2} \leq
\norm[\big]{\Delta M}^p_{\L_pL_q\ell_p} \leq
\norm[\big]{\Delta M}^p_{\L_p\ell_pL_q} =
\mathbb{E} \int \norm{g}_{L_q}^p\,d\mu = \mathbb{E} \int \norm{g}_{L_q}^p\,d\nu.
\qedhere
\]
\end{proof}
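The elementary embedding used in the last display, $\norm{x}_{\ell_2} \leq \norm{x}_{\ell_p}$ for $p \leq 2$, can also be checked numerically on random finite sequences. This is only an illustrative sketch outside the argument; the helper \texttt{lp\_norm} is ours.

```python
import random

def lp_norm(xs, p):
    """l_p norm of a finite real sequence (a stand-in for the jump sequence)."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

random.seed(0)
for _ in range(100):
    xs = [random.uniform(-5.0, 5.0) for _ in range(random.randint(1, 20))]
    for p in (1.0, 1.3, 1.7, 2.0):
        # monotonicity of l_p norms: ||x||_{l_2} <= ||x||_{l_p} whenever p <= 2
        assert lp_norm(xs, 2.0) <= lp_norm(xs, p) + 1e-9
```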
\subsection{Case $1 < q \leq p \leq 2$}
The upper bound in Theorem \ref{thm:m} with parameters $p$ and $q$
such that $1 < q \leq p \leq 2$ is a consequence of the next two
Propositions.
\begin{prop} \label{lm:34}
Let $1 < q \leq p \leq 2$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\int \norm[\big]{g}_{L_q}^p\,d\nu
+ \mathbb{E}\biggl(\int \norm[\big]{g}_{L_q}^q\,d\nu \biggr)^{p/q}.
\]
\end{prop}
\begin{proof}
Appealing to Theorem \ref{thm:iBDG}, one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\norm[\big]{[M,M]_\infty^{1/2}}_{L_q}^p =
\norm[\big]{\Delta M}_{\L_pL_q\ell_2}^p,
\]
where, since $q \leq 2$,
\[
\norm[\big]{\Delta M}_{\L_pL_q\ell_2}^p \leq
\norm[\big]{\Delta M}_{\L_pL_q\ell_q}^p =
\norm[\big]{\Delta M}_{\L_p\ell_qL_q}^p =
\mathbb{E}\biggl(\int \norm[\big]{g}_{L_q}^q\,d\mu \biggr)^{p/q}.
\]
Writing
\[
\int \norm[\big]{g}_{L_q}^q\,d\mu = \int \norm[\big]{g}_{L_q}^q\,d\bar{\mu}
+ \int \norm[\big]{g}_{L_q}^q\,d\nu,
\]
the previous two inequalities yield
\begin{align*}
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p
&\lesssim_{p,q} \mathbb{E}\biggl|\int \norm[\big]{g}_{L_q}^q\,d\bar{\mu} \biggr|^{p/q}
+ \mathbb{E}\biggl(\int \norm[\big]{g}_{L_q}^q\,d\nu \biggr)^{p/q}\\
&\lesssim \mathbb{E}\int \norm[\big]{g}_{L_q}^p\,d\nu
+ \mathbb{E}\biggl(\int \norm[\big]{g}_{L_q}^q\,d\nu \biggr)^{p/q},
\end{align*}
where we have used inequality \eqref{eq:mejo} with exponent $p/q
\leq 2$.
\end{proof}
\begin{prop} \label{prop:rifatta}
Let $1 < q \leq p \leq 2$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}_{L_q}^p.
\]
\end{prop}
For the proof we need the following estimate.
\begin{lemma}
Let $2 \leq p \leq q$. Then
\[
\norm[\big]{\ip{M}{M}_\infty^{1/2}}_{\L_pL_q} \lesssim_{p,q}
\norm[\big]{M_\infty}_{\L_pL_q}.
\]
\end{lemma}
\begin{proof}
One has
\begin{align*}
\mathbb{E}\norm[\big]{\ip{M}{M}_\infty^{1/2}}^p_{L_q} &=
\mathbb{E}\norm[\big]{\ip{M}{M}_\infty}^{p/2}_{L_{q/2}}\\
&= \mathbb{E} \norm[\bigg]{\int |g|^2\,d\nu}^{p/2}_{L_{q/2}}
\lesssim \mathbb{E} \norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}}
+ \mathbb{E} \norm[\bigg]{\int |g|^2\,d\mu}^{p/2}_{L_{q/2}},
\end{align*}
where, thanks to Theorem \ref{thm:iBDG} and Doob's inequality,
\[
\mathbb{E} \norm[\bigg]{\int |g|^2\,d\mu}^{p/2}_{L_{q/2}} =
\mathbb{E} \norm[\big]{[M,M]_\infty}^{p/2}_{L_{q/2}} =
\mathbb{E} \norm[\big]{[M,M]^{1/2}_\infty}^p_{L_q} \lesssim_{p,q}
\mathbb{E} \norm[\big]{M_\infty}^p_{L_q}.
\]
Moreover, setting $N:=|g|^2 \star \bar{\mu}$, one has, again by Theorem
\ref{thm:iBDG},
\begin{align*}
\mathbb{E} \norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}} &\lesssim_{p,q}
\mathbb{E} \norm[\big]{[N,N]_\infty^{1/2}}^{p/2}_{L_{q/2}}
= \mathbb{E} \norm[\bigg]{\biggl( \int |g|^4\,d\mu \biggr)^{1/2}}^{p/2}_{L_{q/2}}\\
&= \mathbb{E} \norm*{\Bigl(\sum \norm{\Delta M}^4\Bigr)^{1/2}}^{p/2}_{L_{q/2}}
= \mathbb{E} \norm*{\norm[\big]{\Delta M}_{\ell_4}^2}^{p/2}_{L_{q/2}}\\
&= \mathbb{E} \norm[\Big]{\norm[\big]{\Delta M}_{\ell_4}}^p_{L_q}
\leq \mathbb{E} \norm[\Big]{\norm[\big]{\Delta M}_{\ell_2}}^p_{L_q}\\
&= \mathbb{E} \norm[\big]{[M,M]_\infty^{1/2}}^p_{L_q}
\lesssim_{p,q} \mathbb{E} \norm[\big]{M_\infty}^p_{L_q}.
\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:rifatta}]
We use a duality argument: let us write
\[
\norm[\big]{M_\infty}_{\L_pL_q} =
\sup_{N_\infty\in B_1} \mathbb{E} \int_X M_\infty N_\infty,
\]
where $B_1$ stands for the unit ball of $\L_{p'}L_{q'}$. Introduce
the martingale $N$ defined by $N_t=\mathbb{E}[N_\infty|\mathcal{F}_t]$ for
all $t \geq 0$. The identity $\mathbb{E} M_\infty N_\infty = \mathbb{E}
\ip{M}{N}_\infty$, Kunita-Watanabe's inequality, H\"older's
inequality, and the previous Lemma imply
\begin{align*}
\mathbb{E}\int_X M_\infty N_\infty &\leq \norm[\big]{\ip{M}{M}_\infty^{1/2}}_{\L_pL_q}
\, \norm[\big]{\ip{N}{N}_\infty^{1/2}}_{\L_{p'}L_{q'}}\\
&\lesssim_{p,q} \norm[\big]{\ip{M}{M}_\infty^{1/2}}_{\L_pL_q} \,
\norm[\big]{N_\infty}_{\L_{p'}L_{q'}}
\leq \norm[\big]{\ip{M}{M}_\infty^{1/2}}_{\L_pL_q},
\end{align*}
whence the conclusion, because $\norm[\big]{M^*_\infty}_{\L_pL_q}
\lesssim_p \norm[\big]{M_\infty}_{\L_pL_q}$ by Doob's inequality, and
\[
\ip{M}{M}_\infty = \int |g|^2\,d\nu.
\qedhere
\]
\end{proof}
\subsection{Case $1 < p \leq 2 \leq q$}
The upper bound in Theorem \ref{thm:m} with parameters $p$ and $q$
such that $1 < p \leq 2 \leq q$ follows by the next two Propositions.
\begin{prop} \label{prop:bella}
Let $q \geq 2$, $0 < p \leq q$. Then one has
\[
\mathbb{E} \sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p
\lesssim_{p,q} \mathbb{E} \biggl( \int \norm{g}_{L_q}^q \,d\nu \biggr)^{p/q}
+ \mathbb{E} \norm[\bigg]{\biggl( \int |g|^2 \,d\nu \biggr)^{1/2}}^p_{L_q}.
\]
\end{prop}
\begin{proof}
Inequality \eqref{eq:mejo}, with exponent $q \geq 2$ and $H=\mathbb{R}$, and
integration over $X$ yield
\begin{equation} \label{eq:mejo_Lp}
\mathbb{E} \sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^q
\lesssim_q \mathbb{E} \int \norm{g}_{L_q}^q \,d\nu
+ \mathbb{E} \norm[\bigg]{\biggl( \int |g|^2 \,d\nu \biggr)^{1/2}}^q_{L_q},
\end{equation}
therefore, by Lenglart's domination inequality,
\begin{equation*}
\mathbb{E} \sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p
\lesssim_{p,q} \mathbb{E} \biggl( \int \norm{g}_{L_q}^q \,d\nu \biggr)^{p/q}
+ \mathbb{E} \norm[\bigg]{\biggl( \int |g|^2 \,d\nu \biggr)^{1/2}}^p_{L_q}.
\qedhere
\end{equation*}
\end{proof}
\begin{prop} \label{prop:due}
Let $1 < p \leq 2 \leq q$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\int \norm{g}^p_{L_q}\,d\nu.
\]
\end{prop}
\begin{proof}
By Theorem \ref{thm:iBDG} and H\"older-Minkowski's inequality
\eqref{eq:HM}, one has
\[
\norm[\big]{(g \star \bar{\mu})^*_\infty}_{\L_pL_q} \lesssim_{p,q}
\norm[\big]{[M,M]_\infty^{1/2}}_{\L_pL_q} =
\norm[\big]{\Delta M}_{\L_pL_q\ell_2}
\leq \norm[\big]{\Delta M}_{\L_p\ell_2L_q}
\leq \norm[\big]{\Delta M}_{\L_p\ell_pL_q},
\]
where
\[
\norm[\big]{\Delta M}_{\L_p\ell_pL_q}^p =
\mathbb{E}\sum \norm[\big]{\Delta M}_{L_q}^p =
\mathbb{E}\int \norm[\big]{g}_{L_q}^p\,d\mu =
\mathbb{E}\int \norm[\big]{g}_{L_q}^p\,d\nu.
\qedhere
\]
\end{proof}
\subsection{Case $1 < q \leq 2 \leq p$}
The upper bound in Theorem \ref{thm:m} with parameters $p$ and $q$
such that $1 < q \leq 2 \leq p$ is a consequence of the next two
Propositions.
\begin{prop} \label{lm:38}
Let $1 < q \leq 2 \leq p$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\biggl( \int \norm{g}_{L_q}^q\,d\nu \biggr)^{p/q}
+ \mathbb{E} \int \norm{g}_{L_q}^p\,d\nu.
\]
\end{prop}
\begin{proof}
Proceeding as in the proof of Proposition \ref{lm:34}, one obtains
\[
\mathbb{E}\norm[\big]{(g \star \bar{\mu})_\infty^*}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\biggl( \int \norm[\big]{g}_{L_q}^q \,d\mu \biggr)^{p/q}
\lesssim \mathbb{E}\biggl| \int \norm[\big]{g}_{L_q}^q \,d\bar{\mu} \biggr|^{p/q}
+ \mathbb{E}\biggl( \int \norm[\big]{g}_{L_q}^q \,d\nu \biggr)^{p/q}.
\]
If $p \leq 2q$, i.e. if $p/q \leq 2$, the second inequality in
Theorem \ref{thm:H} (with $H=\mathbb{R}$) yields
\[
\mathbb{E}\biggl| \int \norm[\big]{g}_{L_q}^q \,d\bar{\mu} \biggr|^{p/q}
\lesssim_{p,q} \mathbb{E}\int \norm[\big]{g}_{L_q}^p \,d\nu,
\]
as desired. Otherwise, if $p > 2q$, i.e. if $p/q>2$, the third
inequality in Theorem \ref{thm:H} (with $H=\mathbb{R}$) yields
\begin{align*}
\mathbb{E}\biggl| \int \norm[\big]{g}_{L_q}^q \,d\bar{\mu} \biggr|^{p/q}
&\lesssim \mathbb{E}\int \norm[\big]{g}_{L_q}^p \,d\nu
+ \mathbb{E}\biggl( \int \norm[\big]{g}_{L_q}^{2q}\,d\nu \biggr)^{p/(2q)}\\
&= \mathbb{E}\int \norm[\big]{g}_{L_q}^p \,d\nu
+ \mathbb{E}\norm*{\norm[\big]{g}_{L_q}}^p_{L_{2q}(\nu)}.
\end{align*}
Since we have $q < 2q < p$, by Lemma \ref{lm:stimette} one
immediately gets
\begin{align*}
\mathbb{E}\norm*{\norm[\big]{g}_{L_q}}^p_{L_{2q}(\nu)} &\leq
\mathbb{E}\norm*{\norm[\big]{g}_{L_q}}^p_{L_{q}(\nu)}
+ \mathbb{E}\norm*{\norm[\big]{g}_{L_q}}^p_{L_{p}(\nu)}\\
&= \mathbb{E}\biggl( \int \norm[\big]{g}_{L_q}^q \,d\nu \biggr)^{p/q}
+ \mathbb{E}\int \norm[\big]{g}_{L_q}^p \,d\nu,
\end{align*}
and the proof is concluded.
\end{proof}
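The interpolation bound of Lemma \ref{lm:stimette} invoked above, $\norm{g}_{L_r(\nu)} \leq \norm{g}_{L_q(\nu)} + \norm{g}_{L_p(\nu)}$ for $q < r < p$, is a consequence of Lyapunov's inequality $\norm{g}_{L_r} \leq \norm{g}_{L_q}^{\theta}\norm{g}_{L_p}^{1-\theta}$. The following numerical illustration on simple functions is a sketch outside the argument; the helper \texttt{L\_norm} is ours.

```python
import random

def L_norm(gs, ws, r):
    """L_r(nu) norm of a simple function: values gs on atoms of nu-mass ws."""
    return sum(w * abs(g) ** r for g, w in zip(gs, ws)) ** (1.0 / r)

random.seed(1)
q, r, p = 2.0, 4.0, 5.5  # any exponents with q < r < p
for _ in range(200):
    n = random.randint(1, 15)
    gs = [random.uniform(-10.0, 10.0) for _ in range(n)]
    ws = [random.uniform(0.01, 3.0) for _ in range(n)]
    # interpolation bound: ||g||_{L_r} <= ||g||_{L_q} + ||g||_{L_p}
    assert L_norm(gs, ws, r) <= L_norm(gs, ws, q) + L_norm(gs, ws, p) + 1e-9
```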
\begin{prop} \label{prop:q2p}
Let $1 \leq q \leq 2 \leq p$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}_{L_q}^p
+ \mathbb{E} \int \norm{g}_{L_q}^p\,d\nu.
\]
\end{prop}
For the proof we need the following result by Lenglart, Lepingle and
Pratelli (see \cite[Lemma 1.1]{LeLePr}).
\begin{lemma}
Let $A$ and $B$ be increasing adapted processes. If there exist
$r>0$ and $\alpha>0$ such that
\[
\mathbb{E}(A_{T-}-A_{S-})^r \leq \alpha \mathbb{E} B_{T-}^r \mathbf{1}_{\{S<T\}}
\]
for all stopping times $S$ and $T$ such that $S<T$, then one has,
for any moderate function $F$,
\[
\mathbb{E} F(A_\infty) \lesssim_{r,\alpha,F} \mathbb{E} F(B_\infty).
\]
\end{lemma}
\begin{proof}[Proof of Proposition \ref{prop:q2p}]
We shall proceed in several steps.
\textsc{Step 1.} We introduce the Davis decomposition of $M:=g \star
\bar{\mu}$: define the real-valued process $S_t:=\sup_{s \leq t}
\norm{\Delta M_s}_{L_q}$, and set
\[
K^1_t := \sum_{s \leq t} \Delta M_s \mathbf{1}_{\{\norm{\Delta M_s}_{L_q} > 2 S_{s-}\}},
\qquad K^2:= \widetilde{K^1}, \qquad K:=K^1-K^2.
\]
Then there exists an $L_q$-valued predictable process $g'$ such that
$K^1 = g' \star \mu$, hence $K^2 = g' \star \nu$ and $K= g' \star
\bar{\mu}$. Now set $L=M-K = (g-g') \star \bar{\mu}$.
\smallskip
\textsc{Step 2.} Denoting the total variation by
$\norm{\cdot}_{TV}$, one has
\[
M^*_\infty \leq K^*_\infty + L^*_\infty \leq \norm{K}_{TV} + L^*_\infty,
\]
hence, for any $p \geq 1$,
\[
\mathbb{E}\sup_{t \geq 0} \norm{M_t}_{L_q}^p \lesssim_p \mathbb{E}\norm{K}^p_{TV(L_q)}
+ \mathbb{E}\sup_{t \geq 0} \norm{L_t}_{L_q}^p,
\]
where
\[
\mathbb{E}\norm{K}^p_{TV} \lesssim_p \mathbb{E}\norm{K^1}^p_{TV}
+ \mathbb{E}\norm{\widetilde{K^1}}^p_{TV}
\lesssim \mathbb{E}\norm{K^1}^p_{TV},
\]
and $\norm{K^1}_{TV(L_q)} \lesssim S_\infty$ implies
\[
\mathbb{E}\norm{K}^p_{TV} \lesssim_p \mathbb{E} S_\infty^p.
\]
\smallskip
\textsc{Step 3.} Let $S$, $T$ be any stopping times. Setting
\[
(L^{S,T})_t := \bigl(L_{(S+t) \wedge T} - L_{S-}\bigr) \mathbf{1}_{\{S<T\}},
\]
we are going to show that
\[
L_{T-}^* - L_{S-}^* \leq (L^{S,T})_\infty^* \mathbf{1}_{\{S<T\}}.
\]
Since $t \mapsto L^*_t$ is increasing and the right-hand side is
nonnegative, we can assume $S<T$ without loss of generality. If
$L^*_{T-}=L^*_{S-}$, the inequality is obviously true, hence we can
assume $L^*_{T-} > L^*_{S-}$, thus also
\[
L^*_{T-} \leq \sup_{S \leq s \leq T} \norm{L_s}_{L_q}.
\]
We therefore have, writing $\norm{\cdot}$ instead of
$\norm{\cdot}_{L_q}$ for compactness of notation,
\begin{align*}
L^*_{T-} - L^*_{S-} &\leq
\bigl( \sup_{s\in[S,T]} \norm{L_s} - \sup_{s < S} \norm{L_s} \bigr)
\mathbf{1}_{\{S<T\}}\\
&\leq \bigl( \sup_{s\in[S,T]} \norm{L_s} - \norm{L_{S-}} \bigr)
\mathbf{1}_{\{S<T\}}\\
&\leq \bigl( \sup_{s \in [S,T]} \norm{L_s - L_{S-}}\bigr)
\mathbf{1}_{\{S<T\}}
= (L^{S,T})_\infty^* \mathbf{1}_{\{S<T\}},
\end{align*}
where we have used the obvious estimates $\sup_{s<S} \norm{L_s} \geq
\norm{L_{S-}}$ and $\norm{x}-\norm{y} \leq \norm{x-y}$.
\smallskip
\textsc{Step 4.} The previous step immediately implies
\[
\mathbb{E} \bigl( L_{T-}^* - L_{S-}^* \bigr)^q
\lesssim \mathbb{E} \Bigl[ \bigl( (L^{S,T})_\infty^* \bigr)^q \mathbf{1}_{\{S<T\}} \Bigr].
\]
By Theorem \ref{thm:iBDG} we have
\[
\mathbb{E} \Bigl[ \bigl( (L^{S,T})_\infty^* \bigr)^q \mathbf{1}_{\{S<T\}} \Bigr] \lesssim_q
\mathbb{E} \norm[\big]{[L^{S,T},L^{S,T}]_\infty^{1/2}}_{L_q}^q,
\]
where
\[
[L^{S,T},L^{S,T}]_\infty \leq [L,L]_T + |\Delta L_S|^2
\leq [L,L]_{T-} + |\Delta L_S|^2 + |\Delta L_T|^2,
\]
hence
\[
\norm[\big]{[L^{S,T},L^{S,T}]_\infty^{1/2}}_{L_q} \leq
\norm[\big]{[L,L]_{T-}^{1/2}}_{L_q} + \norm{\Delta L_S}_{L_q}
+ \norm{\Delta L_T}_{L_q}
\lesssim \norm[\big]{[L,L]_{T-}^{1/2}}_{L_q} + S_{T-},
\]
therefore also, recalling that $q \leq 2$,
\[
\mathbb{E} \bigl( L_{T-}^* - L_{S-}^* \bigr)^q \lesssim_q
\mathbb{E} \norm[\big]{[L,L]_{T-}^{1/2}}_{L_q}^q + \mathbb{E} S_{T-}^q \lesssim_q
\mathbb{E} \norm[\big]{\ip{L}{L}_{T-}^{1/2}}_{L_q}^q + \mathbb{E} S_{T-}^q.
\]
We can now apply the previous Lemma to obtain
\[
\mathbb{E} (L^*_\infty)^p \lesssim_{p,q}
\mathbb{E} \norm[\big]{\ip{L}{L}_\infty^{1/2}}_{L_q}^p + \mathbb{E} S_\infty^p.
\]
\smallskip
\textsc{Step 5.} Recalling the estimate on $K$, we are left with
\[
\mathbb{E} (M^*_\infty)^p \lesssim
\mathbb{E} \norm[\big]{\ip{L}{L}_\infty^{1/2}}_{L_q}^p + \mathbb{E} S_\infty^p,
\]
where, since $L=(g-g') \star \bar{\mu}$ and $|g'| \leq |g|$ pointwise (in
fact by construction $g'$ represents some jumps of $M$ only),
\[
\ip{L}{L}_\infty = \int |g-g'|^2\,d\nu \lesssim \int |g|^2\,d\nu,
\]
and
\[
\mathbb{E} S_\infty^p = \mathbb{E} \sup_{s\geq 0} \norm{\Delta M_s}^p_{L_q}
\leq \mathbb{E} \sum \norm{\Delta M}^p_{L_q}
= \mathbb{E} \int \norm{g}_{L_q}^p\,d\mu
= \mathbb{E} \int \norm{g}_{L_q}^p\,d\nu.
\qedhere
\]
\end{proof}
\subsection{Case $2 \leq p \leq q$}
The upper bound in Theorem \ref{thm:m} with parameters $p$ and $q$
such that $2 \leq p \leq q$ follows by Proposition \ref{prop:bella}
and by the next Proposition.
\begin{prop}
Let $2 \leq p \leq q$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}^p_{L_q} \lesssim_{p,q}
\mathbb{E}\int \norm[\big]{g}_{L_q}^p\,d\nu
+ \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu\biggr)^{1/2}}^p_{L_q}.
\]
\end{prop}
\begin{proof}
Recalling Theorem \ref{thm:iBDG}, one has
\[
\mathbb{E}\norm[\big]{(g \star \bar{\mu})^*_\infty}^p_{L_q} \lesssim_{p,q}
\mathbb{E}\norm[\big]{[M,M]^{1/2}_\infty}^p_{L_q}
= \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\mu \biggr)^{1/2}}^p_{L_q}.
\]
If $p=2$, one has
\begin{align*}
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\mu \biggr)^{1/2}}^2_{L_q}
&= \mathbb{E}\norm[\bigg]{\int |g|^2\,d\mu }_{L_{q/2}}\\
&\leq \mathbb{E}\int \norm[\big]{|g|^2}_{L_{q/2}}\,d\mu
= \mathbb{E}\int \norm[\big]{g}^2_{L_q}\,d\mu
= \mathbb{E}\int \norm[\big]{g}^2_{L_q}\,d\nu,
\end{align*}
that is, the claim is proved in the case $p=2$.
Therefore we assume, for the rest of the proof, that $p>2$. One has
\[
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\mu \biggr)^{1/2}}^p_{L_q}
\leq \mathbb{E}\norm[\bigg]{\bigg\lvert \int |g|^2\,d\bar{\mu} \bigg\rvert^{1/2}}^p_{L_q}
+ \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}^p_{L_q},
\]
where
\[
\mathbb{E}\norm[\bigg]{\bigg\lvert \int |g|^2\,d\bar{\mu} \bigg\rvert^{1/2}}^p_{L_q}
= \mathbb{E}\norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}}.
\]
If $2<p \leq 4$, i.e. if $1 < p/2 \leq 2$, Propositions
\ref{prop:13} and \ref{prop:due} imply that
\[
\mathbb{E}\norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}}
\lesssim_{p,q} \mathbb{E} \int \norm[\big]{\,|g|^2\,}^{p/2}_{L_{q/2}}\,d\nu
= \mathbb{E}\int \norm[\big]{g}^p_{L_q}\,d\nu.
\]
We have thus proved that the claim of the Proposition is true for
any $p$, $q$ such that $2 < p \leq 4$ and $q \geq p$. We now proceed
by induction, i.e. we assume the claim is true for $2 < p \leq 2^n$
(which we have just verified for $n=2$), and we show that the claim
is true for $2 < p \leq 2^{n+1}$. In fact, we have just seen that
\[
\mathbb{E}\norm[\big]{(g \star \bar{\mu})^*_\infty}^p_{L_q} \lesssim_{p,q}
\mathbb{E}\norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}}
+ \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}^p_{L_q},
\]
where, since $p/2 \leq 2^n$, the inductive assumption implies
\begin{align*}
\mathbb{E}\norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}} &\lesssim_{p,q}
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^4\,d\nu \biggr)^{1/2}}^{p/2}_{L_{q/2}}
+ \mathbb{E}\int \norm[\big]{|g|^2}^{p/2}_{L_{q/2}}\,d\nu\\
&= \mathbb{E}\norm[\bigg]{\biggl( \int |g|^4\,d\nu \biggr)^{1/4}}^p_{L_q}
+ \mathbb{E}\int \norm[\big]{g}^p_{L_q}\,d\nu.
\end{align*}
Note that, by Lemma \ref{lm:stimette}, since $4 < p$, one has
\[
\biggl(\int |g|^4\,d\nu \biggr)^{1/2} = \norm[\big]{g}_{L_4(\nu)}
\leq \norm[\big]{g}_{L_2(\nu)} + \norm[\big]{g}_{L_p(\nu)},
\]
hence
\[
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^4\,d\nu \biggr)^{1/4}}^p_{L_q}
\lesssim
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}^p_{L_q}
+ \mathbb{E} \norm[\Big]{\norm[\big]{g}_{L_p(\nu)}}^p_{L_q}.
\]
The proof is finished by observing that, since $q \geq p$,
H\"older-Minkowski's inequality \eqref{eq:HM} yields
\[
\mathbb{E} \norm[\Big]{\norm[\big]{g}_{L_p(\nu)}}^p_{L_q} \leq
\mathbb{E} \norm[\Big]{ \norm[\big]{g}_{L_q} }^p_{L_p(\nu)}
= \mathbb{E} \int \norm[\big]{g}^p_{L_q}\,d\nu.
\qedhere
\]
\end{proof}
\subsection{Case $2 \leq q \leq p$}
The upper bound in Theorem \ref{thm:m} with parameters $p$ and $q$
such that $2 \leq q \leq p$ is proved in the following
Proposition.
\begin{prop}
Let $2 \leq q \leq p$. Then one has
\[
\mathbb{E}\sup_{t \geq 0} \norm[\big]{(g \star \bar{\mu})_t}_{L_q}^p \lesssim_{p,q}
\mathbb{E}\int \norm[\big]{g}^p_{L_q}\,d\nu
+ \mathbb{E}\biggl( \int \norm[\big]{g}^q_{L_q}\,d\nu \biggr)^{p/q}
+ \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}^p_{L_q}.
\]
\end{prop}
\begin{proof}
The claim is certainly true if $q=2$, by Theorem \ref{thm:H}. We
shall therefore assume $q>2$ from now on. In view of Theorem
\ref{thm:iBDG}, it is enough to estimate the $\L_pL_q$ norm of
$[M,M]_\infty^{1/2}$. We use once again the decomposition
\[
[M,M]_\infty^{1/2} = \biggl( \int |g|^2\,d\mu \biggr)^{1/2}
\leq \biggl| \int |g|^2\,d\bar{\mu} \biggr|^{1/2}
+ \biggl( \int |g|^2\,d\nu \biggr)^{1/2},
\]
and its immediate consequence
\[
\norm[\big]{[M,M]_\infty^{1/2}}_{\L_pL_q} \leq
\norm[\bigg]{\biggl| \int |g|^2\,d\bar{\mu} \biggr|^{1/2}}_{\L_pL_q}
+ \norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}_{\L_pL_q}.
\]
We are thus left with the task of estimating the first term on the
right-hand side. In particular, writing
\begin{equation} \label{eq:tardi}
\mathbb{E}\norm[\bigg]{\biggl| \int |g|^2\,d\bar{\mu} \biggr|^{1/2}}^p_{L_q}
= \mathbb{E}\norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}},
\end{equation}
we observe that, assuming $2 < q \leq 4$, i.e. $1<q/2 \leq 2$,
Propositions \ref{lm:34} and \ref{lm:38} imply
\begin{align*}
\mathbb{E}\norm[\bigg]{\biggl| \int |g|^2\,d\bar{\mu} \biggr|^{1/2}}^p_{L_q}
&\lesssim_{p,q}
\mathbb{E}\biggl( \int \norm[\big]{|g|^2}^{q/2}_{L_{q/2}}\,d\nu \biggr)^{p/q}
+ \mathbb{E}\int \norm[\big]{|g|^2}^{p/2}_{L_{q/2}}\,d\nu\\
&= \mathbb{E}\biggl( \int \norm[\big]{g}^q_{L_q}\,d\nu \biggr)^{p/q}
+ \mathbb{E}\int \norm[\big]{g}^p_{L_q}\,d\nu.
\end{align*}
We have thus proved that the claim is true for any $q \in
[2,4]$. We proceed by induction, showing that if the claim is true
for $2 \leq q \leq 2^n$ (which is indeed the case with $n=2$), then
the claim remains true for $2 \leq q \leq 2^{n+1}$. In view of the
reasoning at the beginning of the proof, it suffices to estimate the
term on the right-hand side of \eqref{eq:tardi}: one has
\begin{align*}
\mathbb{E}\norm[\bigg]{\int |g|^2\,d\bar{\mu}}^{p/2}_{L_{q/2}} &\lesssim_{p,q}
\mathbb{E}\biggl( \int \norm[\big]{|g|^2}^{q/2}_{L_{q/2}}\,d\nu \biggr)^{p/q}
+ \mathbb{E}\int \norm[\big]{|g|^2}^{p/2}_{L_{q/2}}\,d\nu
+ \mathbb{E}\norm[\bigg]{\biggl( \int |g|^4\,d\nu \biggr)^{1/2}}^{p/2}_{L_{q/2}}\\
&= \mathbb{E}\biggl( \int \norm[\big]{g}^q_{L_q}\,d\nu \biggr)^{p/q}
+ \mathbb{E}\int \norm[\big]{g}^p_{L_q}\,d\nu
+ \mathbb{E}\norm[\bigg]{\biggl( \int |g|^4\,d\nu \biggr)^{1/4}}^p_{L_q}.
\end{align*}
Since $4 < q$, Lemma \ref{lm:stimette} yields
\[
\biggl( \int |g|^4\,d\nu \biggr)^{1/4} = \norm{g}_{L_4(\nu)}
\leq \norm{g}_{L_2(\nu)} + \norm{g}_{L_q(\nu)},
\]
hence also
\begin{align*}
\mathbb{E}\norm[\bigg]{\biggl( \int |g|^4\,d\nu \biggr)^{1/4}}^p_{L_q}
&\leq \mathbb{E}\norm*{\norm[\big]{g}_{L_2(\nu)}}^p_{L_q}
+ \mathbb{E}\norm*{\norm[\big]{g}_{L_q(\nu)}}^p_{L_q}\\
&= \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}^p_{L_q}
+ \mathbb{E}\norm*{\norm[\big]{g}_{L_q}}^p_{L_q(\nu)}\\
&= \mathbb{E}\norm[\bigg]{\biggl( \int |g|^2\,d\nu \biggr)^{1/2}}^p_{L_q}
+ \mathbb{E}\biggl( \int \norm[\big]{g}^q_{L_q}\,d\nu \biggr)^{p/q},
\end{align*}
thus concluding the proof.
\end{proof}
\let\oldbibliography\thebibliography
\renewcommand{\thebibliography}[1]{%
\oldbibliography{#1}%
\setlength{\itemsep}{-1pt}%
}
\bibliographystyle{amsplain}
% hep-ph/9404341
\section{Introduction}
\hspace{0.6cm}One of the most important problems of modern
neutrino physics is the
investigation of neutrino properties \cite{1}: neutrino masses and mixings,
nature of massive neutrinos (Dirac or Majorana), electromagnetic properties
of neutrinos, etc. In this paper we shall be interested in the
existence of a neutrino magnetic moment and its manifestation.
In the last few years, the interest in a magnetic moment of neutrinos was
connected in part with the solar neutrino problem.
It has been argued from time
to time that the solar neutrino flux detected in the Chlorine experiment
\cite{2}
has shown some anticorrelation with sun spot activity. Its most reasonable
explanation would involve \cite{3} a neutrino magnetic moment. Results
from Kamiokande III \cite{4} however do not indicate any time variation of
the neutrino signal. Nevertheless, the search for
a neutrino magnetic moment continues to be one of the ways to look for
effects beyond the standard
model, and it is worth continuing efforts in this direction \cite{5}.
If the standard theory is extended to include the right-handed neutrino field,
the resulting Dirac neutrino with mass $m_{\nu}$ acquires a magnetic moment
\cite{6}
\begin{equation}
\mu_{\nu}^{s} = \frac{3}{4 \sqrt{2} \pi^{2}} G_{F} m_{\nu} m \mu_{B} \simeq
3.2\,10^{-19} (\frac{m_{\nu}}{eV}) \mu_{B}
\end{equation}
\noindent
where $\mu_{B} = e/2m$ is the Bohr magneton and $m$ is the electron
mass. From the latest measurements of
the electron spectrum in $^{3}H$ $\beta$-decay \cite{7} the following
upper limit of the electron neutrino mass was obtained
\begin{equation}
m_{\nu_{e}}<7.2 eV (95\% C.L.)
\end{equation}
It follows from (1) and (2) that the "standard" contribution (1)
to the electron neutrino magnetic
moment is less than $2.4\,10^{-18} \mu_{B}$. Such a small
upper bound cannot be reached in any present day experiment. However,
there exist
many models beyond the standard theory in which the induced magnetic
moment of neutrinos could be many orders of magnitude bigger than $\mu_{\nu}
^{s}$ \cite{8}.
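As a numerical sanity check of the coefficient in Eq. (1), one can evaluate $3 G_{F} m_{\nu} m/(4\sqrt{2}\pi^{2})$ directly. The input values below ($G_F$, $m$) are standard ones assumed here, not quoted from the text.

```python
import math

# Standard input values (assumed, not taken from the paper), natural units:
G_F = 1.16637e-5   # Fermi constant [GeV^-2]
m_e = 0.510999e-3  # electron mass [GeV]
m_nu = 1.0e-9      # reference neutrino mass of 1 eV, in GeV

# Eq. (1): mu_nu^s / mu_B = 3 G_F m_nu m / (4 sqrt(2) pi^2)
mu_over_muB = 3.0 * G_F * m_nu * m_e / (4.0 * math.sqrt(2.0) * math.pi ** 2)
print(f"mu_nu^s = {mu_over_muB:.2e} mu_B for m_nu = 1 eV")  # ~3.2e-19
```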
The most stringent bounds on the neutrino magnetic moment come from astrophysical
arguments. If neutrinos have magnetic moments, then their coupling with an
off-shell photon $\gamma^{*}$ in a star can cause $\gamma^{*} \rightarrow
\nu + \bar{\nu}$ to occur. Once the neutrinos are produced, they will escape
carrying away energy. From the absence of such an anomalous energy loss
mechanism in red giants one finds
$$
\mu_{\nu} < 7\,10^{-11} \mu_{B}
$$
\noindent
Using the neutrino data from Supernova 1987A, stringent bounds can be set
for Dirac neutrinos, since the right-handed species would freely escape
from the supernova, carrying away energy. One gets \cite{9}
$$
\mu_{\nu} < 10^{-12} \mu_{B}
$$
The bounds from the supernova have been questioned by Voloshin \cite{10},
who noted that they may be evaded if there are strong magnetic fields in
the supernova. Another astrophysical
constraint comes from consideration of the luminosity before and after
stellar helium flash \cite{11} in red giants
$$
\mu_{\nu} < 3 \, 10^{-12} \mu_{B}
$$
There are laboratory bounds from terrestrial neutrino experiments. From
\mbox{$\bar{\nu}_{e} e^{-} \rightarrow \bar{\nu}_{e} e^{-}$} in reactor
experiments \cite{12} the bound on the neutrino magnetic moment
$$
\mu_{\nu} < 2.4\,10^{-10} \mu_{B}
$$
\noindent
has been set. This limit applies to electron antineutrinos. From beam
stop LAMPF neutrino data it follows \cite{13}
$$
\mu_{\nu_{e}} < 1.1 \, 10^{-9} \mu_{B}, \quad \mu_{\nu_{\mu}}
< 7.4 \, 10^{-10}
\mu_{B}
$$
Several new proposals \cite{14} plan to reach a much better sensitivity in the
investigation of
the $\bar{\nu}_{e}$ magnetic moment (at the level of $10^{-11}
\mu_{B}$). In these experiments, the process under study to obtain information
about $\mu_{\nu}$ is that of elastic antineutrino-electron scattering at small
energies. Its sensitivity to $\mu_{\nu}$ is connected with
the fact that at low enough values of $Q^{2}$ the contribution of the
electromagnetic amplitude to the cross section of the process becomes
comparable to the contribution of the weak amplitude.
This is the case for $Q^{2} \sim MeV^{2}$ at values
$\mu_{\nu} \simeq 10^{-10}$--$10^{-11}\, \mu_{B}$.
Penetrating the region of such small
$Q^{2}$ requires, however, measuring small energies of the recoil
electrons ($\leq MeV$).
Several other transitions \cite{15} could be envisaged and have been proposed
to obtain information about the neutrino magnetic moment.
An appropriate selection of quantum numbers in nuclear transitions
to enhance the electromagnetic amplitude looks,
however, unpromising due to the presence of both vector and axial-vector
components in the weak amplitude, so that no general enhancement of the magnetic
moment contribution relative to the weak one is found on these grounds.
Coherent neutrino-nucleus scattering keeps, as in the electron case, the
vector current contribution to both the magnetic and the weak amplitude,
but the nuclear recoil is difficult to measure (much more difficult than
for electrons). A relative enhancement of the magnetic
moment amplitude on this dynamical basis is achieved exceptionally around
one point for electron antineutrino-electron elastic scattering, for which
there is a dynamical zero of the weak cross section \cite{16} at leading
order for $E_{\nu} = m_{e}/(4 \sin^{2} \theta_{W})$ and forward electrons.
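For orientation, the dynamical-zero energy $E_{\nu} = m_{e}/(4\sin^{2}\theta_{W})$ quoted above is easily evaluated; the value of $\sin^{2}\theta_{W}$ below is an assumed standard one, not taken from the text.

```python
sin2_thetaW = 0.23  # assumed low-energy weak mixing angle
m_e = 0.511         # electron mass [MeV]

# dynamical zero of the weak cross section for forward electrons
E_nu = m_e / (4.0 * sin2_thetaW)
print(f"E_nu = {E_nu:.3f} MeV")  # about half an MeV
```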
In this paper we will consider the process
\begin{equation}
\nu (\bar{\nu}) + e \rightarrow \nu (\bar{\nu}) + e + \gamma
\end{equation}
\noindent
for which there are contributions to the cross section from the weak
interaction as well as from the
neutrino magnetic moment. This reaction has been considered before in a
different context \cite{17}. Even if the process (3)
has an additional power of
$\alpha$ in the cross section relative to the elastic case, the restriction
to low recoil energies in order to reach low values of $Q^{2}$ is a priori
not necessary. As we will see, the limit $Q^{2} = 0$ at fixed values of the
recoil energies is precisely obtained at the favourable situation of the
maximal opening angle between electron and photon in the final state. Whatever
the
experimental limit on the total recoil energy $\nu$ may be, the inelastic
process (3) is able to reach lower values of $Q^{2}$ than the elastic one,
as shown by the ratio $x = Q^{2} /(2 m \nu)$ varying from 1 to 0. The
argument for using low incident neutrino energies, which lowers the effective
contact-interaction cross section of the standard theory relative to the
more smoothly energy-dependent magnetic cross section, applies here as for the
elastic process.
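The kinematic point above can be restated numerically: at fixed total recoil energy $\nu$ one has $Q^{2} = 2 m \nu x$, so the elastic point $x = 1$ gives the largest accessible $Q^{2}$, while the radiative process spans all smaller values. The numbers below are illustrative choices, not values from the text.

```python
m = 0.511   # electron mass [MeV]
nu = 1.0    # total recoil energy [MeV]; illustrative value

Q2_elastic = 2.0 * m * nu              # elastic scattering is pinned at x = 1
for x in (0.01, 0.1, 0.5, 1.0):        # the radiative process spans 0 < x <= 1
    Q2 = 2.0 * m * nu * x
    assert Q2 <= Q2_elastic            # lower Q^2 is accessible at the same nu
```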
The paper is organized as follows. In Section 2 we present the calculation
of the amplitudes for the process (3) and the observables.
Section 3 discusses the kinematics and the phase space details in different
variables appropriate to their experimental accessibility.
In Section 4 we analyze the behaviour of both weak and magnetic cross sections
at low $Q^{2}$ for different limiting cases: either at fixed $\nu$ or at
fixed $x$-values, by performing an analytic calculation in these limits.
General results are given in Section 5, with special emphasis on the
inclusive distribution $d^{2} \sigma / dx d\nu$. Some
conclusions are given in Section 6.
\section{Weak and electromagnetic amplitudes}
In this Section we briefly present the results of the calculation of the
cross section for the process
\begin{equation}
\nu (l) + e (p) \rightarrow \nu (l') + e (p') + \gamma (k)
\end{equation}
The standard effective Hamiltonian of the weak interaction of neutrinos
and electrons has the form
\begin{equation}
H_{W} = \frac{G_{F}}{\sqrt{2}} \sum_{l} \bar{\nu}_{l} \gamma^{\mu}
(1 - \gamma_{5}) \nu_{l} \bar{e} \Gamma_{\mu} e + h. c.
\end{equation}
\noindent
Here
\begin{equation}
\begin{array}{lll}
\Gamma_{\mu} & = & \gamma_{\mu} [g_{L}^{(l)} \frac{\displaystyle
{1 - \gamma_{5}}}{\displaystyle{2}}
+ g_{R}^{(l)} \frac{\displaystyle{1 + \gamma_{5}}}{\displaystyle{2}}]
\end{array}
\end{equation}
\noindent
where
\begin{equation}
\begin{array}{lll}
g_{L}^{(l)} & = & - 1 + 2 \sin^{2} \theta_{W} + 2 \delta_{l e}\\
\\
g^{(l)}_{R} & = & 2 \sin^{2} \theta_{W}
\end{array}
\end{equation}
\noindent
and $\theta_{W}$ is the electroweak mixing angle. The term $\delta_{l e}$
in Eq. (7) takes into account the charged current contribution to the
effective Hamiltonian in its charge-retention form.
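A minimal evaluation of the chiral couplings of Eq. (7), with an assumed value $\sin^{2}\theta_{W} \simeq 0.23$ (not quoted from the text):

```python
sin2_thetaW = 0.23  # assumed value of the weak mixing angle

def couplings(delta_le):
    """Chiral couplings of Eq. (7); delta_le = 1 for nu_e
    (charged + neutral current), 0 for nu_mu, nu_tau."""
    g_L = -1.0 + 2.0 * sin2_thetaW + 2.0 * delta_le
    g_R = 2.0 * sin2_thetaW
    return g_L, g_R

print(couplings(1))  # nu_e couplings
print(couplings(0))  # nu_mu / nu_tau couplings
```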
The invariant T-matrix element generated by the Hamiltonian (5) for the
radiative process (3) is obtained by adding the two amplitudes
associated with the insertion of the photon in the
incoming or outgoing electron leg:
\begin{equation}
\begin{array}{rl}
T_{W} = & \frac{G_{F}}{\sqrt{2}} e \bar{u} (l') \gamma^{\mu}
(1 - \gamma_{5}) u (l) \\
\\
& \times \bar{u} (p') \left\{ \Gamma_{\mu}
(\hbox{\large{a}} \varepsilon^{*} )
+ [\Gamma_{\mu} \frac{\displaystyle{\not{k} \not{\varepsilon}^{*}}}
{\displaystyle{2 (p k)}} +
\frac{\displaystyle{\not{\varepsilon}^{*} \not{k}}}{\displaystyle{2 (p' k)}}
\Gamma_{\mu} ] \right\}
u (p)
\end{array}
\end{equation}
\noindent
where $\hbox{\large{a}}$ is the four-vector
\begin{equation}
\hbox{\large{a}}^{\alpha} = \frac{p'^{\alpha}}{(p' k)}
- \frac{p^{\alpha}}{(p k)}
\end{equation}
\noindent
and $\varepsilon$ is the polarization vector of the photon. Let
us notice that the use
of the Dirac equation has allowed us to rewrite the matrix element of the
process in such a way that the first term of Eq. (8) corresponds to $\gamma$-
emission by the electron charge whereas the second term is induced by the
electron magnetic moment. Such a decomposition simplifies considerably the
calculation of the cross section.
We will not enter into the details of the rather cumbersome calculations for
the cross section. Taking the appropriate sum for the neutrino spin states
(only left-handed components contribute) as well as the sum and average for
the electron spin states, one obtains from Eq. (8) the following:
\begin{equation}
\begin{array}{rl}
\sum |T_{W}|^{2} = & 32 G_{F}^{2} e^{2} \left\{ [ - g^{2}_{L} (l p) (l' p')
- g^{2}_{R} (l' p) (lp') + g_{L} g_{R} m^{2} (ll') \right. ] \hbox{\large{a}}
^{2} \\
\\
& + [g_{L}^{2} (l' p') \left\{(\hbox{\large{a}}l)
(pk) - [(\hbox{\large{a}}p) - 1] (lk) \right\} \\
\\
& + g_{R}^{2} (lp') \left\{ (\hbox{\large{a}}l') (pk)
- [(\hbox{\large{a}}p) - 1] (l' k) \right\} ]
\frac{\displaystyle{1}}{\displaystyle{(kp)}}\\
\\
& + [g_{L}^{2} (lp) \{ (\hbox{\large{a}}l') (p'k) - [(\hbox{\large{a}} p')
-1] (l' k) \} \\
\\
& + g_{R}^{2} (l' p) \{ (\hbox{\large{a}}l) (p'k) - [(\hbox{\large{a}}p')
-1] (lk) \} ] \frac{\displaystyle{1}}{\displaystyle{(kp')}}\\
\\
& - 2 g_{L} g_{R} m^{2} \left. \frac{\displaystyle{(lk) (l'k)}}
{\displaystyle{(pk) (p'k)}}\right\}
\end{array}
\end{equation}
\noindent
where $m$ is the electron mass.
We are also going to take into account the contribution to the cross section
of the process from the diagrams with $\gamma$-exchange between neutrino and
electron vertices, due to a possible neutrino magnetic moment. The matrix
element of the electromagnetic current between initial and final neutrino
states has the form
\begin{equation}
i f_{M} \sigma_{\mu \nu} q^{\nu}
\end{equation}
\noindent
where $q = l - l' = p' + k - p$ is the momentum transfer. The coupling
$f_{M}$ at $q^{2} = 0$ is the neutrino magnetic moment $\mu_{\nu}$. We are not
going to consider a possible neutrino electric dipole moment, which is
both P- and CP-odd.
The corresponding invariant T-matrix element is given now by the amplitudes
associated with the two diagrams of Fig. 1.
\begin{equation}
\begin{array}{rl}
T_{M} = & \frac{\displaystyle{e^{2}}}{\displaystyle{q^{2}}} f_{M}
\bar{u} (l') \sigma^{\mu \nu} q_{\nu}
u (l) \\
\\
& \times \bar{u} (p') \left\{ \gamma_{\mu}
(\hbox{\large{a}} \varepsilon^{*})
+ [\gamma_{\mu} \frac{\displaystyle{\not{k} \not{\varepsilon}^{*}}}
{\displaystyle{2 (p k)}} +
\frac{\displaystyle{\not{\varepsilon}^{*} \not{k}}}{\displaystyle{2 (p' k)}}
\gamma_{\mu} ] \right\}
u (p)
\end{array}
\end{equation}
\noindent
with $\hbox{\large{a}}$ as given by Eq. (9). Again the two terms of Eq. (12)
correspond to \mbox{$\gamma$-emission} by the electron
charge and magnetic moment,
respectively. In this case the neutrino vertex changes its chirality, so
for massless left-handed incoming neutrinos one can obtain the
corresponding transition probability by averaging over
initial neutrino spin states and summing over final ones. With this recipe, it
is straightforward to obtain
\begin{equation}
\begin{array}{rl}
\sum |T_{M}|^{2} = & \frac{\displaystyle{32e^{4} f_{M}^{2}}}{
\displaystyle{q^{2}}} \{ (lp) (lp') \hbox{\large{a}}^{2}\\
\\
& + [(\hbox{\large{a}}p) (lk) - (\hbox{\large{a}}l) (pk)]
\frac{\displaystyle{(lp')}}{\displaystyle{(kp)}}\\
\\
& + [(\hbox{\large{a}}p') (lk) - (\hbox{\large{a}}l)
(p'k)] \frac{\displaystyle{(lp)}}{\displaystyle{(kp')}}\\
\\
& - \frac{\displaystyle{(lk) (lp')}}
{\displaystyle{(kp)}} - \frac{\displaystyle{(lk) (lp)}}
{\displaystyle{(kp')}} + \frac{\displaystyle{m^{2} (kl)^{2}}}
{\displaystyle{(kp) (kp')}} \}
\end{array}
\end{equation}
This neutrino magnetic moment contribution (13) adds incoherently to the
weak interaction result (10) as a consequence of the opposite final neutrino
helicity induced by $T_{W}$ and $T_{M}$ for massless neutrinos.
The three-body final state cross section is given, with the normalization
used for the invariant amplitudes, by
\begin{equation}
d \sigma = \frac{1}{8 (lp)} \frac{1}{(2 \pi)^{5}} \delta^{4} (l + p - l' -
p' - k) \frac{d^{3}l'}{2 E'_{\nu}} \frac{d^{3} p'}{2 E'} \frac{d^{3} k}{2
E_{\gamma}}
\times \sum [|T_{W}|^{2} + |T_{M}|^{2} ]
\end{equation}
The observables of interest in terms of momenta of the recoil electron and
the emitted photon are studied in the next section.
\section{Kinematics}
The differential cross section of the process (3) depends on 5 independent
variables. It is convenient to choose the following invariant variables
\begin{equation}
\begin{array}{ll}
s & = (l + p)^{2}\\
t_{1} & = (l' - l)^{2}\\
s_{1} & = (l' + p' )^{2}\\
t_{2} & = (p - k)^{2}\\
s_{2} & = (p'+ k)^{2}
\end{array}
\end{equation}
\noindent
for which the phase space
integral can be written as
\begin{equation}
\begin{array}{c}
\displaystyle{\int} \frac{\displaystyle{d^{3}p'}}{\displaystyle{2 E'}}
\frac{\displaystyle{d^{3}l'}}{\displaystyle{2 E'_{\nu}}}
\frac{\displaystyle{d^{3} k}}{\displaystyle{2 E_{
\gamma}}} \delta ( p + l - p' - l' - k) = \\
\\
= \frac{\displaystyle{\pi}}
{\displaystyle{16 \lambda^{1/2} (s, m^{2}, 0)}} \displaystyle{\int}
\frac{\displaystyle{dt_{1} ds_{1} dt_{2}
ds_{2}}}{
\left(\displaystyle{- \Delta_{4}}\right)^{1/2}}
\end{array}
\end{equation}
\noindent
where $\Delta_{4}$ is the $4 \times 4$ symmetric Gram determinant \cite{18}. The
integration domain is given by the condition $\Delta_{4}\leq 0$.
The weak and electromagnetic squared amplitudes, as obtained in Section 2,
can be written in the form
\begin{equation}
\begin{array}{l}
|T_{W}|^{2} = \frac{ \displaystyle{f (s, t_{1}, s_{1}, t_{2}, s_{2})}}
{\displaystyle{(s_{2} - m^{2})^{2}
(t_{2}- m^{2})^{2}}}\\
\\
|T_{M}|^{2} = \frac{\displaystyle{g (s, t_{1}, s_{1}, t_{2}, s_{2})}}
{\displaystyle{t_{1} (s_{2} - m^{2})^{2}
(t_{2} - m^{2})^{2}}}
\end{array}
\end{equation}
\noindent
where $f$ and $g$ are third degree (or lower) polynomials of the invariants.
Fixing the other invariants, the variable $s_{1}$ corresponds to the
(unobservable) angle between the outgoing neutrino and
electron momenta. The integration over $s_{1}$ can be performed
analytically, since $f$ and $g$ are second-degree polynomials in $s_{1}$. In the
Appendix we give the exact results for the triple differential cross section
once $s_{1}$ has been integrated over, in terms of appropriate variables
(see below).
The remaining variables $t_{1}, s_{2}, t_{2}$ are observable quantities, for
which the physical region is given by the following invariant conditions:
\begin{equation}
\begin{array}{l}
1) \, \,t_{1} \leq 0; \; m^{2} \leq s_{2} \leq s, \, t_{2} \leq m^{2}\\
\\
2) \, \, G (s, t_{1}, s_{2}, 0, m^{2}, 0) \leq 0 \Rightarrow (s - s_{2})
(s - m^{2}) + st_{1} \geq 0\\
\\
3) \, \, G (s_{2}, t_{2}, 0, t_{1}, m^{2}, m^{2}) \leq 0 \Rightarrow
\left|
\begin{array}{ccccc}
0 & 1 & 1 & 1 & 1\\
1 & 0 & m^{2} & s_{2} & 0\\
1 & m^{2} & 0 & t_{1} & t_{2}\\
1 & s_{2} & t_{1} & 0 & m^{2}\\
1 & 0 & t_{2} & m^{2} & 0\\
\end{array}
\right| \geq 0
\end{array}
\end{equation}
\noindent
with the $G$-function as defined in reference \cite{18}.
The integration over $t_{2}$, which is associated with the photon energy
$E_{\gamma}$ in the LAB frame, $t_{2}= m^{2} - 2m E_{\gamma}$,
can still be
performed analytically in some cases.
Our next discussion is the translation of the physical region
(18) of the invariant variables into that for the geometrical variables in the
LAB frame:
electron-photon opening angle $\theta$, electron recoil energy $T$ and photon
energy $E_{\gamma}$, or in terms of the dimensionless variables
$x, y, \omega$ to be defined below.
The relation is given by
\begin{equation}
\left\{
\begin{array}{ll}
t_{1} & \equiv - Q^{2} = - 2 T \left[(m - E_{\gamma}) + E_{\gamma}
\sqrt{1 + \frac{2m}{T}}
\cos \theta \right]\\
\\
s_{2} & = t_{1} + m^{2} + 2m (T + E_{\gamma})\\
\\
t_{2} & = m^{2} - 2m E_{\gamma}
\end{array} \right.
\end{equation}
Then eqs. (18) lead to
\mbox{$-4E_{\nu}(E_{\nu}-T-E_{\gamma})\leq t_{1} \leq 0$}
and $E_{\nu}\geq T+E_{\gamma}$. For a given
recoil energy $T$ of the electron, the physical region in the plane $(E_{
\gamma}, \theta )$ is given in Fig. 2.
There are many interesting features in Fig. 2. The line at which $Q^{2} = 0$
corresponds to the maximal opening angle
\begin{equation}
Q^{2} = 0 \longleftrightarrow \cos \theta = \frac{1}{\sqrt{1 + \frac{2m}{T}}}
\left( 1 - \frac{m}{E_{\gamma}}\right)
\end{equation}
\noindent
allowed for photon energies
\begin{equation}
E_{\gamma}^{0} = \frac{m}{1 + \sqrt{1 + \frac{2m}{T}}} \leq E_{\gamma}
\leq E_{\gamma}^{m} = E_{\nu} - T
\end{equation}
For lower photon energies $0 < E_{\gamma} < E_{\gamma}^{0}$,
the maximum opening angle is $180^{\circ}$ and $Q^{2}$ decreases from its
elastic scattering value $Q^{2}_{el} = 2 m T$ (at $E_{\gamma} = 0$) to
reach $Q^{2} = 0$ (at $E_{\gamma} = E_{\gamma}^{0}$). We see, therefore,
that for any values of $T$ and $E_{\gamma}$ $(T + E_{\gamma} \leq E_{\nu})$
there always exists a region of opening angles for which $Q^{2}$ is lower than
the corresponding $Q^{2}_{el}$. Furthermore, this region is found at the
highest
allowed values of $\theta $.
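As an illustrative numerical check (not part of the original derivation), the sketch below compares $Q^{2} = -t_{1}$ computed directly from LAB-frame four-momenta with the closed form of Eq. (19), and verifies that $Q^{2}$ vanishes on the maximal-opening-angle line of Eq. (20) and reduces to $Q^{2}_{el} = 2mT$ at $E_{\gamma} = 0$. The values $E_{\gamma} = 0.3$ MeV and $\theta = 2$ rad are sample choices; $m = 0.511$ MeV and $T = 0.2$ MeV follow the text.

```python
import math

m = 0.511          # electron mass (MeV)
T = 0.2            # electron recoil kinetic energy (MeV), value used in the text
Eg = 0.3           # photon energy (MeV), sample value

def Q2_closed(T, Eg, cth):
    # Q^2 = -t_1 from Eq. (19): Q^2 = 2T[(m - Eg) + Eg*sqrt(1 + 2m/T)*cos(theta)]
    return 2.0 * T * ((m - Eg) + Eg * math.sqrt(1.0 + 2.0 * m / T) * cth)

def Q2_fourvector(T, Eg, cth):
    # t_1 = (l' - l)^2 = (p - p' - k)^2 with p = (m, 0), electron energy E' = m + T,
    # |p'| = sqrt(T(T + 2m)), massless photon at angle theta to the electron
    Ep = m + T
    pp = math.sqrt(T * (T + 2.0 * m))
    t1 = (2.0 * m * m - 2.0 * m * Ep - 2.0 * m * Eg
          + 2.0 * (Ep * Eg - pp * Eg * cth))
    return -t1

cth = math.cos(2.0)                       # sample opening angle, theta = 2 rad
assert abs(Q2_closed(T, Eg, cth) - Q2_fourvector(T, Eg, cth)) < 1e-12

# Elastic limit of Eq. (19): Q^2 -> 2mT as E_gamma -> 0
assert abs(Q2_closed(T, 0.0, cth) - 2.0 * m * T) < 1e-12

# Q^2 = 0 on the maximal-opening-angle line of Eq. (20),
# available only for Eg >= Eg0 = m/(1 + sqrt(1 + 2m/T)), Eq. (21)
Eg0 = m / (1.0 + math.sqrt(1.0 + 2.0 * m / T))
assert Eg > Eg0
cth_max = (1.0 - m / Eg) / math.sqrt(1.0 + 2.0 * m / T)
assert abs(Q2_closed(T, Eg, cth_max)) < 1e-12
```

The four-vector computation and the closed form agree identically, which also fixes the overall sign convention $t_{1} = -Q^{2} \leq 0$.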
Other interesting points and boundaries in Fig. 2 are the following:
- $\theta_{1}$ is the opening angle in the inelastic process for which
one obtains $Q^{2} = Q^{2}_{el}$. It is given by
\begin{equation}
Q^{2} = Q^{2}_{el} \longleftrightarrow \cos \theta_{1} = \frac{\displaystyle{
1}}{\displaystyle{
\sqrt{1+\frac{2m}{T}}}}
\end{equation}
- $ \theta_{0}$ is the minimum opening angle for which $Q^{2} = 0$ is
reachable. It is given by
\begin{equation}
\cos \theta_{0} = [1 - \frac{m}{E_{\nu} - T} ] \cos \theta_{1} \Longrightarrow
\theta_{0} > \theta_{1}
\end{equation}
- For the domain of high-energy photons
\begin{equation}
E_{\gamma}^{1} = E_{\nu} - \frac{T}{2} (1 + \sqrt{1 + \frac{2m}{T}})
\leq E_{\gamma} \leq E_{\gamma}^{m}
\end{equation}
\noindent
the maximum $Q^{2} = 4 E_{\nu} (E_{\nu} - T - E_{\gamma})$ corresponds to
a minimum opening angle
\begin{equation}
\cos \theta = \frac{4 E_{\nu} ( E_{\nu} - T - E_{\gamma}) - 2 T (m -
E_{\gamma})}
{ 2 T E_{\gamma}\sqrt{1+\frac{2m}{T}} }
\end{equation}
\\
It is now of interest to introduce the dimensionless variables
\begin{equation}
x = Q^{2} /(2m \nu) \, , \, y = \frac{\nu}{E_{\nu}} \, , \,
\omega = \frac{E_{\gamma}}{E_{\nu}}
\end{equation}
\noindent
with $\nu = T + E_{\gamma}$ the total energy release of the process in the
laboratory system. For fixed $x$ and $y$, the $\omega$-integration
in the cross section, although
cumbersome, can be performed analytically. We discuss some
interesting limits for the inclusive cross section in $x$ and $y$ in the
next section, in particular for its low $Q^{2}$-behaviour as a consequence
of CVC and PCAC. First we determine the physical region in terms of these
variables, following eqs. (18):
\newpage
1)
\begin{equation}
0 \leq x \leq 1
\end{equation}
\noindent
where $Q^{2} = 0$ for $x = 0$ at fixed $y$, whereas $E_{\gamma} = 0$
for $x = 1$.
2)
\begin{equation}
Q^{2} \leq 4 E_{\nu} (E_{\nu} - \nu) \Rightarrow 0 \leq y \leq (1 +
\frac{mx}{2 E_{\nu}})^{-1}
\end{equation}
\noindent
with no threshold for the inelastic process.
3)
\begin{equation}
\left\{
\begin{array}{l}
\omega^{-} \leq \omega \leq \omega^{+} \\
\\
\omega^{\pm} = y (1 - x) \frac{\displaystyle{1 + \frac{E_{\nu}}{m} y [1 \pm
\sqrt{ 1 + \frac{2mx}{E_{\nu} y}}]}}{\displaystyle{
1 + 2 \frac{E_{\nu}}{m} y (1 - x)}}
\end{array} \right.
\end{equation}
\\
One notices in Eq. (29) the soft-photon limit $E_{\gamma} \rightarrow 0$
at the elastic scattering kinematics $x \rightarrow 1$. We represent
in Fig. 3, the photon energy limits $E_{\nu} \omega^{\pm}/m$
as functions of $x$ and $\frac{E_{\nu}}{m} y$; in this
form these results are universal, independent of the incoming neutrino energy
$E_{\nu}$ except for the maximum allowed value for $\frac{E_{\nu}}{m}y$
(see Eq. (28)).
In Fig. 4 we give the allowed domain of the variables $(Q^{2}, \nu)$, where
the constraint of fixed $x$ represents a straight line
and $\nu_{0} = 2 E_{\nu}^{2} / (2 E_{\nu} + m)$.
For a given $y$, associated
for example with an experimental cut in energy release, it is possible now
to reach $Q^{2}$-values lower than $Q^{2}_{el}$ for $x < 1$. This
is nothing but a manifestation of the features discussed in Fig. 2 for the
geometrical variables. Furthermore, at fixed $x$, one can also approach $Q^{2}
\rightarrow 0$ by taking $\nu \rightarrow 0$.
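The photon-energy limits of Eq. (29) can be exercised numerically. The sketch below (an editorial illustration; $E_{\nu} = 1$ MeV and the points $x = 0.5$, $y = 0.3$ are sample choices) checks that $0 \leq \omega^{-} \leq \omega^{+} \leq y$, i.e. that the photon energy never exceeds the energy release $\nu = y E_{\nu}$, and that both limits collapse to zero in the soft-photon limit $x \rightarrow 1$.

```python
import math

m, Enu = 0.511, 1.0     # electron mass and sample neutrino energy (MeV)

def omega_pm(x, y):
    # Eq. (29): photon-energy window at fixed (x, y)
    r = math.sqrt(1.0 + 2.0 * m * x / (Enu * y))
    den = 1.0 + 2.0 * (Enu / m) * y * (1.0 - x)
    wp = y * (1.0 - x) * (1.0 + (Enu / m) * y * (1.0 + r)) / den
    wm = y * (1.0 - x) * (1.0 + (Enu / m) * y * (1.0 - r)) / den
    return wm, wp

wm, wp = omega_pm(0.5, 0.3)
assert 0.0 <= wm <= wp <= 0.3      # E_gamma bounded by the energy release nu

# Soft-photon limit at the elastic kinematics: omega^{+-} -> 0 as x -> 1
wm1, wp1 = omega_pm(1.0 - 1e-8, 0.3)
assert wp1 < 1e-6 and wm1 >= 0.0
```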
\section{Low-$Q^{2}$ behaviour}
We are interested in the behaviour of both the weak and the electromagnetic
cross sections at low $Q^{2}$, with a view to enhancing the second contribution
with respect to the first one. As emphasized before, it is a straightforward,
though cumbersome, matter to obtain the triple differential cross section
in the
variables $x, y, \omega$, as given in the Appendix; in order to check the
results and discuss the physics of the process some limits will be
illuminating.
First we consider, at $y$, $\omega$ fixed, the
expansion around $x \rightarrow 0$. The weak cross section is
\begin{equation}
\begin{array}{rl}
\frac{\displaystyle{d \sigma_{W}}}{\displaystyle{d x d y d \omega}}_{x \ll 1}
\simeq &
\frac{\displaystyle{G^{2} m^{2}}}{\displaystyle{\pi^{2}}}
\alpha \frac{\displaystyle{1}}{\displaystyle{y^{3} \omega}}
\left\{ W (y, \omega) g^{2}_{A} \right.\\
\\
& \left.+ \frac{\displaystyle{E_{\nu} x y}}
{\displaystyle{2m}} [V (y, \omega)
g^{2}_{V} + A (y, \omega) g^{2}_{A} + I (y, \omega) g_{V} g_{A} ] \right\}
\end{array}
\end{equation}
\noindent
where
\begin{equation}
\begin{array}{ll}
W (y, \omega) = & (1 - y) ( y - \omega)^{2}\\
\\
V (y, \omega) = & (1 - y + \frac{\displaystyle{y^{2}}}{\displaystyle{2}}
) [y^{2} + \omega^{2} - \frac{
\displaystyle{2m}}{\displaystyle{E_{\nu}}}
(y - \omega) + \frac{\displaystyle{m^{2}}}
{\displaystyle{E_{\nu}^{2} y \omega}} (y - \omega)^{2} ]\\
\\
A (y, \omega) = & (1 - y + \frac{\displaystyle{y^{2}}}{\displaystyle{2}}
) ( y^{2} + \omega^{2})\\
\\
& - \frac{\displaystyle{2m}}{\displaystyle{E_{\nu}}}
( y - \omega) [ (1 - y) \frac{\displaystyle{2y - 5 \omega}}{\displaystyle{y}}
- \frac{\displaystyle{y}}{\displaystyle{2}} (y + 2 \omega)]\\
\\
& + \frac{\displaystyle{m^{2}}}{\displaystyle{E^{2}_{\nu} y \omega}}
(y - \omega)^{2} [ (1 - y)
\frac{\displaystyle{y - 12 \omega}}{\displaystyle{y}} -
\frac{\displaystyle{y}}{\displaystyle{2}} (y + 4 \omega)]\\
\\
I (y, \omega) = & y (2 - y) (y^{2} - \omega^{2})
\end{array}
\end{equation}
\noindent
and the couplings are
\begin{equation}
g_{V} = \frac{g_{L} + g_{R}}{2}, \, g_{A} = \frac{g_{L} - g_{R}}{2}
\end{equation}
\noindent
in terms of the chiral couplings of Eq. (7).
There are interesting features associated with this result. At $x = 0$ the
only surviving term in the cross section goes like $g^{2}_{A}$. By
the use of CVC and a leptonic analogue of PCAC,
Sehgal and Weber \cite{19} reproduced this term as the
analogue of Adler's theorem for hadronic reactions. It is well known that,
due to CVC, the structure function associated with inelastic excitations
mediated by the vector current goes like $Q^{2}$ at fixed $\nu$.
So only the $g_{A}^{2}$-term can survive at $x = 0$. This term has a
contribution from the leptonic current proportional to the electron mass,
hence
the global scale $m^{2}$ appearing in Eq. (30) is now understood.
Nevertheless, it is important to stress that $W(y,\omega)$ will be the dominant
term only in a
very restricted range around $x=0$. So, for example,
this term gives a good approximation
provided $\nu \gg m$ (high incoming energies) but within the restricted range
$Q^{2} \ll 4 m^{2}$. This is so because
the linear term in $x$, in fact, goes as
$Q^{2}/ 4 m^{2}$.
Furthermore, the $W (y, \omega)$ dependence goes like the square
of the recoil energy of the electron. If $\nu \ll m$ there are strong
cancellations
in this term, seen for example when one integrates over $\omega$ at fixed $y$.
We conclude that the $x = 0$ term is only important at high incoming
energies with $\nu \gg m$, but with $Q^{2} \ll 4m^{2}$.
Our strategy will be just the contrary, i.e., have $\nu < m$ with low
$Q^{2}$, in order to suppress the $x = 0$ $g_{A}^{2}$-term in the weak cross
section.
The linear term in $Q^{2}$, within the bracket of Eq. (30), contains
contributions from the vector and axial couplings to electrons and their
interference. The purely vector contribution can be understood from the
Compton scattering cross section, where $y$ would be the energy of the incoming
photon and $\omega$ the energy of the outgoing photon, both normalized to
$E_{\nu}$. The Klein-Nishina formula \cite{20} for
the cross section distribution,
when written with the appropriate variable $\omega$ instead of the
scattering angle, reads
\begin{equation}
\frac{d \sigma^{\gamma \gamma}}{d \omega} = \frac{\pi \alpha^{2}}{m E_{\nu}}
\frac{1}{y^{3} \omega} [y^{2} + \omega^{2} - 2 \frac{m}{E_{\nu}} (y - \omega)
+ \frac{m^{2}}{E_{\nu}^{2} y \omega} (y - \omega)^{2} ]
\end{equation}
\\
\noindent
which is immediately identified with the $V (y, \omega)$ term of Eq. (30).
The axial term $A (y, \omega)$ has a different behaviour and it will tend
to $V (y, \omega)$ only in the limit \mbox{$m/E_{\nu}
\rightarrow 0$}.
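The identification of Eq. (33) with the Klein-Nishina formula can be verified numerically. The sketch below (an editorial check, with $E_{\nu} = 1$ MeV, $y = 0.6$ and a few outgoing energies $\omega$ as sample values) compares the bracket of Eq. (33) with the standard Compton combination $\varepsilon'/\varepsilon + \varepsilon/\varepsilon' - \sin^{2}\theta$, where $\cos\theta$ is fixed by the Compton relation $1/\varepsilon' - 1/\varepsilon = (1 - \cos\theta)/m$; the two differ only by the Jacobian factor $y\omega$.

```python
m = 0.511  # electron mass (MeV)

def bracket_eq33(y, w, Enu):
    # bracket of Eq. (33); y, w are incoming/outgoing photon energies over Enu
    return (y * y + w * w
            - 2.0 * (m / Enu) * (y - w)
            + (m * m / (Enu * Enu * y * w)) * (y - w) ** 2)

def bracket_compton(y, w, Enu):
    # standard Klein-Nishina combination e'/e + e/e' - sin^2(theta),
    # with cos(theta) fixed by the Compton relation 1/e' - 1/e = (1 - cos)/m
    e, ep = y * Enu, w * Enu
    cth = 1.0 - m * (1.0 / ep - 1.0 / e)
    return ep / e + e / ep - (1.0 - cth * cth)

Enu, y = 1.0, 0.6
# sample outgoing energies within the Compton range e/(1 + 2e/m) <= e' <= e
for w in (0.45, 0.5, 0.55):
    assert abs(bracket_eq33(y, w, Enu) - y * w * bracket_compton(y, w, Enu)) < 1e-12
```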
The cross section induced by a neutrino magnetic moment $\mu_{\nu} \not=
0$ gives, in the limit $x \rightarrow 0$,
\begin{equation}
\begin{array}{ll}
\frac{\displaystyle{d \sigma_{M}}}{\displaystyle{d x d y d\omega}}_{x \ll 1}
\simeq
& \frac{\displaystyle{\alpha^{3}}}{\displaystyle{2 m^{2}}}
\left( \frac{\displaystyle{\mu_{\nu}}}
{\displaystyle{\mu_{B}}}\right)^{2}
\frac{\displaystyle{1}}{\displaystyle{y^{3} \omega}}
\left\{ M (y,\omega) + \frac{\displaystyle{x}}
{\displaystyle{2 y^{2} \omega}} N (y, \omega) \right\}
\end{array}
\end{equation}
\\
\noindent
where the $(y, \omega)$-functions are
\begin{equation}
\begin{array}{ll}
M (y, \omega) = & (1 - y) [(y^{2} + \omega^{2}) - 2 \frac{\displaystyle{m}}
{\displaystyle{E_{\nu}}} ( y -
\omega) + \frac{\displaystyle{m^{2}}}{\displaystyle{E_{\nu}^{2} y \omega}}
(y - \omega)^{2}]\\
\\
N (y, \omega) = & 2 y^{2} \omega (y - \omega) [(1 - y) (y + 5 \omega) + y^{2}
\omega ] \\
\\
& - \frac{\displaystyle{2m}}{\displaystyle{E_{\nu}}}
y [(1 - y ) (y^{3} - 6 y^{2} \omega + 5 y \omega^{2}
+ 6 \omega^{3})
- y^{2} \omega (y^{2} - y \omega - \omega^{2})]\\
\\
& - \frac{\displaystyle{2 m^{2}}}{\displaystyle{E_{\nu}^{2}}}
y ( y - \omega) [2 (1 - y) (3 y - 11 \omega)
+ y^{2} (y - 4 \omega)] \\
\\
& - \frac{\displaystyle{m^{3}}}{\displaystyle{E_{\nu}^{3}}} (y - \omega)^{2}
[8 (1 - y) + 3 y^{2}]
\end{array}
\end{equation}
The first point to be noticed in Eq. (34) is the absence of the $1/x$
singularity, associated with the photon propagator, which is present in the
magnetic contribution to the
elastic scattering cross section. This is again due to the conservation of
the electromagnetic current in the electron vertex, implying a linear
$Q^{2}$-behaviour of the structure function, at $\nu$ fixed, for
inelastic excitations. The leading $M (y, \omega)$ term is again, like
$V (y, \omega)$, obtainable from the Compton scattering cross section. In
fact, one can write
\begin{equation}
\left.\frac{d \sigma_{M}}{d x d y d \omega} \right|_{x = 0} =
\frac{\alpha}{2 \pi} \frac{E_{\nu}}{m} (\frac{\mu_{\nu}}{\mu_{B}})^{2}
(1 - y) \frac{d \sigma^{\gamma \gamma}}{d \omega}
\end{equation}
\noindent
with $\sigma^{\gamma \gamma}$ given by Eq. (33). Contrary to the behaviour that
we have discussed for $W (y, \omega)$ in the weak cross section, the term
$M( y, \omega)$ is not suppressed here with respect to the linear term
in $x$, $N (y, \omega)$, so Eq. (36) is a very good approximation to the
magnetic cross section at low energies and low values of $Q^{2}$. Taking
the ratio of cross sections at $Q^{2} = 0$, we have
\begin{equation}
\begin{array}{rl}
\left.\frac{\displaystyle{d \sigma_{M}}}{\displaystyle{d \sigma_{W}}}
\right|_{x = 0} = & (\frac{\displaystyle{\mu_{\nu}}}
{\displaystyle{\mu_{B}}})^{2}
\frac{\displaystyle{\pi^{2} \alpha^{2}}}{\displaystyle{G^{2} m^{2}}}
\frac{\displaystyle{1}}{\displaystyle{2 m g^{2}_{A} T}} \\
\\
& \times \left\{ \frac{\displaystyle{2 E_{\gamma} (E_{\gamma} + T)
+ T^{2}}}{\displaystyle{m T}} +
\frac{\displaystyle{m T - 2 E_{\gamma} (E_{\gamma} + T)}}
{\displaystyle{E_{\gamma} (E_{\gamma} + T)}}
\right\}
\end{array}
\end{equation}
\\
\noindent
where the global factor in front of the bracket is a typical measure of
this ratio for the elastic scattering process at the same value of $T$. We
remind the reader that $x = 0$ is then obtained by the suitable choice of
the maximal opening angle between electron and photon. A glance at
Eq. (37) would say that the highest cross section
ratios are obtained for the hardest
photon limit $E_{\gamma} \gg T$, with values higher than the elastic ones at
will. Even more, one would say that higher neutrino energies are favoured
in order to have hard photons, but
the discussion after Eq. (30) should have clarified that a small departure
from $x = 0$ under these conditions is enough to enhance the
next linear term in $x$, so that the ratio (37)
becomes diluted.
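The hard-photon enhancement on the $Q^{2} = 0$ line can be made concrete by evaluating the curly bracket of Eq. (37). The sketch below (an editorial illustration; the grid of photon energies is a sample choice, with $m = 0.511$ MeV and $T = 0.2$ MeV as in the text) verifies that the bracket grows monotonically with $E_{\gamma}$ in the hard-photon region, in line with the discussion above.

```python
m, T = 0.511, 0.2   # electron mass and sample recoil energy (MeV)

def bracket_eq37(Eg):
    # curly bracket of Eq. (37), i.e. the ratio d(sigma_M)/d(sigma_W) at Q^2 = 0
    # stripped of the global prefactor
    return ((2.0 * Eg * (Eg + T) + T * T) / (m * T)
            + (m * T - 2.0 * Eg * (Eg + T)) / (Eg * (Eg + T)))

# the ratio grows with the photon energy on the Q^2 = 0 line (hard-photon limit)
vals = [bracket_eq37(Eg) for Eg in (0.2, 0.4, 0.6, 0.8)]
assert all(b < a for b, a in zip(vals, vals[1:]))
```

This monotonic growth is exactly the naive reading of Eq. (37) criticized in the text: away from $x = 0$ the linear term in $x$ dilutes it again.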
To conclude, the strategy to reach low enough $Q^{2}$-values, approaching
$\theta_{max}$ at fixed $(y, \omega)$, works only in a very limited angular
range around $\theta \simeq \theta_{\max}$. Whenever the results are integrated
over a wider region of $\theta$, the
ratio $d \sigma_{M} / d \sigma_{W}$ will be
diluted.
We can consider the approach to $Q^{2} \rightarrow 0$ following the lines
of fixed $x$ of Fig. 4. The vector contribution is in this case not
penalized by CVC with respect to the axial
contribution, as was the case for $x\rightarrow 0$: the structure function
goes like $Q^{2} / \nu$ and the limit $\nu \rightarrow 0$ is not
physically forbidden for our process. It is thus of interest to study the
inclusive cross sections $d \sigma / dx dy$ and explore their
behaviour when $y \rightarrow 0$ at
fixed $x$. We can use the results of the triple differential cross sections
given in the Appendix for the integration in
$\omega$, with the condition $\nu < < m$, and obtain
\begin{equation}
\begin{array}{ll}
\frac{\displaystyle{d^2 \sigma_{W}}}{\displaystyle{d x d \nu}} \simeq & \frac{\displaystyle{4}}{\displaystyle{3}}
\frac{\displaystyle{G^{2} \alpha}}{\displaystyle{\pi^{2}}}
\frac{\displaystyle{1}}{\displaystyle{1-x}}\nu
\left\{x[ (g_{V}^{2} + g_{A}^{2}) - \frac{\displaystyle{\nu}}{\displaystyle{E_{\nu}}}
(g_{V}^2+g_{A}^2-2xg_{V}g_{A})-\frac{\displaystyle{x}}{\displaystyle{2}}
\frac{\displaystyle{m \nu}}{\displaystyle{E_{\nu}^2}} (g_{V}^2-g_{A}^2)]\right.\\
&\\
&\left.+ \frac{\displaystyle{\nu}}{{m}}[ (\frac{\displaystyle{17}}{\displaystyle{10}}-2)x g_{V}^2
+\frac{\displaystyle{1}}{\displaystyle{10}}(37x^2-60x+20) g^{2}_{A} ] + O(\nu^2)
\right\}
\end{array}
\end{equation}
\noindent
for the weak cross section, whereas
\begin{equation}
\begin{array}{ll}
\frac{\displaystyle{d^2 \sigma_{M}}}{\displaystyle{dx d \nu}} \simeq &
\frac{ \displaystyle{4 \alpha^{3}}}{\displaystyle{3 m^{3}}} \left(
\frac{\displaystyle{\mu_{\nu}}}{\displaystyle{\mu_{B}}}\right)^{2} \frac{\displaystyle{1}}{\displaystyle{1-x}}
\left\{1 -\frac{\displaystyle{\nu}}{\displaystyle{E_{\nu}}}+\left(\frac{\displaystyle{17}}{\displaystyle{10}}x
-2\right)\frac{\displaystyle{\nu}}{\displaystyle{m}}+O(\nu^2)\right\}
\end{array}
\end{equation}
\noindent
gives the magnetic moment cross section, which is much less sensitive to
low $x$ values. There is no need for an infrared cutoff in $\omega$ as
long as $x\neq 1$ and $\nu \neq 0$ (see Eq. (29));
if needed experimentally, it must be
included in this integration.
The ratio of (39) to (38) shows an essential feature:
the most favourable sensitivity to a neutrino magnetic moment in the
inelastic process comes from the region of low excitation energy $\nu$
and, subsequently, from low $x$ values. As seen in Fig.4, lowering $\nu$
automatically lowers $Q^{2}$, and the behaviour of the structure
functions is then more favourable than for low $Q^{2}$ at fixed $\nu$.
The equations (38) and (39), valid for $x$ fixed, show the soft photon
factorization in the limit $x\rightarrow 1$. The factor in the first
square bracket in the r.h.s. of Eq. (38) is, at $x=1$, proportional to the
weak elastic cross section up to $O(\nu^2)$; so is the second square
bracket in front of $\nu /m$, once more at leading order in $\nu$. Note that
this factor becomes $-\frac{3}{10}(g_{V}^2+g_{A}^2)$ at $x=1$. In Eq. (39),
$1-\nu/E_{\nu}$ is proportional to the elastic magnetic moment cross
section, and the remaining $-3/10$ factor of the $\nu/m$ term at $x=1$ is
the same signal of soft-photon factorization as before. Finally, note
that in eq. (38) only the last $g_{A}^2$ term survives at $x=0$, with a
$\nu^2$ suppression due to the strong cancellations
in the $\omega$-integration
of $W(y,\omega)$.
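The $-3/10$ soft-photon coefficients quoted for Eqs. (38) and (39) follow from direct arithmetic; the short editorial check below evaluates the $\nu/m$ brackets at the elastic point $x = 1$.

```python
# Soft-photon factorization check: the O(nu/m) brackets of Eqs. (38)-(39)
# both reduce to a -3/10 coefficient at the elastic point x = 1
x = 1.0

# Eq. (38): coefficients of g_V^2 and g_A^2 in the nu/m bracket
c_gV2 = (17.0 / 10.0 - 2.0) * x
c_gA2 = (37.0 * x * x - 60.0 * x + 20.0) / 10.0

# Eq. (39): coefficient of the nu/m term
c_M = (17.0 / 10.0) * x - 2.0

assert abs(c_gV2 + 3.0 / 10.0) < 1e-12
assert abs(c_gA2 + 3.0 / 10.0) < 1e-12
assert abs(c_M + 3.0 / 10.0) < 1e-12
```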
\section{Results.}
\hspace{0.6cm} In this Section we present detailed numerical
results of the weak and electromagnetic
cross sections for the inelastic process (3), both for the triple
differential cross sections
$d^3\sigma_{W,M}/dT\,dE_{\gamma}\,d(\cos\theta )$
and for the inclusive cross sections $d^2\sigma_{W,M}/d\nu dx$.
We have made an analysis of the ratio
$d \sigma_{M}/d \sigma_{W}$ as illustrated in Figs. 5 and 6 for incoming
energies $E_{\nu} = 1$ MeV for electron antineutrinos and $E_{\nu} = 29.79$
MeV for muon neutrinos from $\pi$-decay at rest,
respectively, using the complete expressions
without any approximations ($T=0.2$ MeV).
We give the regions in the plane $(\theta,
E_{\gamma})$ for which the cross section, when integrated from $\theta$ to
$\theta_{\max}$ ($Q^{2} = 0$) at each $E_{\gamma}$, satisfies the following
requirement: the ratio
$d \sigma_{M}/ d \sigma_{W}$ is 5, 4, 3 or 2 times larger than the elastic
scattering ratio for the same $T$. Even if the ratio
increases with $E_{\gamma}$ on the $Q^{2}=0$ line,
the angular width becomes narrower and narrower,
as expected from the analysis of the previous section.
Fig. 7 gives the inclusive cross section $d^2\sigma/d\nu dx$ for
electron-antineutrino scattering at \mbox{$E_{\nu} = 1$ MeV}, separating (a)
the weak contribution, (b) the magnetic moment contribution for
$\mu_{\nu}=10^{-10}\mu_{B}$, and (c) their
ratio. The conclusion obtained in the
last section by the use of analytic limits is dramatically confirmed by
these results: the highest sensitivity is obtained for the lowest values of
$\nu$ and, by going down to low values of $x$, the sensitivity is
higher than for the elastic scattering case with $x = 1$. On the contrary,
once $\nu$ is high enough, the sensitivity is not
improved when lowering the value of $x$. At $x = 1$ and $\nu =2E_{\nu}^2/
(2E_{\nu}+m)$
one can still see at $E_{\nu} = 1$ MeV the residual effect of the elastic
zero present \cite{16} at $E_{\nu} = m/(4\sin^2 \theta_{W})\simeq 0.51$ MeV.
Experimentally one should consider cuts both in $E_{\gamma}$ and $T$ which
could modify our results for the inclusive cross sections which are sensitive
to the cut in $T$ ($T^{th}$) for small $x$ and to the experimental
threshold in $E_{\gamma}$ ($E_{\gamma}^{th}$) for
$x\simeq 1$. However, the main features remain
the same, as shown in Fig. 7(d), where the ratio $d\sigma_{M}/d\sigma_{W}$
has been plotted for $E_{\nu}=1$ MeV taking as experimental thresholds
$T\geq T^{th}= 100$ keV, $E_{\gamma}\geq E_{\gamma}^{th}=100$ keV. Note that
there is still a high sensitivity at small $\nu$ values, which becomes higher
still at low $x$ values.
The general features are not highly sensitive to the incoming neutrino energy
within the range of the reactor antineutrino spectrum.
In Fig. 8 we present a similar analysis to that of
Fig. 7(c), but the cross sections have been averaged
over a realistic \cite{21} antineutrino spectrum.
This result shows similar features to those described in the
monoenergetic case. The only difference
is the disappearance of the remnant
of the elastic zero at maximum electron recoil energy, due to the average
over the incoming neutrino energy.
\section{Outlook.}
\hspace{0.5cm}New neutrino physics can be
introduced to generate a neutrino magnetic moment
as large as $10^{-10} \mu_{B}$, a magnitude which can be tested in planned
laboratory experiments on electron-antineutrino scattering by electrons.
Furthermore this value corresponds roughly to
the scale needed to play a role in solar
neutrino physics. The laboratory tests look for a high enough sensitivity
to the neutrino magnetic moment by lowering the accessible $Q^{2}$ to
enhance this contribution relative to the standard weak interaction cross
section. The method based on the elastic scattering has the limitation
associated
with the cut in recoil energy needed to observe the process.
With a view to be able to reach, for a given recoil energy, lower values of
$Q^{2}$ than for the elastic process, we have studied in this paper the
weak and neutrino magnetic moment contributions to the cross section for the
inelastic radiative process $\bar{\nu} + e \rightarrow \bar{\nu} + e
+ \gamma$.
We have analyzed the inelastic process in its kinematic and dynamic behaviour
in
order to find the regions of higher sensitivity. For given recoil energies of
both
electron and photon, the value $Q^{2} = 0$ is reachable for the highest
possible values of the opening angle between the two outgoing particles: if
$E_{\gamma} < m$, this configuration corresponds to $\theta > 90^{\circ}$.
The $Q^{2} = 0$ kinematic configuration is always very favourable to
enhance the neutrino magnetic moment contribution, even if at high values of
the inelastic excitation energy $\nu$ the beneficial effect of the $1/Q^{2}$
-photon propagator is lost for inelastic scattering due to CVC.
The integration of events
around an angular region below the maximum $\theta$ dilutes, however, this
enhancement: the angular region of interest is more limited with
increasing energy of the photon. We conclude that the most interesting
situation
corresponds to the inelastic configurations $x < 1$ for low values of the
excitation energy $\nu$. Even for the inclusive cross section $d^{2} \sigma /
dx d \nu$, this effect is clearly manifested in our results
of Figs. 7 and 8. It is understood as a suppression of the weak cross
section, Eq. (38), whereas the magnetic moment contribution has a smooth
behaviour, Eq. (39). Although absolute cross sections are small (for instance,
$\sigma_{M}/\sigma_{W}=4.4$ and $\sigma_{M}=2.7\times 10^{-47}\ {\rm cm}^2$ for $\mu_{\nu} =
10^{-10} \mu_{B}$ at \mbox{$E_{\nu}=1$ MeV}, integrating over $\nu<0.5$ MeV,
$x<0.5$),
the standard model contribution is suppressed in these
circumstances more strongly than in the elastic scattering case.
\vspace{3cm}
{\bf ACKNOWLEDGEMENTS}
We would like to thank J.A. Pe\~{n}arrocha,
L.M. Sehgal and S.K. Singh for discussions on the topic of this
paper. J. S. acknowledges the Spanish Ministry of Education and Science
for his fellowship. This work was supported in part by CICYT under Grant
AEN/93-0234.
\newpage
% hep-th/9404160
\section{Introduction}
Some time ago G. Ponzano and T. Regge discovered a connection between
the expansion of the $6j$-symbol and the partition function for $3D$
euclidean quantum gravity \cite{PR}. These ideas were developed by
M. Carfora, M. Martellini and A. Marzuoli for $4D$ euclidean quantum gravity
\cite{CMM}. In this paper the general correlation between $3nj$-symbols and
$D$-dimensional
euclidean quantum gravity is shown.
The main aspect here is the correlation between geometrical properties of the
$D$-simplex and the corresponding $3nj$-symbol. The $D$-simplex in
$D$-dimensional space has $D+1$ vertices and $C^{2}_{D+1}=\frac {1}{2}D(D+1)$
edges.
No fewer than
$D-3$ edges belong to a $(D-3)$-dimensional subspace, and no more than
$\frac {1}{2}D(D-1)+3$ edges belong to a 3-dimensional subspace $R^3$. To every
edge
belonging to $R^3$ we relate one of the angular momenta $j_i$ in the
$3nj$-symbol. Therefore, the $3nj$-symbol must contain no fewer than $\frac
{1}{2}D(D-1)+3$
angular momenta.
There are $3nj$-symbols of the first and second kind, depending on the
way the first and the last momenta are coupled \cite{JB} (see Fig. 1). We shall consider
$3nj$-symbols
of only the first kind. The treatment of $3nj$-symbols of the second kind is
the same.
Any $3nj$-symbol can be represented as a sum of products of the corresponding
$6j$-symbols.
\begin {eqnarray}
\left\{ \begin{array}{cccc} j_1 & j_2 & ... & j_n \\ l_1 & l_2 & ... & l_n \\
k_1 & k_2 & ... &
k_n\end{array}\right\} =\sum _x(-1)^{R_n+(n-1)x}(2x+1)
\left\{\begin{array}{ccc} j_1 & k_1 & x \\ k_2 & j_2 &
l_1\end{array}\right\}\times
\nonumber\\
\times
\left\{\begin{array}{ccc} j_2 & k_2 & x \\ k_3 & j_3 &
l_2\end{array}\right\}\times...
\times
\left\{\begin{array}{ccc} j_{n-1} & k_{n-1} & x \\ k_n & j_n &
l_{n-1}\end{array}\right\}
\left\{\begin{array}{ccc} j_n & k_n & x \\ j_1 & k_1 & l_n\end{array}\right\},
\end{eqnarray}
where $R_n=\sum^n_{i=1}(j_i+l_i+k_i)$.
Inasmuch as the asymptotic form of the Racah-Wigner $6j$-symbol, according to
the Ponzano-Regge formula \cite{PR}, can be expressed by the euclidean
Einstein-Regge
action $S_R[T]$ for the 3-simplex $T$ (tetrahedron),
\begin{equation}
\left\{\begin{array}{ccc} j_1 & j_2 & j_3 \\ j_4 & j_5 & j_6\end{array}\right\}
\rightarrow \\ exp(i\sum_{m=1}^6j_m \theta _m)=exp(iS_R[T]),
\end{equation}
at $j_m \gg 1$, by expanding the $3nj$-symbol into a sum of products of
$6j$-symbols we
would obtain a correspondence between the euclidean gravity action $S_R$ and
the
$3nj$-symbol.
There are, however, some difficulties in carrying out this program. In
particular,
expression (2) is valid only at $j_i \gg 1$, while expanding the $3nj$-symbol
into $6j$-symbols involves a sum over the
momentum $x$, which obeys the triangle condition $|j_i-j_j|
\leq x\leq j_i+j_j$; it follows that even at $j_i,j_j \gg 1$
we may have $x \simeq |j_i-j_j|\simeq 1$, where expression (2) becomes invalid.
Consequently, for
any value of the space-time dimension $D$ we must choose a special type of
$3nj$-symbol, subject to the condition
\begin{equation}
3n > \frac{1}{2}D(D-1)+3,\quad n >2, \; D>3.
\end{equation}
To avoid the summation when expanding (1), let us set one of the momenta equal to zero:
\begin {eqnarray}
\left\{ \begin{array}{cccc} j_1 & j_2 & ... & 0 \\ l_1 & l_2 & ... & l_n \\
k_1 & k_2 & ... &
k_n\end{array}\right\} =\delta (l_n,k_1)\delta (j_{n-1},l_{n-1}) \times
\nonumber\\
\times[(2k_1+1)(2j_{n-1}+1)]
^{-\frac{1}{2}}
(-1)^{R_{n-1}+nk_n-j_1+k_{n-1}+j_{n-1}} \times \\
\left\{\begin{array}{ccc} j_1 & k_1 & x \\ k_2 & j_2 & l_1\end{array}\right\}
\left\{\begin{array}{ccc} j_2 & k_2 & x\\ k_3 & j_3 & l_2\end{array}\right\}
...\left\{\begin{array}{ccc} j_{n-1} & k_{n-1} & x \\ k_n & j_n &
l_{n-1}\end{array}\right\}\nonumber
\end{eqnarray}
In this case the $3nj$-symbol depends on only $3(n-1)$ momenta.
When the number of edges of the $D$-simplex belonging to the subspace $R^3$ is a
multiple of 3, the quantity $3n$ in the $3nj$-symbol is
just equal to the number of edges in $R^3$ plus 3, i.e.\ $3n=\frac
{1}{2}D(D-1)+6$. When
the number of edges in $R^3$ is not a multiple of 3, we put
$3n=\frac{1}{2}D(D-1)+8$ and impose additional conditions like $j_i=j_k$ and/or
$j_i=j_{n-1}$ to decrease the number of independent momenta. Some information
about
$D$-simplices and the corresponding $3nj$-symbols for $D\leq 11$ is given in Table 1.
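Table 1 is not reproduced here, but the counting rule just stated can be sketched numerically. The helper names below are hypothetical, and the rule is read as: $3n$ equals the $R^3$ edge count plus 3 when that count is divisible by 3, and $\frac{1}{2}D(D-1)+8$ otherwise:

```python
def edges_in_R3(D):
    # maximal number of edges of the D-simplex lying in R^3
    return D * (D - 1) // 2 + 3

def momenta_3nj(D):
    # 3n of the associated 3nj-symbol, following the rule in the text
    E = edges_in_R3(D)
    return E + 3 if E % 3 == 0 else D * (D - 1) // 2 + 8

for D in range(4, 12):           # the D <= 11 range of Table 1
    n3 = momenta_3nj(D)
    assert n3 % 3 == 0           # 3n always comes out a multiple of 3
    print(D, edges_in_R3(D), n3)
```

For $D=6$ this gives $3n=21$, consistent with the $21j$-symbol associated with the $D=6$ simplex later in the text.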
Now, for the reduced $3nj$-symbol, we use equation (1) and the
Ponzano-Regge expression (2):
\begin{eqnarray}
\left\{\begin{array}{c}reduced \\ 3nj \end{array}\right\} \mapsto
\frac{N}{(12\pi )^{\frac{n-2}{2}}\prod ^{\frac{n-2}{2}}_{k=1}V_k^{\frac{1}{2}}
} \times
\nonumber \\
\times exp(i\sum^{n-2}_{m=1}j_{m}\theta_m+
i\sum^{n}_{m'=1}j_{m'}\theta_m'+
i\sum^{n}_{m''=2}k_{m''}\theta_m''),
\end{eqnarray}
where $N\equiv N_1=(-1)^{R_{n-1}+nk_n-j_1+k_{n-1}+j_{n-1}}\delta
(l_n,k_1)\delta(j_{n-1},l_{n-1}) \times \\
\times[(2k_1+1)(2j_{n-1}+1)]^{-\frac{1}{2}}$ for
$D\neq 3n+2$, $N\equiv N_1\delta(j_i,j_{n-1})\delta(j_{i-1},j_{n-2}),\\
i\neq n$, for
$D=3n+2$, and $V_k$ is the volume of the $k$-th tetrahedron.
Let us introduce the new notation
\begin{eqnarray}
(j_1,...,j_{n-1},l_1,...,l_n,k_2,...,k_n)\rightarrow (J_1,...,J_M), \\
(\theta _m,\theta_{m'},\theta_{m''}) \rightarrow (\psi _1,..., \psi_M),
\end{eqnarray}
Then the reduced $3nj$-symbol is
\begin{eqnarray}
\left\{\begin{array}{c}reduced \\ 3nj \end{array}\right\} \mapsto
\frac{N}{(12\pi )^{\frac{n-2}{2}}\prod ^{\frac{n-2}{2}}_{k=1}V_k^{\frac{1}{2}}
}\times
\nonumber \\
\times exp(i\sum^{M}_{m=1}J_{m}\psi_{m}).
\end{eqnarray}
Let us consider the gaussian integral over the real variables $\psi_i$, for a
real, nonsingular and symmetric $M\times M$ matrix $\Delta $:
\begin{eqnarray}
Z[J_i,\Delta _{ik}]=\int \left\{\begin{array}{c}reduced \\ 3nj
\end{array}\right\}
exp(-\frac{1}{2}\sum_{i,k}^M \psi_i\Delta_{ik}\psi_k)\prod ^M_{j=1}d\psi_j\quad
{}.
\end{eqnarray}
Using the formula for gaussian integration, in the semiclassical limit $J_i\gg
1$ we find the
following result:
\begin{eqnarray}
Z[J_i,\Delta _{ik}]\sim \frac{\hat{N}}{(det\Delta)^{\frac{1}{2}}}
exp(-\sum^M_{i,k=1}J_i(\Delta^{-1})_{ik}J_k),
\end{eqnarray}
where $\hat{N}=(2\pi)^{M-\frac{n-2}{2}}N(6^{-\frac{n-2}{2}}
\prod ^{\frac{n-2}{2}}_{k=1}V^{\frac{1}{2}}_k)^{-1}$.
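As a one-dimensional sanity check of this gaussian step (taking $M=1$), one can compare a brute-force numerical integral of $\int e^{iJ\psi}e^{-\Delta\psi^2/2}\,d\psi$ with the closed form $\sqrt{2\pi/\Delta}\,e^{-J^2/(2\Delta)}$. The values $J=2$, $\Delta=1$ are arbitrary, and note that in this standard normalization a factor $\frac{1}{2}$ multiplies $J\Delta^{-1}J$ in the exponent; the convention in (10) may absorb it:

```python
import cmath, math

def fourier_gaussian(J, Delta, half_width=15.0, steps=80000):
    # trapezoidal approximation to  I = \int exp(i*J*psi - Delta*psi^2/2) dpsi
    h = 2 * half_width / steps
    total = 0 + 0j
    for i in range(steps + 1):
        psi = -half_width + i * h
        w = h / 2 if i in (0, steps) else h   # trapezoid endpoint weights
        total += w * cmath.exp(1j * J * psi - Delta * psi ** 2 / 2)
    return total

J, Delta = 2.0, 1.0
closed = math.sqrt(2 * math.pi / Delta) * math.exp(-J ** 2 / (2 * Delta))
num = fourier_gaussian(J, Delta)
assert abs(num.real - closed) < 1e-4   # real part matches the closed form
assert abs(num.imag) < 1e-9            # odd imaginary part integrates to zero
```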
For the correspondence with euclidean quantum gravity we must find the geometrical
interpretation of this gaussian integral. The geometrical meaning of expression
(9) becomes clearer if we note that the graphic representation of the $3nj$-symbol in Fig.~1
is a
two-dimensional projection of
the $2n$-simplex. Besides, there is a correspondence between $D$- and $(D+1)$-simplices. Every
$(D+1)$-simplex may be built by joining two $D$-simplices: one identifies two of their
$(D-1)$-subsimplices and links the two remaining free vertices by a new edge in the
$(D+1)$-dimensional subspace. This means, in particular, that a $(D+1)$-simplex at $D>2$ may be
represented
as a
union of tetrahedra with rigidly identified edges. This is the
geometrical
analogue
of the expansion (7).
Consider
\begin{eqnarray}
\sum^M_{i,k=1}J_i(\Delta^{-1})_{ik}J_k\equiv \sum^M_{i,k=1}\Theta_{ik}A_{ki},
\end{eqnarray}
where $A_{ki}$ is the area of the 2-face of the $D$-simplex containing $J_i$ and
$J_k$, and
$\Theta_{ik}$ is a defect angle if the 2-face containing $J_i$ and $J_k$
belongs to the interior
of the $D$-simplex, or the angle between the outer normals of the two
$(D-1)$-simplices
sharing the $(J_iJ_k)$-face if this 2-face belongs to the boundary of the
$D$-simplex.
For a general $D$-simplicial manifold ${\it M{^{D}}}$, up to an arbitrary
term depending
on the edge lengths, the euclidean Einstein-Regge action reads
\begin{equation}
S_R[M^D]\sim \sum_{\sigma \in int M^D}A(\sigma)\epsilon(\sigma)+\sum_{\sigma
\in
\partial M^D}A(\sigma)\alpha (\sigma),
\end{equation}
where $\sigma$ stands for the 2-simplex where the curvature is concentrated and
$A(\sigma)$ is the area of $\sigma $; $\epsilon(\sigma)$ is the defect angle
associated
with the 2-simplex $\sigma $, and $\alpha (\sigma )$ must be interpreted as the
angle
between the outer normals of the two boundary 3-simplices intersecting at
$\sigma$.
Then, according to (10) and (11), we may write
\begin{equation}
Z[J_i,\Delta _{ik}]\sim
\frac{\hat{N}}{(det\Delta)^{\frac{1}{2}}}
exp(-S_R[\sigma ^D ]),
\end{equation}
where $S_R[\sigma^D]$ is the Einstein-Regge action for the $D$-simplex
$\sigma$. We
may consider this expression as a semiclassical limit of the partition function
for $D$-dimensional
gravity and as a direct generalization of the Ponzano-Regge and
Carfora-Martellini-Marzuoli results.
In principle, there is another way to generalize the Ponzano-Regge ideas. We
may try to
build $3nj$-symbols for every unitary algebra $su(N)$ and, possibly, for their
products, and
use the correspondence between the action of quantum gravity and the corresponding
$3nj$-symbol.
We could obtain useful results only in some particular cases, in $D=4$ and
$D=6$, by
using the isomorphisms $so(4)\simeq su(2)\oplus su(2)$ and $so(6)\simeq
su(4)$. Then,
for the $su(4)$ algebra, for instance, the $21j$-symbol corresponds to the $D=6$ simplex.
In general,
there are several variants for building the euclidean quantum action for higher
$D$ as a
union of 3-, 4-, \dots, or $(D-1)$-simplices and the corresponding $3nj$-momenta.
For other
dimensions we must again use $3nj$-symbols with $n>2$. On the other hand, along
this way
we have an interesting possibility to build a pseudoeuclidean quantum gravity
action in
$D=3$ by introducing $6j$-symbols for the $su(1,1)$ algebra and using the isomorphism
$su(1,1)\simeq so(2,1)$. In any case we may consider all these possibilities as
a kind of
self-consistency check: the various ways of building the $D$-simplex must give the same
result.
These ideas may be applied to superspace. By analogy with the rotation case,
the super
$6j$-symbol $(s6j)$ for the superalgebra $osp(1|2)$ is a symmetric
transformation
coefficient that relates two bases in an irreducible representation space. The
irreducible
representations of the superalgebra $osp(1|2)$ have been analysed in
\cite{S,B}. The
$s6j$-symbols for the superalgebra $osp(1|2)$ possess, in addition to the usual
tetrahedral
symmetry, the so-called Regge symmetry in some particular cases\cite{D}. Then
we may
introduce $s3nj$-supersymbols and consider the semiclassical limit of the
euclidean
action in the corresponding superspace for $D$ supergravity.
\vspace{5mm}
\newpage
{\Large \bf{Acknowledgements}}
\vspace{5mm}
The author wants to express his gratitude to the Swedish Institute for Grant
304/01 GH/MLH, which gave him the possibility to enjoy the kind
hospitality of Prof. Antti Niemi, Doc. Staffan Yngve and all members of
the Institute of Theoretical Physics, Uppsala University.
\newpage |
gr-qc/9404028 | \section{Introduction}
For many years unstable quantum states were represented by Gamow vectors
\cite{1}, i.e., eigenvectors corresponding to complex eigenvalues of the
hamiltonian. But since the hamiltonian is self-adjoint,
if we use a Hilbert space as state space, the eigenvalues must be real. For this
reason, Gamow vectors were dishonorably excluded from ordinary quantum
mechanics
and were considered just as useful (but not rigorous) analogies or
approximations. Nevertheless, some years ago it was proved that Gamow vectors
belong to an extension of Hilbert space, namely a rigged Hilbert space with a
nuclear subspace based on Hardy class functions (see e.g.\ \cite{2,3,4} and
the bibliography therein). Since then, Gamow vectors are legal citizens of an
extended version of quantum mechanics, where complex eigenvalues greatly help
the computation of survival probabilities, lifetimes, Lyapunov variables, the
evolution toward equilibrium, etc. \cite {5,6}. These eigenvalues also
allow the introduction of more refined physical concepts; e.g., the
thermodynamical arrow of time can be defined in a way which is free of the
usual
criticisms (the Loschmidt objection, coarse-graining ambiguities, non-systematic
approximations, etc.). Precisely, studying this arrow of time we can see that
the whole problem of time asymmetry essentially has a cosmological origin
\cite{5,6}. Therefore it is natural to ask whether unstable quantum states,
considered as vectors of a rigged Hilbert space, can be used in quantum
cosmology. This is, in fact, the case, and the first examples are Refs.\ \cite {7}
and \cite {8}, where a simplified (toy model) version of the universe is studied
as a Friedrichs model [4].
In this paper we will present a complete (not only a toy model)
semiclassical model of the universe, following the line of Refs.\ \cite {9} and
\cite {10}, and we will show how the presence of unstable quantum states
enlarges the set of cases where we can prove that the decoherence phenomenon
appears. Correlations also appear (for unstable states), explaining the outcome
of
a classical universe. Essentially, in this paper we study just
one example, but we will also speculate about eventual generalizations.
But first, let us briefly recall the guiding lines of the extension from
Hilbert space to rigged Hilbert space in usual quantum mechanics. The
traditional
set of states of a quantum system is a Hilbert
space, which leads, as is well known, to time-reversibility. It is
precisely
this property which changes drastically with the extension of the Hilbert space
$\cal{H}$ to a rigged Hilbert space. This extension corresponds essentially to
the transition from a space of
square integrable functions to a space of distributions. This procedure is not
unique, and different distribution spaces can be defined based on
different test function spaces. If we choose as the test function space
$\Phi_-$, the space generated by the eigenfunctions of the energy $\omega$ which are
analytic in the lower complex half-plane when the real variable
$\omega$ is promoted to a complex variable $z$ (precisely, Hardy class
functions), we obtain the dual space $\Phi^\times_-$, which is the required
extension of the Hilbert space $\cal{H}$. The corresponding Gel'fand triplet is
then
\begin{equation}\Phi_- \subset {\cal H} \subset \Phi^{\times}_-.\end{equation}
If the same procedure is performed in the upper complex plane, the resulting
triplet reads
\begin{equation}\Phi_+ \subset {\cal H} \subset
{\Phi^{\times}_+}.\end{equation}
As we will see, the first of these choices, hence the space $\Phi^{\times}_-$,
corresponds to unstable decaying states, while the second one, namely
$\Phi^{\times}_+$, corresponds to unstable growing states. In fact, the
complex
poles of the S-matrix are related, as is well known, to unstable physical
states. These poles can then be transformed into
complex eigenvalues $ z_n$ of the hamiltonian using standard methods [3]. The
essence of using the rigged Hilbert space, rather than the Hilbert space, as the
framework for the quantum states of the system is clearly exhibited precisely
at this stage: the eigenvalues $z_n$ of a Hermitian operator are not real
anymore in this extended space \cite{6,11}. If Im $z_n > 0$, a growing
prefactor appears in the time evolution of the corresponding eigenvector $ | n
_+\rangle $, giving rise to a growing state belonging to the rigged Hilbert
space $ \Phi^\times_+ $. On
the contrary, if Im $z_n < 0 $ the prefactor is a decaying one, and the
corresponding state $\vert n_-\rangle$ is decaying and belongs to another
rigged Hilbert space, $\Phi^\times_-$. Finally, Im $z_n = 0$
corresponds to an ordinary stable state belonging to the ordinary Hilbert
space
$ {\cal H} =
\Phi^{\times}_+ \cap \Phi^{\times}_- $ (more general models contain both
growing and decaying states [5,6]).
If $K$ is the Wigner time-reversal operator we have
\begin{equation} K : \Phi^{\times}_- \rightarrow \Phi^{\times}_+ \ ; K:
\Phi^{\times}_+ \rightarrow \Phi^{\times}_-,
\end{equation}
\noindent since decaying states are transformed into growing states (or
vice versa) by time-inversion. Then the choice of $\Phi^{\times}_-$
(or $\Phi^{\times}_+$) as our space of quantum states implies that $K$ is not
defined inside
$\Phi^{\times}_-$ (or $\Phi^{\times}_+$), so that irreversibility
naturally appears and therefore the arrow of time also appears in the quantum
regime.
It follows that the choice between $\Phi_-$ and $\Phi_+$ is irrelevant,
since these two objects are identical (namely, one can be obtained from the
other
by a mathematical symmetry transformation), and therefore the universes that
we
will obtain with one choice or the other are also identical and not
distinguishable. Only the names
{\it past} and {\it future}, or {\it decaying} and {\it growing}, will
change, but the physics is the same and, e.g., we will always have equilibrium
toward the future.
Let us summarize the organization of this paper. Section 2 introduces the
model that we will study. In Section 3 it will be demonstrated that the S-matrix
of the model has an infinite set of complex poles (or branch cuts), and shown how
these
poles are transformed into complex eigenvalues that give rise, in turn, to damping
factors of unstable decaying states. In Section 4 it is shown how these
damping factors enlarge the set of initial conditions where decoherence occurs.
In Section 5 we will prove that there is correlation in all unstable states.
Finally, we briefly state our conclusions in Section 6. Two appendices
complement this work.
\section{The Model}
\setcounter{equation}{0}
Let us consider the model of Sec.\ 3 of Ref.\ \cite{10}, where a
Robertson-Walker metric is studied (that we will mainly consider in the
flat case), with a total action $ S = S_g + S_f $, where $ S_g $ is the
gravitational action and $ S_f $ the matter action (the usual action of a
spinless massive field $\Phi$). The gravitational action is given by
\begin{equation}S_g = M^2 \int d \eta [ -\frac {1}{2} \dot a^2 -
V(a)],\end{equation}
\noindent
where $M$ is the Planck mass, $\eta$ is the conformal time, $a$ is the
Robertson-Walker scale factor, $\dot a = da/d\eta $, and $ V(a) $ is
the potential function that arises from the spatial curvature, a possible
cosmological constant and eventually a classical matter field. As this last
field is arbitrary, for the sake of simplicity, let us study the case
where the classical matter field is such that $ V(a) = B^2/2 ( 1 - A^2/a^2)$
where $A$ and $B$ are arbitrary constants.
This case is the simplest of all, but we believe that the main features that
we will find will also be present in more general cases, as we will argue
below. The role played by the classical field is completely natural in the
context of this paper. In fact, we will essentially work using some results of
quantum field theory in curved space-time, where the geometry of space-time is
fixed ``a priori" (namely, there is no back-reaction). The classical field is,
precisely, the agent that does this job, fixing a class of possible geometries
(but the properties that we will find will be the same for almost any classical
field).
The Wheeler-DeWitt equation for our Robertson-Walker model is
\begin{equation}\Big[\frac{1}{2M^2}\partial^2_{a}+M^2V(a)-\frac{1}{2}\int_k\bigg(
\partial^2_{\phi_k} -
\Omega_k^2\phi_k^2\bigg)\Big]\Psi(a,\Phi)=0.\end{equation}
Thus after making the WKB ansatz, the Hamilton-Jacobi equation appears as
\cite{10}
\begin{equation}( \frac{dS}{da} )^2 = B^2 ( 1 - \frac{A^2}{a^2}
),\end{equation}
\noindent
where $S$ is the principal Jacobi function. Thus the (semi)
classical time parameter or WKB time $\eta$ is given by
\begin{equation}\frac {d}{d\eta} = \frac{dS}{da} \frac{d}{da}.\end{equation}
Then in our simplified model we have the following class of geometries, in
terms
of this conformal time
$d\eta =a^{-1} dt$ \cite{12}
\begin{equation}a = \pm (A^2 + B^2 \eta^2 )^{\frac{1}{2}} + C \;\end{equation}
\noindent
where $C$ is an arbitrary constant. Using different values for this constant
and different choices for the $\pm$ sign we obtain different
classical solutions (in a more general case many constants would be necessary).
Going now to Ref. \cite{13} (eq.\ (3.113)) we can see that the
semiclassical (or quantum field theory in curved space-time) problem is solved
for all four dimensional, spatially flat, cosmological models with scale factor
\begin{equation}C(\eta) = a^2 = A^2 + B^2 \eta^2 \; \ -\infty < \eta <
\infty
\,
\end{equation}
\noindent
where $ A $ and $ B $ are constants. Then if we consider a massive,
conformally coupled scalar field, the energy function $ \Omega ^2_k $
reads
\begin{equation}\Omega^2_k = m^2 a^2 + k^2 = k^2 + m^2( A^2 + B^2 \eta^2 ) \,
\end{equation}
\noindent
where $m$ is the mass of the quantum matter field and $ k^2 =|\vec k |^2$,
where
$\vec k/a$ is the linear momentum of this field, in the case of a flat
space Robertson-Walker universe (or a function of this momentum in the two
other
cases, namely open and closed, $k$ being a discrete variable in the closed
case). Then (2.7) coincides with the last equation of page 70 of
Ref.\ \cite{13}.
If we ideally consider the evolution of the universe from $ \eta \rightarrow
- \infty $ to $ \eta \rightarrow + \infty $ (even if really we would like to
have
only an expanding universe and therefore $\eta \geq 0$; we will discuss
this issue below) and we define the corresponding adiabatic vacua $ |0, in
\rangle $ for $ \eta \rightarrow - \infty $ and $ | 0, out \rangle $ for $ \eta
\rightarrow + \infty $, the Bogolyubov coefficients are (Ref.\ \cite{13},
eq.\ (3.124))
\begin{equation}\alpha_{kj} = \frac {i (2\pi)^\frac{1}{2} \exp (- \frac{\pi}{4}
\lambda_k)}{\Gamma[\frac{1}{2}( 1- i \lambda_k) ] } \delta_{kj}
= \alpha_k \delta_{kj},
\end{equation}
\begin{equation}
\beta_{kj} = - i \exp ( - \frac{\pi}{2} \lambda_k ) \delta_{kj}
= \beta_k \delta_{kj},\end{equation}
\noindent
where $\lambda_k = k^2/Bm + A^2m/B$ and $\delta_{kj} $ is the Kronecker
$\delta$,
for the discrete case, and the Dirac $\delta$, for the continuous one.
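As a consistency check, these coefficients satisfy the bosonic unitarity relation $|\alpha_k|^2 - |\beta_k|^2 = 1$, which follows from the Gamma-function identity $|\Gamma(\frac{1}{2} - iy)|^2 = \pi/\cosh(\pi y)$ with $y=\lambda_k/2$. A short numerical sketch:

```python
import math

def alpha_sq(lam):
    # |alpha_k|^2 = 2*pi*exp(-pi*lam/2) / |Gamma((1 - i*lam)/2)|^2,
    # using |Gamma(1/2 - i*y)|^2 = pi / cosh(pi*y) with y = lam/2
    gamma_sq = math.pi / math.cosh(math.pi * lam / 2)
    return 2 * math.pi * math.exp(-math.pi * lam / 2) / gamma_sq

def beta_sq(lam):
    # |beta_k|^2 = exp(-pi*lam)
    return math.exp(-math.pi * lam)

for lam in (0.1, 1.0, 3.7, 10.0):
    assert abs(alpha_sq(lam) - beta_sq(lam) - 1.0) < 1e-12
```

Analytically, $|\alpha_k|^2 = 1 + e^{-\pi\lambda_k}$ and $|\beta_k|^2 = e^{-\pi\lambda_k}$, so the difference is exactly 1 for every $\lambda_k$.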
Let us comment now on the choice of the vacua, since really we would like to
study only the evolution $\eta \geq 0$. The
$|0, {\rm out} \rangle $ vacuum is the adiabatic physical vacuum for $ a
\rightarrow
+ \infty$, where the classical regime must naturally appear; therefore
it is a completely reasonable vacuum. Let us suppose that the vacuum at $\eta
=
0$ is just $| 0, in \rangle $. This is, of course, a completely
arbitrary choice, which will be discussed in the next section, where we will
introduce a general vacuum at $\eta = 0$ that we will call $| 0, 0 \rangle $.
Anyhow, with this arbitrary choice (2.8) and (2.9) are correct.
\section{The poles of the S-matrix and the unstable quantum states}
\setcounter{equation}{0}
From (3.46) and (3.47) of Ref.\ \cite{13}, or more generally from
Sec.\ 2 of Ref.\ \cite{14}, it can be seen that the
S-matrix (between the ``in" and the ``out" Fock spaces) has a pole where the
function $\Lambda_{ji} = - i \sum_k \beta_{kj} \alpha^{-1}_{ik}$ has a pole,
namely where $\alpha_{kj} = 0$ (or $\beta_{kj}$ has a pole). Using (2.8), it
must
be that $\alpha_k = 0$ or, which is the same thing, that $ \Gamma[1/2(1
-i\lambda_k)]$ has a pole. $\Gamma (z)$ has a pole if $z=-n$ ($n=0, 1,
2,\ldots$; see, e.g., \cite{15} or \cite{16}) (no poles are produced by the
$\beta$'s
given by (2.9)). Therefore
$S$ has a pole if
\begin{equation}k^2 = m B [-\frac{m A^2}{B}- 2i ( n + \frac{1}{2}
)],\end{equation}
\noindent
and the squared energy, for each pole, reads
\begin{equation}\Omega^2_k = m^2 a^2 + m B [-\frac{m A^2}{B}- 2i ( n +
\frac{1}{2} )].\end{equation}
We will call this energy $\Omega_k$ simply $\Omega_n$. Thus, we have an
infinite set of unstable states with mean life
\begin{equation}\tau_n = \frac{2^{\frac{1}{2}}\{m^2 (a^2 - A^2) + [m^4 (a^2 -
A^2)^2 + 4 B^2 m^2 (n + \frac{1}{2})^2]^{\frac{1}{2}}\}^{\frac{1}{2}}}{2 B m
(n+\frac{1}{2})}.\end{equation}
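Both the pole condition and the mean life can be cross-checked numerically. The parameter values below ($m$, $A$, $B$, $a$) are hypothetical, chosen only for illustration, and the mean life is taken as $\tau_n = 1/|{\rm Im}\,\Omega_n|$ with the principal branch of the complex square root:

```python
import cmath, math

m, A, B, a = 1.0, 0.5, 1.0, 3.0   # hypothetical illustrative values

for n in range(4):
    # pole condition (3.1): k^2 = mB[-mA^2/B - 2i(n+1/2)]
    k2 = m * B * (-m * A ** 2 / B - 2j * (n + 0.5))
    lam = k2 / (B * m) + A ** 2 * m / B     # lambda_k evaluated at the pole
    z = (1 - 1j * lam) / 2                  # argument of the Gamma function
    assert abs(z - (-n)) < 1e-12            # Gamma pole at -n, so alpha_k = 0

    # squared energy (3.2) and the resulting mean life
    Omega2 = m ** 2 * a ** 2 + k2
    Omega = cmath.sqrt(Omega2)              # principal root, Im(Omega) < 0
    tau_direct = 1.0 / abs(Omega.imag)

    # closed form: sqrt(2)*sqrt(x + sqrt(x^2 + y^2))/y
    x = m ** 2 * (a ** 2 - A ** 2)
    y = 2 * B * m * (n + 0.5)
    tau_formula = math.sqrt(2) * math.sqrt(x + math.sqrt(x ** 2 + y ** 2)) / y
    assert abs(tau_direct - tau_formula) < 1e-8
```

The identity $(|\Omega^2|-x)(|\Omega^2|+x)=y^2$ makes the two expressions for $\tau_n$ exactly equal.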
Let us observe that the energy and mean life are $a$-dependent. Therefore we
have two possibilities:
1) either we consider that the in and out states correspond to $a\gg 1$,
where
these mean lives are large but still finite, or
2) we transform all the equations to the non-rescaled case, where the
physical real quantities are the physical time $t=\int a d\eta$, the physical
energy $\Omega_k/a$, and the physical momentum $\vec k/a$.
We follow the first alternative, and sketch the second one in Appendix A.
Therefore the universe evolution creates unstable particles as well
as stable ones. Using the standard method explained in Refs.\ \cite{2} and
\cite{3}, we can promote these unstable states to vectors of an adequate rigged
Hilbert space and build a basis of this space with stable modes with real
energies $\Omega_k $ plus unstable modes with complex ``energy" $\Omega_n $
given
by (3.2) (in the open case this procedure is direct, since we have a
continuous spectrum to begin with; in the closed case we must use assumption 3
of
Ref.\ \cite {7}).
This would be the state of affairs if we use the (quite arbitrary) vacuum
$|0,in\rangle $ of Section 2. In this case we have found an infinite discrete
set of unstable states. What happens if we use a generic (i.e., almost any)
vacuum $|0,0\rangle $ at $ \eta = 0 $? This generic vacuum will be
related to $ | 0,in\rangle $ by some Bogolyubov
coefficients $ \bar{\alpha}_{kj}$, $\bar{\beta}_{kj}$. Then the $
\bar{\bar{\alpha}}$ coefficient relating $ | 0,0\rangle $ to
$ | 0,out\rangle $ reads
\begin{equation}\bar{\bar{\alpha}}_{ik}= \sum_j \bar{\alpha}_{ij} \alpha
_{jk} +\bar{\beta}_{ij}
\beta^*_{jk}= \frac{\bar{\alpha}_{ik} i (2\pi)^{\frac{1}{2}} \exp(
-\frac{\pi}{4}\lambda_k)}{\Gamma [\frac{1}{2}( 1 - i\lambda_k)]}
+ \bar{\beta}_{ik} i \exp( - \frac{\pi}{2} \lambda_k).\end{equation}
The poles are now located where this alpha vanishes. The roots in $k$ of the
corresponding equation, $\bar{\bar{\alpha}}_{ik} = 0 $, can be found only if
we fix the arbitrary coefficients $\bar{\alpha}_{ik}$, $\bar{\beta}_{ik}$.
Of course, if these coefficients are fixed in a very particular way the equation
will have no roots. But if the functions $\bar{\alpha}_{ik}$,
$\bar{\beta}_{ik}$ are fixed in a generic (i.e., in almost any) way, the
equation will have a set of complex roots, which correspond to unstable
particles
created by the universe evolution. This statement is equivalent to the claim that a
generic S-matrix, for our problem, has an infinite number of poles or cuts. Let
us
say an infinite set of poles, to fix ideas (cuts will be studied in
Appendix A). Even if we do not have, by now, a rigorous mathematical proof of
this
theorem, we think that we can sketch a reasonably convincing demonstration.
Let $ \{ | n, in \rangle \}$, $\{ \vert l, 0 \rangle \}$, and $ \{\vert m, out
\rangle \}$ be the bases of the Fock spaces corresponding to the vacua $| 0,
in\rangle$, $| 0, 0 \rangle$, and $| 0, out\rangle$. The in-out S-matrix, with
an
infinite set of poles, reads
\begin{equation}S_{nm} = \langle n, in | m, out\rangle = \sum_l \langle n, in|
l, 0 \rangle \langle l, 0 | m, out\rangle\end{equation}
\noindent
where the states $\vert l,0\rangle$ are a complete set.
For some (infinitely many) values of $n$ and $m$ we know that $S_{nm}$
has poles. Let us consider one of these values; then the r.h.s. of
(3.4) also has a pole, and therefore one of its terms has a pole. Then,
either:
i) one of the factors inside the sum of (3.4) has a finite
number of poles and the other one an infinite number of poles, or
ii) both factors have an infinite number of poles.
But (i) must be excluded since, in this case, the time evolution $ ( -\infty < \eta
<
0 )$ would be qualitatively different from the evolution $( 0 < \eta < +\infty )$, and
this fact would break the time symmetry, which is impossible since the evolution
equations, the time evolution of $a$, and the boundary conditions are time symmetric
with respect to $\eta =0$. Then the $0$-out matrix $\langle l, 0 | m,
out\rangle$,
corresponding to the evolution $\eta \geq 0$, has an infinite number of poles.
We give an alternative demonstration in Appendix B, which is valid for every
evolution and every spatial geometry.
Even if a rigorous proof of these facts would be welcome, we believe that the
reasonings above, and the ones in Appendix B, are quite convincing. Essentially,
the periodic nature of
$\bar{\bar{\alpha}}_{ik}$ is inherited from its definition
$\bar{\bar{\alpha}}_{ik}=(\bar{u}_i , u_k)$ (eq.\ (3.36), Ref.\ [13]), where
$\bar{u}_i$ and $u_k$ are two different negative frequency solutions of the
corresponding Klein-Gordon equation. As in flat space-time these solutions are
functions like $\exp (-i k t)$, they somehow must keep the periodicity in $k$
in curved space-time. The $\alpha$ coefficients always have a periodic
behaviour in the complex plane, as shown in equations (3.91), (3.124),
(4.60), (4.61), (4.95), (5.41), (5.110), and (5.111) of Ref.\ [13]. Therefore
the
S-matrix has an infinite and discrete set of complex eigenvalues for almost any
initial condition $\vert 0,0\rangle$.
Now that we know that the 0-out S-matrix has an infinite set of complex poles,
we can find the complex eigenvalues [2,3].
As the \{ $\vert k,out\rangle\}$ basis is complete we have
\begin{equation}\int_k \vert k,out\rangle\langle k,out\vert dk=1,\end{equation}
\noindent
where $\vert k,out\rangle \in {\cal H}$ and the integral means that we must
integrate over the continuous spectrum of energies and other quantum numbers.
Using the standard techniques of Refs.\ [2,3] we can transform the last equation
into
\begin{equation}\sum_n\vert n,out-\rangle\langle n,out+\vert
+\int_k\vert k,out-\rangle \langle k,out+\vert dk=1,\end{equation}
\noindent
where $\vert n,out-\rangle$, $\vert k,out-\rangle \in \Phi_-^\times$; the
first sum corresponds to the discrete unstable modes and the integral to
the stable continuous ones.
We choose $\Phi_-$, the test function space based on Hardy class functions of
the lower half-plane; then all the
poles will have negative imaginary part, all the unstable states are
decaying
ones, and they will belong to $\Phi^{\times}_-$ (as we already know, we can also
make the symmetric choice).
According to the Wheeler-DeWitt equation (2.2), the field hamiltonian reads
\begin{equation}h=\frac{1}{2}\int_k\bigg(-\partial^2_{\phi_k}+\Omega_k^2
\phi_k^2 \bigg)dk =\int_k\Omega_ka_k^\dagger a_k dk,\end{equation}
\noindent
where $a_k$ and $a_k^\dagger$ are the usual creation and annihilation operators.
From now on, we will always refer to the out case with $a \gg 1$, and $h$,
$\Omega_k$, $a_k$, and $a_k^\dagger$
will be $h^{out}$, $\Omega_k^{out}$, $a_k^{out}$, and $a_k^{\dagger out}$. There
are new creation and annihilation operators for the discrete spectrum,
$\bar{a}_n^{out}$, $\bar{a}_n^{\star out}$, and for the continuous one,
$\bar{a}_k^{out}$, $\bar{a}_k^{\star out}$ (the definition of $\star$ is given
in
Appendix B). Vectors $\vert n,out-\rangle$ will be created by the repeated
action of $\bar{a}_n^{\star out}$ on $\vert 0,out\rangle$, and vectors
$\vert k,out-\rangle$ will be created by $\bar{a}_k^{\star out}$ analogously.
Therefore $h^{out}$ now reads
\begin{equation}h^{out}=\sum_n\Omega_n\bar{a}^{\star out}_n
\bar{a}^{out}_n+\int_k\Omega_k\bar{a}^{\star out}_k \bar{a}^{out}_k
dk,\end{equation}
\noindent
and we will have
\begin{equation}h^{out}\vert n,out\rangle = \Omega_n n \vert n,out\rangle
,\end{equation}
\noindent
so the evolution of $\vert n,out\rangle$ $\in \Phi_-^\times$ has a damping
prefactor $\exp(-n \eta /\tau_n)$, since $\Omega_n$ has an imaginary component.
Thus, as we now have damping factors $\exp(- n \eta /\tau_n)$ in the evolution
equations, it will be very easy to find Lyapunov variables, and in particular a
growing entropy, for almost any initial condition, as in Refs.\ \cite{5,6}. This
result must be compared with the one of Ref.\ \cite{17}, where a Lyapunov
variable
was found for the universe evolution using the standard methods
\cite{18,19} based on an arbitrary coarse-graining and a particular
(generalized molecular chaos) initial condition. The new result is much more
satisfactory than the old one, since now we have a growing entropy for almost
any initial condition, solving the Loschmidt criticisms.
\section{Decoherence}
\setcounter{equation}{0}
Decoherence is a dissipative process, and we know \cite{20} that it is closely
related to another dissipative phenomenon, that is particle creation from
the gravitational field \cite{21,22,23}. Particle creation has been studied in
the quantum field theory in curved spaces \cite{13,24} as the semiclassical
limit of quantum cosmology. We will restrict ourselves to the semiclassical
approximation to study the decoherence phenomenon.
Decoherence naturally appears in systems where the hamiltonian has complex
eigenvalues, as proved in Ref.\ \cite{25}. Let us consider the formalism
developed in Refs.\ \cite{9} and \cite{10} to see that this is also the case
in the system we are studying. We label
the three-geometry with the scale factor $a$ (the indices $\alpha$ and
$\beta$ symbolize the choice of the sign and constant in (2.5)), and
$\Phi_N$ is the mode $N$ of the matter field; precisely, we have used $n$ for
the discrete unstable states and
$k$ for the continuous stable states; when referring to both kinds of
states, we will use the index
$N$.
The WKB solution of the Wheeler-DeWitt equation reads (\cite{9} eq. (2.8))
\begin{equation}\Psi(a,[\Phi_N])=\exp [iM
S(a)]\chi(a,[\Phi_N]),\end{equation}
\noindent
where $S$ is the principal Jacobi function of (2.3) and
$\chi(a,[\Phi_N])$ can always be written as
\begin{equation}\chi(a,[\Phi_N])=\prod_N \chi_N(\eta ,\Phi_N).\end{equation}
We can obtain $\chi_N(\eta ,\Phi_N)$ by a Gaussian
approximation \cite{10}
\begin{equation}\chi_N(\eta ,\Phi_N) = A_N(\eta) \exp [i \alpha_N (\eta) -
B_N(\eta)\Phi_N^2].\end{equation}
The functions $A_N(\eta)$ and $\alpha_N(\eta)$ are real while $B_N(\eta)$ is
complex,
precisely $B_N(\eta)=B_{NR}(\eta)+iB_{NI}(\eta)$, and they can be obtained by
solving the
system
\cite{10}:
\begin{equation}A_N(\eta)=\pi^{-\frac{1}{4}}(2B_{NR}(\eta))^{\frac{1}{2}},
\end{equation}
\begin{equation}\dot{\alpha}_N(\eta)=-B_{NR}(\eta),\end{equation}
\begin{equation}\dot{B}_N(\eta)=-2iB_N^2(\eta)+\frac{i}{2}\Omega_N^2(\eta).\end{equation}
From \cite{10} or \cite{20} we can learn the conditions for the occurrence of
decoherence if we use only the real $\Omega_N$, namely the ones of the
continuous spectrum. In Ref. \cite{26} the computation is made only through a
linear approximation of $B(a)$ as a function of $a$. In Ref. \cite{20}
decoherence takes place only in the case where the Bogolyubov coefficients
$\beta_n$ are small and imaginary. In Ref. \cite{10} decoherence takes place
unless the environment is very ordered and fine tuned. Thus we cannot say that
there is decoherence for almost any initial condition if the
$\Omega_N$ are all real, as in these works. Let us see what happens
in our model if we use a basis with infinitely many complex modes $\Omega_n$
as well as real modes
$\Omega_k$.
From the wave function (4.1), and after the integration over the modes of the
scalar field (considered here as the ``environment''), we obtain the following
\emph{reduced} density matrix
$$\bar{\rho}_r(a, a') = \exp[-iMS_{\alpha}(a) +iMS_{\alpha}(a')]
\bar{\rho}^{\alpha \alpha}(a,a')$$
$$+ \exp[-i M S_{\alpha}(a) + i M S_{\beta}(a')]\bar{\rho}^{\alpha
\beta}(a,a')$$
$$+\exp[-iM S_{\beta}(a) + iM S_{\alpha}(a')]
\bar{\rho}^{\beta \alpha}(a,a') $$
\begin{equation}+ \exp [-i M S_{\beta} (a) + i M S_{\beta}(a')]
\bar{\rho}^{\beta
\beta}(a,a'),\end{equation}
\noindent
where, as we have said, $\alpha$ and $\beta$ symbolize two different classical
solutions, namely two different choices of the sign $\pm$ and the constant $C$
of
(2.5), and
\begin{equation}\bar{\rho}_r^{\alpha \beta} (a,a') = \prod_N
\bar{\rho}_{rN}^{\alpha \beta}(a,a') = \prod_N \int d\Phi_N
{\chi_N^\alpha}^\ast(\eta ,\Phi_N)\chi_N^\beta(\eta',\Phi_N).\end{equation}
From (3.20) of Ref. \cite{10} we have
\begin{equation}B_N=-\frac{i}{2}\frac{\dot g_N}{g_N},\end{equation}
\noindent
where $g_N$ is the wave function that represents the quantum state of the
universe being also the solution of
the differential equation
\begin{equation}\ddot g_N+\Omega_N^2 g_N=0,\end{equation}
\noindent
$\Omega_N$ can be the complex energy $\Omega_n$ in our treatment. From (2.6),
(2.7), and (3.1) we know that if the initial state is $\vert 0,in\rangle$, the
complex energies are
\begin{equation}\Omega_n^2=m^2a^2-m[2iB(n+\frac{1}{2})+mA^2].\end{equation}
In the more general case we use an arbitrary initial state $\vert 0,0\rangle$
instead of $\vert 0,in\rangle$. From the discussion of Sec. 3 we know that,
in a generic case, an infinite set of complex poles does exist. Then we must
replace (3.1) by $k^2=k_n^2$ ($n=0, 1, 2, \ldots$), where these are the points
where the infinite poles are located in the complex $k^2$ plane; (3.2) now reads
\begin{equation}\Omega_n^2=m^2a^2+k_n^2.\end{equation}
Let us now see that decoherence takes place if there is an infinite set of
complex modes (even in a more general case than that of the time evolution
fixed by (2.3), the only one we have studied in great detail above, if we use
the theorem of Appendix B).
Let us consider the asymptotic (or adiabatic) expansion of the function $g_N$
when $a\rightarrow +\infty$ in the basis of the out modes. $g_N$ is the wave
function that represents the state of the universe, corresponding to the
arbitrary initial state
$\vert 0,0\rangle$, and its expansion reads
\begin{equation}g_N=\frac{P_N}{\sqrt{2\Omega_N}}\exp [-i\int_0^\eta \Omega_N
d\eta]+\frac{Q_N}{\sqrt{2\Omega_N}} \exp [i \int_0^\eta \Omega_N
d\eta],\end{equation}
\noindent
where $P_N$ and $Q_N$ are arbitrary coefficients showing that $\vert
0,0\rangle$ is really arbitrary.
It is obvious that if all the $\Omega_N$ are real, as in the case of the
$\Omega_k$, (4.13) will have an oscillatory nature, as will its derivative.
This will also be the behaviour of $B_k$ in (4.9). Therefore the limit of $B_k$
when
$\eta \rightarrow +\infty$ will not be well defined even if $B_k$ itself is
bounded.
But if $\Omega_N$ is complex, the second term of (4.13) will have a damping
factor and the first a growing one. In fact, the complex extension of eq.
(4.13) (with $N=n$) reads
\begin{equation}g_n=\frac{P_n}{\sqrt{2\Omega_n}}\exp [-i\int_0^\eta \Omega_n
d\eta]+\frac{Q_n}{\sqrt{2\Omega_n}} \exp [i \int_0^\eta \Omega_n
d\eta],\end{equation}
Therefore when $\eta \rightarrow +\infty$ we have
\begin{equation}B_n
\approx -\frac{i}{2}\frac{\dot{g}_n}{g_n}=\frac{1}{2}\Omega_n.
\end{equation}
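This asymptotic behaviour is easy to verify numerically. The sketch below is not part of the paper and takes a constant complex frequency as a simplifying assumption: it builds $g_n$ from the expansion (4.14) with arbitrary coefficients $P_n$, $Q_n$ and checks that $B_n=-\frac{i}{2}\dot{g}_n/g_n$ approaches $\frac{1}{2}\Omega_n$ when $\Omega_n$ has a nonvanishing imaginary part, while for a real frequency $B_k$ keeps oscillating.

```python
import numpy as np

def B_of_eta(eta, Omega, P, Q):
    """B = -(i/2) g'/g for g = P exp(-i Omega eta) + Q exp(+i Omega eta),
    i.e. the expansion (4.14) with a constant (adiabatic) frequency."""
    g = P * np.exp(-1j * Omega * eta) + Q * np.exp(1j * Omega * eta)
    gp = 1j * Omega * (-P * np.exp(-1j * Omega * eta)
                       + Q * np.exp(1j * Omega * eta))
    return -0.5j * gp / g

P, Q = 0.7 + 0.2j, -1.3 + 0.5j     # an arbitrary initial state |0,0>

# complex mode Omega_n = E_n - (i/2)/tau_n: one branch of (4.14) grows and
# the other is damped, so B_n has the definite limit Omega_n / 2
Omega_n = 1.0 - 0.25j
B_limit = B_of_eta(60.0, Omega_n, P, Q)    # approximately Omega_n / 2

# real mode Omega_k: B_k oscillates and has no limit for generic P, Q
B_real = [B_of_eta(t, 1.0 + 0j, P, Q) for t in (100.0, 101.0)]
```

With this sign convention the $Q_n$ branch dominates at large $\eta$, and the limit is independent of the particular $P_n$, $Q_n$, which is the point of the argument above.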
Then we have two cases:
i) $\Omega_N=\Omega_k$ $\in {\cal R}^+$ for the real factors. Then we see that
when
$\eta \rightarrow +\infty$, the r.h.s. of (4.9) is an oscillatory function
with no limit in general. We only have a good limit for some particular
initial conditions (such as $Q_N=0$ or $P_N=0$ \cite{9,10,20,26}).
ii) $\Omega_N=\Omega_n=E_n-\frac{i}{2}\tau_n^{-1}$ $\in {\cal C}$ for the
complex factors. If we choose the lower Hardy class space $\Phi_-$ to define
our rigged Hilbert space we will have a positive imaginary part, and there will
be a growing factor in the first term of (4.13) and a damping factor in the
second one. In this case, for $a\rightarrow +\infty$, we have a definite limit:
$B_n=\frac{1}{2}\Omega_n$.
So we can say nothing about the limit of the real factors (and therefore
nothing in general about the product of these real factors), while the complex
factors have definite limits for all initial conditions, namely for every
$\vert 0,0\rangle$ state.
Therefore let us compute the $\bar{\rho}_{rn}^{\alpha\beta}$ for the complex
factors, since these are the only quantities whose limits we know for sure. So
let us compute these matrix elements using eq. (2.29) of Ref. \cite{9} or
(2.24) of \cite{10}, namely
\begin{equation}\bar{\rho}_{rn}^{\alpha\beta}(a,a')=\bigg(\frac{4B_{nR}
(\eta,\alpha)B_{nR}(\eta',\beta)}{[B^\ast_n(\eta,\alpha)+B_n(\eta',\beta)]^2}\bigg)^
\frac{1}{4}\exp
[-i\alpha_n(\eta,\alpha)+i\alpha_n(\eta',\beta)],\end{equation}
\noindent
where $\alpha$ and $\beta$ indicate the classical solution to which each $B$
refers. $\bar{\rho}_r^{\alpha\beta}(a,a')$ can be
obtained using (4.8). Let us first compute
\begin{equation}\log \vert \bar{\rho}_r^{\alpha\beta}(a,a')\vert =\sum_n
\log \vert \bar{\rho}_{rn}^{\alpha\beta}(a,a')\vert .\end{equation}
Now it can be proved that if ${\cal I}m B_n \approx {\cal I}m \frac{1}{2}
\Omega_n \not= 0$,
\begin{equation}\vert \bar{\rho}_{rn}^{\alpha\beta}(a,a')\vert <
1.\end{equation}
In fact, calling $B_n^\ast (\eta ,\alpha)= z = x+iy$ and $B_n(\eta',\beta) =
\zeta = \xi + i \eta$, we can compute
\begin{equation}\Big\vert \frac{ 4 x \xi}{(x+\xi )^2 + (y+\eta)^2}\Big\vert
^\frac{1}{4} < \Big\vert \frac{4 x \xi}{\vert x+\xi\vert ^2}\Big\vert
^\frac{1}{4}
\leq 1, \end{equation}
\noindent
since from $\vert x-\xi\vert ^2 \geq 0$ it follows that $4x\xi\leq \vert
x+\xi\vert ^2$. Then all terms on the r.h.s. of (4.17) are negative if
${\cal I}m$ $B_n \not = 0$.
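Both facts used here, that each factor in (4.18)-(4.19) is at most one, and that a product of many nearly identical factors strictly below one vanishes, can be illustrated with a quick numerical check (an illustrative sketch, not from the paper; the variable names are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(2)

# modulus of one factor, eq. (4.19): (4 x xi / ((x + xi)^2 + w^2))^(1/4),
# with x = Re B_n^*(eta, alpha) > 0, xi = Re B_n(eta', beta) > 0,
# and w the sum of the two imaginary parts
def factor(x, xi, w):
    return (4.0 * x * xi / ((x + xi) ** 2 + w ** 2)) ** 0.25

# random positive real parts and an arbitrary imaginary mismatch
x = rng.uniform(0.1, 5.0, 10_000)
xi = rng.uniform(0.1, 5.0, 10_000)
w = rng.uniform(-5.0, 5.0, 10_000)
assert np.all(factor(x, xi, w) <= 1.0)   # since 4 x xi <= (x + xi)^2

# a product of many nearly identical factors strictly below 1 vanishes,
# which is how the sum in (4.17) diverges to minus infinity
one_factor = factor(1.0, 1.2, 0.5)       # slightly below 1
product = one_factor ** 2000             # essentially zero
```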
Now from (4.11) or (4.12) we can see that the $B_n(\eta,\alpha)\approx
\frac{1}{2}\Omega_n(\eta,\alpha)$, corresponding to the discrete complex modes,
all have almost the same asymptotic value when $a \rightarrow +\infty$;
therefore all the terms on the r.h.s. of (4.17) have almost the same asymptotic
value. As they are all negative and almost equal, the sum in (4.17) has the
asymptotic value $-\infty$, and therefore $\bar{\rho}_r^{\alpha\beta}(a,a')$ of
(4.8) vanishes when only the discrete complex mode factors are considered.
Then we have decoherence if $B_n^\ast (\eta,\alpha) \not = B_n(\eta',\beta)$
namely if $\Omega^\ast_n(\eta,\alpha)\not = \Omega_n(\eta',\beta)$, or using
(4.12), if (for an infinite set of $n$) we have
\begin{equation}m^2[\pm (A^2+B^2\eta^2)^\frac{1}{2}+C_\alpha]^2 \not =
m^2[\pm(A^2+B^2{\eta'}^2)^\frac{1}{2}+C_\beta]^2.\end{equation}
So we necessarily have decoherence:
i) for different classical solutions, i.e. for different constants $C_\alpha
\not
= C_\beta$, or different $\pm$ signs, even if the time is the same $\eta =
\eta'$.
ii) for the same classical solutions ($C_\alpha = C_\beta$, and same $\pm$
sign)
if the times $\eta$ and $\eta'$ are different.
Now we can discuss the choice of either $\Phi_-$ or $\Phi_+$ for the test
function spaces. $\Phi_-$ produces the space $\Phi_-^\times$ of decaying
states, while $\Phi_+$ produces the space $\Phi_+^\times$ of growing
states.
Of course one choice becomes the other if we change the $\pm$ sign in (2.5) and
also in all other equations that are a consequence of (2.5). To choose $\Phi_-$
or
$\Phi_+$ corresponds to the choice of the arrow of time in the quantum regime,
as explained in the introduction. Choosing the Gel'fand triplet
$\Phi_-\subset {\cal H}\subset \Phi_-^\times$ is equivalent to saying that the
unstable created particles produced by the universe expansion will decay. This
is the motivation for the choice of the arrow of time in the quantum regime.
Nevertheless we can conceive more complex models where growing and decaying
particles could coexist at the same time. Then we can adopt a more
conservative attitude. We can consider the space
$\Phi_+\oplus\Phi_-$ as the space of test functions, with the corresponding
space
$\Phi_+^\times
\oplus
\Phi_-^\times$ where there are mixed decaying and growing states. This choice
is possible in the quantum regime, and therefore there will be no arrow of time
in this regime. But in the classical limit, the solutions in the space
$\Phi_+^\times$ will decohere from the solutions in $\Phi_-^\times$, since they
correspond to different choices of the $\pm$ sign \cite{26}.
Therefore for a classical universe we have a state either in $\Phi_+^\times$ or
in $\Phi_-^\times$ (which, on the other hand, is an irrelevant choice). Under
this perspective, we will not have an arrow of time in the quantum regime, but
this arrow will appear naturally in the classical regime.
\section{Correlation}
\setcounter{equation}{0}
From Ref. \cite{10} we can also learn the conditions for the existence of
correlations. But, for the same reason as in Sec. 4, we cannot say that
there is correlation for almost any initial condition. This correlation
depends on the initial conditions, and it can be easily obtained from the
system (4.4)-(4.6). In fact, for certain initial conditions, if the
$\Omega_N^2$ are real, it turns out that
$B_{NR}=0$, all the conditions for the existence of correlations of Ref.
\cite{10} fail to be fulfilled, and therefore there is
no correlation. Precisely, if
$B_{NR}=0$ and
$B_N=iB_{NI}$ and all the energies $\Omega_N$ are real (they would be better
denoted $\Omega_k$), (4.6) reads
\begin{equation}\dot{B}_{NI}=2B_{NI}^2+\frac{1}{2} \Omega_N^2,\end{equation}
\noindent
where all the variables are real. Therefore if $B_{NR}=0$ at $a=0$, then
$B_{NR}=0$ at every time and there is no correlation. Thus correlation, like
decoherence, depends crucially on the initial condition, and there is no
correlation for the above initial condition. Correlation takes place inside
each classical solution and can therefore be computed using the Wigner
function associated with
$\bar{\rho}_{rn}^{\alpha\alpha}(\eta,\eta')$ \cite{10,27}
\begin{equation}F^{\alpha\alpha}_W(n)(a,P)=\int_{-\infty}^{+\infty}d\Delta
\exp{(-2iP\Delta)} \bar{\rho}_{rn}^{\alpha\alpha}(a-\frac{\Delta}{M},
a+\frac{\Delta}{M}).\end{equation}
\noindent
where the arguments are $a \mp \frac{\Delta}{M}$, and $P$ is the canonical momentum.
Nothing new can be said about the real continuous modes; everything was already
said in Ref. \cite{10}. We must only study the complex discrete unstable modes.
This is nevertheless important since, most likely, the universe is in an
unstable mode, or more generally in a linear combination of unstable modes (see
Ref. \cite{7} and
\cite{28,29}, where the universe is in a ``tunneling'' unstable state, i.e., a
typical Gamow vector).
Then we can repeat the reasoning of \cite{10} from (2.24) to (2.28) and, under
the same assumptions, arrive at the last of these equations, which now reads
\begin{equation}F^{\alpha\alpha}_W(n)(T,P)\approx
C^2(T)\sqrt{\frac{\pi}{\sigma^2}} \exp\Big[-\frac{(P-M\dot S+\dot
\alpha-\frac{\dot{B}_{nI}}{4B_{nR}})^2}{\sigma^2}\Big].\end{equation}
In the case of our unstable states we have, for $a(\eta) \rightarrow
+\infty$,
\begin{equation}\dot\alpha =-B_{nR}=-\frac{1}{\sqrt
2}\Big[m^2B^2\eta^2+(m^4B^4\eta^4+4m^2B^2(n+\frac{1}{2})^2)^{\frac{1}{
2}}\Big]^{\frac{1}{2}},\end{equation}
\begin{equation}\dot{B}_{nI}=2\sqrt{2}\frac{m^3B^3
\eta}{\Big[m^2B^2\eta^2+\bigg(m^4B^4\eta^4+4m^2B^2(n+\frac{1}{2})^2\bigg)^
{\frac{1}{2}}\Big]^2+4m^2B^2(n+\frac{1}{2})^2},\end{equation}
\noindent
and the inverse of the correlation width of the reduced density matrix is
\begin{equation}\sigma^2=\frac{1}{2}\frac{m^4B^4\eta^2\Big[m^2B^2\eta^2+
(m^4B^4\eta^4+4m^2B^2(n+\frac{
1}{2}))^\frac{1}{2}\Big]^{\frac{1}{2}}}{\Big[m^2B^2\eta^2+
(m^4B^4\eta^4+4m^2B^2(n+\frac{
1}{2}))^\frac{1}{2}\Big]^2+4m^2B^2(n+\frac{1}{2})},\end{equation}
\noindent
valid if $\eta$ is large, so that $\vert \dot{B}_R\vert > \vert
\dot{B}_I\vert$.
We can see that when $\eta\rightarrow +\infty$, $\sigma^2
\rightarrow 0$; there is good correlation, and the Wigner function is a
Gaussian of width $\sigma$ peaked about
\begin{equation}P=M\dot S-\dot
\alpha+\frac{\dot{B}_{nI}}{4B_{nR}},\end{equation}
\noindent
where the first term on the r.h.s. gives the classical result and the last two
are the quantum corrections to the classical trajectory.
On the other hand, in this state we can predict strong correlations
between coordinates and momenta \cite{10}, because
\begin{equation}\bigg(M\dot S-\dot\alpha +\frac{\dot{B}_{nI}}{4B_{nR}}\bigg)^2
\gg \sigma^2,\end{equation}
\noindent
(in the preceding equation we can see that, in the limit of large $\eta$, the
l.h.s. is proportional to $\eta^2$, while $\sigma^2 \sim \eta^{-1}$).
For the generic initial
condition $\vert 0,0\rangle$ we can use (4.12) and we reach the same
conclusion. Therefore there is a perfect correlation for the unstable states of
our model. Decoherence and correlation will produce an asymptotic classical
regime in the far future.
\section{Conclusions}
We have demonstrated that the S-matrix of almost any quantum field theory in
curved space has an infinite set of poles (or cuts). The presence of
these singularities produces the appearance of unstable states (with complex
eigenvalues) in the universe evolution. The corresponding eigenvectors are
Gamow vectors and produce exponentially
decaying terms, as in the Friedrichs model of resonances. But the best
feature of these decaying terms is that they simplify and clarify
calculations.
E. Calzetta and F. Mazzitelli \cite{20} have demonstrated that, under suitable
conditions, the expansion of the universe leads to decoherence if this
expansion produces particle creation as well. Our unstable states
enlarge the set of initial conditions where decoherence occurs. In fact, the
damping factors (related to the imaginary parts of the S-matrix poles) allow
the interference elements of the reduced density matrix to disappear for almost
any initial condition.
Following the reasoning of Ref. \cite{10}, we have also demonstrated that the
unstable states satisfy the correlation conditions, which, together with the
decoherence phenomenon, are the origin of the semiclassical Einstein equations.
For simplicity, we assume (as usual) that the state of the
environment can be described by a Gaussian wave function (eq. (4.3)). This is
indeed a restricted class of states \cite{10}, but general states could also be
implemented in our formalism. The arbitrary choice of the
coefficients $P_N$ and
$Q_N$ in eq. (4.13) shows that the set of initial conditions is really
arbitrary.
Finally, we can say that the existence of unstable states in the universe
evolution (coming from singularities in the Riemann second sheet of the
analytical extension of the S-matrix) can help us to understand the quantum to
classical transition and other dissipative aspects of the universe
evolution.
\vskip 1cm
\section*{Acknowledgments}
We would like to thank I. Prigogine, L. Bombelli, E. Gunzig and
F.D. Mazzitelli for discussions. This work was partially supported by
the Directorate-General for Science, Research and Development of the Commission
of the European Communities under contract ECRU002 (DG-III), by the Instituts
Internationaux de Physique et de Chimie Solvay, and by the University of Buenos
Aires.
\vskip 1cm
\section{Introduction}
The quantization of analog signals to a finite number of bits is an essential step in many signal processing problems: it allows one to digitally transmit, process, and reconstruct signals. In \emph{quantized compressed sensing} the focus is on the recovery of low-complexity signals (e.g., signals that have a sparse representation in a given basis) from their quantized measurements. Such recovery problems are natural, appear frequently in real-world applications, and have been studied extensively in recent years (see, for example, the survey \cite{Dir18}).
\par
A very popular model in quantized compressed sensing is \emph{one-bit compressed sensing}. In this setup, the unknown signal is a (sparse) vector $x \in \mathbb{R}^n$ and linear measurements of the signal are generated using a \emph{measurement matrix} $A\in\mathbb{R}^{m\times n}$ where $m\ll n$. To make the model realistic, the $m$ linear measurements $(Ax)_i, \ 1 \leq i \leq m$ are corrupted by (random) noise, resulting in the analog measurement vector $Ax+\nu_{\operatorname{noise}}$. Then, each noisy measurement, that is, each coordinate of the vector $Ax+\nu_{\operatorname{noise}}$, is quantized into a single bit by comparing it to a threshold. During this quantization process corruption may occur again, leading to several `sign flips'. In other words, if we set $\tau_{\operatorname{thres}}\in \mathbb{R}^m$ to be the vector whose coordinates are the quantization thresholds, $\text{sign}(\cdot)$ is the sign function applied element-wise, and
\begin{equation} \label{eqn:1-bitModel}
q=\text{sign}(Ax + \nu_{\operatorname{noise}} + \tau_{\operatorname{thres}}),
\end{equation}
then the data one actually receives is a corrupted vector $q_{\operatorname{corr}} \in \{-1,1\}^m$, obtained from $q$ by several (possibly adversarial) sign changes.
In realistic situations, one has no control on the noise vector $\nu_{\operatorname{noise}}$ which determines the pre-quantization (analog) noise, nor on the sign changes that may occur during quantization. The one component that can be controlled is the vector $\tau_{\operatorname{thres}}$ which determines the thresholds used in the quantization process. As it happens, if the quantization thresholds are either fixed or are random and independent, then the one-bit quantizer $\text{sign}(\cdot + \tau_{\operatorname{thres}})$ can be implemented very efficiently; it should therefore come as no surprise that it is popular in engineering literature (see e.g.\ \cite{BoB08,MoH15}).
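The measurement model \eqref{eqn:1-bitModel}, together with the post-quantization corruption, is straightforward to simulate. The following sketch uses illustrative sizes and noise levels (not tied to the theorems below): it generates one-bit measurements of a sparse signal with uniformly distributed thresholds and then flips a fraction $\beta$ of the signs (here at random, rather than adversarially):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 200, 64, 4        # measurements, ambient dimension, sparsity

# s-sparse signal in the Euclidean unit ball
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)

A = rng.standard_normal((m, n))          # measurement matrix (Gaussian here)
nu = 0.3 * rng.standard_normal(m)        # analog pre-quantization noise
tau = rng.uniform(-2.0, 2.0, size=m)     # dithering: uniform random thresholds

q = np.sign(A @ x + nu + tau)            # eq. (1.1): one bit per measurement

beta = 0.05                              # fraction of corrupted bits
flip = rng.choice(m, size=int(beta * m), replace=False)
q_corr = q.copy()
q_corr[flip] *= -1                       # d_H(q_corr, q) <= beta * m
```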
\par
Our main interest here is to explore one-bit compressed sensing in realistic problems and to present realistic solutions for such problems. This requires addressing two core issues:
\par
\noindent{\bf \underline{Noise.}} It is a fact of life that noise plays a significant role in real-world problems. Indeed, one encounters noise at the analog, pre-quantization phase and also during the quantization process. What plays a crucial role is the noise level one faces: when the analog noise vector $\nu_{\operatorname{noise}}$ has iid coordinates, the noise level is captured by the variance of the coordinates; and during quantization that noise level is the maximal number of bits that can be `flipped'.
In realistic problems the two noise levels can be substantial: the variance of the coordinates of $\nu_{\operatorname{noise}}$ can be some constant that has nothing to do with the required reconstruction accuracy, and at the same time, the number of sign changes that may occur during quantization can be a fixed proportion of $m$. As a result, solutions to realistic recovery problems must be based on procedures that are robust to the effect of significant noise levels.
\par
\noindent{\bf \underline{Structured measurement matrices.}} In classical (`unquantized') compressed sensing, it is well known that optimal reconstruction guarantees are enjoyed by completely random measurement matrices, such as the standard Gaussian matrix. Unfortunately, such matrices are extremely difficult to `realize' in practice, as real-world measurement schemes are subject to physical constraints, and those constraints lead to highly structured measurement matrices. Thus, if one is looking for a realistic procedure, \emph{one must use structured measurement matrices}.
\vskip0.4cm
Despite the popularity of one-bit compressed sensing, the current state-of-the-art falls well short of addressing realistic scenarios. Firstly, all existing results deal with problems that are either noiseless or have an analog noise level (i.e. the variance of the coordinates of $\nu_{\operatorname{noise}}$) that is small relative to the wanted reconstruction accuracy, making the problem de facto noiseless (see \cite{Dir18} for an overview of these results). Moreover, the issue of post-quantization bit corruptions is typically not dealt with at all (two exceptions are \cite{DiM18a,PlV13}).
Secondly, almost all the relevant work has focused on a standard Gaussian measurement matrix. The reason is that one-bit compressed sensing can very easily \emph{fail} when using a non-Gaussian matrix---even if that matrix is known to perform optimally in classical compressed sensing. Indeed, when all the thresholds are set to $0$ (the scenario studied, e.g., in \cite{ALP14,DJR17}) there are $2$-sparse vectors that are `far away' from one another and still cannot be distinguished based on their quantized Bernoulli measurements (see \cite{ALP14}). Recently it was shown in \cite{DiM18a} that one-bit compressed sensing is possible for a large class of non-Gaussian measurement matrices---though still with iid rows---by invoking \emph{dithering}; that is, by selecting well-designed random thresholds for the quantization process. Unfortunately, while \cite{DiM18a} extends the scope of the method beyond the Gaussian case, it still does not address the key difficulty: that measurement matrices with iid rows are rather useless when it comes to the study of realistic problems.
Intuitively, the constraint that the measurement matrix should be structured is a major obstacle, because the behaviour of a structured measurement matrix is likely to be less favourable than of a fully random one. Thankfully, not all is lost: there are examples in classical compressed sensing literature which show that near-optimal sample complexities can still be achieved using \emph{structured random matrices}; that is, using matrices that are generated by injecting some (minimal) randomness into realistic measurement models.
A very popular family of structured random matrices are the randomly sub-sampled circulant matrices, where the resulting measurements amount to randomly sub-sampling the discrete circular convolution of the unknown signal with a random pulse (for more details see below). This method of measurement is very popular and is used extensively in applications, ranging from SAR radar imaging through optical imaging and channel estimation (see e.g.\ \cite{Rom09} and the references therein).
\par
The goal of this article is to resolve the two issues that are at the heart of real-world problems: that the measurement matrix must be structured and that the given measurements are noisy. Indeed,
\begin{framed}
We establish an optimal (up to logarithmic factors) one-bit sparse recovery procedure for realistic problems: the pre-quantization noise can be high; during the quantization process a large fraction of the signs may change in an adversarial way; and the measurement matrix is structured---a \emph{randomly sub-sampled circulant matrix}.
\end{framed}
\par
Before we formulate our main results, let us introduce some notation, beginning with the measurement matrix we use. Let $\xi\in \mathbb{R}^n$ be a random vector with independent, mean-zero, unit variance, $L$-subgaussian\footnote{Recall that a centred random variable is $L$-subgaussian if for every $p \geq 2$, $\|\xi\|_{L_p} \leq L\sqrt{p} \|\xi\|_{L_2}$.} coordinates. Let $\Gamma_{\xi}$ be the circulant matrix generated by $\xi$; that is, the $j$-th row of $\Gamma_\xi$ is $(\xi_{j \ominus k})_{k=1}^n$ where $\ominus$ is subtraction mod $n$. Consider independent $\{0,1\}$-valued random variables $\delta_1,\ldots \delta_n$ with mean $\delta=m/n$, which are independent of $\xi$; let $I=\{i\in [n] \ : \ \delta_i=1\}$ and set $R_I$ to be the associated restriction operator. The measurement matrix we use is $A =R_I\Gamma_{\xi}$, i.e., a randomly sub-sampled circulant matrix whose rows are chosen from the rows of $\Gamma_\xi$ according to the selectors $(\delta_i)_{i=1}^n$.
\par
Next, let us turn to the analog noise vector. Let $\nu_1,...,\nu_m$ be independent copies of a random variable $\nu$ (that need not be centred) which are also independent of $(\delta_i)_{i=1}^n$ and $\xi$. Thus, the noise vector $\nu_{\operatorname{noise}}=(\nu_i)_{i=1}^m$ consists of iid coordinates, but can have a nontrivial `drift'.
The choice of the thresholds used in the quantization process turns out to be of central importance. The thresholds are defined using $\tau_1,...,\tau_m$, which are independent copies of a centred random variable $\tau$. Set $\tau_{\operatorname{thres}}=(\tau_i)_{i=1}^m$ and assume that $\tau_{\operatorname{thres}}$ is independent of $(\delta_i)_{i=1}^n$, $\xi$, and $\nu_{\operatorname{noise}}$.
Finally, we assume that at most $\beta m $ bits are corrupted arbitrarily during quantization for some parameter $0<\beta<1$. Thus, if $q$ is as in \eqref{eqn:1-bitModel} (i.e., $q$ is the `perfect' quantization of the vector of noisy analog measurements) and $d_H$ denotes the Hamming distance, then instead of $q$ one observes a corrupted measurement vector $q_{\operatorname{corr}} \in \{-1,1\}^m$ which satisfies $d_H(q_{\operatorname{corr}},q)\leq \beta m$.
\par
Throughout we assume that the unknown signal is $s$-sparse and denote by $\Sigma_{s,n}$ the set of $s$-sparse vectors in the Euclidean unit ball in $\mathbb{R}^n$. The recovery procedure we use is
\begin{equation} \label{eqn:progIsomorphicIntro}
\max_{z\in \Sigma_{s,n}} \frac{1}{m}\inr{q_{\operatorname{corr}},Az} - \frac{1}{2\lambda} \frac{\|\Gamma_{\xi} z\|_2^2}{n},
\end{equation}
and its performance is described in the following theorem, which is the main result of this article.
\begin{Theorem} \label{thm:isomorphic}
For $L \geq 1$ there exist constants $c_1,...,c_4$ that depend only on the subgaussian constant $L$, and poly-logarithmic factors $\gamma_1,\gamma_2$ satisfying
$$
\gamma_1\leq \log(s)\log(n), \qquad \gamma_2\leq \log(n)\log\log(n)
$$
such that the following holds. Fix $0<\rho<1$ and assume that $\nu$ is $L$-subgaussian and that $|\mathbb{E} \nu| \leq c_1 \rho$. Set $\bar{\nu}=\nu - \mathbb{E} \nu$, let
$$
\lambda \geq c_2 \gamma_1 \max\{\|\bar{\nu}\|_{L_2},1\} \log(e\gamma_1^2 \max\{\|\bar{\nu} \|_{L_2},1\}/\rho),
$$
and set $\beta$ such that
$$
\beta \sqrt{\log(e/\beta)} \leq \frac{c_3}{\gamma_1 \gamma_2} \cdot \frac{\rho}{\lambda}.
$$
Let $\tau$ be uniformly distributed on $[-\lambda,\lambda]$ and set
$$
m \geq c_4 \gamma_1^2 \gamma_2^2 \frac{\lambda^2 s \log(en/s)}{\rho^2}.
$$
Then, with probability at least $1-(\frac{s}{n})^2$, for any $s$-sparse $x\in \mathbb{R}^n$ with $\|x\|_2\leq 1$, any solution $x^{\#}$ to \eqref{eqn:progIsomorphicIntro} satisfies $\|x^{\#}-x\|_2\leq \rho$.
\end{Theorem}
\begin{Remark}
As the proof of Theorem~\ref{thm:isomorphic} shows, the probability estimate can be improved to $1-(s/n)^\zeta$ for any $\zeta \geq 2$ at a price of modified constants $c_1,...,c_4$.
\end{Remark}
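To make the program \eqref{eqn:progIsomorphicIntro} concrete, here is a heuristic solver sketch: projected gradient ascent on the objective, with hard thresholding onto $\Sigma_{s,n}$ after every step. This only illustrates the quantity being optimized; it is not the analysis of the paper, and the step size, iteration count, and problem sizes are ad hoc:

```python
import numpy as np

def hard_threshold(z, s):
    """Project onto Sigma_{s,n}: keep the s largest-magnitude entries,
    then clip to the Euclidean unit ball."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out / max(np.linalg.norm(out), 1.0)

def recover(q_corr, Gamma, I, s, lam, n_iter=300, step=0.1):
    """Heuristic ascent on <q_corr, A z>/m - ||Gamma z||_2^2 / (2 lam n),
    where A = Gamma restricted to the selected rows I."""
    n = Gamma.shape[0]
    A = Gamma[I]
    m = len(I)
    z = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ q_corr / m - Gamma.T @ (Gamma @ z) / (lam * n)
        z = hard_threshold(z + step * grad, s)
    return z
```

On small synthetic instances this tends to recover the direction of the true signal; any serious use should of course follow the parameter choices of Theorem 1.1.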
The number of bits that can be safely corrupted during quantization without damaging the accuracy is, up to logarithmic terms, the best that one can hope for in the setting of Theorem~\ref{thm:isomorphic} --- it is possible to show that if one aims for recovery with accuracy $\rho$, then no more than $\sim \rho m$ of the bits can be corrupted in an adversarial way during quantization (up to logarithmic terms). But what is more striking is that Theorem~\ref{thm:isomorphic} is (almost) optimal in a rather strong (minimax) sense, as the next result shows.
\begin{Theorem} \label{thm:lower}
Let $\nu$ be a centred Gaussian random variable, set $A$ to be a (random) measurement matrix that satisfies, with probability at least $0.95$,
\begin{equation} \label{eq:cond-on-A-lower-bound}
\|Ax\|_2 \leq \kappa \sqrt{m} \|x\|_2, \qquad \text{for all } x \in \Sigma_{s,n}.
\end{equation}
Let $\Psi$ be any recovery procedure such that, for every fixed $x \in \Sigma_{s,n}$, when receiving as data the measurement matrix $A$ and the noisy linear measurements $((Ax)_i+\nu_i)_{i=1}^m$, $\Psi$ returns $x^\sharp$ that satisfies $\|x^\sharp-x\|_2 \leq \rho$ with probability $0.9$. Then
$$
m \geq c\kappa^{-2} \|\bar{\nu}\|_{L_2}^2 \frac{s \log(en/s)}{\rho^2}.
$$
\end{Theorem}
The meaning of Theorem~\ref{thm:lower} is that even if one receives the noisy analog linear measurements prior to quantization, and is then free to use those measurements as one sees fit, the sample size required for recovery with accuracy $\rho$ is at least $\|\bar{\nu}\|_{L_2}^2 s\log(en/s)/\rho^2$. In light of Theorem \ref{thm:isomorphic}, and perhaps contrary to intuition, this means that quantization is not a statistically expensive procedure in the presence of nontrivial analog noise: by using one-bit quantization with uniformly distributed thresholds, combined with the efficient recovery scheme \eqref{eqn:progIsomorphicIntro}, the recovery performance is the best that one can hope for (up to a poly-logarithmic factor), even if one had been given the complete noisy analog measurements. In particular, sophisticated quantization schemes that collect more bits per measurement (see e.g., \cite{DJR17,XuJ18}) and/or quantize in an adaptive way (e.g., the methods in \cite{FKS17,HuS18}) are not effective in realistic problems in which the analog noise level is nontrivial.
The situation in the less realistic scenario of a low analog noise level is entirely different and an appropriate version of Theorem~\ref{thm:isomorphic} may be used to achieve the optimal sample complexity in that scenario as well (see Section \ref{sec:extensions}).
\begin{Remark}
It is well known that \eqref{eq:cond-on-A-lower-bound} holds with probability $0.95$ for many random measurement matrices studied in compressed sensing if $m\geq c\gamma s \log(en/s)$ and $\gamma$ is a poly-logarithmic factor; in particular, \eqref{eq:cond-on-A-lower-bound} is satisfied when $A$ has iid subgaussian rows or when $A$ is a partial circulant matrix generated by an $L$-subgaussian random vector.
\end{Remark}
\par
The article is organized as follows. In Section~\ref{sec:analysisGen} we analyze the recovery procedure \eqref{eqn:progIsomorphicIntro} for a general matrix $\Gamma$ (and not only for a circulant matrix $\Gamma_{\xi}$) and deduce sufficient conditions on $\Gamma$ that ensure that the procedure is successful. In Section~\ref{sec:circulnat} we verify that the required conditions are satisfied by a subgaussian circulant matrix, thereby completing the proof of Theorem~\ref{thm:isomorphic}. Section~\ref{sec:lower} is devoted to the proof of Theorem~\ref{thm:lower}, and in Section~\ref{sec:extensions} we sketch several extensions of Theorem~\ref{thm:isomorphic}, including its implications for the low noise regime.
\subsection{Notation}
For $k\in \mathbb{N}$ let $[k]=\{1,\ldots,k\}$. $|S|$ denotes the cardinality of a set $S$. Given $x \in \mathbb{R}^n$, set $\|x\|_0=|\{i \in [n] \ : \ x_i\neq 0\}|$; let $\Sigma_{s,n}=\{x\in \mathbb{R}^n \ : \ \|x\|_0\leq s, \ \|x\|_2\leq 1\}$ be the set of $s$-sparse vectors in the Euclidean unit ball; $\|x\|_p$ denotes the $\ell_p$-norm and put $B_p^n = \{x \in \mathbb{R}^n : \|x\|_p \leq 1\}$.
Recall that $d_H$ is the (unnormalized) Hamming distance on the discrete cube and for a centred random variable $\xi$ set
$$
\|\xi\|_{\psi_2} = \sup_{p\geq 1}\frac{\|\xi\|_{L_p}}{\sqrt{p}}.
$$
Finally, $c$ and $C$ denote absolute constants; their value may change from line to line. $c_\alpha$ or $C(\alpha)$ denotes a constant that depends only on the parameter $\alpha$. We write $a\lesssim_{\alpha} b$ if $a\leq C_{\alpha} b$, and $a\simeq_{\alpha} b$ means that both $a\lesssim_{\alpha} b$ and $a\gtrsim_{\alpha} b$ hold.
\section{{Analysis of the recovery method}} \label{sec:analysisGen}
In what follows $\Gamma$ is an ${n\times n}$ matrix, and the measurement matrix we consider is obtained by randomly selecting rows of $\Gamma$ using independent $\{0,1\}$-valued random variables (selectors) $\delta_1,\ldots,\delta_n$ with mean $\delta=m/n$. Hence, $A$ is defined by
$$
A z = \sum_{i=1}^n \delta_i \inr{\Gamma z,e_i} e_i.
$$
Observe that the number of measurements may differ slightly from $m$: it is the cardinality of the set $\{i \in [n]: \delta_i=1\}$, which, by the Chernoff bound, concentrates in $[m/2,3m/2]$ with probability at least $1-e^{-cm}$.
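As an illustrative aside (not part of the proof), the selector-based subsampling is easy to sketch numerically. In the following Python snippet the dimensions and the Gaussian choice of `Gamma` are arbitrary placeholders; each row is kept independently with probability $m/n$, and the retained count concentrates around $m$.

```python
import numpy as np

def subsample_rows(Gamma, m, rng):
    """Keep each row of Gamma independently with probability delta = m/n.

    Returns the retained rows and the selector pattern. The number of
    retained rows concentrates around m (Chernoff); it need not equal m.
    """
    n = Gamma.shape[0]
    selectors = rng.random(n) < m / n   # iid {0,1} selectors with mean m/n
    return Gamma[selectors], selectors

rng = np.random.default_rng(0)
n, m = 4096, 512
Gamma = rng.standard_normal((n, n))
A, selectors = subsample_rows(Gamma, m, rng)
# The sample size |{i : delta_i = 1}| is close to m with high probability.
print(A.shape[0], selectors.sum())
```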
\par
Our proof of Theorem~\ref{thm:isomorphic} consists of two independent components. We first show that the program \eqref{eqn:progIsomorphicIntro} succeeds if $\Gamma$ behaves `as if it were a Gaussian matrix' in two distinct ways:
\begin{itemize}
\item It acts as an isomorphism on sparse vectors, i.e., for suitable constants $0<c<C<\infty$,
\begin{equation}
\label{eqn:isomorphicIntro}
c\|x\|_2\leq \frac{1}{\sqrt{n}}\|\Gamma x\|_2\leq C\|x\|_2, \qquad \text{for all} \ x\in \Sigma_{s,n}.
\end{equation}
\item Any vector in $\Gamma(\Sigma_{s,n})$ satisfies a \emph{growth property}: that is, for every $x\in \Sigma_{s,n}$,
\begin{equation} \label{eq:growth-0}
\|\Gamma x\|_{[k]} \leq \gamma_1 \sqrt{\frac{k\log(en/k)}{n}} \|\Gamma x\|_{2}, \qquad \text{for all} \ k\geq s,
\end{equation}
where $\gamma_1$ is a poly-logarithmic factor in $s$ and $n$. Here, for a vector $w\in \mathbb{R}^n$, $w^*$ is the non-increasing rearrangement of $(|w_i|)_{i=1}^n$ and
$$
\|w\|_{[k]} = \Bigl(\sum_{i=1}^k (w_i^*)^2\Bigr)^{1/2}
$$
is the $\ell_2$-norm of the $k$-largest coordinates.
\end{itemize}
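For intuition only, both Gaussian-like properties can be probed numerically. The sketch below uses arbitrary dimensions, a Gaussian $\Gamma$, and the constant $3$ as a stand-in for the poly-logarithmic factor $\gamma_1$; it evaluates the isomorphism ratio and one instance of the growth inequality for a random sparse vector.

```python
import numpy as np

def norm_topk(w, k):
    """The norm ||w||_[k]: l2 norm of the k largest coordinates of |w|."""
    return np.linalg.norm(np.sort(np.abs(w))[::-1][:k])

rng = np.random.default_rng(1)
n, s = 2048, 10
Gamma = rng.standard_normal((n, n))

# An s-sparse unit vector x in Sigma_{s,n}.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

Gx = Gamma @ x
# Isomorphism: ||Gamma x||_2 / sqrt(n) should be of order ||x||_2 = 1.
ratio = np.linalg.norm(Gx) / np.sqrt(n)

# Growth: compare ||Gamma x||_[k] with sqrt(k log(en/k)/n) * ||Gamma x||_2.
k = 50
growth_lhs = norm_topk(Gx, k)
growth_rhs = np.sqrt(k * np.log(np.e * n / k) / n) * np.linalg.norm(Gx)
print(ratio, growth_lhs <= 3 * growth_rhs)
```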
\par
In the second part of the proof we show that a random circulant matrix generated by a subgaussian random vector exhibits the Gaussian-like behaviour \eqref{eqn:isomorphicIntro} and \eqref{eq:growth-0} with high probability, despite the rather `limited randomness' such a matrix has. This surprising feature is discussed in detail in Section~\ref{sec:circulnat}.
To start our analysis, fix the matrices $\Gamma$ and $A$; the given set $T\subset \mathbb{R}^n$; and the corrupted vector of quantized measurements $q_{\operatorname{corr}}$. Define the functional $\phi:\mathbb{R}^n\to \mathbb{R}$ by
\begin{equation} \label{eqn:FunIsomorphic}
\phi(z)=\frac{1}{m}\inr{q_{\operatorname{corr}},Az} - \frac{1}{2\lambda} \frac{\|\Gamma z\|_2^2}{n}.
\end{equation}
The recovery procedure we explore is
\begin{equation}
\label{eqn:progIsomorphic}
\max_{z \in T} \phi(z).
\end{equation}
Although our focus is on the set $T=\Sigma_{s,n}$ (leading to the program \eqref{eqn:progIsomorphicIntro}), the method of analysis presented here can be used to study \eqref{eqn:progIsomorphic} for other sets $T$, most notably $T=\sqrt{s} B_1^n\cap B_2^n$. The latter set is used in approximate sparse recovery problems (see more details in Section~\ref{sec:extensions}).
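As a purely illustrative aside, $\phi$ and the program \eqref{eqn:progIsomorphic} can be approximated numerically on a tiny synthetic instance. The hard-thresholding gradient ascent below is our own heuristic stand-in for the maximization over $\Sigma_{s,n}$; it is not the object of the analysis in this paper, and all dimensions and constants are arbitrary.

```python
import numpy as np

def phi(z, Gamma, selectors, q, m, lam):
    """phi(z) = <q_corr, A z>/m - ||Gamma z||_2^2 / (2 lam n)."""
    n = Gamma.shape[0]
    Gz = Gamma @ z
    return q @ (selectors * Gz) / m - Gz @ Gz / (2 * lam * n)

def iht_ascent(Gamma, selectors, q, m, lam, s, steps=200, lr=0.1):
    """Heuristic maximizer of phi over Sigma_{s,n}: gradient ascent with
    hard thresholding to s-sparse vectors and projection onto the unit
    ball.  An illustrative stand-in only, not the paper's method."""
    n = Gamma.shape[0]
    z = np.zeros(n)
    g_lin = Gamma.T @ (selectors * q) / m      # gradient of the linear term
    for _ in range(steps):
        z = z + lr * (g_lin - Gamma.T @ (Gamma @ z) / (lam * n))
        small = np.argsort(np.abs(z))[:-s]     # all but the s largest
        z[small] = 0.0                         # hard-threshold to s-sparse
        nz = np.linalg.norm(z)
        if nz > 1.0:
            z /= nz                            # project onto the unit ball
    return z

# Tiny synthetic instance: one-bit dithered measurements of an s-sparse x.
rng = np.random.default_rng(2)
n, m, s, lam = 512, 256, 5, 2.0
Gamma = rng.standard_normal((n, n))
selectors = (rng.random(n) < m / n).astype(float)
x = np.zeros(n); x[:s] = 1 / np.sqrt(s)
tau = rng.uniform(-lam, lam, n)                # uniform dither
q = np.sign(Gamma @ x + 0.1 * rng.standard_normal(n) + tau)
z_hat = iht_ascent(Gamma, selectors, q, m, lam, s)
print(phi(z_hat, Gamma, selectors, q, m, lam), z_hat @ x)
```

The output of the heuristic is $s$-sparse, lies in the unit ball, and is positively correlated with the target $x$ on typical draws.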
\par
To establish Theorem~\ref{thm:isomorphic} consider the `excess functional' $\phi(z)-\phi(x)$. In what follows we show that for the wanted reconstruction error $\rho$, and using $m$ measurements, one can ensure that $\phi(z)-\phi(x) < 0$ whenever $x,z \in T$ and $\|x-z\|_2 \geq c\rho$. That implies that, for any $x\in T$, any solution $x^{\#}$ to \eqref{eqn:progIsomorphic} satisfies $\|x^{\#}-x\|_2 \leq c\rho$.
\subsection{Decomposition of the excess risk}
The first step in the proof is a decomposition of the excess functional. Observe that
\begin{align}
\label{eqn:mainDecompExcess}
\phi(z)-\phi(x) & = \frac{1}{m} \inr{q_{\operatorname{corr}},Az-Ax} -\frac{1}{2\lambda}\frac{\|\Gamma z\|_2^2}{n} + \frac{1}{2\lambda}\frac{\|\Gamma x\|_2^2}{n} \nonumber
\\
& = \frac{1}{m} \inr{q_{\operatorname{corr}}-\operatorname{sign}{(Ax+\nu_{\operatorname{noise}}+\tau_{\operatorname{thres}})},A(z-x)} \nonumber
\\
& + \frac{1}{m} \left(\inr{\operatorname{sign}{(Ax+\nu_{\operatorname{noise}}+\tau_{\operatorname{thres}})},A(z-x)} \right. \nonumber \\
& \qquad \qquad \qquad \qquad \qquad - \left. \mathbb{E}_{\delta \otimes \nu \otimes \tau} \inr{\operatorname{sign}{(Ax+\nu_{\operatorname{noise}}+\tau_{\operatorname{thres}})},A(z-x)} \right) \nonumber
\\
& + \frac{1}{m}\mathbb{E}_{\delta \otimes \nu \otimes \tau} \inr{\operatorname{sign}{(Ax+\nu_{\operatorname{noise}}+\tau_{\operatorname{thres}})},A(z-x)} -\frac{1}{2\lambda}\frac{\|\Gamma z\|_2^2}{n} + \frac{1}{2\lambda}\frac{\|\Gamma x\|_2^2}{n} \nonumber
\\
& =: (1)+(2)+(3),
\end{align}
where $\mathbb{E}_{\delta \otimes \nu \otimes \tau}$ is the expectation with respect to $(\delta_i)_{i=1}^n$, $(\nu_i)_{i=1}^n$ and $(\tau_i)_{i=1}^n$, $\nu_{\operatorname{noise}}=(\delta_i \nu_i)_{i=1}^n$ and $\tau_{\operatorname{thres}}=(\delta_i \tau_i)_{i=1}^n$.
The goal is to use this decomposition and find a constant $C$ and a high probability event on which, for every $x \in T$ and $z \in T$ that satisfy $\|x-z\|_2 \gtrsim \rho$,
\begin{equation} \label{eq:path}
|(1)| \leq C\|x-z\|_2^2; \ \ |(2)| \leq C\|x-z\|_2^2 \ \ {\rm and} \ \ (3) \leq -4C\|x-z\|_2^2,
\end{equation}
implying that $\phi(z)-\phi(x) \leq -2C\|x-z\|_2^2$ when $\|x-z\|_2 \gtrsim \rho$.
\vskip0.4cm
Writing $q_{\operatorname{corr}} =(q_i)_{i=1}^n$, the three terms in \eqref{eqn:mainDecompExcess} are
\begin{equation} \label{eq:(1)-using-Gamma}
(1) = \frac{1}{m} \sum_{i=1}^n \delta_i \bigl(q_i -\operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i)\bigr) \cdot \bigl(\Gamma(z-x)\bigr)_i;
\end{equation}
\begin{align} \label{eq:(2)-using-Gamma}
(2) = \frac{1}{m}\sum_{i=1}^n \Bigl( & \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot \bigl(\Gamma(z-x)\bigr)_i
\\
& - \mathbb{E}_{\delta \otimes \nu \otimes \tau} \ \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot \bigl(\Gamma(z-x)\bigr)_i\Bigr); \nonumber
\end{align}
and
\begin{equation} \label{eq:(3)-using-Gamma}
(3) = \frac{1}{m}\sum_{i=1}^n \mathbb{E}_{\delta \otimes \nu \otimes \tau} \ \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot \bigl(\Gamma(z-x)\bigr)_i -\frac{1}{2\lambda}\frac{\|\Gamma z\|_2^2}{n} + \frac{1}{2\lambda}\frac{\|\Gamma x\|_2^2}{n}.
\end{equation}
\vskip0.4cm
\noindent {\bf{\underline{The term \eqref{eq:(3)-using-Gamma}}}}
\vskip0.4cm
To estimate \eqref{eq:(3)-using-Gamma} it suffices to show that for every $x \in T$ and every $z \in T$ that satisfies $\|z-x\|_2 \gtrsim \rho$,
\begin{framed}
\begin{equation} \label{eq:est-on-(3)-1}
\frac{1}{m}\sum_{i=1}^n \mathbb{E}_{\delta \otimes \nu \otimes \tau} \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \bigl(\Gamma(z-x)\bigr)_i \leq \frac{1}{\lambda} \frac{\inr{\Gamma x,\Gamma (z-x)}}{n} + \frac{\rho}{16 \lambda} \frac{\|\Gamma (z-x)\|_2}{\sqrt{n}}.
\end{equation}
\end{framed}
Indeed, if that is the case then
\begin{align*}
(3) \leq & \frac{1}{\lambda n} \left(\inr{\Gamma x,\Gamma (z-x)} - \frac{\|\Gamma z\|_2^2}{2} + \frac{\|\Gamma x\|_2^2}{2} \right) + \frac{\rho }{16 \lambda}\frac{\|\Gamma(z-x)\|_2}{\sqrt{n}}
\\
= & \frac{1}{2\lambda} \frac{\|\Gamma(z-x)\|_2}{\sqrt{n}} \left(- \frac{\|\Gamma (z-x)\|_2}{\sqrt{n}}+\frac{\rho}{8} \right)=(*).
\end{align*}
Hence, if the matrix $\Gamma$ satisfies a \emph{small-ball property}, namely, that there is a constant $0<\kappa<1$ such that for every $x,z \in T$,
\begin{equation}
\label{eqn:SBGammaAss}
\frac{\|\Gamma(z-x)\|_2}{\sqrt{n}} \geq \kappa \|z-x\|_2,
\end{equation}
and if $\|x-z\|_2 \geq \rho/(4\kappa)$, then
\begin{equation} \label{eq:est-on-(3)-2}
(*) \leq -\frac{1}{2\lambda} \frac{\|\Gamma(z-x)\|_2}{\sqrt{n}} \cdot \frac{\kappa}{2} \|z-x\|_2 \leq -\frac{\kappa^2}{4\lambda} \|z-x\|_2^2,
\end{equation}
which is the wanted estimate. Of course, it suffices if \eqref{eqn:SBGammaAss} holds only when $\|x-z\|_2 \gtrsim \rho$.
\vskip0.4cm
\noindent {\bf{\underline{The term \eqref{eq:(2)-using-Gamma}}}}
\vskip0.4cm
If we set
$$
\Psi(x,y)= \frac{1}{m}\sum_{i=1}^n \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot \bigl(\Gamma(y/\|y\|_2^2) \bigr)_i,
$$
then
$$(2)=\|z-x\|_2^2 \ \bigl(\Psi(x,z-x)-\mathbb{E}_{\delta \otimes \nu \otimes \tau}\Psi(x,z-x)\bigr).$$
Observe that
\begin{align*}
& \sup_{x \in T} \ \sup_{\{z \in T, \ \|x-z\|_2 \geq \rho\}} \bigl|\Psi(x,z-x)-\mathbb{E}_{\delta \otimes \nu \otimes \tau}\Psi(x,z-x)\bigr|
\\
& \qquad \leq \sup_{x \in T} \ \sup_{\{y \in T-T, \ \|y\|_2 \geq \rho\}} \bigl|\Psi(x,y)-\mathbb{E}_{\delta \otimes \nu \otimes \tau}\Psi(x,y)\bigr|
\end{align*}
and the wanted estimate on \eqref{eq:(2)-using-Gamma} follows once one identifies a high probability event on which, for every $x \in T$ and any $y \in T-T$ such that $\|y\|_2 \geq \rho$,
$$
\bigl|\Psi(x,y)-\mathbb{E}_{\delta \otimes \nu \otimes \tau}\Psi(x,y)\bigr| \leq \frac{1}{16 \lambda}.
$$
Such an estimate calls for a `\emph{star-shape argument}': if $f:\mathbb{R}^n\to \mathbb{R}_+$ is positive homogeneous and $W\subset \mathbb{R}^n$ is star-shaped around $0$, i.e., $\theta w\in W$ for all $w\in W$ and $0<\theta<1$, then
$$
\sup_{\{w\in W \ : \ \|w\|_2\geq \rho\}} f(w/\|w\|_2^2) \leq \sup_{\{w\in W \ : \ \|w\|_2=\rho\}} f(w)/\rho^2.
$$
Observe that for every fixed $x$, $(\delta_i)_{i=1}^n$, $(\nu_i)_{i=1}^n$, and $(\tau_i)_{i=1}^n$ the function
$$
f(w)=\Bigl|\frac{1}{m}\sum_{i=1}^n \bigl(\delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot (\Gamma w)_i - \mathbb{E}_{\delta \otimes \nu \otimes \tau} \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot (\Gamma w)_i \bigr)\Bigr|
$$
is positive homogeneous in $w$, and by the star-shape argument
$$
\sup_{\{y \in T-T, \ \|y\|_2 \geq \rho\} } f(y/\|y\|_2^2) \leq \sup_{\{y \in {\rm star}(T-T), \ \|y\|_2 = \rho\}} \frac{f(y)}{\rho^2},
$$
where for a set $W$ we denote by ${\rm star}(W)$ the set $\{\theta w : 0 \leq \theta \leq 1, \ w \in W\}$.
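As a quick sanity check (illustrative only), the star-shape inequality can be verified numerically for a specific positive homogeneous functional; here $f(w)=|\inr{a,w}|$ for an arbitrary fixed $a$, and since $W$ is star-shaped, $\rho w/\|w\|_2$ lies in ${\rm star}(W)$ whenever $w \in W$ has $\|w\|_2 \geq \rho$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 20, 0.5
a = rng.standard_normal(n)
f = lambda w: np.abs(a @ w)        # positive homogeneous of degree one

violations = 0
for _ in range(1000):
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)                 # random direction
    w = rng.uniform(rho, 5.0) * u          # a point with ||w||_2 >= rho
    # rho * u is the rescaled point of star(W) on the sphere of radius rho.
    if f(w / np.linalg.norm(w) ** 2) > f(rho * u) / rho ** 2 + 1e-12:
        violations += 1
print(violations)
```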
Therefore, one has to show that with high probability,
\begin{align*}
\sup_{x \in T} \ \sup_{\{y \in {\rm star}(T-T), \|y\|_2 = \rho\} } \Bigl| & \frac{1}{m}\sum_{i=1}^n \Bigl(\delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot (\Gamma y)_i
\\
& \qquad -\mathbb{E}_{\delta \otimes \nu \otimes \tau} \delta_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot (\Gamma y)_i \Bigr)\Bigr| \leq \frac{\rho^2}{16 \lambda}, \nonumber
\end{align*}
which follows by a standard symmetrization argument \cite{GiZ84} once
\begin{framed}
\begin{equation} \label{eq:est-on-(2)-1}
\sup_{x \in T} \ \sup_{\{ y \in {\rm star}(T-T), \ \|y\|_2 = \rho\} } \Bigl| \frac{1}{m}\sum_{i=1}^n \delta_i \varepsilon_i \operatorname{sign}((\Gamma x)_i+\nu_i +\tau_i) \cdot (\Gamma y)_i \Bigr| \leq \frac{\rho^2}{32 \lambda};
\end{equation}
\end{framed}
\noindent here and throughout, $(\varepsilon_i)_{i=1}^n$ are independent, symmetric $\{-1,1\}$-valued random variables that are independent of $(\delta_i)_{i=1}^n$, $(\nu_i)_{i=1}^n$, and $(\tau_i)_{i=1}^n$.
\vskip0.4cm
\noindent {\bf {\underline{The term \eqref{eq:(1)-using-Gamma}}}}
\vskip0.4cm
Using an almost identical argument, it suffices to show that with high probability,
\begin{equation} \label{eq:est-on-(1)-1}
\sup_{x \in T} \sup_{\{y \in {\rm star}(T-T), \ \|y\|_2 = \rho\}} \Bigl| \frac{1}{m} \sum_{i=1}^n \delta_i \Bigl(q_i -\operatorname{sign}\bigl((\Gamma x)_i+\nu_i +\tau_i\bigr)\Bigr) \cdot \bigl(\Gamma y\bigr)_i \Bigr| \leq \frac{\rho^2}{16 \lambda}.
\end{equation}
Set $J=\{j : \delta_j =1\}$ and recall that for every target vector $x$ and any realization of $(\delta_i)_{i=1}^n$, $(\nu_i)_{i=1}^n$ and $(\tau_i)_{i=1}^n$ one has that $|\{j \in J : q_j \not = \operatorname{sign}((\Gamma x)_j+\nu_j +\tau_j)\}| \leq \beta m$. Therefore,
$$
\Bigl| \frac{1}{m} \sum_{i=1}^n \delta_i \Bigl(q_i -\operatorname{sign}\bigl((\Gamma x)_i+\nu_i +\tau_i\bigr)\Bigr) \bigl(\Gamma y\bigr)_i \Bigr| \leq \max_{|I| \leq \beta m} \frac{1}{m} \sum_{i\in I} \delta_i |(\Gamma y)_i|
$$
and one has to show that on a high probability event,
\begin{framed}
\begin{equation} \label{eq:est-on-(1)-2}
\sup_{\{y \in {\rm star}(T-T), \ \|y\|_2 = \rho\}} \max_{|I| \leq \beta m} \frac{1}{m} \sum_{i=1}^n \delta_i |(\Gamma y)_i| \leq \frac{\rho^2}{16 \lambda}.
\end{equation}
\end{framed}
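Numerically, the left-hand side of \eqref{eq:est-on-(1)-2} is straightforward to evaluate for a fixed $y$: the maximum over $|I| \leq \beta m$ is attained by the $\beta m$ largest selected coordinates of $|\Gamma y|$. The sketch below uses our own placeholder names and arbitrary dimensions, with a small random vector standing in for $\Gamma y$.

```python
import numpy as np

def worst_corruption_term(Gy, selectors, beta, m):
    """Evaluate (1/m) * max_{|I| <= beta m} sum_{i in I} delta_i |(Gamma y)_i|.

    The maximum is attained by the beta*m largest (in absolute value)
    coordinates among the selected rows."""
    selected = np.abs(Gy[selectors])
    k = int(beta * m)
    return np.sort(selected)[::-1][:k].sum() / m

rng = np.random.default_rng(5)
n, m, beta = 2048, 512, 0.05
selectors = rng.random(n) < m / n          # iid selectors with mean m/n
Gy = 0.1 * rng.standard_normal(n)          # stand-in for Gamma y, small norm
bound = worst_corruption_term(Gy, selectors, beta, m)
print(bound)
```

By construction the quantity is nonnegative and monotone in $\beta$.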
\subsection{Controlling the three terms}
Before continuing with the study of the excess loss functional, let us explore the sets $T$ and ${\rm star}(T-T) \cap \rho S^{n-1}$ in the case that we are interested in.
Observe that if $T=\Sigma_{s,n}$, then
\begin{equation}
\label{eqn:starHullSp}
{\rm star}(T-T) \cap \rho S^{n-1} \subset 2\rho \Sigma_{2s,n}.
\end{equation}
In light of \eqref{eqn:starHullSp}, it therefore suffices to explore \eqref{eq:est-on-(3)-1}, \eqref{eq:est-on-(2)-1} and \eqref{eq:est-on-(1)-2} for the pair of sets
\begin{equation} \label{eq:sets-0}
\Sigma_{s,n}, \ \ \ \ \rho \Sigma_{2s,n}.
\end{equation}
As will become clear, the geometry of the images of the two sets under $\Gamma$ is of the utmost importance; specifically, the elements of the images need to satisfy the following fundamental property.
\vskip0.4cm
Given a vector $(x_i)_{i=1}^n$, recall that $(x_i^*)_{i=1}^n$ is the nonincreasing rearrangement of $(|x_i|)_{i=1}^n$ and that for $1 \leq k \leq n$,
$$
\|x\|_{[k]}= \Bigl(\sum_{i \leq k} (x_i^*)^2 \Bigr)^{1/2}.
$$
\begin{Definition} \label{def:growth-property}
A vector $x \in \mathbb{R}^n$ satisfies the growth property with parameters $r$ and $\gamma_1\geq 1$ if for every $r \leq k \leq n$,
\begin{equation} \label{eq:growth}
\|x\|_{[k]} \leq \gamma_1 \sqrt{\frac{k\log(en/k)}{n}} \|x\|_{2}.
\end{equation}
\end{Definition}
\vskip0.4cm
The motivation behind Definition \ref{def:growth-property} is regularity: vectors that satisfy \eqref{eq:growth} are `well-spread'. Indeed, the contribution to $\|x\|_2$ of the $k$ largest coordinates of $(|x_i|)_{i=1}^n$ is rather limited unless $k$ is close to $n$. Moreover, while there is little information on how the largest coordinates $(x_1^*,...,x_r^*)$ are distributed, the remaining coordinates of $x$ are almost constant and contribute a nontrivial proportion of $\|x\|_2$. To see that, note that if
$$
\gamma_1 \sqrt{\frac{r \log(en/r)}{n}} \leq \frac{1}{2}
$$
then
$$
\Bigl(\sum_{i=r+1}^n (x_i^*)^2\Bigr)^{1/2} \geq \frac{\|x\|_2}{2},
$$
and at the same time, for any $k \geq r$,
\begin{equation} \label{eq:monotone-single}
x_k^* \leq \frac{\|x\|_{[k]}}{\sqrt{k}} \leq \gamma_1 \sqrt{\frac{\log(en/k)}{n}} \|x\|_2.
\end{equation}
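To make Definition \ref{def:growth-property} concrete, the following sketch computes $\|x\|_{[k]}$ and checks \eqref{eq:growth} directly (the helper names, the dimensions, and the sample value $\gamma_1=1.1$ are our own choices): a flat vector passes comfortably, while a $1$-sparse spike fails.

```python
import numpy as np

def topk_norm(x, k):
    """||x||_[k]: the l2 norm of the k largest coordinates of |x|."""
    return np.linalg.norm(np.sort(np.abs(x))[::-1][:k])

def satisfies_growth(x, r, gamma1):
    """Check the growth property for every r <= k <= n."""
    n = len(x)
    x2 = np.linalg.norm(x)
    return all(
        topk_norm(x, k) <= gamma1 * np.sqrt(k * np.log(np.e * n / k) / n) * x2
        for k in range(r, n + 1)
    )

n = 1024
flat = np.ones(n)                       # well-spread: passes
spiky = np.zeros(n); spiky[0] = 1.0     # all mass on one coordinate: fails
print(satisfies_growth(flat, r=4, gamma1=1.1),
      satisfies_growth(spiky, r=4, gamma1=1.1))
```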
\subsection{Proof of \eqref{eq:est-on-(3)-1}}
Recall that by our assumptions, $\bar{\nu}=\nu-\mathbb{E} \nu$ is an $L$-subgaussian random variable and that $\tau$ is distributed uniformly in $[-\lambda,\lambda]$.
\vskip0.3cm
\begin{Theorem} \label{thm:est-on-(3)-sparse}
There exist constants $c_1,c_2,c_3$ and $c_4$ that depend only on $L$ for which the following holds.
Assume that $0<\rho<1$; that
\begin{itemize}
\item[$(a)$] for every $t \in \Sigma_{2s,n}$, $\|\Gamma t\|_2\leq c_1\sqrt{n}$ and $\Gamma t$ satisfies the growth property \eqref{eq:growth} with constants $r=2s$ and $\gamma_1\geq 1$;
\item[$(b)$] also, $| \mathbb{E} \nu | \leq c_2 \rho$,
$$
\rho \geq c_3 \gamma_1^2 \max\{\|\bar{\nu}\|_{L_2},1\}\frac{s}{n}\log(en/s),
$$
and
$$
\lambda \geq c_4 \gamma_1 \max\{\|\bar{\nu}\|_{L_2},1\} \log(e\gamma_1^2 \max\{\|\bar{\nu}\|_{L_2},1\}/\rho).
$$
\end{itemize}
Then for every $x,z \in \Sigma_{s,n}$,
$$
\left|\mathbb{E}_{\delta \otimes \nu \otimes \tau} \frac{1}{m}\sum_{i=1}^n \delta_i \operatorname{sign} \bigl( (\Gamma x)_i + \nu_i +\tau_i \bigr)\cdot (\Gamma (z-x))_i - \frac{1}{\lambda} \frac{\inr{\Gamma x,\Gamma (z-x)}}{n}\right| \leq \frac{\rho}{16 \lambda} \frac{\|\Gamma (z-x)\|_2}{\sqrt{n}}.
$$
In particular, if for every $t \in \Sigma_{2s,n}$, $\|\Gamma t\|_2/\sqrt{n} \geq \kappa \|t\|_2$, then for any $x,z \in \Sigma_{s,n}$ that satisfy $\|x-z\|_2 \geq 4\rho/\kappa$ one has
$$
(3) \leq -\frac{\kappa^2}{4\lambda} \|z-x\|_2^2.
$$
\end{Theorem}
\vskip0.4cm
The key estimate in the proof of Theorem \ref{thm:est-on-(3)-sparse} is as follows. For every ${w,v} \in \mathbb{R}^n$ set
$$
Z_{w,v} = \frac{1}{m} \sum_{i=1}^n \delta_i v_i \operatorname{sign} ({w_i}+\nu_i + \tau_i)
$$
where $(\delta_i)_{i=1}^n$ are, as always, independent, $\{0,1\}$-valued random variables with mean $\delta =m/n$. Theorem~\ref{thm:est-on-(3)-sparse} is an immediate application of \eqref{eq:est-on-(3)-2} and the following fact, with the choices $w=\Gamma x$ and $v=\Gamma(z-x)$.
\begin{Theorem} \label{thm:Z-wv-est}
There exist constants $c_1$ and $c_2$ that depend only on $L$ for which the following holds. Let $w,v \in \mathbb{R}^n$ satisfy the growth property \eqref{eq:growth} with parameters $r$ and $\gamma_1$. Set $0<\rho<1$ and $0<\theta<1$ such that
\begin{equation} \label{eq:cond-1-on-r}
\gamma_1 \sqrt{\frac{\max\{\|\bar{\nu}\|_{L_2},1\} \ r \log(en/r)}{n}} \leq \theta \sqrt{\rho},
\end{equation}
and let $\bar{k}$ be the largest integer that satisfies
\begin{equation} \label{eq:cond-1-on-bar-k}
\gamma_1 \sqrt{\frac{\max\{\|\bar{\nu}\|_{L_2},1\} \ k \log(en/k)}{n}} \leq 2 \theta \sqrt{\rho}.
\end{equation}
If
$$
\lambda \geq 4|\mathbb{E} \nu|+c_1 \gamma_1\sqrt{\log(en/\bar{k})} \cdot \max\left\{\|\bar{\nu}\|_{L_2}, \frac{\|w\|_2}{\sqrt{n}}\right\},
$$
then
\begin{equation} \label{eq:Z-wv-est}
\left|\mathbb{E}_{\delta \otimes \nu \otimes \tau} Z_{w,v} - \frac{\inr{w,v}}{n \lambda} \right| \leq \frac{c_2}{\lambda} \left(|\mathbb{E} \nu|+ \theta^2 \rho \left(1+ \frac{\|w\|_2}{\sqrt{n}}\right) \right) \frac{\|v\|_2}{\sqrt{n}}.
\end{equation}
\end{Theorem}
Before we begin with the proof of Theorem \ref{thm:Z-wv-est}, let us note a few facts that follow from the growth property \eqref{eq:growth}, and in particular from \eqref{eq:monotone-single}:
\begin{Lemma} \label{lemma:growth-outcome}
There is an absolute constant $c$ for which the following holds. If $x \in \mathbb{R}^n$ satisfies \eqref{eq:growth} and $r \leq \ell \leq k$, then
$$
\sum_{i=\ell}^k x_i^* \leq c\gamma_1 \frac{k\sqrt{\log(en/k)}}{\sqrt{n}} \|x\|_2 \ \ \ {\rm and} \ \ \ \Bigl(\sum_{i=\ell}^k (x_i^*)^2 \Bigr)^{1/2} \leq c \gamma_1 \sqrt{\frac{k\log(en/k)}{n}} \|x\|_2.
$$
\end{Lemma}
\begin{Remark} \label{rem:growth}
It is straightforward to verify that if $x,y \in \mathbb{R}^n$ satisfy \eqref{eq:growth} and $\alpha_\ell \leq 2^{-\ell}$ then for every $k \geq r$,
$$
\sum_{\ell=k+1}^n \alpha_\ell \|x\|_{[\ell]} \|y\|_{[\ell]} \leq c 2^{-k} \gamma_1^2 \|x\|_{2} \|y\|_{2} \frac{k\log(en/k)}{n},
$$
where $c$ is an absolute constant.
\end{Remark}
We omit the standard proofs of these facts.
\vskip0.4cm
\noindent{\bf Proof of Theorem \ref{thm:Z-wv-est}.} Throughout this proof, we will slightly abuse notation and denote by $\nu_{\operatorname{noise}}$ the vector $(\nu_i)_{i=1}^n$ (rather than $(\delta_i\nu_i)_{i=1}^n$). Recall that $\tau$ is distributed uniformly in $[-\lambda,\lambda]$ and thus, for any $y \in \mathbb{R}$,
\begin{equation}
\mathbb{E}_\tau \operatorname{sign}(y+\tau) =
\begin{cases}
y/\lambda & \mbox{if} \ |y| \leq \lambda,
\\
\mathbbm{1}_{\{y > \lambda\}} - \mathbbm{1}_{\{y<-\lambda\}} & \mbox{otherwise}.
\end{cases}
\end{equation}
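This identity, which is the reason for dithering with uniformly distributed thresholds, is easy to confirm by simulation; in the sketch below the values of $\lambda$ and of the test points $y$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 2.0
N = 400_000
tau = rng.uniform(-lam, lam, N)        # uniform dither on [-lam, lam]

emp = {}
for y in (-1.5, 0.0, 0.7, 3.0):        # |y| <= lam except the last point
    emp[y] = np.mean(np.sign(y + tau))
    exact = y / lam if abs(y) <= lam else float(np.sign(y))
    print(y, round(emp[y], 3), exact)  # empirical mean matches y/lam (clipped)
```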
Let $\bar{\nu}_i=\nu_i - \mathbb{E} \nu$ and set $I=\{i : |w_i+\bar{\nu}_i| > \lambda\}$ (which is a random set that depends on $w$ as well). Taking the expectation with respect to $(\delta_i)_{i=1}^n$ and using that $\delta=m/n$,
\begin{align*}
\mathbb{E}_{\delta \otimes \tau } Z_{w,v} & = \frac{1}{n}\sum_{i=1}^n v_i \mathbb{E}_\tau \operatorname{sign} (w_i+\nu_i+\tau_i)
\\
& = \frac{1}{n} \sum_{i \in I^c} v_i \frac{w_i+\nu_i}{\lambda} + \frac{1}{n} \sum_{i \in I} v_i\left(\mathbbm{1}_{\{w_i+\nu_i > \lambda\}}-\mathbbm{1}_{\{w_i+\nu_i < - \lambda\}} \right)
\\
& = \frac{1}{n} \sum_{i=1}^n v_i \frac{w_i+\nu_i}{\lambda} + \frac{1}{n} \sum_{i \in I} v_i\left(- \frac{w_i+\nu_i}{\lambda} + \mathbbm{1}_{\{w_i+\nu_i > \lambda\}}-\mathbbm{1}_{\{w_i+\nu_i < - \lambda\}} \right).
\end{align*}
Taking the expectation $\mathbb{E}_\nu$ (i.e., with respect to $(\nu_i)_{i=1}^n$) consider the resulting two terms. Firstly,
\begin{equation} \label{eq:simple-1}
\mathbb{E}_\nu \frac{1}{n} \sum_{i=1}^n v_i \frac{w_i+\nu_i}{\lambda} = \frac{\inr{w,v}}{n \lambda} + \frac{\mathbb{E} \nu}{n \lambda} \sum_{i=1}^n v_i \leq \frac{\inr{w,v}}{n \lambda} + \frac{1}{\lambda} |\mathbb{E}\nu| \frac{\|v\|_2}{\sqrt{n}}.
\end{equation}
Secondly, let us turn to the more difficult term,
$$
\left| \mathbb{E}_\nu \frac{1}{n} \sum_{i \in I} v_i \left( -\frac{w_i+\nu_i}{\lambda} + \mathbbm{1}_{\{w_i+\nu_i > \lambda\}}-\mathbbm{1}_{\{w_i+\nu_i < - \lambda\}} \right) \right| =(\triangle).
$$
Using that $\mathbbm{1}_{\{\alpha>\lambda\}}\leq |\alpha|/\lambda$ for any $\alpha\in \mathbb{R}$, it follows that for every $1 \leq i \leq n$,
$$
\left| v_i \left( -\frac{w_i+\nu_i}{\lambda} + \mathbbm{1}_{\{w_i+\nu_i > \lambda\}}-\mathbbm{1}_{\{w_i+\nu_i < - \lambda\}} \right) \right| \leq 3|v_i| \cdot \frac{|w_i+\nu_i|}{\lambda};
$$
hence,
$$
(\triangle) \leq \frac{3}{\lambda n} \mathbb{E}_\nu \sum_{i \in I} |v_i| \cdot |w_i + \nu_i| \leq \frac{3 |\mathbb{E} \nu|}{\lambda n} \sum_{i \in I} |v_i| + \frac{3}{\lambda n} \mathbb{E}_\nu \sum_{i \in I} |v_i| \cdot |w_i + \bar{\nu}_i|.
$$
Clearly,
$$
\frac{3 |\mathbb{E} \nu|}{\lambda n} \sum_{i \in I} |v_i| \leq \frac{3}{\lambda} |\mathbb{E} \nu| \frac{\|v\|_2}{\sqrt{n}},
$$
and all that is left is to control
$$
\frac{3}{\lambda n} \mathbb{E}_\nu \sum_{i \in I} |v_i| \cdot |w_i + \bar{\nu}_i|.
$$
To that end, it is standard to verify that if $(a_i)$ and $(b_i)$ are sequences then $\bigl| \sum a_i b_i \bigr| \leq \sum a_i^* b_i^*$. Therefore,
\begin{align*}
& \mathbb{E}_\nu \sum_{i \in I} |v_i| \cdot |w_i+\bar{\nu}_i| \leq \mathbb{E}_\nu \Bigl(\sum_{i \in I} |v_i| |w_i| + \sum_{i \in I} |v_i| |\bar{\nu}_i| \Bigr) \leq \sum_{\ell=1}^n \mathbb{E}_\nu \Bigl(\mathbbm{1}_{\{|I|=\ell\}} \sum_{i=1}^\ell v_i^* \cdot (w_i^*+\bar{\nu}_i^*)\Bigr)
\\
\leq & \sum_{\ell=1}^n \Bigl( \sum_{i=1}^\ell (v_i^*)^2 \Bigr)^{1/2} \Bigl( \sum_{i=1}^\ell (w_i^*)^2 \Bigr)^{1/2} \mathbb{P}_{\nu} (|I| = \ell) + \sum_{\ell=1}^n \Bigl( \sum_{i=1}^\ell (v_i^*)^2 \Bigr)^{1/2} \mathbb{E}_\nu \Bigl[\mathbbm{1}_{\{|I|=\ell\}}\Bigl( \sum_{i=1}^\ell (\bar{\nu}_i^*)^2 \Bigr)^{1/2}\Bigr]
\\
= & \sum_{\ell=1}^n \|v\|_{[\ell]}\|w\|_{[\ell]}\mathbb{P}_{\nu} (|I| = \ell) + \sum_{\ell=1}^n \|v\|_{[\ell]} \mathbb{E}_{\nu} \Bigl[\mathbbm{1}_{\{|I|=\ell\}}\|\bar{\nu}_{\operatorname{noise}}\|_{[\ell]}\Bigr]
\\
=& (*) + (**).
\end{align*}
Estimating $(*)$ and $(**)$ requires some preparation. Recall that $\bar{k}$ is the largest integer for which
$$
\gamma_1 \sqrt{\frac{ \max\{\|\bar{\nu}\|_{L_2},1\} \ k \log(en/k)}{n}} \leq 2\theta \sqrt{\rho},
$$
implying in particular that $\bar{k} \geq r$; hence, by \eqref{eq:monotone-single}, for $x=w$ or $x=v$,
$$
x_{\bar{k}}^* \leq \gamma_1 \sqrt{\log(en/\bar{k})} \frac{\|x\|_2}{\sqrt{n}}.
$$
Also,
$$
\lambda \geq 2 \gamma_1 \sqrt{\log(en/\bar{k})} \frac{\|w\|_2}{\sqrt{n}}
$$
and therefore,
$$
w_{\bar{k}}^* \leq \frac{\lambda}{2}.
$$
Next, if $\ell \geq 2\bar{k}$ there are at most $\ell/2$ indices $i$ for which $|w_i| \geq \lambda/2$, and therefore, the event
$\{ |I| = \ell \} = \left\{ \left| \left\{ i : |w_i +\bar{\nu}_i | \geq \lambda\right\} \right| = \ell \right\}$
is contained in the event
$$
{\mathcal C}_\ell = \left\{ \left| \left\{ i : |\bar{\nu}_i| \geq \frac{\lambda}{2} \right\} \right| \geq \frac{\ell}{2} \right\}.
$$
By a standard binomial estimate, for every $\ell \geq 2\bar{k}$,
\begin{equation} \label{eq:probab-c-ell}
\mathbb{P}( {\mathcal C}_\ell) \leq \binom{n}{\ell/2} \mathbb{P}^\ell (|\bar{\nu}| \geq \lambda/2) \leq \exp(-c^\prime(L) \ell \log(en/\ell))
\end{equation}
provided that $\lambda \gtrsim_L \|\bar{\nu}\|_{L_2}\sqrt{\log(en/\ell)}$, which is the case, again using that $\ell \geq 2\bar{k}$.
Finally, if $\ell \leq 2\bar{k}$ then for $x=w$ or $x=v$,
$$
\|x\|_{[\ell]} \leq 2\|x\|_{[\bar{k}]} \leq 4\theta \sqrt{\rho} \|x\|_2.
$$
\vskip0.3cm
Consider the term $(**)$. Since $\bar{\nu}_{\operatorname{noise}}=(\bar{\nu}_i)_{i=1}^n$ has iid $L$-subgaussian coordinates,
\begin{equation*}
(\mathbb{E} \|\bar{\nu}_{\operatorname{noise}}\|_{[\ell]}^2)^{1/2} \leq c(L)\|\bar{\nu}\|_{L_2} \sqrt{\ell \log(en/\ell)}.
\end{equation*}
Hence,
\begin{align*}
& \sum_{\ell=1}^{2\bar{k}} \|v\|_{[\ell]} \mathbb{E}_{\nu} \Bigl[\mathbbm{1}_{\{|I|=\ell\}}\|\bar{\nu}_{\operatorname{noise}}\|_{[\ell]}\Bigr] \leq \|v\|_{[2\bar{k}]} \sum_{\ell=1}^{2\bar{k}} \mathbb{E}_\nu \Bigl[\mathbbm{1}_{\{|I|=\ell\}}\|\bar{\nu}_{\operatorname{noise}}\|_{[\ell]}\Bigr]
\\
\leq & \|v\|_{[2\bar{k}]} \mathbb{E}_\nu \Bigl[ \|\bar{\nu}_{\operatorname{noise}}\|_{[2\bar{k}]} \mathbbm{1}_{\{|I| \leq 2\bar{k}\}}\Bigr] \leq \|v\|_{[2\bar{k}]} \mathbb{E} \|\bar{\nu}_{\operatorname{noise}}\|_{[2\bar{k}]} \leq c(L) \|\bar{\nu}\|_{L_2} \|v\|_{[\bar{k}]} \sqrt{\bar{k} \log(en/\bar{k})}
\\
\leq & c(L) {\|{\bar \nu}\|_{L_2}} \gamma_1 \bar{k} \log(en/\bar{k}) \frac{\|v\|_2}{\sqrt{n}}.
\end{align*}
Turning to the sum on $\ell \in [2\bar{k},n]$,
\begin{align*}
& \sum_{\ell=2\bar{k}+1}^{n} \|v\|_{[\ell]} \mathbb{E}_\nu \Bigl[\mathbbm{1}_{\{|I|=\ell\}} \|\bar{\nu}_{\operatorname{noise}}\|_{[\ell]} \Bigr] \leq \sum_{\ell=2\bar{k}+1}^{n} \|v\|_{[\ell]} (\mathbb{E} \|\bar{\nu}_{\operatorname{noise}}\|_{[\ell]}^2)^{1/2} \cdot \mathbb{P}_\nu^{1/2}(|I|=\ell)
\\
\leq & c(L) \sum_{\ell=2\bar{k}+1}^{n} \|v\|_{[\ell]} \|\bar{\nu}\|_{L_2} \sqrt{\ell \log(en/\ell)} \mathbb{P}_\nu^{1/2}(|I| = \ell)=(\triangle \triangle).
\end{align*}
By \eqref{eq:probab-c-ell} it is evident that $\mathbb{P}_\nu^{1/2}(|I| = \ell) \leq \exp(- c^\prime(L) \ell \log(en/\ell))$, and by Remark \ref{rem:growth}
$$
(\triangle \triangle) \leq c(L) \|\bar{\nu}\|_{L_2} \gamma_1 \frac{\|v\|_2}{\sqrt{n}} \bar{k}\log(en/\bar{k}) \exp(-c^\prime(L) \bar{k} \log(en/\bar{k})) \leq c(L) \|\bar{\nu}\|_{L_2} \gamma_1 \frac{\|v\|_2}{\sqrt{n}};
$$
thus
\begin{equation} \label{eq:(b)}
(**) \leq c(L) \|\bar{\nu}\|_{L_2} \gamma_1 \bar{k} \log(en/\bar{k}) \frac{\|v\|_2}{\sqrt{n}} \leq c(L) \theta^2 \rho \sqrt{n} \|v\|_2.
\end{equation}
The estimate on $(*)$ follows the same path, by splitting the sum to $\ell \in [1,2\bar{k}]$ and $\ell \in [2\bar{k}+1,n]$. Indeed,
\begin{align*}
& \sum_{\ell=1}^{2\bar{k}} \Bigl( \sum_{i=1}^\ell (v_i^*)^2 \Bigr)^{1/2} \Bigl( \sum_{i=1}^\ell (w_i^*)^2 \Bigr)^{1/2} \mathbb{P}_{\nu} (|I| = \ell) = \sum_{\ell=1}^{2\bar{k}} \|v\|_{[\ell]} \|w\|_{[\ell]} \mathbb{P}_\nu (|I|=\ell)
\\
\leq & \|v\|_{[2\bar{k}]} \|w\|_{[2\bar{k}]} \leq c(L) \theta^2 \rho \|v\|_2 \|w\|_2,
\end{align*}
and
\begin{align*}
& \sum_{\ell=2\bar{k}+1}^n \|v\|_{[\ell]} \|w\|_{[\ell]} \mathbb{P}_{\nu} (|I| = \ell) \lesssim_L \sum_{\ell=2\bar{k}}^n \|v\|_{[\ell]} \cdot \|w\|_{[\ell]} \exp(-c^\prime(L) \ell \log(en/\ell))
\\
\lesssim_L & \|v\|_{2} \cdot \|w\|_{2} \frac{\bar{k} \log(en/\bar{k})}{n} \cdot \exp(-c^\prime(L) \bar{k} \log(en/\bar{k})),
\end{align*}
using Remark \ref{rem:growth} once again. Hence,
\begin{equation} \label{eq:(a)}
(*) \leq c(L) \|v\|_2 \|w\|_2 \theta^2 \rho.
\end{equation}
Therefore, combining \eqref{eq:simple-1}, \eqref{eq:(b)} and \eqref{eq:(a)} it follows that
$$
\left|\mathbb{E}_{\delta \otimes \nu \otimes \tau } Z_{w,v} - \frac{\inr{w,v}}{n \lambda} \right| \leq \frac{c(L)}{\lambda} \left(|\mathbb{E} \nu| + \theta^2 \rho \left(1+ \frac{\|w\|_2}{\sqrt{n}}\right) \right) \frac{\|v\|_2}{\sqrt{n}},
$$
as claimed.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\subsection{Proof of \eqref{eq:est-on-(1)-2}}
In what follows set $1 \leq r \leq n$; let $W \subset \mathbb{R}^n$ be a set that satisfies
$$
\log|W| \leq \gamma_2 r \log(en/r)
$$
for a suitable constant $\gamma_2$; and assume that every $w \in W$ satisfies the growth property \eqref{eq:growth} with constants $r$ and $\gamma_1$. The goal is to obtain an estimate that holds uniformly for every $w \in W$ on
\begin{equation} \label{eq:est-max-beta}
\frac{1}{m} \sup_{u \in \eta B_2^n} \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |w_i+u_i|,
\end{equation}
where $\eta$ is very small.
The idea behind the proof is that the set $W$ is well-behaved: on the one hand, its cardinality is reasonable, and on the other hand, the growth property \eqref{eq:growth} implies that vectors in $W$ are `well-spread', making them friendly to the application of selectors. Because we are interested in small perturbations of vectors in $W$ by vectors whose Euclidean norm is at most $\eta$, the impact of the perturbations is negligible.
\begin{Theorem} \label{thm:selectors-simple}
There exist absolute constants $c_1$ and $c_2$ such that the following holds. Let $W$ be as above, set $0<\beta<1$ and assume that
\begin{equation} \label{eq:cond-selector-1}
m \geq \frac{r \log^{3/2}(en/r)}{\beta}.
\end{equation}
Then with probability at least $1-2\exp(-c_1 \min\{\gamma_2 r \log(en/r),\beta m\})$ for every $w \in W$,
\begin{equation*}
\frac{1}{m} \sup_{u \in \eta B_2^n} \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |w_i+u_i| \leq \frac{\eta \sqrt{n}}{m} + c_2 \gamma_1 \gamma_2 \beta \sqrt{\log(e/\beta)} \frac{\|w\|_2}{\sqrt{n}}.
\end{equation*}
\end{Theorem}
\noindent {\bf Proof.}\ \ Clearly, by the triangle inequality, for every $w \in W$,
\begin{equation*}
\sup_{u \in \eta B_2^n} \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |w_i+u_i| \leq \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |w_i| + \sup_{u \in \eta B_2^n} \|u\|_1
\leq \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |w_i| + \eta \sqrt{n}.
\end{equation*}
Fix $w \in W$ and assume without loss of generality that its coordinates $w_i$ are nonnegative and non-increasing. Let $r$ be as in \eqref{eq:growth} and recall that $\beta m \geq r \log^{3/2}(en/r)$. Set $I_1=\{1,...,r\}$ and $I_2=\{r+1,...,2\beta m /\delta\}$; since $|I_1 \cup I_2|=2\beta m/\delta$, Chernoff's inequality implies that with probability at least $1-2\exp(-c\beta m)$,
$$
|\{i : \delta_i =1\} \cap \{1,...,2\beta m/\delta\} | \geq \beta m.
$$
On that event,
$$
\max_{|I| \leq \beta m} \sum_{i \in I} \delta_i w_i \leq \sum_{i \in I_1} w_i + \sum_{i \in I_2} \delta_i w_i.
$$
Clearly,
$$
\sum_{i \in I_1} w_i \leq \sqrt{r} \|w\|_{[r]} \leq \gamma_1 r \sqrt{\log(en/r)} \cdot \frac{\|w\|_2}{\sqrt{n}}.
$$
As for the second term, by the growth property \eqref{eq:growth}, for every $i \in I_2$
$$
w_i \leq w_r \leq \frac{\|w\|_{[r]}}{\sqrt{r}} \leq \gamma_1 \sqrt{\log(en/r)} \cdot \frac{\|w\|_2}{\sqrt{n}};
$$
recalling that $\beta m/\delta = \beta n$,
$$
\Bigl(\sum_{i \in I_2} w_i^2\Bigr)^{1/2} \leq \|w\|_{[\beta m/\delta]} \leq \gamma_1 \sqrt{n} \sqrt{\beta \log(e/\beta)} \cdot \frac{\|w\|_2}{\sqrt{n}}.
$$
By Bernstein's inequality, for $x>0$, with probability at least $1-2\exp(-x)$
\begin{align*}
\sum_{i \in I_2} \delta_i w_i \leq & \delta \sum_{i \in I_2} w_i + c\Bigl(\sqrt{x \delta} \Bigl(\sum_{i \in I_2} w_i^2\Bigr)^{1/2} + x \max_{i \in I_2} w_i \Bigr)
\\
\leq & c\gamma_1 \frac{\|w\|_2}{\sqrt{n}} \left(\delta n \cdot \beta \sqrt{\log(e/\beta)} + \sqrt{x} \sqrt{\delta n} \sqrt{\beta \log(e/\beta)} + x \sqrt{\log(en/r)} \right),
\end{align*}
where Lemma \ref{lemma:growth-outcome} is used to estimate $\sum_{i \in I_2} w_i$. Setting $x \sim \gamma_2 r \log(en/r) \geq 2 \log |W|$, it follows from the union bound that with probability at least $1-2\exp(-c^\prime \gamma_2 r \log(en/r))$, for every $w \in W$,
\begin{align*}
& \frac{1}{m} \sup_{u \in \eta B_2^n} \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |w_i+u_i| \leq \frac{\eta \sqrt{n}}{m}
\\
&\qquad \qquad + c \gamma_1 \gamma_2 \frac{\|w\|_2}{\sqrt{n}} \cdot \frac{1}{m} \left( m \beta \sqrt{\log(e/\beta)} + \sqrt{m}\sqrt{r \log (en/r)} \sqrt{\beta \log(e/\beta)} + r \log^{3/2}(en/r) \right)
\\
& \leq \frac{\eta \sqrt{n}}{m} + c \beta \sqrt{\log(e/\beta)} \cdot \gamma_1 \gamma_2 \frac{\|w\|_2}{\sqrt{n}},
\end{align*}
where the last inequality holds because $\beta m \geq r \log^{3/2}(en/r)$.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
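The Chernoff step at the start of the proof (that the first $2\beta m/\delta$ coordinates contain at least $\beta m$ selected indices with overwhelming probability) is easy to sanity-check numerically. The following is a minimal Monte Carlo sketch in Python; the values of $n$, $m$ and $\beta$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 10_000, 1_000
delta = m / n                        # selection probability, so E[#selected] = m
beta = 0.1
window = int(2 * beta * m / delta)   # the first 2*beta*m/delta coordinates

# empirical frequency of the event |{i <= 2*beta*m/delta : delta_i = 1}| >= beta*m;
# the expected number of selected indices in the window is 2*beta*m
trials = 2_000
counts = rng.binomial(window, delta, size=trials)
freq = float(np.mean(counts >= beta * m))
print(freq)
```

With these parameters the window contains $2\beta m = 200$ selected indices on average, so falling below $\beta m = 100$ is a large-deviation event, in line with the $\exp(-c\beta m)$ bound.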
\vskip0.3cm
The following is an immediate outcome of Theorem \ref{thm:selectors-simple}.
\begin{Theorem} \label{thm:est-on-1-sparse}
There exist absolute constants $c_0,c_1$ and $c_2$ for which the following holds. Assume that
\begin{itemize}
\item[$(1)$] $\Gamma(\Sigma_{2s,n}) \subset W+\eta B_2^n$ where $W\subset \Gamma(\Sigma_{2s,n})$ satisfies $\log |W| \leq \gamma_2 s \log(en/s)$;
\item[$(2)$] Every $w \in W$ satisfies the growth property \eqref{eq:growth} with constants $2s$ and $\gamma_1$;
\item[$(3)$] For every $t \in \Sigma_{2s,n}$, $\|\Gamma t\|_2/\sqrt{n} \leq 2\|t\|_2$;
\item[$(4)$] $\beta \sqrt{\log(e/\beta)} \leq (c_0/\gamma_1 \gamma_2) \cdot (\rho/\lambda)$; and
\item[$(5)$] $m \geq c_1 \frac{ s\log^{3/2}(en/s)}{\beta}$.
\end{itemize}
Then with probability at least $1-2\exp(-c_2 \gamma_2 s \log(en/s))$,
$$
\frac{1}{m} \sup_{y \in \rho \Sigma_{2s,n}} \max_{|I| \leq \beta m} \sum_{i \in I} \delta_i |(\Gamma y)_i| \leq \frac{\eta \sqrt{n}}{m} + \frac{\rho^2}{32 \lambda}
$$
\end{Theorem}
\subsection{Proof of \eqref{eq:est-on-(2)-1}} \label{sec:est-2-1-sparse}
The key component in the proof of \eqref{eq:est-on-(2)-1} is as follows:
\begin{Theorem} \label{thm:selectors-main}
There exist constants $c_0,c_1,c_2,$ and $c_3$ that depend only on $L$ for which the following holds. Consider $W, V \subset \mathbb{R}^n$ that satisfy the growth property \eqref{eq:growth} with constants $r$ and $\gamma_1$, and are such that $\log |W|, \ \log |V| \leq \gamma_2 r \log(en/r)$.
Assume further that
$$
\sup_{v \in V} \frac{\|v\|_2}{\sqrt{n}} \leq c_0 \rho, \ \ \ \sup_{w \in W} \frac{\|w\|_2}{\sqrt{n}} \leq c_0.
$$
Let
$$
\eta_W \leq \min\left\{c_1 \frac{\rho}{\gamma_1 \gamma_2 \sqrt{\log(\lambda \gamma_1 \gamma_2 /\rho)}}, \frac{\lambda}{2} \right\},
$$
and set
$$
m \geq C(\gamma_1,\gamma_2) \frac{\lambda^2 r \log^{3/2}(en/r)}{\rho^2},
$$
where $C(\gamma_1,\gamma_2) = c_2 \gamma_1^2 \gamma_2^2 \sqrt{\log \gamma_2}$.
Then with probability at least
$$
1-2\exp(-c_3\gamma_2 r \log(en/r))
$$
we have that
$$
\max_{w \in W} \sup_{u \in \eta_W B_2^n} \max_{v \in V} \sup_{u^\prime \in \eta_V B_2^n} \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i (v_i+u_i^\prime) \cdot \operatorname{sign} (w_i + u_i + \nu_i + \tau_i) \right| \leq \frac{\rho^2}{32 \lambda} + \frac{\eta_V \sqrt{n}}{m}.
$$
\end{Theorem}
\vskip0.4cm
While the estimate in Theorem \ref{thm:selectors-main} looks rather unpleasant, one should keep in mind that in the case that interests us, $\gamma_1$ and $\gamma_2$ are poly-logarithmic in $r$ and $n$, and so is $\lambda$. Also, the factors $\eta_V$ and $\eta_W$ are very small, of the order of $n^{-2}$, and as a result terms involving them are negligible. With that in mind, the outcome of Theorem \ref{thm:selectors-main} is that
$$
\max_{w \in W} \sup_{u \in \eta_W B_2^n} \max_{v \in V} \sup_{u^\prime \in \eta_V B_2^n} \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i (v_i+u_i^\prime) \cdot \operatorname{sign} (w_i + u_i + \nu_i + \tau_i) \right| \leq \frac{\rho^2}{16 \lambda}
$$
provided that
$$
m \geq \gamma \frac{r \log(en/r)}{\rho^2}
$$
where $\gamma$ is poly-logarithmic in $r$ and $n$.
\vskip0.4cm
The proof of Theorem \ref{thm:selectors-main} follows the same path as that of Theorem \ref{thm:selectors-simple}: reducing the wanted estimate to a bound on
$$
\max_{w \in W} \max_{v \in V} \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i + \nu_i + \tau_i) \right|
$$
which is handled by the union bound, taking into account the $\exp(2\gamma_2 r \log(en/r))$ pairs $(w,v)$. To achieve this reduction, one has to control the contribution of all possible $u \in \eta_W B_2^n$ and $u^\prime \in \eta_V B_2^n$. The nontrivial component in that task is identifying the random sets of signs
$$
\mathbb{S}_{w} = \left\{\operatorname{sign} (w_i + u_i + \nu_i + \tau_i)_{i \in I} : u \in \eta_W B_2^n \right\},
$$
where $I=\{i : \delta_i =1\}$. Because $w+u+\nu_{\rm noise}+\tau_{\rm thres}$ is a small perturbation of $w+\nu_{\rm noise}+\tau_{\rm thres}$, one may expect a `stability result': that on a high probability event, for every $w \in W$ the set $\mathbb{S}_w$ consists of small perturbations of the sign vector $(\operatorname{sign} (w_i + \nu_i + \tau_i))_{i \in I}$.
\begin{Lemma} \label{lemma:sign-perturbation}
There exist absolute constants $c_0$ and $c_1$ for which the following holds. Let $2\eta_W <\varepsilon \leq \lambda$ and set
\begin{equation} \label{eq:cond-on-m-1}
m \geq c_0 \varepsilon^{-1} \lambda \gamma_2 r \log(en/r).
\end{equation}
Then with probability at least $1-2\exp(-c_1 r \log(en/r))$ with respect to $(\delta_i)_{i=1}^n \otimes (\nu_i)_{i=1}^n \otimes (\tau_i)_{i=1}^n$, for every $w \in W$
$$
\mathbb{S}_w \subset \operatorname{sign} (w_i + \nu_i + \tau_i)_{i \in I}+2\mathcal{Z},
$$
where $I=\{i : \delta_i=1\}$, ${\mathcal Z} \subset \{-1,0,1\}^I$ and for every $z \in {\mathcal Z}$, $|{\rm supp}(z)| \leq 3\varepsilon m /\lambda$.
\end{Lemma}
\noindent {\bf Proof.}\ \ Fix $\varepsilon>0$ and note that if $|w_i + \nu_i + \tau_i| \geq \varepsilon$ and $|u_i| \leq \varepsilon/2$ then
$$
\operatorname{sign} (w_i+u_i + \nu_i + \tau_i)=\operatorname{sign} (w_i + \nu_i + \tau_i).
$$
Thus, for a well-chosen $\varepsilon$ one has to show that with high probability, for every $w \in W$ and $u \in \eta_W B_2^n$ there are at least $(1-2\varepsilon/\lambda)m$ coordinates $i$ such that
$$
\delta_i =1, \ \ \ |w_i+ \nu_i + \tau_i| \geq \varepsilon, \ \ {\rm and} \ \ |u_i| \leq \varepsilon/2.
$$
By the choice of $\varepsilon$ one has that for every $1 \leq i \leq n$, $|u_i| \leq \|u\|_2 \leq \eta_W \leq \varepsilon/2$; that takes care of the third constraint.
To establish the other two, recall that $\delta n =m$; that $I=\{i : \delta_i=1\}$; and that with probability at least $1-2\exp(-c^\prime m)$, $m/2 \leq |I| \leq 3m/2$. Conditioned on this event, set $(a_i)_{i \in I} \in \mathbb{R}^I$ to be any sequence and put $E_i=\{ |\tau_i -a_i| < \varepsilon\}$. Note that the events $(E_i)_{i \in I}$ are independent and $\mathbb{P}_\tau(E_i) \leq \varepsilon/\lambda$; therefore, with $\tau$-probability at least $1-2\exp(-c|I|\varepsilon/\lambda)\geq 1-2\exp(-c'm\varepsilon/\lambda)$, there are at most $2(\varepsilon/\lambda)|I|\leq 3(\varepsilon/\lambda)m$ indices $i \in I$ for which $|\tau_i - a_i| < \varepsilon$. Applying this observation to $a_i = -(w_i + \nu_i)$ conditionally on $(\nu_i)_{i=1}^n$, and then invoking a Fubini argument with respect to $(\nu_i)_{i=1}^n$ and $(\delta_i)_{i=1}^n$, it follows that for every $w \in W$, with probability at least $1-4\exp(-c' m \varepsilon/\lambda)$ with respect to $(\delta_i)_{i=1}^n \otimes (\nu_i)_{i=1}^n \otimes (\tau_i)_{i=1}^n$,
\begin{equation} \label{eq:signs-in-proof-1}
\left|\left\{ i \in I : |w_i+\nu_i + \tau_i| \geq \varepsilon\right\}\right| \geq \left(1-\frac{3\varepsilon }{\lambda}\right)m.
\end{equation}
By the union bound, \eqref{eq:signs-in-proof-1} holds for every $w \in W$ provided that $\log |W| \leq cm\varepsilon/\lambda$, which is the case by the choice of $m$.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
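The elementary observation that opens the proof, namely that a coordinate with $|w_i+\nu_i+\tau_i| \geq \varepsilon$ cannot change sign under a perturbation of magnitude at most $\varepsilon/2$, can be checked directly. A minimal sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.3

# a plays the role of w_i + nu_i + tau_i; u plays the role of the perturbation u_i
a = rng.uniform(-2.0, 2.0, size=100_000)
u = rng.uniform(-eps / 2, eps / 2, size=a.size)

stable = np.abs(a) >= eps                 # coordinates the lemma keeps
flips = np.sign(a + u) != np.sign(a)      # coordinates whose sign changed
violations = int(np.count_nonzero(stable & flips))
print(violations)  # 0: no stable coordinate ever flips sign
```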
\vskip0.3cm
\noindent{\bf Proof of Theorem \ref{thm:selectors-main}.} Fix $w \in W$, $u \in \eta_W B_2^n$, $v \in V$ and $u^\prime \in \eta_V B_2^n$, and note that
\begin{align*}
& \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i (v_i+u^\prime_i) \cdot \operatorname{sign} (w_i + u_i + \nu_i + \tau_i) \right|
\\
\leq & \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i + u_i + \nu_i + \tau_i) \right| + \frac{1}{m}\sum_{i=1}^n |u_i^\prime|.
\end{align*}
The second term is at most $\sqrt{n}\|u^\prime\|_2/m \leq \eta_V \sqrt{n}/{m}$. For the first term, fix the sign vector $(\operatorname{sign} (w_i + \nu_i +\tau_i))_{i=1}^n$, let
$$
z_i=|\operatorname{sign} (w_i + u_i + \nu_i + \tau_i)-\operatorname{sign} (w_i + \nu_i +\tau_i)|,
$$
and set $J_z$ to be the support of $(z_i)_{i=1}^n$. Therefore,
\begin{align*}
& \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i + u_i + \nu_i + \tau_i) \right|
\\
\leq & \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i +\nu_i + \tau_i) \right| + 2 \left|\frac{1}{m}\sum_{j \in J_z} \delta_j |v_j| \right| = (a)_{w,v}+(b)_{w,v}.
\end{align*}
To estimate $(b)_{w,v}$, let ${\mathcal A}_1$ be the event from Lemma \ref{lemma:sign-perturbation} (with respect to $(\delta_i)_{i=1}^n \otimes (\nu_i)_{i=1}^n \otimes (\tau_i)_{i=1}^n$) for an $\varepsilon$ to be specified in what follows. Using the notation of the lemma, on the event ${\mathcal A}_1$, for every $w \in W$, $|J_z \cap I|=|{\rm supp}(z) \cap I| \leq 3\varepsilon m /\lambda$. Setting $\beta = 3\varepsilon/\lambda$, one has to estimate
$$
\frac{1}{m} \sum_{i \in J_z} \delta_i |v_i| = \frac{1}{m} \sum_{i \in J_z \cap I} \delta_i |v_i| \leq \max_{|J| \leq \beta m} \frac{1}{m}\sum_{j \in J} \delta_j |v_j|,
$$
which is precisely the process studied in Theorem \ref{thm:selectors-simple} (for $\eta=0$). In particular, if $m\geq \varepsilon^{-1}r\log^{3/2}(en/r)$, then there is an event ${\mathcal A}_2$ of probability at least
$$
1-2\exp(-c^\prime \min\{ \gamma_2 r \log(en/r), \varepsilon m/\lambda\})
$$
with respect to $(\delta_i)_{i=1}^n$, such that for every $v \in V$,
\begin{equation} \label{eq:A-2-in-proof}
\max_{|J| \leq \beta m} \frac{1}{m}\sum_{j \in J} \delta_j |v_j| \leq c \gamma_1 \gamma_2 \beta \sqrt{\log(e/\beta)} \frac{\|v\|_2}{\sqrt{n}}
\sim \gamma_1 \gamma_2 \frac{\varepsilon}{\lambda} \sqrt{\log(e \lambda/\varepsilon)} \frac{\|v\|_2}{\sqrt{n}}=(*).
\end{equation}
Set
\begin{equation} \label{eq:choice-of-eps}
\varepsilon =c\frac{\rho}{\gamma_1 \gamma_2 \sqrt{\log(\lambda \gamma_1 \gamma_2/\rho)}}
\end{equation}
for a sufficiently small constant $c$, and note that by our assumption \eqref{eq:choice-of-eps} is a `legal choice' of $\varepsilon$ (i.e., $2\eta_{W}\leq \varepsilon$). Since $\sup_{v \in V} \|v\|_2/\sqrt{n} \leq c_1 \rho$, it is evident that
$$
(*) \leq \frac{\rho^2}{64 \lambda}
$$
with probability at least $1-2\exp(-c^\prime \gamma_2 r \log(en/r))$, as the choice of $m$ implies that $\gamma_2 r \log(en/r) \leq \varepsilon m /\lambda$.
Finally, to estimate $(a)_{w,v}$ one may use the union bound. Indeed, conditioned on $(\nu_i)_{i=1}^n$ and $(\tau_i)_{i=1}^n$, each $w \in W$ is associated with a sign vector $(\zeta_i)_{i=1}^n$, defined by $\zeta_i=\operatorname{sign} (w_i+\nu_i +\tau_i)$. Therefore, as a random variable with respect to $(\varepsilon_i)_{i=1}^n$ and $(\delta_i)_{i=1}^n$,
$$
\left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i +\nu_i + \tau_i) \right| = \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i |v_i| \operatorname{sign} (v_i) \zeta_i \right|,
$$
and there are at most $|W| \cdot |V| \leq \exp(2\gamma_2 r \log(en/r))$ pairs $(v,\zeta)$. For each pair, $(\varepsilon_i \zeta_i \operatorname{sign}(v_i))_{i=1}^n$ has the same distribution as $(\varepsilon_i)_{i=1}^n$. Without loss of generality one may assume that the $v_i$'s are nonnegative and non-increasing. Hence,
$$
\left|\sum_{i=1}^n \varepsilon_i \delta_i v_i \right| \leq \sum_{i=1}^r |v_i| + \left|\sum_{i=r+1}^n \delta_i \varepsilon_i v_i\right| \leq \sqrt{r} \|v\|_{[r]} + \left|\sum_{i=r+1}^n \delta_i \varepsilon_i v_i\right|.
$$
For $i \geq r$, $v_i \leq \|v\|_{[r]}/\sqrt{r}$, so by Bernstein's inequality, with probability at least $1-\exp(-x)$,
$$
\left|\sum_{i=r+1}^n \delta_i \varepsilon_i v_i\right| \lesssim \sqrt{\delta x}\|v\|_2 + x\frac{\|v\|_{[r]}}{\sqrt{r}}.
$$
Setting $x \sim \gamma_2 r \log(en/r)$ and invoking the union bound, it follows that with probability at least $1-2\exp(-c^\prime \gamma_2 r \log(en/r))$ with respect to $(\delta_i)_{i=1}^n \otimes (\varepsilon_i)_{i=1}^n$, every pair $(v,w)$ satisfies
\begin{align*}
& \left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i +\nu_i + \tau_i) \right|
\\
& \qquad \leq c \gamma_1 \gamma_2 \left(\frac{r \sqrt{\log(en/r)}}{m} + \sqrt{\frac{r \log(en/r)}{m}} + \frac{r \log^{3/2} (en/r)}{m} \right) \cdot \frac{\|v\|_2}{\sqrt{n}},
\end{align*}
where we have used the growth property \eqref{eq:growth} to estimate $\|v\|_{[r]}$.
By the choice of $m$ and since $\sup_{v \in V} \|v\|_2 \lesssim \rho \sqrt{n}$, a Fubini argument shows that there is an event ${\mathcal A}_3$ with probability at least $1-2\exp(-c^\prime \gamma_2 r \log(en/r))$, such that for every $v \in V$ and $w \in W$,
$$
\left|\frac{1}{m}\sum_{i=1}^n \varepsilon_i \delta_i v_i \cdot \operatorname{sign} (w_i +\nu_i + \tau_i) \right| \leq \frac{\rho^2}{64 \lambda}.
$$
The claimed estimate holds on the intersection of the events ${\mathcal A}_1$, ${\mathcal A}_2$, and ${\mathcal A}_3$, which completes the proof.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\section{Properties of $\Gamma_\xi$} \label{sec:circulnat}
In the previous section we have accumulated various conditions on the matrix $\Gamma$ that ensure that regardless of the identity of the sparse target $x$, any solution $x^{\#}$ of \eqref{eqn:progIsomorphicIntro} satisfies that $\|x-x^{\#}\|_2 \leq \rho$. The proofs show that to recover any $s$-sparse vector it suffices that the matrix $\Gamma$ satisfies the following properties for $r=2s$:
\begin{itemize}
\item[$(M1)$] \emph{Decomposition:} $\Gamma(\Sigma_{r,n}) \subset W + \eta B_2^n$, where $W\subset \Gamma(\Sigma_{r,n})$; $\log |W| \leq \gamma_2 r \log(en/r)$; each vector in $W$ satisfies the growth property with constants $r$ and $\gamma_1$; and $\eta$ is very small, say $\eta \lesssim 1/n^2$.
\vskip0.3cm
\item[$(M2)$] \emph{Small-ball property:} that for every $t \in \Sigma_{r,n}$, $\|\Gamma t\|_2 /\sqrt{n} \geq \kappa \|t\|_2$.
\vskip0.3cm
\item[$(M3)$] \emph{Isomorphic upper estimate:} that for every $t \in \Sigma_{r,n}$, $\|\Gamma t\|_2/\sqrt{n} \leq \kappa^\prime \|t\|_2$.
\end{itemize}
\begin{Remark}
Note that the combination of $(M2)$ and $(M3)$ implies that $\Gamma/\sqrt{n}$ acts on $\Sigma_{r,n}$ in an isomorphic way. It does not imply an almost isometric estimate since the constants $\kappa$ and $\kappa^\prime$ need not be close to one.
\end{Remark}
\vskip0.4cm
To complete the proof of Theorem~\ref{thm:isomorphic}, let us show that all the necessary estimates are true with high probability for a circulant matrix generated by an isotropic $L$-subgaussian random vector that has iid coordinates. The proofs of $(M2)$ and $(M3)$ follow directly from the methods developed in \cite{MRW16}. $(M1)$ can also be derived using \cite{MRW16}, though the proof presented in what follows is somewhat simpler than the analogous claim from \cite{MRW16}.
\subsection{ $(M2)$ and $(M3)$}
To establish the small-ball property and the isomorphic upper estimate we require three facts. Let $j_0$ satisfy that $2^{j_0} = \theta (n/r)$ where $0<\theta<1$ is a suitable (small) absolute constant, and $j_1$ satisfies that $2^{j_1} = \gamma_2 r \log(en/r)$ for
$$
\gamma_2 \sim \max\left\{1,\frac{\log(er)}{\log(en/r)}\right\}.
$$
Let $T = \Sigma_{r,n} \cap S^{n-1}$ and consider $T_{j_1}, T_{j_0} \subset T$ such that $\log |T_{j_0}| \leq 2^{j_0}$ and $\log |T_{j_1}| \leq 2^{j_1}$. For every $t \in T$, let $\pi_{j_1} t \in T_{j_1}$ and $\pi_{j_0} t \in T_{j_0}$ denote approximations of $t$ in the respective sets.
\begin{Theorem} \label{thm:structure-1}
Assume that $r \leq cn/\log^4n$ for a suitable absolute constant $c$. There are subsets $T_{j_0}, T_{j_1} \subset T$ and maps $\pi_{j_0}$ and $\pi_{j_1}$ as above for which the following holds. With probability at least $1-2\exp(-c^\prime 2^{j_1})$, for every $t \in \Sigma_{r,n}$,
\begin{itemize}
\item $\|\Gamma_\xi (t-\pi_{j_1}t)\|_2 \leq c^{\prime \prime}/n^2$;
\end{itemize}
with probability at least $1-2\exp(-c^\prime\min\{2^{j_0},2^{j_1}\})$, for every $t \in \Sigma_{r,n} \cap S^{n-1}$,
\begin{itemize}
\item $\left| \|\Gamma_\xi \pi_{j_1}t\|_2^2 - \|\Gamma_\xi \pi_{j_0}t\|_2^2 \right| \leq c^{\prime \prime} \sqrt{n} \sqrt{r} \alpha_r \log(e r) \leq n/8$; and
\item $\left| \|\Gamma_\xi \pi_{j_0} t\|_2^2 - n \right| \leq n/16+c^{\prime \prime} \sqrt{n} \sqrt{r} \leq n/8$,
\end{itemize}
where
$$
\alpha_r =\max\left\{1, \log \left(c \frac{r}{n^2} \log r\right) \right\}.
$$
The constants $c^\prime$ and $c^{\prime \prime}$ depend only on $L$.
\end{Theorem}
\begin{Remark}
In what follows we assume that $j_1 \geq j_0$. When $j_1 \leq j_0$ the proofs are much simpler and one may set $T_{j_0}=T_{j_1}$.
\end{Remark}
Clearly, Theorem \ref{thm:structure-1} implies the wanted two-sided isomorphic estimate. Firstly, by homogeneity, it suffices to prove the estimate in $\Sigma_{r,n} \cap S^{n-1}$. Secondly, it is standard to verify that with probability at least $1-2\exp(-cn)$, $\sup_{v \in S^{n-1}} \|\Gamma_\xi v\|_2 \leq n$. Therefore, with probability at least $1-2\exp(-c^\prime \min\{2^{j_0},2^{j_1}\})$, for every $t \in \Sigma_{r,n} \cap S^{n-1}$,
$$
\|\Gamma_\xi t\|_2^2 \geq \|\Gamma_\xi \pi_{j_1}t \|_2^2 - 2(\sup_{v \in S^{n-1}} \|\Gamma_\xi v\|_2) \|\Gamma_\xi (t - \pi_{j_1}t)\|_2 \geq \|\Gamma_\xi \pi_{j_1}t \|_2^2 - \frac{2}{n}
$$
and
$$
\|\Gamma_\xi \pi_{j_1}t \|_2^2 \geq \|\Gamma_\xi \pi_{j_0}t\|_2^2 - \frac{n}{8} \geq \frac{3n}{4},
$$
implying that
\begin{equation} \label{eq:lower-isomorphic-in-proof}
\frac{\|\Gamma_\xi t\|_2^2}{n} \geq \frac{1}{2} = \frac{\|t\|_2^2}{2}.
\end{equation}
The reverse direction follows in an identical manner.
\vskip0.4cm
Most of the proof of Theorem \ref{thm:structure-1} can be found in \cite{MRW16}. The proof of the first part of Theorem \ref{thm:structure-1} is a minor modification of Lemma 4.4 in \cite{MRW16}: the set $T_{j_1}$ is a net in $\Sigma_{r,n} \cap S^{n-1}$ with respect to the $\ell_2$ norm, and its cardinality---$\exp(\gamma_2 r \log(en/r))$ for $\gamma_2$ that is logarithmic in $n$ and $r$---suffices to ensure that the mesh-width of the net is $\sim 1/n^2$; in fact, the mesh-width can be improved to any power $n^{-\zeta}$ by multiplying $\gamma_2$ by a suitable constant. The proof of the second part of Theorem \ref{thm:structure-1} follows from a chaining argument with respect to a certain $\ell_\infty$-type norm---see Theorem 4.7 and Corollary 4.10 in \cite{MRW16}. The third part of Theorem \ref{thm:structure-1} is based on the following concentration result, which is a straightforward consequence of a subgaussian version of the Hanson-Wright inequality (see, for example, \cite[Lemma 5.1]{DJR17}): that for any $t\in S^{n-1}$ with $\|t\|_1\leq \sqrt{r}$ and $u>0$,
$$
\mathbb{P}\left( \left| \|\Gamma_\xi t\|_2^2 - n \right| \geq u \right) \leq 2\exp\left(-c^\prime\min\left\{\frac{u^2}{rn}, \frac{u}{r} \right\}\right).
$$
Now the third part of Theorem \ref{thm:structure-1} is evident by applying this to any $t \in T_{j_0}$ with $u=n/8$ and invoking the union bound.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
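The two-sided estimate established above can also be observed numerically for a circulant matrix generated by a Rademacher vector. The sketch below applies $\Gamma_\xi$ through the convolution theorem and checks that $\|\Gamma_\xi t\|_2^2/n$ stays within constant factors of $\|t\|_2^2=1$ over random sparse directions; the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 4096, 10

def circ_apply(xi, t):
    # Gamma_xi t: the circular convolution of the generating vector xi with t,
    # computed through the DFT
    return np.fft.ifft(np.fft.fft(xi) * np.fft.fft(t)).real

xi = rng.choice([-1.0, 1.0], size=n)   # isotropic subgaussian generator
ratios = []
for _ in range(100):
    t = np.zeros(n)
    support = rng.choice(n, size=r, replace=False)
    t[support] = rng.standard_normal(r)
    t /= np.linalg.norm(t)             # t in Sigma_{r,n}, normalized to the sphere
    ratios.append(np.linalg.norm(circ_apply(xi, t)) ** 2 / n)
print(min(ratios), max(ratios))
```

By Parseval, the Fourier multipliers $|\hat\xi_k|^2/n$ average to $\|\xi\|_2^2/n = 1$ for any realization of $\xi$, which is why the ratios concentrate around one.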
\subsection{Proof of $(M1)$}
Let us show that for any $x \in \Sigma_{r,n}$, $\Gamma_\xi x$ satisfies the wanted growth property.
\begin{Theorem} \label{thm:structure-2}
For every $L, \zeta \geq 1$ there is a constant $c=c(L,\zeta)$ such that the following holds. With probability at least $1-(r/n)^{\zeta}$, for every $r \leq k \leq n$ and every $x \in \Sigma_{r,n}$,
\begin{equation}
\label{eqn:structure-2}
\|\Gamma_\xi x\|_{[k]} \leq c (\log n) \cdot (\log r) \cdot \sqrt{k \log (en/k)}\|x\|_2.
\end{equation}
\end{Theorem}
\vskip0.4cm
By combining Theorem \ref{thm:structure-2} and \eqref{eq:lower-isomorphic-in-proof}, it is evident that with probability at least $1-(r/n)^{\zeta}$ any $w \in \Gamma_\xi (\Sigma_{r,n})$ satisfies the growth property: for all $r \leq k \leq n$,
\begin{equation} \label{eq:decomp-est}
\|w\|_{[k]} \leq \gamma_1 \sqrt{k \log(en/k)} \frac{\|w\|_2}{\sqrt{n}},
\end{equation}
where $\gamma_1 = c (\log n) \cdot (\log r)$. By the first statement of Theorem~\ref{thm:structure-1}, property $(M1)$ is therefore satisfied with the choice $W=\Gamma_\xi (T_{j_1})$.
\vskip0.4cm
\noindent {\bf Proof.}\ \ By homogeneity it suffices to prove Theorem~\ref{thm:structure-2} for $T=\Sigma_{r,n} \cap S^{n-1}$. Just as in Theorem~\ref{thm:structure-1}, there is a set $T_{j_1} \subset T$ of cardinality at most $\exp(\gamma_2 r \log(en/r))$ and an event of probability at least $1-2\exp(-c^\prime \gamma_2 r \log(en/r))$ such that for every $t \in T$,
\begin{equation} \label{eq:approx-in-proof-1}
\|\Gamma_\xi (t - \pi_{j_1}t)\|_2 \leq \frac{c}{n^2},
\end{equation}
where $c$ is a constant that depends on $L$. Once \eqref{eqn:structure-2} is established for elements of $T_{j_1}$, it is evident that for every $t \in T$ and $r \leq k \leq n$,
\begin{align*}
\|\Gamma_\xi t\|_{[k]} \leq & \|\Gamma_\xi \pi_{j_1}t\|_{[k]} + \|\Gamma_\xi (t-\pi_{j_1}t)\|_2 \leq c (\log n) (\log r) \sqrt{k \log (en/k)} \|\pi_{j_1} t\|_2
\\
= & c (\log n) (\log r) \sqrt{k \log (en/k)} \|t\|_2.
\end{align*}
To prove that the wanted estimate holds in $T_{j_1}$, recall that for every $v,x \in \mathbb{R}^n$, $\Gamma_\xi v = \Gamma_v \xi$ and that $\Gamma_{v+x} \xi = \Gamma_v \xi + \Gamma_x \xi$. Also, $\Gamma_v = \sqrt{n} UD_{Wv}O$ where $U,W,O$ are orthonormal matrices with entries that are bounded by $1/\sqrt{n}$ (in fact, if we denote by $\mathcal{F}$ the discrete Fourier transform, then $U=\mathcal{F}^{-1}/\sqrt{n}$ and $W=O=\mathcal{F}/\sqrt{n}$) and $D_{Wv}$ is a diagonal matrix defined by $D_{ii} = \inr{W_i,v}$. Hence, for any $v, x \in \mathbb{R}^n$,
$$
\|\Gamma_v x\|_2 = \sqrt{n}\Bigl(\sum_{i=1}^n \inr{W_i,v}^2 \inr{O_i,x}^2 \Bigr)^{1/2} \leq \vertiii{v} \cdot \|x\|_2,
$$
where
$$
\vertiii{v} = \sqrt{n} \max_{1 \leq i \leq n} |\inr{W_i,v}|.
$$
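The factorization of $\Gamma_v$ used above is the standard diagonalization of a circulant matrix by the discrete Fourier transform. A minimal numerical check, under the (assumed) convention that $(\Gamma_v)_{jk} = v_{(j-k) \bmod n}$, verifies both the Fourier-multiplier identity and that the operator norm of $\Gamma_v$ equals $\max_k |\hat v_k|$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
v = rng.standard_normal(n)
x = rng.standard_normal(n)

# circulant matrix of v: (Gamma_v)_{jk} = v_{(j-k) mod n},
# so Gamma_v x is the circular convolution of v and x
Gamma_v = np.array([[v[(j - k) % n] for k in range(n)] for j in range(n)])

# Fourier diagonalization: Gamma_v = F^{-1} diag(Fv) F, where F is the DFT
lhs = Gamma_v @ x
rhs = np.fft.ifft(np.fft.fft(v) * np.fft.fft(x)).real
print(np.allclose(lhs, rhs))  # True

# the singular values of Gamma_v are |hat v_k|; the operator norm is their maximum
print(np.isclose(np.linalg.norm(Gamma_v, 2), np.abs(np.fft.fft(v)).max()))  # True
```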
Let $G$ be the standard Gaussian vector in $\mathbb{R}^n$, set $\| \cdot \|$ to be a norm on $\mathbb{R}^n$ and put $B^{\circ}$ to be the unit ball of the dual norm. Since $\xi$ is isotropic and $L$-subgaussian, a standard chaining argument shows that for any $p \geq 1$,
\begin{equation*}
\bigl(\mathbb{E} \|\Gamma_v \xi\|^p\bigr)^{1/p} \leq cL \bigl(\mathbb{E} \|\Gamma_v G\|+ \sqrt{p} \sup_{x \in B^{\circ}} \|\Gamma_v^*x\|_2 \bigr) \leq cL \bigl(\mathbb{E}\|\Gamma_v G\| +c\sqrt{p} \vertiii{v} \sup_{x \in B^\circ} \|x\|_2\bigr).
\end{equation*}
Fix $r \leq k \leq n$ and consider the norm $\| \cdot \|_{[k]}$. Clearly, the unit ball of the dual norm is the convex hull of $\Sigma_{k,n}$, implying that for every $v \in \mathbb{R}^n$,
\begin{equation} \label{eq:basic-chaining}
\bigl(\mathbb{E} \|\Gamma_v \xi\|_{[k]}^p\bigr)^{1/p} \leq cL \left(\mathbb{E} \|\Gamma_v G\|_{[k]} + \sqrt{p} \vertiii{v}\right).
\end{equation}
Observe that for every $v \in B_2^n$,
\begin{equation}
\label{eq:expec-k-largest}
\mathbb{E} \|\Gamma_v G\|_{[k]} \leq c \sqrt{k \log(en/k)}.
\end{equation}
Indeed, $\Gamma_v G = \sqrt{n}UD_{Wv}OG$ has the same distribution as $\sqrt{n}UD_{Wv}G$. For every $1 \leq \ell \leq n$ the random variable
$$
Z_\ell=\inr{\sqrt{n}UD_{Wv}G,e_\ell}=\sqrt{n} \sum_{i=1}^n g_i \inr{W_i,v} (U^*e_\ell)_i
$$
is a centred Gaussian random variable. Since $\|U^*e_\ell\|_\infty \leq 1/\sqrt{n}$, it follows that
$$
\|Z_\ell\|_{\psi_2} \simeq_L \|Z_\ell\|_{L_2} = \sqrt{n} \left(\sum_{i=1}^n \inr{W_i,v}^2 (U^*e_\ell)_i^2 \right)^{1/2} \leq \|Wv\|_2 \leq 1.
$$
Clearly,
$$
\mathbb{E} \|\Gamma_v G\|_{[k]} = \mathbb{E} \Bigl(\sum_{\ell \leq k} (Z_\ell^*)^2 \Bigr)^{1/2},
$$
and by a fact due to Klartag \cite{Kla02} (see also \cite[Lemma 3.5]{MRW16} for a proof),
$$
\Bigl(\mathbb{E} \sum_{\ell \leq k} (Z_\ell^*)^2\Bigr)^{1/2} \leq c \max_{1 \leq \ell \leq n} \|Z_\ell\|_{\psi_2} \cdot \sqrt{k\log(en/k)},
$$
implying that \eqref{eq:expec-k-largest} holds.
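The order of magnitude in \eqref{eq:expec-k-largest} (that the $\ell_2$ norm of the $k$ largest coordinates of a standard Gaussian vector in $\mathbb{R}^n$ is of order $\sqrt{k\log(en/k)}$) can be observed by simulation. A minimal sketch with illustrative dimensions; only the boundedness of the ratio matters, not its exact value:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, trials = 2048, 32, 500

# average ell_2 norm of the k largest (in absolute value) coordinates
# of a standard Gaussian vector in R^n
vals = []
for _ in range(trials):
    z = np.abs(rng.standard_normal(n))
    topk = np.sort(z)[-k:]
    vals.append(np.sqrt(np.sum(topk ** 2)))

empirical = float(np.mean(vals))
benchmark = np.sqrt(k * np.log(np.e * n / k))
print(empirical / benchmark)   # a modest constant
```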
To complete the proof, by a well-known estimate due to Carl \cite{Car85} (see also Corollary 4.10 in \cite{MRW16}), there is a sequence of subsets $(T_j)_{j=0}^{j_1} \subset T_{j_1}$ whose cardinalities are $|T_j| \leq 2^{2^j}$,
and maps $\pi_j : T_{j_1} \to T_j$ such that for every $t \in T_{j_1}$ and every $j \leq j_1$,
$$
\vertiii{\pi_{j}t - \pi_{j-1} t} \leq c 2^{-j/2} \sqrt{r} \log(en/2^j).
$$
Set $\Delta_j t = \pi_{j+1}t - \pi_{j} t$, let $2^{\ell} = \frac{k}{r} \log(en/k)$ and assume first that $\ell \leq j_1$. Thus, for every $t \in T_{j_1}$
$$
t = \pi_{\ell} t+\sum_{j=\ell}^{j_1-1} \Delta_j t
$$
and
\begin{equation} \label{eq:chaining-1}
\sup_{t \in T_{j_1}} \|\Gamma_\xi t\|_{[k]} \leq \sup_{t \in T_{j_1}} \left(\sum_{j=\ell}^{j_1-1} \|\Gamma_{\Delta_j t} \xi \|_{[k]} + \|\Gamma_{\pi_\ell t} \xi\|_{[k]} \right).
\end{equation}
Fix $\ell \leq j \leq j_1-1$ and consider the collection of the (at most) $2^{2^{j+2}}$ random variables $\{\|\Gamma_{\Delta_j t} \xi \|_{[k]} : t \in T_{j_1}\}$. By \eqref{eq:basic-chaining} and \eqref{eq:expec-k-largest},
$$
\bigl(\mathbb{E}\|\Gamma_{\Delta_j t} \xi \|_{[k]}^p \bigr)^{1/p} \leq cL (\sqrt{k \log(en/k)} + \sqrt{p} \vertiii{\Delta_j t}) \leq cL \Bigl(\sqrt{k \log(en/k)} + \sqrt{\frac{p}{2^j}} \sqrt{r} \log(en/2^j) \Bigr).
$$
Setting $p \sim \zeta 2^j$, it follows from Markov's inequality and the union bound that, with probability at least $1-2\exp(-c^\prime \zeta 2^j)$, for every $t \in T_{j_1}$,
$$
\|\Gamma_{\Delta_j t} \xi \|_{[k]} \leq cL (\sqrt{k \log(en/k)} + \sqrt{\zeta} \sqrt{r} \log(en/2^j)).
$$
By the union bound the same assertion holds simultaneously for all $\ell \leq j \leq j_1$ with probability at least
$$
1-2\sum_{j=\ell}^{j_1-1} \exp(-c \zeta 2^j) \geq 1-2\exp(-c^\prime \zeta 2^{\ell})=1-2\exp(-c^{\prime \prime} \zeta (k/r)\log(en/k)).
$$
Turning to the second term in \eqref{eq:chaining-1}, observe that for $t \in \Sigma_{r,n}$, $\vertiii{t} \leq \sqrt{r}$. Therefore, by \eqref{eq:basic-chaining}, \eqref{eq:expec-k-largest}, Markov's inequality, and the union bound for the collection $\{ \Gamma_{\pi_\ell t} \xi : t \in T_{j_1} \}$, it is evident that with probability at least $1-2\exp(-c^\prime \zeta (k/r)\log(en/k))$,
$$
\|\Gamma_{\pi_\ell t} \xi \|_{[k]} \leq cL \sqrt{k \log(en/k)}\bigl(1 + \sqrt{\zeta} \log(en)\bigr).
$$
Intersecting the two events and applying the union bound for $r \leq k \leq n$, one has that with probability at least
$$
1-2\sum_{k=r}^n \exp(-c^\prime \zeta (k/r)\log(en/k)) \geq 1-\left(\frac{r}{n}\right)^{c^{\prime \prime} \zeta},
$$
for every $r \leq k \leq n$,
$$
\sup_{t \in T_{j_1}} \|\Gamma_\xi t\|_{[k]} \leq cL (j_1 - \ell) \sqrt{k \log(en/k)}\bigl(1 + \sqrt{\zeta} \sqrt{r} \log(en) \bigr),
$$
and the claim follows because $j_1 - \ell \lesssim \log r$.
The proof when $j_1 \leq \ell$ is much simpler: it follows immediately from the union bound applied to every $t \in T_{j_1}$, using that $\vertiii{t} \leq \sqrt{r}$.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
\section{Proof of minimax optimality}
\label{sec:lower}
Theorem \ref{thm:lower} is established by suitably modifying a more general result from \cite{Men17}.
\vskip0.4cm
\noindent{\bf Proof of Theorem \ref{thm:lower}.}
Fix $r>0$ and $0<\alpha<1$ such that $\alpha r \geq 4\rho$. Let $T \subset \Sigma_{s,n} \cap r B_2^n$ be $\alpha r$-separated with respect to the $\ell_2$-norm. Denote the rows of $A$ by $X_1,...,X_m$ (which need not be independent) and let $\mu$ be the probability distribution of $(X_i)_{i=1}^m$. Let ${\mathcal U}\subset (\mathbb{R}^{n})^m$ be the event
$$
\left\{ \|At\|_2 \leq \kappa \sqrt{m} r \ \ \ {\rm for \ every \ } t \in T \right\}
$$
and observe that by our assumptions, $\mu({\mathcal U}) \geq 0.95$. Denote by $\nu$ the probability distribution of $(\nu_i)_{i=1}^m$ and note that the joint distribution of $\bigl((X_i)_{i=1}^m,(\nu_i)_{i=1}^m\bigr)$ is the product measure $\mu\otimes \nu$.
Fix $x\in \Sigma_{s,n}$. Since $\Psi$ is a successful recovery procedure it follows that if $\Psi$ receives $(X_i)_{i=1}^m$ and $(\inr{X_i,x}+\nu_i)_{i=1}^m$, it outputs a vector that achieves recovery accuracy $\rho$ with confidence $0.9$; in other words,
$$
\mu \otimes \nu \left(\left\{ \bigl((X_i)_{i=1}^m,(\nu_i)_{i=1}^m\bigr) \ : \ \Psi\left( \left((X_i, \inr{X_i,x}+\nu_i)\right)_{i=1}^m \right) \in x + \rho B_2^n \right\} \right) \geq 0.9.
$$
For every $\mathbb{X} = (X_i)_{i=1}^m$ and $t_j \in T$ set
$$
\mathbb{A}_j(\mathbb{X}) := \left\{ \left(\nu_i\right)_{i=1}^m : \Psi\left( \left(X_i, \inr{X_i,t_j}+\nu_i\right)_{i=1}^m \right) \in t_j + \rho B_2^n \right\} \subset \mathbb{R}^m.
$$
By Fubini's Theorem and since $\mu \otimes \nu$ is a product measure, there is an event $\Omega_j\subset (\mathbb{R}^{n})^m$ of $\mu$-probability at least $0.8$ on which $\nu(\mathbb{A}_j(\mathbb{X})) \geq 3/4$.
Let $u_j(\mathbb{X}) = (\inr{{X}_i,t_j})_{i=1}^m$, which is simply the `noise-free' part of the measurement of $t_j$ generated by the sample ${\mathbb{X}}$. The crucial fact is that for any ${\mathbb{X}} \in \Omega_j \cap \Omega_\ell$, the sets $u_j(\mathbb{X}) + \mathbb{A}_j({\mathbb{X}})$ and $u_\ell(\mathbb{X}) + \mathbb{A}_\ell({\mathbb{X}})$ are disjoint. Indeed, if
$$
z \in \left(u_j(\mathbb{X}) + \mathbb{A}_j({\mathbb{X}})\right) \cap \left(u_\ell(\mathbb{X}) + \mathbb{A}_\ell({\mathbb{X}})\right)
$$
then $\Psi( \mathbb{X}, z) \in t_j + \rho B_2^n$ and at the same time, $\Psi( \mathbb{X}, z) \in t_\ell + \rho B_2^n$, but those two balls do not intersect because $T$ is $4\rho$-separated.
As a result it follows that
$$
\sum_{j} \mathbbm{1}_{\Omega_j}(\mathbb{X}) \nu (u_j(\mathbb{X}) + \mathbb{A}_j({\mathbb{X}})) \leq 1,
$$
and setting
$$
\mathbb{B}_j({\mathbb{X}}) = -\mathbb{A}_j({\mathbb{X}}) \cap \mathbb{A}_j({\mathbb{X}}) \subset \mathbb{A}_j({\mathbb{X}}),
$$
we have
$$
\sum_{j} \mathbbm{1}_{\Omega_j}(\mathbb{X}) \nu \left(u_j({\mathbb{X}}) + \mathbb{B}_j({\mathbb{X}})\right) \leq 1.
$$
Integrating with respect to $\mu$,
$$
(*)=\sum_{j} \int \mathbbm{1}_{\Omega_j}(\mathbb{X}) \nu \left(u_j({\mathbb{X}}) +\mathbb{B}_j({\mathbb{X}})\right) \ d\mu \leq 1
$$
and all that remains is to estimate $(*)$ from below.
Recall that $\nu$ is the distribution of a Gaussian vector with mean zero and covariance $\sigma^2 I_m$. It is standard to verify (see, e.g. \cite[p.\ 82]{LeT91}) that if $K$ is a centrally symmetric subset of $\mathbb{R}^m$ and $y \in \mathbb{R}^m$ then
$$
\nu(y + K) \geq \exp(-\|y\|_2^2/2\sigma^2) \cdot \nu(K).
$$
In our case, for $\mathbb{X} \in \Omega_j$ each set $\mathbb{B}_j({\mathbb{X}})$ is centrally symmetric. Moreover, by the symmetry of $\nu$, $\nu(-\mathbb{A}_j({\mathbb{X}})) \geq 3/4$, implying that
$$
\nu(\mathbb{B}_j({\mathbb{X}})) \geq 0.5.
$$
Also, if $\mathbb{X} \in {\mathcal U}$ then $\|u_j({\mathbb{X}})\|_2 = \|A t_j\|_2 \leq \kappa \sqrt{m} r$. Note that $\mu(\Omega_j \cap {\mathcal U}) \geq 1/2$, and therefore,
\begin{align*}
(*) \geq & \frac{1}{2} \sum_{j} \int \mathbbm{1}_{\Omega_j}(\mathbb{X}) \exp(-\|u_j({\mathbb{X}})\|_2^2/2\sigma^2) \ d\mu \geq \frac{1}{2}\sum_{j} \mu(\Omega_j \cap {\mathcal U}) \exp(-\kappa^2 mr^2/2\sigma^2)
\\
\geq & \frac{1}{4} |T| \exp(-\kappa^2 mr^2/2\sigma^2).
\end{align*}
It follows that if $\log |T| \geq 2\log(4)$ then
$$
m \geq \kappa^{-2} \frac{\sigma^2}{r^2} \log |T|.
$$
To complete the proof one has to show that $\Sigma_{s,n} \cap r B_2^n$ contains an $\alpha r$-separated set whose log-cardinality is at least $\sim s \log(en/s)$ for a suitable absolute constant $0<\alpha<1$, in which case one may set $r = 4\rho/\alpha$. Indeed, it is standard to verify (see, e.g., \cite[Lemma 10.12]{FoR13}) that there is a collection $\mathbb{J}$ of subsets of $\{1,...,n\}$, each of cardinality $s$, such that $\log |\mathbb{J}| \geq cs\log(en/s)$ and $\mathbb{J}$ is $s/2$-separated with respect to the Hamming distance. For each $J \in \mathbb{J}$, let
$$
v_J = \frac{r}{\sqrt{s}} \sum_{j \in J} e_j.
$$
Then, $v_J\in \Sigma_{s,n}\cap r B_2^n$ and for $I,J \in \mathbb{J}$,
$$
\|v_I-v_J\|_2 = \frac{r}{\sqrt{s}} |I \Delta J|^{1/2} \geq \frac{r}{\sqrt{2}};
$$
thus one may set $\alpha=1/\sqrt{2}$ and the claim follows.
{\mbox{}\nolinebreak\hfill\rule{2mm}{2mm}\par\medbreak}
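The distance computation above is easy to verify numerically. The following minimal Python sketch (illustrative only; the dimension $n$, sparsity $s$, radius $r$ and the subsets $I,J$ are arbitrary choices, not quantities from the proof) checks that $v_I \in \Sigma_{s,n}\cap rB_2^n$ and that $\|v_I-v_J\|_2 = (r/\sqrt{s})|I\Delta J|^{1/2} \geq r/\sqrt{2}$ for one concrete pair:

```python
import math

# Illustrative check of the separated-set construction:
# v_J = (r / sqrt(s)) * sum_{j in J} e_j.
# n, s, r and the subsets I, J are arbitrary choices for this sketch.
n, s, r = 20, 4, 1.0

def v(J):
    # build v_J as a coordinate list
    x = [0.0] * n
    for j in J:
        x[j] = r / math.sqrt(s)
    return x

def norm2(x):
    return math.sqrt(sum(t * t for t in x))

I, J = {0, 1, 2, 3}, {0, 1, 4, 5}
vI, vJ = v(I), v(J)

# v_I is s-sparse with ||v_I||_2 = r, so v_I lies in Sigma_{s,n} ∩ r B_2^n
assert abs(norm2(vI) - r) < 1e-12

# ||v_I - v_J||_2 = (r/sqrt(s)) |I Δ J|^{1/2}; here |I Δ J| = 4 >= s/2,
# so the distance is at least r/sqrt(2)
d = norm2([a - b for a, b in zip(vI, vJ)])
assert abs(d - (r / math.sqrt(s)) * math.sqrt(len(I ^ J))) < 1e-12
assert d >= r / math.sqrt(2)
```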
\section{Extensions} \label{sec:extensions}
We conclude this article by pointing out (without providing details) some possible extensions of Theorem~\ref{thm:isomorphic} and Theorem~\ref{thm:lower}. These can be obtained by making minor modifications to the proofs presented in previous sections.
\subsection{Recovery of approximately sparse vectors}
One may extend the recovery results from sparse vectors to approximately sparse vectors. To that end, consider the recovery program
\begin{equation} \label{eqn:progIsomorphicExtend}
\max_{z\in T} \frac{1}{m}\inr{q_{\operatorname{corr}},Az} - \frac{1}{2\lambda} \frac{\|\Gamma_{\xi} z\|_2^2}{n}
\end{equation}
for the set $T=\sqrt{s} B_1^n \cap B_2^n \subset 2\operatorname{conv}(\Sigma_{2s,n})$. Thus,
\begin{equation*}
{\rm star}(T-T) \cap \rho S^{n-1} \subset 2\sqrt{s}B_1^n \cap \rho S^{n-1}
\end{equation*}
and it is straightforward to verify that for $n\geq s/\rho^2$
\begin{equation*}
2\sqrt{s}B_1^n \cap \rho S^{n-1} \subset 4 \operatorname{conv}(\rho \Sigma_{s/\rho^2,n}).
\end{equation*}
As a consequence, one needs to study \eqref{eq:est-on-(3)-1}, \eqref{eq:est-on-(2)-1} and \eqref{eq:est-on-(1)-2} for the pair of sets
$$
2\operatorname{conv}(\Sigma_{2s,n}), \ \ 4 \operatorname{conv}(\rho \Sigma_{s/\rho^2,n})
$$
rather than for the pair \eqref{eq:sets-0}.
It is straightforward to verify that with high probability any vector $x$ in the images of the sets $\operatorname{conv}(\Sigma_{2s,n})$ and $ \operatorname{conv}(\Sigma_{s/\rho^2,n})$ under $\Gamma_{\xi}$ satisfies a weaker version of the growth property: that for any $r \leq k \leq n$,
\begin{equation} \label{eq:growthWeak}
\|x\|_{[k]} \leq \gamma_1 \sqrt{k\log(en/k)}
\end{equation}
where $r=2s$ or $r=s/\rho^2$, respectively, and $\gamma_1$ is a poly-logarithmic factor in $r$ and $n$.
Using this modified growth property while following the original path of the proof, the following can be established.
\begin{Theorem} \label{thm:appSparse}
There exist constants $c_1,c_2,c_3$ depending only on $L$, and poly-logarithmic factors $\gamma_1,\gamma_2$ satisfying
$$
\gamma_1\leq \log(s/\rho^2)\log(n), \qquad \gamma_2\leq \log(n)\log\log(n)
$$
such that the following holds. Fix $0<\rho<1$; assume that $\nu$ is $L$-subgaussian and that $|\mathbb{E} \nu| \leq c_1 \rho$; and set $\bar{\nu}=\nu-\mathbb{E}\nu$. Let
$$
\lambda \geq c_2 \gamma_1 \max\{\|\bar{\nu}\|_{L_2},1\} \log(e\gamma_1^2 \max\{\|\bar{\nu}\|_{L_2},1\}/\rho)
$$
and set $\beta$ such that
$$
\beta \sqrt{\log(e/\beta)} \leq \frac{c_3}{\gamma_1 \gamma_2} \cdot \frac{\rho}{\lambda}.
$$
Let $\tau$ be uniformly distributed on $[-\lambda,\lambda]$. Put
$n\geq s/\rho^2$ and set
$$
m \geq c_3 \gamma_1^2 \gamma_2^2 \frac{\lambda^2 s \log(en/s)}{\rho^4}.
$$
Then, with probability at least $1-(\frac{s}{\rho^2 n})^{2}$, for any $x\in \mathbb{R}^n$ with $\|x\|_1\leq \sqrt{s}$ and $\|x\|_2\leq 1$, any solution $x^{\#}$ to \eqref{eqn:progIsomorphicIntro} satisfies $\|x^{\#}-x\|_2\leq \rho$.
\end{Theorem}
Moreover, just as in Theorem~\ref{thm:lower}, one can show that Theorem~\ref{thm:appSparse} is minimax optimal (up to a logarithmic factor).
\subsection{Heavier-tailed noise}
It is straightforward to establish a version of Theorem~\ref{thm:isomorphic} for heavier-tailed noise. In fact, it is enough that $\nu$ and $\lambda$ satisfy
$$\mathbb{P}(2|\nu|>\lambda)\leq c_1\rho; \ \ \ \mathbb{E}(|\nu|\mathbbm{1}_{\{2|\nu|>\lambda\}})\leq c_1\rho; \ \ {\rm and} \ \ |\mathbb{E}\nu|\leq c_1\rho.
$$
Thus, heavier-tailed noise can be compensated by stronger dithering (and, as a consequence, an increased number of measurements).
\subsection{Alternative recovery methods} \label{sec:altRec}
The outcomes of Theorem~\ref{thm:isomorphic} and Theorem~\ref{thm:appSparse} hold for two variations of the program \eqref{eqn:progIsomorphicExtend}. Firstly, they remain valid for any solution $x^{\#}$ of
\begin{equation} \label{eqn:convTintro}
\max_{z\in T} \frac{1}{m}\langle q_{\text{corr}},Az\rangle - \frac{1}{2\lambda}\|z\|_2^2.
\end{equation}
Note that this program is equivalent to
\begin{equation}
\label{eqn:projTintro}
\min_{z\in T} \left\|z-\frac{\lambda}{m}A^*q_{\text{corr}}\right\|_2,
\end{equation}
i.e., its output is an $\ell_2$-projection of $\frac{\lambda}{m}A^*q_{\text{corr}}$ onto $T$.
The program \eqref{eqn:convTintro} has some advantages: if there is a priori knowledge that the signal $x$ is $s$-sparse and located in the Euclidean unit ball $B_2^n$, then the program can be used for $T=\Sigma_{s,n}$ and has a closed-form solution $x^{\#}$. Indeed, if $H_s$ is the hard thresholding operator then
\begin{equation}
\label{eqn:HTsol}
x^{\#}=\min\Big\{\frac{\lambda}{m},\frac{1}{\|H_s(A^*q_{\text{corr}})\|_2}\Big\} H_s(A^*q_{\text{corr}}).
\end{equation}
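For illustration, the closed-form solution \eqref{eqn:HTsol} can be sketched in a few lines of Python (the input vector below is an arbitrary stand-in for $A^*q_{\text{corr}}$, and the values of $s$, $\lambda$, $m$ are chosen only for the example):

```python
import math

def hard_threshold(v, s):
    # H_s: keep the s largest-magnitude coordinates, zero out the rest
    keep = sorted(range(len(v)), key=lambda i: -abs(v[i]))[:s]
    out = [0.0] * len(v)
    for i in keep:
        out[i] = v[i]
    return out

def closed_form_solution(v, s, lam, m):
    # x# = min(lam/m, 1/||H_s(v)||_2) * H_s(v), where v stands for A* q_corr
    h = hard_threshold(v, s)
    nrm = math.sqrt(sum(t * t for t in h))
    return [min(lam / m, 1.0 / nrm) * t for t in h]

# arbitrary stand-in for A* q_corr, and arbitrary s, lam, m
v = [3.0, -0.5, 1.0, 0.2, -4.0]
x = closed_form_solution(v, s=2, lam=1.0, m=10)
# x is 2-sparse and lies in the unit ball, as required for T = Sigma_{s,n}
assert sum(1 for t in x if t != 0.0) == 2
assert math.sqrt(sum(t * t for t in x)) <= 1.0 + 1e-12
```

The output is $s$-sparse and lies in $B_2^n$ by construction, consistent with the constraint set $T=\Sigma_{s,n}$.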
Secondly, the outcomes of Theorem~\ref{thm:isomorphic} and Theorem~\ref{thm:appSparse} are satisfied by any solution of the generalized Lasso
\begin{equation} \label{eqn:genLasso}
\min_{z\in T} \left\|q-\frac{1}{2\lambda}Az\right\|_2,
\end{equation}
since this program is equivalent to
$$\max_{z\in T} \frac{1}{m}\langle q,Az\rangle - \frac{1}{2\lambda} \frac{\|Az\|_2^2}{m}.$$
\subsection{Low noise regime}
In the `low noise regime', where $\|\bar{\nu}\|_{L_2}\leq c\rho$, it is possible to combine Theorem~\ref{thm:isomorphic} with adaptive thresholds during quantization (for more information on the adaptive threshold scheme, see \cite{BFN17}). This combination leads to a quantization and reconstruction scheme that, with high probability, recovers any $s$-sparse vector up to error $\rho$ from $m\geq c \gamma \log(1/\rho)s\log(en/s)$ one-bit measurements.
In a completely noiseless setting, this number of measurements is known to be optimal up to the poly-logarithmic factor $\gamma$. Since this scheme is presented in detail in the setting of subgaussian measurements in \cite[Section 3.4]{Dir18}, we will not elaborate on this further.
\section{Introduction}
The Enskog--Vlasov (EV) kinetic equation comprises the Enskog collision
integral for dense fluids \cite{Enskog22} and a Vlasov term describing the
van der Waals force. The first version of the EV equation \cite{Desobrino67}
was based on the original form of the Enskog integral -- which, as shown in
\cite{LebowitzPercusSykes69}, does not comply with the Onsager relations.
\cite{VanbeijerenErnst73a} proposed a modification of the Enskog integral free
from this shortcoming, which was incorporated in the EV model in
\cite{KarkheckStell81,StellKarkheckVanbeijeren83}. \cite{GrmelaGarciacolin80}
showed that an H-theorem holds for the EV equation only subject to a certain
restriction of its coefficients, and \cite{BenilovBenilov18} proposed a
version of the EV equation that satisfies this restriction and conserves
energy as well (which none of the previous versions do).
Note that, in kinetic models, phase transitions correspond to instabilities.
For the original version of the EV equation, the presence of an instability
has been shown in \cite{Grmela71}, and it was interpreted as a gas-liquid phase transition.
In the present paper, we report the results of a more detailed study. Using
the EV\ equation that conserves energy and satisfies an H-theorem, we find
\emph{two} instabilities, with respect to infinite- and finite-wavelength
perturbations -- interpreted as gas-liquid and fluid-solid transitions,
respectively. The latter result comes as a surprise, as the EV equation was
conceived as a tool for modeling fluids only. We show, however, that it
admits periodic solutions capable of `mimicking' the solid phase.
The present paper has the following structure. In section \ref{section 2}, we
introduce the Enskog--Vlasov equation and, in section \ref{section 3}, carry
out the stability analysis of its spatially homogeneous solutions. The general
results are illustrated by applying them to noble gases in sections
\ref{section 4}--\ref{section 5}.
\section{The Enskog--Vlasov model\label{section 2}}
\subsection{The EV equation}
Consider a fluid of hard spheres of diameter $D$, characterized by the
one-particle distribution function $f(\mathbf{r},\mathbf{v},t)$ where
$\mathbf{r}$ is the position vector, $\mathbf{v}$ the velocity, and $t$ the time.
Let the molecules exert on each other a force with a pair-wise potential
$\Phi(r)$, modeling physically the van der Waals interaction of molecules. Let
$\Phi(r)$ be a monotonically growing function of $r$, so that the van der
Waals force is attractive at all distances. Letting also, without loss of
generality, $\Phi\rightarrow0$ as $r\rightarrow\infty$, we can assume that
$\Phi(r)<0$ for all $r$.
As seen later, the main characteristic of $\Phi$ -- one that affects the
fluid's macroscopic properties -- is%
\begin{equation}
E=-\int\Phi(r)\,\mathrm{d}^{3}\mathbf{r}.\label{2.1}%
\end{equation}
Using $E$, $D$, the molecular mass $m$, and the Boltzmann constant $k_{B}$, we
introduce the following nondimensional variables:%
\[
\mathbf{r}_{nd}=\frac{\mathbf{r}}{D},\qquad\mathbf{v}_{nd}=\left( \frac
{m}{ED^{3}}\right) ^{1/2}\mathbf{v},\qquad t_{nd}=\left( \frac{ED}%
{m}\right) ^{1/2}t,
\]%
\[
f_{nd}=\frac{k_{B}E^{1/2}D^{3/2}}{m^{3/2}}f,\qquad\Phi_{nd}=\frac{\Phi}%
{ED^{3}}.
\]
Note that, due to (\ref{2.1}), the nondimensional potential $\Phi_{nd}$
satisfies (the subscript $_{nd}$ omitted)%
\begin{equation}
\int\Phi(r)\,\mathrm{d}^{3}\mathbf{r}=-1.\label{2.2}%
\end{equation}
In terms of the nondimensional variables, the Enskog--Vlasov equation has the
form ($_{nd}$ omitted)%
\begin{multline}
\fl\frac{\partial f(\mathbf{r,v},t)}{\partial t}+\mathbf{v}\cdot
\mathbf{\nabla}f(\mathbf{r,v},t)+\mathbf{F}(\mathbf{r},t)\cdot\frac{\partial
f(\mathbf{r,v},t)}{\partial\mathbf{v}}\nonumber\\
=\int\int\left[ \eta(\mathbf{r},\mathbf{r}+\mathbf{\bm{\kappa}}%
,t)\,f(\mathbf{r},\mathbf{v}^{\prime},t)\,f(\mathbf{r}+\mathbf{\bm{\kappa}}%
,\mathbf{v}_{1}^{\prime},t)\right. \nonumber\\
-\left. \eta(\mathbf{r},\mathbf{r}-\mathbf{\bm{\kappa}},t)\,f(\mathbf{r}%
,\mathbf{v},t)\,f(\mathbf{r}-\mathbf{\bm{\kappa}},\mathbf{v}_{1},t)\right]
\mathbf{g}\cdot\mathbf{\bm{\kappa}}\,\operatorname{H}(\mathbf{g}%
\cdot\mathbf{\bm{\kappa}})\,\mathrm{d}^{2}\mathbf{\bm{\kappa}}\,\mathrm{d}%
^{3}\mathbf{v}_{1},\label{2.3}%
\end{multline}
where $\operatorname{H}$ is the Heaviside function,%
\begin{equation}
\mathbf{F}(\mathbf{r},t)=-\mathbf{\nabla}\int n(\mathbf{r}_{1},t)\,\Phi
(\left\vert \mathbf{r}-\mathbf{r}_{1}\right\vert )\,\mathrm{d}^{3}%
\mathbf{r}_{1}\label{2.4}%
\end{equation}
is the collective van der Waals force,%
\begin{equation}
n(\mathbf{r},t)=\int f(\mathbf{r},\mathbf{v},t)\,\mathrm{d}^{3}\mathbf{v}%
\label{2.5}%
\end{equation}
is the number density, $\mathbf{\bm{\kappa}}$ is a unit vector parameterizing
all possible orientations of a pair of spheres (molecules) at the moment of
collision, and the post-collision velocities $\left( \mathbf{v}^{\prime
},\mathbf{v}_{1}^{\prime}\right) $ are related to the pre-collision ones,
$\left( \mathbf{v},\mathbf{v}_{1}\right) $, by%
\begin{equation}
\mathbf{v}^{\prime}=\mathbf{v}+\mathbf{\bm{\kappa}}\left( \mathbf{g}%
\cdot\mathbf{\bm{\kappa}}\right) ,\qquad\mathbf{v}_{1}^{\prime}%
=\mathbf{v}_{1}-\mathbf{\bm{\kappa}}\left( \mathbf{g}\cdot
\mathbf{\bm{\kappa}}\right) \mathbf{,\qquad g}=\mathbf{v}_{1}-\mathbf{v}%
.\label{2.6}%
\end{equation}
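As a sanity check of (\ref{2.6}), the following minimal Python sketch (the velocities and the unit vector $\bm{\kappa}$ are arbitrary; this is an illustration, not part of the model) confirms that the collision rule conserves momentum and kinetic energy exactly:

```python
import math

def collide(v, v1, kappa):
    # post-collision velocities, eq. (2.6):
    # v' = v + kappa (g.kappa),  v1' = v1 - kappa (g.kappa),  g = v1 - v
    g = [b - a for a, b in zip(v, v1)]
    gk = sum(gi * ki for gi, ki in zip(g, kappa))  # g . kappa
    vp = [a + ki * gk for a, ki in zip(v, kappa)]
    v1p = [b - ki * gk for b, ki in zip(v1, kappa)]
    return vp, v1p

# arbitrary pre-collision velocities and an arbitrary unit vector kappa
v, v1 = [1.0, 0.0, 0.0], [-0.5, 2.0, 0.0]
kappa = [1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0), 0.0]
vp, v1p = collide(v, v1, kappa)

# momentum is conserved component-wise ...
for i in range(3):
    assert abs((vp[i] + v1p[i]) - (v[i] + v1[i])) < 1e-12
# ... and so is kinetic energy (|kappa| = 1 is essential here)
e_before = sum(t * t for t in v) + sum(t * t for t in v1)
e_after = sum(t * t for t in vp) + sum(t * t for t in v1p)
assert abs(e_before - e_after) < 1e-12
```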
The coefficient $\eta(\mathbf{r},\mathbf{r}_{1},t)$ which appears in the
collision integral is, generally, a functional of $n(\mathbf{r},t)$. It
originates from the main assumption of the EV theory that the two-particle
distribution function $f^{(2)}(\mathbf{r},\mathbf{v},\mathbf{r}_{1}%
,\mathbf{v}_{1},t)$ is related to the singlet $f(\mathbf{r},\mathbf{v},t)$ by%
\[
f^{(2)}(\mathbf{r},\mathbf{v},\mathbf{r}_{1},\mathbf{v}_{1},t)=\eta
(\mathbf{r},\mathbf{r}_{1},t)\,f(\mathbf{r},\mathbf{v},t)\,f(\mathbf{r}%
_{1},\mathbf{v}_{1},t).
\]
Given a specific expression for $\eta$, equations (\ref{2.3})--(\ref{2.6})
fully determine the evolution of $f$.
There are three approaches to choosing $\eta(\mathbf{r},\mathbf{r}_{1},t)$:
\begin{enumerate}
\item In the original Enskog theory \cite{Enskog22}, $\eta$ is a function of
the number density evaluated at the midpoint between the colliding molecules,
i.e., $n(\frac{1}{2}(\mathbf{r}+\mathbf{r}_{1}),t)$. This function is supposed
to be such that the EV model describes the equation of state (EoS) of the
fluid under consideration with the best possible accuracy.
\item The authors of \cite{VanbeijerenErnst73a} derived $\eta$ from a
hypothesis that the n-particle distribution function is represented by a
product of singlet distributions and (sic!) a factor excluding all states
where the hard spheres overlap. This hypothesis does hold at equilibrium, but
should be considered as approximate otherwise. Another difficulty associated
with this approach is that the resulting $\eta$ is defined through a limiting
procedure involving multiple integrals of increasing order, making it
impossible to solve the EV equation numerically.
\item The authors of \cite{BenilovBenilov18} assumed%
\begin{multline}
\fl\eta(\mathbf{r},\mathbf{r}_{1},t)=1+\sum_{l=2}^{L}c_{l}\int^{l}\left[
{\displaystyle\prod_{i=2}^{l}}
n(\mathbf{r}_{i},t)\,\operatorname{H}(1-\left\vert \mathbf{r}%
-\mathbf{\mathbf{r}}_{i}\right\vert )\,\operatorname{H}(1-\left\vert
\mathbf{r}_{1}-\mathbf{\mathbf{r}}_{i}\right\vert )\right] \nonumber\\
\times\left[
{\displaystyle\prod_{i=2}^{l-1}}
\,\,%
{\displaystyle\prod_{j=i+1}^{l}}
\operatorname{H}(1-\left\vert \mathbf{r}_{i}-\mathbf{\mathbf{r}}%
_{j}\right\vert )\right]
{\displaystyle\prod_{i=1}^{l}}
\mathrm{d}^{3}\mathbf{r}_{i}, \label{2.7}%
\end{multline}
where $\int^{l}$ denotes $l$ repeated integrals, and the coefficients $c_{2}$,
$c_{3}$, ..., $c_{L}$ are to be chosen to fit the properties of the fluid
under consideration. Note that the `proper' hard-sphere $\eta$ derived in
\cite{VanbeijerenErnst73a} is a particular case of (\ref{2.7}) -- one with
$L=\infty$ and certain values of $c_{l}$ (which are not easy to calculate).
\end{enumerate}
It turns out that the choice of $\eta$ affects the fundamental properties of
the EV equation.
Consider, for example, the entropy of the system, which is traditionally
assumed \cite{Desobrino67,GrmelaGarciacolin80,GrmelaGarciacolin80b,Grmela81}
to have the form%
\[
S=-\int\int f(\mathbf{r,v},t)\ln f(\mathbf{r,v},t)\,\mathrm{d}^{3}%
\mathbf{v}\,\mathrm{d}^{3}\mathbf{r}+Q[n],
\]
where the non-ideal contribution $Q[n]$ is a functional depending on
$n(\mathbf{r},t)$\footnote{The fact that $Q$ depends only on $n$ and not on
$f$ reflects the hard-sphere nature of the EV model.}. Then, the H-theorem
holds if and only if $Q[n]$ and $\eta$ are inter-related by%
\begin{equation}
\mathbf{\nabla}\frac{\delta Q[n]}{\delta n(\mathbf{r},t)}=-\int\eta
(\mathbf{r},\mathbf{r}_{1},t)\,n(\mathbf{r}_{1},t)\,(\mathbf{r}_{1}%
-\mathbf{\mathbf{r}})\,\delta(\left\vert \mathbf{r}-\mathbf{\mathbf{r}}%
_{1}\right\vert -1)\,\mathrm{d}^{3}\mathbf{r}_{1}\label{2.8}%
\end{equation}
(see \cite{GrmelaGarciacolin80} and, for more detail, Appendix A of
\cite{BenilovBenilov18}). The question of existence of $Q[n]$ as a solution of
equation (\ref{2.8}) for a given $\eta$ is not trivial. If, for example,
$\eta$ is a function of $n\left( \frac{1}{2}(\mathbf{r+r}_{1}),t\right) $ --
as in the original Enskog theory -- (\ref{2.8}) does not seem to have a
solution for $Q$. For the versions of $\eta$ suggested in
\cite{VanbeijerenErnst73a,BenilovBenilov18}, on the other hand, it does. In
the latter case, an explicit expression for $Q$ can be found,%
\begin{multline}
\fl Q[n]=-\frac{1}{2}\int\int n(\mathbf{r})\,n(\mathbf{r}_{1}%
)\,\operatorname{H}(1-\left\vert \mathbf{r}-\mathbf{\mathbf{r}}_{1}\right\vert
)\,\mathrm{d}^{3}\mathbf{r}\,\mathrm{d}^{3}\mathbf{r}_{1}\nonumber\\
-\sum_{l=2}^{L}\frac{c_{l}}{l\left( l+1\right) }\int^{l}\int n(\mathbf{r}%
)\left[
{\displaystyle\prod_{i=1}^{l}}
n(\mathbf{r}_{i})\,\operatorname{H}(1-\left\vert \mathbf{r}-\mathbf{\mathbf{r}%
}_{i}\right\vert )\right] \nonumber\\
\times\left[
{\displaystyle\prod_{i=1}^{l-1}}
\,\,%
{\displaystyle\prod_{j=i+1}^{l}}
\operatorname{H}(1-\left\vert \mathbf{r}_{i}-\mathbf{\mathbf{r}}%
_{j}\right\vert )\right] \mathrm{d}^{3}\mathbf{r}\,%
{\displaystyle\prod_{i=1}^{l}}
\mathrm{d}^{3}\mathbf{r}_{i},\label{2.9}%
\end{multline}
where the coefficients $c_{l}$ are the same as in expression (\ref{2.7}) for
$\eta$.
In this paper, we shall use $\eta$ and $Q$ given by (\ref{2.7}) and
(\ref{2.9}), respectively.
We shall also need the function $\Theta(n)$ related to the functional $Q[n]$
by%
\[
\Theta(n)=-\frac{1}{n}\left( Q[n]\right) _{n=\operatorname{const}},
\]
so that (\ref{2.9}) yields%
\begin{equation}
\Theta(n)=\frac{2\pi}{3}n+\sum_{l=2}^{L}\frac{c_{l}A_{l}}{l\left( l+1\right)
}n^{l}, \label{2.10}%
\end{equation}
where%
\begin{equation}
A_{l}=\int^{l}\left[
{\displaystyle\prod_{i=1}^{l}}
\operatorname{H}(1-\left\vert \mathbf{\mathbf{r}}_{i}\right\vert )\right]
\left[
{\displaystyle\prod_{i=1}^{l-1}}
\,\,%
{\displaystyle\prod_{j=i+1}^{l}}
\operatorname{H}(1-\left\vert \mathbf{r}_{i}-\mathbf{\mathbf{r}}%
_{j}\right\vert )\right]
{\displaystyle\prod_{i=1}^{l}}
\mathrm{d}^{3}\mathbf{r}_{i} \label{2.11}%
\end{equation}
are numeric constants.
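Note that $A_{1}=4\pi/3$ (the volume of the unit ball), which is consistent with the $\frac{2\pi}{3}n$ term in (\ref{2.10}). The higher constants can be estimated by Monte Carlo sampling, as in the sketch below (illustrative Python; the sample size is arbitrary, and the reference value $A_{2}=5\pi^{2}/6\approx8.22$ used as a cross-check is our own evaluation via the sphere--sphere lens-volume formula, not a value quoted above):

```python
import math, random

random.seed(0)

def in_unit_ball(p):
    return p[0]*p[0] + p[1]*p[1] + p[2]*p[2] <= 1.0

# A_2 per (2.11): the 6-dimensional volume of pairs (r1, r2) with
# |r1| < 1, |r2| < 1 and |r1 - r2| < 1, sampled over the cube [-1,1]^6
N, hits = 200_000, 0
for _ in range(N):
    r1 = [random.uniform(-1.0, 1.0) for _ in range(3)]
    r2 = [random.uniform(-1.0, 1.0) for _ in range(3)]
    if (in_unit_ball(r1) and in_unit_ball(r2)
            and sum((a - b)**2 for a, b in zip(r1, r2)) <= 1.0):
        hits += 1
A2 = hits / N * 2**6                 # rescale by the cube volume 64

# cross-check: integrating the lens volume of two unit spheres over the
# ball of centre separations gives A_2 = 5*pi^2/6 (our derivation)
A2_exact = 5.0 * math.pi**2 / 6.0    # ~ 8.2247
assert abs(A2 - A2_exact) < 0.3
```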
$\Theta(n)$ plays an important role in the thermodynamics of EV fluids: in
particular, their EoS is \cite{BenilovBenilov18}%
\begin{equation}
p=nT\left[ 1+n\Theta^{\prime}(n)\right] -\dfrac{1}{2}n^{2}, \label{2.12}%
\end{equation}
where $\Theta^{\prime}=\mathrm{d}\Theta/\mathrm{d}n$.
\subsection{Steady solutions of the EV equation}
Physically, steady (time-independent) solutions of the EV equation must have
spatially uniform temperature and zero fluxes of mass, momentum, and energy --
which means that they must be equilibrium states.
To find these, observe that the scattering cross-section in the Enskog
integral does not depend on $\mathbf{v}$ -- as a result, the EV equation is
consistent with the following ansatz:%
\[
f(\mathbf{r},\mathbf{v},t)=\frac{n(\mathbf{r})}{\left( 2\pi T\right) ^{3/2}%
}\exp\left( -\frac{\left\vert \mathbf{v}\right\vert ^{2}}{2T}\right) ,
\]
where $T$ is the temperature. Substituting this ansatz into the EV equation
and carrying out straightforward algebra (see \cite{Grmela71}), we obtain the
following equation for $n(\mathbf{r})$:%
\begin{multline}
\fl\mathbf{\nabla}\left[ \ln n(\mathbf{r})+\frac{1}{T}\int n(\mathbf{r}%
_{1})\,\Phi(\left\vert \mathbf{r}-\mathbf{r}_{1}\right\vert \mathbf{)}%
\,\mathrm{d}^{3}\mathbf{r}_{1}\right] \\
+\int\eta(\mathbf{r},\mathbf{r}_{1})\,n(\mathbf{r}_{1})\,(\mathbf{r}%
_{1}-\mathbf{r})\,\delta(\left\vert \mathbf{r}_{1}-\mathbf{r}\right\vert
-1)\,\mathrm{d}^{3}\mathbf{r}_{1}=0.
\end{multline}
Subject to (\ref{2.8}), this equation can be integrated,%
\begin{equation}
\ln n(\mathbf{r})+\frac{1}{T}\int n(\mathbf{r}_{1})\,\Phi(\left\vert
\mathbf{r}-\mathbf{r}_{1}\right\vert \mathbf{)}\,\mathrm{d}^{3}\mathbf{r}%
_{1}-\frac{\delta Q[n]}{\delta n(\mathbf{r})}=\mathrm{const}.\label{2.13}%
\end{equation}
This equation coincides with the Euler equation from density functional theory
and also arises in equilibrium statistical mechanics (grand ensemble), where
the term involving $\Phi$ is the functional derivative of the mean field
contribution to the free energy, the $\mathrm{const}$ is the nondimensional
chemical potential divided by $T$, and $Q[n]$ is the excess free energy. The
present derivation shows that $Q[n]$ can also be interpreted as the excess
contribution to, or non-ideal part of, the entropy.
\section{The stability analysis\label{section 3}}
Consider the spatially uniform Maxwellian distribution $f_{M}(\mathbf{v})$. To
examine its stability within the framework of the EV equation, one should let%
\[
f(\mathbf{r},\mathbf{v},t)=f_{M}(\mathbf{v})+\tilde{f}(\mathbf{r}%
,\mathbf{v},t),
\]
where $\tilde{f}(\mathbf{r},\mathbf{v},t)$ is a small perturbation. It is
usually sufficient to examine harmonic perturbations only,%
\begin{equation}
\tilde{f}(\mathbf{r},\mathbf{v},t)=\hat{f}(\mathbf{v})\,\mathrm{e}%
^{ikz+\lambda t}, \label{3.1}%
\end{equation}
where $k$ is the perturbation's wavenumber, $\lambda$ is its growth/decay
rate, and $z$ is one of the spatial coordinates. Substituting (\ref{3.1}) into
the linearized EV equation, one obtains an eigenvalue problem, where $\hat
{f}(\mathbf{v})$ is the eigenfunction and $\lambda$ the eigenvalue. If, for
some $k$, an eigenvalue exists such that $\operatorname{Re}\lambda>0$, the
base state is unstable.
Unfortunately, the outlined procedure implies solving a two-dimensional
integral equation involving the $z$ and normal-to-$z$ components of
$\mathbf{v}$. This equation cannot be solved analytically, and it is
difficult to solve even numerically.
Instead, we shall only examine \textquotedblleft frozen
waves\textquotedblright, i.e., perturbations with zero growth/decay rate,
$\lambda=0$. They are excellent stability indicators: if a frozen wave with a
wavenumber $k$ exists for a certain state, either a small increase or a small
decrease of $k$ should make it unstable. Thus, the parameter values for which
the first frozen wave bifurcates from the base state corresponds to the onset
of instability.
Admittedly, if $\operatorname{Re}\lambda$ changes sign while
$\operatorname{Im}\lambda\neq0$, this approach fails to detect destabilization
-- but in similar kinetic equations examined for stability so far
\cite{BenilovBenilov16,Fowler19}, this kind of destabilization does not occur.
In the worst-case scenario, one finds some, albeit not all, of the unstable states.
Most importantly, frozen waves in the problem at hand can be found
analytically -- which is incomparably simpler than dealing with the general
perturbations (\ref{3.1}). For the same reason, this kind of stability
analysis is often used in fluid mechanics, in particular, for liquid bridges
(for example, \cite{MeseguerSlobozhaninPerales95,Benilov16}).
Since frozen waves are steady, we can search for them using the steady-state
reduction (\ref{2.13}) of the full EV\ equation. To do so, let
\[
n(\mathbf{r})=\bar{n}+\tilde{n}(\mathbf{r}),
\]
where $\bar{n}$ is the density of the base state and $\tilde{n}(\mathbf{r})$
is a perturbation. Substituting expression (\ref{2.10}) for $Q[n]$ into
equation (\ref{2.13}), linearizing it, and letting $\tilde{n}(\mathbf{r}%
)=\mathrm{e}^{ikz}$, we obtain an equation inter-relating $k$, $T$, and
$\bar{n}$ -- which can be written in the form (overbars omitted)%
\begin{equation}
T=-\frac{n\,\hat{\Phi}(k)}{1+nF_{1}(k)+%
{\displaystyle\sum\limits_{l=2}^{L}}
c_{l}n^{l}F_{l}(k)},\label{3.2}%
\end{equation}
where%
\begin{equation}
\fl F_{l}(k)=\int^{l}\left[
{\displaystyle\prod_{j=1}^{l}}
\operatorname{H}(1-\left\vert \mathbf{r}_{j}\right\vert )\right] \left[
{\displaystyle\prod_{j=1}^{l-1}}
{\displaystyle\prod_{i=j+1}^{l}}
\operatorname{H}(1-\left\vert \mathbf{r}_{j}-\mathbf{r}_{i}\right\vert
)\right] \cos kz_{l}\,%
{\displaystyle\prod_{j=1}^{l}}
\mathrm{d}^{3}\mathbf{r}_{j},\label{3.3}%
\end{equation}
and%
\[
\hat{\Phi}(k)=%
{\displaystyle\int}
\Phi(r\mathbf{)}\cos kz\,\mathrm{d}^{3}\mathbf{r}.
\]
Note that, due to constraint (\ref{2.2}),%
\begin{equation}
\hat{\Phi}(0)=-1.\label{3.4}%
\end{equation}
Functions $F_{l}(k)$ do not involve any parameters. The first two can be
calculated analytically, and the other three have been computed using the
Monte Carlo method. All five are depicted in figure \ref{fig1}.
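For reference, the first of these functions has the closed form $F_{1}(k)=4\pi(\sin k-k\cos k)/k^{3}$ -- the Fourier transform of the unit-ball indicator -- so $F_{1}(k)\rightarrow4\pi/3$ as $k\rightarrow0$. The minimal Python sketch below (illustrative only; the wavenumber and sample size are arbitrary) compares this closed form with a direct Monte Carlo evaluation of (\ref{3.3}) for $l=1$:

```python
import math, random

random.seed(1)

def F1_exact(k):
    # F_1(k) = integral of cos(k z) over the unit ball
    #        = 4*pi*(sin k - k cos k)/k^3
    return 4.0 * math.pi * (math.sin(k) - k * math.cos(k)) / k**3

def F1_mc(k, N=200_000):
    # Monte Carlo evaluation of the same integral over the cube [-1,1]^3
    acc = 0.0
    for _ in range(N):
        x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
        if x*x + y*y + z*z <= 1.0:
            acc += math.cos(k * z)
    return acc / N * 8.0      # rescale by the cube volume

# F_1(k) -> 4*pi/3 (the ball volume) as k -> 0 ...
assert abs(F1_exact(1e-3) - 4.0 * math.pi / 3.0) < 1e-5
# ... and the Monte Carlo estimate agrees with the closed form
assert abs(F1_mc(2.0) - F1_exact(2.0)) < 0.05
```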
\begin{figure}
\begin{flushright}
\includegraphics[width=0.835\textwidth]{fig1.pdf}
\end{flushright}
\caption{The functions $F_{l}(k)$ defined by (\ref{3.3}). The curves are marked with the corresponding value of $l$.}
\label{fig1}
\end{figure}
Equality (\ref{3.2}) is, essentially, an instability criterion: if a value of
$k$ exists such that (\ref{3.2}) is satisfied for a state $\left( n,T\right)
$, this state is unstable.
\section{The results\label{section 4}}
In what follows, we shall illustrate criterion (\ref{3.2}) using the values
for the coefficients $c_{l}$, obtained in \cite{BenilovBenilov19} for noble
gases. The series representing $Q$ was truncated at $L=5$, and%
\begin{align}
c_{2} & =-1.3207,\hspace{1.07cm}c_{3}=9.9308,\label{4.1}\\
c_{4} & =-18.7526,\qquad c_{5}=13.1406.\label{4.2}%
\end{align}
As seen later, the shape of the Vlasov potential is of little importance, so
we assume, on a more or less ad hoc basis,%
\begin{equation}
\hat{\Phi}(k)=-\frac{1}{1+\left( Rk\right) ^{4}},\label{4.3}%
\end{equation}
where $R$ is, physically, the ratio of the spatial scale of the van der Waals
force to the molecule's size. Evidently, expression (\ref{4.3}) complies with
restriction (\ref{3.4}).
The stability criterion (\ref{3.2}), (\ref{4.1})--(\ref{4.3}) describes a
one-parameter family of curves $T=T(n)$ with $k$ being the parameter. The
behavior of these curves depends on whether or not the fifth-order polynomial
in $n$ in the denominator of (\ref{3.2}) has positive roots. Computations show
that no more than one such root exists, and it (dis)appears only if
$F_{5}(k)$ changes sign -- which it does for an infinite sequence of values
of $k$ tending to infinity (see figure \ref{fig1}). Denoting these values by
$k_{1}$, $k_{2}$, $k_{3}$..., we have computed%
\[
k_{1}\approx6.2042,\qquad k_{2}\approx8.0354,\qquad k_{3}\approx11.6014.
\]
A straightforward analysis of expression (\ref{3.2}) shows that, in the range%
\begin{equation}
0<k<k_{1},\label{4.4}%
\end{equation}
the denominator of expression (\ref{3.2}) does \emph{not} have positive roots.
As a result -- and due to quick decay of $\hat{\Phi}(k)$ as $k$ increases --
the curves $T(n)$ `recede' within range (\ref{4.4}) -- see figure \ref{fig2}.
Thus, the curve with $k=0$ determines the boundary of an instability region,
which will be referred to as IR1.
\begin{figure}
\begin{flushright}
\includegraphics[width=0.835\textwidth]{fig2.pdf}
\end{flushright}
\caption{Existence of frozen waves on the $\left( n,T\right)$ plane. The curves $T(n)$ are determined by (\ref{3.2}), (\ref{4.1})--(\ref{4.3}) with $R=1$. Dotted curves within ranges (1)--(3) correspond to $k$ being within ranges (\ref{4.4})--(\ref{4.6}), respectively. The boundaries of the instability regions are shown by solid lines.}
\label{fig2}
\end{figure}
Another instability region (IR2) arises for the range $k_{1}<k<k_{2}$ -- which
can be conveniently subdivided into two subranges,%
\begin{equation}
k_{1}<k<k_{1.5},\label{4.5}%
\end{equation}
with $k_{1.5}\approx7.129$, and%
\begin{equation}
k_{1.5}<k<k_{2}.\label{4.6}%
\end{equation}
As $k$ changes from $k_{1}$ to $k_{1.5}$, the (real positive) root $n_{0}$ of
the denominator of (\ref{3.2}) `travels' from $+\infty$ to $n_{0}\approx
1.230$. Then, when $k$ changes from $k_{1.5}$ to $k_{2}$, $n_{0}$ travels back
to $+\infty$ -- i.e., the boundary of IR2 corresponds to $k=k_{1.5}$. The
corresponding curve $T(n)$ is shown in figure \ref{fig2} together with
examples of curves for $k$ from ranges (\ref{4.5}) and (\ref{4.6}).
A basic analysis of expression (\ref{3.2}) and computations show that the
instability regions corresponding to $\left( k_{2},k_{3}\right) $, $\left(
k_{3},k_{4}\right) $, etc. are all \emph{inside} IR1 and IR2 and, thus, are
physically unimportant.
Finally, if $n\ll1$ (diluted gas), the stability criterion (\ref{3.2}) agrees
with the corresponding results obtained in
\cite{BenilovBenilov16,BenilovBenilov17} for the BGK--Vlasov and
Boltzmann--Vlasov models, respectively.
\section{Discussion\label{section 5}}
For $k=0$ (the boundary of IR1), (\ref{3.2}) and (\ref{3.4}) reduce to%
\[
T=\frac{n}{1+\frac{4\pi}{3}nA_{1}+%
{\displaystyle\sum\limits_{l=2}^{L}}
c_{l}n^{l}A_{l}},
\]
where constants $A_{l}$ are given by (\ref{2.11}). The above expression can be
rewritten in terms of the function $\Theta(n)$ [given by (\ref{2.10})],%
\begin{equation}
T=\frac{n}{1+\left[ n^{2}\Theta^{\prime}(n)\right] ^{\prime}}. \label{5.1}%
\end{equation}
This representation of the boundary of IR1 turns out to be very useful.
(1) Equation (\ref{5.1}) implies that IR1 does not depend on the specific
shape of the Vlasov potential $\Phi$.
(2) As for IR2, it does depend on $\Phi$, but this dependence is weak -- which
we illustrate by computing the boundary of IR2 for different values of the
parameter $R$ [which appears in expression (\ref{4.3})] and plotting the
results in figure \ref{fig3}. One can see that, for $R\gtrsim2$, the boundary
of IR2 is virtually indistinguishable from a vertical line. This effect is
even more pronounced if $\hat{\Phi}(k)$ decays exponentially as $k\rightarrow
\infty$.
\begin{figure}
\begin{flushright}
\includegraphics[width=0.835\textwidth]{fig3.pdf}
\end{flushright}
\caption{The dependence of the boundary of IR2 on the parameter $R$ of the Fourier transform (\ref{4.3}) of the Vlasov potential. The inset shows a blow-up of the shaded region of the main panel. The curves are marked with the corresponding values of $R$.}
\label{fig3}
\end{figure}
Given that the van der Waals force is supposed to be long-range (by comparison
with the molecule size), one can assume that $R\gg1$, and thus replace the
boundary of IR2 by a vertical line. Physically, this means that a fluid cannot
be compressed beyond a certain density value no matter what the temperature is.
(3) Using EoS (\ref{2.12}), one can show that the maximum of the function
$T(n)$ given by (\ref{5.1}) corresponds to the critical point.
(4) Not all of the stable states are physically meaningful, as some of them
correspond to negative pressure. These can be detected using EoS (\ref{2.12}).
For the case (\ref{4.1})--(\ref{4.3}) with $R=1$, the full diagram of stable
and physically meaningful fluid states is shown in figure \ref{fig4}.
\begin{figure}
\begin{flushright}
\includegraphics[width=0.835\textwidth]{fig4.pdf}
\end{flushright}
\caption{The stable, physically meaningful fluid states in the $\left( n,T\right) $ parameter plane. IR1 and IR2 stand for instability regions 1 and 2, respectively. The black dot marks the critical point.}
\label{fig4}
\end{figure}
(5) As stated in most thermodynamics texts, a non-ideal gas becomes unstable
if
\begin{equation}
\left( \frac{\partial p}{\partial n}\right) _{T=\operatorname{const}}<0,
\label{5.2}%
\end{equation}
i.e., if an increase of density gives rise to a decrease of pressure. Applying
this argument to EoS (\ref{2.12}), we obtain
\[
\left( \frac{\partial p}{\partial n}\right) _{T=\operatorname{const}}
=T\left[ 1+\left( n^{2}\Theta^{\prime}(n)\right) ^{\prime}\right] -n,
\]
and setting this derivative to zero recovers equation (\ref{5.1}) describing
the boundary of IR1.
IR2, in turn, is located in the high-density region -- hence, it may only describe
fluid-solid transitions. Most importantly, the whole boundary of IR2
corresponds to a single value of the perturbation wavenumber, $k_{1.5}$ -- so
that $2\pi/k_{1.5}$ can be identified with the spatial scale of the emerging
crystal. This agrees with the fact that crystal structure does not depend
on the temperature or density of the fluid state where the transition takes place.
(6) It is well-known that the gas-liquid transition typically occurs \emph{before}
criterion (\ref{5.2}) predicts it. The threshold where the actual transition
occurs is determined by the so-called evaporation curve describing the
gas-liquid equilibrium. It is still possible, however, to overcool a gas or
overheat a liquid beyond this threshold, provided they are sufficiently pure.
Thus, the boundaries of the instability regions are essentially the limits to
which one can overcool or overheat a fluid before phase transition occurs.
To illustrate this interpretation, we have redrawn figure \ref{fig4} on the
$\left( T,p\right) $ plane, thus turning it into a phase diagram -- see
figure \ref{fig5}. We have also added empirically-derived evaporation,
melting, and sublimation curves (the last two describe the solid-liquid and
solid-gas equilibria, respectively).
\begin{figure}
\begin{flushright}
\includegraphics[width=0.835\textwidth]{fig5.pdf}
\end{flushright}
\caption{The phase diagram for argon in the nondimensional $\left( T,p\right) $ plane. Solid lines correspond to the boundaries of the instability regions computed using the EV model; dashed lines show the empiric evaporation, melting, and sublimation curves \cite{TegelerSpanWagner99}. The critical and triple points are marked by a black dot and small circle, respectively. \textquotedblleft G\textquotedblright, \textquotedblleft L\textquotedblright, and \textquotedblleft S\textquotedblright\ mark the regions where gas, liquid, and solid may exist; the prefixes \textquotedblleft oc\textquotedblright\ and \textquotedblleft oh\textquotedblright\ mean \textquotedblleft overcooled\textquotedblright\ and \textquotedblleft overheated\textquotedblright.}
\label{fig5}
\end{figure}
The following features of figure \ref{fig5} can be observed:
\begin{itemize}
\item There are two single-phase regions: in the one marked \textquotedblleft
S\textquotedblright, only solid phase exists -- and in the one whose parts are
marked \textquotedblleft L\textquotedblright\ or \textquotedblleft
G\textquotedblright, one of the two fluid phases exists (gas and liquid are
difficult to separate in the latter case, as they can be continuously
transformed one into another).
\item In the transitional zone marked \textquotedblleft
S/ocL\textquotedblright, either solid or overcooled liquid can exist -- and in
the zone \textquotedblleft S/ocL/ocG\textquotedblright, it is either solid or
overcooled liquid, or overcooled gas.
\item In the remaining two zones, \textquotedblleft L/ocG\textquotedblright%
\ and \textquotedblleft G/ohL\textquotedblright, either of the two fluid
phases can exist.
\end{itemize}
\section{Concluding remarks}
In this work, we have used the Enskog--Vlasov model to examine when fluids are
unstable, and with respect to which perturbations. The parameter range of the
instability is illustrated in figure \ref{fig4} on the nondimensional $\left(
n,T\right) $ plane, and in figure \ref{fig5}, on the $\left( T,p\right) $
plane. These figures are the main results of this work.
Note that, in figure \ref{fig5}, we have calculated only the solid curves,
whereas the dashed ones have been obtained by methods of statistical
thermodynamics \cite{TegelerSpanWagner99}. This does not mean that the EV
model cannot be used to calculate the latter: in fact, it \emph{has} been used
for calculating the evaporation curve, producing a result with an error of
only several percent \cite{BenilovBenilov19}. Before calculating the melting
and sublimation curves, however, one should explore periodic solutions of the
EV equation which describe the solid (crystal) state; these solutions
bifurcate from the spatially uniform (fluid) solutions as frozen waves. That
is, we do not claim that the EV model can describe the fundamental physics of
the solid state -- but we do hope that it can `mimic' it given a suitable
choice of the functional $Q[n]$ and the Vlasov potential $\Phi$. In fact, the
Enskog approach to dense fluids has been successfully used for describing
hard-sphere crystals \cite{Kirkpatrick89,KirkpatrickDasErnstPiasecki90} and
studying equilibrium properties of the liquid--solid phase transitions
\cite{RamakrishnanYussouff79,HaymetOxtoby81} (for recent developments in the
latter theory, see
\cite{Archer09,Lutsko12,BaskaranBaskaranLowengrub14,HeinonenAchimKosterlitzYingEtAl16}%
).
Once the EV model is calibrated to deal with all three phases, it would become
an invaluable tool for modeling complex physical problems (e.g., evolution of
liquid films with evaporation and solidification). This is an important point,
as several versions of the Enskog--Vlasov kinetic equation have been used for
applications (see
\cite{FrezzottiBarbante17,FrezzottiGibelliLockerbySprittles18} and references therein).
\ack{This work was supported by FCT---Fundação para a Ciência e a Tecnologia of Portugal under Project UID/FIS/50010/2019 and by European Regional Development Fund through the Operational Program of the Autonomous Region of Madeira 2014--2020 under Project PlasMa-M1420-01-0145-FEDER-000016.}\vspace{1cm}
\section{Introduction}\label{S:into}
Ultralong-range Rydberg molecules (ULRM) have been the subject of much recent
interest because of their novel physical and chemical properties\cite{shaf18}. In the
present work we focus on Rydberg dimer molecules which comprise a ground-state
atom weakly bound to a high-$n$ Rydberg atom by scattering of the Rydberg
electron. While the existence of such molecules was first predicted
theoretically~\cite{gree00}, they have now been observed using a variety of
different atomic Rydberg states and a number of atomic species including
rubidium, cesium, and
strontium~\cite{bend09,li11,tall12,bell13,ande14,desa15,sass15,nied16,shaf18}. The
interaction between the excited Rydberg electron and ground state atom can be
approximated using a Fermi pseudopotential~\cite{ferm34}.
The resulting molecular potential can support a number of vibrational levels
which, for example, for principal quantum numbers $n\sim30$, have binding energies of
a few, to a few tens, of megahertz. Since the binding energies are so low,
Rydberg molecules can only be studied in cold molecular gases,
$T \lesssim 1$~mK. The binding energies decrease rapidly with
increasing $n$, scaling as $\sim 1/n^{6}$.
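This scaling is simple to illustrate numerically. The sketch below is ours, not the paper's calculation; the $\sim30$~MHz depth at $n=30$ is an assumed reference value, chosen to be consistent with the vibrational spacings quoted later in the text.

```python
# Illustrative sketch (not from the paper): the ~1/n^6 scaling of ULRM
# binding energies, anchored to an assumed ~30 MHz reference at n = 30.
def binding_energy_mhz(n, n_ref=30, e_ref_mhz=30.0):
    """Rough ULRM binding-energy estimate from the 1/n^6 scaling law."""
    return e_ref_mhz * (n_ref / n) ** 6

# Binding energies fall by roughly a factor of five between n = 30 and n = 40.
for n in (30, 35, 40, 45):
    print(n, round(binding_energy_mhz(n), 2))
```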
\begin{figure}[b]
\begin{center}
\includegraphics[width=10cm]{Fig1.pdf}
\end{center}
\caption{\label{fig:mol potential}
Calculated molecular potential for a $5s34s$~$^{3}$S$_{1}$-$5s^{2}~^{1}S_{0}$
strontium atom pair (see Eq.~\ref{eq:pseudopotential}) together with the calculated vibrational wavefunctions multiplied by the radial coordinate $R$ for
the $\nu$=0 to $\nu$=4 vibrational states. The horizontal axis for
each wavefunction denotes its binding energy. The inset shows the pair
correlation functions, $g^{(2)}(R)$, for cold thermal gases of non-interacting
identical bosons, fermions, and classical, i.e., distinguishable,
particles~\cite{nara99}. The particle separations are expressed in units of
the thermal de Broglie wavelength $\lambda_{dB}=h/\sqrt{2\pi m k_{B}T}$
where $m$ is the atomic mass, $k_{B}$ the Boltzmann constant, and $T$ the temperature.}
\end{figure}
An example of a molecular potential is depicted in
Fig.~\ref{fig:mol potential} together with the radial component
of the vibrational wavefunctions
for the $\nu$=0 to 4 vibrational levels.
The oscillatory structure in the molecular potential reflects the modulations
in the radial Rydberg electron probability density distribution.
The wavefunction for the ground $\nu$=0 vibrational state is strongly localized in the outermost well of the molecular potential which is located near the outer classical turning point of the Rydberg electron orbit. The probability of photoexciting a $\nu=0$ dimer molecule therefore depends on the likelihood of finding a pair of ground-state atoms with the required initial internuclear separation, $R$. Thus, by varying $n$, and hence the location of the potential minimum, measurements of dimer formation can be used to probe the spatial dependence of the pair correlation function, $g^{(2)}(R)$, in a cold gas~\cite{nara99}. This has been exploited in earlier work to probe non-local pair correlations in cold gases of (bosonic) $^{84}$Sr and (fermionic) $^{87}$Sr over length scales of $\sim50-150$~nm using Rydberg molecules with $31\leq n\leq 45$~\cite{whal19a,whal19}. These studies clearly demonstrated the effects of quantum statistics, i.e., of bunching in a thermal gas of $^{84}$Sr and antibunching due to Pauli exclusion in a spin-polarized gas of $^{87}$Sr.
As seen in Fig.~\ref{fig:mol potential}, the wavefunctions for higher excited vibrational states $\nu =1, 2, \cdots$
extend
to smaller internuclear separations than for the $\nu$=0 states which
suggests that measurements of the formation rates for the different
vibrationally-excited dimer levels might be used to probe spatial correlations over a
broader range of $R$. This we examine in the present work where we
compare results for the formation of the different molecular vibrational
states in a cold ($T\sim 900 $nK) gas of spin-polarized $^{87}$Sr with
results for an unpolarized $^{87}$Sr sample. $^{87}$Sr atoms have a sizable
nuclear spin, $I=9/2$, resulting in a total angular momentum $F=9/2$ for the
$5s^{2}~^{1}S_{0}$ ground state and a large number of magnetic sublevels,
$m_{F} = -9/2, -7/2, \ldots, 7/2, 9/2$. Because of this ten-fold degeneracy, a
statistical population of ground-state $^{87}$Sr atoms provides a good
approximation to a gas of uncorrelated, i.e., classical, particles. (The bosonic isotopes of strontium have $I=0$ and thus no degeneracy in the ground state. Here, when we refer in general to bosonic isotopes, we assume they are spin polarized.)
The experimental results are interpreted through comparison to calculated
rates for molecule formation that incorporate an effective Franck-Condon factor and
account for pair correlations.
The results presented here further demonstrate that studies of Rydberg molecule formation can provide a valuable probe of spatial correlations in quantum gases over a sizable (and previously inaccessible) range of internuclear separations together with a test of the present theoretical understanding of the molecular potentials and wavefunctions involved.
\section{Experimental approach}\label{experimental approach}
As is apparent from the inset in Fig.~\ref{fig:mol potential}, for cold gases of non-interacting identical bosons or fermions, quantum statistics only begin to have a significant effect on
$g^{(2)}(R)$ at small interparticle spacings $R\lesssim0.8~\lambda_{dB}$, where $\lambda_{dB}$ is the thermal de Broglie wavelength, and their effects only become readily apparent at even smaller interparticle spacings, say $R\sim 0.4~\lambda_{dB}$. Earlier work has shown that an $^{87}$Sr gas can be readily cooled to temperatures of $\sim800-900$~nK corresponding to $\lambda_{dB}\sim200$~nm enabling study of the effects of quantum statistics at interparticle spacings $\lesssim160$~nm. An interparticle spacing of 160~nm corresponds to the radius, $R_{n}$, of $5sns~^{3}S_{1}$ Rydberg atoms (given by $R_{n}\sim2(n-\delta)^{2} a_{0}$ where $\delta\sim3.37$ is the quantum defect and $a_{0}$ the Bohr radius) with $n\sim40$. Smaller interparticle separations can be investigated through formation of molecules with smaller values of $n$. However, it is difficult to extend measurements to values of $n\lesssim30$ because production of a Rydberg molecule requires that the ground-state atom density, $\rho$, in the trap be such that there is a significant likelihood of finding ground-state atom pairs with the necessary spacing which, for $n\sim30$, necessitates cold atom densities $\rho\gtrsim3\times 10^{13}$~cm$^{-3}$. In addition, measurements for $n\gtrsim45$ are challenging as the spacings between neighboring vibrational levels, which decrease rapidly as $n$ increases, become very small making them difficult to resolve with our existing laser linewidth of $\sim$300~kHz.
To more directly compare molecule formation in polarized and unpolarized
gases it is advantageous to produce samples with very similar temperatures and
density distributions. This is challenging because achieving sub-$\mu$K
temperatures requires evaporative cooling. For an unpolarized sample, energy
transfer during collisions allows the sample to continuously thermalize as
the trap depth is lowered. In contrast, for a spin-polarized sample there
are no $s$-wave collisions at these low temperatures and therefore
thermalization is suppressed. To overcome this problem a mixture of $^{84}$Sr and $^{87}$Sr is trapped and sympathetic cooling used to obtain the desired temperature.
\begin{figure}[hb]
\begin{center}
\includegraphics[width=10cm]{Fig2.pdf}
\end{center}
\caption{\label{fig:term diagram}
a) Schematic partial term diagram for strontium showing the levels used for laser cooling and repumping. The dashed lines represent the spontaneous decay paths involved in magnetic trapping and in repumping. b) Schematic diagram of the relevant transitions used for optical pumping and for two-photon excitation to 5sns~$^{3}$S$_{1}$ Rydberg states including the hyperfine structure.}
\end{figure}
The present cooling protocol~\cite{dees09,stel14}
can be understood with reference to
Fig.~\ref{fig:term diagram}
which presents a partial term diagram for strontium. Strontium atoms emerging from a Zeeman slower are first cooled to temperatures of a few millikelvin in a magneto-optical trap (MOT) operating on the 461~nm $5s^{2}~^{1}S_{0}-5s5p~^{1}P_{1}$ transition. Atoms in the excited state, however, have a small probability of decaying into the long-lived $5s5p~^{3}P_{2}$ metastable state via the $5s4d~^{1}D_{2}$ state. Those $^{3}P_{2}$ atoms formed in weak-field-seeking states become trapped in the MOT magnetic field which therefore serves as a magnetic trap \cite{nage03}. Atoms are allowed to accumulate in this trap to build up high atom densities. Because of isotope shifts in the $^{1}S_{0}-^{1}P_{1}$ transition, $^{87}$Sr and $^{84}$Sr atoms are loaded sequentially into the magnetic trap to allow the 461~nm laser to be separately tuned for each isotope. (Sequential loading also allows the relative populations of each isotope to be varied by controlling the loading times.) After loading the magnetic trap the atoms are returned to the $^{1}S_{0}$ ground state via the $5s5p~^{3}P_{1}$ state using a repump laser operating on the $5s5p~^{3}P_{2}-5p^{2}~^{3}P_{2}$ transition at 481~nm. However, as illustrated in
Fig.~\ref{fig:atom signal},
which shows the ground-state atom signal observed as the repump laser is scanned, the presence of hyperfine structure in $^{87}$Sr results in multiple spectral features which complicates the repump process. (The observed splittings indicate that the structure is associated primarily with the hyperfine splitting of the lower $5s5p~^{3}$P$_{2}$ state.) Efficient and simultaneous repumping of all the $^{87}$Sr hyperfine states, as well as admixed $^{84}$Sr atoms, therefore requires the presence of multiple laser frequencies which are generated by broadening the laser spectrum by modulating its drive current and superposing sidebands using an EOM. Following repumping, both isotopes are simultaneously cooled to $\sim2\mu$K using a MOT operating on the 689~nm $5s^{2}~^{1}S_{0}\rightarrow5s5p~^{3}P_{1}$ intercombination line. This is accomplished using three separate laser frequencies: one laser is tuned to the $^{1}S_{0}\rightarrow^{3}P_{1}$ transition in $^{84}$Sr, the other two to the $^{1}S_{0}~F=9/2\rightarrow^{3}P_{1}~F=11/2$ and $^{1}S_{0}~F=9/2\rightarrow^{3}P_{1}~F=9/2$ transitions in $^{87}$Sr. The atoms are then loaded into a ``pancake-shaped" optical dipole trap (ODT) formed using a 1.06~$\mu$m laser beam in the form of a flat sheet with a width of $\sim260\mu$m and thickness $\sim26\mu$m in the center of which is a ``dimple" of $\sim60\mu$m diameter created using a second laser beam incident near normal to the plane of the sheet.
\begin{figure}[hb]
\begin{center}
\includegraphics[width=10cm]{Fig3.pdf}
\end{center}
\caption{\label{fig:atom signal}
Ground-state $^{87}$Sr atom signal observed as the 481~nm repump laser is scanned over the 5s5p~$^{3}$P$_{2}$-5p$^{2}~^{3}$P$_{2}$ transition. The initial 5s5p ~$^{3}$P$_{2}$ hyperfine level associated with each $^{87}$Sr feature is indicated (see text). Data recorded using $^{84}$Sr are also included. }
\end{figure}
Spin-polarized samples are obtained by optically pumping the atoms in the
ODT. A magnetic bias field of 7.6~G is established which produces a Zeeman
splitting of $\sim650$~kHz between adjacent magnetic sublevels and a series
of $\sigma^{+}$-polarized laser pulses tuned to the $5s^{2}~^{1}S_{0}~F=9/2$
to $5s5p~^{1}P_{1}~F=9/2$ transition is applied to transfer the population to
the $M_{F}=+9/2$ magnetic sublevel. Once optical pumping is complete the
magnetic field is reduced to 1~G to preserve a quantization axis.
Detailed spectroscopic measurements\cite{whal19,whal19a} showed that optical pumping transfers $>90\%$ of the ground-state atoms to the $m_{F}=+9/2$ magnetic sublevel.
Experiments with unpolarized samples are undertaken in zero magnetic field.
The atoms in the ODT are cooled to $\sim900$~nK through evaporative cooling. Given the same initial ratio of $^{84}$Sr to $^{87}$Sr in the ODT, the final temperature of a sample of spin-polarized $^{87}$Sr atoms will always be higher than that for an unpolarized sample because of heating due to photon scattering during optical pumping. To obtain polarized and unpolarized samples with similar densities and temperatures, the ratio of $^{84}$Sr and $^{87}$Sr atoms loaded into the ODT is varied: the greater the fraction of $^{84}$Sr, the colder the final sample. Once evaporation is complete, a light pulse resonant with the $5s^{2}~^{1}S_{0}\rightarrow5s5p~^{3}P_{1}$ transition in $^{84}$Sr is applied to remove these atoms from the trap through light scattering. (The $^{84}$Sr-$^{87}$Sr isotope shift is sufficient that the laser pulse does not lead to any detectable heating of the $^{87}$Sr atoms.) The final atom number and temperature are determined by releasing the atoms from the trap and, after a fall time of $\sim7$~ms, measuring the spatial extent of the cloud using absorption imaging on the $5s^{2}~^{1}S_{0}\rightarrow5s5p~^{1}P_{1}$ transition.
Following preparation of an $^{87}$Sr sample, Rydberg excitation spectra are recorded using pulsed two-photon excitation. The first (689-nm) photon is $\sigma^{+}$ polarized (and is blue detuned $\sim14.8$~MHz from the transition to the $5s5p~^{3}P_{1}~F=11/2$ level) and the second (319-nm) photon is $\pi$ polarized and tuned to excite final $5sns~^{3}S_{1}~F=11/2$ Rydberg states. The ODT is turned off during the excitation pulses to eliminate AC Stark shifts. The product Rydberg atoms/molecules are detected through field ionization by applying voltage pulses to electrodes that surround the trap. The product electrons are directed towards, and detected by, a dual microchannel plate (MCP) detector whose output is fed to a multichannel scalar (MCS). The number of Rydberg atoms/molecules created in a single excitation pulse is kept small to avoid both saturating the detector and blockade effects, and data are accumulated following many laser pulses to build up good statistics.
\section{Theoretical method}\label{S:Theoretical method}
The interaction between the quasi-free Rydberg electron and the neutral
ground state atom is very weak and dominated by the short-ranged
Fermi pseudopotential~\cite{ferm34}.
The effective potential of an ULRM is therefore approximately given by
\begin{eqnarray}
\label{eq:pseudopotential}
V(\vec{R})&=&2\pi
\frac{\hbar^{2}}{m_{e}}a_{s}\vert\psi(\vec{R})\vert^{2}+6\pi\frac{\hbar^{2}}{m_{e}}a_{p}^{3}\vert\vec{\nabla}
\psi(\vec{R})\vert^{2},
\end{eqnarray}
where $\psi(\vec{R})$ is the electronic wavefunction, $m_{e}$ the electron
mass, and $a_{s}$ and $a_{p}$ are the $s$- and $p$-wave scattering lengths.
In the following we consider Rydberg dimers formed by $5sns$~$^3S_1$ Rydberg
states. For such a spherically symmetric charge cloud
the molecular potential is isotropic and depends only on the
internuclear distance $R$ between the Rydberg core and the ground state atom.
Therefore, the eigenstates of Rydberg dimers can be written as~\cite{thom18}
\begin{equation}
\Psi_{\nu,\Lambda,M_\Lambda}(R,\theta,\phi)
= {\cal R}_{\nu, \Lambda}(R)
Y_{\Lambda}^{M_{\Lambda}}(\theta,\phi)
\end{equation}
with $\nu$ the vibrational quantum number,
$\Lambda$ the rotational quantum number,
and $M_\Lambda$ the projection of $\vec{\Lambda}$ onto the magnetic field axis.
(For Rydberg $S$-states the third Euler angle becomes cyclic and the
Wigner rotation matrix reduces to a spherical harmonic
$Y_{\Lambda}^{M_{\Lambda}}$.)
The rate of excitation of a Rydberg molecule is governed by experimental factors such as laser intensity and sample density as well as by the
atomic dipole transition strength and a Franck-Condon-type factor
characterizing the overlap between the initial unbound ground-state
atom-pair wavefunction and the ULRM wavefunction. Since the
interaction between the Rydberg electron and the ground state atom
is typically very weak, the molecular potential
[Eq.~(\ref{eq:pseudopotential})] is evaluated in first-order perturbation
theory, i.e., the unperturbed electronic wavefunction of the
Rydberg atom is used in Eq.~(\ref{eq:pseudopotential}).
Moreover, the electronic transition strength depends only on the atomic
Rydberg wavefunction, in particular on the principal quantum number $n$ and
quantum defect $\delta$ of the Rydberg atom,
$\langle d \rangle^2 \sim (n-\delta)^{-3}a^{2}_{0}$,
but is independent of the molecular level to be formed.
The Franck-Condon factor governing the production rate from a particular initial two-body scattering state $\chi_{\vec{k}}^{\pm}(\vec{R})$ is given by
\begin{eqnarray}
\label{eq:Franck-Condon}
{\cal F}^{\pm}_{\nu,\Lambda}(\vec{k})
=\int d^{3}R {\cal R}_{\nu,\Lambda}(R) Y_{\Lambda}^{M_{\Lambda}}(\theta,\phi)
\chi_{\vec{k}}^{\pm* }(\vec{R})
\end{eqnarray}
where $\vec{R}$ is the relative coordinate, $\pm$ is the parity, and
$\hbar \vec{k}$ is the relative momentum of the two neighboring ground state atoms that eventually form
the Rydberg dimer through photoexcitation of one of the atoms to
a Rydberg state.
Assuming that the interaction between the two ground-state atoms is negligibly small and that the potential of the ODT is constant over the length scale of the Rydberg atom, the properly symmetrized initial two-body scattering states are given by
\begin{equation}
\label{eq:init_scat}
\chi^{\pm}_{\vec{k}}(\vec{R}) = \frac{1}{\sqrt{2}}
\left( e^{ i \vec{k} \cdot \vec{R}} \pm e^{ -i \vec{k} \cdot \vec{R}} \right)
\, .
\end{equation}
For ground-state $^{87}$Sr atoms, if the gas is spin polarized, the scattering state must have odd parity. Otherwise, we describe the ensemble as an admixture of scattering states with both parities.
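As a brief aside (our addition), squaring the states in Eq.~(\ref{eq:init_scat}) makes the role of exchange symmetry explicit:
\[
\left\vert \chi^{\pm}_{\vec{k}}(\vec{R}) \right\vert^{2} = 1 \pm \cos \left( 2 \vec{k} \cdot \vec{R} \right) ,
\]
so symmetric (bosonic) pairs are enhanced and antisymmetric (fermionic) pairs suppressed as $\vec{k} \cdot \vec{R} \to 0$, which is the microscopic origin of the bunching and antibunching encoded in $g^{(2)}(R)$.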
By assuming a Lorentzian profile, we define
the spectral excitation density for a given molecular
state $(\nu, \Lambda)$, fixed relative wave vector $\vec{k}$,
and parity $\pm$ as
\begin{equation}
\label{eq:spectrum_nu}
f^{\pm}_{\nu,\Lambda}(\vec{k},\omega) = \frac{1}{\pi}
|{\cal F}^{\pm}_{\nu,\Lambda} (\vec{k})|^2
\frac{ \Gamma/2}{(\hbar \omega +k^2/(2 \mu)-E_{\nu,\Lambda})^2 + (\Gamma/2)^2}
\end{equation}
where $E_{\nu,\Lambda}$ is the binding energy of the Rydberg molecule,
$\omega$ is the laser detuning from the resonant excitation of $5sns$ $^3S_1$
Rydberg atoms, and $\mu$ is the reduced mass which is half the $^{87}$Sr mass.
In the current setting, the experimental resolution determines
the effective width $\Gamma$ ($\sim 300$~kHz) which is much larger than
the lifetime broadening of the Rydberg molecule.
The total spectral excitation density for parity $\pm$ then follows from Eq.~(\ref{eq:spectrum_nu}) as the sum over all molecular states $(\Lambda, \nu)$ and the thermal average, at temperature $T$, over the relative momenta of the atom pairs. Assuming the system is far from quantum degeneracy, this yields
\begin{eqnarray}
\label{eq:spectrum}
f^{\pm}(\omega)
= \sum_{\nu,\Lambda} (2 \Lambda + 1)
\left( \frac{1}{2\pi \mu k_{B}T} \right)^{3/2}
\int d^3 k e^{-k^2/(2 \mu k_B T)} f^{\pm}_{\nu,\Lambda}(\vec{k},\omega) \, .
\end{eqnarray}
The factor $2 \Lambda + 1$ accounts for the degeneracy of the
$M_{\Lambda}$ levels in the Rydberg molecule for a given $\nu$ and $\Lambda$.
Typically, the energy shifts associated with rotational excitation are small
(for $n=30$ and $\nu=0$ the rotational constant is $\sim 20$~kHz)
and individual rotational levels cannot be resolved.
However, the vibrational levels ($\nu = 0,1,2$) have significantly
larger energy spacing ($\sim 30$~MHz at $n=30$ and
$\sim 4$~MHz at $n=40$) and can be resolved. In such cases, an excitation
strength for each vibrational level can be separately
determined by summing over the
thermally-averaged Franck-Condon factors
$\langle |{\cal F}^{\pm}_{\nu,\Lambda}|^2 \rangle$
from all rotational levels or approximated
by integrating $f^{\pm}(\omega)$ over a frequency window centered
at a given vibrational level
\begin{equation}
\label{eq:ex_strength}
P^{\pm}_{\nu}
= \sum_\Lambda (2\Lambda+1)
\langle |{\cal F}^{\pm}_{\nu,\Lambda}|^2 \rangle
\simeq \int_{E_{\nu,\Lambda}/\hbar-\Delta}^{E_{\nu,\Lambda}/\hbar+\Delta}
f^{\pm}(\omega) d\omega
\end{equation}
where $\Delta$ is much larger than $\Gamma$ but is small compared to the
vibrational level spacing, i.e., of the order of 1~MHz.
When the radial wavefunction of a vibrational state
is well localized at $R = R_n$,
the excitation strength for states of $\pm$ parity can be approximated as
\begin{equation}
\label{eq:ex_strength0}
P^{\pm}_\nu \simeq 4 \pi g_{\pm}^{(2)}(R_{n})
\left|
\int dR \, R^2 {\cal R}_{\nu,\Lambda}(R)
\right|^2
\end{equation}
since, for $30 \le n \le 41$, the centrifugal potential adds only
a nearly constant energy shift to the molecular potential,
so the resulting molecular wavefunctions ${\cal R}_{\nu,\Lambda}(R)$
are nearly independent of $\Lambda$.
In Eq. (\ref{eq:ex_strength0}), we have introduced
\begin{eqnarray}
g_{\pm}^{(2)}(R) &=& \frac{1}{4\pi} \sum_\Lambda (2\Lambda+1)
\left( \frac{1}{2\pi \mu k_{B}T} \right)^{3/2}
\int d^3 k e^{-k^2/(2 \mu k_B T)}
\nonumber \\
&& \times
\left|
\int d\cos\theta d\phi \, Y_{\Lambda}^{M_{\Lambda}}(\theta,\phi)
\chi_{\vec{k}}^{\pm*} (\vec{R})
\right|^2
\, .
\label{eq:g2_ini}
\end{eqnarray}
The plane waves (Eq.~\ref{eq:init_scat}) appearing in Eq.~(\ref{eq:g2_ini})
can be expanded in partial waves with well-defined
exchange symmetry (or parity),
\begin{equation}
\chi^{\pm}_{\vec{k}}(\vec{R}) = 4 \sqrt{2} \pi
\sum_{\Lambda=0}^\infty \sum_{M_\Lambda=-\Lambda}^\Lambda
i^\Lambda {\cal P}^{\pm}_\Lambda Y_{\Lambda}^{M_\Lambda, *}(\theta_k,\phi_k)
\chi_{k,\Lambda,M_\Lambda} (\vec{R})
\end{equation}
with $\theta_k, \phi_k$ the polar angles of $\vec{k}$,
\begin{equation}
\chi_{k,\Lambda,M_\Lambda} (\vec{R}) = j_\Lambda(k R)
Y_{\Lambda}^{M_\Lambda}(\theta,\phi)
\label{eq:sph_wave}
\end{equation}
(with $j_\Lambda(k R)$ a spherical Bessel function) and
\begin{equation}
{\cal P}^{\pm}_\Lambda = \frac{1}{2} \left( 1 \pm (-1)^\Lambda \right)
\end{equation}
restricts the wavefunction to even angular momenta $\Lambda$ for symmetric
$(+)$ and odd $\Lambda$ for anti-symmetric $(-)$ two-body scattering states. This yields
\begin{eqnarray}
g_{\pm}^{(2)}(R)
&=& 8 \pi \sum_\Lambda (2\Lambda+1) {\cal P}^{\pm}_\Lambda
\left( \frac{1}{2\pi \mu k_{B}T} \right)^{3/2}
\nonumber \\
&& \times
\int d k \, k^2 e^{-k^2/(2 \mu k_B T)} |j_\Lambda(k R)|^2 = 1\pm e^{-2\pi R^{2}/ \lambda^{2}_{dB}}\, .
\label{eq:g2_int}
\end{eqnarray}
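The closed form on the right-hand side of Eq.~(\ref{eq:g2_int}) can be verified numerically. The sketch below is ours: it works in reduced units $\hbar=\mu=k_{B}T=1$, in which $\lambda_{dB}=\sqrt{\pi}$ and the closed form becomes $1\pm e^{-2R^{2}}$, and performs the partial-wave sum and thermal integral directly.

```python
import numpy as np
from scipy.special import spherical_jn

# Numerical check of the partial-wave sum in reduced units hbar = mu = kB*T = 1,
# where the Maxwell weight is k^2 exp(-k^2/2) and the closed form is
# g2(R) = 1 +/- exp(-2 R^2).
def g2_partial_waves(R, parity, lmax=60, nk=4000, kmax=12.0):
    k = np.linspace(1e-6, kmax, nk)
    dk = k[1] - k[0]
    weight = k**2 * np.exp(-k**2 / 2)
    start = 1 if parity < 0 else 0   # odd Lambda for fermions (-), even for bosons (+)
    total = 0.0
    for lam in range(start, lmax, 2):
        jl = spherical_jn(lam, k * R)
        total += (2 * lam + 1) * np.sum(weight * jl**2) * dk
    return 8 * np.pi * (2 * np.pi) ** -1.5 * total

for R in (0.2, 0.7, 1.5):
    print(R, g2_partial_waves(R, +1), 1 + np.exp(-2 * R**2))
```

The sum and integral reproduce the Gaussian form to well below the percent level, confirming that the $\Lambda$-resolved contributions discussed below add up to $1\pm e^{-2\pi R^{2}/\lambda_{dB}^{2}}$.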
For a spin-polarized gas of $^{87}$Sr atoms, the excitation strength for a transition to a single localized vibrational state $\nu$ is
$P_\nu^{\rm pol} =
{\cal Q}_{\rm pol}^+ P_\nu^{+} + {\cal Q}_{\rm pol}^- P_\nu^{-} = P^{-}_{\nu}$, where ${\cal Q}_{\rm pol}^+ = 0$ and ${\cal Q}_{\rm pol}^- = 1$. For unpolarized fermions, in which all the $M_{F}$ levels are populated with equal probability, the excitation strength is $P_\nu^{\rm unpol} = {\cal Q}_{\rm unpol}^+ P_\nu^{+} + {\cal Q}_{\rm unpol}^- P_\nu^{-}$ where
\begin{equation}
{\cal Q}^{\pm}_{\rm unpol} = \frac{1}{2F+1}\left\{
\begin{array}{ll}
F & \mbox{for $+$} \\
F + 1 & \mbox{for $-$}
\end{array}
\right. \, .
\end{equation}
We can thus express the excitation strength for both polarized and unpolarized gases for excitation to a vibrational state well localized at $R=R_{n}$ as
\begin{equation}
P^{pol/unpol}_{\nu}\simeq4\pi g^{(2)}(R_{n})\left\vert\int dR~R^{2}\mathcal{R}_{\nu,\Lambda}(R)\right\vert^{2}
\label{eq:strength}
\end{equation}
where
\begin{equation}
g^{(2)}(R)={\cal Q}^{-} g^{(2)}_{-} (R) + {\cal Q}^{+}g^{(2)}_{+}(R) = 1-\epsilon e^{-2\pi R^{2}/ \lambda^{2}_{dB}}
\label{eq:cor function}
\end{equation}
is the appropriate correlation function for the sample, assuming weak interactions and thermal equilibrium far from quantum degeneracy. Here $\epsilon = {\cal Q}^{-} - {\cal Q}^{+}$. $g^{(2)}(R)$ takes the following forms: for spin-polarized fermions
\begin{equation}
\label{eq:g2_pol}
g^{(2)}(R) = g_-^{(2)}(R)
= 1 - e^{-2\pi R^{2}/\lambda_{dB}^{2}} \, ,
\end{equation}
for an unpolarized ensemble with $F=9/2$
\begin{equation}
\label{eq:g2_unpol}
g^{(2)}(R) = g_{\rm unpol}^{(2)}(R)
= 1 - 0.1\, e^{-2\pi R^{2}/\lambda_{dB}^{2}} \, ,
\end{equation}
and for a gas of spin-polarized bosons
\begin{equation}
\label{eq:g2_boson}
g^{(2)}(R) = g_+^{(2)}(R)
= 1 + e^{-2\pi R^{2}/\lambda_{dB}^{2}} \, .
\end{equation}
Equation \ref{eq:strength} is accurate for transitions to the molecular ground state ($\nu=0$) because this state is typically localized in the outer well of the potential at $R=R_{n}$ (Fig.~\ref{fig:mol potential})\cite{whal19}. Thus, measurements of
$P_{\nu=0}$ can be used to extract information on the correlation function. Since the radial integrals in Eq.~(\ref{eq:strength}) are common
for spin-polarized and unpolarized gases,
the ratio $\xi_{\nu=0} = P_{\nu=0}^{\rm pol}/P_{\nu=0}^{\rm unpol}$
can be related to the pair correlation function for spin-polarized
gases~\cite{whal19}. If it is assumed that $g^{(2)}(R)$ for the unpolarized gas is given by Eq.~\ref{eq:g2_unpol}, then $g^{(2)}(R)$ for the polarized gas is
\begin{equation}
g^{(2)}(R_{n})= \xi_{\nu=0}(1-0.1 e^{-2\pi R_{n}^{2}/\lambda^{2}_{dB}})
\label{eq:unpol _gas}
\end{equation}
and $\xi_{\nu=0}$ can be found from the ratio of the experimental signal rates for transitions in polarized and unpolarized gases after taking into account different Clebsch-Gordan coefficients and any variations in experimental parameters such as laser intensities and sample densities and temperatures~\cite{whal19}.
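As a minimal sketch of this extraction step (ours; the ratio $\xi_{\nu=0}=0.5$ used below is an invented placeholder, not a measured value):

```python
import math

# Minimal sketch of the g2 extraction: xi below is an invented placeholder,
# not a measurement from the paper.
def g2_polarized(xi, R_n, lam_dB):
    """Pair correlation of the polarized gas inferred from the signal ratio xi."""
    return xi * (1 - 0.1 * math.exp(-2 * math.pi * R_n**2 / lam_dB**2))

# When R_n is much larger than lambda_dB the correction vanishes and g2 -> xi.
print(g2_polarized(0.5, 160e-9, 200e-9))
```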
Equation~(\ref{eq:g2_int}) indicates that, depending on the temperature $T$
of the atomic ensemble, a significant number of rotational levels $\Lambda$
contribute to the observed pair correlation function.
Figure~\ref{fig:g2_L} shows the relative contributions to
$g_{+}^{(2)}(R)$ associated with states with different values of $\Lambda$
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{Fig4.pdf}
\end{center}
\caption{\label{fig:g2_L}
$g^{(2)}(R)$ for $^{84}$Sr (bosons, in black) and spin-polarized $^{87}$Sr
(fermions, in light blue/gray) as a function of $R/\lambda_{dB}$
calculated with rotational states included up to the values of $\Lambda$ indicated. For bosons
(fermions) only states with even (odd) values of $\Lambda$ contribute to
$g^{(2)}(R)$ (see text).
}
\end{figure}
as a function of $R/\lambda_{dB}$, for identical bosons.
As expected, the maximum in
$g_{+}^{(2)}(R)$ at small values of $R/\lambda_{dB}$ is
associated primarily with $\Lambda=0$ states, i.e., $s$-waves.
As $R/\lambda_{dB}$ increases, due,
for example, to an increase in sample temperature, higher-$\Lambda$ states become
accessible and become increasingly important while the $\Lambda = 0$
contribution is reduced. Indeed, for $R/\lambda_{dB} \sim 1$,
$\Lambda=2$ states, i.e., the $d$-wave, become the dominant
contribution to $g_+^{(2)}(R)$ which has by then become close to its limiting
value $g_+^{(2)}(R)=1$. Further increase in $R/\lambda_{dB}$ results in little
change of $g_+^{(2)}(R)$, although the relative contributions from
higher-$\Lambda$ states steadily grow. For spin-polarized fermions,
the $s$-wave contribution
is excluded because of anti-symmetry and $g_-^{(2)}(R)$ vanishes
for $R/\lambda_{dB} \to 0$. As $R/\lambda_{dB}$ increases, $p$-wave
and successively higher odd-order partial waves become accessible and
$g_-^{(2)}(R)$ approaches its limiting value of one.
The contributions from
various $\Lambda$ levels to the calculated excitation
spectra (Eq.~\ref{eq:spectrum_nu}) are shown in Fig.~\ref{fig:temp}.
\begin{figure}[th]
\begin{center}
\includegraphics[width=7cm]{Fig5.pdf}
\end{center}\caption{\label{fig:temp}
Excitation spectra (Eq.~\ref{eq:spectrum})
for ULRMs (black solid lines) associated with the
$5s30s$ $^3S_1$ Rydberg state calculated for a spin-polarized
$^{87}$Sr gas at the various temperatures indicated. The contributions from
each rotational level are also displayed
(red dashed line : $\Lambda=1$, green dot-dashed line : $\Lambda=3$,
and blue dotted line : $\Lambda = 5$).
}
\end{figure}
For spin-polarized fermions
the excitation spectrum is dominated by the $\Lambda = 1$ rotational state
at low temperature (1~$\mu$K). As temperature increases, the contributions
from higher excited rotational levels become non-negligible.
The peak positions of $f^-(\omega)$ for different $\Lambda$ nearly coincide
since the spacings between different rotational levels are smaller than the
thermal line broadening. Furthermore, since $\Lambda$ is preserved during the
Franck-Condon transition, the contribution to rotational energy splitting
from the centrifugal potential present in both the initial and final states
largely cancels out. The contributions of different rotational channels to the Rydberg molecule excitation spectrum were recently discussed in reference \cite{sous19}.
Unlike the case for ground-state molecules, the molecular wavefunctions for excited Rydberg dimers span multiple wells (see Figs.~\ref {fig:mol potential}, \ref{fig:wells}). We therefore explore the degree of localization of each ULRM molecular wavefunction in a specific well and how this affects the ability to use excitation spectra for excited dimers to extract information on $g^{(2)}(R)$. To this end we construct a set of pseudostates $|w_{\iota}^\eta\rangle$
$(\iota = 0,1, \cdots)$ that are eigenstates of each isolated potential
well $\eta$ $(\eta=1,2,3, \cdots$ with $\eta = 1$ the outermost well)
thereby removing the influence of the adjacent potential wells. The molecular
wavefunction can then be (approximately) viewed as a
coherent superposition of eigenstates $|w^\eta_\iota\rangle$ associated
with each isolated well. For example, the lowest energy level
$|w^{\eta=1}_{\iota=0}\rangle$ of the outermost well ($\eta=1$)
lies below the minimum of the inner potential wells $(\eta=2,3,\cdots)$
and closely approximates the true ground vibrational state $\nu=0$
of the Rydberg dimer which is well localized in the outermost well near
$R_{n,\eta=1} \simeq 1.87 (n-\delta)^2 a_{0}$ and is undistorted by the
presence of adjacent wells (see Fig.~\ref{fig:wells}).
\begin{figure}[th]
\begin{center}
\includegraphics[width=10cm,bb=50 200 550 760]{Fig6.pdf}
\end{center}
\caption{\label{fig:wells}
Molecular potentials and associated eigenwavefunctions
for $n=31$ (upper frames) and 40 (lower frames) states with $\Lambda = 0$.
In the right column the molecular wavefunctions for $\nu=0$
(dashed lines in red), $\nu =1$ (solid line in green), and
$\nu = 2$ (dot-dashed line in blue) states are plotted.
In the left column the wavefunctions
$|w^{\eta=1}_{\iota=0,1}\rangle$ of the outer-most well
and $|w^{\eta=2}_{\iota=0}\rangle$
of the next nearest well are shown (see text). Solid lines (red) are those for the outermost well
and the dashed lines (blue) those for the neighboring well.
The wave functions are multiplied by the radial coordinate $R$ and
their base lines are shifted by their eigenenergies.
}
\end{figure}
For the excited vibrational states the molecular wavefunctions are less
localized (see Figs.~\ref{fig:mol potential} and \ref{fig:wells})
and the extraction of the pair correlation function becomes more complicated.
For example, at $n=30$ the first excited state $|w^{\eta=1}_{\iota=1}\rangle$
of the outermost well is nearly degenerate with the lowest energy state
$|w^{\eta=2}_{\iota=0}\rangle$ of the second well.
Therefore, the molecular wave function for $\nu = 1$ can be approximated
by a coherent superposition of two single-well eigenstates
$c_{\eta=1}|w^{\eta=1}_{\iota=1}\rangle \mp
c_{\eta=2}|w^{\eta=2}_{\iota=0}\rangle$.
For the $\nu = 1$ wavefunction,
whose delocalized probability distribution spans two adjacent
wells, the thermally averaged Franck-Condon factor may be written
\begin{eqnarray}
\langle |{\cal F}_{\nu,\Lambda}|^2 \rangle &\propto&
\int d k \, k^2 e^{-k^2/(2 \mu k_B T)}
\left|
c_{\eta=1} j_\Lambda(k R_{n,\eta=1})
\int dR \, R^2 w^{\eta=1}_{\iota=1}(R)
\right.
\nonumber \\
&& \left.
\mp c_{\eta=2} j_\Lambda(k R_{n,\eta=2}) \
\int dR \, R^2 w^{\eta=2}_{\iota=0}(R)
\right|^2 \,
\end{eqnarray}
assuming that (for small $T$) the spherical Bessel functions for the $k$-values that contribute to the transitions are nearly constant within a single well centered
at $R = R_{n,\eta}$. Because of the node
in the wavefunction $w^{\eta=1}_{\iota=1}(R)$ located in the
$\eta=1$ well, the overlap integral
$\int dR \, R^2 w^{\eta=1}_{\iota=1}(R) $ is typically small, and thus
the Franck-Condon factor can be simplified to
\begin{eqnarray}
\langle |{\cal F}_{\nu,\Lambda}|^2 \rangle \propto
\int d k \, k^2 e^{-k^2/(2 \mu k_B T)} |j_\Lambda(k R_{n,\eta=2})|^2
\left|
c_{\eta=2} \int dR \, R^2 w^{\eta=2}_{\iota=0}(R)
\right|^2 \,
\label{eq:FC_nu1}
\end{eqnarray}
with $R_{n,\eta=2} = 1.6 (n-\delta)^2 a_{0}$.
Summing over contributions from all $\Lambda$ levels with appropriate weighting for the polarization state of the gas,
the excitation strength becomes approximately
\begin{equation}
\label{eq:ex_strength1}
P_{\nu=1} \simeq 4 \pi g^{(2)}(R_{n,\eta=2})
\left|
c_{\eta=2} \int dR \, R^2 w^{\eta=2}_{\iota=0}(R)
\right|^2 \, .
\end{equation}
With increasing $n$, the energy of $|w^{\eta=2}_{\iota=0}\rangle$ becomes
smaller than $|w^{\eta=1}_{\iota=1}\rangle$ (see Fig.~\ref{fig:wells})
and, correspondingly, the $\nu=1$ molecular state becomes
increasingly dominated by the $|w^{\eta=2}_{\iota=0}\rangle$ state.
For Franck-Condon factors with well-localized transition points
[Eq.~(\ref{eq:FC_nu1})]
the ratio $\xi_{\nu = 1} = P_{\nu = 1}^{\rm pol}/P_{\nu = 1}^{\rm unpol}$
can be used to probe the correlation function but over a range of $R$
different from that for the $\nu = 0$ states.
However, as contributions from other wells $(\eta > 2)$ become increasingly
important (for example, for higher $n$) the ratio
$\xi_{\nu=1}$
no longer probes the pair correlation locally but provides an average
of $g_-^{(2)}(R)$ over a range of $R$ weighted by $|c_{\eta}|^2$.
For even higher vibrational states (for example $\nu=2$), the contributions from
inner wells ($\eta > 2$) can no longer be neglected.
As the molecular wavefunction becomes increasingly delocalized reliable extraction of the pair correlation function becomes difficult.
\section{Results and Discussion}
\label{Results and Discussion}
\begin{figure}[b]
\begin{center}
\includegraphics[width=10cm]{Fig7.pdf}
\end{center}
\caption{\label{fig:spectra}
Experimental Rydberg excitation spectra recorded using (a,c) polarized and (b,d)
unpolarized $^{87}$Sr cold gases with $T\sim900$~nK. (a,b) are the spectra for
5s34s~$^{3}$S$_{1}$-5s$^{2}~^{1}$S$_{0}$ molecules and (c,d) are
for 5s40s~$^{3}$S$_{1}$-5s$^{2}~^{1}$S$_{0}$ molecules.
The spectra are normalized such that the peaks of the $\nu$=0 features are of
equal height. The solid lines show the predicted excitation spectra (see
text). The calculated spectra have been convolved with a Lorentzian of
300~kHz width.}
\end{figure}
Experimental Rydberg excitation spectra recorded at $n=34$ and 40 using both polarized and unpolarized cold, $T\sim900$~nK, $^{87}$Sr gases are shown in Fig.~\ref{fig:spectra}.
The spectra are normalized such that the peaks associated with the formation
of $\nu$=0 ground-state Rydberg dimers are of equal height.
The actual sizes of the
excitation features seen in different experimental runs depend on many factors
including the intensities of the excitation lasers, the trapped atom density, the laser detuning, the dipole matrix elements, the Clebsch-Gordan coefficients, and the excitation strengths $P_{\nu}$ [Eqs.~\ref{eq:ex_strength} and \ref{eq:strength}]. Within a single panel in Fig.~\ref{fig:spectra} all factors other than $P_{\nu}$ are the same, and the relative heights are proportional to $P_{\nu}$,
thereby providing a more direct
test of the calculated effective
Franck-Condon factors and their underlying dependence on the pair correlation
function. Multiple features are present in each spectrum that result from creation of different dimer vibrational states.
Figure~\ref{fig:spectra} also includes the predictions of calculations
[Eq.~(\ref{eq:spectrum})] using
the Fermi pseudopotential [Eq.~(\ref{eq:pseudopotential})]
with the effective $s$- and $p$-wave scattering lengths,
$a_{s}(k=0)=-13.3a_{0}$ and $a_{p}=9.7a_{0}$.
The positions and relative sizes of the observed features are in good general
agreement with the theoretical predictions. However, as seen in Fig.~\ref{fig:spectra} the relative sizes of the $\nu=1$ and $\nu=2$
features are strongly $n$-dependent. For $n=34$ the relative sizes
of the $\nu=1$ and $\nu=2$ features are comparable, whereas for
$n=40$ the $\nu=1$ feature is dominant.
\begin{figure}[b]
\begin{center}
\includegraphics[width=10cm]{Fig8.pdf}
\end{center}
\caption{\label{fig:n-dependence}
Measured ($\circ$) and calculated (- - -) $n$ dependence of the
production rates for a) $\nu$=1 and b) $\nu$=2 states relative to
that for $\nu$=0 states in polarized and unpolarized $T\sim900$~nK $^{87}$Sr samples.
}
\end{figure}
Figure~\ref{fig:n-dependence} shows the integrated experimental signals for transitions to the $\nu=1$ (top) and $\nu=2$ states (bottom), normalized by the integrated signals for the $\nu=0$ states, for various $n$. These ratios should equal the theoretically-calculated ratios of excitation strengths $P_{\nu=1,2}/P_{\nu=0}$. The integrated experimental signals were obtained by fitting the different features with a pseudo-Voigt profile and determining the area under the resulting curve. Interestingly, the relative excitation strengths of the $\nu=1$ and
$\nu=2$ features display very different $n$-dependences. The relative
strength of the $\nu=1$ feature decreases markedly
with decreasing $n$, whereas that of the $\nu=2$ feature
increases substantially, behavior that is well reproduced by theory.
For $\nu = 1$ states, the calculated ratio of the excitation strengths for $\nu=1$ and $\nu=0$ states can be written using
Eqs.~\ref{eq:strength} and \ref{eq:ex_strength1} as
\begin{equation}
\frac{P_{\nu=1}}{P_{\nu=0}}
=|c_{\eta=2}|^2 \frac{g^{(2)}(R_{n,\eta=2})}{g^{(2)}(R_{n,\eta=1})}
\frac{\left|
\int dR \, R^2 w^{\eta=2}_{\iota=0}(R)
\right|^2}
{\left|
\int dR \, R^2 w^{\eta=1}_{\iota=0}(R)
\right|^2} \, .
\end{equation}
The $w^{\eta=1,2}_{\iota=0}$
states represent the ground states of nearly harmonic potential wells,
which suggests similar $n$-dependences of the integrals for both $\eta=1$ and 2.
Moreover, since $R_{n,\eta=1}$ and $R_{n,\eta=2}$ are similar, the ratio of the $g^{(2)}(R)$ values remains close to unity over the present range of $n$.
The strong $n$-dependence seen in the $\nu=1$ to $\nu=0$ production ratios must thus be associated principally with the weights $|c_{\eta=2}|^2$.
Furthermore, the calculated ratios $P_{\nu=1}/P_{\nu=0}$
for polarized samples are, on average, somewhat smaller than those
for unpolarized samples. This is due to the fact that $R_{n,\eta=2}$ is slightly less than $R_{n,\eta=1}$ and thus the ratio
$g^{(2)}(R_{n,\eta=2})/g^{(2)}(R_{n,\eta=1})$ for a polarized sample is slightly less than one (see the inset in Fig.~\ref{fig:mol potential})
leading to the small decrease in $P_{\nu=1}/P_{\nu=0}$.
As discussed in the previous section
(see Fig.~\ref{fig:wells}), as $n$ increases the molecular state becomes increasingly dominated by
the $|w^{\eta=2}_{\iota=0}\rangle$ contribution and
$|c_{\eta=2}|^2$ increases. Therefore, the observed $n$-dependence
mirrors the
dominance of the $|w^{\eta=2}_{\iota=0}\rangle$ contribution to the $\nu=1$ vibrational state. For $\nu=2$, however,
the contributions from other wells ($\eta > 2$) become more significant
and the peaks in the molecular wavefunctions
shift towards smaller values of $R$, relative to the size of the atom, with increasing $n$.
This is reflected in the observed $n$-dependence of the $\nu=2$ features.
As demonstrated in earlier work that focused on the $\nu=0$ state~\cite{whal19}, pair correlation functions can be obtained from measurements of the ratio, $\xi_{\nu=0}= P_{\nu=0}^{pol}/P_{\nu=0}^{unpol}$, which can be determined from the relative molecular production rates in polarized and unpolarized samples. Ideally such measurements should be undertaken using identical samples with the exception that one is polarized, the other unpolarized. While we attempted to match the sample conditions as closely as possible for the measurements reported here, differences remain. The ratio of the measured production rates must be corrected for small differences in the intensities of the photoexcitation lasers, in laser detunings, in the sample temperatures, densities, and density distributions, as well as for the differences in the two-photon electronic transition matrix elements, i.e., Clebsch-Gordan coefficients, when creating Rydberg molecules in polarized and unpolarized gases.
Figure~\ref{fig:g2} shows the similarly determined ratios of excitation strengths, $\xi_{\nu=1}^{meas}= P_{\nu=1}^{pol,meas}/P_{\nu=1}^{unpol,meas}$, for the $\nu=1$ level in polarized and unpolarized samples. Also included in Fig~\ref{fig:g2} are the theoretically-predicted ratios of the excitation strengths, $\xi_{\nu=0}^{theory}=P_{\nu=0}^{pol,theory}/P_{\nu=0}^{unpol,theory}$ and $\xi_{\nu=1}^{theory}=P_{\nu=1}^{pol, theory}/P_{\nu=1}^{unpol,theory}$,
for the $\nu=0$ and $\nu=1$ states, respectively. For reference, Fig.~\ref{fig:g2} also shows the ratio to be expected under the simple zeroth-order ``ideal'' assumption that $\xi^{ideal}=g_{-}^{(2)}(R)/g_{unpol}^{(2)}(R)$.
As noted in earlier work~\cite{whal19}, $\xi_{\nu=0}^{theory}$ closely matches $\xi^{ideal}$. The predicted values of $\xi_{\nu=1}^{theory}$ are somewhat smaller than $\xi_{\nu=0}^{theory}$ because the contributions from potential wells other than $\eta=2$, while small, are not negligible. Nonetheless, the predicted values are in reasonable agreement with experiment, although the discrepancy seen at the largest values of $R/\lambda_{dB}$, i.e., the largest values of $n$, remains to be explained.
However, the pronounced decrease in $\xi_{\nu=1}^{meas}$ at the smaller values of $R$ provides clear evidence of the effects of antibunching, and the data demonstrate that (for $31\leq n\leq41$) measurements of the $\nu=1$ vibrational state can provide a probe of pair correlation functions at values of $R$ that are somewhat smaller than can be realized using $\nu=0$ states and where the effects of quantum statistics become increasingly important.
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm]{Fig9.pdf}
\end{center}
\caption{\label{fig:g2}
Ratios, $\xi_{\nu}$, of the ULRM excitation strengths in polarized and unpolarized samples of $^{87}$Sr as a function of $R/\lambda_{dB}$, with $R=1.87(n-\delta)^{2}a_{0}$ for $\nu=0$
states and $R=1.6(n-\delta)^{2}a_{0}$ for $\nu=1$ states.
The figure includes the results of measurements, $\xi_{\nu=1}^{meas}$, for the $\nu=1$ state together with theoretical predictions for the $\nu=0$ and 1 states, $\xi_{\nu=0}^{theory}$ and $\xi_{\nu=1}^{theory}$, and the ``ideal'' ratio $\xi^{ideal}=g_{-}^{(2)}(R)/g_{unpol}^{(2)}(R)$ (see text).
}
\end{figure}
\section{Conclusions}
\label{conclusions}
Measurements of the photoexcitation of ultralong-range Rydberg molecules
(ULRM), specifically Rydberg dimers,
can provide an {\it in~situ}
probe of pair correlations in an ultracold gas that,
with an appropriate choice of $n$, can be tuned over previously inaccessible
length scales that extend from $\sim20$ to greater than 250~nm. (Quantum gas microscopes can resolve
correlations on length scales on the order of half the wavelength of
light~\cite{mazu17,bakr10} whereas
inelastic loss from spin flips and three-body recombination probe two- and
three-body spatial correlations at shorter length scales~\cite{burt97}.)
The present approach provides a valuable new window into intermediate-range
phenomena such as the formation of halo states~\cite{koeh06,jens04}
and Efimov trimers~\cite{chin10}, and allows study of correlation functions for scattering states involving atom pairs with large s-wave scattering lengths.
Furthermore, since the time scale for molecule formation is short,
$\sim1\mu$s, the present approach is suitable for {\it in~situ}
probing of non-equilibrium
dynamics in quantum gases.
\ack
The authors thank R. G. Hulet for the loan of equipment. Research supported
by the AFOSR (FA9550-14-1-0007), the NSF (1600059), the Robert A. Welch
Foundation (C-0734 and C-1844), the FWF (Austria)(FWF-SFB041 ViCom, and
FWF-W1243). The Vienna Scientific Cluster was used for the calculations.
\providecommand{\newblock}{}
\section{Introduction}
A brain-computer interface (BCI) is a direct communication pathway that provides an intuitive interface for users, especially disabled ones, to translate their intention into commands to control external devices \cite{wolpaw2002brain}. Among various neuroimaging modalities, electroencephalography (EEG) has been widely used for developing BCI applications owing to its unobtrusiveness, low cost, and high temporal resolution \cite{wolpaw2002brain}. Researchers have attempted to develop a variety of real-world applications of EEG-based BCIs, such as spellers that allow disabled users to communicate with others. Farwell and Donchin demonstrated a P300-based BCI as the first-ever BCI speller in the 1980s \cite{donchin:1988}. Since then, BCI spellers based on visual-evoked brain responses have been improved by advances in decoding techniques. Recently, steady-state visual evoked potentials (SSVEPs), a type of neural response to repetitive visual stimulation, have attracted increasing attention and been used in high-speed BCI spellers because the acquisition of SSVEPs is relatively stable \cite{wang2008brain, vialatte:2010, gao2014visual}. The performance of SSVEP-based BCIs has been significantly improved by advances in system design, signal processing, and decoding algorithms in the past decade \cite{chen2015high}.
As SSVEPs are oscillatory fluctuations that respond to flickering visual stimuli, an SSVEP-based BCI measures the user's SSVEP and uses an algorithm to identify the corresponding stimulus based on the SSVEP data. Analyzing SSVEPs in the frequency domain has been an intuitive approach to detecting the frequency peaks in the spectral response of the SSVEP that correspond to the flickering frequencies of the stimuli. For example, power spectral density analysis (PSDA) \cite{wang2008brain}, which uses features in the frequency domain, was proposed to decode SSVEPs. Canonical correlation analysis (CCA) \cite{lin2006frequency, bin:2009b}, which compares a test trial with computer-generated SSVEP models/templates consisting of sine-cosine signals for each target \cite{wang2015computational}, was also shown to work on SSVEPs. However, these methods failed to maintain consistent performance across users because the individual differences in the SSVEP data were not taken into account \cite{wei2018subject}. Later studies exploited individualized training data, or SSVEP templates, and developed training-based SSVEP decoding schemes that better characterize individual SSVEP patterns using personalized calibration data. The training-based methods usually outperform the aforementioned calibration-free methods \cite{nakanishi2014high, nakanishi2017enhancing, nakanishi2015comparison, zhang:2013l1regularized, zhang2014frequency, zerafa2018to}.
The success of training-based SSVEP decoding has significantly boosted the efficiency of SSVEP-based BCI spellers. Nonetheless, the calibration procedure is often laborious and time-consuming, hindering practical and widespread applications of BCIs in our daily life. Several studies have attempted to adopt transfer-learning techniques to reduce individual or session variability so that a model for SSVEP can be tuned with existing data without repeated collection of calibration data before each use \cite{Wu2020transfer}. For instance, Yuan \etal and Wong \etal proposed subject-to-subject transfer-learning methods that transfer SSVEP data from existing subjects to new ones using a spatial filtering approach \cite{yuan2015enhancing, wong2020inter}. Rodrigues \etal also proposed a cross-subject transfer learning method using a Riemannian geometrical transformation \cite{Rodrigues2019-yl}. Waytowich \etal applied a convolutional neural network to train a subject-independent classifier for detecting SSVEPs \cite{Waytowich2018-ph}. In another study, Nakanishi \etal proposed a session-to-session and device-to-device transfer-learning method using spatial filtering \cite{Nakanishi2016-qa, nakanishi2019facilitating}. Suefusa \etal adopted a frequency shifting technique to synthesize calibration data at arbitrary frequencies from existing calibration data \cite{Suefusa2017-qp}.
The transfer learning approaches have succeeded either in improving classification accuracy of SSVEPs compared with calibration-free methods without any calibration process or compared with fully-calibrated methods with reduced calibration cost. However, such methods focused only on transferring data across one domain such as cross-subject or cross-session transferring. For example, the aforementioned subject-to-subject transfer learning methods implicitly assumed that their subjects, stimulus parameters, and EEG devices are identical between calibration and target data \cite{yuan2015enhancing, wong2020inter, Rodrigues2019-yl, Waytowich2018-ph}. Similarly, the cross-session and the cross-device transfer learning methods proposed in our previous studies implicitly required calibration data to be from the same subjects and stimulus parameters with target data \cite{Nakanishi2016-qa, nakanishi2019facilitating}. Table \ref{table_compare} compares and summarizes current transfer-learning schemes for SSVEP-based BCIs. It is worth noting that a generalized transfer learning approach that can handle transferring existing data across multiple domains has yet to be proposed.
\fulltable{\label{table_compare}Comparison between different transfer learning methods for SSVEP-based BCIs.}
\begin{threeparttable}
\begin{tabular}{llllcccc}
\br
Study & Year & Calibration & Transferring & \multicolumn{4}{c}{Domain transferred} \\ \cline{5-8}
& & data \tnote{*1} & approach & Subjects & Sessions & Devices & Stimuli \\
\mr
\cite{yuan2015enhancing} & 2015 & Not required & Spatial filtering & \textbf{Yes} & No & No & No \\
\cite{Nakanishi2016-qa} & 2016 & Not required & Spatial filtering & No & \textbf{Yes} & No & No \\
\cite{Suefusa2017-qp} & 2017 & Not required & Frequency shifting & No & No & No & \textbf{Yes} \\
\cite{Waytowich2018-ph} & 2018 & Not required & Convolutional & \textbf{Yes} & No & No & No \\
& & & neural network & & & & \\
\cite{Rodrigues2019-yl} & 2019 & Not required & Geometrical& \textbf{Yes} & No & No & No \\
& & & transformation & & & & \\
\cite{nakanishi2019facilitating} & 2019 & Not required & Spatial filtering & No & No & \textbf{Yes} & No \\
\cite{chiang2019cross} & 2019 & Required & Channel-wise & \textbf{Yes} & No & No & No \\
&&& projection &&&&\\
\cite{wong2020inter} & 2020 & Required & Spatial filtering & \textbf{Yes} & No & No & No \\
\br
\end{tabular}
\begin{tablenotes}\footnotesize
\item[*1] Additional calibration data from a target session.
\end{tablenotes}
\end{threeparttable}
\endfulltable
The goal of this study is to propose a generalized framework of transfer learning that can leverage SSVEP data across multiple domains including subjects, sessions, and devices toward a practical BCI application. Our preliminary study demonstrated that applying the least-squares transformation (LST) to existing SSVEP datasets acquired from other users can augment the calibration data for new users and in turn enhance decoding performance without an extra calibration process \cite{chiang2019cross}.
However, although the LST-based method could technically be employed to transfer SSVEPs across any domain such as cross-session and cross-device transferring, the effectiveness of such scenarios has not been investigated in our previous work \cite{chiang2019cross}.
Therefore, the generalizability of the LST-based method, especially in a real-world scenario where EEG data from different users are more likely recorded by different types of EEG recording systems, remains unknown. The discrepancy among recording montages has posed a grand challenge in consolidating EEG data across domains and hindered the translation of BCI technologies toward practical applications.
This study investigated the feasibility and the effectiveness of the LST-based method in transferring SSVEPs across multiple domains using the dataset acquired from multiple subjects and sessions with multiple EEG devices collected in our previous study \cite{nakanishi2019facilitating}. This work further examined and compared the characteristics of data from different subjects with and without the LST-based transferring in feature spaces.
In addition, this study also investigated the effects of parameters such as the number of data transferred via the LST on the classification accuracy.
\section{Methods}
\subsection{EEG Data}
This study used the EEG data recorded and reported in a previous study \cite{nakanishi2019facilitating}. The dataset consisted of the EEG recordings from ten healthy adults.
During the experiment, forty visual stimuli were presented on a 27-inch liquid-crystal display (LCD).
Each stimulus was modulated by a sampled-sinusoidal stimulation method with joint frequency-phase modulation (JFPM) \cite{chen2015high, nakanishi2017enhancing}. The stimulation frequencies ranged from 8 Hz to 15.8 Hz with an interval of 0.2 Hz. The initial phase values started from 0 rad and the phase interval was 0.35 $\pi$ rad.
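For concreteness, the frequency-phase table of the 40 JFPM-coded stimuli described above can be generated as follows. This NumPy sketch is illustrative only: the 60-Hz monitor refresh rate and the linear, zero-based phase indexing are assumptions not stated in this section.

```python
import numpy as np

# 40 targets: 8.0-15.8 Hz in 0.2-Hz steps; phases start at 0 rad with a
# 0.35*pi rad interval (zero-based linear indexing assumed).
n_targets = 40
freqs = 8.0 + 0.2 * np.arange(n_targets)
phases = np.mod(0.35 * np.pi * np.arange(n_targets), 2 * np.pi)

def jfpm_stimulus(f, phi, refresh=60.0, duration=1.5):
    """Sampled-sinusoidal luminance sequence for one stimulus, rendered at the
    monitor refresh rate (60 Hz is an assumed value, not stated here)."""
    t = np.arange(int(refresh * duration)) / refresh
    return 0.5 * (1 + np.sin(2 * np.pi * f * t + phi))
```

Each `(freqs[n], phases[n])` pair uniquely codes one of the 40 targets, which is what allows a single correlation-based classifier to separate them.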
In the experiment, the subjects performed two sessions of simulated online experiments \cite{nakanishi2014generating}. The task procedure in both sessions was identical except that the EEG signals were recorded with two different devices: an ActiveTwo system (BioSemi, Inc.), a high-density, laboratory-oriented system, and a Quick-30 (Q30) system (Cognionics, Inc.), a mobile, wireless system for real-life applications. Fig. \ref{device} lists the characteristics of the two devices.
The Q30 system was always tested in the first session, and the ActiveTwo system was tested in the second one to avoid the skin preparation required for the wet (gel) electrodes.
In each session, the subjects wore one of the EEG devices and performed eight blocks of simulations. In each block, the subjects were asked to gaze at one visual stimulus indicated by the stimulus program for 1.5 s at a time until all forty stimuli had been gazed at once. Therefore, the subjects performed 40 trials corresponding to the 40 stimuli in a block, and the data consisted of eight trials per stimulus from each subject.
\begin{figure}[t]
\centering
\includegraphics[scale=0.52]{images/device_spec.png}
\caption{Specifications of two EEG devices used in this study.}
\label{device}
\end{figure}
\subsection{Preprocessing}
Six channels (PO3, PO4, PO7, PO8, O1, O2) and eight channels (POz, PO3, PO4, PO7, PO8, Oz, O1, O2) of EEG signals were extracted from the recordings collected by the Q30 and the ActiveTwo systems, respectively.
The signals extracted from both devices were resampled at 256 Hz (from 500 Hz and 2048 Hz for the Q30 and the ActiveTwo system respectively) and then re-referenced to the Fz electrode. We employed a conventional sampling-rate conversion method, which includes upsampling, anti-aliasing filtering, and downsampling, to resample the data at a rational ratio between the original and the new sampling rates \cite{crochiere1979general}. This process can be done by using the \textit{resample} function in MATLAB. The re-referencing was done by simply subtracting the signals at target channels by that at the reference channel Fz.
The data were then extracted in [$L$ s, $L$ + 1.5 s], where time zero indicates the stimulus onset and $L$ indicates latency delay in the experimental environment and the human's visual system.
The latency $L$ was set to 0.17 s and 0.15 s for the Q30 and the ActiveTwo systems, respectively, according to the previous study \cite{nakanishi2019facilitating}. After epoching, the 60-Hz line noise was suppressed by applying an infinite impulse response (IIR) notch filter to each epoch.
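The preprocessing chain above (rational-ratio resampling, Fz re-referencing, latency-shifted epoching, and 60-Hz notch filtering) can be sketched with SciPy as follows. The function name and the notch quality factor are illustrative assumptions; `resample_poly` is a polyphase, anti-aliased rational-ratio resampler comparable to MATLAB's `resample`.

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly, iirnotch, filtfilt

def preprocess(raw, fs_orig, fs_new=256, fz=None, latency=0.17, epoch_len=1.5):
    """Sketch of the preprocessing described in the text.

    raw: (n_channels, n_samples) with sample 0 at the stimulus onset;
    fz: optional Fz reference trace of matching length.
    The notch quality factor Q=30 is an assumed value, not from the paper.
    """
    ratio = Fraction(fs_new, fs_orig)               # e.g. 256/500 -> 64/125
    x = resample_poly(raw, ratio.numerator, ratio.denominator, axis=-1)
    if fz is not None:                              # re-reference to Fz
        x = x - resample_poly(fz, ratio.numerator, ratio.denominator)
    start = int(latency * fs_new)                   # skip visual latency L
    epoch = x[:, start:start + int(epoch_len * fs_new)]
    b, a = iirnotch(60.0, Q=30.0, fs=fs_new)        # suppress 60-Hz line noise
    return filtfilt(b, a, epoch, axis=-1)

# e.g. Q30 data: 500 Hz -> 256 Hz with L = 0.17 s
epoch = preprocess(np.random.randn(6, 1500), fs_orig=500, latency=0.17)
```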
\subsection{TRCA-based SSVEP detection}
TRCA is a data-driven method aiming to find a spatial filter that maximizes the reproducibility in each trial during task periods \cite{nakanishi2017enhancing, tanaka2013task}. The task-related components extracted by the spatial filter obtained by TRCA have been shown to provide better SNR which can significantly improve the performance of training-based SSVEP detections \cite{nakanishi2017enhancing}. In addition, the TRCA-based method could successfully be combined with the filter bank analysis, which decomposes EEG signals into multiple sub-band components so that independent information embedded in the harmonic components can be efficiently extracted \cite{chen2015filter}.
In the procedure of the TRCA-based method with filter bank analysis, individual calibration data for the $n$-th stimulus are denoted as $\mathbf{x}_n \in \mathbb{R}^{{N_C} \times {N_S} \times {N_T}}$, $n = 1,2,...,N_F$. Here $N_C$ is the number of channels, $N_S$ is the number of sampling points in each trial, $N_T$ is the number of trials for each stimulus, and $N_F$ is the number of visual stimuli (i.e., 40 in this study). In the training phase, the calibration data are divided into $N_K$ sub-bands by a filter bank and become $\mathbf{x}^k_n \in \mathbb{R}^{{N_C} \times {N_S} \times {N_T}}$, $k = 1,2,...,N_K$. $N_K$ was set to five in this study. In the following parts of this paper, the $i$-th trial of stimulus $n$ in sub-band $k$ will be denoted as $\mathbf{x}^k_{n, i}$. Spatial filters in each sub-band are obtained to maximize the sum of the inter-trial covariances after projecting the multi-channel signals into single-channel ones with the spatial filter. Therefore, the goal is to find the channel weights $\mathbf{w}^k_n \in \mathbb{R}^{N_C}$ that maximize the term
\begin{eqnarray}
V^k_n &=& \sum_{i, j \atop {i \neq j}}^{N_T}{\mathrm{Cov}\left((\mathbf{w}^k_n)^T \mathbf{x}^k_{n, i}, (\mathbf{w}^k_n)^T \mathbf{x}^k_{n, j}\right)} \nonumber \\
&=& (\mathbf{w}^k_n)^T \left(\sum_{i, j \atop {i \neq j}}^{N_T}{\mathrm{Cov}\left( \mathbf{x}^k_{n, i}, \mathbf{x}^k_{n, j}\right)} \right) \mathbf{w}^k_n \nonumber \\
&=& (\mathbf{w}^k_n)^T \mathbf{S}^k_n \mathbf{w}^k_n.
\end{eqnarray}
Here, $\mathbf{S}^k_n$ is the sum of cross-covariance matrices between all pairs of trials of stimulus $n$ in sub-band $k$. To avoid arbitrary scaling of the weights, instead of finding the $\mathbf{w}^k_n$ that maximizes $V^k_n$ alone, a constraint term is needed:
\begin{eqnarray}
C^k_n &=& \sum_{i}^{N_T}{\mathrm{Var}\left((\mathbf{w}^k_n)^T \mathbf{x}^k_{n, i}\right)} \nonumber \\
&=& (\mathbf{w}^k_n)^T \left(\sum_{i}^{N_T}{\mathrm{Cov}\left( \mathbf{x}^k_{n, i}\right)}\right) \mathbf{w}^k_n \nonumber \\
&=& (\mathbf{w}^k_n)^T \mathbf{Q}^k_n \mathbf{w}^k_n \nonumber \\
&=& 1.
\end{eqnarray}
Finally, the weights can be calculated as:
\begin{eqnarray}
\mathbf{w}^{k}_n &=& \mathop{\rm argmax}\limits_{\mathbf{w}} \frac{V^k_n}{C^k_n} \nonumber \\
&=& \mathop{\rm argmax}\limits_{\mathbf{w}} \frac{\mathbf{w}^T \mathbf{S}^k_n\mathbf{w}}{\mathbf{w}^T \mathbf{Q}^k_n \mathbf{w}}. \label{argmaxw}
\end{eqnarray}
The solution of equation \ref{argmaxw} is the eigenvector of the matrix $(\mathbf{Q}^k_n)^{-1}\mathbf{S}^k_n$ associated with the largest eigenvalue.
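The construction of $\mathbf{w}^k_n$ for one stimulus and sub-band can be sketched as a generalized eigenproblem in NumPy/SciPy. This is an illustrative reimplementation under stated assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import eig

def trca_weights(x):
    """TRCA spatial filter for one stimulus/sub-band (illustrative sketch).

    x: (n_trials, n_channels, n_samples) calibration epochs. Returns the w
    maximizing w^T S w / w^T Q w, i.e. the leading generalized eigenvector of
    (S, Q), where S sums cross-covariances over all trial pairs (i != j) and
    Q sums the within-trial covariances.
    """
    n_trials, n_ch, _ = x.shape
    xc = x - x.mean(axis=-1, keepdims=True)   # center each trial and channel
    S = np.zeros((n_ch, n_ch))
    Q = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        Q += xc[i] @ xc[i].T
        for j in range(n_trials):
            if i != j:
                S += xc[i] @ xc[j].T
    vals, vecs = eig(S, Q)                    # generalized eigenproblem
    return np.real(vecs[:, np.argmax(np.real(vals))])

# Toy check: a 10-Hz component shared across trials on channel 0 only.
rng = np.random.default_rng(0)
src = np.sin(2 * np.pi * 10 * np.arange(256) / 256)
x = 0.1 * rng.standard_normal((4, 3, 256))
x[:, 0] += src
w = trca_weights(x)
```

The resulting filter weights the channel carrying the trial-reproducible component far more heavily than the noise-only channels.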
In ensemble TRCA, an extended version of TRCA, the final spatial filters for each sub-band $\mathbf{w}^k\in \mathbb{R}^{{N_C} \times {N_F}}$ are obtained by concatenating the weights for all stimuli:
\begin{eqnarray}
\mathbf{w}^k = \left[ \mathbf{w}^k_1, \mathbf{w}^k_2, \dots, \mathbf{w}^k_{N_F} \right].
\end{eqnarray}
After obtaining the spatial filters, individual templates $\mathbf{\bar{x}}^k_n \in \mathbb{R}^{{N_C} \times {N_S}}$ are prepared. The training trials for the $n$-th stimulus in sub-band $k$ are averaged across trials as:
\begin{eqnarray}
{\mathbf{\bar{x}}}^k_n = \frac{1}{N_T} \sum_{i}^{N_T} \mathbf{x}^k_{n, i}.
\end{eqnarray}
In the testing phase, single-trial testing data $\hat{\mathbf{x}} \in \mathbb{R}^{{N_C} \times {N_S}}$ are first pre-processed by the filter banks to be decomposed into $N_K$ sub-bands as well. Then, the spatial filters $\mathbf{w}^k$ obtained in training phase are applied to the testing signals $\hat{\mathbf{x}}^k \in \mathbb{R}^{{N_C} \times {N_S}}$ in each sub-band. Feature values $\rho^k_n$ are calculated as correlation coefficients between the testing signals and the individual templates as
\begin{eqnarray}
\rho^k_n = r\left((\mathbf{w}^k)^T\mathbf{\hat{x}}^k, (\mathbf{w}^k)^T\mathbf{\bar{x}}^k_n\right),
\end{eqnarray}
where $r\left(\mathbf{a}, \mathbf{b}\right)$ denotes the Pearson correlation coefficient between two variables $\mathbf{a}$ and $\mathbf{b}$.
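A minimal sketch of this feature computation, assuming NumPy arrays with the shapes given above; the function name and the choice of flattening the filtered arrays before correlating are illustrative assumptions.

```python
import numpy as np

def corr_feature(w, x_test, template):
    """Pearson correlation between spatially filtered signals.

    w: (N_C, N_F) ensemble spatial filters; x_test, template: (N_C, N_S).
    The two filtered (N_F, N_S) arrays are flattened before correlating.
    """
    a = (w.T @ x_test).ravel()
    b = (w.T @ template).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```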
A weighted sum of the ensemble correlation coefficients corresponding to all the sub-bands was calculated as the final feature for target identification as:
\begin{eqnarray}
\rho_n &=& \sum_{k = 1}^{N_K} \alpha(k) \cdot \rho^k_n,
\end{eqnarray}
where $\alpha(k)$ was defined as $\alpha(k) = k^{-1.25} + 0.25$ according to \cite{chen2015filter}.
Finally, the target stimulus $\tau$ can be identified as \begin{eqnarray}
\tau = \mathop{\rm argmax}\limits_{n}\rho_n.
\end{eqnarray}
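The sub-band weighting and target identification above can be sketched as follows; the function name and feature layout are illustrative, while the weight formula $\alpha(k) = k^{-1.25} + 0.25$ is taken from the text.

```python
import numpy as np

def identify_target(rho):
    """Combine sub-band features rho shaped (N_K, N_F) with the weights
    alpha(k) = k**(-1.25) + 0.25 and return the identified stimulus index."""
    n_k = rho.shape[0]
    alpha = np.arange(1, n_k + 1, dtype=float) ** (-1.25) + 0.25
    scores = alpha @ rho  # weighted sum over sub-bands, one score per stimulus
    return int(np.argmax(scores))
```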
\subsection{LST-based cross-domain transferring}
This work proposes a method for transforming SSVEP signals from one domain to another.
Let $\mathbf{x}\in \mathbb{R}^{{N_C} \times {N_S}}$ and $\acute{\mathbf{x}}\in \mathbb{R}^{{N'_C} \times {N_S}}$ be the single-trial SSVEP data obtained in a domain and in another domain (i.e., subject, session, and/or device), respectively.
Then, we aim to find a transformation matrix $\mathbf{P} \in \mathbb{R}^{{N_C} \times {N'_C}}$ such that $\mathbf{x}(t) = \mathbf{P}\acute{\mathbf{x}}(t)+\mathbf{\epsilon}$, where $\mathbf{x}(t), \acute{\mathbf{x}}(t)$ represent the $t$-th columns of $\mathbf{x}, \acute{\mathbf{x}}$, and $\mathbf{\epsilon}\in\mathbb{R}^{N_{C}}$ is an error vector. Note that the numbers of channels $N_C$ and $N'_C$ need not be equal between the two domains.
The transformation matrix $\mathbf{P}$ can be obtained by minimizing the error term $\mathbf{\epsilon}$ in the aforementioned equation with a multivariate least-squares regression given $\mathbf{x}$ and $\acute{\mathbf{x}}$ as follows:
\begin{eqnarray}
\mathbf{P} &=& \mathop{\rm argmin}\limits_{\mathbf{p}} {\mathrm{Tr}\left[(\mathbf{x} - \mathbf{p} \acute{\mathbf{x}})(\mathbf{x} - \mathbf{p} \acute{\mathbf{x}})^{T}\right]}
\end{eqnarray}
This problem can be solved as follows:
\begin{eqnarray}
\mathbf{P} &=& \mathbf{x} \acute{\mathbf{x}}^T(\acute{\mathbf{x}} \acute{\mathbf{x}}^T)^{-1}
\end{eqnarray}
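A minimal numerical sketch of this closed-form solution; using `np.linalg.lstsq` rather than an explicit matrix inverse is a numerical-stability choice for the example, not part of the original formulation, and the names are illustrative.

```python
import numpy as np

def lst_matrix(x_target, x_source):
    """Least-squares transformation P such that x_target ≈ P @ x_source.

    x_target: (N_C, N_S); x_source: (N'_C, N_S). Equivalent to the closed
    form x x'^T (x' x'^T)^{-1} when x' x'^T is invertible.
    """
    # Solve x_source.T @ P.T ≈ x_target.T in the least-squares sense
    p_t, *_ = np.linalg.lstsq(x_source.T, x_target.T, rcond=None)
    return p_t.T
```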
Because several studies have shown that trial-averaging can significantly improve the SNR of SSVEPs compared with single-trial SSVEPs by removing background EEG activities \cite{nakanishi2014high,nakanishi2017enhancing}, we used it to improve the transferability of SSVEPs across different domains.
In the proposed method, therefore, instead of using the single-trial signals of the new domain $\mathbf{x}$, we use the signals $\mathbf{\bar{x}}$ averaged across several trials obtained from the new domain as the target of the transformation from the existing training pool. These calibration trials obtained from a new user are called transformation targets. Every trial of the existing domains in the training pool is transformed into signals $\mathbf{\underline{\acute{x}}}$ that should be comparable to the transformation target $\mathbf{\bar{x}}$, i.e., $\mathbf{\bar{x}} \approx \mathbf{\underline{\acute{x}}}_i = \mathbf{P}\acute{\mathbf{x}}_i$ ($i$ is a trial index). Finally, all trials from the new domain $\mathbf{x}$, which are used to construct the transformation targets, and all transformed trials $\mathbf{\underline{\acute{x}}}$ from the existing data pool are concatenated to form a larger training set than the one used in the conventional template-based algorithm (i.e., $\mathbf{x}$ alone).
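The augmentation procedure can be sketched as follows; fitting a separate transformation for each source trial against the averaged target is one plausible reading of the procedure, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def lst_augment(target_trials, source_trials):
    """Build an augmented training set for one stimulus via LST.

    target_trials: (n_new, N_C, N_S) calibration trials from the new domain;
    source_trials: (n_old, N'_C, N_S) trials from an existing domain.
    Returns the concatenation (n_new + n_old, N_C, N_S): the new-domain
    trials plus every source trial mapped into the new domain.
    """
    x_bar = target_trials.mean(axis=0)  # averaged transformation target
    transformed = []
    for x_src in source_trials:
        # P_i minimizing ||x_bar - P_i x_src||_F, via least squares
        p_t, *_ = np.linalg.lstsq(x_src.T, x_bar.T, rcond=None)
        transformed.append(p_t.T @ x_src)
    return np.concatenate([target_trials, np.stack(transformed)], axis=0)
```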
With the new training set, the aforementioned TRCA-based method is performed to identify target stimuli. The procedure of the LST is also illustrated in Fig. \ref{LST}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{images/LST.pdf}
\caption{The procedure of transferring SSVEPs based on the least square error transformation. $\mathbf{w}_n^k$ refers to the TRCA-based spatial filer for $n$-th stimulus in $k$-th sub-band (see Equation \ref{argmaxw}).}
\label{LST}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.14]{images/flow.pdf}
\caption{The flowchart of the preparation of the calibration data for three schemes.}
\label{Flow}
\end{figure}
\begin{figure*}[th!]
\centering
\includegraphics[width = 1.0 \textwidth]{images/bar_plot.pdf}
\caption{The averaged classification accuracy of different schemes across ten target subjects and ten cross-validation iterations at different numbers of calibration trials per stimulus. '*' indicates $p < 0.05$ of the Wilcoxon signed-rank test between two schemes.}
\label{bar_plot}
\end{figure*}
\subsection{Performance Evaluation}
To validate the efficacy of the proposed LST-based method in transferring SSVEPs, we compared the performance of detecting SSVEPs using the following three schemes (Fig. \ref{Flow}):
\begin{enumerate}
\item BASELINE: A self-decoding approach in which all the calibration data are collected from a new user (i.e., the conventional individual-template-based method).
\item Transfer without LST (w/oLST): A cross-domain transferring approach in which the calibration data consist of calibration trials from the new domain and from other domains without any transformation.
\item Transfer with LST (w/LST): A cross-domain transferring approach in which the calibration data consist of calibration trials from the new domain and from other domains transformed using LST. The transformation targets are obtained from the data obtained in the new domain.
\end{enumerate}
This study ran a series of simulations to test the performance of the proposed LST-based method as cross-domain transfer learning for an SSVEP-based BCI. A leave-one-subject-out cross-validation, in which one subject is treated as a new (i.e., target) user and all the other subjects are treated as existing (i.e., non-target) users, was employed to investigate the effectiveness of the proposed method under the cross-subject scenario. When one session of the new user was being tested, the eight trials for each stimulus were randomly divided into five trials as a calibration set and three trials as a testing set. We then trained three models, including the TRCA-based spatial filters and individualized templates, using different pools of training trials for each scheme.
In the BASELINE, 2-5 calibration trials for each of 40 stimuli from a target subject are used to form training sets. In the w/oLST, all the eight trials for each stimulus from all nine non-target subjects (72 trials in total for each stimulus) are simply merged with the training sets used in the BASELINE. In the w/LST, the data from the non-target subjects are first transformed via the LST and then merged with the training sets used in the BASELINE.
Then, the three models were evaluated in terms of SSVEP-decoding performance on the 120-trial (i.e., three trials $\times$ 40 stimuli) testing set. The w/LST scheme was further evaluated under the cross-device scenario. In this scenario, the leave-one-subject-out cross-validation was also employed, but different EEG devices were used for the target and non-target subjects.
Note that the w/oLST cannot be applied to the cross-device scenario because the numbers of electrodes of the two EEG systems are different across devices.
The random separation of the calibration/testing sets was repeated ten times. The decoding performance of each target subject was estimated as the average over the ten repetitions.
Lastly, the classification accuracy was statistically tested by factorial nonparametric permutation-based repeated-measures analyses of variance (ANOVAs) \cite{anderson:2003}. The number of permutations was set to 5,000. In the post-hoc analyses, the schemes were compared pairwise using Wilcoxon signed-rank tests.
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0 \textwidth]{images/subject_scatter.pdf}
\caption{The comparison of the log error rate of different schemes when each of the ten subjects serves as the target subject under: (a) cross-subject scenarios, and (b) cross-device scenarios. The number in each hollow dot indicates the ID of the subject serving as the target subject. The four dots starting from each hollow dot correspond to two to five calibration trials per stimulus.}
\label{scatter}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width = 1.0 \textwidth]{images/bar_plot_subjNum.pdf}
\caption{The averaged classification accuracy of different schemes across ten target subjects and ten iterations using different numbers of supplementary subjects. The number of trials per stimulus was fixed to five. '*' indicates $p < 0.05$ of the Wilcoxon signed-rank test between two schemes.}
\label{bar_plot_subjNum}
\end{figure*}
\section{Results}
Fig. \ref{bar_plot} shows, for the three schemes, the averaged SSVEP-decoding accuracy across subjects with different numbers (from two to five) of calibration trials per stimulus under the cross-subject and cross-device scenarios. In general, the w/LST-based scheme outperformed the other two schemes regardless of the number of calibration trials.
In the cross-subject scenario, a three (schemes) $\times$ four (the number of calibration trials) two-way nonparametric permutation-based repeated measures ANOVA showed significant main effects of both schemes (Q30: $p < 0.001$; ActiveTwo: $p = 0.006$) and the number of calibration trials (Q30: $p < 0.001$; ActiveTwo: $p < 0.001$). The two-way ANOVA also showed a significant interaction between schemes and the number of calibration trials ($p < 0.001$). In the cross-device scenario, when transferring data from the ActiveTwo to the Q30 system, a two (schemes) $\times$ four (the number of calibration trials per stimulus) two-way ANOVA showed significant main effects of both schemes ($p < 0.001$) and the number of calibration trials ($p < 0.001$), and a significant interaction between them ($p < 0.001$). On the other hand, when transferring from the Q30 to the ActiveTwo system, the two-way ANOVA showed a significant main effect of the number of calibration trials ($p < 0.001$), but no significant main effect of schemes ($p = 0.137$) or interaction between them ($p = 0.149$).
The post-hoc Wilcoxon signed-rank tests showed that the w/LST scheme consistently and significantly outperformed the others regardless of the number of calibration trials in the cross-subject scenario. In the cross-device scenario, when transforming the signals from the ActiveTwo system to the Q30 system, the signed-rank tests also showed that the w/LST scheme had significantly higher accuracy than the BASELINE regardless of the number of calibration trials. However, there was no statistically significant difference between w/LST and the BASELINE when transforming signals from the Q30 system to the ActiveTwo system.
Fig. \ref{scatter}(a) and (b) show the decoding performance for each subject under the cross-subject and cross-device scenarios, respectively, with different numbers of calibration trials.
The performance is represented as a logarithmic error rate in the 40-class classification.
In Fig. \ref{scatter}(a) and (b), most of the data points fall within the lower-right region (i.e., below the diagonal dashed line), which indicates that the w/LST scheme outperformed both the w/oLST and the BASELINE schemes under most circumstances. In particular, when the testing data are from the Q30 system, the w/LST scheme consistently has lower error rates than the BASELINE among nearly all subjects. When the testing data are from the ActiveTwo system, the w/LST scheme can still outperform the BASELINE among most of the subjects when the size of the transformation targets is small. Compared to the w/oLST scheme, the w/LST scheme has higher accuracy under nearly all circumstances.
\section{Discussions}
This study demonstrated the efficacy of the LST-based transfer-learning method in mitigating the variability of SSVEP data across multiple domains. Figs. \ref{bar_plot} and \ref{scatter} suggest that the LST-based method (i.e., w/LST) is capable of boosting the performance of the template-based SSVEP decoding with TRCA especially when the amount of calibration data from the target subject (new user) is insufficient. In addition, study results indicate the negative effects of using the naive transfer learning (w/oLST), compared to the standard TRCA (BASELINE) scheme. These results can be observed from the aspect of the signal characteristics.
Fig. \ref{tsne} shows the scatter plots of sample EEG data recorded with the ActiveTwo system in two schemes (w/ and w/oLST). Subject 1 served as the target subject in both cases, and the sizes of the transformation targets were two and five trials per stimulus for the upper and lower plots, respectively. The original EEG data and the EEG data transformed by LST onto the transformation targets were pooled together. All EEG trials were first averaged across channels and then processed with t-SNE \cite{maaten2008visualizing} to reduce the dimension to 2D. The plots suggest that the LST can enhance the similarity between trials of the same stimulus across all subjects and reduce the similarity across different stimuli. For better visualization, Fig. \ref{tsne} only plots the trials of the first two of the 40 stimuli (corresponding to two colors; see the figure caption).
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.55]{images/tsne_v2.pdf}
\caption{The scatter plot of EEG trials after dimension reduction with t-SNE. For ease of visualization, only trials of the first two stimuli are plotted. The triangular dots with darker colors are trials after the LST (subject 1 as the target subject), and the circular dots with lighter colors are original trials. The circles with solid or dashed lines indicate the medians and the standard deviations of trials w/ or w/oLST with corresponding colors. The Silhouette scores of w/oLST and w/LST are 0.0287 and 0.2262 when the number of calibration trials per stimulus is two, and are -0.0071 and 0.1386 when the number of calibration trials per stimulus is five.}
\label{tsne}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[scale = 0.355]{images/spectrum.pdf}
\caption{Normalized spectra of averaged EEG signals across all training trials in each scheme (subject 1 as the target subject) and testing trials.}
\label{spectrum}
\end{figure}
The improvement in similarity was also reflected in the EEG spectra. Fig. \ref{spectrum} shows the mean spectra of the averages across all EEG signals in response to the 12 Hz stimulus in the three schemes when subject 1 was the target subject. First, the peak of the spectrum of the BASELINE scheme did not appear at the target frequency when the number of calibration trials per stimulus was two (the top panel) due to the lack of training trials, while the peak became centered at 12 Hz when the number was increased to five (the bottom panel). Note that this phenomenon was reflected in the classification accuracy (Fig. \ref{scatter}b). Furthermore, the fact that the increase in training trials resulted in a more stable spectrum demonstrates the benefit provided by the w/LST scheme: because the w/LST scheme makes pooling a large number of training trials possible, the SNR can be significantly increased. On the other hand, because the w/oLST scheme simply pooled many trials with high variability, its peak at the target frequency was less prominent. This implies that only with a proper transformation of the trials of an existing domain can pooling these trials lead to positive transfer and an improved SNR. Finally, Fig. \ref{timedomain_corrcoef} shows the averaged Pearson's correlation coefficients of time-domain signals across channels between training and testing trials in all cases. The similarity in the frequency domain matched the magnitude of the correlation coefficients in the time domain. The trend in Fig. \ref{timedomain_corrcoef} also matches that of the classification accuracy (Fig. \ref{bar_plot}).
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.375]{images/timedomain_corrcoef.pdf}
\caption{Pearson’s correlation coefficients of time domain signals averaged across channels between training trials and testing trials. The correlation is computed by averaging across all target subjects.}
\label{timedomain_corrcoef}
\end{figure}
The classification results shown in Fig. \ref{bar_plot} and \ref{scatter} suggest the efficacy of the proposed LST-based method, which significantly enhanced SSVEP-decoding performance, particularly when the performance of the original model (BASELINE) was limited. While the leading-edge SSVEP-decoding method, template-based method with TRCA-based spatial filtering \cite{nakanishi2017enhancing}, struggles with time-consuming calibration sessions, the LST-based method can leverage existing data from other domains (subjects and recording devices) and improve decoding performance. As shown in the figures, when the number of template trials was limited in all four scenarios, the w/LST scheme offered high accuracy without requiring many templates.
Comparable performance was found between the conventional TRCA approach (BASELINE) and the w/LST scheme in some cases when the testing trials were from the ActiveTwo system (Fig. \ref{scatter}(a), the lower-left panel). In the cross-subject scenarios with the ActiveTwo system, when the number of calibration trials per stimulus was large enough (four and five), the performance of the BASELINE model nearly reached the ceiling, and therefore, the LST-based method could not improve the performance, suggesting that leveraging a large amount of data from other subjects has no observable benefit when newly collected individual calibration trials are sufficient. This is in line with the rationale of training-based SSVEP methods, which emphasizes the importance of individualized calibration for SSVEP decoding.
In comparison with na\"ive transferring (w/oLST), for most of the subjects, the performance of the LST-based method improved as additional calibration trials were acquired from the target user, while that of the w/oLST scheme did not. In addition, another major advantage of the LST is that the numbers and locations of the EEG channels of the new calibration data and the supplementary data can differ. In the cross-device scenario, where na\"ive data pooling is not even possible, the LST could help expand the number of training trials.
In the cross-device scenario, when the EEG signals of the target subjects were from the ActiveTwo system and the ones of the existing subjects were from the Q30 system, the increment in accuracy that the LST can bring was less than transferring within the ActiveTwo system (Fig. \ref{bar_plot}, the second and the fourth panel from the left).
This implies a limitation of the LST-based method: it still relies on good-quality supplementary data from existing domains. Nonetheless, the LST did not bring any negative impact either. In a more practical scenario, in which the dry-EEG-based Q30 is used as the recording device for a new target subject, the LST can leverage the existing data collected from other subjects using a standard wet-electrode-based EEG system in a well-controlled laboratory to improve the SSVEP-decoding accuracy (Fig. \ref{bar_plot}, the third panel from the left). In other words, if there are sufficient data collected by gel-based EEG systems in well-controlled laboratories, or even from publicly available datasets on the Internet, the LST can leverage these data together with a small number of calibration trials collected from the test subject to build a practical SSVEP BCI, significantly improving the practicality of real-world SSVEP BCIs.
In the two scenarios, the cross-subject transferring within ActiveTwo system and the cross-device transferring from the ActiveTwo to the Q30 system, the accuracy of the w/LST with two calibration trials per stimulus was equal to or higher than that of the BASELINE with five calibration trials per stimulus. Therefore, in such cases, 60\% of calibration time could be saved. Assuming a 40-command BCI speller, the proposed method could save five minutes to collect training data with a trial length of 2.5 s (Stimulation time: 1.5 s; Inter-stimulus interval: 1.0 s; Three trials per stimulus).
In addition to the simulation validating the LST-based method with different numbers of calibration trials, another simulation study that varied the number of supplementary subjects was also conducted. In this simulation, a leave-one-subject-out cross-validation was also employed, but when preparing the calibration data for the w/oLST and w/LST schemes, different numbers (1, 3, 5, 7, and 9) of other subjects were randomly selected as the supplementary subjects. For each number of supplementary subjects, the random selection was repeated ten times. Unlike the first simulation, the number of trials per stimulus was fixed to five, and the first five trials were always used to form the calibration data while the last three trials were used as testing trials.
As Fig. \ref{bar_plot_subjNum} shows, the performance of the w/LST scheme improved slightly as the number of supplementary subjects increased, while that of the w/oLST scheme did not. These results suggest that the number of supplementary subjects does not heavily affect the performance of the LST-based method, although more supplementary subjects yield slightly better performance. These results are important for practical scenarios because, in the real world, the number of supplementary subjects (i.e., existing users) is not limited, and it is important to show that the LST-based method does not rely on very specific parameters.
In a comparison of the proposed method with the existing approaches listed in Table \ref{table_compare}, our work stands out as the only generalized framework for multiple cross-domain transfer learning for SSVEP decoding. Some of the methods do not require any additional calibration data from a target session, while the others require a small amount of calibration data from a target session. In general, the methods that do not require additional data can achieve higher accuracy than the calibration-free method, but they are far inferior to fully calibrated methods such as the TRCA-based method \cite{nakanishi2019facilitating}. The methods that require calibration data, including the LST-based method, employ transfer learning to achieve better performance than fully calibrated methods. Most importantly, most of the existing transfer-learning methods can only transfer SSVEPs across one domain, such as the cross-subject, cross-session, or cross-device scenario, whereas this study validated that the LST-based method can transfer SSVEPs across multiple domains, except for cross-stimulus transferring. This indicates that any user could reach a higher accuracy than with fully calibrated methods using a small amount of calibration data collected from his/her own EEG device, even when using the system for the first time.
It is also worth noting that the classification accuracy obtained in the simulated online analysis could be generalized to actual online performance. The preprocessing pipeline including notch and band-pass filters was applied to each data epoch separately after the data were segmented. In addition, the inter-stimulus interval (ISI) was set to 1.0 s in the experiment, which has been commonly and reasonably used in previous studies with online analyses \mbox{\cite{chen2015high, nakanishi2017enhancing}}.
The main limitation of the proposed method is that it still requires a small number of calibration trials from the new user, and therefore, it is not yet a calibration-free method. In addition, when the quality of the supplementary data is worse than that of the target data, the improvement from the LST-based method is limited. In practice, however, it is more likely that the supplementary data have better signal quality, since they can be prepared in a well-controlled environment, while the data of the target user may be acquired in any general environment.
In a nutshell, the LST enables an effective consolidation of EEG data across users and devices and consistently outperforms the standard TRCA approach and the naive integration of data without LST. Our results suggest using the LST-based method should be taken into account for augmenting calibration data when using TRCA-based SSVEP spellers.
\section{Conclusions}
This study proposed a cross-domain transfer method based on the LST for transforming SSVEP data across users and devices. The experimental results demonstrated the efficacy of the LST-based method in alleviating the inter-subject and inter-device variability in SSVEPs. The LST-based method also improved the SSVEP-decoding accuracy by leveraging data from other subjects and/or devices together with a small amount of calibration data from a new subject. These findings considerably improve the practicality of real-world SSVEP BCIs.
\section*{Acknowledgement}
This work was partially supported by the US Army Research Laboratory under Cooperative Agreement W911NF-10-2-0022. T-P Jung is also supported, in part, by the US National Science Foundation (NSF) under Grant CBET-1935860. This work was also supported in part by the Higher Education Sprout Project of the National Chiao Tung University and Ministry of Education, Taiwan, the Ministry of Science and Technology, Taiwan, under MOST(NSC) 109-2222-E-009-006-MY3.
\section*{References}
\bibliographystyle{unsrt.bst} |
2005.04369 | \section{Introduction} \label{sec:Introduction}
\IEEEPARstart{W}{ith} the advent and advance of cloud computing and data science in this big data era, more and more cloud-based data-driven applications are being developed by different service providers (the data users, such as Facebook, LinkedIn and Google). Most of these applications leverage the vast amount of data collected from each individual (the data owner) to offer certain valuable services back to the corresponding individual or for other political and commercial purposes, such as friend recommendation, human activity recognition, health monitoring, targeted advertising and election prediction. However, the same set of data could be repurposed in different ways to infer certain sensitive personal information, which would jeopardize the individual's privacy.
In the recent Facebook data leak scandal (April, 2018) \cite{FacebookCambridge2018}, about 87 million Facebook users' data were collected by a Facebook quiz app (a cloud-based data-driven application) and then paired with information taken from their social media profile (including their gender, age, relationship status, location and ``likes'') without any privacy-preserving operations being taken other than anonymization. Thus, the data user or the other adversaries that have the access to the data can still infer certain sensitive information of each individual from his/her data, such as identity, sexual orientation and marital status.
The unprecedented data leak scandal raised the alarm about privacy concerns in cloud-based data-driven applications, which could become a major obstacle that impedes individuals from releasing their data to service providers in exchange for valuable services (the utilities).
A similar situation could happen in the patient-hospital scenario as shown in Fig.~\ref{fig:example}. Patient Alice (the data owner) would like to release her data to hospital Bob (the data user) with the premise of using it for disease A diagnosis. However, people like Eve (could be Bob), who works in the same hospital and has the access to Alice's data, could use the same data to infer certain irrelevant sensitive information about Alice, such as her disease B diagnosis.
In this case, some individuals (e.g., Facebook users or Alice) would like to release their data to receive good utilities, while on the premise that the service providers are prevented from inferring certain sensitive information from their data (e.g., identity, sexual orientation and marital status).
Therefore, it is of vital importance to develop a utility-aware privacy-preserving data releasing framework for cloud-based data-driven applications, which enables the released data to be utilized for certain premised intended purpose (utility target), without jeopardizing the corresponding data owner's certain privacy target.
\begin{figure}[tb]
\centering
\includegraphics[width=240pt]{example.pdf}
\caption{An example in patient-hospital scenario.}
\label{fig:example}
\end{figure}
Designing such general utility-aware privacy-preserving data releasing framework is rather challenging. To date, a few related approaches have been proposed \cite{sweeney2002k, kim2003multiplicative, zhang2012functional, shokri2015privacy, abadi2016deep, kung2015discriminant, diamantaras2016data, kung2017compressive, zhuang2017fripal}. However, these approaches cannot fulfil all the privacy requirements needed in the cloud-based data-driven application scenario. For example, approaches that relied on additive or multiplicative random noise perturbation \cite{kim2003multiplicative} and k-anonymity \cite{sweeney2002k} cannot handle the curse of dimensionality.
Differential privacy machine learning approaches \cite{zhang2012functional, shokri2015privacy, abadi2016deep} have been proposed to publish machine learning models while preserving the training data privacy.
In this paper, however, we consider the scenario in which the machine learning models have been trained in advance by the cloud-based service providers (the data users). The data to be protected appear as the testing data, which is beyond the scope of those approaches. Besides, \cite{hitaj2017deep} has shown that some record-level differential privacy approaches applied to the collaborative learning scenario are ineffective in dealing with inference attacks. Dimensionality-reduction-based approaches \cite{kung2015discriminant, diamantaras2016data, kung2017compressive, zhuang2017fripal, al2017ratio} have been proposed to preserve privacy while maintaining most of the utility. However, despite their good experimental performance on several public datasets, those approaches did not introduce any uncertainty to hide the sensitive information, and thus fail to provide mathematical guarantees on the privacy targets.
To address the challenges mentioned above, in this paper, we aim to solve a two-party exemplar problem. The data user (i.e., the cloud-based service provider) uses his/her domain knowledge and public domain data to train a model that provides a certain service in advance. The data owner would like to receive the service by sharing his/her own data as testing data with the data user. The data owner predefines several privacy targets (sensitive information) that he/she would like to prevent the data user from inferring from his/her data. By ``predefines'', we assume that the data owner knows what malicious inferences, and what corresponding domain knowledge and public domain data, will be utilized by the malicious data users.
In this paper, a two-step perturbation-based utility-aware privacy-preserving data releasing framework is proposed to tackle this problem.
Given certain specific utility/privacy targets (i.e., the inference problems and the corresponding domain knowledge and public domain data), our approach precisely transforms the original data into privatized data that can be successfully utilized for certain intended purpose (learning to succeed), without jeopardizing certain predefined privacy (training to fail).
The first step is a coarse-grained data perturbation method, Joint Utility/Privacy Analysis (JUPA). JUPA is a subspace-optimized projection method that combines the advantages of both DCA \cite{kung2015discriminant} (utility driven projection) and MDR \cite{diamantaras2016data} (privacy emphasized projection), and tries to find a subspace projection that optimizes for both utility and privacy targets using the knowledge learned from the public datasets.
The second step is a fine-grained data perturbation method inspired by the ``label changing'' problems (e.g., adversarial image perturbation \cite{szegedy2013intriguing, Goodfellow2014, gardner2015deep, papernot2016limitations, papernot2017practical, carlini2017towards, athalye2017synthesizing}) in the computer vision area, where in order to change the image's class membership, very small perturbations are added to the image that remain quasi-imperceptible to a human vision system. For instance, \cite{gardner2015deep} proposed a Maximum Mean Discrepancy \cite{fortet1953convergence} (MMD) statistic test related approach to make semantic change to the appearance of given images.
We propose to use an MMD-like loss function to leverage the knowledge (i.e., the discriminant distance among the classes in each privacy target) learned from the public domain dataset and precisely perturb each coarse-grain-perturbed data point into a fine-grain-perturbed data point that belongs to a randomly selected privacy target class (the data owner's secret parameter).
In the experiments, we have tested our framework on three public datasets: the Human Activity Recognition, Census Income and Bank Marketing datasets. The experimental results demonstrate that (a) JUPA is a more general utility-aware dimensionality reduction approach compared with DCA \cite{kung2015discriminant} and MDR \cite{diamantaras2016data}; (b) given a predefined privacy target, our fine-grained data perturbation approach can reduce the accuracy of the corresponding inference attack to the level of random guessing.
The rest of paper is organized as follows:
Section~\ref{sec:RelatedWork} presents the related works.
Section~\ref{sec:Preliminaries} presents the preliminaries about dimensionality reduction and maximum mean discrepancy.
Section~\ref{sec:Framework} describes our proposed utility-aware privacy-preserving data releasing framework.
Section~\ref{sec:ExperimentalEvaluation} presents the experimental evaluation.
Section~\ref{sec:Conclusion} presents the conclusion and future work.
\section{Related Work} \label{sec:RelatedWork}
A few privacy-preserving data releasing approaches have been proposed, including solutions based on cryptography \cite{erkin2012generating, nikolaenko2013privacy, bost2015machine, bonawitz2017practical}, differentially private synthetic data generation \cite{hardt2012simple, jiang2013differential, bindschaedler2017plausible, soria2017differentially}, and dimensionality reduction \cite{liu2006random, kung2015discriminant, diamantaras2016data, kung2017compressive, zhuang2017fripal, al2017ratio}. Most of the cryptography-based approaches are designed for specific applications/algorithms. For instance, \cite{nikolaenko2013privacy} developed a privacy-preserving ridge regression system that utilized additive homomorphic encryption and garbled circuits to train a ridge regression model on the encrypted data statistic shares submitted by multiple data owners. \cite{bost2015machine} proposed to use cryptographic building blocks to enable testing new samples while protecting both the ML model and the submitted samples, for three popular classification protocols: hyperplane decision, Naïve Bayes, and decision trees. Although cryptography-based approaches prevent the adversaries from performing inference attacks on the encrypted data/model, they are not flexible enough to serve general data releasing purposes.
Differential privacy (DP) \cite{dwork2008differential} is one of the most popular standards for quantifying individual privacy. DP aims to protect the privacy of individuals by adding randomness to aggregate information. Differentially private synthetic data generation approaches utilize such differentially private aggregate information to generate synthetic data. For instance, \cite{jiang2013differential} proposed using differentially private component analysis for data releasing. ``Plausible Deniability'' \cite{bindschaedler2017plausible} has been proposed, achieved by applying a privacy test after generating the synthetic data. The generative model proposed in \cite{bindschaedler2017plausible} is a probabilistic model which captures the joint distribution of features based on correlation-based feature selection (CFS) \cite{hall1999correlation}. \cite{hardt2012simple} proposed an algorithm which combines the multiplicative weights approach and the exponential mechanism for differentially private data release. \cite{soria2017differentially} proposed a micro-aggregation \cite{domingo2005ordinal} based differentially private data releasing approach which reduces the noise required by differential privacy based on $k$-anonymity. Although DP-based approaches provide strong guarantees on individuals' privacy, they do not take any utility targets into account in designing their privacy-preserving data releasing mechanisms.
The dimensionality reduction approaches provide a promising way to irreversibly transform the original data, and publish the transformed data for general usage. \cite{liu2006random} proposed to use a random projection matrix to project the original data to a lower dimensional space. However, the random projection method mainly focuses on the privacy targets without considering the utility targets, which downgrades its utility performance. A few dimensionality reduction based privacy-preserving approaches focusing on maintaining the utility have been proposed \cite{kung2015discriminant, diamantaras2016data, kung2017compressive, zhuang2017fripal, al2017ratio}. For instance, \cite{kung2015discriminant} proposed to use Discriminant Component Analysis (DCA), a supervised version of Principal Component Analysis (PCA), to project the data into a lower dimensional space that maximizes the discriminant power for specific targets. Since DCA mainly focuses on the utility target, it maintains the utility while only somewhat preserving the privacy, thanks to the information loss of the dimensionality reduction; it cannot control or adjust the projection matrix in terms of the privacy target. \cite{diamantaras2016data} proposed the Multi-class Discriminant Ratio (MDR), which projects the data based on a pair of classification targets: (a) a privacy-insensitive and (b) a privacy-sensitive target. RUCA \cite{al2017ratio} improves MDR by providing more flexibility to adjust the trade-off between utility and privacy. However, these approaches do not introduce any uncertainty/randomness to hide the sensitive information, and thus fail to provide mathematical guarantees on the privacy targets.
\section{Preliminaries} \label{sec:Preliminaries}
\subsection{Dimensionality Reduction via Eigenvalue Decomposition} \label{sec:SPED}
An important component of our framework is a supervised dimensionality reduction technique (i.e., one that relies on data labels). Consider a dataset with $N$ training samples $\{x_{1}, x_{2}, \dots, x_{N}\}$, where each sample has $M$ features ($x_{i} \in \mathbb{R}^{M}$). Since the same dataset could be utilized in different classification problems, each classification problem $c$ has a unique set of labels $L^{c}_{i}$ associated with the corresponding training samples $x_{i}$. Without loss of generality, we assume the dataset could be utilized for a single utility target $U$ and a single privacy target $P$. Then, each training sample $x_{i}$ has two labels, $L^{u}_{i} \in \{1, 2, \dots, L^{u}\}$ and $L^{p}_{i} \in \{1, 2, \dots, L^{p}\}$, where $L^{u}$ and $L^{p}$ are the numbers of classes of the utility target and the privacy target, respectively.
Based on Fisher's linear discriminant analysis \cite{fisher1936use, mika1999fisher}, given a classification problem, the within-class scatter matrix of its training samples contains most of the ``noise information'', while the between-class scatter matrix of its training samples contains most of the ``signal information''.
We define the within-class scatter matrix and the between-class scatter matrix for the utility target as follows:
\begin{equation} \label{eq:S_U1}
S_{W_{U}} = \sum_{l=1}^{L^{u}} \bigg( \sum_{i=1}^{N_{l}^{u}} x_{i}x_{i}^{T}- N_{l}^{u} \mu_{l}\mu_{l}^{T} \bigg)
\end{equation}
\begin{equation} \label{eq:S_U2}
S_{B_{U}} = \sum_{l=1}^{L^{u}} N_{l}^{u}\mu_{l}\mu_{l}^{T} - N\mu\mu^{T}
\end{equation}
where $\mu=\frac{1}{N}\sum_{i=1}^{N} x_{i}$, $\mu_{l}$ is the mean vector of all training samples belonging to class $l$, $N_{l}^{u}$ is the number of training samples belonging to class $l$ of the utility target.
Similarly, the within-class and between-class scatter matrices for the privacy target are defined as:
\begin{equation} \label{eq:S_P1}
\begin{split}
S_{W_{P}} = \sum_{l=1}^{L^{p}} \bigg( \sum_{i=1}^{N_{l}^{p}} x_{i}x_{i}^{T}- N_{l}^{p} \mu_{l}\mu_{l}^{T} \bigg)
\end{split}
\end{equation}
\begin{equation} \label{eq:S_P2}
\begin{split}
S_{B_{P}} &= \sum_{l=1}^{L^{p}} N_{l}^{p}\mu_{l}\mu_{l}^{T} - N\mu\mu^{T}
\end{split}
\end{equation}
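For concreteness, the four scatter matrices above can be computed directly from a data matrix. The sketch below (NumPy; the function name and data layout are ours, not from the paper) also makes the identity $S_W + S_B = \sum_{i} x_{i}x_{i}^{T} - N\mu\mu^{T}$ explicit.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (S_W) and between-class (S_B) scatter matrices for one
    classification target.  X is (N, M), one sample per row; labels is a
    length-N array of class labels.  By construction, S_W + S_B equals the
    total scatter  sum_i x_i x_i^T - N * mu mu^T."""
    N, M = X.shape
    mu = X.mean(axis=0)
    S_W = np.zeros((M, M))
    S_B = -N * np.outer(mu, mu)
    for l in np.unique(labels):
        X_l = X[labels == l]
        N_l, mu_l = len(X_l), X_l.mean(axis=0)
        S_W += X_l.T @ X_l - N_l * np.outer(mu_l, mu_l)   # class-l "noise" term
        S_B += N_l * np.outer(mu_l, mu_l)                 # class-l "signal" term
    return S_W, S_B
```

Calling this once with the utility labels and once with the privacy labels yields $(S_{W_{U}}, S_{B_{U}})$ and $(S_{W_{P}}, S_{B_{P}})$.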
Let $W$ be an $M \times K$ projection matrix, where $K < M$. Given a testing sample $x$, $\hat{x}=x^{T} W$ is its subspace projection. Our framework combines the
advantages of two eigenvalue decomposition based dimensionality reduction techniques: DCA \cite{kung2015discriminant} (utility driven projection) and MDR \cite{diamantaras2016data} (privacy emphasized projection).
\subsubsection{Discriminant Component Analysis (DCA)} \label{sec:DCA}
DCA \cite{kung2015discriminant} searches for the projection matrix $W \in \mathbb{R}^{M \times K}$ that maximizes:
\begin{equation} \label{eq:DCA_1}
\begin{split}
DCA=\frac{det(W^{T} S_{B_{U}} W)}{det(W^{T} (\bar{S} + \rho I) W)}
\end{split}
\end{equation}
where $det(\cdot)$ is the determinant operator, $\rho I$ is a small regularization term added for numerical stability, and $\bar{S}=S_{W_{U}}+S_{B_{U}}= \sum_{i=1}^{N} x_{i}x_{i}^{T} - N\mu\mu^{T}$.
The optimal solution to this problem can be derived from the first $K$ principal generalized eigenvectors of the matrix pencil $(S_{B_{U}}, \bar{S} + \rho I)$.
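Assuming the scatter matrices are symmetric and the regularized denominator is positive definite, the principal generalized eigenvectors of a matrix pencil can be obtained with `scipy.linalg.eigh`. The sketch below (function names ours) solves the DCA pencil; the same helper applies to any pencil of this form, e.g. MDR's $(S_{B_{U}}, S_{B_{P}} + \rho I)$.

```python
import numpy as np
from scipy.linalg import eigh

def pencil_projection(A, B, K):
    """First K principal generalized eigenvectors of the matrix pencil
    (A, B), i.e. A v = lambda B v with A symmetric and B symmetric
    positive definite.  scipy.linalg.eigh returns eigenvalues in
    ascending order, so the principal directions are the last K columns."""
    _, V = eigh(A, B)
    return V[:, -K:][:, ::-1]            # (M, K), leading eigenvector first

def dca_projection(S_B_U, S_bar, K, rho=1e-3):
    """DCA: pencil (S_B_U, S_bar + rho*I)."""
    M = S_B_U.shape[0]
    return pencil_projection(S_B_U, S_bar + rho * np.eye(M), K)
```

Note that `eigh` returns $B$-orthonormal eigenvectors, i.e. $W^{T}(\bar{S}+\rho I)W = I_K$.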
\subsubsection{Multi-class Discriminant Ratio (MDR)} \label{sec:MDR}
MDR \cite{diamantaras2016data} considers both the utility target and the privacy target, which is defined as:
\begin{equation} \label{eq:MDR}
\begin{split}
MDR=\frac{det(W^{T} (S_{B_{U}}) W)}{det(W^{T} (S_{B_{P}} + \rho I) W)}
\end{split}
\end{equation}
where $\rho I$ is a small regularization term added for numerical stability.
The optimal solution to MDR can be derived from the first $K$ principal generalized eigenvectors of the matrix pencil $(S_{B_{U}}, S_{B_{P}} + \rho I)$.
\subsection{Maximum Mean Discrepancy (MMD)} \label{sec:MMD}
The Maximum Mean Discrepancy \cite{fortet1953convergence} (MMD) statistic has been proposed to test whether two distributions $p$ and $q$ are different based on samples drawn from each of them. In this work, our fine-grained data perturbation utilizes an MMD-like loss function inspired by the kernel-MMD solution \cite{gretton2007kernel}. Let $p$ and $q$ be two distributions defined on a domain $\mathcal{X}$. Given observations $X := \{x_1, x_2, \dots, x_m\}$ and $Y := \{y_1, y_2, \dots, y_n\}$, drawn i.i.d. from $p$ and $q$ respectively, the kernel-MMD statistic \cite{gretton2007kernel} is defined as:
\begin{equation} \label{eq:KMMD}
\begin{split}
MMD[\mathcal{F}, X, Y] =& \Big\| \frac{1}{m}\sum_{i=1}^{m} \phi(x_{i}) - \frac{1}{n}\sum_{i=1}^{n} \phi(y_{i}) \Big\|_{\mathcal{H}} \\
=&\Big[\frac{1}{m^{2}}\sum_{i, j=1}^{m} k(x_i, x_j) - \frac{2}{mn}\sum_{i, j=1}^{m, n} k(x_i, y_j) \\
&+ \frac{1}{n^{2}}\sum_{i, j=1}^{n} k(y_i, y_j)\Big]^{\frac{1}{2}}
\end{split}
\end{equation}
where $\mathcal{F}$ is a unit ball in a universal RKHS $\mathcal{H}$, defined on the compact metric space $\mathcal{X}$, with associated kernel $k(\cdot, \cdot)$, and $\phi(x)=k(x, \cdot)$. $MMD[\mathcal{F}, X, Y] \approx 0$, if and only if $p=q$.
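A direct NumPy sketch of this (biased) empirical estimate with an RBF kernel (helper names are ours):

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    """Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mmd(X, Y, sigma=1.0):
    """Biased empirical kernel-MMD estimate between samples X (m, d) and Y (n, d)."""
    m, n = len(X), len(Y)
    val = (rbf_gram(X, X, sigma).sum() / m**2
           - 2.0 * rbf_gram(X, Y, sigma).sum() / (m * n)
           + rbf_gram(Y, Y, sigma).sum() / n**2)
    return np.sqrt(max(val, 0.0))        # clamp tiny negative round-off
```

As expected, `mmd(X, X)` is (numerically) zero, while samples from well-separated distributions give a clearly positive value.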
\begin{figure*}[tb]
\centering
\includegraphics[width=525pt]{FRMWK.pdf}
\caption{A Utility-aware Privacy-preserving Data Releasing Framework.}
\label{fig:sysview}
\end{figure*}
\section{Utility-aware Privacy-preserving Data Releasing Framework} \label{sec:Framework}
\subsection{Framework Overview} \label{sec:ProblemStatement}
{\bf \textit{Problem Statement.}} As illustrated in Fig.~\ref{fig:sysview}, our framework involves two parties: the data owner(s) and the data user(s). The data user uses public data (background knowledge) to train a machine learning model (i.e., a classification model) in advance to provide certain services (the utility targets). The data owner would like to release her private data to the data user for the purpose of the utility targets, while preventing a malicious data user from inferring certain predefined sensitive information (the privacy targets). We assume the data owner and the data user have access to a similar set of public data (background knowledge) for both the utility and privacy targets, but the data owner does not know the data user's machine learning model. Our goal is to perturb the data owner's private data $x$ into perturbed data $z$ with the knowledge of the predefined utility and privacy targets, such that the perturbed data $z$ can be utilized successfully for the intended purposes (i.e., the utility target achieves similar accuracies using either $x$ or $z$), without jeopardizing the data owner's privacy (i.e., the privacy target gets no better accuracy than random guessing using $z$). To achieve this goal, we propose a two-step data perturbation framework (Fig.~\ref{fig:sysview}).
More details about the two steps are described in Section~\ref{sec:CoarsePerturbation} and Section~\ref{sec:FinePerturbation}.
{\bf \textit{Threat Model.}} The adversaries in our framework are the malicious data users, who have access to the public data that could be utilized as the training data for a given predefined privacy target. The adversaries would like to infer the knowledge (i.e., the class) of the privacy target (i.e., a classification problem) associated with the data owner's private data, based on the corresponding perturbed data and public data (background knowledge).
For instance, as shown in Fig.~\ref{fig:sysview}, we shall assume that the predefined privacy target is a two-class (i.e., $\{P_1, P_2\}$) classification problem (the utility targets are independent of the privacy task). Let $X=[X_{P_1}, X_{P_2}]$ be the public training samples for the privacy target, where $X_{P_i}$ ($i \in \{1, 2\}$) denotes the samples associated with class $i$. Let $x$ be the data owner's private data, where $s \in \{P_1, P_2\}$ is its original (privacy target) class and $t \in \{P_1, P_2\}$ is its expected (privacy target) class after our privacy-preserving data releasing operation. The data owner publishes $z$ (i.e., the perturbed version of $x$) using our framework $F$: $z \leftarrow F(x, t, X_{P_1}, X_{P_2})$. The adversary has to use his/her approach $A(z, X_{P_1}, X_{P_2})$ to guess/infer the original (privacy task) class $s$.
\subsection{Coarse-grained Data Perturbation} \label{sec:CoarsePerturbation}
In this section, we introduce a general dimensionality reduction method, Joint Utility/Privacy Analysis (JUPA). JUPA combines the advantages of both DCA \cite{kung2015discriminant} (utility driven projection) and MDR \cite{diamantaras2016data} (privacy emphasized projection), and tries to find a subspace projection that optimizes for both utility and privacy targets using the knowledge learned from the public datasets. Our problem settings are exactly the same as described in Section~\ref{sec:SPED}. For simplicity, we shall start from a single utility/privacy target scenario. JUPA tries to find a projection matrix $W$ that maximizes the following function:
\begin{equation} \label{eq:JUPA}
\begin{split}
JUPA=\frac{det(W^{T} (S_{B_{U}} + \rho^{\prime}_{1}S_{W_{P}}) W)}{det(W^{T} (S_{W_{U}}+\rho_{1}S_{B_{P}} + \rho_{0}I) W)}
\end{split}
\end{equation}
where $det(\cdot)$ is the determinant operator, $\rho_{0}$ is a regularization parameter added for numerical stability, and $\rho_{1}$, $\rho^{\prime}_{1}$ are privacy-utility adjustment parameters.
The optimal solution to JUPA can be derived from the first $K$ principal generalized eigenvectors of the matrix pencil $(S_{B_{U}} + \rho^{\prime}_{1}S_{W_{P}}, S_{W_{U}}+\rho_{1}S_{B_{P}} + \rho_{0}I)$. After getting the projection matrix $W$, we perturb the data owner's private data $x$ and the training data matrix $X$ as $\hat{x}=x^{T} W$ and $\hat{X}=X^{T} W$.
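Assuming the regularized denominator matrix is positive definite, the JUPA pencil can be assembled and solved with `scipy.linalg.eigh`; a minimal sketch (function name ours):

```python
import numpy as np
from scipy.linalg import eigh

def jupa_projection(S_B_U, S_W_U, S_B_P, S_W_P, K,
                    rho0=1e-3, rho1=1.0, rho1p=1.0):
    """JUPA: first K principal generalized eigenvectors of the pencil
    (S_B_U + rho1p*S_W_P,  S_W_U + rho1*S_B_P + rho0*I)."""
    M = S_B_U.shape[0]
    A = S_B_U + rho1p * S_W_P
    B = S_W_U + rho1 * S_B_P + rho0 * np.eye(M)
    _, V = eigh(A, B)                    # eigenvalues in ascending order
    return V[:, -K:][:, ::-1]            # (M, K), leading eigenvector first
```

After computing $W$, the projections are simply $\hat{x}=x^{T} W$ and $\hat{X}=X^{T} W$ (with samples as rows, `X_hat = X @ W`).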
Additionally, JUPA can be generalized to multiple utility/privacy targets by including multiple corresponding scatter matrices:
\begin{equation} \label{eq:JUPA_multiple}
\begin{split}
JUPA=\frac{ det(W^{T}( \sum_{i=1}^{N_u}S_{B_{U_{i}}} + \sum_{i=1}^{N_p}\rho^{\prime}_{i}S_{W_{P_{i}}}) W)}{det(W^{T} (\sum_{i=1}^{N_u}S_{W_{U_{i}}}+\sum_{i=1}^{N_p}\rho_{i}S_{B_{P_{i}}} + \rho_{0}I) W)}
\end{split}
\end{equation}
{\bf \textit{Utility vs. ``Somewhat Privacy''.}} JUPA maintains a trade-off between the utility and ``somewhat privacy''. ``Somewhat privacy'' means that our coarse-grained perturbation approach optimizes towards privacy, but cannot provide a privacy guarantee (unlike the fine-grained perturbation in Section~\ref{sec:FinePerturbation}). On one hand, JUPA optimizes a subspace projection that maximizes the ``signal to noise'' ratio of the utility targets. On the other hand, JUPA optimizes towards two ``mappings'' for the privacy targets: a ``many-to-one'' mapping, after which data belonging to the same privacy class are near each other (tuned by $\rho^{\prime}_{1}$); and a ``one-to-many'' mapping, after which data belonging to different privacy classes are far from each other (tuned by $\rho_{1}$). By adjusting $\rho_{1}$ and $\rho^{\prime}_{1}$, JUPA can be tuned between DCA \cite{kung2015discriminant}, MDR \cite{diamantaras2016data} and RUCA \cite{al2017ratio}. For instance, if $\rho_{1}=\rho^{\prime}_{1}=0$, this projection method becomes DCA; if $\rho_{1}$ is very large and $\rho^{\prime}_{1}=0$, it becomes MDR, as the term $S_{B_{P}}$ dominates $(S_{W_{U}}+\rho_{1}S_{B_{P}} + \rho_{0}I)$; and if $\rho^{\prime}_{1}=0$, it becomes RUCA. Higher values of $\rho_{1}$ and $\rho^{\prime}_{1}$ mean more emphasis on the privacy targets.
\subsection{Fine-grained Data Perturbation} \label{sec:FinePerturbation}
In this section, we introduce a perturbation approach that gradually changes the privacy target classification label of the data owner's coarse-grained perturbed data $\hat{x}$ from its source (original) label $s$ to a randomly selected target label $t$, by adding precisely calculated noise. For simplicity, we shall assume a single 2-class ($\{P_1, P_2\}$) privacy target scenario. Besides the data owner's coarse-grained perturbed data $\hat{x}$, the other input for this approach is the coarse-grained perturbed training data matrix $\hat{X}=[\hat{X}_{P_{1}}, \hat{X}_{P_{2}}]$, where $\hat{X}$ is split into two parts: $\hat{X}^{G}=[\hat{X}^{G}_{P_{1}}, \hat{X}^{G}_{P_{2}}]$ and $\hat{X}^{V}=[\hat{X}^{V}_{P_{1}}, \hat{X}^{V}_{P_{2}}]$. $\hat{X}^{G}$ is the ``ground truth'' training data matrix used to gradually ``train'' the fine-grained perturbed data. $\hat{X}^{V}$ is the ``verification'' training data matrix used to verify the current label of the input data $\hat{x}$ and of the intermediate perturbed data.
Given the coarse-perturbed private data $\hat{x}$, we start by randomly selecting a target label $t \in \{P_1, P_2\}$ for $\hat{x}$, and use the following function to decide its current (source) label $s \in \{P_1, P_2\}$:
\begin{equation} \label{eq:labeling}
\begin{split}
s=&label(\hat{x})=\argminB_{\{l : l \in \{P_{1}, P_{2}\}\}} \big(\frac{1}{|\hat{X}^{V}_{l}|}\sum_{\hat{x}_{i} \in \hat{X}^{V}_{l}} \phi(\hat{x}_{i}) - \phi(\hat{x}) \big)^{2}\\
=&\argminB_{\{l : l \in \{P_{1}, P_{2}\}\}} \frac{1}{|\hat{X}^{V}_{l}|^{2}}\sum_{\hat{x}_{i}, \hat{x}_{j} \in \hat{X}^{V}_{l}} k(\hat{x}_{i}, \hat{x}_{j}) \\
&- \frac{2}{|\hat{X}^{V}_{l}|}\sum_{\hat{x}_{i} \in \hat{X}^{V}_{l}} k(\hat{x}_{i}, \hat{x}) + k(\hat{x}, \hat{x}) \\
=&\argminB_{\{l : l \in \{P_{1}, P_{2}\}\}} \frac{1}{|\hat{X}^{V}_{l}|^{2}}\sum_{\hat{x}_{i}, \hat{x}_{j} \in \hat{X}^{V}_{l}} k(\hat{x}_{i}, \hat{x}_{j}) \\
&- \frac{2}{|\hat{X}^{V}_{l}|}\sum_{\hat{x}_{i} \in \hat{X}^{V}_{l}} k(\hat{x}_{i}, \hat{x})
\end{split}
\end{equation}
Our approach perturbs $\hat{x}$ in an iterative fashion. Let $z_{i}$ be the $i$th ($i=1, 2, \dots$) intermediate perturbed data. Then, our iterative data sanitization function is defined as:
\begin{equation} \label{eq:sanitization}
\begin{split}
z_{0}=&\hat{x} \\
z_{i}=&z_{i-1}+\theta(z_{i-1}) \ \ (i=1, \ 2, \ \dots)
\end{split}
\end{equation}
where $\theta(z_{i})$ is the noise vector being added to $z_{i}$. The starting noise vector $\theta(z_{0})$ could be initiated as a zero vector or a random vector.
In order to compute $\theta(z_{i})$ in each iteration, inspired by the kernel-MMD solution \cite{gretton2007kernel} described in Section~\ref{sec:MMD}, we define a loss function as:
\begin{equation} \label{eq:loss}
\begin{split}
L(\theta(z_{i}))=&\big(\frac{1}{n_t}\sum_{\hat{x}_{i} \in \hat{X}^{G}_{t}} \phi(\hat{x}_{i}) - \phi(z_{i}) \big)^{2} \\
& - \big(\frac{1}{n_s}\sum_{\hat{x}_{i} \in \hat{X}^{G}_{s}} \phi(\hat{x}_{i}) - \phi(z_{i}) \big)^{2} + \frac{\lambda}{2} \|\theta(z_{i})\|_{2}^{2} \\
=&\frac{1}{n_t^{2}}\sum_{\hat{x}_{i}, \hat{x}_{j} \in \hat{X}^{G}_{t}} k(\hat{x}_{i}, \hat{x}_{j}) - \frac{1}{n_s^{2}}\sum_{\hat{x}_{i}, \hat{x}_{j} \in \hat{X}^{G}_{s}} k(\hat{x}_{i}, \hat{x}_{j}) \\
&+ \frac{2}{n_s}\sum_{\hat{x}_{i} \in \hat{X}^{G}_{s}} k(\hat{x}_{i}, z_{i}) - \frac{2}{n_t}\sum_{\hat{x}_{i} \in \hat{X}^{G}_{t}} k(\hat{x}_{i}, z_{i}) \\
&+ \frac{\lambda}{2} \|\theta(z_{i})\|_{2}^{2}
\end{split}
\end{equation}
A large negative value of $L(\theta(z_{i}))$ indicates that $z_{i}$ belongs to the target class, and a large positive value indicates that $z_{i}$ belongs to the source class. Therefore, the value of $\theta(z_{i})$ is obtained by minimizing the loss function $L(\theta(z_{i}))$ gradually, until $label(z_{i})$ is $t$. To solve this optimization problem, we use a gradient descent approach:
\begin{equation} \label{eq:gd}
\begin{split}
\bigtriangledown_{\theta(z_{i})} L(\theta(z_{i}))=&\frac{2}{n_s}\sum_{\hat{x}_{i} \in \hat{X}^{G}_{s}} k(\hat{x}_{i}, z_{i}) \frac{\hat{x}_i-z_{i}}{\sigma^2} \\
&- \frac{2}{n_t}\sum_{\hat{x}_{i} \in \hat{X}^{G}_{t}} k(\hat{x}_{i}, z_{i})\frac{\hat{x}_i-z_{i}}{\sigma^2} \\
&+ \lambda \cdot \theta(z_{i})
\end{split}
\end{equation}
\begin{equation} \label{eq:update}
\begin{split}
\theta(z_{i})=\theta(z_{i})-\alpha \bigtriangledown_{\theta(z_{i})} L(\theta(z_{i}))
\end{split}
\end{equation}
where we use the RBF kernel $k(x_i, x_j)=e^{-\frac{\|x_i-x_j\|_2^2}{2\sigma^2}}$ as an example, and $\alpha$ is the learning rate. Finding the most appropriate kernel function is beyond the scope of this paper; several papers discuss kernel selection \cite{jebara2004multi, kim2006optimal}. In the experimental evaluation, we use the kernel function that gives the highest cross-validation accuracy on the training data.
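The labeling rule and the iterative perturbation above can be sketched end-to-end. This is our own reading of the update, not the authors' implementation: we treat $\theta$ as the cumulative noise (so $z_{i}=\hat{x}+\theta$), use the RBF kernel with the chain-rule factor of 2 on the kernel terms, and all names and hyperparameters are illustrative.

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    """Gram matrix k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def label(z, X_V, sigma=1.0):
    """Labeling rule: the class whose verification-set mean embedding is
    closest to phi(z); the constant k(z, z) term is dropped."""
    def score(Xl):
        return (rbf_gram(Xl, Xl, sigma).mean()
                - 2 * rbf_gram(Xl, z[None], sigma).mean())
    return min(X_V, key=lambda l: score(X_V[l]))

def fine_perturb(x_hat, s, t, X_G, X_V, sigma=1.0, lam=1e-3,
                 alpha=0.1, max_iter=500):
    """Gradient descent on the MMD-like loss until label(z) flips to t.
    X_G / X_V map class label -> (N_l, K) "ground truth" / "verification"
    sample matrices."""
    theta = np.zeros_like(x_hat)
    z = x_hat.copy()
    for _ in range(max_iter):
        k_s = rbf_gram(X_G[s], z[None], sigma)[:, 0]
        k_t = rbf_gram(X_G[t], z[None], sigma)[:, 0]
        grad = (2 * (k_s[:, None] * (X_G[s] - z)).mean(0) / sigma**2  # away from s
                - 2 * (k_t[:, None] * (X_G[t] - z)).mean(0) / sigma**2  # towards t
                + lam * theta)
        theta = theta - alpha * grad
        z = x_hat + theta                 # theta is the cumulative noise
        if label(z, X_V, sigma) == t:
            break
    return z
```

On two well-separated clusters, a point starting in class $P_1$ is pushed across the decision boundary and ends up labeled $P_2$ after a few dozen iterations.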
{\bf \textit{Privacy Guarantee.}} Considering the ``two-class'' scenario described in Section~\ref{sec:ProblemStatement}, we assume the adversaries' approach $A(z, X_{P_1}, X_{P_2})$ would be a kernel-based classification model trained on the publicly available dataset $[X_{P_1}, X_{P_2}]$. Inspired by semantic security \cite{goldreich2009foundations}, we give our definition of a privacy-preserving data releasing framework below.
\begin{definition} {\bf (Privacy-preserving Data Releasing Framework.)} $F$ is a privacy-preserving data releasing framework if, given a predefined privacy target and a certain adversaries' approach $A$, the advantage $Adv[A, F] = |Pr(s = P_1) - Pr(s = P_2)|$ is negligible. (It is straightforward to generalize this definition to multi-class scenarios.)
\end{definition}
\begin{theorem}
Our proposed framework is a privacy-preserving data releasing framework.
\end{theorem}
\begin{proof}
Given a predefined privacy target, an appropriate kernel function and the publicly available dataset $[X_{P_1}, X_{P_2}]$, our framework precisely perturbs the private data $x$ towards a perturbed data point $z$ associated with a randomly selected privacy target label $t$. Then, given $z$ and $[X_{P_1}, X_{P_2}]$, we have $Pr(s = P_1)=Pr(s = P_2)=\frac{1}{2}$. Therefore, $Adv[A, F] = |Pr(s = P_1) - Pr(s = P_2)|=0$ is negligible.
\end{proof}
\section{Experimental Evaluation} \label{sec:ExperimentalEvaluation}
\subsection{Datasets} \label{sec:ExperimentalEvaluationDatasets}
We have tested our proposed framework with three public datasets: Human Activity Recognition (HAR) \cite{anguita2013public}, Census Income (Census) \cite{kohavi1996scaling} and Bank Marketing (Bank) \cite{moro2014data}. Each dataset has been split into three subsets: training samples (for the perturbation approaches), testing samples (the data owner's private data), and adversary training samples (for the inference attacks).
{\bf Human Activity Recognition (HAR) \cite{anguita2013public}} \textit{HAR} contains smartphone sensor data (i.e., accelerometer data) of 30 subjects' daily activities, where each sample has 561 features and two labels: activities of daily living (ADL) and identity (ID).
In our experiments, we consider ADL as the utility target and ID as the privacy target.
Specifically, ADL has 6 types of labels: ``Walking'', ``Walking Upstairs'', ``Walking Downstairs'', ``Sitting'', ``Standing'' and ``Laying''. On the other hand, ID has 30 types of labels, since 30 subjects have contributed to this dataset. The original dataset is unbalanced. For instance, some subjects contribute more data than the others and some ADLs happen more often than the others. As such, for each different ADL-ID combination ($6 \times 30 = 180$ combinations in total), we randomly sampled 20 samples from the original dataset, resulting in 3,600 samples. The numbers of training, testing and adversary training samples are 1,440, 720 and 1,440, respectively. We kept the number of samples in all ADL-ID combinations equal in all sets.
{\bf Census Income (Census) \cite{kohavi1996scaling}} \textit{Census} has been used to predict whether someone's income exceeds \$50K/yr based on census data. We identify two labels of this dataset: ``income'', where the data user tries to classify whether someone's income is ``high'' (higher than \$50K/yr) or ``low'' (lower than or equal to \$50K/yr); and ``gender'' (i.e., male/female), which was one feature in the original dataset. Since, depending on the application, either ``income'' or ``gender'' can serve as the utility or the privacy target, we experimented with both cases.
Firstly, we removed the samples with missing features. Secondly, we turned all categorical features into numerical features using binary encoding, which resulted in 51 features. Lastly, we randomly sampled 750 samples for each income-gender combination ($2 \times 2 = 4$ combinations in total) from the original dataset, resulting in 3,000 samples. The numbers of training, testing and adversary training samples are 1,200, 600 and 1,200, respectively. As with the \textit{HAR} dataset, we kept the number of samples in all income-gender combinations equal in all sets.
{\bf Bank Marketing (Bank) \cite{moro2014data}} \textit{Bank} is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The original classification goal is to predict whether the client will subscribe to a term deposit (marketing purpose). As such, we used the marketing purpose (``marketing'') as the utility target, which is a ``yes'' or ``no'' binary classification problem. We used marital status (``marital'') as the privacy target, which was one feature in the original dataset. Since very few samples have ``unknown'' marital status, we removed those samples. Thus, ``marital'' has 3 types of labels: ``divorced'', ``married'' and ``single''. As with \textit{Census}, we turned all categorical features into numerical features using binary encoding, resulting in 31 features. We randomly sampled 410 samples for each marketing-marital combination ($2 \times 3 = 6$ combinations in total) from the original dataset, resulting in 2,460 samples. The numbers of training, testing and adversary training samples are 984, 492 and 984, respectively. We also kept the number of samples in all marketing-marital combinations equal in all sets.
\subsection{Setups}
We evaluate the performance of our two proposed perturbation approaches step-by-step, in terms of utility and privacy. In all the experiments, we utilized RBF-kernel SVMs to train the machine learning classifiers for both the utility and privacy targets. The utility classifier provides the premised valuable service, while the privacy classifier performs the adversaries' inference attack. All the experiments were run for 15 iterations. At each iteration, a 10-fold cross-validation grid search was performed to find the best set of parameters for training the utility and privacy classifiers. As discussed in the last section (Section~\ref{sec:ExperimentalEvaluationDatasets}), we evaluate our framework using three datasets and four scenarios. Given a scenario and its dataset, the evaluation metric is the accuracy of its utility/privacy classifiers. A higher accuracy of the utility classifier means better utility. A lower accuracy of the privacy classifier means less privacy leakage. The baseline (i.e., lowest accuracy, no privacy leakage) of the privacy classifier should equal the probability of random guessing, where the prediction is drawn i.i.d. from a uniform distribution.
For the coarse-grained perturbation, we compare our proposed JUPA with a full-dimensional baseline method and four existing dimensionality reduction methods: Random Projection, PCA, DCA and MDR. Moreover, we evaluate JUPA with regularization parameter $\rho_{0}=0.001$ and different combinations of the privacy-utility adjustment parameters $\rho_{1}=1,$ $10^{2},$ $10^{4}$ and $\rho^{\prime}_{1}=1,$ $10^{2},$ $10^{4}$. For the fine-grained perturbation, we set $\lambda=0.001$, $\alpha=0.1$, and use a zero vector to initialize the starting noise vector $\theta(z_{0})$.
\subsection{Experimental Results}
Table~\ref{table:HAR}, Table~\ref{table:Census}, Table~\ref{table:Census2} and Table~\ref{table:Bank} show the experimental results of the four scenarios (three datasets); the following are the main observations and conclusions drawn from them.
\subsubsection{} Considering the coarse-grained perturbation approach alone, JUPA outperforms the other DR methods in terms of the utility and ``somewhat privacy''.
Compared with PCA and random projection, DCA, MDR and JUPA provide a better balance between the utility and ``somewhat privacy'' performance, since PCA and random projection do not leverage any help from the ``label'' information.
For instance, in Table~\ref{table:HAR}, after applying random projection (coarse-grained), the accuracy of ID (privacy) dropped from 62.78\% to 13.75\% (one of the best privacy performances), while the accuracy of ADL (utility) dropped from 97.22\% to 60.28\% (one of the worst utility performances). On the contrary, after applying PCA (coarse-grained), neither the utility nor the privacy accuracy drops much (providing less ``somewhat privacy'').
Compared with DCA and MDR, JUPA provides better utility and ``somewhat privacy'' performance under certain privacy parameters. For instance, in Table~\ref{table:HAR}, when $\rho_{1}=1$ and $\rho^{\prime}_{1}=1$, JUPA (coarse-grained) provides the highest accuracy (96.11\%) of ADL (utility) among the DR methods, and also the second lowest accuracy (21.11\%, only higher than random projection) of ID (privacy). Results in the other scenarios are in line with this observation.
\subsubsection{} JUPA provides the flexibility to find a favorable trade-off between utility and privacy by tuning the privacy parameters. Based on our results, increasing $\rho_{1}$ or $\rho^{\prime}_{1}$ makes JUPA place more emphasis on preserving privacy (lowering the privacy classifier's accuracy) at the cost of a small drop in utility accuracy. For instance, in Table~\ref{table:HAR}, adjusting JUPA from $\rho_{1}=1$, $\rho^{\prime}_{1}=1$ to $\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^4$ results in a 42.78\% relative drop of the ID (privacy) accuracy (from 21.11\% to 12.08\%), while only an 8.96\% relative drop of the ADL (utility) accuracy (from 96.11\% to 87.50\%).
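Note that the quoted drops are relative (not percentage-point) changes; a quick arithmetic check against the Table~\ref{table:HAR} values:

```python
def rel_drop(before, after):
    """Relative accuracy drop, in percent, between two accuracies."""
    return (before - after) / before * 100.0

print(round(rel_drop(21.11, 12.08), 2))  # ID (privacy) drop
print(round(rel_drop(96.11, 87.50), 2))  # ADL (utility) drop
```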
\subsubsection{} Our fine-grained perturbation approach provides the privacy guarantee. For instance, in all the scenarios, after applying the fine-grained perturbation, the accuracies of the privacy targets all converge to, or near, the probability of random guessing.
\subsubsection{} In our framework, combining JUPA with the fine-grained perturbation outperforms the other options in terms of utility. For instance, in Table~\ref{table:HAR}, compared with the other DR methods, DCA, MDR and JUPA (fine-grained) provide relatively higher accuracy of ADL (utility) ($\geq$ 86.11\%), and when $\rho_{1}=1$ and $\rho^{\prime}_{1}=1$, JUPA provides the best utility accuracy (94.31\%). Results in the other scenarios are in line with this observation. The reason is that even though the fine-grained perturbation alone can guarantee privacy, applying supervised DR methods (DCA, MDR and JUPA) preserves more utility information and requires less noise to be added to the coarse-grained perturbed data to achieve the privacy guarantee.
\begin{table}[!t]
\footnotesize
\captionsetup{font=footnotesize}
\centering\caption{The Mean Accuracy Percentage Results of Human Activity Recognition Dataset with $K=5$, ADL being the utility target, and ID being the privacy target.}
\label{table:HAR}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
\multirow{2}{*}{Projection Method} & \multicolumn{2}{c|}{ADL} & \multicolumn{2}{c}{ID}\\\cline{2-5}
& Coarse & Fine & Coarse & Fine \\
\hline
Full-Dimensional & 97.22 & 66.94 & 62.78 & 3.33 \\
\hline
Random Projection & 60.28 & 57.36 & 13.75 & 3.33 \\
\hline
PCA & 84.72 & 73.33 & 30.28 & 3.75 \\
\hline
DCA & 94.58 & 93.75 & 23.61 & 3.33 \\
\hline
MDR & 91.67 & 88.75 & 22.92 & 4.58 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=1$) & 96.11 & 94.31 & 21.11 & 3.75 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=10^2$) & 95.83 & 93.47 & 20.28 & 3.61 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=10^4$) & 95.56 & 93.47 & 19.72 & 3.33 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=1$) & 94.44 & 93.33 & 20.00 & 3.33 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^2$) & 94.17 & 92.64 & 17.78 & 3.33 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^4$) & 93.75 & 92.36 & 16.67 & 3.33 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=1$) & 92.50 & 88.19 & 13.61 & 3.33 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^2$) & 89.58 & 86.39 & 12.50 & 3.33 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^4$) & 87.50 & 86.11 & 12.08 & 3.33 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\footnotesize
\captionsetup{font=footnotesize}
\centering\caption{The Mean Accuracy Percentage Results of Census Income Dataset with $K=1$, income being the utility target, and gender being the privacy target.}
\label{table:Census}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
\multirow{2}{*}{Projection Method} & \multicolumn{2}{c|}{income} & \multicolumn{2}{c}{gender}\\\cline{2-5}
& Coarse & Fine & Coarse & Fine \\
\hline
Full-Dimensional & 84.50 & 69.76 & 87.33 & 50.00 \\
\hline
Random Projection & 58.33 & 50.50 & 59.17 & 50.00 \\
\hline
PCA & 73.33 & 70.33 & 81.67 & 50.00 \\
\hline
DCA & 80.00 & 73.50 & 56.00 & 50.00 \\
\hline
MDR & 76.67 & 68.33 & 58.00 & 50.00 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=1$) & 82.50 & 75.33 & 55.50 & 50.00 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=10^2$) & 80.00 & 75.16 & 54.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=10^4$) & 78.33 & 74.33 & 54.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=1$) & 79.17 & 74.66 & 55.00 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^2$) & 77.50 & 74.00 & 54.50 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^4$) & 76.67 & 73.83 & 54.17 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=1$) & 76.00 & 73.67 & 53.17 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^2$) & 75.00 & 73.50 & 52.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^4$) & 72.00 & 66.83 & 51.17 & 50.00 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\footnotesize
\captionsetup{font=footnotesize}
\centering\caption{The Mean Accuracy Percentage Results of Census Income Dataset with $K=1$, gender being the utility target, and income being the privacy target.}
\label{table:Census2}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
\multirow{2}{*}{Projection Method} & \multicolumn{2}{c|}{gender} & \multicolumn{2}{c}{income}\\\cline{2-5}
& Coarse & Fine & Coarse & Fine \\
\hline
Full-Dimensional & 87.33 & 73.50 & 84.50 & 50.00 \\
\hline
Random Projection & 59.17 & 59.17 & 58.33 & 50.00 \\
\hline
PCA & 81.67 & 70.33 & 73.33 & 50.00 \\
\hline
DCA & 87.50 & 80.50 & 53.17 & 50.00 \\
\hline
MDR & 86.67 & 77.83 & 56.00 & 50.00 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=1$) & 88.00 & 82.50 & 57.17 & 50.00 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=10^2$) & 87.67 & 82.17 & 55.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=1$, $\rho^{\prime}_{1}=10^4$) & 87.50 & 82.17 & 55.50 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=1$) & 87.67 & 81.33 & 55.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^2$) & 86.67 & 81.17 & 54.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^4$) & 86.00 & 80.17 & 54.67 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=1$) & 87.00 & 80.33 & 54.33 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^2$) & 86.67 & 79.67 & 53.50 & 50.00 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^4$) & 85.67 & 78.67 & 52.67 & 50.00 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\footnotesize
\captionsetup{font=footnotesize}
\centering\caption{The Mean Accuracy Percentage Results of Bank Marketing Dataset with $K=1$, marketing being the utility target, and marital being the privacy target.}
\label{table:Bank}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
\multirow{2}{*}{Projection Method} & \multicolumn{2}{c|}{marketing} & \multicolumn{2}{c}{marital}\\\cline{2-5}
& Coarse & Fine & Coarse & Fine \\
\hline
Full-Dimensional & 86.38 & 69.11 & 45.73 & 34.15 \\
\hline
Random Projection & 60.57 & 54.88 & 39.23 & 33.33 \\
\hline
PCA & 71.14 & 70.73 & 41.06 & 33.33 \\
\hline
DCA & 84.76 & 78.66 & 38.01 & 33.33 \\
\hline
MDR & 71.75 & 67.48 & 36.79 & 33.33 \\
\hline
JUPA ($\rho_{1}=1.0$, $\rho^{\prime}_{1}=1.0$) & 86.38 & 81.30 & 39.63 & 33.33 \\
\hline
JUPA ($\rho_{1}=1.0$, $\rho^{\prime}_{1}=10^2$) & 86.18 & 79.67 & 38.82 & 33.33 \\
\hline
JUPA ($\rho_{1}=1.0$, $\rho^{\prime}_{1}=10^4$) & 85.37 & 78.66 & 38.41 & 33.33 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=1.0$) & 86.18 & 76.22 & 38.01 & 33.33 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^2$) & 85.98 & 75.61 & 37.60 & 33.33 \\
\hline
JUPA ($\rho_{1}=10^2$, $\rho^{\prime}_{1}=10^4$) & 85.98 & 75.41 & 36.18 & 33.33 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=1.0$) & 85.37 & 75.20 & 36.99 & 33.33 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^2$) & 84.35 & 75.00 & 36.59 & 33.33 \\
\hline
JUPA ($\rho_{1}=10^4$, $\rho^{\prime}_{1}=10^4$) & 83.13 & 74.59 & 35.77 & 33.33 \\
\hline
\end{tabular}
\end{table}
\section{Conclusion} \label{sec:Conclusion}
In this paper, we proposed a two-step perturbation-based utility-aware privacy-preserving data releasing framework. In the first step, we proposed JUPA, a supervised DR method that outperforms several existing DR methods in terms of utility and ``somewhat privacy'', and that provides the flexibility to find a favorable trade-off between utility and privacy by tuning the privacy parameters. In the second step, we proposed a fine-grained perturbation approach that guarantees protection against inference attacks on certain predefined privacy targets. In the experimental evaluation, we deployed our framework in four scenarios using three public datasets. The experimental results are in line with our expectations and demonstrate the effectiveness and practicality of our framework. Future work will include an extension of JUPA to support non-linear subspace projections, and an optimized kernel selection method for our fine-grained perturbation approach.
\section*{Acknowledgment}
This material is based on research sponsored by the DARPA Brandeis Program under agreement number N66001-15-C-4068.\footnote[1]{The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Conclusion}
\label{conclusion}
In this paper, we proposed the dual-perturbation attack, a novel threat model that produces \emph{unsuspicious adversarial examples} by leveraging the cognitive distinction between image foreground and background.
As we have shown, our attack can defeat all state-of-the-art defenses.
By contrast, the proposed defense approaches using our attack model can significantly improve robustness against unsuspicious adversarial examples, with relatively small performance degradation on non-adversarial data.
In addition, our defense approaches can achieve comparable to, or better robustness than the alternatives in the face of traditional attacks.
Our threat model and defense motivate several new research questions.
The first is whether there are more effective methods to identify foreground of images.
Second, can we further improve robustness to dual-perturbation attacks?
Finally, while we provide the first principled approach for quantifying suspiciousness, there may be effective alternative approaches for doing so.
\section{Defense against Dual-Perturbation Attacks}
\label{sec:defense_approach}
Once we are able to compute the dual-perturbation attack, we can incorporate it into conventional adversarial training paradigms for defense,
since adversarial training has been demonstrated to be highly effective in designing classification models that are robust to a given attack.
Specifically, we replace the PGD attack in the adversarial training framework proposed by~\citet{madry2018towards}, with the proposed dual-perturbation attack.
We term this approach \emph{AT-Dual}, which aims to solve the following optimization problem:
\begin{equation}
\underset{\bm{\theta}}{\min} \frac{1}{|D|} \sum_{\bm{x}, y \in D} \max _{\substack{||\bm{\delta} \circ \mathcal{F}(\bm{x})||_p \leq \epsilon_F,\\ ||\bm{\delta}\circ\mathcal{B}(\bm{x})||_p \leq \epsilon_B
}} \mathcal{L}\left(h_{\bm{\theta}} (\bm{x}+\bm{\delta}), y\right) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta} \right).
\label{eq:at_dual}
\end{equation}
Note that \emph{AT-Dual} needs to identify the background and foreground of each input when solving the inner maximization problem in Equation~\ref{eq:at_dual} at training time.
At prediction time, our approaches classify test samples like any standard classifier, independently of the semantic partitions, so as to close the backdoor to attacks on object detection approaches~\citep{xie2017adversarial}.
We evaluate the effectiveness of our approaches in Section~\ref{sec:experiments}.
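As a minimal illustration of the AT-Dual structure (inner maximization via an attack, outer minimization via a gradient step on the perturbed input), the following sketch uses a toy linear model with squared loss. The helper \texttt{dual\_attack} is a simplified, single-step stand-in for the dual-perturbation attack, not the paper's implementation:

```python
import numpy as np

def dual_attack(w, x, y, fg_mask, eps_f=0.1, eps_b=0.5):
    """One-step sign-gradient attack with separate foreground/background
    budgets (a simplified stand-in for the inner max of Eq. at_dual)."""
    g = 2 * (w @ x - y) * w  # gradient of (w.x - y)^2 w.r.t. the input x
    return (eps_f * np.sign(g) * fg_mask
            + eps_b * np.sign(g) * (1 - fg_mask))

def at_dual_step(w, x, y, fg_mask, lr=0.01):
    """One AT-Dual step: attack the input, then descend the model loss."""
    delta = dual_attack(w, x, y, fg_mask)        # inner maximization
    x_adv = x + delta
    grad_w = 2 * (w @ x_adv - y) * x_adv         # outer minimization
    return w - lr * grad_w
```

The toy shows only the control flow; the paper's version replaces the linear model with $h_{\bm{\theta}}$, the one-step attack with the iterative dual-perturbation attack, and adds the salience term.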
\section*{Appendix}
\section{Detailed Descriptions of The Algorithm for Computing Dual-perturbation Examples}
\label{sec:solution}
We use the following steps to solve the optimization problem of dual-perturbation attacks:
\begin{enumerate}
\item \emph{Initialization}.
Start with a random initial starting point $\bm{\delta}^{(0)}$.
To do this, randomly sample a data point $\bm{\delta}_F^{(0)}$ in $\ell_p$ ball $\Delta(\epsilon_F)$ and $\bm{\delta}_B^{(0)}$ in $\Delta(\epsilon_B)$.
Then, $\bm{\delta}^{(0)}$ can be obtained by using $\bm{\delta}^{(0)} = \bm{\delta}_F^{(0)} \circ \mathcal{F}(\bm{x}) + \bm{\delta}_B^{(0)} \circ \mathcal{B}(\bm{x})$.
This ensures that the initial perturbation is feasible in both foreground and background.
\item \emph{Split}.
At the $k$-th iteration, split the perturbation $\bm{\delta}^{(k)}$ into $\bm{\delta}_F^{(k)}$ for foreground and $\bm{\delta}_B^{(k)}$ for background:
\begin{equation}
\begin{cases}
\bm{\delta}_F^{(k)} = \bm{\delta}^{(k)} \circ \mathcal{F}(\bm{x}) \\
\bm{\delta}_B^{(k)} = \bm{\delta}^{(k)} \circ \mathcal{B}(\bm{x})
\end{cases}.
\end{equation}
Then update the foreground and background perturbations separately using the following rules:
\begin{equation}
\begin{cases}
\bm{\delta}_F^{(k+1)}=\mathcal{P}_\epsilon(\bm{\delta}_F^{(k)}+\alpha_F \cdot g_F) \\
\bm{\delta}_B^{(k+1)}=\mathcal{P}_\epsilon(\bm{\delta}_B^{(k)}+\alpha_B \cdot g_B)
\end{cases}
\end{equation}
where $g_F$ is the update that corresponds to the \emph{normalized steepest descent} constrained in the foreground, and $g_B$ for the background.
Specifically,
\begin{equation}
\begin{cases}
g_F = \mathcal{G}(\mathcal{F}(\bm{x}) \circ \nabla_{\bm{\delta}^{(k)}} \{ \mathcal{L}(h_{\bm{\theta}}(\bm{x}+\bm{\delta}^{(k)}), y) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta}^{(k)} \right) \}) \\
g_B = \mathcal{G}(\mathcal{B}(\bm{x}) \circ \nabla_{\bm{\delta}^{(k)}} \{ \mathcal{L}(h_{\bm{\theta}}(\bm{x}+\bm{\delta}^{(k)}), y) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta}^{(k)} \right) \})
\end{cases}
\label{eq:gf_gb}
\end{equation}
where $\alpha_F$ is the stepsize for foreground, and $\alpha_B$ is the stepsize for background.
\item \emph{Merge}.
At the end of the $k$-th iteration, merge the perturbations obtained in the last step by using
\begin{equation}
\bm{\delta}^{(k+1)} = \bm{\delta}_F^{(k+1)} + \bm{\delta}_B^{(k+1)}.
\end{equation}
$\bm{\delta}^{(k+1)}$ is further used to derive the update for the normalized steepest descent at the next iteration.
\item Return to step 2 or terminate after either a fixed number of iterations.
\end{enumerate}
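The initialize/split/update/merge loop above can be sketched as follows for the $\ell_\infty$ case. Here \texttt{grad\_fn} stands in for the gradient of the attack objective (loss plus salience term), the sign update plays the role of the normalized steepest ascent direction, and all helper names are illustrative only:

```python
import numpy as np

def project_linf(delta, eps):
    """Projection onto the l-infinity ball of radius eps."""
    return np.clip(delta, -eps, eps)

def dual_perturbation(x, grad_fn, fg_mask, eps_f=0.03, eps_b=0.1,
                      alpha_f=0.01, alpha_b=0.03, steps=10, seed=0):
    bg_mask = 1.0 - fg_mask
    rng = np.random.default_rng(seed)
    # Step 1 (Initialization): random start, feasible in both regions.
    delta = (rng.uniform(-eps_f, eps_f, x.shape) * fg_mask
             + rng.uniform(-eps_b, eps_b, x.shape) * bg_mask)
    for _ in range(steps):
        g = grad_fn(x + delta)
        # Step 2 (Split): per-region sign-gradient updates and projections.
        d_f = project_linf(delta * fg_mask + alpha_f * np.sign(g) * fg_mask, eps_f)
        d_b = project_linf(delta * bg_mask + alpha_b * np.sign(g) * bg_mask, eps_b)
        # Step 3 (Merge): recombine foreground and background perturbations.
        delta = d_f * fg_mask + d_b * bg_mask
    return delta
```

By construction, the returned perturbation respects the separate foreground budget $\epsilon_F$ and background budget $\epsilon_B$ at every iteration.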
\section{Descriptions of Datasets}
\subsection{Segment-6}
The statistics of the Segment-6 dataset are displayed in Table~\ref{tab:segment-6}.
\begin{table*}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multirow{2}{*}{\textbf{Class}} & \multicolumn{2}{l|}{\textbf{Number of samples}} \\ \cline{2-3}
& \textbf{Training} & \textbf{Test} \\ \hline \hline
Train & 3,000 & 200 \\ \hline
Bird & 3,000 & 200 \\ \hline
Cat & 3,000 & 200 \\ \hline
Dog & 3,000 & 200 \\ \hline
Toilet & 3,000 & 200 \\ \hline
Clock & 3,000 & 200 \\ \hline \hline
Total & 18,000 & 1,200 \\ \hline
\end{tabular}
\caption{Number of samples in each class of the Segment-6 dataset.}
\label{tab:segment-6}
\end{table*}
\subsection{STL-10}
The statistics of the STL-10 dataset are displayed in Table~\ref{tab:stl-10}.
\begin{table*}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multirow{2}{*}{\textbf{Class}} & \multicolumn{2}{l|}{\textbf{Number of samples}} \\ \cline{2-3}
& \textbf{Training} & \textbf{Test} \\ \hline \hline
Airplane & 500 & 10 \\ \hline
Bird & 500 & 10 \\ \hline
Car & 500 & 10 \\ \hline
Cat & 500 & 10 \\ \hline
Deer & 500 & 10 \\ \hline
Dog & 500 & 10 \\ \hline
Horse & 500 & 10 \\ \hline
Monkey & 500 & 10 \\ \hline
Ship & 500 & 10 \\ \hline
Truck & 500 & 10 \\ \hline \hline
Total & 5,000 & 100 \\ \hline
\end{tabular}
\caption{Number of samples in each class of the STL-10 dataset.}
\label{tab:stl-10}
\end{table*}
\subsection{ImageNet-10}
The labels and number of images per class in the ImageNet-10 dataset are listed in Table~\ref{tab:imagenet-10}.
\begin{table*}[h]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multirow{2}{*}{\textbf{Class}} & \multicolumn{2}{l|}{\textbf{Number of samples}} \\ \cline{2-3}
& \textbf{Training} & \textbf{Test} \\ \hline \hline
Airplane & 500 & 10 \\ \hline
Car & 500 & 10 \\ \hline
Cat & 500 & 10 \\ \hline
Dog & 500 & 10 \\ \hline
Truck & 500 & 10 \\ \hline
Elephant & 500 & 10 \\ \hline
Zebra & 500 & 10 \\ \hline
Bus & 500 & 10 \\ \hline
Bear & 500 & 10 \\ \hline
Bicycle & 500 & 10 \\ \hline \hline
Total & 5,000 & 100 \\ \hline
\end{tabular}
\caption{Number of samples in each class of the ImageNet-10 dataset.}
\label{tab:imagenet-10}
\end{table*}
\section{Implementations}
We implemented all the attack models, as well as the defense approaches, in PyTorch\footnote{Available at \url{https://pytorch.org/}.}, an open-source library for neural network learning.
We used the ResNet34 model~\citep{he2016deep} and standard transfer learning, as the datasets employed in our experiments do not have a sufficient amount of data to achieve high accuracy.
Specifically, we initialized the network with the model pre-trained on ImageNet, reset the final fully connected layer, and added a \emph{normalization layer} in front of the ResNet34 model, which performs a channel-wise transformation of an input by subtracting $(0.485, 0.456, 0.406)$ (the ImageNet mean) and dividing by $(0.229, 0.224, 0.225)$ (the ImageNet standard deviation);\footnote{To fit the Segment-6 dataset, which contains much smaller images than ImageNet, we also reset the first convolutional layer of the pre-trained ResNet34 model by reducing the kernel size from $7 \times 7$ to $3 \times 3$, the stride from 2 to 1, and the padding from 3 to 1.}
then we trained the neural networks as usual.
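The normalization layer described above amounts to a channel-wise affine transform; a minimal sketch in NumPy (rather than as a PyTorch module):

```python
import numpy as np

# ImageNet channel statistics, as given in the text.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """Channel-wise (x - mean) / std for an HxWx3 image scaled to [0, 1]."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD
```

An input equal to the channel means maps to zero, and one standard deviation above the means maps to one, so the network sees roughly zero-centered, unit-variance channels.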
Unless otherwise specified, we used 60 epochs with training batch size 128 for Segment-6.
For STL-10 and ImageNet-10, we trained the classifiers for 20 epochs with a batch size of 64.
We used the Adam optimizer~\citep{kingma2014adam} with an initial learning rate of $10^{-4}$ for \emph{Clean}, and $10^{-3}$ for \emph{AT-PGD} and \emph{AT-Dual}, respectively.
We dropped the learning rate by 0.1 every 20 epochs on Segment-6, and similarly at the 8th and 15th epochs on STL-10 and ImageNet-10.
As mentioned above, we implemented \emph{PGD} and \emph{dual-perturbation} attacks, bounded by both $\ell_\infty$ and $\ell_2$ norms, to evaluate robustness of a classification model, as well as to build robust classifiers.
For $\ell_\infty$ attacks, those used for evaluation were performed with 20 steps; for training robust classifiers, the attacks were performed with 10 steps at each epoch of adversarial training.
Similarly, $\ell_2$ attacks were performed with 100 steps for evaluation and 50 steps for adversarial training.
We used the semantic segmentation masks on the Segment-6 dataset, and used fixation prediction to identify foreground and background on STL-10 and ImageNet-10.
\section{Adversarial Training Using $\ell_2$ Norm Attacks on ImageNet-10}
{\bf Transferability of Adversarial Examples.}
Here, we measure the \emph{transferability} of adversarial examples among different classification models.
To do this, we first produced adversarial examples by using $\ell_2$ PGD attack or dual-perturbation attack on a source model.
Then, we used these examples to evaluate the performance of an independent target model, where a higher prediction accuracy means weaker transferability.
The results are presented in Figure~\ref{F:transfer_l2_imagenet}.
The first observation is that dual-perturbation attacks exhibit significantly better transferability than conventional PGD attacks (transferability is up to 40\% better for dual-perturbation attacks).
Second, when \emph{AT-Dual} is used as the target (i.e., defending by adversarial training with dual-perturbation examples), it is typically resistant to adversarial examples generated against either the clean model or \emph{AT-PGD}.
This observation holds even when we use PGD to generate the adversarial examples.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/imagenet_l2_dpgd_l2_black.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/imagenet_l2_pgd_l2_black.pdf}\\
\end{tabular}
\caption{Robustness against adversarial examples transferred from other models on ImageNet-10.
Left: $\ell_2$ dual-perturbation attacks performed by using $\{\epsilon_F, \epsilon_B, \lambda\}=\{2.0, 20.0, 1.0\}$ on different source models.
Right: $\ell_2$ PGD attacks with $\epsilon=2.0$ on different source models.
}
\label{F:transfer_l2_imagenet}
\end{figure}
\newpage
\section{Adversarial Training Using $\ell_2$ Norm Attacks on STL-10}
Here, we present experimental results of the robustness of classifiers that use adversarial training with $\ell_2$ norm attacks on STL-10.
Specifically, we trained AT-PGD using $\ell_2$ PGD attack with $\epsilon=1.0$, and AT-Dual by using $\ell_2$ dual-perturbation attack with $\{\epsilon_F, \epsilon_B, \lambda\}=\{1.0, 5.0, 0.0\}$.
The results are shown in Figure~\ref{fig:saliency_analysis_stl_l2}, ~\ref{fig:white_stl_l2}, ~\ref{fig:black_stl_l2}, and ~\ref{fig:general_stl_l2}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{figure_new/stl_l2_fs_lambda.pdf} &
\includegraphics[width=0.24\textwidth]{figure_new/stl_l2_acc_lambda.pdf}\\
\end{tabular}
\caption{
Saliency analysis.
The $\ell_2$ dual-perturbation attacks are performed by using $\{\epsilon_F, \epsilon_B\}=\{1.0, 5.0\}$, and a variety of $\lambda$ displayed in the figure.
Left: foreground scores of dual-perturbation examples in response to different classifiers.
Right: accuracy of classifiers on dual-perturbation examples with salience control.
}
\label{fig:saliency_analysis_stl_l2}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.26\textwidth]{figure_new/stl_l2_dpgd_l2_white_in.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/stl_l2_dpgd_l2_white_out.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/stl_l2_pgd_l2_white.pdf}\\
\end{tabular}
\caption{
Robustness to white-box $\ell_2$ attacks on STL-10.
Left: $\ell_2$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 5.0 and $\lambda=0.1$.
Middle: $\ell_2$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 1.0 and $\lambda=0.1$.
Right: $\ell_2$ PGD attacks.
}
\label{fig:white_stl_l2}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/stl_l2_dpgd_l2_black.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/stl_l2_pgd_l2_black.pdf}\\
\end{tabular}
\caption{
Robustness against adversarial examples transferred from other models on STL-10.
Left: $\ell_2$ dual-perturbation attacks performed by using $\{\epsilon_F, \epsilon_B, \lambda\}=\{1.0, 5.0, 0.1\}$ on different source models.
Right: $\ell_2$ PGD attacks with $\epsilon=1.0$ on different source models.
}
\label{fig:black_stl_l2}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.22\textwidth]{figure_new/stl_l2_pgd_linf_white} &
\includegraphics[width=0.22\textwidth]{figure_new/stl_l2_dpgd_linf_white_in} &
\includegraphics[width=0.22\textwidth]{figure_new/stl_l2_dpgd_linf_white_out} &
\includegraphics[width=0.22\textwidth]{figure_new/stl_l2_jsma_l0_white} \\
\end{tabular}
\caption{
Robustness to additional white-box attacks on STL-10.
Left: 20 steps of $\ell_\infty$ PGD attacks.
Middle left: 20 steps of $\ell_\infty$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 20/255 and $\lambda=0.1$.
Middle right: 20 steps of $\ell_\infty$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 4/255 and $\lambda=0.1$.
Right: $\ell_0$ JSMA attacks.
}
\label{fig:general_stl_l2}
\end{figure}
\newpage
\section{Adversarial Training Using $\ell_2$ Norm Attacks on Segment-6}
Now, we present experimental results of the robustness of classifiers that use adversarial training with $\ell_2$ norm attacks on Segment-6.
Since DeepGaze II only works on images with more than $35 \times 35$ pixels, we are unable to use it to compute the \emph{foreground score (FS)} for Segment-6.
Hence, in the following experiments on this dataset, we omit the salience term in the optimization problems of Equations 3 and 4 in the main body of the paper.
Specifically, we trained AT-PGD using $\ell_2$ PGD attack with $\epsilon=0.5$, and AT-Dual by using $\ell_2$ dual-perturbation attack with $\{\epsilon_F, \epsilon_B\}=\{0.5, 2.5\}$.
The results are shown in Figure~\ref{fig:white_segment_l2}, ~\ref{fig:black_segment6_l2}, and ~\ref{fig:general_segment6_l2}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/segment6_l2_dpgd_l2_white.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/segment6_l2_pgd_l2_white.pdf}\\
\end{tabular}
\caption{
Robustness to white-box $\ell_2$ attacks on Segment-6.
Left: $\ell_2$ dual-perturbation attacks with different foreground and background distortions.
Right: $\ell_2$ PGD attacks.
}
\label{fig:white_segment_l2}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/segment6_l2_dpgd_l2_black.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/segment6_l2_pgd_l2_black.pdf}\\
\end{tabular}
\caption{
Robustness against adversarial examples transferred from other models on Segment-6.
Left: $\ell_2$ dual-perturbation attacks performed by using $\{\epsilon_F, \epsilon_B\}=\{0.5, 2.5\}$ on different source models.
Right: $\ell_2$ PGD attacks with $\epsilon=0.5$ on different source models.
}
\label{fig:black_segment6_l2}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.26\textwidth]{figure_new/segment6_l2_pgd_linf_white} &
\includegraphics[width=0.26\textwidth]{figure_new/segment6_l2_dpgd_linf_white} &
\includegraphics[width=0.26\textwidth]{figure_new/segment6_l2_jsma_l0_white} \\
\end{tabular}
\caption{
Robustness to additional white-box attacks on Segment-6.
Left: 20 steps of $\ell_\infty$ PGD attacks.
Middle: 20 steps of $\ell_\infty$ dual-perturbation attacks with different foreground and background distortions.
Right: $\ell_0$ JSMA attacks.
}
\label{fig:general_segment6_l2}
\end{figure}
\newpage
\section{Adversarial Training Using $\ell_\infty$ Norm Attacks on ImageNet-10}
Next, we present experimental results of the robustness of classifiers that use adversarial training with $\ell_\infty$ norm attacks on ImageNet-10.
Specifically, we trained AT-PGD using $\ell_\infty$ PGD attack with $\epsilon=4/255$, and AT-Dual by using $\ell_\infty$ dual-perturbation attack with $\{\epsilon_F, \epsilon_B, \lambda\}=\{4/255, 20/255, 0.0\}$.
The results are shown in Figure~\ref{fig:saliency_analysis_imagenet_linf}, ~\ref{fig:white_imagenet_linf}, ~\ref{fig:black_imagenet_linf}, and ~\ref{fig:general_imagenet_linf}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{figure_new/imagenet_linf_fs_lambda.pdf} &
\includegraphics[width=0.24\textwidth]{figure_new/imagenet_linf_acc_lambda.pdf}\\
\end{tabular}
\caption{
Saliency analysis.
The $\ell_\infty$ dual-perturbation attacks are performed by using $\{\epsilon_F, \epsilon_B\}=\{4/255, 20/255\}$, and a variety of $\lambda$ displayed in the figure.
Left: foreground scores of dual-perturbation examples in response to different classifiers.
Right: accuracy of classifiers on dual-perturbation examples with salience control.
}
\label{fig:saliency_analysis_imagenet_linf}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.26\textwidth]{figure_new/imagenet_linf_dpgd_linf_white_in.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/imagenet_linf_dpgd_linf_white_out.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/imagenet_linf_pgd_linf_white.pdf}\\
\end{tabular}
\caption{
Robustness to white-box $\ell_\infty$ attacks on ImageNet-10.
Left: $\ell_\infty$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 20/255 and $\lambda=1.0$.
Middle: $\ell_\infty$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 4/255 and $\lambda=1.0$.
Right: $\ell_\infty$ PGD attacks.
}
\label{fig:white_imagenet_linf}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/imagenet_linf_dpgd_linf_black.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/imagenet_linf_pgd_linf_black.pdf}\\
\end{tabular}
\caption{
Robustness against adversarial examples transferred from other models on ImageNet-10.
Left: $\ell_\infty$ dual-perturbation attacks performed by using $\{\epsilon_F, \epsilon_B, \lambda\}=\{4/255, 20/255, 1.0\}$ on different source models.
Right: $\ell_\infty$ PGD attacks with $\epsilon=4/255$ on different source models.
}
\label{fig:black_imagenet_linf}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_linf_pgd_l2_white} &
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_linf_dpgd_l2_white_in} &
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_linf_dpgd_l2_white_out} &
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_linf_jsma_l0_white} \\
\end{tabular}
\caption{
Robustness to additional white-box attacks on ImageNet-10.
Left: 100 steps of $\ell_2$ PGD attacks.
Middle left: 100 steps of $\ell_2$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 20.0 and $\lambda=1.0$.
Middle right: 100 steps of $\ell_2$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 2.0 and $\lambda=1.0$.
Right: $\ell_0$ JSMA attacks.
}
\label{fig:general_imagenet_linf}
\end{figure}
\newpage
\section{Adversarial Training Using $\ell_\infty$ Norm Attacks on STL-10}
Now, we present experimental results of the robustness of classifiers that use adversarial training with $\ell_\infty$ norm attacks on STL-10.
Specifically, we trained AT-PGD using $\ell_\infty$ PGD attack with $\epsilon=4/255$, and AT-Dual by using $\ell_\infty$ dual-perturbation attack with $\{\epsilon_F, \epsilon_B, \lambda\}=\{4/255, 20/255, 0.0\}$.
The results are shown in Figure~\ref{fig:saliency_analysis_stl_linf}, ~\ref{fig:white_stl_linf}, ~\ref{fig:black_stl_linf}, and ~\ref{fig:general_stl_linf}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{figure_new/stl_linf_fs_lambda.pdf} &
\includegraphics[width=0.24\textwidth]{figure_new/stl_linf_acc_lambda.pdf}\\
\end{tabular}
\caption{
Saliency analysis.
The $\ell_\infty$ dual-perturbation attacks are performed by using $\{\epsilon_F, \epsilon_B\}=\{4/255, 20/255\}$, and a variety of $\lambda$ displayed in the figure.
Left: foreground scores of dual-perturbation examples in response to different classifiers.
Right: accuracy of classifiers on dual-perturbation examples with salience control.
}
\label{fig:saliency_analysis_stl_linf}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.26\textwidth]{figure_new/stl_linf_dpgd_linf_white_in.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/stl_linf_dpgd_linf_white_out.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/stl_linf_pgd_linf_white.pdf}\\
\end{tabular}
\caption{
Robustness to white-box $\ell_\infty$ attacks on STL-10.
Left: $\ell_\infty$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be $20/255$ and $\lambda=0.1$.
Middle: $\ell_\infty$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be $4/255$ and $\lambda=0.1$.
Right: $\ell_\infty$ PGD attacks.
}
\label{fig:white_stl_linf}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/stl_linf_dpgd_linf_black.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/stl_linf_pgd_linf_black.pdf}\\
\end{tabular}
\caption{
Robustness against adversarial examples transferred from other models on STL-10.
Left: $\ell_\infty$ dual-perturbation attacks performed by using $\{\epsilon_F, \epsilon_B, \lambda\}=\{4/255, 20/255, 1.0\}$ on different source models.
Right: $\ell_\infty$ PGD attacks with $\epsilon=4/255$ on different source models.
}
\label{fig:black_stl_linf}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.22\textwidth]{figure_new/stl_linf_pgd_l2_white} &
\includegraphics[width=0.22\textwidth]{figure_new/stl_linf_dpgd_l2_white_in} &
\includegraphics[width=0.22\textwidth]{figure_new/stl_linf_dpgd_l2_white_out} &
\includegraphics[width=0.22\textwidth]{figure_new/stl_linf_jsma_l0_white} \\
\end{tabular}
\caption{
Robustness to additional white-box attacks on STL-10.
Left: 100 steps of $\ell_2$ PGD attacks.
Middle left: 100 steps of $\ell_2$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 5.0 and $\lambda=0.1$.
Middle right: 100 steps of $\ell_2$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 1.0 and $\lambda=0.1$.
Right: $\ell_0$ JSMA attacks.
}
\label{fig:general_stl_linf}
\end{figure}
\newpage
\section{Adversarial Training Using $\ell_\infty$ Norm Attacks on Segment-6}
Finally, we present experimental results of the robustness of classifiers that use adversarial training with $\ell_\infty$ norm attacks on Segment-6.
We trained AT-PGD using $\ell_\infty$ PGD attack with $\epsilon=8/255$, and AT-Dual by using $\ell_\infty$ dual-perturbation attack with $\{\epsilon_F, \epsilon_B\}=\{8/255, 40/255\}$.
The results are shown in Figures~\ref{fig:white_segment_linf}, \ref{fig:black_segment6_linf}, and~\ref{fig:general_segment6_linf}.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/segment6_linf_dpgd_linf_white.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/segment6_linf_pgd_linf_white.pdf}\\
\end{tabular}
\caption{
Robustness to white-box $\ell_\infty$ attacks on Segment-6.
Left: $\ell_\infty$ dual-perturbation attacks with different foreground and background distortions.
Right: $\ell_\infty$ PGD attacks.
}
\label{fig:white_segment_linf}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.35\textwidth]{figure_new/segment6_linf_dpgd_linf_black.pdf} &
\includegraphics[width=0.35\textwidth]{figure_new/segment6_linf_pgd_linf_black.pdf}\\
\end{tabular}
\caption{
Robustness against adversarial examples transferred from other models on Segment-6.
Left: $\ell_\infty$ dual-perturbation attacks performed by using $\{\epsilon_F, \epsilon_B\}=\{8/255, 40/255\}$ on different source models.
Right: $\ell_\infty$ PGD attacks with $\epsilon=8/255$ on different source models.
}
\label{fig:black_segment6_linf}
\end{figure}
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.26\textwidth]{figure_new/segment6_linf_pgd_l2_white} &
\includegraphics[width=0.26\textwidth]{figure_new/segment6_linf_dpgd_l2_white} &
\includegraphics[width=0.26\textwidth]{figure_new/segment6_linf_jsma_l0_white} \\
\end{tabular}
\caption{
Robustness to additional white-box attacks on Segment-6.
Left: 100 steps of $\ell_2$ PGD attacks.
Middle: 100 steps of $\ell_2$ dual-perturbation attacks with different foreground and background distortions.
Right: $\ell_0$ JSMA attacks.
}
\label{fig:general_segment6_linf}
\end{figure}
\newpage
\section{Attacking Randomized Classifiers}
In addition to \emph{deterministic classifiers} that make a deterministic prediction for a test sample, our proposed attack can be adapted to \emph{stochastic classifiers} that apply randomization at training and prediction time.
For example, for classifiers using \emph{randomized smoothing}, we can refine Equation 3 in the main body of the paper as follows:
\begin{equation}
\max_{\substack{||\bm{\delta} \circ \mathcal{F}(\bm{x})||_p \leq \epsilon_F,\\ ||\bm{\delta}\circ\mathcal{B}(\bm{x})||_p \leq \epsilon_B}} \mathbb{E}_{\bm{\eta}\sim\mathcal{N}(\bm{0}, \sigma^{2} \bm{I})} [ \mathcal{L}\left(h_{\bm{\theta}}(\bm{x}+\bm{\delta}+\bm{\eta}), y\right) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta}+\bm{\eta} \right) ],
\label{eq:dual_pgd_rs}
\end{equation}
where $\sigma^{2}$ is the variance of the Gaussian data augmentation in randomized smoothing.\footnote{Note that the Gaussian perturbations are only used to compute the expectation of the loss and do not appear in the resulting adversarial examples.}
The optimization problem in Equation~\ref{eq:dual_pgd_rs} can be solved by the same approach used for deterministic classifiers, with the following modification on Equation~\ref{eq:gf_gb} at the second step in Section~\ref{sec:solution}:
\begin{equation}
\begin{cases}
g_F = \mathcal{G}(\mathcal{F}(\bm{x}) \circ \nabla_{\bm{\delta}^{(k)}} \mathbb{E}_{\bm{\eta}} [ \mathcal{L}(h_{\bm{\theta}}(\bm{x}+\bm{\delta}^{(k)}+\bm{\eta}), y) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta}^{(k)} + \bm{\eta} \right) ]) \\
g_B = \mathcal{G}(\mathcal{B}(\bm{x}) \circ \nabla_{\bm{\delta}^{(k)}} \mathbb{E}_{\bm{\eta}} [ \mathcal{L}(h_{\bm{\theta}}(\bm{x}+\bm{\delta}^{(k)}+\bm{\eta}), y) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta}^{(k)} + \bm{\eta} \right) ])
\end{cases}.
\end{equation}
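The expectation in the modified gradients above is typically approximated by Monte Carlo sampling. The following is a minimal sketch of that estimator (the function names are ours; \texttt{grad\_loss} stands in for the gradient of the full loss-plus-salience objective of Equation~\ref{eq:dual_pgd_rs} at a single point):

```python
import numpy as np

def mc_expected_loss_grad(x, delta, grad_loss, sigma=0.5, n_samples=64, rng=None):
    """Monte Carlo estimate of grad_delta E_eta[ L(x + delta + eta) ],
    with eta ~ N(0, sigma^2 I), as needed when attacking a classifier that
    uses randomized smoothing.  `grad_loss` maps a point to the gradient of
    the (loss + salience) objective at that point; since eta enters
    additively, the gradient w.r.t. delta equals the gradient w.r.t. the
    perturbed input."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(delta)
    for _ in range(n_samples):
        eta = rng.normal(0.0, sigma, size=np.shape(x))
        g += grad_loss(x + delta + eta)
    return g / n_samples
```

With a quadratic objective $L(z)=\tfrac{1}{2}\|z\|^2$ the estimate converges to $\bm{x}+\bm{\delta}$, since the Gaussian noise has zero mean.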
\subsection{Variance in Gaussian Data Augmentation}
Table~\ref{tab:rs_linf} and \ref{tab:rs_segment6_l2} show the effectiveness of \emph{Randomized Smoothing (RS)} against the proposed dual-perturbation attack.
Here, we use different variances in the Gaussian data augmentation of \emph{RS}, and fix the number of noise-corrupted copies at prediction time, $n$, to 100.
It can be seen that \emph{RS} is generally fragile to the dual-perturbation attacks that are adapted to randomized classifiers.
Moreover, increasing the noise level $\sigma$ used in Gaussian data augmentation only marginally improves adversarial robustness to dual-perturbation attacks while significantly decreasing accuracy on non-adversarial data.
\begin{table*}[h]
\centering
\scalebox{0.90}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Defense approach}} & \multicolumn{5}{c|}{\textbf{Attack Strength ($\epsilon_B = 5 \times \epsilon_F$)}} \\ \cline{3-7}
& & $\epsilon_F = 0/255$ & $\epsilon_F=4/255$ & $\epsilon_F=8/255$ & $\epsilon_F=12/255$ & $\epsilon_F=1$ \\ \hline \hline
\multirow{3}{*}{Segment-6} & RS, $\sigma=0.25$ & 71.4\% & 9.6\% & 0.4\% & 0.1\% & 0.0\% \\ \cline{2-7}
& RS, $\sigma=0.5$ & 61.7\% & 13.7\% & 1.9\% & 0.6\% & 0.2\% \\ \cline{2-7}
& RS, $\sigma=1$ & 47.7\% & 15.6\% & 2.8\% & 0.4\% & 0.2\% \\ \hline \hline
\end{tabular}
}
\caption{Robustness of \emph{RS} against $\ell_\infty$ dual-perturbation attacks.}
\label{tab:rs_linf}
\end{table*}
\begin{table*}[h]
\centering
\scalebox{0.90}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Defense approach}} & \multicolumn{5}{c|}{\textbf{Attack Strength} ($\epsilon_B = 5 \times \epsilon_F$)} \\ \cline{2-6}
& $\epsilon_F = 0$ & $\epsilon_F=0.25$ & $\epsilon_F=0.5$ & $\epsilon_F=0.75$ & $\epsilon_F=1$ \\ \hline \hline
RS, $\sigma=0.25$ & 71.4\% & 29.7\% & 6.7\% & 0.9\% & 0.1\% \\ \hline
RS, $\sigma=0.5$ & 61.7\% & 31.6\% & 11.8\% & 3.1\% & 1.3\% \\ \hline
RS, $\sigma=1$ & 47.7\% & 28.2\% & 14.4\% & 6.0\% & 1.5\% \\ \hline
\end{tabular}
}
\caption{Robustness of \emph{RS} against $\ell_2$ dual-perturbation attacks on Segment-6.}
\label{tab:rs_segment6_l2}
\end{table*}
\subsection{Number of Samples with Gaussian Noise at Prediction Time}
It has been observed that \emph{Randomized Smoothing (RS)} can be computationally inefficient at prediction time, as it uses a large number of noise-corrupted copies for each test sample.
It is natural to ask whether the prediction time of \emph{RS} can be reduced without significantly sacrificing adversarial robustness in practice.
We answer this question by studying the effectiveness of \emph{RS} under different values of $n$, the number of noise-corrupted copies at prediction time.
Specifically, we fix $\sigma=0.5$ and set $n$ to be 1, 25, and 100.
Note that when $n=1$, there is no two-sided hypothesis test at prediction time; thus, the prediction never abstains.
Here we use $\ell_\infty$ dual-perturbation attacks on \emph{RS} for demonstration purposes.
The results are shown in Table~\ref{tab:rs_linf_n}.
It can be seen that when $n=25$, the accuracy on both adversarial and non-adversarial data can drop by up to 10\% compared to \emph{RS} using $n=100$.
The reason is that with a small $n$, the prediction is more likely to abstain.
Interestingly, when $n=1$, the accuracy is marginally improved compared to $n=100$, while the prediction time is reduced by 99\%.
This indicates that, in practice, we can skip the two-sided hypothesis test at prediction time without losing accuracy.
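For concreteness, the prediction procedure discussed above can be sketched as follows. This is an illustrative reimplementation of randomized smoothing's majority-vote prediction rule with a two-sided binomial test on the top-two vote counts, not the exact code used in our experiments; \texttt{classify} and \texttt{noise} are hypothetical callables:

```python
import math
from collections import Counter

ABSTAIN = -1

def rs_predict(classify, x, noise, n=100, alpha=0.05):
    """Randomized-smoothing prediction: classify the input under n Gaussian
    corruptions, take the majority class, and abstain unless a two-sided
    binomial test on the top-two counts rejects a tie at level alpha.
    `classify(z)` returns a label; `noise()` draws one perturbation.
    With n=1 there is no test, so the single noisy prediction is returned."""
    votes = Counter(classify(x + noise()) for _ in range(n))
    (top, n_a), *rest = votes.most_common()
    if n == 1:
        return top
    n_b = rest[0][1] if rest else 0
    m = n_a + n_b
    # two-sided binomial test of n_a successes in m trials with p = 1/2
    p_val = min(1.0, 2.0 * sum(math.comb(m, k) for k in range(n_a, m + 1)) * 0.5 ** m)
    return top if p_val <= alpha else ABSTAIN
```

With unanimous votes the test rejects a tie and the majority class is returned; with perfectly balanced votes the p-value is near 1 and the prediction abstains.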
\begin{table*}[h]
\centering
\scalebox{0.90}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Defense approach}} & \multicolumn{5}{c|}{\textbf{Attack Strength ($\epsilon_B = 5 \times \epsilon_F$)}} \\ \cline{3-7}
& & $\epsilon_F = 0/255$ & $\epsilon_F=4/255$ & $\epsilon_F=8/255$ & $\epsilon_F=12/255$ & $\epsilon_F=1$ \\ \hline \hline
\multirow{3}{*}{Segment-6} & RS, $n=1$ & 66.0\% & 19.8\% & 3.2\% & 0.8\% & 0.3\% \\ \cline{2-7}
& RS, $n=25$ & 49.4\% & 9.1\% & 1.3\% & 0.5\% & 0.0\% \\ \cline{2-7}
& RS, $n=100$ & 61.7\% & 13.7\% & 1.9\% & 0.6\% & 0.2\% \\ \hline
\end{tabular}
}
\caption{Robustness of \emph{RS} against $\ell_\infty$ dual-perturbation attacks under different numbers of noise-corrupted copies at prediction time.}
\label{tab:rs_linf_n}
\end{table*}
\newpage
\section{Visualization of Loss Gradient}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figure_new/visualization_appendix.pdf}
\caption{
Visualization of loss gradient of different classifiers with respect to pixels of \emph{non-adversarial} inputs.
AT-PGD and AT-Dual were obtained using adversarial training with corresponding $\ell_2$ norm attacks.
}
\label{fig:visualization_gradient}
\end{figure}
\section{Examples of Dual-Perturbation Attacks}
\begin{figure}[h!]
\centering
\includegraphics[width=0.95\textwidth]{figure_new/adv_examples.pdf}
\caption{
Dual-perturbation attacks.
Adversarial examples are produced in response to the \emph{Clean} model for each dataset.
}
\label{fig:adv_example}
\end{figure}
\section{Experiments}
\label{sec:experiments}
\subsection{Experimental Setup}
{\bf Datasets}.
We conducted the experiments on the following three datasets (detailed in Appendix B):
The first is Segment-6~\citep{cong2019masked}, which consists of $32\times32$-pixel images obtained by pre-processing the Microsoft COCO dataset~\citep{lin2014microsoft} to make it compatible with image classification tasks.
We directly used the semantic segmentation based foreground masks provided in this dataset.
Our second dataset is STL-10, which contains images with $96\times96$ pixels.
Our third dataset is ImageNet-10, a 10-class subset of the ImageNet dataset~\citep{deng2019imagenet}.
We cropped all of its images to $224\times224$ pixels.
For STL-10 and ImageNet-10, we used fixation prediction to identify foreground and background as described in Section~\ref{sec:threat_model}.
{\bf Baselines}.
We consider \emph{PGD} attack as a baseline adversarial model, and \emph{Adversarial Training with PGD Attacks} as a baseline robust classifier.
We also consider a classifier trained on non-adversarial data (henceforth, \emph{Clean}).
Additionally, we consider \emph{Randomized Smoothing}~\citep{cohen2019certified} and defer the corresponding results to Appendix J.
{\bf Evaluation Metrics}.
We use two standard evaluation metrics for both attacks and defenses:
1) accuracy of prediction on clean test data where no adversarial attacks were attempted.
2) adversarial accuracy, which is accuracy when adversarial inputs are used in place of clean inputs.
Throughout our evaluation, we used both $\ell_2$ and $\ell_\infty$ norms to measure the magnitude of added adversarial perturbations.
Due to space limitations, we only present experimental results of the \emph{Clean} model and classification models that are trained to be robust to $\ell_2$ norm attacks using the ImageNet-10 dataset.
The results for the $\ell_\infty$ norm and for the other datasets are similar and are deferred to the Appendix.
In the following experiments, all classifiers were trained for 20 epochs on a ResNet34 model~\citep{he2016deep} pre-trained on ImageNet, with a customized final fully connected layer.
Specifically, we trained AT-PGD by using 50 steps of $\ell_2$ PGD attack with $\epsilon=2.0$, and AT-Dual by using 50 steps of $\ell_2$ dual-perturbation attack with $\{\epsilon_F, \epsilon_B,\lambda\} = \{2.0, 20.0, 0.0\}$ at each training epoch.
At test time, we used both $\ell_2$ PGD and dual-perturbation attacks with 100 steps to evaluate robustness.
\subsection{Saliency Analysis of Dual-Perturbation Adversarial Examples}
We begin by considering a natural question: is our particular distinction between foreground and background actually consistent with cognitive salience?
In fact, this gives rise to two distinct considerations: 1) whether foreground as we identify it is in fact significantly more salient than the background, and 2) if so, whether background becomes significantly more salient \emph{as a result of our dual-perturbation attacks}.
We answer both of these questions by appealing to DeepGaze II~\citep{kummerer2017understanding} to compute the \emph{foreground score (FS)} of dual-perturbation examples as described in Section~\ref{sec:threat_model}, and using the accuracy of different classifiers on dual-perturbation examples with different background salience.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{figure_new/imagenet_l2_fs_lambda.pdf} &
\includegraphics[width=0.24\textwidth]{figure_new/imagenet_l2_acc_lambda.pdf}\\
\end{tabular}
\caption{
Saliency analysis.
Dual-perturbation attacks are performed by using $\{\epsilon_F, \epsilon_B\}=\{2.0, 20.0\}$ and a variety of $\lambda$ displayed in the figure.
Left: foreground scores of dual-perturbation examples in response to different classifiers.
Right: accuracy of classifiers on dual-perturbation examples with salience control.
}
\label{fig:saliency_analysis}
\end{figure}
Figure~\ref{fig:saliency_analysis} presents the answer to both of the questions above.
First, observe that in Figure~\ref{fig:saliency_analysis}, \emph{FS} (vertical axis) is typically well above 0.5, and in most cases above 0.9, for all attacks.
Second, this is true whether we attack the \emph{Clean} model, or either \emph{AT-PGD} or \emph{AT-Dual} robust models.
Particularly noteworthy, however, is the impact that the parameter $\lambda$ has on the \emph{FS}, especially when robust classifiers are employed.
Recall that $\lambda$ reflects the relative importance of salience in generating adversarial examples, with larger values forcing our approach to pay more attention to preserving unsuspiciousness of background relative to foreground.
As we increase $\lambda$, we note significantly higher \emph{FS}, i.e., lower background salience (again, Figure~\ref{fig:saliency_analysis}, left).
Figure~\ref{fig:dual_pgd_example} offers a visual illustration of this effect.
As significantly, Figure~\ref{fig:saliency_analysis} (right) shows that moderately increasing $\lambda$ does not significantly reduce the effectiveness of the attack, on either the \emph{Clean} or the robust classifiers.
\subsection{Dual-perturbation Attacks on Robust Classifiers}
Next, we evaluate the effectiveness of dual-perturbation attacks against state-of-the-art robust learning methods, as well as the effectiveness of adversarial training that uses dual-perturbation attacks for generating adversarial examples.
We begin by considering white-box attacks, and subsequently evaluate transferability.
Due to space limitations, we defer the results of transferability to Appendix D.
\begin{figure}[t]
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.26\textwidth]{figure_new/imagenet_l2_dpgd_l2_white_in.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/imagenet_l2_dpgd_l2_white_out.pdf} &
\includegraphics[width=0.26\textwidth]{figure_new/imagenet_l2_pgd_l2_white.pdf}\\
\end{tabular}
\caption{
Robustness to white-box $\ell_2$ attacks on ImageNet-10.
Left: dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 20.0 and $\lambda=1.0$.
Middle: dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 2.0 and $\lambda=1.0$.
Right: PGD attacks.
}
\label{F:l2attacks}
\end{figure}
The results for white-box attacks are presented in Figure~\ref{F:l2attacks}.
First, consider the dual-perturbation attacks (left and middle plots).
Note that in all cases these attacks are highly successful against the baseline robust classifier (AT-PGD); indeed, even relatively small levels of foreground noise yield near-zero accuracy when accompanied by sufficiently large background perturbations.
For example, when the perturbation to the foreground is $\epsilon_F=2.0$ and background perturbation is $\epsilon_B=20.0$, \emph{AT-PGD} achieves robust accuracy below $10\%$.
In contrast, AT-Dual remains significantly more robust, with an improvement of up to $40\%$ compared to the baseline.
Second, consider the standard PGD attacks (right plot).
It can be observed that all of the robust models resist the $\ell_2$ PGD attacks.
However, our defense exhibits moderately higher robustness than the baselines under large PGD distortions, without sacrificing much accuracy on clean data.
For example, when the perturbation of the $\ell_2$ PGD attack exceeds $\epsilon=3.0$, \emph{AT-Dual} can achieve 20\% higher accuracy.
\subsection{Generalizability of Defense}
It has been observed that models robust against $\ell_p$-norm-bounded attacks for one value of $p$ can be fragile when facing attacks bounded in a different norm $\ell_{p'}$~\citep{sharma2018attacking}.
\begin{figure}[t]
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_l2_pgd_linf_white} &
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_l2_dpgd_linf_white_in} &
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_l2_dpgd_linf_white_out} &
\includegraphics[width=0.22\textwidth]{figure_new/imagenet_l2_jsma_l0_white} \\
\end{tabular}
\caption{Robustness to additional white-box attacks on ImageNet-10.
Left: 20 steps of $\ell_\infty$ PGD attacks.
Middle left: 20 steps of $\ell_\infty$ dual-perturbation attacks with different foreground distortions. $\epsilon_B$ is fixed to be 20/255 and $\lambda=1.0$.
Middle right: 20 steps of $\ell_\infty$ dual-perturbation attacks with different background distortions. $\epsilon_F$ is fixed to be 4/255 and $\lambda=1.0$.
Right: $\ell_0$ JSMA attacks.
}
\label{F:generalizability}
\end{figure}
Our final goal is to present evidence that defenses based on dual-perturbation attacks remain relatively robust even when faced with attacks generated using different norms.
We show this by training our models with $\ell_2$-bounded attacks and evaluating them against attacks using other norms.
The results are presented in Figure~\ref{F:generalizability}.
We consider three alternative attacks: 1) PGD using $\ell_\infty$-bounded perturbations, as in \citet{madry2018towards} (left in Figure~\ref{F:generalizability}); 2) dual-perturbation attacks with $\ell_\infty$-norm bounds (middle left and middle right in Figure~\ref{F:generalizability}); and 3) JSMA, an $\ell_0$-bounded attack~\citep{papernot2016limitations} (right in Figure~\ref{F:generalizability}).
We additionally considered $\ell_2$ attacks, per Carlini and Wagner~\citep{carlini2017towards}, but found that all of the robust models, whether based on PGD or dual-perturbation attacks, resist these.
Our first observation is that \emph{AT-Dual} is significantly more robust to $\ell_\infty$-bounded PGD attacks than the adversarial training approach in which adversarial examples are generated using $\ell_2$-bounded PGD attacks (Figure~\ref{F:generalizability} (left)).
Consequently, training with dual-perturbation attacks already exhibits better ability to generalize to other attacks compared to conventional adversarial training.
The gap between dual-perturbation-based adversarial training and standard adversarial training is even more significant when we consider $\ell_\infty$ dual-perturbation attacks (middle left and middle right plots of Figure~\ref{F:generalizability}).
Here, we see that the robustness of the PGD-based adversarially trained model is only marginally better than that of a clean model under large distortions (e.g., when $\epsilon_B\geq 20/255$ in the middle right plot of Figure~\ref{F:generalizability}), whereas \emph{AT-Dual} remains relatively robust.
Finally, considering JSMA attacks (see Figure~\ref{F:generalizability} (right)), we can observe that both \emph{AT-Dual} and \emph{AT-PGD} remain relatively robust.
However, a deeper look at Figure~\ref{F:generalizability} (right) reveals that \emph{AT-Dual} exhibits moderately higher robustness than \emph{AT-PGD} under large JSMA distortions.
Overall, in all of the cases, the model made robust using dual-perturbation attacks remains quite robust even as we evaluate against a different attack, using a different norm.
\subsection{Analysis of Defense}
\begin{figure}[t]
\centering
\includegraphics[width=0.50\textwidth]{figure_new/visualization.pdf}
\caption{
Visualization of loss gradient of different classifiers with respect to pixels of \emph{non-adversarial} inputs.
}
\label{F:visualization}
\end{figure}
Finally, we conduct an exploratory experiment to study adversarial robustness by investigating which pixel-level features are important for different classifiers at prediction time.
To do this, we visualize the loss gradient of different classifiers with respect to pixels of the same \emph{non-adversarial} inputs (as introduced in \citet{tsipras2019robustness}), shown in Figure~\ref{F:visualization}.
Our first observation is that the gradients in response to adversarially robust classifiers (AT-PGD and AT-Dual) align well with human perception, while a standard training model (Clean) results in a noisy gradient for the input images.
Second, compared to adversarial training with the conventional PGD attack (AT-PGD), the loss gradient of AT-Dual provides significantly better alignment with sharper foreground edges and less noisy background.
This indicates that adversarial training with the dual-perturbation attack, which models unsuspiciousness, can extract more perceptual semantics from an input image and is less dependent on the background at prediction time.
In other words, our defense approach can extract highly robust and semantically meaningful features, which contribute to its robustness to a variety of attacks.
\section{Dual-Perturbation Attacks}
\label{sec:threat_model}
\subsection{Motivation}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figure_new/semantics.pdf}
\caption{
Semantic distinction between foreground and background. Left: original image of bears. Middle: adversarial example with $\ell_\infty$-bounded perturbations ($\epsilon=40/255$) on the background; the semantic meaning (bear) is preserved. Right: adversarial example with $\ell_\infty$-bounded perturbations ($\epsilon=40/255$) on the foreground, with more ambiguous semantics.
}
\label{fig:semantics}
\end{figure}
Our threat model is motivated by the \emph{feature integration theory}~\citep{treisman1980feature} in cognitive science: regions that have features that are different from their surroundings are more likely to catch a viewer's gaze.
Such regions are called \emph{salient regions}, or \emph{foreground}, while the others are called \emph{background}.
Accordingly, for a given image, the semantics of the object of interest is more likely to be preserved in the foreground, as it catches more visual attention of a viewer compared to the background.
If the foreground of an image is corrupted, then the semantics of the object of interest is broken.
In contrast, the same extent of corruption in the background nevertheless preserves the overall semantic meaning of the scene captured (see, e.g., Figure~\ref{fig:semantics}).
Indeed, detection of salient regions, as well as the segmentation of foreground and background, have been extensively studied in computer vision~\citep{borji2015salient}.
These approaches either predict human fixations, which are sparse bubble-like salient regions sampled from a distribution~\citep{kummerer2017understanding}, or salient objects that contain smooth connected areas in an image~\citep{he2018salient}.
Despite this important cognitive distinction between foreground and background, essentially all of the attacks on deep neural networks for image classification make no such distinction, even though a number of other semantic factors have been considered~\citep{bhattad2020unrestricted,Mohapatra20}.
Rather, much of the focus has been on adversarial perturbations that are \emph{not noticeable} to a human, but which are applied equally \emph{to the entire image}.
However, in security applications, the important issue is not merely that an attack cannot be noticed, but that whatever is observed is \emph{not suspicious}.
This is, indeed, the frame of reference for many high-profile \emph{physical} attacks on image classification, which are clearly visible, but not suspicious because they hide in the ``human psyche'', that is, are easily ignored~\citep{Sharif16,Eykholt2018RobustPA}.
The main goal of the threat model we introduce next is therefore to capture more precisely the notion that an adversarial example is not suspicious by leveraging the cognitive distinction between foreground and background of an image.
\subsection{Dual-Perturbation Attacks}
\label{subsec:dual}
At the high level, our proposed threat model involves producing small (imperceptible) adversarial perturbations in the foreground of an image, and larger perturbations in the background.
This can be done by incorporating state-of-the-art attacks into our method:
we can use one attack with small $\epsilon$ in the foreground, and another with a large $\epsilon$ in the background.
Consequently, we term our approach \emph{dual-perturbation attacks}.
Note that these clearly generalize the standard small-norm (e.g., PGD) attacks, since we can set the $\epsilon$ to be identical in both the foreground and background.
However, the key consideration is that after we add the large amount of noise to the background, \emph{we must ensure that we do not thereby make it highly salient to the viewer}.
We capture this second objective by including in the optimization problem a \emph{salience} term that decreases with increasing salience of the background.
Formally, the \emph{dual-perturbation} attack solves the following optimization problem:
\begin{equation}
\max_{\substack{||\bm{\delta} \circ \mathcal{F}(\bm{x})||_p \leq \epsilon_F,\\ ||\bm{\delta}\circ\mathcal{B}(\bm{x})||_p \leq \epsilon_B}} \mathcal{L}\left(h_{\bm{\theta}}(\bm{x}+\bm{\delta}), y\right) + \lambda \cdot \mathcal{S} \left( \bm{x}+\bm{\delta} \right),
\label{eq:dual_pgd}
\end{equation}
where $\mathcal{S}\left( \bm{x}+\bm{\delta} \right)$ measures the relative salience of the foreground compared to the background after adversarial noise $\bm{\delta}$ has been added, and $\lambda$ is a parameter that explicitly balances the two objectives: maximizing predicted loss on adversarial examples, and limiting background salience (relative to foreground) so that the adversarial example produced is unsuspicious.
Here $\mathcal{F}$ returns the mask matrix constraining the area of the perturbation in the foreground, and $\mathcal{B}$ returns the mask matrix restricting the area of the perturbation in the background, for an input image $\bm{x}$.
$\mathcal{F}(\bm{x})$ and $\mathcal{B}(\bm{x})$ have the same dimension as $\bm{x}$ and contain 1s in the area which can be perturbed and 0s elsewhere.
$\circ$ denotes element-wise multiplication for matrices.
Hence, we have $\bm{x} = \mathcal{F}(\bm{x}) \circ \bm{x} + \mathcal{B}(\bm{x}) \circ \bm{x}$, which indicates that any input image can be decomposed into two independent images: one containing just the foreground, and the other containing the background.
We model the suspiciousness $\mathcal{S}(\bm{x})$ of an input image $\bm{x}$ by leveraging a recent computational model of image salience, DeepGaze II~\citep{kummerer2017understanding}.
DeepGaze II outputs predicted pixel-level density of human fixations on an image with the total density over the entire image summing to 1.
Our measure of relative salience of the foreground to background is the \emph{foreground score}, which is defined as $\mathcal{S}(\bm{x}) = \sum_{i \in \{k|\mathcal{F}(\bm{x})_k \neq 0\}} s_i$, where $s_i$ is the saliency score produced by DeepGaze II for pixel $i$ of image $\bm{x}$.
Since the foreground, as a fraction of the image, tends to be around 50--60\%, a score significantly higher than 0.5 indicates that predicted human fixation is relatively localized to the foreground.
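As a concrete sketch, the foreground score defined above reduces to summing the predicted fixation density inside the foreground mask (assuming the density is already normalized to sum to 1 over the image; the function name is ours):

```python
import numpy as np

def foreground_score(saliency, fg_mask):
    """Foreground score S(x): the fraction of predicted fixation density
    falling inside the foreground mask.  `saliency` is a pixel-level
    density (nonnegative, summing to 1, as produced by a saliency model
    such as DeepGaze II); `fg_mask` is the 0/1 foreground mask F(x)."""
    return float(np.sum(saliency * fg_mask))
```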
A natural approach for solving the optimization problem shown in Equation~\ref{eq:dual_pgd} is to apply an iterative method, such as the PGD attack.
However, the use of this approach poses two challenges in our setting.
First, as in the PGD attack, the problem is non-convex, and PGD only converges to a local optimum.
We can address this issue by using \emph{random starts}, i.e.,~by randomly initializing the starting point of the adversarial perturbations, as in~\citet{madry2018towards}.
Second, and unlike PGD, the optimization problem in Equation~\ref{eq:dual_pgd} involves \emph{two hard constraints} $||\bm{\delta}\circ\mathcal{F}(\bm{x})||_p \leq \epsilon_F$ and $||\bm{\delta}\circ\mathcal{B}(\bm{x})||_p \leq \epsilon_B$.
Thus, the feasible region of the adversarial perturbation $\bm{\delta}$ is not an $\ell_p$ ball, which makes computing the projection $\mathcal{P}_{\epsilon}$ computationally challenging in high-dimensional settings.
To address this challenge, we split the \emph{dual-perturbation} attack into two individual processes in each iteration, one for the adversarial perturbation in the foreground and the other for the background, and then merge these two perturbations when computing the gradients, like a standard PGD attack.
Full details of our algorithms for computing dual perturbation examples are provided in Appendix A.
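A minimal sketch of one such iteration under $\ell_\infty$ bounds may help fix ideas. This illustrates only the split-and-merge projection step, not the full Appendix A algorithm, and all names are ours; clipping to the valid pixel range is omitted for brevity:

```python
import numpy as np

def dual_pgd_step(delta, grad, fg_mask, bg_mask, eps_f, eps_b, step):
    """One iteration of a dual-perturbation attack under l_inf bounds:
    take a signed ascent step on the merged gradient, then project the
    foreground and background components of the perturbation onto their
    separate epsilon-balls and recombine them."""
    delta = delta + step * np.sign(grad)          # ascent on the merged gradient
    d_f = np.clip(delta * fg_mask, -eps_f, eps_f)  # project foreground part
    d_b = np.clip(delta * bg_mask, -eps_b, eps_b)  # project background part
    return d_f + d_b
```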
Now, the question that remains is how to partition an input image $\bm{x}$ into foreground, $\mathcal{F}(\bm{x})$, and background, $\mathcal{B}(\bm{x})$.
We address this next.
\subsection{Identifying Foreground and Background}
Given an input $\bm{x}$, we aim to compute $\mathcal{F}(\bm{x})$, the foreground mask and $\mathcal{B}(\bm{x})$, the background mask.
We consider two approaches for this: fixation prediction and segmentation.
Our first method leverages the fixation prediction approach~\citep{kummerer2017understanding} to identify foreground and background.
This enables a general approach for foreground-background partition as fixation predictions are not limited to any specific collection of objects.
Specifically, we first use DeepGaze II~\citep{kummerer2017understanding} to output predicted pixel-level density of human fixations on an image.
We then divide the image into foreground and background by setting, for each input image $\bm{x}$, a threshold $t = 0.5\cdot(s_{min}(\bm{x})+s_{max}(\bm{x}))$, where $s_{min}(\bm{x})$ and $s_{max}(\bm{x})$ are the minimum and maximum predicted fixation values over the pixels of $\bm{x}$.
Pixels with values larger than $t$ are grouped into the foreground, and the remaining pixels form the background.
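A minimal sketch of this thresholding rule (our illustration; the fixation map itself would be produced by DeepGaze II, which is only named here):

```python
import numpy as np

def fixation_masks(fixation):
    """Threshold a predicted fixation-density map into binary masks.

    t is the midpoint of the map's range, t = 0.5 * (s_min + s_max),
    as described in the text.
    """
    t = 0.5 * (fixation.min() + fixation.max())
    fg = (fixation > t).astype(float)
    return fg, 1.0 - fg

fix = np.array([[0.1, 0.2, 0.1],
                [0.2, 0.9, 0.8],
                [0.1, 0.7, 0.2]])
fg_mask, bg_mask = fixation_masks(fix)   # threshold t = 0.5 here
```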
Our second approach is to make use of semantic segmentation to provide a partition of the foreground and background at the pixel level.
This can be done in two steps:
First, we use state-of-the-art paradigms for semantic segmentation (e.g., \citet{long2015fully}) to identify pixels that belong to each corresponding object, as there might be multiple objects in an image.
Next, we identify the pixels that belong to the object of interest as the foreground pixels, and the others as background pixels.
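The second step above can be sketched as follows, assuming a per-pixel label map has already been produced by a segmenter (the segmentation model itself is not shown, and the class ids are hypothetical):

```python
import numpy as np

def masks_from_segmentation(seg_labels, target_class):
    """Foreground/background masks from a per-pixel class-label map.

    seg_labels: 2-D integer array of class ids, as produced by an
    FCN-style semantic segmenter; target_class: id of the object of
    interest. All other pixels are treated as background.
    """
    fg = (seg_labels == target_class).astype(float)
    return fg, 1.0 - fg

labels = np.array([[0, 0, 7],
                   [0, 7, 7],
                   [0, 0, 0]])
fg_mask, bg_mask = masks_from_segmentation(labels, target_class=7)
```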
We use both of the above approaches in dual-perturbation attacks when evaluating the robustness of classifiers, as well as designing robust models.
More details are available in Section~\ref{sec:experiments}.
\section{Background}
\label{sec:background}
\subsection{Adversarial Examples and Attacks}
The problem of generating adversarial examples is commonly modeled as follows.
We are given a learned model $h_{\bm{\theta}}(\cdot)$ parameterized by $\bm{\theta}$, which maps an input $\bm{x}$ to a $k$-dimensional prediction, where $k$ is the number of classes being predicted.
The final predicted class $y_p$ is obtained by $y_p = \argmax_i h_{\bm{\theta}}(\bm{x})_i$, where $h_{\bm{\theta}}(\bm{x})_i$ is the $i$th element of $h_{\bm{\theta}}(\bm{x})$.
Now, consider an input $\bm{x}$ along with a correct label $y$.
The problem of identifying an adversarial example for $\bm{x}$ can be captured by the following optimization problem:
\begin{equation}
\max_{\bm{\delta} \in \Delta(\epsilon)}\mathcal{L}\left(h_{\bm{\theta}}(\bm{x}+\bm{\delta}), y\right),
\label{eq:adv_example}
\end{equation}
where $\mathcal{L}(\cdot)$ is the adversary's utility function (for example, the loss function used to train the classifier $h_{\bm{\theta}}$).
$\Delta(\epsilon)$ is the feasible perturbation space, commonly represented as an $\ell_p$ ball:
$\Delta(\epsilon)=\left\{\bm{\delta} :\|\bm{\delta}\|_{p} \leq \epsilon\right\}$.
A number of approaches have been proposed to solve the optimization problem shown in Eq.~(\ref{eq:adv_example}), among which two are viewed as state of the art: \emph{CW attack} developed by \citet{carlini2017towards}, and \emph{Projected Gradient Descent (PGD) attack} proposed in \citet{madry2018towards}.
In this work, we focus on the PGD attack with $\ell_\infty$ and $\ell_2$ as the distance metrics.
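As a reference point, a bare-bones $\ell_\infty$ PGD loop can be sketched as below. This is a generic reconstruction, not the exact attack configuration used in the experiments: `loss_grad` is a caller-supplied function returning the loss gradient for the model under attack, and the toy gradient at the end exists only to make the sketch runnable.

```python
import numpy as np

def pgd_attack(x, loss_grad, eps, alpha, steps, rng=None):
    """l_inf PGD: signed-gradient ascent on the loss, projecting the
    perturbation back into the eps ball after every step."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.uniform(-eps, eps, size=x.shape)   # random start
    for _ in range(steps):
        delta = delta + alpha * np.sign(loss_grad(x + delta))
        delta = np.clip(delta, -eps, eps)          # projection onto the ball
    return np.clip(x + delta, 0.0, 1.0)            # keep a valid image

# Toy "gradient" that always points upward, so delta saturates at +eps.
x0 = np.full((2, 2), 0.5)
x_adv = pgd_attack(x0, lambda z: np.ones_like(z),
                   eps=8/255, alpha=2/255, steps=10)
```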
\subsection{Robust Learning}
An important defense approach that has proved empirically effective even against adaptive attacks is \emph{adversarial training}~\citep{szegedy14intriguing, cohen2019certified, goodfellow15, madry2018towards}.
The basic idea of adversarial training is to produce adversarial examples and incorporate these into the training process.
Formally, adversarial training aims to solve the following robust learning problem:
\begin{equation}
\underset{\bm{\theta}}{\min} \frac{1}{|D|} \sum_{\bm{x}, y \in D} \max _{\|\bm{\delta}\|_{p} \leq \epsilon} \mathcal{L}\left(h_{\bm{\theta}} (\bm{x}+\bm{\delta}), y\right),
\label{eq:adv_training}
\end{equation}
where $D$ is the training dataset.
In practice, this problem is commonly solved by iteratively using the following two steps~\citep{madry2018towards}: 1) use a PGD (or other) attack to produce adversarial examples of the training data; 2) use any optimizer to minimize the loss of those adversarial examples.
It has been shown that adversarial training can significantly boost the adversarial robustness of a classifier against $\ell_p$ attacks, and it can be scaled to neural networks with complex architectures.
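The two-step loop can be sketched in a model-agnostic way. The interfaces below (`attack`, `grad_loss_theta`) are assumptions for illustration: any PGD-style attack and any gradient-based optimizer can be plugged in, and the scalar toy model exists only to make the sketch runnable.

```python
import numpy as np

def adversarial_train_step(theta, batch, attack, grad_loss_theta, lr=0.1):
    """One adversarial-training step: (1) craft adversarial examples for
    the batch with the current parameters, (2) take an SGD step on the
    loss evaluated at those adversarial examples."""
    g = np.zeros_like(theta)
    for x, y in batch:
        x_adv = attack(theta, x, y)              # inner maximization
        g = g + grad_loss_theta(theta, x_adv, y)
    return theta - lr * g / len(batch)           # outer minimization

# Toy instance: scalar model h(x) = theta * x with squared loss.
attack = lambda th, x, y: x + 0.1 * np.sign(th * (th * x - y))
grad = lambda th, x, y: 2 * (th * x - y) * x
theta1 = adversarial_train_step(np.array(0.0), [(1.0, 2.0)], attack, grad)
```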
\section{Introduction}
\label{sec:introduction}
The observation by \citet{szegedy14intriguing} that
state-of-the-art deep neural networks exhibiting exceptional
performance in image classification are fragile in the face of small
adversarial perturbations of inputs has received a great deal of attention.
A series of approaches for designing adversarial examples followed~\citep{szegedy14intriguing, goodfellow15, carlini2017towards},
along with methods for defending against them~\citep{papernot2016distillation, madry2018towards}, and then new attacks
that defeat prior defenses, and so on.
Attacks can be roughly classified along three dimensions: 1)
introducing small $\ell_p$-norm-bounded perturbations, with the goal of
these being imperceptible to humans~\citep{madry2018towards}, 2) using non-$\ell_p$-based
constraints that capture perceptibility (often called
\emph{semantic perturbations})~\citep{bhattad2020unrestricted}, and 3) modifying physical objects, such
as stop signs~\citep{Eykholt2018RobustPA}, in a way that does not arouse suspicion.
One of the most common motivations for the study of adversarial
examples is safety and security, such as the potential for attackers to
compromise the safety of autonomous vehicles that rely on computer
vision~\citep{Eykholt2018RobustPA}.
However, while imperceptibility is certainly sufficient for
perturbations to be unsuspicious, it is far from necessary, as
physical attacks demonstrate.
On the other hand, while there are numerous formal definitions that
capture whether noise is perceptible~\citep{moosavi2016deepfool, carlini2017towards}, what makes adversarial examples
suspicious has been largely informal and subjective.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure_new/dual-perturbation-example-v2.pdf}
\caption{
An illustration of dual-perturbation attacks.
The adversarial examples shown use large $\ell_\infty$ perturbations on the background ($\epsilon_B=20/255$) and small $\ell_\infty$ perturbations on the foreground ($\epsilon_F=4/255$).
A parameter $\lambda$ is used to control background salience explicitly.
A larger $\lambda$ results in a less salient background under the same magnitude of perturbation.
}
\label{fig:dual_pgd_example}
\end{figure}
We propose a simple formalization of an important aspect of what makes
adversarial perturbations unsuspicious.
Specifically, we make a distinction between image foreground and
background, allowing significantly more noise in the background than
the foreground.
This idea stems from the notion of cognitive salience~\citep{borji2015salient, kummerer2017understanding, he2018salient}, whereby an
image can be partitioned into the two respective regions to reflect
how much attention a human viewer pays to the different parts of the
captured scene.
In effect, we posit that perturbations in the foreground, when
visible, will arouse significantly more suspicion (by being
cognitively more salient) than perturbations made in the background.
Our first contribution is a formal model of such
\emph{dual-perturbation attacks}, which is a generalization of the
$\ell_p$-norm-bounded attack models (see, e.g.,
Figure~\ref{fig:dual_pgd_example}), but explicitly aims to ensure that
adversarial perturbation does not make the background highly salient.
Second, we propose an algorithm for finding adversarial examples using
this model, which is an adaptation of the PGD
attack~\citep{madry2018towards}.
Third, we present a method for defending against dual-perturbation
attacks based on the adversarial training framework~\citep{madry2018towards}.
Finally, we present an extensive experimental study that demonstrates that
(a) the proposed attacks are significantly stronger than PGD,
successfully defeating all state-of-the-art defenses, (b) proposed
defenses using our attack model significantly outperform
state-of-the-art alternatives, \emph{with relatively small performance
degradation on non-adversarial instances}, and (c) proposed defenses are
comparable to, or better than alternatives \emph{even against
traditional attacks}, such as PGD.
\subsubsection*{Author Contributions}
% arXiv:0902.2833
\section{Introduction}
Recent developments in precision cosmology have brought about a slight
shift in the inflationary paradigm~\cite{Komatsu:2008hk}.
Before the precision cosmology, zeroth order predictions of
inflationary scenarios were sufficient.
Indeed, curvature fluctuations had been supposed to be
statistically homogeneous, isotropic, Gaussian and almost scale
invariant. However, because of progress in observations,
we are now forced to look at the fine structures of the fluctuations,
such as the spectral tilt, non-Gaussianity, parity violation, and
so on~\cite{Baumann:2008aq}.
In fact, we need theoretical predictions at a percent level.
Those precise predictions of inflationary scenarios
will provide a clue to understand fundamental physics
such as superstring theory when they are compared with
observations.
In this paper, we focus on the role of a vector field
in the early universe~\cite{Kanno:2006ty}.
Of course, no one doubts the existence of vector fields.
At the same time, it is widely believed that vector hair
will disappear during inflation, in accordance with the
cosmic no-hair conjecture~\cite{Wald:1983ky}.
However, it has recently been shown that anisotropic hair
can exist in the inflationary universe~\cite{Golovnev:2008cf,Kanno:2008gn},
although there may be a perturbative instability in this specific
realization~\cite{Himmetoglu:2008zp}.
Hence, it is worth seeking other models.
At this point, we should recall that primordial magnetic fields
are produced during inflation~\cite{Turner:1987bw}. For example,
the nonminimal kinetic term of vector fields in supergravity
can be used to generate the primordial cosmological magnetic
fields~\cite{Martin:2007ue}.
This fact suggests that
vector hair may exist during inflation.
There is a common prejudice that this vector hair is negligibly small and that
it is legitimate to ignore the backreaction of magnetic fields on the geometry.
However, in the context of the precision cosmology,
we should not neglect the backreaction
if it is around a percent level~\cite{Pullen:2007tu}.
Hence, it is important to quantify how small it is.
Based on this observation,
we study an inflationary scenario
where the inflaton is coupled with the kinetic term of
a massless vector field. Our model is manifestly free from the instability mentioned above.
Interestingly, we find a tracking behavior of the energy density of the vector
field. As a consequence, we show that sizable vector hair exists
quite generally, yielding percent-level anisotropic inflation.
It should be stressed that the presence of the vector hair in the early
universe breaks the rotational invariance and therefore provides various
interesting phenomenological consequences~\cite{Yokoyama:2008xw}.
Moreover, anisotropic inflation might give rise to
a percent-level correlation between primordial gravitational waves
and the cosmic microwave background (CMB), which might be testable
by CMB observations in the near future~\cite{Kanno:2008gn}.
Therefore, ``hairy inflation" is phenomenologically rich.
\section{Basic equations}
\label{sc:basic}
We consider the following action for the gravitational field, the inflaton
field $\phi$ and the
vector field $A_\mu$ coupled with $\phi$:
\begin{eqnarray}
S&=&\int d^4x\sqrt{-g}\left[~\frac{1}{2\kappa^2}R
-\frac{1}{2}\left(\partial_\mu\phi\right)\left(\partial^{\mu}\phi\right)
-V(\phi)-\frac{1}{4} f^2 (\phi) F_{\mu\nu}F^{\mu\nu}
~\right] \ ,
\label{action1}
\end{eqnarray}
where $g$ is the determinant of the metric, $R$ is the
Ricci scalar, $V(\phi)$ is the inflaton potential, and $f(\phi)$ is the coupling function of the inflaton field to the vector field.
The field strength of the vector field is defined by
$F_{\mu\nu}=\partial_\mu A_\nu -\partial_\nu A_\mu$.
Thanks to the gauge invariance, we can choose the gauge $A_0 =0$.
Without loss of generality,
we can take $x$-axis in the direction of the vector.
Hence, we take the homogeneous fields of the form
$
A_\mu=(~0,~A_x(t),~0,~0~)
$
and
$
\phi=\phi(t) \ .
$
Note that we have assumed the direction of the vector field does
not change in time, for simplicity.
This field configuration preserves the plane symmetry in the plane
perpendicular to the vector.
Then, we take the metric to be
\begin{eqnarray}
ds^2=- dt^2+e^{2\alpha(t)}\left[~
e^{-4\sigma(t)}dx^2
+e^{2\sigma(t)}\left( dy^2 + dz^2\right)~\right] \ ,
\label{metric}
\end{eqnarray}
where the cosmic time $t$ is used.
Here, $e^\alpha$ is an isotropic scale factor and $\sigma$ represents
a deviation from isotropy. With the above ansatz,
one obtains the equation of motion for the vector field, which is
easily solved as
\begin{eqnarray}
\dot{A_x} = f^{-2}(\phi ) e^{-\alpha -4\sigma}p_{A},
\label{eq:Ax}
\end{eqnarray}
where an overdot denotes the derivative with respect to the cosmic time $t$
and $p_A$ denotes a constant of integration.
Substituting (\ref{eq:Ax}) into other equations, we obtain basic equations
\begin{eqnarray}
\dot{\alpha}^2 &=& \dot{\sigma}^2
+\frac{\kappa^2}{3}\left[ \frac{1}{2} \dot{\phi}^2+V(\phi)
+\frac{p_{A}^2}{2}f^{-2} (\phi) e^{-4\alpha-4\sigma } \right] \ ,
\label{hamiltonian}\\
\ddot{\alpha} &=& -3\dot{\alpha}^2 + \kappa ^2 V(\phi )
+\frac{\kappa ^2 p_{A}^2}{6}f^{-2}(\phi )e^{-4\alpha -4\sigma},
\label{evolution:alpha}\\
\ddot{\sigma} &=& -3\dot{\alpha}\dot{\sigma}
+ \frac{\kappa ^2 p_{A}^2}{3}f^{-2}(\phi )e^{-4\alpha -4\sigma}
\label{eq:sigma}, \\
\ddot{\phi} &=& -3\dot{\alpha}\dot{\phi} -V'(\phi )
+ p_{A}^2 f^{-3}(\phi )f'(\phi ) e^{-4\alpha -4\sigma }
\label{eq:phi} \ ,
\end{eqnarray}
where a prime denotes the derivative with respect to $\phi$.
From Eq.(\ref{hamiltonian}), we see the effective potential
$
V_{\rm eff} = V + p_A^2 f^{-2} e^{-4\alpha -4\sigma}/2
$
determines the inflaton dynamics. As the second term
comes from the vector contribution, we refer to it as
the energy density of the vector field. Let's check whether inflation
occurs in this model. Using Eqs.(\ref{hamiltonian}) and
(\ref{evolution:alpha}), the equation for the acceleration of
the universe is given by
\begin{eqnarray}
\ddot{\alpha} + \dot{\alpha}^2
= - 2\dot{\sigma}^2 -\frac{\kappa^2}{3} \dot{\phi}^2
+ \frac{\kappa^2}{3} \left[ V - \frac{p_A^2}{2} f^{-2}
e^{-4\alpha -4\sigma } \right] \ .
\end{eqnarray}
We see that the potential energy of the inflaton needs to
be dominant for the inflation to occur.
Now, we assume that the energy density of the vector field is
negligible compared to that of the inflaton as far as the inflaton dynamics is concerned.
Then, we examine when the anisotropy
is not diluted during inflation. From Eq.(\ref{eq:sigma}),
it is apparent that the fate of anisotropic expansion rate
$\Sigma \equiv \dot{\sigma}$ depends on the behavior of
the coupling function $f(\phi)$. In the critical
case $f(\phi ) \propto e^{-2\alpha}$, the energy
density of the vector field as a source term in
Eq.(\ref{eq:sigma}) remains almost constant during the slow-roll
inflation. Using slow-roll equations
\begin{eqnarray}
\dot{\alpha}^2 = \frac{\kappa ^2}{3}V(\phi), \quad
3\dot{\alpha}\dot{\phi} = -V'(\phi ) \ ,
\label{slow1}
\end{eqnarray}
we obtain
$
d\alpha / d\phi = \dot{\alpha} /\dot{\phi}
= - \kappa ^2 V(\phi) / V'(\phi ) \ .
$
This can be easily integrated as
$
\alpha = -\kappa^2 \int V/V' d\phi \ .
$
Here, we have absorbed a constant of integration into the definition of
$\alpha$. Thus, we obtain
\begin{equation}
f = e^{-2\alpha} = e^{2\kappa^2 \int \frac{V}{V'} d\phi } \ .
\label{critical}
\end{equation}
For the polynomial potential $V\propto \phi^n $, we have
$
f = e^{ \kappa ^2 \phi ^2/n} \ .
$
Given the critical case (\ref{critical}),
we can parameterize the coupling function as~\cite{Martin:2007ue}:
\begin{equation}
f = e^{2 c \kappa^2 \int \frac{V}{V'} d\phi } \label{key} \ ,
\end{equation}
where $c$ is a parameter.
Naively, the energy density of the vector field grows during inflation
when $c > 1$, which is the case we want to consider.
It would then not be possible to neglect the vector field,
and Eq.(\ref{slow1}) would no longer be
appropriate for discussing the inflationary dynamics.
Let us see what happens
if the vector field is not negligible.
\section{Tracking Anisotropic Inflation}
\label{sc:coe}
To make the analysis concrete, we consider chaotic inflation with
the potential $V(\phi ) = m^2\phi^2 /2$ ($n=2$).
For this potential, the coupling function becomes $f(\phi)=e^{c \kappa^2\phi^2 /2}$.
It is instructive to see what happens by solving
Eqs.(\ref{hamiltonian})-(\ref{eq:phi}) numerically.
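A minimal numerical sketch of such an integration can be written with a standard ODE solver. The setup here is our own illustration, not the code behind the figures: we work in units $\kappa=m=1$ (the value of $\kappa m$ only rescales time), take $c=2$ and $\kappa\phi_i=12$, choose $p_A$ so that the initial energy-density ratio is ${\cal R}_i=10^{-10}$, and obtain $\dot\alpha$ from the Hamiltonian constraint at every step.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not the paper's exact setup).
kappa = m = 1.0
c, phi_i, R_i = 2.0, 12.0, 1e-10
# p_A^2 fixed so that rho_A / rho_phi = R_i at t = 0 with alpha = sigma = 0.
pA2 = R_i * m**2 * phi_i**2 * np.exp(c * kappa**2 * phi_i**2)

def rho_A(alpha, sigma, phi):           # vector-field energy density
    return 0.5 * pA2 * np.exp(-c * kappa**2 * phi**2 - 4*alpha - 4*sigma)

def rhs(t, y):
    alpha, sigma, phi, dsigma, dphi = y
    rA = rho_A(alpha, sigma, phi)
    # alpha-dot from the Hamiltonian constraint (expanding branch)
    dalpha = np.sqrt(dsigma**2
                     + (kappa**2/3)*(0.5*dphi**2 + 0.5*m**2*phi**2 + rA))
    ddsigma = -3*dalpha*dsigma + (2*kappa**2/3)*rA
    ddphi = -3*dalpha*dphi - m**2*phi + 2*c*kappa**2*phi*rA
    return [dalpha, dsigma, dphi, ddsigma, ddphi]

sol = solve_ivp(rhs, (0.0, 4.0), [0.0, 0.0, phi_i, 0.0, 0.0],
                rtol=1e-8, atol=1e-12)
alpha, sigma, phi, dsigma, dphi = sol.y[:, -1]
dalpha = rhs(0.0, sol.y[:, -1])[0]
R_end = rho_A(alpha, sigma, phi) / (0.5 * m**2 * phi**2)
Sigma_over_H = dsigma / dalpha          # anisotropy, expected ~ (2/3) R
```

Running this, the ratio ${\cal R}$ grows during the first slow-roll phase and then saturates, with $\Sigma/H$ settling onto roughly $(2/3){\cal R}$, in line with the analysis that follows.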
\begin{figure}[ht]
\includegraphics[height=6cm, width=7.5cm]{phase.eps}
\caption{Phase flow for $\phi$ is depicted.
Here, we took the parameters $c=2$ and
$\kappa m=10^{-5} $. We also put initial conditions
$\phi_i=12$ and $\dot{\phi}_i=0$.
There are two different slow-roll phases.
The transition occurs around $\kappa\phi= 9$.}
\label{fg:phase}
\end{figure}
In Fig.~\ref{fg:phase}, we have shown the phase flow
in $\phi$--$\dot{\phi}$ space,
where we can see
two slow-roll phases; this indicates that something different from
conventional inflation occurs.
In Fig.\ref{fg:ce-ratio},
we have calculated the evolution of the anisotropy
$\Sigma/H \equiv \dot{\sigma}/\dot{\alpha}$ for various parameters $c$
under the initial conditions $\sqrt{c}\kappa\phi_i=17$.
\begin{figure}[h]
\includegraphics[height=6cm, width=7.5cm]{ce-ratio.eps}
\caption{
Evolutions of the anisotropy $\Sigma/H$ for various $c$ are shown.
One can see the attractor-like behavior of the anisotropy. }
\label{fg:ce-ratio}
\end{figure}
As expected, all of the solutions show a rapid growth of the anisotropy
in the first slow-roll phase.
However, the growth of the anisotropy eventually stops at the order of a percent.
Notice that this attractor-like behavior is not very sensitive to the parameter $c$.
Now, we will give an analytic explanation of the numerical results
and find a quite remarkable relation between the anisotropy and
a slow-roll parameter of inflation.
As the energy density of the vector
field should be subdominant during inflation,
we can ignore $\sigma$ in Eqs.(\ref{hamiltonian}), (\ref{evolution:alpha}),
and (\ref{eq:phi}).
However, in Eq.(\ref{eq:sigma}),
all terms would be of the same order.
Now, Eqs.(\ref{hamiltonian}) and (\ref{eq:phi})
can be written as
\begin{eqnarray}
\dot{\alpha}^2 &=&
\frac{\kappa^2}{3}\left[ \frac{1}{2} \dot{\phi}^2
+\frac{1}{2}m^2\phi^2+\frac{1}{2}e^{-c\kappa^2\phi^2-4\alpha } p_{A}^2
\right] \ , \label{h3} \\
\ddot{\phi} &=& -3\dot{\alpha}\dot{\phi} -m^2\phi
+ c \kappa^2\phi e^{-c\kappa^2\phi^2-4\alpha }p_{A}^2\label{eq:phi3} \ .
\end{eqnarray}
Let's see how the energy density of the vector field enters these equations.
When the effect of the vector field is comparable with that of the inflaton
field as source terms in (\ref{eq:phi3}), we get the relation
$c\kappa^2 p_A^2 e^{-c\kappa^2 \phi^2 -4\alpha } \sim m^2 $.
If we define the ratio of the energy density of the vector field
$\rho_A\equiv p_A^2 e^{-c\kappa^2 \phi^2 -4\alpha } /2$ to that of
the inflaton $\rho_\phi\equiv m^2 \phi^2/2$ as
\begin{equation}
{\cal R} \equiv \frac{\rho_A}{\rho_\phi}
= \frac{p_{A}^2 e^{-c\kappa^2\phi^2-4\alpha}}{m^2\phi^2} \ ,
\label{R}
\end{equation}
we find the ratio becomes
${\cal R} \sim 1 /c\kappa^2 \phi^2$
when the above relation holds.
Since the e-folding number is crudely given by $N\sim \kappa^2 \phi^2$
and the scale observed through CMB corresponds to $N \sim {\cal O} (100)$,
we have typically $\kappa \phi \sim {\cal O} (10)$. Hence, the ratio becomes
${\cal R} \sim 10^{-2}$. Thus we find that the effect of the
vector field in (\ref{h3}) is negligible even when it is comparable
with that of the scalar field in (\ref{eq:phi3}).
It turns out that the above situation is not a transient one but
an attractor.
Suppose that $\rho_A$ is initially negligible,
${\cal R}_i \ll 10^{-2} $. In the first slow-roll
inflationary phase (\ref{slow1}),
the relation
$e^{-\kappa^2\phi^2} \propto e^{4\alpha} $
holds as was shown in (\ref{critical}).
Hence, the ratio ${\cal R}$ varies as
$
{\cal R} \propto e^{4(c-1)\alpha}.
$
As we now consider $c>1$, $\rho_A$ increases rapidly during inflation
and eventually reaches ${\cal R} \sim 10^{-2}$.
Conversely, when ${\cal R}$ exceeds $ 10^{-2} $,
the inflaton climbs up the potential due to the effect
of the vector field in (\ref{eq:phi3}); hence $\rho_A$ will decrease rapidly
and ${\cal R}$ will go back to the value ${\cal R} \sim 10^{-2}$.
Thus irrespective of initial conditions, $\rho_A$ will track $\rho_{\phi}$.
The above arguments tell us that the inflaton dynamics after tracking is
governed by the modified slow-roll equations
\begin{eqnarray}
\dot{\alpha}^2 &=& \frac{\kappa^2}{6} m^2 \phi^2 \ ,
\label{h4}\\
3\dot{\alpha} \dot{\phi} &=&
-m^2\phi+c\kappa^2 \phi p_{A}^2 e^{-c\kappa^2\phi^2-4\alpha } \ .
\label{eq:balance}
\end{eqnarray}
We refer to the phase governed by the above equations as the
second inflationary phase, compared to the first one
governed by the equations (\ref{slow1}).
Using the above equations, we can deduce
\begin{equation}
\phi \frac{d\phi}{d\alpha} = -\frac{2}{\kappa^2} + \frac{2cp_A^2}{m^2}
e^{-c\kappa^2 \phi^2 -4 \alpha} \ . \label{phi:alpha}
\end{equation}
This can be integrated as
$
e^{-c\kappa^2 \phi^2 -4\alpha}
= m^2 (c-1)/ c^2 \kappa^2 p_A^2 \left[1+D e^{-4(c-1)\alpha} \right]^{-1} ,
$
where $D$ is a constant of integration. This solution rapidly converges
to
\begin{eqnarray}
e^{-c\kappa^2 \phi^2 -4\alpha}
= \frac{m^2 (c-1)}{c^2 \kappa^2 p_A^2} \ .
\label{attractor}
\end{eqnarray}
Thus, we found $\rho_A$ becomes constant during the second inflationary
phase.
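The integration from Eq.(\ref{phi:alpha}) to Eq.(\ref{attractor}) is stated without intermediate steps; our reconstruction of those steps, with the change of variable $u \equiv e^{-c\kappa^2 \phi^2 -4\alpha}$, is as follows.

```latex
% Differentiating u = e^{-c kappa^2 phi^2 - 4 alpha} and inserting
% Eq.(\ref{phi:alpha}) for phi dphi/dalpha yields a logistic equation:
\begin{eqnarray}
\frac{du}{d\alpha}
= u \left( -2c\kappa^2 \phi \frac{d\phi}{d\alpha} - 4 \right)
= u \left[ 4(c-1) - \frac{4 c^2 \kappa^2 p_A^2}{m^2} u \right] \ ,
\nonumber
\end{eqnarray}
% whose general solution is
\begin{eqnarray}
u = \frac{m^2 (c-1)}{c^2 \kappa^2 p_A^2}
\left[ 1 + D e^{-4(c-1)\alpha} \right]^{-1} \ ,
\nonumber
\end{eqnarray}
% which, for c > 1, converges to the fixed point quoted in
% Eq.(\ref{attractor}).
```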
Substituting the result (\ref{attractor}) into
the modified slow-roll equation (\ref{eq:balance}),
we obtain the equation for the second inflationary phase
\begin{eqnarray}
3\dot{\alpha} \dot{\phi} = - \frac{m^2}{c} \phi
\label{effective} \ .
\end{eqnarray}
This indicates that $\dot{\phi}$ in the second phase of inflation
is about $1/c$ times that in the first phase of inflation.
In Fig. \ref{fg:phase},
we can see the value of $\dot{\phi}$ after the phase transition is about
a half of that in the first phase, which agrees with the
analytical estimate for $c=2$.
Now let us consider the anisotropy.
In the second slow-roll phase, Eq.(\ref{eq:sigma}) reads
\begin{eqnarray}
3\dot{\alpha}\dot{\sigma}
= \frac{\kappa ^2 p_{A}^2}{3}e^{-c\kappa^2\phi^2-4\alpha }
\label{eq:sigma3} \ ,
\end{eqnarray}
where we have assumed
$\sigma\ll c\kappa^2\phi^2$, $\ddot{\sigma}\ll\dot{\alpha}\dot{\sigma}$.
Using Eqs.(\ref{h4}) and (\ref{eq:sigma3}), the anisotropy turns out
to be determined by the ratio (\ref{R}) as
\begin{equation}
\frac{\Sigma}{H}
= \frac{\kappa^2 p_{A}^2 e^{-c\kappa^2\phi^2-4\alpha }}{9\dot{\alpha}^2}
= \frac{2}{3}{\cal R}(t) \ .
\label{S/H}
\end{equation}
From Eq.(\ref{attractor}), we can calculate the ratio
\begin{equation}
{\cal R}(t) = \frac{c-1}{c^2\kappa^2\phi^2}
\ . \label{ratio}
\end{equation}
Using this relation, we can relate degrees of anisotropy
to the slow-roll parameter as follows.
Combining Eqs.(\ref{hamiltonian}) with (\ref{evolution:alpha}),
we obtain
\begin{equation}
\ddot{\alpha}
=-\frac{\kappa^2}{2}\dot{\phi}^2
-\frac{\kappa^2}{3}e^{-c\kappa^2\phi^2-4\alpha }p_{A}^2
\label{eq:ddalpha} \ ,
\end{equation}
where we have used $\dot{\sigma}^2\ll\kappa^2\dot{\phi}^2$
derived from Eqs.(\ref{h4}), (\ref{effective}), (\ref{S/H}) and (\ref{ratio}).
Thus, the slow-roll parameter is given by
\begin{eqnarray}
\epsilon \equiv -\frac{\ddot{\alpha}}{\dot{\alpha}^2}
= \frac{2}{c \kappa^2\phi^2} \ , \label{slow}
\end{eqnarray}
where we used the results (\ref{h4}), (\ref{attractor}) and (\ref{effective}).
Thus, combining Eqs.(\ref{S/H}),(\ref{ratio}), and (\ref{slow}),
we reach a main result
\begin{equation}
\frac{\Sigma}{H} = \frac{1}{3}\frac{c-1}{c} \epsilon \ .
\end{equation}
This remarkable relation shows quite good agreement
with the numerical results in Fig.~\ref{fg:ce-ratio}.
\section{Generality}
\label{sc:generality}
Although the discussion so far has been restricted to a specific form
of the potential $V$, we now argue that our finding is a general
feature of inflationary scenarios in the presence of the
vector field.
Let us consider the general potential $V(\phi)$ for the inflaton.
Then, the coupling function should be of the form (\ref{key}).
Hence, in the slow-roll phase, the equation for the inflaton (\ref{eq:phi})
becomes
\begin{eqnarray}
3\dot{\alpha}\dot{\phi}
= - V' + 2c\kappa^2 \frac{V}{V'} f^{-2}p_{A}^2 e^{-4\alpha -4\sigma} \ .
\end{eqnarray}
When $c>1$, the energy density of the vector field will soon catch up with
that of the inflaton. At the tracking point,
$\rho_A$ and $\rho_\phi$ tend to be
$
\rho_A \simeq \left( V'/V \right)^2 \rho_\phi /4c\kappa^2 \ .
$
Note that the slow-roll parameter now becomes:
\begin{equation}
\epsilon \equiv -\frac{\ddot{\alpha}}{\dot{\alpha}^2}
\simeq \frac{1}{2c\kappa^2} \left( \frac{V'}{V}\right)^2 \ .
\end{equation}
Then, again, we can conclude that the anisotropy becomes of the order of the slow-roll parameter:
\begin{equation}
\frac{\Sigma}{H} \simeq \frac{1}{6c\kappa^2}\left( \frac{V'}{V}\right)^2
\simeq \frac{1}{3} \epsilon \ .
\end{equation}
Thus, we have shown that the anisotropy is universally determined by
the slow-roll parameter.
This is reminiscent of non-Gaussianity in single-inflaton
models~\cite{Maldacena:2002vr}.
\section{Conclusion}
We have proposed an inflationary scenario with anisotropy.
Remarkably, we have found that the degree of anisotropy is universally
determined by the slow-roll parameter of inflation.
Since the slow-roll parameter is observationally known to be
of the order of a percent, the anisotropy during inflation
cannot be entirely negligible.
Indeed, we can expect rich phenomenology as consequences of the
anisotropy during inflation. First of all,
since the rotational invariance is violated, a statistical
anisotropy of the CMB temperature fluctuations can be expected~\cite{Ackerman:2007nb}.
More interestingly,
tensor perturbations could be induced from curvature perturbations
through the anisotropy of the background spacetime. One immediate
consequence is a correlation between
curvature and tensor perturbations~\cite{Kanno:2008gn}.
This correlation could be detected through the analysis of
the temperature--B-mode correlation in the CMB.
Moreover,
because of the anisotropy, there might be linear polarization
in primordial gravitational waves.
This polarization can be detected either through CMB observations
or direct interferometer observations.
These predictions can be checked by future observations.
Theoretically, we need more systematic checks, such as of
quantum loop effects~\cite{Seery:2008ms}.
Finally, let us point out another view of our result.
Our finding of hairy inflation can be regarded as a counterexample
to the cosmic no-hair conjecture. This hair stems from the fact
that inflation is not exactly de Sitter expansion. In fact, the degree of
anisotropy is determined by the slow-roll parameter.
In a sense, this is the origin of the universality of
the percent-level vector hair.
\begin{acknowledgements}
JS is supported by the Japan-U.K. Research Cooperative Program,
Grant-in-Aid for Scientific Research Fund of the Ministry of
Education, Science and Culture of Japan No.18540262 and No.17340075.
\end{acknowledgements} |
% arXiv:0902.1666
\section*{R\'esum\'e}
\else \small
\begin{center}
{\bf R\'esum\'e\vspace{-.5em}\vspace{0pt}}
\end{center}
\quotation \fi}
\def\today{
\number\day\space
\ifcase\month\or
janvier\or {f\'evrier}\or mars\or avril\or mai\or juin\or
juillet\or {ao\^ut}\or septembre\or octobre\or novembre
\or {d\'ecembre}\fi
\space\number\year}
\def\tableofcontents{
\section*{Table des mati\`eres\markboth{{Table des mati\`eres}}{{Table des mati\`eres}}}
\@starttoc{toc}}
\def\listoffigures{\section*{Liste des figures\markboth{Liste des figures}{Liste des figures}}
\@starttoc{lof}}
\def\listoftables{\section*{Liste des tableaux\markboth{Liste des tableaux}{Liste des tableaux}}
\@starttoc{lot}}
\def\thebibliography#1{\section*{Bibliographie\markboth{Bibliographie}{Bibliographie}}\list
{[\arabic{enumi}]}{\settowidth\labelwidth{[#1]}\leftmargin\labelwidth
\advance\leftmargin\labelsep
\usecounter{enumi}}
\def\newblock{\hskip .11em plus .33em minus -.07em}
\sloppy
\sfcode`\.=1000\relax}
\def\fnum@table{Tableau \thetable}
\def\@part[#1]#2{\ifnum \c@secnumdepth >\m@ne \refstepcounter{part}
\addcontentsline{toc}{part}{\thepart \hspace{1em}#1}\else
\addcontentsline{toc}{part}{#1}\fi { \parindent 0pt \raggedright
\ifnum \c@secnumdepth >\m@ne \Large \bf Partie \thepart \par \nobreak \fi \huge
\bf #2\markboth{}{}\par } \nobreak \vskip 3ex \@afterheading }
\newtheorem{theo}{Th\'eor\`eme}[section]
\newtheorem{conv}[theo]{Convention}
\newtheorem{ex}[theo]{Exemple}
\newtheorem{prop}[theo]{Proposition}
\newtheorem{lem}[theo]{Lemme}
\newtheorem{rema}[theo]{Remarque}
\newtheorem{remas}[theo]{Remarques}
\newtheorem{cor}[theo]{Corollaire}
\newtheorem{conj}[theo]{Conjecture}
\newtheorem{propbis}{Proposition}
\newtheorem{lembis}{Lemme}
\newtheorem{corbis}{Corollaire}
\newtheorem{theobis}{Th\'eor\`eme}
\renewcommand{\theprop} {\arabic{section}.\arabic{prop}}
\renewcommand{\theconv} {\arabic{section}.\arabic{conv}}
\renewcommand{\theex} {\arabic{section}.\arabic{ex}}
\renewcommand{\theconj} {\arabic{section}.\arabic{conj}}
\renewcommand{\thetheo} {\arabic{section}.\arabic{theo}}
\renewcommand{\thecor} {\arabic{section}.\arabic{cor}}
\renewcommand{\thelem} {\arabic{section}.\arabic{lem}}
\renewcommand{\therema} {\arabic{section}.\arabic{rema}}
\renewcommand{\theremas} {\arabic{section}.\arabic{remas}}
\renewcommand{\thesection}{\arabic{section}}
\renewcommand{\thesubsection}{\arabic{section}.\arabic{subsection}}
\renewcommand{\thesubsubsection}{\arabic{section}.\arabic{subsection}.\arabic{subsubsection}}
\renewcommand{\thepropbis} {}
\renewcommand{\thetheobis} {}
\renewcommand{\thecorbis} {}
\renewcommand{\thelembis} {}
\def {\pi_1^{t}} {{\pi_1^{t}}}
\def {\bar k} {{\bar k}}
\def {\bar x} {{\bar x}}
\def {\overline{\bf Q}} {{\overline{\bf Q}}}
\def {\overline X} {{\overline X}}
\def {\overline U} {{\overline U}}
\def \paragraph{\em D\'emonstration.} {\paragraph{\em D\'emonstration.}}
\def \paragraph{Remarque. } {\paragraph{Remarque. }}
\def \paragraph{Remarques . } {\paragraph{Remarques . }}
\def \Romannumeral #1 {\expandafter\uppercase\expandafter {\romannumeral #1} }
\def {\rm{Frac\,}} {{\rm{Frac\,}}}
\def {\cal O} {{\cal O}}
\def {\rm{Spec\,}} {{\rm{Spec\,}}}
\def {\rm{dim\,}} {{\rm{dim\,}}}
\def {\rm {Hom}} {{\rm {Hom}}}
\def {\rm {End}} {{\rm {End}}}
\def {\rm {Aut}} {{\rm {Aut}}}
\def {\mathcal X} {{\mathcal X}}
\def {\bf Z} {{\bf Z}}
\def {\bf Q} {{\bf Q}}
\def {\bf F} {{\bf F}}
\def {\bf U} {{\bf U}}
\def {\bf P} {{\bf P}}
\def {\bf N} {{\bf N}}
\def {\bf R} {{\bf R}}
\def {\bf C} {{\bf C}}
\def {\rm{Diag}} {{\rm{Diag}}}
\def {\rm{tr\,}} {{\rm{tr\,}}}
\def {\rm{exp\,}} {{\rm{exp\,}}}
\def {\rm{coker\,}} {{\rm{coker\,}}}
\def {\rm {Im\,}} {{\rm {Im\,}}}
\def {\bf G}_m {{\bf G}_m}
\def {\rm Gal}\, {{\rm Gal}\,}
\def {\rm Br}\, {{\rm Br}\,}
\def {\rm Pic}\, {{\rm Pic}\,}
\def H_{\mbox{\scriptsize\'et}} {H_{\mbox{\scriptsize\'et}}}
\def \mathop{\lim} {\mathop{\lim}}
\def \mathop{\to} {\mathop{\to}}
\def \mathop{\otimes} {\mathop{\otimes}}
\def\smallsquare{\vbox{\hrule\hbox{\vrule height 1 ex\kern 1 ex\vrule}\hrule}}
\def\hfill \smallsquare\vskip 3mm{\hfill \vbox{\hrule\hbox{\vrule height 1 ex\kern 1 ex\vrule}\hrule}\vskip 3mm}
\usepackage{amscd}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{pb-diagram}
\everymath{\displaystyle}
\DeclareFontFamily{U}{wncy}{}
\DeclareFontShape{U}{wncy}{m}{n}{%
<5>wncyr5%
<6>wncyr6%
<7>wncyr7%
<8>wncyr8%
<9>wncyr9%
<10>wncyr10%
<11>wncyr10%
<12>wncyr6%
<14>wncyr7%
<17>wncyr8%
<20>wncyr10%
<25>wncyr10}{}
\DeclareMathAlphabet{\cyrille}{U}{wncy}{m}{n}
\title{Around the Tate conjecture \mbox{with ${\bf Z}_\ell$ coefficients} for varieties over finite fields}
\author{Jean-Louis Colliot-Th\'el\`ene and Tam\'as Szamuely}
\address{C.N.R.S., U.M.R. 8628, Universit\'e de Paris-Sud, Math\'ematique, B\^atiment 425, 91405 Orsay, France}
\email{jlct@math.u-psud.fr}\date{\today}
\address{Alfr\'ed R\'enyi Institute of Mathematics, Hungarian Academy of Sciences, Re\'altanoda utca 13--15, H-1053 Budapest, Hungary}\begin{document}
\email{szamuely@renyi.hu}\maketitle \markboth{Jean-Louis Colliot-Th\'el\`ene and Tam\'as Szamuely}{Around the Tate conjecture with ${\bf Z}_\ell$ coefficients}
\section{Introduction}
Let $k$ be a finite field, ${\bar k}$ an algebraic closure of $k$,
$G$ the Galois group ${\rm Gal}\,({\bar k}|k)$, and $\ell$ a prime number invertible in $k$.
Consider a projective, smooth, geometrically integral $k$-variety $X$ of
dimension $d$. According to the Tate conjecture, the cycle map with values in
$\ell$-adic \'etale cohomology induces a {\em surjection}
\begin{equation}\label{tate1}
CH^i(X)\otimes_{\bf Z}{\bf Q}_\ell\twoheadrightarrow H^{2i}({\overline X}, {\bf Q}_\ell(i))^G.
\end{equation}
An equivalent form of the conjecture is the surjectivity of the map
\begin{equation}\label{tate2}
CH^i({\overline X})\otimes_{\bf Z}{\bf Q}_\ell\to\bigcup_U H^{2i}({\overline X}, {\bf Q}_\ell(i))^U
\end{equation}
where ${\overline X}:=X\times_k{\bar k}$ and $U$ runs over the system of open subgroups of $G$.
The stronger form above follows from it by a restriction-corestriction argument.
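For the reader's convenience we spell out the restriction-corestriction argument; the following display is a standard sketch added here, in which the morphism $\pi$ and the cycle $z$ are our notation, not the authors':

```latex
% Sketch: suppose \alpha \in H^{2i}(\overline X, {\bf Q}_\ell(i))^G is the
% class of a cycle z defined over the degree-n extension k' fixed by an open
% subgroup U \subset G.  With \pi : X_{k'} \to X the natural finite morphism,
$$
n\cdot\alpha \;=\; \mathrm{cores}\bigl(\mathrm{res}(\alpha)\bigr)
\;=\; \mathrm{cores}\bigl(\mathrm{cl}(z)\bigr)
\;=\; \mathrm{cl}(\pi_{*} z),
$$
% and since the target is a {\bf Q}_\ell-vector space one may divide by n,
% so \alpha lies in the image of the map in (1).
```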
One may also consider integral forms of these statements, and ask whether
the maps
\begin{equation}\label{tateZ1}
CH^i(X)\otimes_{\bf Z}{\bf Z}_\ell\to H^{2i}({\overline X}, {\bf Z}_\ell(i))^G
\end{equation}
or
\begin{equation}\label{tateZ2}
CH^i({\overline X})\otimes_{\bf Z}{\bf Z}_\ell\to\bigcup_U H^{2i}({\overline X}, {\bf Z}_\ell(i))^U
\end{equation}
induced by the cycle map are surjective. Here the second surjectivity
statement is a priori weaker.
As we shall recall in Section \ref{conjfausse}, the integral forms of the
above conjecture are not expected to hold.
Nevertheless, it is reasonable to hope for
the surjectivity of (\ref{tateZ1}) and (\ref{tateZ2}) for $i=d-1$, i.e. for
1-cycles.
In that case, the surjectivity of (\ref{tateZ2}) was proved
conditionally by Chad Schoen:
\begin{theo}\label{theoschoen} {\rm (Schoen \cite{schoen})}
Let $k$, $G$ and $X$ be as above. Assume the Tate conjecture holds
for divisors on smooth projective surfaces over a finite field. Then the map
$$
CH_1({\overline X})\otimes{\bf Z}_\ell\to \bigcup_UH^{2d-2}({\overline X}, {\bf Z}_\ell(d-1))^U
$$
is surjective, where $U$ runs over the system of open subgroups of $G$.
\end{theo}
Note that the Tate conjecture for divisors on a surface over a finite
field may be viewed as an analogue of the conjectural finiteness of the
Tate--Shafarevich group of the Jacobian of a curve over a number field.
We explain Schoen's proof (with some modifications) in Sections
\ref{seclef}, \ref{secalglin} and \ref{schoenfin}.
In Section \ref{brauermanin} we show that Schoen's theorem has consequences
for the existence of zero-cycles on certain varieties defined over
a function field of one variable over
the algebraic closure
of a finite field. Here is a concrete special case:
\begin{cor}\label{corschoen} Let ${\bar k}$ and $X$ be as above. Suppose there exists a
proper surjective \mbox{${\bar k}$-morphism} $f : {\overline X} \to \overline C$, with $\overline C$
a smooth proper ${\bar k}$-curve. Suppose moreover that the generic fibre of $f$ is
a smooth complete intersection of dimension $\geq 3$ and of degree prime to ${\rm
char}(k)$ in some projective space, and that every fibre of $f$ has a
component of multiplicity 1. If the Tate conjecture for divisors on
smooth projective surfaces over a finite field holds, then the gcd of the degrees of the
multisections of ${\overline X} \to\overline C$ equals 1. \end{cor}
\section{Generalities on the Tate conjecture with integral coefficients}
\label{conjfausse}
One often hears: the Hodge conjecture with integral coefficients is false,
so it is unreasonable to state the Tate conjecture with integral coefficients.
What is the actual situation?
Each of these conjectures concerns the image of a cycle map emanating from the Chow
group $CH^r(X)$ of codimension $r$ cycles on a smooth projective variety
$X$ of dimension $d$, with values in a cohomology group: it is
$H^{2r}(X,{\bf Z})$ for Hodge, and $ H^{2r}_{\hbox{\scriptsize \'et}}(X \times_{k}{\overline
k},{\bf Z}_{\ell}(r))$ for Tate (in this section we distinguish
\'etale cohomology groups from singular cohomology groups by subscripts,
to avoid confusion). For the state of the art on the Hodge conjecture, see
Voisin's survey \cite{voisin}.
If one believes these conjectures with rational coefficients, the integral
variant can fail in two ways:
\smallskip
\noindent $(a)$ there is a
torsion cohomology class which is not the class of a cycle; \smallskip
\noindent $(b)$
there is a cohomology class of infinite order which is not in the image of
the cycle map, but which yields a torsion element in its cokernel. \smallskip
For the integral Hodge conjecture, there are counterexamples of type $(a)$ due to
Atiyah and Hirzebruch \cite{ah}, revisited more recently by Totaro
(\cite{totaroJAMS}, \cite{totaro}), for the groups $H^{2r}(X,{\bf Z})$ with $r\geq 2$.
Their example of minimal dimension is a variety of dimension $7$, with a
torsion class
in $H^4(X,{\bf Z})$.
In the literature (for instance in Milne \cite{milne}, Aside 1.4) it is asserted
that these examples can be adapted to give counterexamples to the integral
Tate conjecture in the form (\ref{tateZ2}), but to our knowledge no
proof has been written down. Here is a sketch of proof which highlights
the modifications needed compared with the analytic case discussed in
\cite{ah}. The point is to prove the following theorem:
\begin{theo}\label{AH} ${}$
\begin{enumerate}
\item Let $V$ be a smooth projective variety over an algebraically closed field. For
every prime $\ell\geq i$ invertible on $V$, the Steenrod operations of odd degree
vanish on the class of any algebraic cycle in the group $H^{2i}_{\hbox{\scriptsize \'et}}(V,
{\bf Z}/\ell{\bf Z}(i))$.
\item Over any algebraically closed field,
for every prime $\ell$ different from the characteristic, there exist a smooth complete
intersection $Y\subset {\bf P}^N$, a finite group $G$ acting freely on $Y$, a
class $c$ of $\ell$-torsion in $H^{4}_{\hbox{\scriptsize \'et}}(Y/G, {\bf Z}_\ell(2))$ and a Steenrod
operation of odd degree which does not kill
the image of $c$
in $H^{4}_{\hbox{\scriptsize \'et}}(Y/G, {\bf Z}/\ell{\bf Z}(2))$.
\end{enumerate}
\end{theo}
Here, for a prime $\ell>2$,
``Steenrod operation of odd degree''
means a
composite of Steenrod operations ${\mathcal P}^j$ and a Bockstein operation.
The operations ${\mathcal P}^j:\, H^{i}_{\hbox{\scriptsize \'et}}(V, {\bf Z}/\ell{\bf Z})\to H^{i+2j(\ell-1)}_{\hbox{\scriptsize \'et}}(V,
{\bf Z}/\ell{\bf Z})$ in \'etale cohomology were defined by Mme Raynaud in \cite{ray}.
For $\ell=2$ one uses operations ${Sq}^j:\, H^{i}_{\hbox{\scriptsize \'et}}(V, {\bf Z}/2{\bf Z})\to
H^{i+j}_{\hbox{\scriptsize \'et}}(V, {\bf Z}/2{\bf Z})$, also defined in \cite{ray}.
If the base field is an algebraic closure $\overline F$ of a subfield $F$,
every torsion class in $H^{2i}_{\hbox{\scriptsize \'et}}(V, {\bf Z}_\ell(i))$ is invariant under an
open subgroup of ${\rm Gal}\,(\overline F|F)$, so for $F$ finite the theorem
provides a counterexample of type $(a)$ to the surjectivity of the maps
(\ref{tateZ1}) and (\ref{tateZ2}).
\smallskip
Let us sketch a proof of the theorem, generously
communicated to us by Burt Totaro. To prove (1), the first observation is that by
Jouanolou's Riemann--Roch theorem without denominators (\cite{fulton},
Example 15.3.6), for $\ell$ prime to $(i-1)!$ every cycle class in
$H^{2i}_{\hbox{\scriptsize \'et}}(V, {\bf Z}/\ell{\bf Z}(i))$ is a linear combination of Chern classes $c_i(E)$ of
vector bundles $E$ on $V$. It therefore suffices to prove the vanishing statement
for the $c_i(E)$. A computation with Steenrod operations shows that the vanishing holds for
$E$ if and only if it holds for $E\otimes L$ with $L$ very ample of rank one.
Thus one may assume that $E$ is generated by its global sections, and a fortiori
that it is the pullback of the tautological bundle of a Grassmannian. The statement
then follows from the contravariant functoriality of the ${\mathcal P}^j$
and the vanishing of
the cohomology of a Grassmannian in odd degrees (\cite{sga5}, expos\'e VII,
proposition 5.2).
A key point of the Atiyah--Hirzebruch argument \cite{ah} was the identification of
the low-degree cohomology of a Godeaux--Serre variety $Y/G$ as in (2)
with that of the product of classifying spaces $BG\times B{\bf G}_m$. Their method
can be algebraized using the algebraic approximation of $BG$ introduced by Totaro. Indeed,
by (\cite{totaro}, Remark 1.4), for every $s\geq 0$ there exists a
$k$-linear representation $W$ of $G$ such that the action of $G$ is free outside
a closed subset $S$ of codimension $s$ in $W$. The cohomology of $BG:=(W\setminus S)/G$
agrees with that of $G$ up to degree $s\,{}$; in particular, it depends
neither on the choice of $W$ nor on that of $S$.
The quotient ${\bf P}(W)//G:=({\bf P}(W)\times (W\setminus S))/G$ is a projective bundle
over $BG$, so its cohomology ring is a polynomial ring over that
of $BG$. In particular, the cohomology of $BG$ is a direct factor of that of
${\bf P}(W)//G$.
If now $Y\subset {\bf P}(W)$ is a smooth complete intersection on which
\mbox{the action} of $G$ is free, the cohomology of $Y/G$ is isomorphic to that of
${\bf P}(W)//G$ in low degrees. Indeed, the cohomology of $Y$ is isomorphic to that of
${\bf P}(W)$ up to degree
${\rm{dim\,}}(Y)-1$ by the weak Lefschetz theorem. One deduces an isomorphism between
the cohomologies of $(Y\times (W\setminus S))/G$ and of ${\bf P}(W)//G$ up to degree
${\rm{dim\,}}(Y)$ by applying the Hochschild--Serre spectral sequence to the $G$-coverings
${Y\times (W\setminus S)}\to (Y\times (W\setminus S))/G$ and $({\bf P}(W)\times (W\setminus
S))\to {\bf P}(W)//G$. Now the cohomology of $(Y\times (W\setminus S))/G$ identifies with that
of $(Y\times W)/G$ up to degree $s$, and finally with that of $Y/G$ in
the same range, since $W$ is an affine space.
In sum, in low degrees the cohomology of $BG$ (hence that of $G$) identifies with a
direct factor of that of the above $Y/G$. The end of the proof of (2) is then
similar to that of (\cite{ah}, Proposition 6.7). Take $G=({\bf Z}/\ell{\bf Z})^3$. Since $G$
has exponent $\ell$, the long exact sequence associated with ${0\to{\bf Z}_\ell\to{\bf Z}_\ell\to
{\bf Z}/\ell{\bf Z}\to 0}$
shows that $H^i(G, {\bf Z}_\ell)$ identifies with the kernel of the Bockstein map $\beta:\,
H^i(G, {\bf Z}/\ell{\bf Z})\to H^{i+1}(G,{\bf Z}/\ell{\bf Z})$. The cup product of the elements of a basis of the
$({\bf Z}/\ell{\bf Z})$-vector space $H^1(G,{\bf Z}/\ell{\bf Z})\cong ({\bf Z}/\ell{\bf Z})^3$ gives a class in
$H^3(G, {\bf Z}/\ell{\bf Z})$. Essentially the same computation as in \cite{ah} shows that for
$\ell>2$ the image of this class in $H^4(G, {\bf Z}/\ell{\bf Z})$ under the Bockstein $\beta$ is
not killed by the operation $\beta{\mathcal P}^1$, whose degree is $2\ell-1$. For
$\ell=2$ the same conclusion holds with $Sq^3$.
\medskip
Let us close this section with a brief discussion of counterexamples of type~$(b)$. A
celebrated counterexample of this type to the Hodge conjecture was constructed by
J. Koll\'ar \cite{kollar}; see also \cite{soulevoisin}. It concerns a
``very general'' hypersurface in ${\bf P}^4_{{\bf C}}$ of degree $m$ a multiple of $\ell^3$ with
$\ell$ an integer prime to $6$, and the cycle map $CH^2(X) \to H^4(X,{\bf Z})$. Since $X$
is a hypersurface of degree $m$, here $H^4(X,{\bf Z})\cong{\bf Z}$, and the image of
the cycle map contains $m{\bf Z}$. But Koll\'ar shows, by a clever deformation
argument, that every curve on $X$ has degree divisible by $\ell$. In other words,
the image of $CH^2(X) \to H^4(X,{\bf Z})$ is contained in $\ell H^4(X,{\bf Z})$ and cannot be
surjective. As noted by C. Voisin (\cite{soulevoisin}, \cite{voisin}), starting
from this example one can manufacture counterexamples to the integral Hodge conjecture
in other degrees as well, by blowing up or by taking a direct product with another
variety.
Koll\'ar's statement yields one at the level of $\ell$-adic \'etale cohomology.
Indeed, if one works over an uncountable algebraically closed field
and chooses $\ell$ prime to
the characteristic and to 6, his method still produces a hypersurface
$X\subset{\bf P}^4$ on which every curve has degree divisible by $\ell$. (The uncountable
field is needed here to be able to choose the point corresponding to $X$ in a suitable
Hilbert scheme outside the union of a countable family of proper closed
subsets.) Then, as for any self-respecting variety, one finds a field $K$ of
finite type over the prime field over which $X$ is defined. Denoting by $\overline K$ an
algebraic closure of $K$, the image of the cycle map
$$CH^2(X \times_{K}{\overline K}) \to H^4_{\hbox{\scriptsize \'et}}(X \times_{K}{\overline K},{\bf Z}_{\ell}(2))\cong{\bf Z}_\ell$$
is then contained in $\ell{\bf Z}_\ell$; note that here the Galois action on cohomology
induces the trivial action on ${\bf Z}_\ell$.
\begin{remas}\label{remconjfausse}\rm ${}$\smallskip
\noindent 1. The above method does not allow one to find such an example with $K$ a
number field.\smallskip
\noindent 2. By Theorem \ref{theoschoen}, if one believes the rational Tate
conjecture for divisors on surfaces, in positive characteristic the field
$K$ above cannot be a finite field.\smallskip
\noindent 3. We do not know whether there are counterexamples of type $(b)$ to the
surjectivity of (\ref{tateZ1}) over a finite field. In other words, we do not know
whether, for $X$ projective, smooth and geometrically connected over a finite field $k$,
the map
$$
CH^i(X)\otimes_{\bf Z}{\bf Z}_\ell\to H^{2i}_{\hbox{\scriptsize \'et}}({\overline X}, {\bf Z}_\ell(i))^G/{\rm torsion}
$$
induced by the cycle map is always surjective.
This question is equivalent to the following one, which is quite interesting from the
point of view of \cite{CT}: for every $i\geq 0$, is the map
$$
CH^i(X)\otimes_{\bf Z}{\bf Z}_\ell\to H^{2i}_{\hbox{\scriptsize \'et}}(X, {\bf Z}_\ell(i))/{\rm torsion}
$$
induced by the cycle map
\begin{equation}\label{appcyc}
CH^i(X)\otimes_{\bf Z}{\bf Z}_\ell\to H^{2i}_{\hbox{\scriptsize \'et}}(X, {\bf Z}_\ell(i)) \end{equation}
surjective? The link between the two questions is provided by the exact sequences
$$ 0 \to H^1(k,H^{2i-1}_{\hbox{\scriptsize \'et}}(X_{{\bar k}},{\bf Z}_\ell(i))) \to H^{2i}_{\hbox{\scriptsize \'et}}(X, {\bf Z}_\ell(i)) \to H^{2i}_{\hbox{\scriptsize \'et}}(X_{\bar k}, {\bf Z}_\ell(i))^G
\to 0,$$
where the groups $H^1(k,H^{2i-1}_{\hbox{\scriptsize \'et}}(X_{{\bar k}},{\bf Z}_\ell(i)))$ are finite
(this is a consequence of Deligne's theorem establishing the Weil conjectures).
Let us note here for later use that, by a well-known argument using the Kummer sequence
and the Brauer group,
for $i=1$ the surjectivity of (\ref{appcyc}) is equivalent to the Tate conjecture
with ${\bf Q}_\ell$ coefficients, and even to the bijectivity of the map
(\ref{tate1}) (see \cite{tate}, Proposition 4.3). In view of the above exact sequence,
in the case of divisors the Tate conjecture with ${\bf Q}_\ell$ coefficients therefore
implies the integral version in all its possible forms.
\end{remas}
\section{Schoen's theorem, I: a Lefschetz-type argument}\label{seclef}
We now begin the exposition of the proof of Theorem
\ref{theoschoen}, following \cite{schoen}.
In the course of the proof we shall several times make base field extensions
of degree prime to $\ell$. A
restriction-corestriction argument then yields the result over the initial base field.
\begin{lem} It suffices to prove Theorem \ref{theoschoen} for $d=3$.
\end{lem}
\begin{demo} By the Bertini theorem over a finite field \cite{gabber,poonen}, one can
find a projective embedding of $X$ and a hypersurface $H$ such that the $k$-scheme
$Y=X \cap H$ is of codimension 1 in $X$, smooth and geometrically connected.
Since $X\setminus Y$ is affine, for $d>3$ the theorems on the cohomological
dimension of affine schemes
(\cite{milneEC}, \S VI.7) give
$$H^{2d-3}({\overline X}\setminus Y_{\bar k}, {\bf Z}_\ell(d-1))=0, \hskip2mm H^{2d-2}({\overline X}\setminus Y_{\bar k},
{\bf Z}_\ell(d-1))=0.$$
Hence the composite map
$$
H^{2d-4}(Y_{\bar k}, {\bf Z}_\ell(d-2))\stackrel\sim\to H^{2d-2}_{Y_{\bar k}}({\overline X}, {\bf Z}_\ell(d-1))\to
H^{2d-2}({\overline X}, {\bf Z}_\ell(d-1))
$$
of the purity isomorphism
and the arrow coming
from the localization sequence is an isomorphism (weak Lefschetz theorem, {\it ibidem}).
Since the cycle map is compatible with Gysin maps (\cite{milneEC},
Proposition VI.9.3), by induction
on $d$ we thus reduce to the case $d=3$.
\end{demo}
\hfill \smallsquare\vskip 3mm
Until the end of Section 4 we therefore assume $d={\rm dim}(X)=3$.
\smallskip
Note next that it suffices to prove the theorem after blowing up a
point of $X$, since the cohomology of $X$ identifies with a direct factor of that of
the blow-up (\cite{sga7}, expos\'e XVIII, 2.2.2) and the cycle map is compatible
with proper morphisms of smooth proper varieties (\cite{laumon}, Theorem 6.1
and Remark 6.4).
Thus, after a suitable blow-up, we may
safely assume that the second $\ell$-adic Betti number $b_2({\overline X})$ of
${\overline X}$ is {\em odd}. Indeed, if this number happens to be even, we blow up a
closed point of odd degree $f$ (by a Lang--Weil type argument, such a
point exists since
the variety $X$ is geometrically integral),
which gives for the blow-up $X^*$ the formula $b_2(X^*)=b_2(X)+f$
by \cite{sga7}, expos\'e XVIII, (2.3.1). The reason for this extra
hypothesis will become clear in the proof of Proposition \ref{cica} below.
A simple Chern class computation
(see \cite{schoenams}, 9.2.1) shows that, after composing
the given projective embedding of $X$
with a Veronese
embedding of {\em even} degree, we may assume that the second $\ell$-adic Betti
number of every smooth hyperplane section of $X$ is {\em even}.
This parity information will
also be important in what follows.
After an extension of degree prime to $\ell$ if necessary, one finds a
blow-up $V\to X$ equipped with a Lefschetz pencil $V\to D\cong {\bf P}^1$ of hyperplane
sections (\cite{sga7}, expos\'e XVII, th\'eor\`eme 2.5). Let $\dot D\subset D$ denote the
locus over which the morphism $V\to D$ is smooth, and choose a
geometric generic point $\varepsilon$ of $D$. By what precedes,
the second Betti number of
$V_\varepsilon$ is even.
Introduce the notation $\pi$ (resp. $\bar \pi$) for the arithmetic
(resp. geometric) fundamental group of $\dot D$ with base point $\varepsilon$.
\begin{prop}\label{imageinfinie}
After an extension of $k$ of degree prime to $\ell$ if necessary, the
pencil $V$ can be chosen so that the image of $\bar \pi$ in ${\rm Aut}_{{\bf Z}_\ell}(H^2(V_\epsilon,
{\bf Z}_\ell(1)))$ under the monodromy representation is infinite.
\end{prop}
\begin{demo} This is Proposition 1.1 of \cite{schoen}. We only give the idea of
the argument. Let ${\bf P}$ be the projective space parametrizing the intersections of $X$ with
hypersurfaces of sufficiently large fixed degree in a fixed projective embedding.
Let ${\bf V}\subset {\bf P}\times X$ be the universal hypersurface, and $\dot {\bf P}\subset {\bf P}$
the smooth locus of the fibration ${\bf V}\to {\bf P}$. The choice of a Lefschetz pencil
corresponds to the choice of a line $D\subset{\bf P}$, and one has $V={\bf V}\times_{{\bf P}}D$. By a
Bertini-type argument (see for instance \cite{fgbook}, Lemma 5.7.2),
after a finite extension of $k$ of degree prime to $\ell$ one finds a
sufficiently general $D$ for which the homomorphism $\pi_1(\dot D_{\bar k}, \epsilon)\to \pi_1(\dot
{\bf P}_{\bar k}, \epsilon)$ is surjective. It thus suffices to show
that the image of the second group in ${\rm Aut}_{{\bf Z}_\ell}(H^2(V_\epsilon, {\bf Z}_\ell(1)))$
is infinite. By a classical algebro-geometric construction, Schoen shows
that there exist another projective space $P$ and a morphism $P\to{\bf P}$ such that the morphism
${\bf V}\times_{{\bf P}}P\to P$ factors as ${\bf V}\times_{{\bf P}}P\to W\to P$, where $W$ is
a smooth projective hypersurface and the relative dimension of $W\to P$ is 2. Since $W$
is a hypersurface, it lifts to characteristic 0, and a theorem of
Deligne (\cite{verdier}, Th\'eor\`eme B) shows that the monodromy of any Lefschetz
pencil sweeping out $W$ is infinite (under the hypothesis ${\rm char}\,(k)\neq 2$; in
characteristic 2 a small additional argument is given in \cite{schoen}).
This implies that the monodromy must be infinite for the fibration ${\bf
V}\times_{{\bf P}}P\to P$, and finally for ${\bf V}\to{\bf P}$.
\end{demo}
\hfill \smallsquare\vskip 3mm
\smallskip
Let us now explain the idea of the proof of Theorem \ref{theoschoen}. First
of all, it suffices to show that every class in $H^4({\overline X}, {\bf Z}_\ell(2))^G$ is the
class of an algebraic cycle on ${\overline X}$ (for an open subgroup $U\subset
G$ one may then apply this result after base change of $X$ to the subfield fixed
by $U$). Given $w\in H^4({\overline X}, {\bf Z}_\ell(2))$ fixed by $G$, one shows that it
is the pushforward of an element of $H^2(V_{\bar x}, {\bf Z}_{\ell}(1))^{{\rm Gal}\,({\bar k}|k(x))}$,
where $\bar x$ is a geometric point above a closed point $x\in\dot D$. The
Tate conjecture for divisors on the surface $V_x$ (which holds with
${\bf Z}_\ell$ coefficients if it holds with ${\bf Q}_\ell$ coefficients, by what was
said in Remark \ref{remconjfausse} (3) above) then shows that this element
is the class of an algebraic cycle.
Note that with ${\bf Q}_\ell$ coefficients the desired statement is a direct consequence of the
hard Lefschetz theorem ({\em cf.} the proof of Lemma \ref{lemcica} {\em
infra}); the whole subtlety of the argument consists in extracting from it a statement
with integral coefficients.
Here is a reformulation. Write $X_\varepsilon$ for the base change
$X\times_{{\rm{Spec\,}} k}\varepsilon$. By definition it carries the trivial action of
$\bar\pi$ (the latter acting on the fibres of the trivial fibration $X\times D\to D$), and
consequently
$$H^i({\overline X}, {\bf Z}_\ell(2))^{{\rm Gal}\,({\bar k}|k)}\cong H^i(X_\varepsilon, {\bf Z}_\ell(2))^\pi$$ for
all $i>0$. Denoting by $D_{\bar x}$ the decomposition group of a point $\bar x$ of the
universal profinite covering of $\dot D$ above $x$, one obtains
$$H^i(V_{\bar x}, {\bf Z}_\ell(2))^{{\rm Gal}\,({\bar k}|k(x))}\cong
H^i(V_\varepsilon, {\bf Z}_\ell(2))^{D_{\bar x}}$$
for all $i>0$ by
the proper base change theorem and the isomorphism $D_{\bar x}\cong{{\rm Gal}\,({\bar k}|k(x))}$.
Let $i$ denote the inclusion of the surface $V_\varepsilon$ in the variety $X_\varepsilon$
(which has dimension 3). It induces a restriction map
$$ i^* :\,H^2(X_\varepsilon, {\bf Z}_\ell(1))\to H^2(V_\varepsilon, {\bf Z}_\ell(1))$$ as well as a
pushforward $$i_* :\,H^2(V_\varepsilon, {\bf Z}_\ell(1))\to H^4(X_\varepsilon, {\bf Z}_\ell(2)).$$
By the above discussion, it therefore suffices to show:
\begin{prop}\label{cica} Every element of $H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi$ is of the form
$i_*(\beta)$, with $\beta\in H^2(V_\varepsilon, {\bf Z}_\ell(1))$ invariant under the action
of a decomposition subgroup $D_{\bar x}$ in $\pi$,
for a suitable point ${\bar x}$.
\end{prop}
Let us pause for some considerations of ${\bf Z}_\ell$-linear algebra.
\section{Schoen's theorem, II: linear algebra lemmas}\label{secalglin}
Given a ${\bf Z}_\ell$-module $B$ and a subset $A\subset B$,
define {\em the saturation}
$A_s$ of $A$ in $B$ as the set of $b\in B$ with $\ell^nb\in A$ for some suitable $n \geq 0$.
We say that $A$ is saturated in $B$ if $A_s=A$.
\begin{lem}\label{la1} For a finitely generated module $B$ over ${\bf Z}_\ell$ there exists an open subgroup
$\Gamma\subset Aut_{{\bf Z}_\ell}(B)$
such that $B^g$ is saturated in $B$ for every
$g\in \Gamma$.
\end{lem}
\begin{demo} Write $B=F\oplus T$ with $F$ free and $T$ torsion, and take
$\Gamma=Aut_{{\bf Z}_\ell}(F)\times \{{\rm id}_T\}$.
\end{demo}
\hfill \smallsquare\vskip 3mm
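By way of illustration (our addition, not part of Schoen's argument), the saturation can be computed explicitly in the simplest case $A=c\,{\bf Z}$ inside $B={\bf Z}$: an element $b$ lies in $A_s$ at the prime $\ell$ if and only if $b$ is divisible by the prime-to-$\ell$ part of $c$. A minimal Python sketch, with a hypothetical helper name:

```python
def saturation_index(c, l):
    # A = c*Z inside B = Z.  The saturation A_s = c'*Z, where c' is c with
    # all factors of l removed: l^n * b lies in c*Z for some n exactly when
    # b is divisible by the prime-to-l part of c.
    while c % l == 0:
        c //= l
    return c

assert saturation_index(12, 2) == 3   # (12Z)_s = 3Z at l = 2
assert saturation_index(12, 3) == 4   # (12Z)_s = 4Z at l = 3
assert saturation_index(7, 5) == 7    # 7Z is already saturated at l = 5
```

In particular $A$ is saturated exactly when $\ell\nmid c$, matching the definition above.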
\begin{prop}\label{la2}
Let $F$ be a free ${\bf Z}_\ell$-module of finite odd rank, $S\subset F$ an open subset
for the $\ell$-adic topology, and $\Phi : F\times F\to{\bf Z}_\ell$ a symmetric
bilinear form which is non-degenerate over ${\bf Q}_\ell$. Then there exists an
open subset ${\mathcal S}\subset O(F, \Phi)$ such that:
(a) every element of
$\mathcal S$ admits a nonzero fixed vector in $S$;
(b) the open set $\mathcal S$ contains
elements arbitrarily close to $1$ for the $\ell$-adic topology.
\end{prop}
Here $O(F, \Phi)$ denotes the group of ${\bf Z}_\ell$-linear automorphisms of $F$
preserving $\Phi$. For the proof we need an auxiliary result.
\begin{lem}\label{lemla1}
Let $K$ be a field of characteristic different from 2, $V$ a \mbox{$K$-vector}
space of finite odd dimension, and $\Phi$ a non-degenerate quadratic
form on $V$. Every element of $SO(\Phi)$ admits $1$ as an eigenvalue.
\end{lem}
\begin{demo} Let $A$ be the matrix of
$\Phi$ in a fixed basis of $V$. For an element of $O(\Phi)$ with matrix
$M$ and transpose $M^t$, we have
$M^t .A .M=A$. Hence
$$M^t .A .(M-I)=A- M^t .A= (I-M^t).A.$$
Taking determinants, we obtain:
$$ \det(M). \det(A). \det(M-I)= \det(I-M). \det(A).$$
Here $\det(A)\neq 0$ and $\det(M)=1$ (since $M \in SO(\Phi)$), so
$\det(M-I)=\det(I-M)$. Since $V$ has odd dimension, $\det(I-M)=-\det(M-I)$, so
$2\det(M-I)=0$, and as ${\rm char}(K)\neq 2$ this forces $\det(M-I)=0$.
\end{demo}
\hfill \smallsquare\vskip 3mm
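The lemma can be checked numerically over ${\bf R}$ (an illustration we add here; the helper `random_special_orthogonal` is our own naming): in odd dimension every special orthogonal matrix $M$ satisfies $\det(M-I)=0$, i.e. fixes a nonzero vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_special_orthogonal(n):
    # QR decomposition of a random Gaussian matrix yields an orthogonal Q;
    # flip one column sign if needed to force det(Q) = +1, i.e. Q in SO(n).
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

# In odd dimension every element of SO(n) has eigenvalue 1,
# equivalently det(M - I) = 0 (up to floating-point error).
for _ in range(100):
    m = random_special_orthogonal(5)
    assert abs(np.linalg.det(m - np.eye(5))) < 1e-9
```

In even dimension the check fails (a rotation of the plane by a generic angle fixes no nonzero vector), which is why the odd-rank hypothesis is essential in Proposition \ref{la2}.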
\bigskip
\noindent{\em Proof of Proposition \ref{la2}.} Let $U\subset
SO(F_{{\bf Q}_\ell},\Phi)$ be the Zariski open subset of elements with
distinct eigenvalues. It is also a Zariski open subset of $O(F_{{\bf Q}_\ell},\Phi)$. Since $F$
has odd rank, every element of $SO(F_{{\bf Q}_\ell}, \Phi)$ admits 1 as an
eigenvalue by Lemma \ref{lemla1}. Hence every $u\in U$ stabilizes a subspace $L_u$ of
dimension~1 corresponding to the eigenvalue 1.
Sending $u$ to $L_u$ gives a continuous map
$\lambda :\,U\to {\bf P}(F_{{\bf Q}_\ell})$. The image
of $S \setminus 0$ in ${\bf P}(F_{{\bf Q}_\ell})$
under the natural projection $F_{{\bf Q}_\ell} \setminus 0 \to {\bf P}(F_{{\bf Q}_\ell})$
is
open, as is its inverse image $\mathcal S$ in $U\subset SO(F_{{\bf Q}_\ell}, \Phi)$.
It remains to see that the set $\mathcal S$ is nonempty, and that it contains
elements arbitrarily close to~1. Let $v\in S$ be a non-isotropic vector. Writing
$F_{{\bf Q}_\ell}={\langle v\rangle\perp M}$ with a ${\bf Q}_\ell$-vector space $M$, we first
show that there exists an element of $SO(M, \Phi|_M)$ with distinct eigenvalues,
all different from~1.
To this end, decompose the quadratic space $M$ into an orthogonal sum of quadratic spaces
$V_i$ of dimension~2. Each $SO(V_{i})$ is a torus
$T_{i}=R^1_{k_{i}/k}{\bf G}_m$ of dimension~1, where $k_{i}/k$ is an \'etale algebra of
degree 2 over ${\bf Q}_\ell$. If $k_{i} \simeq {\bf Q}_\ell \times {\bf Q}_\ell$, then $T_{i} \simeq
{\bf G}_{m,k}$, and ${\bf G}_{m,k}$ acts on $V_{i}={\bf Q}_\ell \oplus {\bf Q}_\ell$ by
$\lambda.(u,v)=(\lambda.u, \lambda^{-1}.v)$. If $k_{i}$ is a quadratic extension of
${\bf Q}_\ell$, then $SO(V_{i})({\bf Q}_\ell)$ is the group of norm 1 elements in
$k_{i}$, and the action of this group on $V_{i} \simeq k_{i}$ is given by
multiplication in $k_{i}$.
The two eigenvalues of an element $\alpha \in SO(V_{i})({\bf Q}_\ell) \subset
SO(\Phi) \subset GL(V) $ are the conjugates of $\alpha$ (which are inverse to each
other).
One thus finds a family of elements $\alpha_i \in SO(V_{i})$ whose sum defines an element
of $SO(M,\Phi|_M)$ which
has distinct eigenvalues, all different from 1. Moreover, one can
choose the matrices of the $\alpha_i$ so that they have
coefficients
in ${\bf Z}_\ell$
and are arbitrarily close to the identity matrix for the $\ell$-adic topology.
If they are sufficiently
close to 1, their direct sum must preserve the trace of the lattice $F$ on $M$.
\hfill \smallsquare\vskip 3mm
\section{Schoen's theorem, III: end of the proof}\label{schoenfin}
It remains to prove Proposition \ref{cica}.
\begin{lem}\label{lemcica}
There is an inclusion
$$
H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi\subset i_*((\ker i_* +H^\pi)_s),
$$
where
$$
H :={\rm {Im\,}}(i^*)\subset H^2(V_\varepsilon, {\bf Z}_\ell(1)).
$$
\end{lem}
\begin{demo}
The composite morphism
$$
H^2(X_\varepsilon, {\bf Z}_\ell(1)) \stackrel{i^*}\to H^2(V_\varepsilon,
{\bf Z}_\ell(1))\stackrel{i_*}\to H^4(X_\varepsilon, {\bf Z}_\ell(2))
$$
becomes an isomorphism after tensoring with ${\bf Q}_\ell$, by the hard Lefschetz
theorem, since it is cup-product with the class of the hyperplane section
$V_\varepsilon$. It is equivariant for the action of $\pi$, since $V_\varepsilon$
arises by base field extension from a $k({\bf P}^1)$-variety.
Hence, by definition of $H$,
$$i_*(H^\pi)\otimes{\bf Q}_\ell\cong
H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi\otimes {\bf Q}_\ell,
$$
whence
\begin{equation}\label{hl}
H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi\subset (i_*(H^\pi))_s. \end{equation}
Now observe that the morphism
$$
i_* :\,H^2(V_\varepsilon, {\bf Z}_\ell(1))\to H^4(X_\varepsilon, {\bf Z}_\ell(2))
$$
is surjective. This follows from the weak Lefschetz theorem: in the localization
sequence
$$
H^4_{V_\varepsilon}(X_\varepsilon, {\bf Z}_\ell(2))\to H^4(X_\varepsilon, {\bf Z}_\ell(2))\to
H^4(X_\varepsilon\setminus V_\varepsilon, {\bf Z}_\ell(2))
$$
the last term is 0, since the variety $X_\varepsilon\setminus V_\varepsilon$
is affine of dimension 3, and the first term is isomorphic
to $H^2(V_\varepsilon, {\bf Z}_\ell(1))$ by purity.
In particular, given $w\in H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi$, we find
$\beta\in H^2(V_\varepsilon, {\bf Z}_\ell(1))$ with
$$
w=i_*(\beta).
$$
On the other hand, (\ref{hl}) implies
$$
\ell^nw=i_*(\gamma)$$ for a suitable $\gamma\in H^\pi$ and some $n \geq 0$. But since
$\ell^nw=i_*\ell^n\beta$, we obtain $i_*(\gamma-\ell^n\beta)=0$, i.e. $\ell^n\beta\in
\ker i_*+ H^\pi$, whence the lemma.
\end{demo}
\hfill \smallsquare\vskip 3mm
\begin{cor} For fixed $w\in H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi$, the subset
$$
H_w :=\{v\in \ker i_* :\, w\in i_*((v+H^\pi)_s)\}
$$
of $\ker i_* \subset H^2(V_\varepsilon, {\bf Z}_\ell(1))$ is a nonempty open subset of $\ker i_*$,
stable under multiplication by $\ell$.
\end{cor}
\begin{demo}
The lemma gives $H_w\neq\emptyset$; more precisely, the proof of the lemma shows that
$v_0 :=\ell^n\beta - \gamma \in H_w$. This choice of $n$ gives $(v_0+\ell^n\ker i_*)\subset
H_w$, since for $\delta\in\ker i_*$ and $v=v_0+\ell^n\delta$ we have $i_*(\beta+\delta)=w$ and
$\ell^n(\beta+\delta)=v_0+\ell^n\delta+\gamma=v+\gamma\in (v+H^\pi)$. Finally, the
stability of $H_w$ under multiplication by $\ell$ follows from the definition.
\end{demo}
\hfill \smallsquare\vskip 3mm
Consider now the ${\bf Z}_\ell$-bilinear form on $H^2(V_\varepsilon,
{\bf Z}_\ell(1))$ induced by the cup-product (i.e. the intersection form)
on the cohomology of the
surface $V_\varepsilon$,
and denote by
$H^\perp$ the orthogonal complement of $H$. We then have $\ker i_*\subset H^\perp
\subset H^2(V_\varepsilon, {\bf Z}_\ell(1)),$
and the inclusion ${\ker i_*\subset H^\perp}$ becomes an equality
after tensoring with ${\bf Q}_\ell$. Indeed, the cup-product pairings satisfy
the compatibility
$$
\alpha\cup i_*(\beta)=i^*(\alpha)\cup\beta
$$
for $\alpha\in H^2(X_\varepsilon, {\bf Z}_\ell(1))$ and $\beta\in H^2(V_\varepsilon,
{\bf Z}_\ell(1))$, and they are nondegenerate with ${\bf Q}_\ell$-coefficients.
Let $F\subset H^\perp$ be a free ${\bf Z}_\ell$-module that is a direct complement to the
torsion submodule $T$. Then $F\cap \ker i_*$ is an open submodule of $F$, of finite index
in $\ker i_*$. The preceding corollary therefore implies:
\begin{cor}\label{ouvertSw} For fixed $w\in H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi$ the subset
$$
S_w :=\{v\in \ker i_*\cap F :\, w\in i_*((v+H^\pi)_s)\}
$$
is a nonempty open subset of $F$.
\hfill \smallsquare\vskip 3mm
\end{cor}
\begin{rema}\rm
When $X$ is a hypersurface in ${\bf P}^4$, all the \mbox{${\bf Z}_\ell$-modules}
under consideration are torsion-free, and one has $\ker i_*=H^\perp=F$, whence $H_w=S_w$.
\end{rema}
The following lemma distills the strategy of the proof of Proposition
\ref{cica}.
\begin{lem}
Fix $w\in H^4(X_\varepsilon, {\bf Z}_\ell(2))^\pi$. Suppose there exists $g\in \pi$
satisfying the following three hypotheses:
\begin{enumerate}
\item $H^2(V_\varepsilon, {\bf Z}_\ell(1))^g$ is saturated in $H^2(V_\varepsilon, {\bf Z}_\ell(1))$;
\item $g$ topologically generates the decomposition subgroup $D_{\bar x}$ in $\pi$;
\item $g$ fixes an element $v\in S_w$.
\end{enumerate}
Then there exists $\beta\in H^2(V_\varepsilon, {\bf Z}_\ell(1))^{D_{\bar x}}$ with
$w=i_*(\beta)$.
\end{lem}
\begin{demo} For $v$ as in (3) we have $(v+H^\pi)\subset H^2(V_\varepsilon, {\bf Z}_\ell(1))^g$.
Since $H^2(V_\varepsilon, {\bf Z}_\ell(1))^g$ is saturated in $H^2(V_\varepsilon,
{\bf Z}_\ell(1))$, we moreover have $(v+H^\pi)_s\subset H^2(V_\varepsilon,
{\bf Z}_\ell(1))^g=H^2(V_\varepsilon, {\bf Z}_\ell(1))^{D_{\bar x}}$. But by the preceding
corollary we have $w=i_*(\beta)$ for some $\beta\in(v+H^\pi)_s$.
\end{demo}
\hfill \smallsquare\vskip 3mm
\bigskip
\noindent {\em Proof of Proposition \ref{cica}.} We look for a $g\in \pi$
satisfying the conditions of the lemma.
By the hard Lefschetz theorem,
the restriction of the intersection form on $H^2(V_\varepsilon, {\bf Z}_\ell(1))$ to
${H\otimes{\bf Q}_\ell}$ is nondegenerate (see \cite{deligne}, Lemme 4.1.2).
Its restriction to
$H^\perp\otimes{\bf Q}_\ell$ is therefore nondegenerate. Writing $H^\perp=F\oplus T$
as above, we may identify $O(F)$ with the (pointwise) stabilizer of $T$,
which is an open subgroup of finite index of $O(H^\perp)$. Since the image of $\bar \pi$
under the monodromy representation $\rho$ is infinite by construction (Proposition
\ref{imageinfinie}),
a theorem of Deligne (\cite{deligne}, Th\'eor\`eme 4.4.1)
ensures that it is an open subgroup
of $O(H^\perp\otimes{\bf Q}_\ell)$. A fortiori
$\rho(\pi)\cap O(F)$ is open in $O(F)$. By Lemma \ref{la1} there exists an open
subgroup $G_0\subset \rho(\pi)\cap O(F)$ such that $H^2(V_\varepsilon,
{\bf Z}_\ell(1))^g$ is saturated in $H^2(V_\varepsilon, {\bf Z}_\ell(1))$ for every $g\in G_0$.
We now apply Proposition \ref{la2} to $F$. To do so, we must first
check that the rank of $F$ is odd, i.e. that the dimension of $H^\perp\otimes{\bf Q}_\ell$ is
odd. This follows from our initial hypotheses that the dimension of
$H^2(V_\varepsilon, {\bf Q}_\ell(1))$ is even and that of $H^2(X_\varepsilon, {\bf Q}_\ell(1))$
odd; we conclude by the injectivity of $i^*\otimes{\bf Q}_\ell$ (see the beginning of
the proof of Lemma \ref{lemcica}).
Proposition \ref{la2} (a)
thus provides an open subset $\mathcal S$ of $O(F)$
each of whose elements has a fixed vector in the nonempty open set $S_w$
given by
Corollary \ref{ouvertSw}.
Moreover, Proposition \ref{la2} (b) ensures that $\mathcal S$ contains
elements arbitrarily close to 1, so its intersection with the open subgroup
$G_0$ is a nonempty open set.
The inverse image of ${\mathcal S}\cap G_0$ in $\pi$ is an open set whose elements
satisfy properties (1) and (3) of the lemma above. Furthermore, by definition of
the topology of $\pi$ it contains a coset $hV$ of an open normal subgroup
$V\subset \pi$. Applying the Chebotarev density theorem to the Galois cover
$Z\to \dot D $ corresponding to $V$, we obtain a closed point $z\in Z$ whose
associated Frobenius in $\pi/V$ is $\bar h$, the class of $hV$. Now take a point
$\bar x$ of the profinite universal cover of $\dot D$ above $z$. The decomposition
subgroup $D_{\bar x}$ is generated by an element $g$ with image $\bar
h$ in $\pi/V$. This means that $g$ is an element of $hV$, and as such
satisfies hypotheses (1) and (3) of the lemma. By construction, it also satisfies
(2).
\hfill \smallsquare\vskip 3mm
\begin{rema}\rm If one could choose $V$ in such a way that the
cokernel of the morphism $\bar \pi/ (V\cap\bar \pi)\to\pi/V$ has order prime to
$\ell$, then a more precise variant of the Chebotarev argument above would yield a closed
point $x$ of degree prime to $\ell$. The existence of such a $V$ would therefore imply the
Tate conjecture with ${\bf Z}_\ell$-coefficients for 1-cycles on $X$ (assuming the
conjecture known for surfaces).
\end{rema}
\section{Consequences of Schoen's theorem}\label{brauermanin}
We give here applications of Theorem \ref{theoschoen} to the existence of
zero-cycles of degree prime to the characteristic on certain varieties
defined over the function field of a curve over the algebraic closure
of a finite field. These are two related but inequivalent statements, each
of which implies Corollary \ref{corschoen}.
\begin{theo}\label{edmonton}
Let ${\bar k}$ be the algebraic closure of a finite field $k$ of characteristic $p$, and
let $\overline C$ be a smooth, proper, connected ${\bar k}$-curve, with function field $F=
{\bar k}(\overline C).$ Fix a separable closure $\overline F$ of $F$.
Let ${\overline X}$ be a projective, smooth, connected ${\bar k}$-variety admitting a
projective, dominant ${\bar k}$-morphism $f : {\overline X} \to\overline C$ whose generic fiber
${\overline X}_F $ is smooth and geometrically integral.
Assume:
(i) The Picard group ${\rm Pic}\, {{\overline X}_{\overline F}} $ is torsion-free.
(ii) For every prime $\ell$ different from $p$,
the $\ell$-primary part
of the Brauer group ${\rm Br}\, {\overline X} \subset {\rm Br}\, {\overline X}_F$ is
finite.
(iii) The $F$-variety ${\overline X}_F$ has points in all the completions of $F$ at the
points \mbox{of $C$.}
(iv) For every prime $\ell$ different from $p$, the $\ell$-adic Tate conjecture holds
for divisors on smooth projective surfaces over a finite field of
characteristic $p$.
Then ${\overline X}_F$ possesses a zero-cycle of degree a power of $p$.
\end{theo}
\begin{demo}
Let $d+1$ be the dimension of ${\overline X}$, so that $d$ is the dimension of the $F$-variety ${\overline X}_F$.
Fix a separable closure ${\overline F}$ of $F$. Consider the sequence of maps
$$
CH^{d}({\overline X}) \otimes {\bf Z}_\ell \to H^{2d}({\overline X}, {\bf Z}_\ell(d)) \to
H^{2d}({\overline X}_F,{\bf Z}_\ell(d)) \to H^{2d}(\overline{X}_{\overline F}, {\bf Z}_\ell(d))\cong{\bf Z}_\ell
$$
The map $H^{2d}({\overline X}, {\bf Z}_\ell(d)) \to
H^{2d}({\overline X}_F,{\bf Z}_\ell(d))$ is obtained by passing to the limit
over the restriction maps $$H^{2d}({\overline X}, {\bf Z}_\ell(d)) \to
H^{2d}({\overline X}\times_{\overline C}U, {\bf Z}_\ell(d)) $$
as $U$ runs through the nonempty open subsets of the curve ${\overline C}$.
The compatibility of the cycle map with restriction to an open subset
(\cite{milneEC}, \S VI, Prop.~9.2)
shows that the composite map
above factors through the group ${CH^{d}({\overline X}_{F})\otimes{\bf Z}_\ell}$.
It therefore suffices
to establish the surjectivity of the composite map in question, for every
$\ell$ prime to $p$. We do this in several steps.\smallskip
\noindent {\em Surjectivity of $CH^{d}({\overline X}) \otimes {\bf Z}_\ell \to H^{2d}({\overline X},
{\bf Z}_\ell(d))$.} There exist a finite field $k\subset{\bar k}$ and a $k$-variety $X$ such
that ${\overline X}=X\times_k{\bar k}$. The required surjectivity follows from Theorem
\ref{theoschoen}, provided we show that every element of $H^{2d}({\overline X},
{\bf Z}_\ell(d))$ is fixed by an open subgroup of ${\rm Gal}\,({\bar k}|k)$.
Now for every $n>0$, Poincar\'e duality $$ H^{2d}({\overline X},\mu_{\ell^n}^{\otimes d})
\times H^2({\overline X},\mu_{\ell^n}) \to {\bf Z}/\ell^n{\bf Z}$$ is a perfect pairing, equivariant
for the Galois action. On the other hand, we have the Kummer exact sequence
$$0 \to {\rm Pic}\, {\overline X}/\ell^n{\rm Pic}\,{\overline X} \to H^2({\overline X},\mu_{\ell^n}) \to {}_{\ell^n}{\rm Br}\, {\overline X} \to 0.$$
The group ${\rm Pic}\, {\overline X}$ is an extension of the
N\'eron--Severi group $NS({\overline X})$ by the $\ell$-divisible group ${\rm Pic}\,^0\,{\overline X}$. The
group $NS({\overline X})$ is finitely generated, so $NS({\overline X})/\ell^n\cong {\rm Pic}\,{\overline X}/\ell^n$ is fixed
by an open subgroup of ${\rm Gal}\,({\bar k}|k)$ independent of $n$. By hypothesis
$(ii)$ the group ${}_{\ell^n}{\rm Br}\, {\overline X}$ has this property as well. The same therefore
holds for the finite group $H^{2d}({\overline X},\mu_{\ell^n}^{\otimes d})$, and a fortiori for
$H^{2d}({\overline X},{\bf Z}_\ell(d))$.\smallskip
\noindent{\em Surjectivity of $H^{2d}({\overline X}, {\bf Z}_\ell(d)) \to
H^{2d}({\overline X}_F,{\bf Z}_\ell(d))$.} We shall prove the surjectivity of $H^{2d}(\overline
X,\mu_{\ell^n}^{\otimes d}) \to H^{2d}(\overline{X}_{F}, \mu_{\ell^n}^{\otimes d})$ for
every $n>0$. This will give a surjective morphism of projective systems of finite
abelian groups, hence a surjection after passing to the projective limit over
$n$.
We have the localization exact sequence
$$H^{2d}({\overline X},\mu_{\ell^n}^{\otimes d}) \to H^{2d}({\overline X}_F, \mu_{\ell^n}^{\otimes d}) \to
\bigoplus_{P\in \overline C_0} H_{{\overline X}_{P}}^{2d+1}({\overline X}, \mu_{\ell^n}^{\otimes d}),$$
where $P$ runs through the closed points of $\overline C$.
Let us show that each map
$H^{2d}({\overline X}_F, \mu_{\ell^n}^{\otimes d}) \to
H_{{\overline X}_{P}}^{2d+1}({\overline X}, \mu_{\ell^n}^{\otimes d})$ is zero. To do so, by excision
we may restrict to the situation over the henselization
$R=O_{\overline C,P}^h$ of $\overline C$ at $P$, whose fraction field we denote by $L$. We have the localization exact sequence
$$ H^{2d}(\overline X_{R}, \mu_{\ell^n}^{\otimes d}) \to H^{2d}(\overline X_{L}, \mu_{\ell^n}^{\otimes d})
\to H_{{\overline X}_{P}}^{2d+1}(\overline X_R, \mu_{\ell^n}^{\otimes d}).$$
It therefore suffices to establish the surjectivity of the morphism $H^{2d}({\overline X}_{R},
\mu_{\ell^n}^{\otimes d}) \to H^{2d}({\overline X}_{L}, \mu_{\ell^n}^{\otimes d})$.
The field $L$ has cohomological dimension 1 (it is a $C_{1}$ field).
The Hochschild--Serre spectral sequence therefore gives rise to the short exact sequence
{\small $$
0 \to H^1(L,H^{2d-1}(\overline{X}_{\overline L}, \mu_{\ell^n}^{\otimes d})) \to H^{2d}(\overline
X_L,\mu_{\ell^n}^{\otimes d})\to H^{2d}(\overline{X}_{\overline L}, \mu_{\ell^n}^{\otimes
d})^{{\rm Gal}\,(\overline L|L)} \to 0.
$$}
\noindent On the other hand, the group $H^{2d-1}(\overline{X}_{\overline L},
\mu_{\ell^n}^{\otimes
d})$ is dual to $H^{1}(\overline{X}_{\overline L}, \mu_{\ell^n})\cong
{}_{\ell^n}{\rm Pic}\,{\overline X}_{\overline L}$ by
Poincar\'e duality. Torsion in the Picard group does not change
under extension of algebraically closed fields, so the latter group vanishes by
hypothesis $(i)$. This gives isomorphisms of Galois modules
$$ H^{2d}({\overline X}_{L}, \mu_{\ell^n}^{\otimes d}) \cong H^{2d}( {\overline X}_{\overline{L}},
\mu_{\ell^n}^{\otimes d})\cong{\bf Z}/\ell^n{\bf Z}.$$
The hypothesis that ${\overline X}_F$ has points in all the completions of $F$ is
equivalent to the same hypothesis for the henselizations, whence in particular a
section of the morphism ${\overline X}_{R} \to {\rm{Spec\,}} R$ by properness of ${\overline X}$. It gives
rise to a 1-cycle on ${\overline X}_{R}$ whose cohomology class in
$H^{2d}({\overline X}_{R},\mu_{\ell^n}^{\otimes d})$ maps to the generator of $
H^{2d}({\overline X}_{L}, \mu_{\ell^n}^{\otimes d})$.
\smallskip
\noindent{\em Surjectivity of $H^{2d}(\overline X_F,{\bf Z}_\ell(d)) \to
H^{2d}(\overline{X}_{\overline F}, {\bf Z}_\ell(d))$.} Here again, it suffices to establish
surjectivity at finite level, since it follows from the previous step that the
groups $H^{2d}(\overline{X}_{F}, \mu_{\ell^n}^{\otimes d})$ are finite. The field $F$ has
cohomological dimension 1, so the morphism
$$
H^{2d}(\overline X_F,\mu_{\ell^n}^{\otimes d}) \to H^{2d}(\overline{X}_{\overline F},
\mu_{\ell^n}^{\otimes d})^{{\rm Gal}\,(\overline F|F)}$$
coming from the Hochschild--Serre spectral sequence
is a surjection.
We have
$H^{2d}(\overline{X}_{\overline F}, \mu_{\ell^n}^{\otimes d})\cong {\bf Z}/\ell^n{\bf Z}$ with
trivial Galois action, whence the required surjectivity.
\end{demo}
\hfill \smallsquare\vskip 3mm
\medskip
An easy consequence of the theorem is Corollary \ref{corschoen} of the
introduction.\medskip
\noindent{\em Proof of Corollary \ref{corschoen}.\/} Since the degree of the
generic fiber of the morphism ${\overline X}\to \overline C$ is assumed prime to $p$, it
suffices to show that hypotheses $(i)$ and $(ii)$ of Theorem \ref{edmonton} are
satisfied. In other words, we must check that for a smooth complete intersection
$\overline X_{\overline F}$ of dimension $\geq 3$ in ${\bf P}^n_{\overline F}$ the
Picard group has no $\ell$-primary torsion and the $\ell$-primary part of the Brauer
group is finite.
The first statement follows from the Noether--Lefschetz theorem (\cite{sga2},
expos\'e XII, corollaire 3.7): the restriction map ${\bf Z}= {\rm Pic}\, {\bf P}^n_{\overline F}
\to {\rm Pic}\, \overline X_{\overline F}$ is an isomorphism. On the other hand, the Kummer exact
sequence in \'etale cohomology gives an exact sequence
$$
0\to {\rm Pic}\, {\overline X}_{\overline F}/\ell\,{\rm Pic}\, {\overline X}_{\overline F}\to H^2({\overline X}_{\overline F},
{\bf Z}/\ell{\bf Z})\to{}_\ell{\rm Br}\, {\overline X}_{\overline F}\to 0.
$$
We have just seen that the first term is isomorphic to ${\bf Z}/\ell{\bf Z}$. But the same
holds for the second, since it is isomorphic to $H^2({\bf P}^n_{\overline
F},{\bf Z}/\ell{\bf Z})$ by the weak Lefschetz theorem in \'etale cohomology. We thus note
with satisfaction that the last term vanishes, which shows that the
$\ell$-primary part of ${\rm Br}\, {\overline X}_{\overline F}$ is in fact trivial.
\hfill \smallsquare\vskip 3mm
Corollary \ref{corschoen} can also be deduced from Theorem
\ref{vraibrauermanin} below. The latter is a variant of Theorem \ref{edmonton}, with the
notable difference that here one makes a hypothesis over the function field of a
curve defined over a finite field, and not over $\overline{\bf F}_p$.
So let $C$ be a smooth, proper, geometrically connected curve over a
finite field $k$, and let $Y$ be a smooth variety over the function field $k(C)$ of $C$. For a
closed point $P$ of $C$ let $K_P$ denote the completion of $k(C)$ for the discrete
valuation associated with $P$. A family $\{z_P\}$ of zero-cycles of degree 1 on
$Y\times_{k(C)}K_P$ for every $P$ defines a homomorphism ${\rm Br}\, Y\to {\bf Q}/{\bf Z}$ given by
$A\mapsto \Sigma_P{\rm inv}_P (A[z_P])$. Here ${\rm Br}\, Y$ is the Brauer group of $Y$, the
map ${\rm inv}_P$ is the Hasse invariant of the local field $K_P$, and $A[z_P]$ is
the evaluation of the element $A\in{\rm Br}\, Y$ at $z_P$, defined via the contravariant
functoriality of the Brauer group.
One says that there is no Brauer--Manin obstruction to the existence of a zero-cycle of
degree 1 on $Y$ if there exists a family $\{z_P\}$ for \mbox{which} the homomorphism
above is zero. Note that this condition is automatic if the natural map ${\rm Br}\, k(C) \to {\rm Br}\, Y$
is surjective.
\begin{theo}\label{vraibrauermanin}
Let $k$ be a finite field of characteristic $p$, and let $X\to C$ be a projective, dominant
morphism of smooth, projective,
geometrically connected $k$-varieties, where $C$ is a curve and the generic fiber $X_{k(C)}$ is smooth
and geometrically integral.
Assume:
(i) There is no Brauer--Manin obstruction to the existence of a zero-cycle of
degree 1 on the $k(C)$-variety~$X_{k(C)}$.
(ii) The Tate conjecture for divisors holds for smooth projective surfaces
over a finite field.
Then the $\overline k(C)$-variety
$X\times_{k(C)}\overline k(C)$ possesses a zero-cycle of degree a power of $p$.
\end{theo}
\begin{demo}
Let $d+1$ be the dimension of $X$, so that $d$ is the dimension of the $k(C)$-variety $X_{k(C)}$.
Let $\ell \neq p$ be a prime, and let $\{z_P\}$ be a family of zero-cycles of
degree 1 on the $X\times_{k(C)}K_{P}$ for which the map ${\rm Br}\, X_{k(C)}\to{\bf Q}/{\bf Z}$
defined above is zero. Proposition 3.1 of \cite{CT} then provides an
element $z$ of $H^{2d}(X,{\bf Z}_\ell(d))$ whose restriction in the group
$H^{2d}(X\times_{k(C)}{\overline{k(C)}},{\bf Z}_\ell(d)) \simeq {\bf Z}_\ell$ equals $1 \in
{\bf Z}_\ell$. By Theorem \ref{theoschoen}, the image of $z$ in
${H^{2d}(X\times_k{\bar k}, {\bf Z}_{\ell}(d))}$ comes from an element $Z$ of
$CH_{1}(X\times_k{\bar k}) {\otimes} {\bf Z}_\ell$. Taking the trace of $Z$ in the group
${CH_0(X\times_{k(C)}\overline k(C))\otimes{\bf Z}_\ell}$, we see that the morphism
$CH_0(X\times_{k(C)}\overline k(C))\to{\bf Z}/\ell{\bf Z}$ induced by the degree is surjective.
\end{demo}
\hfill \smallsquare\vskip 3mm
\begin{rema}\rm
The proof of Proposition 3.1 of \cite{CT} invoked above uses
arguments close to those encountered in the proof of Theorem \ref{edmonton}. A
notable difference is that, in the notation of that proof, the existence of
cohomology classes of degree 1 on the schemes $X_R$ directly implies the existence of a
global class of degree 1 on $X$, thanks to the ``arithmetic'' hypothesis of
Brauer--Manin type, without imposing a geometric hypothesis such as hypothesis $(i)$ of
Theorem \ref{edmonton}.
\end{rema}
\bigskip
{\noindent\small{\em Acknowledgements.} We warmly thank Burt Totaro for
communicating to us the proof of \ref{AH}, and Bruno Kahn for several
enlightening discussions. A large part of this work was done
during a visit of the first author to Budapest in the framework of the
intra-European programme BUDALGGEO of the R\'enyi Institute. The second author also
thanks OTKA (project no.\ K 61116) for its support.}
\bigskip
\section{Introduction \label{intro}}
The luminous galaxy NGC~3718 (UGC~6524; Arp~214; PRC D-18)
and its dwarf companion NGC~3729
form a galaxy pair in the loose Ursa Major group \citep{Tu96}.
Morphologically NGC~3718 is quite peculiar: see Figure~\ref{f_optical}.
A strong dark dust lane resembling that in Centaurus~A
\citep[NGC~5128; ][]{Du79}
runs almost edge-on and straight across the central bulge.
Further out, the dust lane diverges into several smooth filaments.
It then twists by almost 90\arcdeg\ into an
`S' shape, forming a diffuse spiral in the stellar light which led
\cite{RC3} to classify the galaxy as a peculiar barred spiral.
As in Centaurus~A, an active nucleus is largely hidden behind the dust
lane.
A compact radio continuum source less than 0.2\,pc across
has a brightness temperature in excess of $10^7$\,K \citep{Na05};
\cite{Kr07} see a jet stretching 0.5\arcsec\ or 40\pc\ to the north-west.
Optical spectroscopy shows a LINER of Type 1.9 \citep{Ho97}
with weak broad emission at H$\alpha$
and a strong narrower line of [O{\sc i}] at 6300\AA.
The \HI\ maps of \cite{sc85} showed that the dusty gas
does not lie in the plane of the stellar disk,
but forms a complex three-dimensional structure.
Some lines of sight pass more than once through the gas layer,
giving rise to multiple velocity peaks;
but the velocity field is strikingly bisymmetric.
Schwarz was able to describe the gas layer as a violently warped disk made up of material following concentric but tilted orbits, which twisted by roughly 90\arcdeg\ between the inner and outer radii.
The strong straight portion of the dust lane arises where the orbits
turn nearly edge-on to our line of sight, at roughly 200\arcsec\ radius.
\cite{Po04} mapped molecular gas in the inner part of the dust lane
using the CO and HCN lines.
\cite{Kr05} combined those data with interferometric maps
at $\sim 2$\arcsec\ resolution
in the CO $1 \rightarrow 0$ line at 3\,mm.
They showed that the CO emission
traces an inward extension of the warp that \cite{sc85} derived for the \HI\ gas.
NGC~3718 is included as `related object' D-18 in the Polar Ring Catalog of \cite{PRC}.
Because polar ring systems contain gas orbiting in more than one plane, these rare objects
constitute one of the few observational probes of the
three-dimensional mass distribution of galaxies.
\citet{Sp90} presented a dynamical model for NGC 3718 to explain the complex shape of the twisted \HI\ disk mapped by \cite{sc85}.
According to this model we see the underlying disk galaxy almost face-on, with the ring gas in near-polar orbit about it.
The tilted gas orbits precess about the symmetry axis of
the flattened central galaxy and its dark halo.
Orbits at smaller radius precess more rapidly in the galaxy's
gravitational field, so the gas disk becomes twisted.
This dynamical model reproduced the main features of Schwarz's tilted-ring fit.
It was consistent with a spherical dark halo, and indicated an age for the gas disk of $3-4$~Gyr.
To test this model, we mapped the system in the 21\cm\ line
of neutral hydrogen with the Very Large Array (VLA) radio telescope.
Our new observations, described in Section~\ref{observ},
improve on those of \cite{sc85} in sensitivity
and in both velocity and spatial resolution.
In Section~\ref{datacube} we discuss the gas distribution
and compare our results with previous observations.
In Section~\ref{tiltmodels} we present tilted-ring fits for the gas motions measured in \HI\ and CO, using optical images to resolve ambiguity about where the gas lies in front of the stellar body.
In Sections~\ref{kinwarp} and \ref{dynwarp} these are interpreted as showing a near-polar disk of gas that has become twisted by the differential precession of the gas orbits in the galaxy's aspherical gravitational potential.
Table~\ref{tablebasic} gives basic information on NGC~3718 and NGC~3729.
We adopt the distance of 17\Mpc\ given by \citet{Tu98} for the Ursa Major group:
there, 1\arcmin\ is equivalent to 4.945\kpc, 1\arcsec\ = 82.4\pc,
and 13\arcsec\ = 1.07\kpc.
NGC~3718 is a very luminous galaxy with
$L_B \approx 3 \times 10^{10}$\lsun, while for
NGC~3729 $L_B \approx 7 \times 10^{9}$\lsun.
\citet{Tu96} classified NGC~3718 as T=1 (Sa) and NGC~3729 as T=2 (Sab)
on the basis of their optical and near-infrared images.
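The scale conversions quoted above follow directly from the small-angle approximation at the adopted 17\,Mpc distance. A minimal sketch (the helper name `arcsec_to_pc` is ours; the input numbers are from the text):

```python
import math

D_MPC = 17.0  # adopted distance to the Ursa Major group (Tully 1998)

def arcsec_to_pc(theta_arcsec, d_mpc=D_MPC):
    """Physical scale subtended by a small angle at distance d_mpc."""
    theta_rad = math.radians(theta_arcsec / 3600.0)
    return d_mpc * 1.0e6 * theta_rad  # distance in pc times angle in radians

print(round(arcsec_to_pc(1.0), 1))         # 82.4 pc per arcsec
print(round(arcsec_to_pc(60.0) / 1e3, 3))  # 4.945 kpc per arcmin
print(round(arcsec_to_pc(13.0) / 1e3, 2))  # 1.07 kpc for the 13-arcsec beam
```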
\begin{figure}
\plottwo{f1_left.pdf}{f1_right.jpg}
\caption{Left, B-band image of NGC~3718 taken by E. Wehner
with the WIYN 0.9-m telescope,
showing the strong central dust lane and diffuse `spiral arms'.
North is up and East is to the left; the image covers
$9\arcmin \times 13.5\arcmin$.
Right, $R$-band image of the central region taken by J. Gallagher
with the 3.5-m WIYN telescope; the dust lane is close to edge-on, with the bright nucleus seen to the north. The dark feature seen closest to the nucleus is at PA
$\approx125\arcdeg$.}
\label{f_optical}
\end{figure}
\section{Observations in the 21cm Line and Data Reduction \label{observ}}
We used the VLA in the C configuration in four
different observing runs in March and April 1992,
for a total observing time of 26 hours and 28 minutes on source.
To obtain the required velocity coverage and resolution, we observed
using two IFs, tuned at slightly different frequencies in order to
almost double the spectral coverage.
Parameters of the observations under proposal AS649 are listed in Table~\ref{tableobs}.
Compared to the observations of \citet{sc85}, we improve the velocity
resolution to 5\kms\ from 33\kms, and the spatial resolution from a
25\arcsec\ $\times$ 31\arcsec\ beam to a 13\arcsec\ circular beam.
The complete data reduction was done using the Astronomical Image
Processing System (AIPS). The four data sets were calibrated
independently, both for amplitude and phase gain errors that vary
with time and for those that vary with frequency. The absolute flux
scale was determined by observing 3C286, which has a well-known flux
density. After this, the four databases were combined into one.
Inspection of a first mapping of the result allowed us to determine
line-free channels at both edges of the band. The average of these
channels was subtracted from the uv data set using the AIPS task
UVLIN. This new data cube, which now contains just line emission, was
used throughout in all subsequent mapping and cleaning.
The continuum map shows point sources at the central locations of both
NGC~3718 and NGC~3729.
Table~\ref{tablecontinuum}
lists positions and fluxes of these point sources, with uncertainties
0.5\arcsec\ and 1.0~mJy respectively.
The sources coincide to within this accuracy with the positions given by \citet{VeSa01} at 20\,cm and by \citet{Kr05} at 3\,mm;
we take them to represent the center of each galaxy.
The presence of a dust lane prevents an accurate optical
position for NGC~3718, but the various values in the literature agree
with this radio position within the error margins.
In their Appendix~B, \citet{VeSa01} quote $11.4 \pm 0.4$~mJy for the
continuum source in NGC~3718, in reasonable agreement with our value of 14.4~mJy; we find 7.9~mJy for NGC~3729, while
they give a higher flux of $18 \pm 0.9$~mJy.
The data were Fourier transformed using the AIPS task IMAGR, using a
robustness parameter of 0.0, resulting in a circular 13\arcsec\
beam.
A proper choice of robustness \citep{br95,br99}
results in better sensitivity than when using uniform weighting,
without the strong non-Gaussian beam effects
normally associated with natural weighting.
We made two data cubes: one at the full 5~\kms\ velocity resolution,
and one smoothed along the frequency axis to
10~\kms\ velocity resolution.
The maps with 10~\kms\ resolution showed no additional structure in the outer galaxy, so
the 5~\kms\ resolution data were used throughout in this paper.
Both cubes were cleaned to a $1~\sigma$ noise level:
0.39~mJy/beam for the full resolution data and
0.30~mJy/beam for the frequency-smoothed data,
as given in Table~\ref{tableobs}.
\section{Neutral Hydrogen Datacube for NGC~3718 and NGC~3729}\label{datacube}
\subsection{Global Results}\label{globalresults}
\begin{figure}
\includegraphics[height=10cm]{f2.jpg}
\caption{Global profiles of \HI\ emission in both NGC~3718 (solid line)
and NGC~3729 (dashed line).
Fluxes have been corrected for primary beam attenuation.}
\label{f_gp}
\end{figure}
We list global \HI\ properties for NGC~3718 and for NGC~3729
in Table~\ref{tableglobal}.
For both galaxies, we used a method which corrects for the mismatch
between the dirty beam and the clean beam in the residual map
\citep{jo95} to determine the \HI\ flux in each channel map,
and corrected for the attenuation of the VLA primary beam.
The global \HI\ profiles in Figure~\ref{f_gp} show no sign of
absorption against the weak radio continuum source in either galaxy.
For NGC~3718, the \HI\ flux integral of 118~Jy\kms\ agrees with that
found by \citet{sc85} and is 20\% lower than the value of
\citet{VeSa01}.
Our flux integral of 3.8~Jy~\kms\ for NGC~3729 agrees with the latter authors, but is roughly 5 times lower than the value given by \citet{sc85}.
A recalculation using the map in Figure~3 of that paper yields a much smaller flux integral, so the result quoted in \citet{sc85} may have been in error.
For NGC~3718, estimates from single-dish observations vary between 90~Jy~\kms\ and
150~Jy~\kms\ \citep{hr89}.
Thus there is little gas in an extended component that would be missed in our maps.
In NGC~3718 we find $8 \times 10^9$\msun\ of \HI\ gas, about twice as
much as in the Milky Way, while the galaxy is about 50\% brighter in
stellar light.
The ratio \MHoverLB=0.3 is about average for the
sample of gas-rich S0 and Sa galaxies studied by \citet{No05}.
NGC~3729 has only $3 \times 10^8$\msun\ of \HI\ and is gas-poor
compared to a normal Sab galaxy; we find \MHoverLB=0.04 while
\MHoverLB=0.1 would be typical \citep{rh94}.
The regular, steep-sided and symmetric profile of NGC~3718
suggests that the gas has had time to settle into a steady state.
Between the points at which the emission falls to 20\% of its peak
value we measure a width $W_{20}$=476\kms.
If the \HI\ followed pure circular orbits, our measured line width
would yield the rotation speed directly:
$W_{20} = 2 V_{max} \sin i$, where $V_{max}$ is the maximum rotation
speed in the galaxy disk, inclined at angle $i$ to face-on.
\citet{VeSa01} find that in disk galaxies we must subtract about
20\kms\ from $W_{20}$
to correct for random motions in the gas.
For NGC~3718 this would imply $V_{max} \sin i \approx 230$\kms, with
little gas in regions where the circular speed is higher.
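Explicitly,
\[
V_{max} \sin i \approx \tfrac{1}{2}\left(W_{20} - 20\,{\rm km\,s^{-1}}\right)
 = \tfrac{1}{2}\,(476 - 20)\,{\rm km\,s^{-1}} = 228\,{\rm km\,s^{-1}}
 \approx 230\,{\rm km\,s^{-1}}.
\]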
Based on the mean of the velocities at 20\% of peak flux,
we adopt the systemic velocity \Vsys=995\kms\ for NGC~3718
and \Vsys=1063\kms\ for NGC~3729.
For NGC~3718, \citet{VeSa01} derived 993\kms\ from the midpoint of their
\HI\ global profile, and 990\kms\ by examining the position-velocity
diagram along PA=195\arcdeg.
For NGC~3729 the agreement is even closer: \citet{VeSa01}
find \Vsys=1060\kms\ and 1063\kms\ for the two methods respectively.
\subsection{Channel maps}\label{channelmaps}
\begin{figure}
\includegraphics[height=20cm]{f3.jpg}
\caption{Channel maps for the \HI\ distribution in NGC~3718 at
intervals of roughly 20\kms.
Our adopted systemic velocity \Vsys=995\kms\
lies midway between the last channel map
in this figure and the first in Figure~\ref{f_chmap_b}.}
\label{f_chmap_a}
\end{figure}
\begin{figure}
\includegraphics[height=20cm]{f4.jpg}
\caption{Continued from Fig.~\ref{f_chmap_a}:
channel maps for the \HI\ distribution in NGC~3718.}
\label{f_chmap_b}
\end{figure}
Figures~\ref{f_chmap_a} and \ref{f_chmap_b} show the channel maps
for gas in NGC~3718.
In a warped disk made up of gas on concentric but tilted circular
orbits, gas at each velocity above the systemic velocity \Vsys\
should have a counterpart at the same interval below \Vsys, at a
position point-reflected about the galaxy center.
In Figure~\ref{extreme_channels}, emission from gas in two extreme
channels centred at 765\kms\ and 1222\kms\ has been superposed to show this symmetry.
\begin{figure}
\includegraphics[height=15cm]{f5.jpg}
\caption{Superposed channel maps for the \HI\ gas in
NGC~3718 at velocities displaced roughly 230\kms\ on either side of
the systemic velocity.
Lower contours, in red, show gas centred at 1221.8\kms;
upper contours, in blue, show gas centred at 765.3\kms.}
\label{extreme_channels}
\end{figure}
The extreme channel maps containing \HI\ emission are at 755\kms\
and 1232\kms,
separated by almost exactly our measured width $W_{20} = 476$\kms.
Channel maps at 5\kms\ lower and higher velocity are empty.
The emission in these channels extends from 30\arcsec\ to 400\arcsec\ from the center, so
$V_{rot} \sin i$ should be between 230\kms\ and 240\kms\
over this entire radial range.
Within 30\arcsec, either \HI\ gas is largely absent, or
$V_{rot} \sin i$ is considerably lower.
Within 300\arcsec\ of the center the band of emission in Figure~\ref{extreme_channels} is narrow, suggesting that we see the gas orbits within 10\arcdeg\ to 20\arcdeg\ of edge-on.
Tracing the ridge line of the emission in the extreme channels then
gives us the position angle of the gas orbits, as plotted in
Figure~\ref{f_ringangles}.
The kinematic major axis swings from close to PA=100\arcdeg\ at
30\arcsec\ from the center to PA=190\arcdeg\ at radius 400\arcsec.
\subsection{Blanking and moment maps}\label{blank}
The cube of data can be viewed as a rectangular array of velocity (or
frequency) profiles. It is standard practice to reduce the spectral
line data further by forming maps containing the value of the various
moments of each profile. The zeroth moment is a map
showing the spatial distribution of total hydrogen; all velocity
information is lost. In calculating the zeroth moments we restrict
ourselves to that part of the profile where emission is present; this
avoids contamination of the total HI map with noise. Our method of
separating emission and noise is automatic: we convolved the cube to a
40\arcsec~\keer~40\arcsec\ beam, and masked (blanked) all pixels in the {\it high} resolution cube which were below a $3 \sigma$ noise level in the {\it low} resolution cube. The moment maps are constructed from the unblanked pixels only.
This method avoids the addition of unrelated noise to the total HI map, and at the same time misses little of the low-level HI emission.
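In outline, the blanking procedure works as follows (a minimal sketch with {\tt numpy}/{\tt scipy}; a Gaussian kernel stands in for convolution to the 40\arcsec\ beam, and the array shapes and noise level are placeholders, not our actual data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blank_cube(cube, sigma_noise_low, smooth_sigma=3.0, clip=3.0):
    """Mask a spectral-line cube using a spatially smoothed copy.

    Pixels whose value in the *smoothed* (low-resolution) cube falls
    below clip * sigma_noise_low are blanked (set to NaN) in the
    high-resolution cube; moment maps are then built from the rest.
    """
    # Smooth each channel map spatially (axes 1, 2); the velocity
    # axis (axis 0) is left untouched.
    low = gaussian_filter(cube, sigma=(0, smooth_sigma, smooth_sigma))
    mask = low >= clip * sigma_noise_low
    blanked = np.where(mask, cube, np.nan)
    return blanked, mask

def moment0(blanked, dv):
    """Zeroth moment: integrate the unblanked emission over velocity."""
    return np.nansum(blanked, axis=0) * dv
```

Only pixels that survive the low-resolution threshold contribute to the total-\HI\ map, which is what keeps unrelated noise out while retaining faint extended emission.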
\begin{figure}
\includegraphics[height=18cm]{f6.jpg}
\caption{Total hydrogen in NGC~3718 and NGC~3729, corrected for the
primary beam attenuation. The lowest contour level is at
$3.3 \times 10^{19}$~atoms~cm$^{-2}$ or 0.26\msun~pc$^{-2}$,
approximately at the 3-$\sigma$ noise level.
Higher contours are at 1.0, 3.3, 10.0, and 33.4 in units of $10^{20}$
atoms~cm$^{-2}$.
The positions of the central continuum sources are marked with
crosses; the beam size is indicated by the small circle in the lower
left hand corner.}
\label{f_th}
\end{figure}
Figure~\ref{f_th} shows the resulting map.
Within 100\arcsec\ of the center, we see a narrow dense ridge of \HI\
emission along PA=140\arcdeg,
coinciding with the dark dust lane of Figure~\ref{f_optical}.
At larger radii this ridge swings counter-clockwise into a `S' shape.
The left panel of Figure~\ref{f_optical} shows that the diffuse
`spiral arms' that we see in the starlight correspond to regions where the \HI\ density in Figure~\ref{f_th} rises above
$3.3 \times 10^{20}$~atoms~cm$^{-2}$ or 2.6\msun~pc$^{-2}$.
This gas density barely reaches the threshold of 3--10\msun~pc$^{-2}$ normally required for widespread star formation in a galaxy disk \citep[\eg][]{sch04}.
The tilted-ring model developed for the \HI\ layer in Section~\ref{tiltmodels} below, and illustrated in Figure~\ref{f_velfield}, implies that the projected density along the `spiral arms' is increased by warping in the gas layer.
The true surface density is even further below the normal threshold for star formation.
The stellar `spiral arms' extend to roughly 250\arcsec, while the pattern of \HI\ emission is bisymmetric to about 7\arcmin\ or 35\kpc\ from the center, and the \HI\ disk can be traced to a radius of 500\arcsec\ or 41\kpc.
To the southeast a spiral-arm fragment extends to a long streamer, apparently ending in a gas cloud projected 12\arcmin\ or 59\kpc\ from the center.
This arm fragment and a symmetrically placed structure to the northeast are also visible in the left panel of Figure~\ref{f_optical} as star-forming regions.
As in NGC~1058 and NGC~6946 \citep{fe98, bo05, pr07}, both coherent spiral patterns in the gas and continuing star formation are present far beyond the radius where gravitational instability should be strong enough to initiate them.
Smoothing our data cube of 10~\kms\ velocity resolution further to a 40\arcsec\ beam yields a noise level of 0.5~mJy/beam. A map of total \HI\ made from this smoothed cube shows no emission more extended than Figure~\ref{f_th}, down to a surface density of 0.1\msun~pc$^{-2}$ or $1.3 \times 10^{19}$~atoms~cm$^{-2}$. In particular, there is no bridge of emission linking NGC~3718 with NGC~3729.
\begin{figure}
\includegraphics[height=18cm]{f7.jpg}
\caption{Mean velocity field (first moment map) of the \HI\ emission
in NGC~3718 and NGC~3729; the tilted ring model of Section~\ref{tiltmodels} for the warped
gas layer of NGC~3718 is superposed.
Contours are spaced at intervals of 50\kms\ around the systemic
velocity of 995\kms. All the contours in the gas streamer on the
northeast side of the disk of NGC~3718 are at 795\kms.
Beyond about 200\arcsec\
from the center, where velocity profiles are singly
peaked, the values in this map are representative of the radial
velocity of the gas at that position.
The beam size is indicated by the small circle in the lower
left hand corner.}
\label{f_velfield}
\end{figure}
Beyond 200\arcsec\ from the center, we see only a single velocity peak along each line of sight, and the velocity dispersion is generally below 10\kms.
Here the first-moment map of Figure~\ref{f_velfield} describes the velocity field of the gas.
It shows a pattern characteristic of a warped rotating disk: the kinematic major axis (where the velocity is furthest from systemic) twists with radius into an `S'-shape.
The orderly rotation and low velocity dispersion suggest that the structure is at least a few orbits old.
Beyond about 7\arcmin\ or 35\kpc, the gas of the spiral-arm fragments seen on both sides of the disk along PA=120\arcdeg\ appears to share in the rotation, although it does not form part of a complete ring.
The long streamer curving northwards away from the east side of the
disk is continuous in both position and velocity.
Taking the maximum radius as 500\arcsec\ and the circular speed there as 220\kms\ (see below) yields a dynamical mass
M$_{dyn} = 5 \times 10^{11}$\msun, so that M$_{dyn}$/L$_K \approx 7$ in solar units.
This is much larger than the value of unity that is typical of an old
stellar population \citep{be03};
so the galaxy must contain substantial dark matter.
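As an order-of-magnitude check, the quoted dynamical mass follows from the Keplerian estimate $M_{dyn} = V^2 r / G$ with the numbers above (constants in SI; this is an enclosed-mass estimate, not a detailed mass model):

```python
# Rough enclosed-mass estimate M_dyn = V^2 r / G for NGC 3718.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30        # solar mass, kg
KPC = 3.086e19         # kiloparsec, m

V = 220e3              # circular speed at the last measured point, m/s
r = 41 * KPC           # 500 arcsec corresponds to ~41 kpc

M_dyn = V**2 * r / G / MSUN
print(f"M_dyn ~ {M_dyn:.1e} Msun")   # a few times 10^11 Msun
```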
The small galaxy NGC~3729 is projected 11\arcmin\ to the east,
and shows a clear signature of rotation
in gas that extends to 1\arcmin\ or 5\kpc\ radius.
From their K-band images, \cite{Tu96} find an isophotal ellipticity
$e = 1-b/a = 0.32$.
If we assume the disk to be round (although the galaxy is
classified as barred), it is inclined 48\arcdeg\ from face-on.
Further assuming that the \HI\ gas shares this plane yields a
dynamical mass M$_{dyn} = 35 \times 10^9$\msun\ and M/L$_K \approx 2$ in solar units.
Here we cannot draw strong conclusions about the presence of dark matter.
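One way to reproduce the quoted 48\arcdeg\ is the standard oblate-spheroid relation $\cos^2 i = (q^2 - q_0^2)/(1 - q_0^2)$, with observed axis ratio $q = 1 - e$ and an intrinsic axis ratio $q_0 = 0.2$; this is our reconstruction of the calculation, not necessarily the exact one used:

```python
import math

def incl_deg(eps, q0=0.2):
    """Inclination from face-on (degrees) for an oblate disk of
    intrinsic axis ratio q0 seen with isophotal ellipticity eps."""
    q = 1.0 - eps
    cos2 = (q * q - q0 * q0) / (1.0 - q0 * q0)
    return math.degrees(math.acos(math.sqrt(cos2)))

print(incl_deg(0.32))   # ~48 deg from face-on, as adopted for NGC 3729
```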
\begin{figure}
\includegraphics[height=16cm]{f8.jpg}
\caption{Position-velocity plot through the center of NGC~3718, and
along the apparent HI ridge at PA=140\arcdeg.
The axes are labeled
relative to the systemic velocity \Vsys=995\kms\
and the center of NGC~3718: right ascension increases to the left.
The lowest contour is at 0.75 mJy/beam;
higher contours are odd multiples.
The sense of the velocity axis is chosen for comparison with the figures of \cite{Kr05}.
}
\label{f_xv}
\end{figure}
In the central parts of NGC~3718 the velocity profiles are neither
singly-peaked nor symmetric.
Figure~\ref{f_xv} shows a position-velocity plot through the center at a position angle of 140\arcdeg,
along the ridge of bright HI emission visible in Figure~\ref{f_th}.
This plot is highly symmetrical, as expected for gas in circular orbit about the galaxy center.
There are two main components:
the very strong inner one shows velocities rising steeply to 230\kms\
at 80\arcsec\ from the center,
while in the outer component rotation speeds increase almost linearly to 60\kms\ at 300\arcsec\ radius.
We interpret the slower-rotating gas as following an orbit at larger
radius; we see only the portion projected close to the center,
where the radial velocity is small.
On the western side and at negative velocities,
there is a third and much weaker component with an intermediate slope.
Looking towards any point along this line within 80\arcsec\ of the
center, we would see a double or triple peak in the \HI\ velocity profile.
If the \HI\ gas forms a continuously warped disk
and we look once through it in the outer parts,
then each line of sight must cross the gas layer an odd number of
times, so we expect triple profiles.
There is a slight indication that the weak third component may have a
counterpart to the east at positive velocities.
\cite{Kr05} made interferometric maps of the molecular gas associated
with the inner part of the dust lane.
They combined several pointings with the single-dish observations of
\cite{Po04} to probe the dust lane to 70\arcsec\
radius with $\sim 2$\arcsec\ resolution
in the CO $1 \rightarrow 0$ line at 3\mm.
Their Figure~17 displays a position-velocity diagram along an axis at
PA=130\arcdeg.
Like our Figure~\ref{f_xv}, it is highly symmetric about the center,
with emission at velocities rising to 220\kms\ at 70\arcsec\ radius,
and an even smaller-scale structure where speeds reach 250\kms\
within 10\arcsec\ of the center.
This nuclear component would correspond
to an edge-on disk of diameter 1.5~kpc.
If atomic and molecular gas share the same kinematics, then \HI\ must
be largely absent within 30\arcsec\ of the center;
otherwise, our Figure~\ref{extreme_channels} would not show a gap in high-velocity emission close to the center.
\citet{rc94} measured velocities in the \Halpha\ line of ionized gas
in the central regions of NGC~3718. Along PA=130\arcdeg, they found
velocities rising to $260 \pm 20$\kms\ within 80\arcsec\ of the
center, which is consistent with the results in CO.
\section{Tilted-ring models for the \HI\ gas}\label{tiltmodels}
\subsection{Fitting a tilted-ring model}
Because of the very high degree of symmetry in the channel maps of
Figures~\ref{f_chmap_a} and \ref{f_chmap_b}, we follow \cite{sc85}
in modeling the gas within a radius of 500\arcsec\
as a strongly-warped but otherwise symmetric disk.
The disk is made up of rings of material, following near-circular
orbits that are concentric but tilted.
Because the emission does not peak symmetrically about a mean velocity
at each point on the sky, we cannot use tasks such as {\sc rotcur}
\citep{Be87, Be89} to determine the ring orientations by fitting to the
mean velocity field, as measured by the first-moment map.
Instead, we built the task {\sc inspector} in {\sc gipsy}
\citep{vt01} to compare
the predictions of such a model to various two-dimensional cuts
through the three-dimensional cube of data.
Following the convention of \cite{Ro75} and \cite{Be89}, we measure
the position angle $p$ of each gas orbit anti-clockwise from north to
the line of nodes (the kinematic major axis) on the receding side of
the galaxy.
Note that this definition of $p$ is 180\arcdeg\ different from that of
\cite{sc85}.
The orbital inclination $i$ runs from zero, where the spin axis points
towards the observer, through 90\arcdeg\ for an edge-on ring, to
180\arcdeg.
Neglecting the effect of both random motion in the gas and our finite
beam size, {\sc inspector} calculates the expected velocities at which
each ring of \HI\ should contribute to a given longitude-velocity cut,
or the positions at which its emission should appear in a given
channel map.
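The geometry that {\sc inspector} evaluates can be sketched in a few lines (an illustrative reimplementation, not the {\sc gipsy} task itself; sky-orientation conventions are simplified, and random motions and beam smearing are neglected as in the text): each azimuth $\theta$ around a ring of radius $r$, inclination $i$ and line-of-nodes position angle $p$ maps to a sky offset and a line-of-sight velocity.

```python
import numpy as np

def project_ring(r, incl_deg, pa_deg, vrot, vsys=0.0, n=360):
    """Sky offsets and line-of-sight velocities for one tilted ring.

    theta is the azimuth in the ring plane, measured from the
    receding line of nodes; pa_deg is the position angle of that
    line of nodes and incl_deg the ring inclination.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    i = np.radians(incl_deg)
    p = np.radians(pa_deg)
    # In-plane coordinates: foreshorten the apparent minor axis by
    # cos(i), then rotate by the position angle of the line of nodes.
    xp = r * np.cos(theta)
    yp = r * np.sin(theta) * np.cos(i)
    x = xp * np.sin(p) + yp * np.cos(p)
    y = xp * np.cos(p) - yp * np.sin(p)
    # Only the component of the circular motion along the line of
    # sight is observed.
    vlos = vsys + vrot * np.cos(theta) * np.sin(i)
    return x, y, vlos
```

For an edge-on ring ($i = 90\arcdeg$) the positions collapse onto the line of nodes and the extreme velocities $v_{sys} \pm V_{rot}$ occur at the ring's apparent ends, which is why the extreme channel maps trace the position angle so directly.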
Fixing the central velocity at 990\kms\ gave a slightly better fit than the value of 995\kms\ that we derived from the global profile.
We placed the ring centers at the radio continuum source, and adjusted the rotation speeds and the ring angles interactively,
using {\sc inspector} to compare model predictions with the
position-velocity cuts and channel maps.
\begin{figure}
\includegraphics[height=16cm]{f9.jpg}
\caption{Tilted ring model for the warped \HI\ disk in NGC 3718.
In the top panel, red lines and open circles show the position angle $p$ of the receding
line of nodes, as defined by the ridge lines of intensity along the three extreme channel maps above and below the systemic velocity (at 754.9\kms, 760.1\kms, 770.5\kms,
1232.2\kms, 1227.0\kms\ and 1216.6\kms).
Filled circles and a dashed or solid line show our tilted-ring models {\sc inspector}1 and {\sc inspector}2 respectively.
The crosses and green short-dashed line show the tilted-ring fit derived by \citet{sc85}, while the triangles and the blue (dotted) line show the fit from Figure~6 of \cite{Jo04}.
The purple long-dashed line shows the warped disk of CO from \citet{Kr05}.
The second panel shows inclination $i$, and the third panel the assumed or fitted circular speed \Vrot.
The bottom panel gives $V_{rot} \sin i$, the maximum speed along the line of sight.
The red lines with long and short dashes show the constraint derived in Section~\ref{channelmaps}
above, and the red star shows the innermost velocities seen in CO by \citet{Kr05}.}
\label{f_ringangles}
\end{figure}
Figure~\ref{f_ringangles} shows that the position angle of the gas orbits is the best-determined quantity.
In the top panel, we see that at radii $r>40\arcsec$ the results from
{\sc inspector} are in excellent agreement with those obtained by
tracing the ridge line of emission in the extreme velocity channels.
They match fairly well to the model fit by \cite{Kr05} to the CO observations: see below.
Near the galaxy center, the apparent major axis of the gas orbits lies
roughly east-west, almost orthogonal to the major axis of the stellar
light.
It then turns counterclockwise towards north-south, twisting quite
sharply at smaller radii and then more slowly beyond 300\arcsec.
The most difficult quantity to determine is the rotation speed. Initially, we used the rotation curve fit by \cite{sc85} to the earlier \HI\ observations.
This yielded the model {\sc inspector}1. However, the CO velocities of \cite{Kr05} show a rise to 250\kms\ within 10\arcsec\ of the center.
Our 13\arcsec\ synthesized beam is roughly 1\kpc\ across, so we do not expect to resolve a rapid central rise in the rotation curve.
For the model {\sc inspector}2, we began our iteration with \Vrot\ set at 250\kms, and decreased it in the outer parts only when we could not otherwise obtain a good fit.
Figure~\ref{f_velfield} shows the geometry of this tilted-ring model; the run of position angle and inclination is shown in Figure~\ref{f_ringangles}.
Figures~\ref{f_pos_chmap} -- \ref{f_ring_lvcut} compare the model
predictions with position-velocity cuts and channel maps.
The lower panels of Figure~\ref{f_ringangles} show the runs of
inclination and rotation speed.
The multiply-peaked velocity profiles illustrated in
Figure~\ref{f_xv} require that the warped gas disk passes through
edge-on with $i=90$\arcdeg; we place this transition between 160\arcsec\ and 220\arcsec.
Within this radius, lines of sight can pass three times through the disk.
The position angle in this region of nearly edge-on gas orbits is
155\arcdeg--175\arcdeg, running along the dark central dust lane in the left panel of Figure~\ref{f_optical}.
The velocity field of the \HI\ gas is exactly the same for a ring
of inclination $i$ and one at $180\arcdeg - i$;
we use the dust distribution in the right panel of Figure~\ref{f_optical} to resolve this ambiguity.
There, we see the bright nucleus to the north of the dust lane; so the south side of the dusty gas disk is closest to us.
The gas recedes on the east side, so its spin axis points away from
us, meaning that $i >90\arcdeg$.
The warp appears smooth as the disk twists through edge-on,
so we follow \cite{sc85} in assuming that the inclination
decreases monotonically to $i < 90\arcdeg$ at larger radii.
\cite{sc85} constrained the position angle of the edge-on gas orbit by examining the emission peak closest to the center at velocities close to systemic. He found that the centroid of that peak moved along a line in $PA=-23\arcdeg$ as the velocity decreased through \Vsys. This is the behavior expected along an edge-on circular orbit in $PA=157\arcdeg$. We repeated this exercise for the present data set as a consistency check, and find that the central peak moves along $PA=135\arcdeg$.
Our measured velocity gradient corresponds to a ring at radius 90\arcsec, where
the top panel of Figure~\ref{f_ringangles} shows that the position angle indeed reaches 135\arcdeg. Thus the gas orbit at this radius is very close to edge-on.
Within 300\arcsec\ of the center the \HI\ gas orbits are less than 10\arcdeg\ from edge-on.
So the measured speed $V_{rot} \sin i$ should be very near to the orbital speed itself.
Our stipulation that $V_{rot}$ should decrease monotonically means that our model predicts too high a value for $V_{rot} \sin i$. To avoid this we would need a rotation curve like that of the model {\sc inspector}1, which peaks beyond the optical radius $R_{25}$.
Closed loops in the channel maps at 1190.6\kms\ in Figure~\ref{f_chmap_a} and at 796.4\kms\ in Figure~\ref{f_chmap_b} show that either the rotation speed must drop in the outer disk, or the gas orbits turn closer to face-on.
We find that both effects are present.
At large radii the shape of the total HI map in Figure~\ref{f_th}
shows that the outermost gas orbits turn to $i \sim 60$\arcdeg, or
$\sim 30\arcdeg$ from edge-on. They cannot become much more face-on,
or the predicted east-west extent of gas in channel maps
near the systemic velocity becomes much larger than observed,
and the two arms of the ``fork'' in Figures~\ref{f_pos_chmap} and
\ref{f_neg_chmap} are too wide-open.
To reproduce the closed loops, we had to reduce the model rotation
speed to about 220\kms\ near the outer edge.
This behavior is consistent with the 10\%--30\% drop in rotation speed that
\cite{No07} found to be common in massive S0 and Sa galaxies, with
rotation speeds above 200\kms.
Far from the galaxy center, emission in the channel map
near 990\kms\ extends almost east-west for about 200\arcsec\ on each side
of the center.
However, the fit at this velocity (shown in the top left panel in both Figures~\ref{f_pos_chmap} and \ref{f_neg_chmap}) is better on the east (left) side than the west.
Also, Figure~\ref{f_th} shows that the gas furthest to the east and west does not seem to be part of a complete ring.
So we treat our model with caution within 40\arcsec\ and beyond 400\arcsec\ radius.
We see in Figure~\ref{f_ringangles} that the molecular gas follows the same warped disk structure as the inner \HI\ layer.
\citet{Kr05} fitted a tilted-ring model to describe the CO
kinematics.
They chose a model rotation curve close to that of \citet{sc85}:
the rotation speed $V(r)$ is taken as 235\kms\ within 100\arcsec\
of the center, rising linearly to 255\kms\ at 130\arcsec.
Their Figure~12 displays the derived run of tilt angle with radius,
relative to a reference plane inclined by 70\arcdeg\ (or 110\arcdeg)
to the plane of the sky,
and with the {\it approaching} line of nodes at PA=-60\arcdeg.
With respect to that reference plane, their model takes the \twist\ angle to increase with radius $r$ as
$twist \propto \cos(\tilt) \times r/V(r)$ (compare Equation~\ref{eqnprec} below).
Taking the reference inclination as $i = 110$\arcdeg\ and
using the tilt and twist angles kindly supplied by Dr. Krips,
we recovered the inclination and position angles of their model
relative to the sky plane, as shown in Figure~\ref{f_ringangles}.
The position angle agrees well with what we derive from the \HI\
observations.
The inclination oscillates because of the form that they chose for the twist,
but the product $V_{rot} \sin i$ is very close to that for the \HI\ layer.
\begin{figure}
\includegraphics[height=16cm]{f10.jpg}
\caption{Tilted ring model {\sc inspector}2 compared with channel maps at velocities greater than our adopted central velocity of 990\kms.
Crosses indicate emission from each of the model rings;
the size of the cross increases proportionally with the ring radius.
The central velocity for each map is given in \kms\ in the top left corner; maps with the larger labels are displaced from the central velocity by the same amount as the maps in corresponding panels of Figure~\ref{f_neg_chmap}. Other channels are chosen to illustrate features such as the closed contours at 1187\kms.}
\label{f_pos_chmap}
\end{figure}
\begin{figure}
\includegraphics[height=16cm]{f11.jpg}
\caption{As Figure~\ref{f_pos_chmap}, but for channel maps at velocities below the central velocity.
Maps with the larger velocity labels are displaced from 990\kms\ by the same amount as the maps in corresponding panels of Figure~\ref{f_pos_chmap}.}
\label{f_neg_chmap}
\end{figure}
\begin{figure}
\includegraphics[height=16cm]{f12.jpg}
\caption{Tilted ring model compared with longitude-velocity cuts
taken along the east-west direction.
The velocity of the front (closest) portion of each ring is shown by an open blue square, and the rear (more distant) portion by a red cross.
Large squares enclose symbols at radius 400\arcsec; large triangles show rings at 300\arcsec.
Large red and blue circles with inscribed crosses show rings at 200\arcsec; in the central cut only, these circles almost coincide and are shown as a single light symbol, very close to the central velocity.
Rings at 100\arcsec\ appear in the central cut only, as large circles with central dots.}
\label{f_ring_lvcut}
\end{figure}
These sets of derived quantities each
represent an eyeball fit to a constrained parametric model.
This contrasts with systematic fitting techniques such as {\sc rotcur}
that are applied to galaxies with a single-valued velocity field.
Because the velocity field represents an integral over the full data cube, it is smooth and relatively insensitive to the clumpy distribution of emitting gas, and can be compared directly with a model in which
gas orbits are uniformly filled. In a galaxy like NGC~3718 we must
work with the full 3-D data cube, where the patchy emitting gas lies
close to a warped and folded 2-D surface.
\cite{Jo04} present a model for the \HI\ layer in NGC~3718 from TiRiFiC, a new method \citep{Jo07} which fits a tilted-ring model automatically to the full data cube. Their observations at Westerbork had a resolution of 12\arcsec.
Figure~\ref{f_ringangles} shows that their run of position angle is very similar to ours.
The run of inclination differs; \cite{Jo04} estimate that the gas orbits turn through edge-on closer to the center, at 80\arcsec\ radius and in $PA \approx 130\arcdeg$, and that at larger radii the gas remains further from edge-on than indicated by our model fits.
In the central 240\arcsec\ the run of $V_{rot} \sin i$ falls below the constraint that we derived from the channel maps of Section~\ref{channelmaps}.
The implied rotation curve is not monotonic, rising from 210\kms\ with maxima at 120\arcsec\ and 320\arcsec.
The differences between the sets of curves in Figure~\ref{f_ringangles} illustrate the difficulties of the fitting methods, the limitations of the data, and deviations from uniformly filled concentric circular orbits.
\subsection{Relation between the gas layer and the stellar disk}
In Figure~\ref{f_optical}, we seem to see the stellar disk of NGC~3718 close to face-on.
We cannot easily measure its orientation from the isophotes, because of the obscuring dust.
From near-infrared photometry in the H band (1.6\,$\mu$m) within 50\arcsec\ of the center,
\cite{PeWi93} find isophotes elongated in PA=112\arcdeg, with
ellipticity $\epsilon \equiv 1 - b/a = 0.17$ (see their Table~4).
\citet{Tu96} give $\epsilon = 0.58$ at PA=195\arcdeg, measured
between 150\arcsec\ and 250\arcsec\ radius
(see their Figure~8 and Table~2),
which would correspond to a round disk seen 55\arcdeg\ to face-on
(assuming an intrinsic axis ratio $b/a=0.2$).
In fact the elongation is caused by the stellar light of the `spiral
arms' seen in Figure~\ref{f_optical}.
Marc Verheijen kindly supplied us with two images at K band (2.2\,$\mu$m) taken by \cite{Tu96}; only the lower-resolution image with 2.052\arcsec\ pixels was used in their paper.
Neither shows any sign of the dust lane, even at the center.
From their higher-resolution image with 0.753\arcsec\ pixels,
we measure an ellipticity $\epsilon \equiv 1-b/a = 0.11$~to~0.12
at 25\arcsec -- 27\arcsec, which is
within the first exponential scale length of the disk but beyond most
of the bulge light (see below).
This would correspond to a round disk with intrinsic $b/a = 0.2$
inclined 28\arcdeg\ to face-on.
The major axis at PA~$\approx 12$\arcdeg\ is almost the same as at
large radii.
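The inclination quoted here follows from the standard oblate-spheroid relation $\cos^2 i = \left[(b/a)^2 - q_0^2\right]/(1 - q_0^2)$ with intrinsic axis ratio $q_0 = 0.2$; a quick numerical check of the numbers above:

```python
import math

def incl_from_axis_ratio(q, q0=0.2):
    """Inclination from face-on (degrees) for an oblate disk of
    intrinsic axis ratio q0 with observed axis ratio q = b/a."""
    cos2 = (q * q - q0 * q0) / (1.0 - q0 * q0)
    return math.degrees(math.acos(math.sqrt(cos2)))

# ellipticity 0.11--0.12 gives b/a = 0.88--0.89
print(incl_from_axis_ratio(0.885))   # ~28 deg from face-on
```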
However, oval distortions of 10\% are common in galaxy disks \citep{rz95, kk04}, especially among earlier types \citep{ry06}.
In what follows, we assume that the position angle \pg\ where the
galaxy disk intersects the plane of the sky lies in the range $\pg = 195\arcdeg \pm 20\arcdeg$.
\citet{HS98} measured velocities along PA=15\arcdeg,
and find a rise to roughly 100\kms\ at 20\arcsec --30\arcsec\ radius
on both sides of the center.
According to their Figure~1 the southwest side of the stellar disk is receding, just as for the outer \HI, so the receding line of nodes lies near PA=195\arcdeg.
If the stellar disk is indeed inclined at $i$=28\arcdeg,
then for circular speeds close to 250\kms\ we would expect
to see motions of about 110\kms\ along the kinematic major axis.
So these observations are consistent with a round stellar disk inclined with its apparent major axis close to their slit position.
\citet{HS98} find a central velocity dispersion of 193\kms\ (including
the factor $f_{bulge}$ in their Table~1), which drops to about 100\kms\
at 30\arcsec.
From this photometric and kinematic evidence, the galaxy disk appears to be nearly face-on, as suggested by the dynamical model developed by \cite{Sp90} for the warped and twisted gas layer.
However, that model took the plane of the stellar disk to be inclined with $\ig \approx 20\arcdeg$ at a position angle close to PA=--90\arcdeg\ \citep[see][]{Sp02}.
Over most of its radial extent, the \HI\ disk is then tilted by about 80\arcdeg\ with respect to this reference plane, and its twisting could be explained by differential precession.
But if the stellar disk indeed has this orientation, we would expect streaming speeds to be low along the direction PA=15\arcdeg\ explored by \cite{HS98}.
It seems more likely that the stellar disk intersects the sky plane along a line closer to PA=15\arcdeg\ (or equivalently PA=195\arcdeg).
Accordingly, we abandon the earlier model.
The gas orbits of our tilted-ring model nowhere lie close to the plane of the galaxy's stellar disk.
The disk of NGC~3718 seems to be that of an S0 galaxy, substantially free of cool gas.
This is similar to NGC~2655 \citep{Sp08}, an S0/a galaxy with a strong asymmetric central dust lane.
It contrasts with NGC~660 (PRC C-13), a starburst galaxy with a twisted polar ring that is tilted by roughly 55\arcdeg\ to the stellar disk \citep{vD95}.
In NGC~660 the host galaxy's disk is as gas-rich as a typical Sc galaxy, and
contains a quarter of the \HI\ gas in the system.
Although it contains $8 \times 10^9$\msun\ of \HI\ gas, with
$4 \times 10^8$\msun\ of molecular material \citep{Po04}, the modest far-infrared luminosity of $5 \times 10^8$\lsun\ \citep[][with our adopted distance]{Ri88} shows that this galaxy is making few new stars.
The compact star clusters visible in the right panel of Figure~\ref{f_optical} are bluer than their surroundings \citep{Tr07}, indicating relative youth, and the stellar `spiral arms' noted in Section~\ref{blank} show that some starbirth still occurs in this galaxy.
But because the cool gas is not concentrated towards the main stellar disk, its density may be too low for efficient star formation.
\section{Why should the gas layer be warped and twisted?}\label{kinwarp}
What might have caused the gas layer in NGC~3718 to become warped and twisted?
A disk of material following orbits tilted away from the galaxy equator will tend to twist because of differential precession.
In an oblate galaxy, consider a cloud of gas following an orbit tipped by an angle
$\alpha$ away from the equator,
passing upward through the midplane.
The cloud will make a complete vertical oscillation and again cross the midplane traveling upward, before it has made a whole orbit about the center.
The tilt of its orbit remains constant, but the line of nodes, where that orbit crosses the symmetry plane, regresses in the direction opposite to the orbital motion
\citep[see \eg\ Section~5.8 of][]{Goldstein}.
The angular precession rate $\Omega_p$ for an orbit at radius $r$ inclined by an angle
$\alpha$ is related to the circular speed $V(r)$ by
\begin{equation}
\Omega_p = { 1 \over {r V(r)} } \,
{ {\partial \langle \Phi \rangle} \over {\partial \cos \alpha} }
\equiv - \epsilon_{\Phi} \cos \alpha V (r)/r
\; {\rm or} \; - \epsilon_{\Phi} \cos \alpha \Omega (r)
\, .
\label{eqnprec}
\end{equation}
Here $ \langle\Phi \rangle $ represents the gravitational potential
energy, averaged over the ring \citep[\eg][]{Sp86},
and $\Omega (r) = V/r$ is the orbital angular speed.
The quantity $\epsilon_{\Phi} $ measures the flattening of the potential; it is positive for an oblate system, so $\Omega_p$ is negative.
Because the orbital periods are shorter towards the center
the inner orbits will regress faster,
unless the galaxy's flattening increases strongly with radius.
Thus a gas disk made up of material on concentric tilted orbits generally develops a leading twist.
Conversely, a disk in a prolate galaxy potential
will twist in a sense that trails the rotation.
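As a numerical sketch of this differential regression (not from the paper: the flat 230~\kms\ rotation curve is quoted later in the text, while the flattening $\epsilon_{\Phi}=0.1$ and tilt $\alpha=45\arcdeg$ are purely illustrative assumptions):

```python
import numpy as np

# Omega_p = -eps_Phi * cos(alpha) * Omega(r), with Omega(r) = V/r.
# Assumed values: flat rotation curve V = 230 km/s (quoted in the text);
# flattening eps_Phi = 0.1 and tilt alpha = 45 deg chosen for illustration.
V = 230.0                 # km/s
EPS_PHI = 0.1
ALPHA = np.radians(45.0)
KM_PER_KPC = 3.0857e16
S_PER_GYR = 3.156e16

def omega_p(r_kpc):
    """Precession rate in rad/Gyr for a tilted circular orbit at r_kpc."""
    omega = V / (r_kpc * KM_PER_KPC) * S_PER_GYR   # orbital rate, rad/Gyr
    return -EPS_PHI * np.cos(ALPHA) * omega

# Inner orbits regress (negative rate) ten times faster than orbits
# ten times further out, so a tilted disk shears into a leading twist.
assert omega_p(4.0) < 0
assert abs(omega_p(4.0) / omega_p(40.0) - 10.0) < 1e-9
```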
The very regular velocity field of the outer \HI\ disk of NGC~3718 suggests that it has been in place for at least a couple of orbits.
For a rotation speed of 230~\kms, the orbital period at 400\arcsec\ is roughly 900\,Myr, implying an
age of at least 2\,Gyr.
Rotation times in the inner disk are much shorter, and at 40\arcsec\
radius this would correspond to at least 20 orbits.
The position angle of the gas orbits has twisted by about 120\arcdeg\
between these radii.
If that twist represents precession in a system of roughly constant flattening $\epsilon_{\Phi}$,
then by Equation~\ref{eqnprec} we must have
$| \Omega_p | < \Omega /60$ for the inner orbits, or
$\epsilon_{\Phi} \cos \alpha < 1/60$.
At first glance, this implies that the gravitational potential must be
improbably spherical to prevent the disk from twisting around
itself many times in its $\geq 2$\Gyr\ lifetime.
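The bound just quoted follows from simple bookkeeping; a sketch of the arithmetic using the numbers in the text:

```python
# Numbers from the text: the orbital period at 400 arcsec is ~900 Myr,
# so a >= 2 Gyr lifetime means >= 2 orbits there; with periods scaling
# roughly as r for a flat rotation curve, the orbit at 40 arcsec has
# completed >= 20 revolutions while twisting by only ~120 degrees.
n_orbits = 20
twist_turns = 120.0 / 360.0
ratio = twist_turns / n_orbits            # |Omega_p| / Omega
assert abs(ratio - 1.0 / 60.0) < 1e-12    # i.e. eps_Phi * cos(alpha) < 1/60
```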
The twisted dust lane of the peculiar S0 galaxy NGC~4753 presents a similar dynamical problem.
Here \cite{SCKD92} found a gas disk extending to roughly 7 times the radius of the inner edge at 13\arcsec\ (1\kpc\ at an assumed distance of 15.8\Mpc)
that appears to wrap by almost two complete turns around the galaxy
pole. They argue that the outer disk is at least six orbits old, corresponding to 40 orbits at the inner edge.
Precessional twisting then implies a nearly spherical mass distribution with axis ratio $b/a \geq 0.84$.
The stellar body is flattened with an axis ratio roughly 2:1 ($b/a=0.5$), so these authors conclude that the dark halo must be both round and gravitationally dominant even well within the main stellar body of this luminous
galaxy.
A different model for the warped disk of dusty gas in Centaurus~A was proposed by \cite{vA82}: this had the great advantage of representing a stable equilibrium state.
It requires that the galaxy's mass distribution is not axisymmetric, but a triaxial spheroid tumbling about its short axis.
The potential then supports a family of stable `anomalous orbits' described by \cite{He82}, which make up a warped disk.
The dusty gas should settle onto these orbits, in a process studied numerically by \cite{HaIk85}, \cite{HaIk88}, \cite{SCD88} and \cite{CoSp96}.
Near the galaxy center, in the core of the gravitational potential, the anomalous orbits circle the long axis.
The pole tilts with radius, until at large radii the orbits
lie in the `equatorial' plane perpendicular to the short axis,
circling it in the opposite sense to the one in which the figure tumbles.
The anomalous orbits make up a twisted disk which follows a
{\it restricted} warp:
orbits at all radii cross a single line of nodes, which at each
instant lies along the intermediate axis of the tumbling triaxial galaxy.
We can understand the anomalous family as a set of orbits that
precess about the short axis of the galaxy at exactly the right rate
to keep up with the tumbling galaxy potential.
Orbits in a triaxial galaxy will precess about the long or short axis
at an average rate given by Equation~\ref{eqnprec},
where $\epsilon_{\Phi}$ is now the average
flattening about that axis \citep[\eg][]{SCD84}.
When the triaxial figure tumbles about its own short axis at a rate
$\Omega_t$, the stable anomalous orbit family consists of just those orbits that precess at the rate $\Omega_p = \Omega_t$.
For orbits circling the short axis we have $\epsilon_{\Phi} >0$, so
Equation~\ref{eqnprec} requires the orbital motion to be retrograde with $\Omega_t<0$.
If the system is equally aspherical at all radii and $V (r)$ is
constant, then $\Omega_p$ is constant when
\begin{equation}
\cos \alpha \propto r \, .
\label{anomaloustilt}
\end{equation}
As \cite{He82} point out, the anomalous family tilts over to reach the
galaxy's equatorial plane at the radius where the rate $\Omega_p$ of
free precession for an orbit that is only slightly tilted from the equator becomes equal to the tumbling speed $\Omega_t$.
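A minimal sketch of the resulting tilt run under the idealized assumptions behind the relation $\cos \alpha \propto r$ (constant flattening, flat rotation curve); the equatorial radius $r_{eq} = 35$ kpc here is an arbitrary illustrative value:

```python
import numpy as np

# cos(alpha) grows linearly with r, reaching the equator (alpha = 0)
# at r_eq, where near-equatorial free precession matches the tumbling
# rate.  r_eq = 35 kpc is an arbitrary choice for illustration.
r_eq = 35.0
r = np.linspace(1.0, r_eq, 50)
alpha = np.degrees(np.arccos(r / r_eq))

assert alpha[0] > 88.0      # innermost orbit nearly polar
assert alpha[-1] < 1e-6     # outermost orbit equatorial
```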
To test whether the anomalous orbit family can represent the warped \HI\ layer of NGC~3718, we specify the position angle of each gas orbit by
the unit vector {\bf l} along the receding line of nodes,
where the orbit crosses the plane of the sky.
The spin axis is along {\bf n} which is perpendicular to
{\bf l}, and we take {\bf m} in the plane of the ring to complete the
right-handed set {\bf l, m, n}.
We take Cartesian coordinates $x,y,z$ from the galaxy center, with $z$
pointing towards the observer, $x$ to the east and $y$ to the north.
In these coordinates, the vectors {\bf l, m, n} are related to the inclination $i$ and position angle $p$ of Section~\ref{tiltmodels} by
\begin{eqnarray}
{\bf l} = (- \sin p , \cos p, 0) ~, &
{\bf m} = (- \cos i \cos p , - \cos i \sin p , - \sin i)
\nonumber \\
{\rm ~and~} &
{\bf n} = (- \sin i \cos p , - \sin i \sin p , \cos i) \, .
\end{eqnarray}
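As a consistency check (not part of the paper), the triad defined above can be constructed numerically; the sample angles are the stellar-disk values adopted later in the text:

```python
import numpy as np

def ring_vectors(i_deg, p_deg):
    """Right-handed triad (l, m, n) for a ring with inclination i and
    position angle p, in coordinates with x east, y north, z toward
    the observer, following the expressions above."""
    i, p = np.radians(i_deg), np.radians(p_deg)
    l = np.array([-np.sin(p), np.cos(p), 0.0])
    m = np.array([-np.cos(i) * np.cos(p), -np.cos(i) * np.sin(p), -np.sin(i)])
    n = np.array([-np.sin(i) * np.cos(p), -np.sin(i) * np.sin(p), np.cos(i)])
    return l, m, n

# Check with the stellar-disk orientation adopted later (ig=28, pg=195):
l, m, n = ring_vectors(28.0, 195.0)
assert np.allclose(np.cross(l, m), n)            # right-handed: l x m = n
assert np.allclose([l @ m, m @ n, n @ l], 0.0)   # mutually orthogonal
```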
The planes defined by two rings with normals along vectors
${\bf n_1, n_2}$ intersect along the line ${\bf n_1 \times n_2}$.
A restricted warp is one in which this vector points in the same
direction for all pairs of rings.
We can specify the equatorial plane of the galaxy by the apparent position angle \pg\ and inclination \ig\ of a circular ring lying in that plane.
(Note that if the stellar body is triaxial, the position angle of the galaxy's apparent major axis may differ from \pg.)
Defining the corresponding vectors \llg, \mmg\ and \nng,
the angle \tilt\ between a gas ring and that plane is given by
\begin{equation}
\cos(\tilt) = {\bf n} \cdot \nng =
\cos i \cos \ig + \sin i \sin \ig \cos(p - \pg) \, .
\end{equation}
The ring intersects the galaxy's equatorial plane along the direction
${\bf n} \times \nng$.
We define the \twist\ to be the angle in the equatorial plane between
${\bf n} \times \nng$
and the vector \llg\ where the equator intersects the sky plane; so
\begin{eqnarray}
& \sin(\tilt) \cos(\twist) =
\llg \cdot {\bf n} \times \nng =
[\cos \ig \sin i \cos(p - \pg) - \cos i \sin \ig ] =
{\bf n} \cdot \mmg \; ,
\nonumber \\
& {\rm and}~
\sin(\tilt) \sin(\twist) =
\mmg \cdot {\bf n} \times \nng =
\sin i \sin (p - \pg) =
- {\bf n} \cdot \llg \; .
\end{eqnarray}
With these definitions, $\tilt = i$ and $\twist = p-\pg$ when the galaxy's
equatorial plane coincides with the plane of the sky so that
$\ig=0$.
These are related to the angles $\theta, \beta$ of \cite{sc85}
by $\tilt = \theta$ and $\twist = 180\arcdeg - \beta$.
The pair of angles (\tilt, \twist) describes the same ring as
($-\tilt, \twist + 180\arcdeg$).
Just as for the inclination $i$, we usually take $0 < \tilt < 180\arcdeg$ so that $\sin(\tilt)$ is positive.
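A small numerical sketch of these definitions (the sample ring angles are arbitrary; the `np.clip` guards against round-off in the arccosine):

```python
import numpy as np

def tilt_twist(i_deg, p_deg, ig_deg, pg_deg):
    """Tilt and twist (degrees) of a ring (i, p) relative to a reference
    plane (ig, pg), from the dot-product formulae above."""
    i, p, ig, pg = np.radians([i_deg, p_deg, ig_deg, pg_deg])
    cos_t = np.cos(i) * np.cos(ig) + np.sin(i) * np.sin(ig) * np.cos(p - pg)
    st_cos_w = np.cos(ig) * np.sin(i) * np.cos(p - pg) - np.cos(i) * np.sin(ig)
    st_sin_w = np.sin(i) * np.sin(p - pg)
    tilt = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    twist = np.degrees(np.arctan2(st_sin_w, st_cos_w))
    return tilt, twist

# Special case quoted above: a reference plane in the sky plane (ig = 0)
# gives tilt = i and twist = p - pg.
t, w = tilt_twist(50.0, 130.0, 0.0, 105.0)
assert abs(t - 50.0) < 1e-9 and abs(w - 25.0) < 1e-9
```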
\begin{figure}
\includegraphics[width=15cm]{f13.jpg}
\caption{For the ring model {\sc inspector2}, angles with respect to a reference plane at inclination \ig\ and position angle \pg.
The top panel shows \twist\ measured relative to the ring at 140\arcsec.
The red solid line refers to the `restricted warp' obtained for \ig=95\arcdeg\ and \pg=105\arcdeg: the \twist\ is nearly constant in the range 100\arcsec~$< r <$ 400\arcsec.
The blue line with dots refers to the orientation of the stellar disk implied by the K-band isophotes: $\ig$=28\arcdeg, $\pg$=195\arcdeg; the twist is leading.
The green dash-dotted line is for $\ig$=152\arcdeg\ and $\pg$=195\arcdeg, the other possible orientation corresponding to the K-band isophotes.
The twist now has a trailing sense.
The middle panel shows the angle $\alpha$ between each ring and the galaxy equator in the corresponding model: $\alpha = 90\arcdeg - \tilt$ for the restricted warp and $\alpha = \tilt$ for the other models.
The ring inclination decreases monotonically with radius from polar to equatorial for all the models except the last.
The bottom panel shows $\cos \alpha$; the dashed line shows the relation $\cos \alpha \propto r$ of Equation~\ref{anomaloustilt}.
The run of \twist\ and \tilt\ for the `restricted warp' model and that with $\ig$=28\arcdeg, $\pg$=195\arcdeg\ is given in Table~\ref{tabletilttwist}.}
\label{f_twotwist}
\end{figure}
The solid curves of Figure~\ref{f_twotwist} show the angles for the \HI\ orbits, relative to a plane with \ig=95\arcdeg\ and \pg=105\arcdeg,
close to that of the gas ring at $r=40$\arcsec, our innermost reliably-determined orbit.
As \cite{sc85} found, over the best-measured portion of the disk the rings fall close to a restricted warp.
(Schwarz's reference plane corresponds to $i_g = 104\arcdeg, p_g = 114\arcdeg$ which is close to the position angle of our gas at 50\arcsec\ radius.)
The twist angles of all the rings between 40\arcsec\ and 300\arcsec\ fall within 10\arcdeg\ of a common value.
If we take the ring at 40\arcsec\ to define the polar plane of the galaxy's potential, then
in $40\arcsec < r < 400\arcsec$ the orientation of the gas orbits changes almost exactly from polar to equatorial, as predicted by the model of \cite{vA82}.
However, there are two difficulties with interpreting the gas motions as material following anomalous retrograde orbits.
First, the outer \HI\ orbits should lie perpendicular to the short axis of the triaxial potential.
Simulations combining dissipative gas with cold dark matter \citep{Du94, Ka04, Ba05} find that this axis tends to be perpendicular to the galaxy disk.
Those models are directly applicable to luminous galaxies like NGC~3718, where the stellar disk should dominate the gravitational force within the optical radius \citep[\eg][]{Ka06}.
But we saw in Section~\ref{tiltmodels} that the luminous disk is close to face-on, unlike the outer \HI\ orbits.
Also, the shape of the warp does not follow the prediction of
Equation~\ref{anomaloustilt}, given by the straight dashed line in the bottom panel of Figure~\ref{f_twotwist}.
As \cite{sc85} noted, within 200\arcsec\ of the center the tilt of
the disk changes too rapidly to fit this description.
The stellar disk appears to be tipped by about 28\arcdeg\ from face-on (Section~\ref{tiltmodels} above).
So we have either $i_g$=28\arcdeg\ or $i_g$=152\arcdeg, depending on whether the east or the west side of the disk is closer to us.
The blue curves with dots in Figure~\ref{f_twotwist} shows that for the combination
$i_g$=28\arcdeg, $p_g$=195\arcdeg, the central gas disk is very nearly polar while
orbits at larger radius tilt monotonically towards the galaxy plane.
The bottom panel shows that it follows rather closely the curve $\cos \alpha \propto r$
that we expect for the anomalous orbit family.
However, this is far from a restricted warp: the twist angle increases
by about 120\arcdeg\ between the inner and outer radii.
The twist has a leading sense relative to the orbital motion, as we expect for differential precession in an oblate galaxy potential.
When we choose $i_g$=152\arcdeg, the dash-dot line in Figure~\ref{f_twotwist} shows that the tilt is not monotonic.
The gas orbits are nearly polar near the center, then dip by about 20\arcdeg, and then warp up towards the pole and over it at 420\arcsec.
Because the flattened stellar body of the galaxy should dominate the gravitational force
within 150\arcsec\ of the center (see below), the potential should be oblate and we expect the precessional twist to have a leading sense.
Instead, we see a trailing twist.
The shape of the gas layer is neither a stable configuration,
nor a natural result of precessional twisting.
We do not consider this model further, but adopt $i_g$=28\arcdeg\ for the stellar disk.
If the ring is twisted about the galaxy pole, it cannot be in a
steady state (\eg\ \cite{HuTo69}, \cite{Sp86}, \cite{ArSp94}).
Instead, the gas orbits suffer differential precession according to
Equation~\ref{eqnprec}.
In the following section, we construct a mass model for the galaxy,
to examine how fast the gas layer should twist up,
and for how long the warped gas disk might have been in place.
This model is similar to those presented for Centaurus A by \cite{Q92}, \cite{Q93} and \cite{Sp96},
where the complex warped structure results from an interplay between
self-gravity and precession.
\section{An illustrative dynamical model}\label{dynwarp}
We now examine how a tilted gas disk would precess in a simple axisymmetric mass model for the disk, bulge and dark halo of NGC~3718.
Our model for the stellar component is based on near-infrared photometry, to minimize the effect of dust absorption.
From deep K-band images that trace the galaxy's light beyond 300\arcsec\ from the center,
\citet{Tu96} measure a scale length $h_R = 56.6\arcsec = 4.66$\kpc.
This is longer than the $h_R = 27\arcsec$ found by \cite{PeWi93} in the H band, and by \cite{Ch02} from 2MASS K-band photometry, but both of these images were much shallower.
Making our own ellipse fits to the published image of \citet{Tu96} with 2.052\arcsec\ pixels confirms the longer scale length; so for our illustrative model we adopt $h_R = 55\arcsec$.
We follow \citet{Tu96} in taking the stellar disk to have
an intrinsic axis ratio $b/a = 0.2$, and calculate the forces from this thickened exponential disk as described in \citet{SS90}.
We take the bulge to be spherical.
The K-band radial profile in Figure~8 of \cite{Tu96} appears roughly exponential outside 20\arcsec, as does the 2MASS profile measured by \cite{Ch02}.
Since our innermost measured \HI\ orbits lie further out,
it does not matter how we distribute the bulge mass within that radius.
For simplicity we model the bulge as a Plummer sphere with core radius $r_P = 10\arcsec$.
The rotation curve of Figure~\ref{f_ringangles} remains nearly flat to
400\arcsec, which is at least four scale lengths of the stellar disk.
This, and the high mass-to-light ratio M$_{dyn}$/L$_K \sim 7$ (Section~\ref{blank}),
requires an extended dark halo.
We use the pseudo-isothermal form of \citet{SS90}, parametrized by
the flattening $\epsilon = 1 - b/a$ of the equidensity contours,
the core radius $r_H$ and the asymptotic circular speed $V_H$.
For a given halo flattening, we set $r_H$ and $V_H$ by requiring that
the combined rotation curve $V(r)$ from the bulge, disk and halo remains
approximately flat.
To calculate $V(r)$ we use the equatorial rotation curve of
the halo from Equation~4 of \citet{SS90},
but we average the inward pull of the exponential disk over a circular
ring at the appropriate tilt angle.
The halo torque is computed as described in \citet{SS90}.
The torques from the halo and the flattened disk are added to
calculate the precession rate according to Equation~\ref{eqnprec}.
We do not include the \HI\ gas mass in calculating the rotation curve: see below.
Models like that of \cite{Sp96} for the warped disk in Centaurus~A show that the self-gravity of the warped disk can also affect details of how it resists precessional twisting.
In this case the disk is very strongly twisted, so this effect is likely to be small, and we do not include it.
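An illustrative reimplementation of the rotation-curve pieces (a sketch, not the paper's actual SS90-based calculation: the thickened disk is replaced by a razor-thin Freeman disk, the halo is taken spherical, and the conversion 1\arcsec\ $\approx 0.083$ kpc follows from the quoted scale lengths):

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.30091e-6          # kpc (km/s)^2 / Msun
ARCSEC_KPC = 0.083      # ~kpc per arcsec, from h_R = 55 arcsec = 4.6 kpc

def v2_bulge(r, M_b=2e10, r_P=10 * ARCSEC_KPC):
    """Plummer sphere with core radius r_P = 10 arcsec."""
    return G * M_b * r**2 / (r**2 + r_P**2)**1.5

def v2_disk(r, M_d=5e10, h=55 * ARCSEC_KPC):
    """Razor-thin exponential disk (Freeman 1970); the paper actually
    uses a thickened disk with b/a = 0.2, which pulls slightly less."""
    y = r / (2.0 * h)
    sigma0 = M_d / (2.0 * np.pi * h**2)
    return 4.0 * np.pi * G * sigma0 * h * y**2 * (
        i0(y) * k0(y) - i1(y) * k1(y))

def v2_halo(r, V_H=200.0, r_H=10 * ARCSEC_KPC):
    """Pseudo-isothermal sphere: V^2 = V_H^2 [1 - (r_H/r) arctan(r/r_H)]."""
    return V_H**2 * (1.0 - (r_H / r) * np.arctan(r / r_H))

r = np.linspace(1.0, 35.0, 100)                       # kpc
v_tot = np.sqrt(v2_bulge(r) + v2_disk(r) + v2_halo(r))
# The combined curve stays roughly flat near the observed ~230 km/s.
assert v_tot.min() > 190.0 and v_tot.max() < 270.0
```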
\begin{figure}
\includegraphics[height=15cm]{f14.jpg}
\caption{Rotation curve and expected twisting for a dynamical model with
M$_d = 5 \times 10^{10}$\msun\ and M$_b = 2 \times 10^{10}$\msun.
The spherical halo has $r_H$=10\arcsec\ and $V_H$=200\kms.
Top: points show the rotation curve of our tilted-ring fit {\sc inspector2} in Figure~\ref{f_ringangles}; curves show the total rotation predicted from the dynamical model (solid), with the contributions of dark halo (dotted),
disk (dashed) and bulge (dash-dot).
Middle: angle \tilt\ of the \HI\ orbits from {\sc inspector2} relative to a stellar disk with \ig=28\arcdeg, \pg=195\arcdeg\ (dashed line with filled dots) and for \pg=175\arcdeg\ (open circles).
An exactly polar orbit has \tilt=90\arcdeg.
Bottom: angle \twist\ for the \HI\ orbits
(filled dots for \pg = 195\arcdeg, open circles for \pg=175\arcdeg), and the precessional twisting predicted by the mass model (line with dots for \pg=195\arcdeg, line with crosses for \pg=175\arcdeg).
The dashed curve shows the result for \pg = 195\arcdeg,
when the halo is flattened with an E3 shape.
All twists are measured relative to the gas orbit at $r=240$\arcsec.
Measured angles and consequently the predicted twists are uncertain
within 40\arcsec\ radius.}
\label{f_prec_fiducial}
\end{figure}
The top panel of Figure~\ref{f_prec_fiducial} shows the rotation curve from this model.
The disk mass is M$_d = 5 \times 10^{10}$\msun, and the halo is chosen to have a small core radius, $r_H$=10\arcsec, so that the rotation speed declines gently with radius.
To keep the rotation curve flat at the center, the bulge mass is relatively small, M$_b = 2 \times 10^{10}$\msun; the overall mass-to-light ratio is $M/L_K = 1$ in solar units.
This is a `maximum disk' model:
the disk and bulge must dominate the rotation curve within 2$h_R$
to provide the observed declining rotation curve.
We initially take the dark halo to be spherical.
The middle and bottom panels show the orientation of the gas orbits in our fit {\sc inspector2}, relative to a `galaxy' oriented with $\ig = 28$\arcdeg\ and $\pg = 195$\arcdeg.
As discussed in Section~\ref{tiltmodels} above, the viewing angles \ig\ and \pg\ for the stellar disk are also uncertain.
The predicted twist is not very sensitive to a change in the inclination \ig, but decreasing the position angle \pg\ slightly will change the sign of $\cos(\tilt)$ for the inner, near-polar orbits, and hence the sense of their precession.
The open circles show the orientation when we take \pg=175\arcdeg.
The bottom panel shows how much twisting a gas disk would suffer over our inferred minimum 2\Gyr\ lifetime, if it was initially warped but not twisted, so that its tilted orbits intersected the `galaxy' plane along a single straight line of nodes.
Comparing this prediction to the twist angles derived from our tilted-ring fit, we see that after 2\Gyr\ the model would develop roughly the observed pattern of twisting in the region between radius 40\arcsec\ and 400\arcsec, where our tilted-ring fit is most reliable.
Because the disk warps away from the pole at larger radii, it does not become as strongly twisted as the naive arguments of Section~\ref{kinwarp} would suggest.
The mass of the \HI\ disk is 16\% of that of the stellar disk in our dynamical model. If we had included it in our rotation curve fit, we would have reduced the mass of both the disk and the dark halo to compensate.
The torque from the disk would then be no more than 16\% less, and precession times would be longer by that same fraction. Our conclusion remains unchanged.
Differential precession changes the \twist\ of the gas orbits, but not their \tilt\ with respect to the stellar disk.
Why then should the gas disk have the observed run of tilt?
In the middle panel of Figure~\ref{f_prec_fiducial} we show the run of tilt angle that allows the orbits at all radii to precess together in our dynamical model.
The curve is for a halo flattened to an E3 shape, but those for a spherical halo and even an E6 halo lie nearby.
The observed run of tilt lies fairly close to this curve.
We conclude that the warped gas disk has the shape that it does, because that shape has permitted it to survive far longer than would otherwise be the case.
The dynamical model of Figure~\ref{f_prec_fiducial} assigns the maximum plausible mass to the flattened disk.
\section{Discussion}\label{discuss}
We have made high-resolution maps of the \HI\ gas in NGC~3718 and its companion NGC~3729.
Our data cube for NGC~3718 shows multiply-peaked velocity profiles and
a complex but highly bisymmetric structure.
Using {\sc inspector}, a task in {\sc gipsy}, we fitted a tilted-ring model, in which gas following near-circular orbits about the galaxy center forms a warped and twisted layer.
We confirm the conclusions of \cite{sc85}, that the prominent asymmetric dust lane marks the region where the orbits of the (dusty) gas turn edge-on to the line of sight.
The molecular gas mapped by \cite{Kr05} shares the motion of the innermost \HI\ gas.
The unusual diffuse spiral arms fall in regions where gas orbits appear to crowd together on the sky.
The arms are visible in blue light: new stars have formed in the twisted gas layer.
As in other galaxies with extended \HI\ disks \citep{sa08},
spiral structure is observed far out in the disk, where self-gravity should be too weak to provoke the gas to instability and clumping.
The warped and twisted \HI\ disk can be traced to 500\arcsec\ or 42\kpc\ from the center.
It is fairly symmetric within 7\arcmin\ or 35\kpc, where the orbital period is roughly a gigayear.
So the gas disk has probably been in place for at least a few orbits at this radius, or 2--3\Gyr.
Further out, symmetrically-placed spiral-arm fragments to the east and west are visible in both \HI\ gas and blue light.
The polar gas disk is still in the process of formation: the
eastern arm fragment continues as a streamer of gas stretching to a cloud 60\kpc\ north of the galaxy center.
Sensitive \HI\ maps increasingly reveal such long streamers and tails in the outer parts of disk systems, continuous with the regular velocity field of the galaxy, that may represent gas in the process of joining the galaxy \citep{vdhs05}.
However, the gas in polar orbit around NGC~3718 is very dusty; it is not pristine material.
NGC~3718 has been classified as a barred Sa galaxy, but this is misleading.
The apparent bar is an effect of looking through the edge-on disk of dusty gas, and the peculiar diffuse spiral arms instead represent star formation in the warped and tilted gas disk.
K-band photometry \citep{Tu96} shows an exponential disk close to face-on; we find no \HI\ gas orbiting near this plane, so the old stellar disk must be almost empty of cool gas.
Instead, NGC~3718 is typical of gas-rich early-type galaxies, where \HI\ gas is often found far outside the stellar body, and does not share the stellar kinematics \citep{No05, Mo06, Sp08}.
When we refer our tilted-ring model for the \HI\ gas to the most probable plane for the stellar disk, the innermost gas orbits are nearly polar.
We do not see gas orbiting in the plane of the stellar disk itself.
Thus NGC~3718 is indeed a polar ring galaxy: as in the archetype NGC~4650A
\citep[\eg][]{G02}, a gas-poor early-type galaxy is surrounded by a highly inclined gas-rich low-surface-brightness disk.
While the inner parts of the \HI\ disk are nearly polar, the outer orbits tip to lower inclination.
This pattern of tilt minimizes the destructive effects of differential precession, and has allowed the polar structure to survive until the present day.
The observed pattern of twisting can be explained by a dynamical model for the galaxy in which the gas orbits precess freely about the pole of the stellar disk, and the dark halo is roughly spherical.
Polar ring galaxies are one of our few tools for studying the three-dimensional shape of the dark halo.
Our models for NGC~3718 allow a round dark halo.
\cite{SCKD92} obtain a similar result for the twisted dust disk in NGC~4753, concluding that $b/a>0.8$.
The Milky Way's flattening can be estimated from the near-polar streams of stars torn from the Sagittarius dwarf galaxy, which undergo differential precession as they orbit our Galaxy.
This process yields confusing results, with some aspects of the streams pointing to a slightly oblate halo \citep[\eg][]{Jo05} and others to a prolate halo \citep{He04}: see \cite{Fe06} for a summary.
However, \cite{Jo05} favor the range $0.75 < b/a < 1.1$ for the density, and strongly disfavor a very flattened halo with $b/a < 0.6$.
By contrast, studies of the velocity fields in two polar ring galaxies
imply strongly oblate mass distributions.
When the dark halo is flattened, polar orbits are generally elongated towards the pole, and the polar rotation curve falls below that in the equatorial plane.
By comparing speeds measured for gas in the polar ring with stellar speeds in the central S0 galaxy, \cite{S94} deduced that the dark halo of NGC~4650A is considerably flattened, with $0.3 \leq b/a \leq 0.4$,
almost as flat as the stellar disk.
In the system A0136-0801, \cite{SP95} used Fabry-Perot imaging to map the two-dimensional velocity field of the polar ring in H$\alpha$ emission.
The kinematic major and minor axes were skewed away from perpendicular, a sign that the gas followed oval orbits.
Fitting a dynamical model to the velocity field in conjunction with the spatial distribution of the emitting gas yielded a flattening $b/a \approx 0.5$ for the system.
Cosmological simulations predict that the dark halos of galaxies should be fairly round.
Halos formed from cold dark matter alone should be {\it prolate,} with axis ratios 0.6--0.7: \eg\ \citet{al06}.
Adding a baryonic disk flattens the halo in the same sense as the disk \citep{Du94},
but only to an average axis ratio 0.7--0.8 \citep{Ka04,Ba05},
although \cite{Ka04} found that `aligned-disk' galaxy mergers could produce a halo as oblate as $b/a \approx 0.5$.
If the material of the polar ring was a late accretion onto the central galaxy, we would expect the halo to be flattened in the same sense as the host's disk.
If the ring gas flowed in along filaments of the `cosmic web', as \cite{ma06} propose, the dark halo should be oblate close to the host galaxy, becoming prolate and elongated along the filament further out.
\cite{bek98} suggested that the polar ring represents the disk of a low-surface-brightness galaxy that captured the dense central body by merger.
The dark halo might then be aligned with its long axes in the plane of the ring.
Polar rings indeed deviate from the Tully-Fisher relation in the sense that Bekki's model would predict \citep{Io03}; rotation speeds measured from the gas of the polar ring are higher than expected.
The dark halos of galaxies may really be quite diverse; the review of \cite{Me04} showed measurements covering the whole range $0.2 < b/a < 0.8$.
But why should we see such a pronounced difference among polar ring systems?
Perhaps this is simply observational selection.
Both NGC~4650A and A0136-0801 have `classical' polar rings, lying nearly perpendicular to the host galaxy's stellar disk.
In a galaxy with a flattened halo like NGC~4650A, a gas disk tipped as far from the perpendicular as that in NGC~3718 would rapidly become twisted beyond recognition.
Strongly tilted rings would survive preferentially in systems with the roundest halos.
Another possibility is that the halo shape depends systematically on the galaxy's luminosity.
The Milky Way, NGC 3718 and NGC 4753 are all luminous systems, while NGC~4650A and A0136-0801 are several times less luminous, with $L_B \approx 4 \times 10^9$\lsun.
\acknowledgments{We are very grateful to Elizabeth Wehner and Jay Gallagher for help with optical and near-infrared images, and especially for Figure~\ref{f_optical};
to Melanie Krips for supplying details of her model fit for the molecular gas;
and to Marc Verheijen for access to his K-band images.
LSS acknowledges support from the National Science Foundation through grant AST-00-98419.
GvM and LSS would like to thank the Kapteyn Astronomical Institute of Groningen University, Netherlands,
and the MPI for Astrophysics in Garching, Germany for hospitality while part of this work was carried out.
We are all grateful to Hugo van Woerden for his encouragement throughout this project.
Finally, we would like to thank our anonymous referee for comments that helped us to improve and shorten the paper.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The WIYN Observatory is a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatories.
The NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration (NASA).
This research has made use of NASA's Astrophysics Data System (ADS).
}
\clearpage
% arxiv_id: 0902.2555
\section{Introduction}
Poor groups of galaxies are the first level structures of galaxies in the
hierarchy of cosmic structure formation. Considerable effort has been
put in identifying such objects in redshift surveys of galaxies and it
has been found that a large number of galaxies in the local Universe
are indeed members of such groups (e.g. Huchra \& Geller 1982; Tully 1987;
Nolthenius \& White 1987; Ramella et al. 2002; Merch\'an \& Zandivarez
2002, 2005; Gal et al. 2003; Gerke et al. 2004; Lee et al. 2004;
Lopes et al. 2004; Eke et al. 2004; Tago et al. 2006, 2008;
Berlind et al. 2006; Crook et al. 2007; Yang et al. 2008).
The determination of the dynamical state
and evolution of groups is an important step for investigating
the hierarchical galaxy formation theories.
Many dynamical and morphological studies have been restricted to
compact groups (e.g. Kelm \& Focardi 2004, Da Rocha, Ziegler, \&
Mendes de Oliveira 2007; Coziol \& Plauchi-Frayn 2007). The intrinsic elongated
(mostly prolate-like)
shape of groups (e.g, Hickson et al. 1984; Malykh \& Orlov
1986; Orlov, Petrova, Tarantaev 2001;
Plionis, Basilakos \& Tovmassian 2004; Plionis, Basilakos, \&
Ragone-Figueroa 2006; Wang et al. 2008) is a very important factor for the determination
of their dynamical state (see however Robotham, Phillips \& de
Propris 2007). Tovmassian, Martinez \& Tiersch (1999)
showed that the projected length and velocity dispersion of Hickson
compact groups (Hickson 1982) are anti-correlated, and suggested that
member galaxies in these groups possibly move preferentially along
the group major axis in quasi-stable orbits (see also Tovmassian 2001, 2002).
Using the Millennium simulation, Diaz-Gimenez et al. (2008) found
a weak correlation between projected elongation and line-of-sight
velocity dispersion in physically dense compact groups, but
also in groups characterized as compact due to chance alignments along the line of sight.
Tovmassian \& Chavushyan (2000) and Tovmassian, Plionis \&
Torres-Papaqui (2006) have found that members of loose groups in which
compact groups are embedded, appear to move in similar elongated orbits
around the common gravitational center of the corresponding group. However,
such groups may represent only a special class and the study of generic
poor groups is important for our understanding of their formation and
evolution processes.
In this paper we study the relation between group morphology and dynamics, using
as a measure of group morphology the projected axial ratio of the fitted ellipse and the
projected group size, while as a measure of the group dynamical state we
use its velocity dispersion and galaxy morphological content.
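The paper does not specify its ellipse-fitting estimator; one common moments-based sketch of a projected axial ratio (all names and the mock data are illustrative) is:

```python
import numpy as np

def axial_ratio(x, y):
    """Projected axial ratio b/a from the eigenvalues of the second-moment
    tensor of member positions -- one common way to fit an ellipse; the
    paper's exact estimator is not specified."""
    cov = np.cov(np.vstack([x, y]))
    evals = np.linalg.eigvalsh(cov)       # ascending order
    return np.sqrt(evals[0] / evals[1])

rng = np.random.default_rng(0)
x = rng.normal(scale=3.0, size=500)       # elongated mock "group", true b/a = 1/3
y = rng.normal(scale=1.0, size=500)
q = axial_ratio(x, y)
assert 0.25 < q < 0.45                    # recovers roughly 1/3
```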
\section{Data and Methods}
\subsection{Group Sample Selection}
In order to investigate group dynamics it is important to use groups not contaminated, as much
as possible, by field galaxies. We stress that random projection of field galaxies
onto groups can distort the inferred shape and dynamical
parameters, as well as the morphological content. It is obvious that the probability
of a group being significantly affected by random projections is
inversely proportional to the group
galaxy membership, $n_m$. Projection of even one field galaxy may significantly alter
the dynamical parameters of poor groups consisting of a few members.
For example, Ramella et al. (2002) mention that 20\%-60\% of their groups
which mainly consist of less than ten galaxies, are expected to be contaminated by
superpositions of field galaxies.
Furthermore, the effect of discreteness in the determination of the shape of groups
with a few members is severe (e.g., Paz et al. 2006; Plionis et al. 2006), a fact
which results in artificially elongated shapes.
Moreover, Robotham et al. (2007)
claimed that poor groups with few galaxy members may have an oblate configuration. The
expected galaxy orbits and therefore dynamics in oblate groups differ from that of
groups with a prolate-like configuration, which has been shown to be the dominant
group and cluster shape (e.g., Malykh \& Orlov 1986; Plionis, Barrow \&
Frenk 1991; de Theije, Katgert \& van Kampen 1995;
Basilakos, Plionis \& Maddox 2000; Cooray 2000; Plionis et
al. 2004; 2006; Sereno et al. 2006; Wang et al. 2008).
In this study we use the 2MASS High Density Contrast (HDC) group
catalog (Crook et al. 2007) which was constructed by a friends-of-friends algorithm
(e.g., Huchra \& Geller 1982) such that the groups
correspond to an overdensity $\delta\rho/\rho \ge 80$. We have chosen
to study this catalogue and not the lower density contrast (LDC) one,
which is based on $\delta\rho/\rho\ge 12$, since
we believe that the HDC catalog
is less prone to projection, interloper contamination and
contamination by the large-scale structures from which galaxies are
accreted to the groups.
We also use the group catalog constructed by Tago et al. (2008) from the
SDSS Data Release 5, which from now on
we will refer to as ``Tago-SDSS''. Although the authors do not provide the
overdensity threshold to which their groups correspond, a crude
calculation gives $\delta\rho/\rho\simeq 260$.
The overdensity difference between the two group catalogues
is reflected in their projected size distribution with the ``Tago-SDSS'' groups
being significantly smaller than the 2MASS-HDC groups,
as can be verified by inspecting figures 3-6 and 8 further below.
Taking into account the problems discussed in the beginning of this section
we wish to limit our study to groups with more than eight members ($n_m\geq9$),
and since our aim is to study poor groups we also put an upper limit of $n_m\le 12$.
Furthermore, by studying groups in a small $n_m$ range we significantly reduce
the membership-dependent discreteness effects on their
measured shape (e.g., Paz et al. 2006; Plionis et al. 2006).
One more important issue, regarding mostly
the high $n_m$ groups, is the fact that the increasing size of the friends-of-friends
linking radii (radial and transverse),
necessary to take into account the decrease of the selection function
with redshift, tends to join at higher redshifts nearby (clustered) groups into single entities.
Therefore, the probability that the groups are real dynamical entities should
decrease with redshift.
However, both of the group catalogs used (Crook et al 2007; Tago et al. 2008)
have been constructed taking special care for
this effect and it appears that indeed the artificial trends, found in previous
group catalogs (see discussion in Plionis et al. 2004; 2006) have been significantly
suppressed. Nonetheless, by visually inspecting the apparently large and
high velocity dispersion rich groups we have found that in quite a few occasions they
appear to be the by-product of joining neighboring groups.
We would like to remind the reader that Rose (1977), Mamon (1986, 2008) and
Walke \& Mamon (1989) put forward the idea that some ordinary and compact
groups could well be a result of such projection effect (see relevant
recent works of Diaz-Gimenez et al. 2008 and Brasseur et al. 2008).
Of the groups with
$9\le n_m\le 12$ we have found, in the 2MASS-HDC sample, only one that clearly falls in
this category (No 1218; see figure 1) which we exclude from our analysis.
Indeed, the projected distribution of members of this group shows that eight of its
members compose a relatively compact group with mean $V=1713\pm233$
km s$^{-1}$, and a wide triplet located at projected distance of
$\sim 1.4 \; h^{-1}$ Mpc to the south
with mean $V=1074\pm111$ km s$^{-1}$. Another three of the supposed
members of this group are located at projected distance of
$\sim 1 \; h^{-1}$ Mpc to the south of
the first subgroup (distances are measured at the mean redshift of the
whole group)\footnote{Throughout this work we use $H_0=100 \;h$ km
s$^{-1}$ Mpc$^{-1}$ with $h=0.73$.}.
As another example,
we present in Fig. 1 the map of 2MASS-HDC group No 384 with a multiplicity $n_m=13$,
which could consist of two or three probably unrelated groups.
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.0}
\plotone{f1.eps}
\figcaption{Equal area maps of two examples of 2MASS-HDC groups
suspected of being composed of unrelated groups and field galaxies:
{\em Left Panel}: The group 1218, with $n_m=12$, consists
of two clearly distant sub-groups at a projected separation of $\sim
1.4\; h^{-1}$ Mpc (at the mean redshift of the whole
group) and a mean velocity difference of $\sim 700$ km s$^{-1}$.
This group is the only clearly ``problematic'' group in our $n_m=9-12$ sample.
{\em Right Panel:} The group 384, with $n_m=13$, possibly consists of
three separate groups with mean radial velocities
as indicated in the plot. The projected distance (at the mean redshift of the whole group)
of the northern and southern subgroups from the central
one is $\sim 2$ and $\sim 1.3 \; h^{-1}$ Mpc, respectively.}
\end{inlinefigure}
We will also analyse groups (with $n_m\ge 9$) based on their estimated virial masses
since the magnitude limited
nature of the 2MASS galaxy redshift survey implies that groups of the same multiplicity
but at different redshifts correspond to different intrinsic richness.
Indeed the difference of the $K_{\rm total}$ apparent magnitude (Jarrett et al. 2000) between
the brightest and faintest galaxy in the 2MASS-HDC groups with, for
example, 9 members is $\approx3\fmag5$ for
nearby groups with $cz \approx 300$ km s$^{-1}$, and systematically
decreases to about $1\fmag5$ for distant groups with $cz \approx7000$
km s$^{-1}$. This shows that faint group members are missed as a function of increasing
redshift and thus the distant groups, of apparently the same multiplicity as nearby ones,
are intrinsically richer in members above a given luminosity
threshold. However, if we assume that the missed faint galaxies randomly
sample the distribution of group member velocities, then the mass estimate of the
groups will not be systematically affected by the omission of these fainter galaxies and
therefore performing an analysis of low and intermediate mass groups (i.e., excluding
the apparently massive systems which could be artificial) circumvents the previously mentioned problems.
Note however that the above assumption may not be valid for virialized
groups in which dynamical friction has played a relatively important
role, and in which case their mass may be underestimated.
In order to have a sample as representative as possible of the
true underlying local group population, and to minimize the above mentioned
redshift-dependent systematic biases, we have chosen to study from the
Tago-SDSS catalog the groups within $z\le 0.043$.
Note that the 2MASS-HDC group sample is by
construction defined in the local universe ($z \le 0.033$).
\subsection{Shape and Dynamics measures}
We determine the projected group shape by diagonalizing the moment of inertia tensor,
which we construct by weighting each member galaxy by $1/K_{\rm
total}$, with $K_{\rm total}$ the apparent K-band galaxy magnitude.
This is done in order to give more weight to the luminous (and thus
massive) galaxies, which dominate in shaping the group gravitational potential.
Firstly, the galaxy equatorial positions are transformed into an equal area
coordinate system, centered on the group center of mass (which we
determine using, again, the $1/K_{\rm total}$ weighting scheme).
We then evaluate the moments:
\begin{eqnarray}
I_{11} & = & \sum_{i} w_{i}(r_{i}^{2}-x_{i}^{2}) \nonumber \\
I_{22} & = & \sum_{i} w_{i}(r_{i}^{2}-y_{i}^{2}) \nonumber \\
I_{12} & = & I_{21}=-\sum_{i} w_{i}x_{i}y_{i}
\end{eqnarray}
with $w_{i} (=1/K_{\rm total})$ the statistical weight of each member galaxy
and $r_i$ the distance of the $i^{\rm th}$ galaxy from the group center of mass.
Note that because the inertia tensor is symmetric, we have
$I_{12}=I_{21}$. Diagonalizing the inertia tensor
\begin{equation}\label{eq:diag}
{\rm det}(I_{ij}-\lambda^{2}M_{2})=0\;,
\end{equation}
where $M_{2}$ is the $2 \times 2$ unit matrix,
we obtain the eigenvalues $\lambda_{1}$, $\lambda_{2}$, from which we
define the principal axial ratio of the configuration under study by:
$q=\lambda_{2}/\lambda_{1} (\equiv b/a)$, with $\lambda_{1}>\lambda_{2}$.
As a measure of the size of the group we also calculate the mean projected galaxy-galaxy
separation and a variant (see below) which we use to estimate the group virial mass (only
for the Tago et al. groups, since for the 2MASS-HDC groups the relevant
values are provided by the catalog).
The group virial radius, used to determine the group mass, is:
\begin{equation}
R_v = \frac{n_m (n_m-1)}{\sum_{i=1}^{n_m-1} \sum_{j=i+1}^{n_m}
\left[D_L \tan (\delta\theta_{ij})\right]^{-1}}\;\;,
\end{equation}
where $D_L$ is the luminosity distance of the group (using
$\Omega_{\Lambda}=0.7$, $\Omega_{\rm m}=0.3$, $h=0.73$),
and $\delta\theta_{ij}$ is the angular $(i,j)$-pair separation.
Using the group velocity dispersion and $R_v$ we can
estimate the group's virial mass
according to:
\begin{equation}
M_v=\frac{3\pi}{2}\frac{\sigma_v^2 R_v}{G}\;.
\end{equation}
Note that $R_v$ is significantly smaller than
the maximum or the mean group galaxy-pair separation. In what follows
we will use as the major axis of each group, $a$, its mean galaxy-pair
separation and as its minor axis: $b = a q$.
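The estimators above can be sketched in a few lines. This is our own illustrative version, not code from the paper; it assumes the equal-area member positions are already expressed in projected Mpc and uses the $1/K_{\rm total}$ weights of eqs. (1)-(2):

```python
import numpy as np

# Illustrative sketch of the shape and mass estimators of Sec. 2.2
# (not the authors' actual code).  x, y: equal-area coordinates of the
# member galaxies in projected Mpc, relative to the luminosity-weighted
# group center; K: apparent K_total magnitudes; sigma_v in km/s.
G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / M_sun

def axial_ratio(x, y, K):
    """q = lambda_2/lambda_1 from the 1/K_total-weighted inertia
    tensor, eqs. (1)-(2); the eigenvalues of I are the lambda^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(K, float)
    r2 = x**2 + y**2
    I = np.array([[np.sum(w * (r2 - x**2)), -np.sum(w * x * y)],
                  [-np.sum(w * x * y),      np.sum(w * (r2 - y**2))]])
    lam2_sq, lam1_sq = np.sort(np.linalg.eigvalsh(I))
    return np.sqrt(lam2_sq / lam1_sq)

def virial_radius(x, y):
    """Harmonic-mean projected pair separation, eq. (3); the projected
    separations D_L tan(dtheta_ij) are taken directly in Mpc."""
    n = len(x)
    inv_sum = sum(1.0 / np.hypot(x[i] - x[j], y[i] - y[j])
                  for i in range(n - 1) for j in range(i + 1, n))
    return n * (n - 1) / inv_sum

def virial_mass(sigma_v, R_v):
    """Virial mass, eq. (4): M_v = (3 pi / 2) sigma_v^2 R_v / G."""
    return 1.5 * np.pi * sigma_v**2 * R_v / G
```

The conversion of equatorial positions to the equal-area plane and the unit conventions are assumptions of the sketch.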
\section{Group Morphology-Dynamics Relation}
\subsection{Background framework}
Do we expect to find {\em a priori} any morphology-dynamics relation in a sample
of self-gravitating groups of galaxies which are in dynamical equilibrium?
The answer is probably no, since in such a case the velocity dispersion of the group should
reflect its virial mass, while the group shape should be quasi-spherical since
relaxation processes will have (mostly) isotropized the initial anisotropic group phase-space.
If however a group morphology-dynamics relation is found, then it could be due to either
of two possible causes (or a combination of both):
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f2.eps}
\figcaption{The expected morphology-dynamics correlations due to the random orientation
with respect to the line-of-sight of prolate spheroids with intrinsic $q=0.5$ and
major axis $a=1$ Mpc. The red lines are the expected theoretical curves, while the black points
represent random realizations in which each group is sampled by 10 ``galaxies''.}
\end{inlinefigure}
\begin{itemize}
\item The groups of galaxies in the sample are not all virialized but at different
evolutionary stages, and therefore it is possible to find a correlation between their
velocity dispersion (as a measure of their dynamical state) and the group size and
axial ratio, since a group at its early stages of formation will have a larger
size, with respect
to its final dynamically relaxed state, and it will be more elongated, reflecting
the initial anisotropic accretion of matter along filamentary structures (e.g. West 1994).
\item Since groups of galaxies are prolate-like, as has been found by numerous studies (see
introduction), and if galaxies move predominantly along the group elongation, then
the size, the axial ratio and the velocity dispersion of the groups will depend on the
group orientation with respect to the line-of-sight.
The nearer the orientation of the three-dimensional major axis of a group is
to the line-of-sight, the smaller its size will appear, and the larger
its axial ratio and velocity dispersion (Tovmassian, Martinez \& Tiersch 1999;
Tovmassian, Plionis \& Torres-Papaqui 2006, and references therein). In effect, performing
a simple Monte-Carlo simulation, in which we randomly orient with respect
to the line of sight 1000 intrinsically prolate groups with $q=0.5$,
$a=1$ Mpc and velocity
dispersion $\sigma_v=200$ km s$^{-1}$ strictly along their major axes, we obtain in Fig.2 the red
curves, which constitute the theoretical expectation in the absence of discreteness.
If now we sample each group by 10 ``galaxies'', we again obtain
very significant $q-\sigma_v$ and $a-\sigma_v$ correlations but with a large scatter.
It is important to note that (a) in this model the $a-\sigma_v$ correlation is stronger
than the $q-\sigma_v$ correlation, while no $b-\sigma_v$ correlation is expected, and
(b) the observed scatter in Fig.2 is solely due to sampling randomly
each Monte-Carlo group by
10 ``galaxies''. More scatter should be expected, however, in a more realistic situation
in which the intrinsic range of group sizes and velocity dispersions
would have been taken into account.
\end{itemize}
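The orientation effect described in the second item can be reproduced with a short Monte-Carlo sketch. This is our own illustration of the setup behind Fig. 2, not the authors' code; the intrinsic parameters $q=0.5$, $a=1$ Mpc and $\sigma_v=200$ km s$^{-1}$ are those quoted above:

```python
import numpy as np

# Monte-Carlo sketch of the orientation scenario (our own illustrative
# version): intrinsically prolate groups with axial ratio q = 0.5,
# major semi-axis a = 1 Mpc, and galaxy motions with sigma_v = 200 km/s
# strictly along the major axis, randomly oriented on the sky.
rng = np.random.default_rng(42)

q_int, a_int, sigma_int = 0.5, 1.0, 200.0
n_groups = 1000

# Isotropic axis orientations: |cos(theta)| uniform in [0, 1], with
# theta the angle between the major axis and the line of sight.
mu = rng.uniform(0.0, 1.0, n_groups)

# Projected major axis, projected axial ratio, and line-of-sight
# velocity dispersion of motions along the major axis.
a_proj = a_int * np.sqrt(1.0 - (1.0 - q_int**2) * mu**2)
q_proj = q_int * a_int / a_proj
sigma_los = sigma_int * mu

r_q = np.corrcoef(q_proj, sigma_los)[0, 1]  # positive q-sigma_v trend
r_a = np.corrcoef(a_proj, sigma_los)[0, 1]  # negative a-sigma_v trend
```

In this pure-orientation model the projected minor axis $b = q\,a$ is constant, so no $b-\sigma_v$ correlation arises, in line with point (a) above; sampling each group with a finite number of "galaxies" would add the scatter seen in Fig. 2.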
In fact, the intrinsic spread in group sizes could effectively mask the
possible morphology-dynamics correlations,
and thus even a weak correlation should be considered as a
hint of a true underlying effect.
\subsection{Possible systematic effects}
Any systematic redshift dependence of
$\sigma_v$ and group size, introduced by the convolution of the friends-of-friends
group finding algorithm and the magnitude-limited nature of the underlying
galaxy catalogs (see discussion in Plionis et al. 2006), produces a dependence of
the group size and the velocity dispersion on redshift (at higher $z$'s one gets larger
groups with higher $\sigma_v$), which could also
result in an artificial correlation between $a$ and $\sigma_v$, but such that
$a \propto \sigma_v$ (opposite to what is predicted by either of the possibilities
discussed previously).
We have tested whether such bias is present in the presently analysed group samples
(as has been found in previous group samples; see discussion in Plionis et al.
2004; 2006) and found
that for the 2MASS-HDC samples there is a relatively weak $\sigma_v-z$ correlation
($R=0.29$ with ${\cal P}=0.02$) and
an insignificant $a-z$ correlation ($R=0.18$ with ${\cal P}=0.18$),
probably due to the special effort put by the authors
to reduce such systematics (Crook et al. 2007).
The positive $\sigma_v-z$ correlation could well be
due to the expected volume effect (i.e., at higher $z$'s a larger fraction of the group mass
function is sampled).
Regarding the SDSS-Tago groups we find no correlation whatsoever
of either $\sigma_v$ or $a$ with redshift, again an indication that the authors
managed to suppress the systematics from which many other group catalogs suffer.
\subsection{Results}
In Figs. 3-4 we plot the scatter diagrams between the group shape parameters and their
velocity dispersion for both group catalogs analysed.
It is evident that we do find the qualitatively expected correlations which are
quite strong and significant in both catalogs of groups.
The Pearson correlation coefficients $R$ and corresponding
random probabilities ${\cal P}$ for the considered group subsamples are presented
in Table 1. It is evident that the velocity dispersion
$\sigma_v$ of groups increases with increasing $q$, while it decreases
with increasing group major axis $a$ (the opposite of what would be expected if the trend
were due to the systematics discussed previously). An additional
systematic effect that acts in the direction of diluting a positive
$q-\sigma_v$ correlation is that, within any $n_m$ bin, the range of
group virial masses traced will tend to
induce an anticorrelation between $q$ and $\sigma_v$, since more massive
halos are more elongated (lower $q$) and have a higher velocity dispersion with
respect to less massive halos (e.g. Jing \& Suto 2002; Kasun \& Evrard 2005;
Allgood et al. 2006; Bett et al. 2007; Ragone-Figueora \& Plionis
2007; Wang et al. 2008). This implies that the intrinsic shape-dynamics correlation is,
in effect, stronger than the observed one.
Note also that a weak but non-significant correlation is found between the group minor axis and
velocity dispersion. As we will see further below (Fig.4) this is due
to the lower mass groups which do not show a negative $b-\sigma_v$ correlation.
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f3.eps}
\figcaption{The morphology-dynamics correlations of the 2MASS-HDC groups:
From left to right the $q-\sigma_v$, $b-\sigma_v$ and $a-\sigma_v$ scatter diagrams.}
\label{fig4}
\end{inlinefigure}
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f4.eps}
\figcaption{The morphology-dynamics correlations of the Tago-SDSS groups:
From left to right the $q-\sigma_v$, $b-\sigma_v$ and $a-\sigma_v$ scatter diagrams.}
\label{fig6}
\end{inlinefigure}
Probably a more instructive view of the morphology-dynamics correlations, which is free
of the bias of mixing poor nearby and richer distant groups of the same multiplicity ($n_m$),
is to divide the group samples into ranges of mass, according to eq.(4). To this end
we use all groups with $n_m\ge 9$ and divide each group sample into 4 bins of mass,
excluding groups with $M>10^{14}
h^{-1} \;M_{\odot}$, which actually correspond to clusters. A further reason to
exclude the high-mass groups is the contamination problem
we have identified in some apparently high velocity dispersion groups
(see Fig. 1).
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f5.eps}
\figcaption{The morphology-dynamics correlations of the 2MASS-HDC groups in different
mass ranges: From left to right the $q-\sigma_v$, $b-\sigma_v$ and $a-\sigma_v$ scatter
diagrams. The stars correspond to $12<\log M/M_{\odot}\le 13$, blue open circles to
$13<\log M/M_{\odot}\le 13.5$, green filled circles to
$13.5<\log M/M_{\odot}\le 13.75$ and magenta open squares to $13.75<\log M/M_{\odot}\le 14$.}
\label{fig4}
\end{inlinefigure}
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f6.eps}
\figcaption{The morphology-dynamics correlations of the Tago-SDSS groups in different
mass ranges: From left to right the $q-\sigma_v$, $b-\sigma_v$ and $a-\sigma_v$ scatter
diagrams. The stars correspond to $12<\log M/M_{\odot}\le 13$, blue open circles to
$13<\log M/M_{\odot}\le 13.25$, green multiple crosses to
$13.25<\log M/M_{\odot}\le 13.5$ and magenta open squares to $13.5<\log M/M_{\odot}\le 13.75$.}
\end{inlinefigure}
In Figs 5-6 and in Table 2
we present the same morphology-dynamics correlations as before, but
dividing the groups in
bins of mass. The correlations are now more evident, with different mass range groups
occupying a clearly delineated region in the $q-\sigma_v$, $b-\sigma_v$ and $a-\sigma_v$
planes.
Although the number of 2MASS-HDC groups in each mass range is small, the $q-\sigma_v$
correlations are systematic and significant (with the exception of the highest mass range
which could be affected by the previously mentioned problems). The
$a-\sigma_v$ and $b-\sigma_v$
correlations are present only for groups with $M\ge 10^{13.5} \; h^{-1}\; M_{\odot}$.
Similar and more significant correlations are found in the case of the Tago-SDSS groups.
Here however the $a-\sigma_v$ correlations are extremely significant, while there is
also a significant correlation of the minor axis with velocity dispersion.
\subsection{Orientation or Virialization?}
For the orientation paradigm to work in producing the observed correlations
it is also important to have galaxies moving predominantly along the
prolate-like group major axis.
Such galaxy motions are generally expected in the hierarchical structure formation
scenario, where groups and clusters of
galaxies are formed by anisotropic accretion and merging along filamentary
large-scale structures (e.g. West 1994). However, the possible predominance
of such galaxy orbits would also
imply that the groups are not virialized, since virialization would mix the
phase-space and erase (mostly) the memory of the initial directional
accretion (see however van Haarlem \& van de Weygaert 1993).
It is interesting to point out that the observed
$a-\sigma_v$ correlations are generally more significant than those of $q-\sigma_v$,
a fact which agrees also with the expectations of the orientation
paradigm (see Fig.2), and could be attributed to the dispersion of the minor axis, $b$, which
does not depend on the group orientation.
Therefore, although both discussed causes of the observed group morphology-dynamics
correlations should be at work, we attempt to disentangle which of the
two, if any, dominates. To this end we consider two tests, one based on
the morphological content of groups of galaxies, since a proxy of
their dynamical state should be their morphological content, and the
other based on the expectation of the virialization process to compactify the initial
dispersed group morphology.
\subsubsection{Morphological Content of Groups}
It has been shown that galaxies in
clusters evolve mainly by dynamical interactions and merging (e.g.,
Goto 2005). It also appears that mergers, strangulation as well as interactions
with the hot diffuse gas (for the richest groups) act in the group
environment and affect the morphology and gas content of member galaxies
(e.g., Barnes 1985; Zabludoff \& Mulchaey
1998; Hashimoto \& Oemler 2000; Coziol \& Plauchu-Frayn 2007; Rasmussen et al. 2008).
The efficiency of galaxy interactions in altering the morphology and
gas content of galaxy group members depends on their relative
velocity, being more efficient in low-velocity dispersion groups (e.g., Mamon
1992), which implies that such processes should be relatively frequent in the
early stages of the group dynamical evolution.
While galaxy interactions and merging occur, the host group contemporaneously
evolves dynamically, and therefore the fraction
of E/S0 galaxies should appear high in dynamically advanced (high
velocity-dispersion) groups.
Indeed, the fraction of E/S0 galaxies, $f_{E/S0}$, in groups appears to
increase with increasing group velocity dispersion
(e.g., Tovmassian, Plionis \& Andernach 2004; Aguerri, Sanchez-Janssen
\& Mu\~noz-Tunon 2007) a fact which could also
be viewed as a manifestation of the known {\em density-morphology} relation at the
group scale (Postman \& Geller 1984).
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f7.eps}
\figcaption{Dependence of the fraction $f_{E/S0}$ of early type galaxies
in groups on the group velocity dispersion $\sigma_v$ (left panel) and major-axis size (right panel).}
\end{inlinefigure}
Therefore, if we also verify for our current group samples
a correlation between morphological content, $f_{E/S0}$, and group
velocity dispersion then this would clearly suggest that the observed
range of velocity dispersions is related mostly to the group dynamical state
and not to the group orientation (in which case no $f_{E/S0}-\sigma_v$
correlation is expected).
We apply this test to the well defined 2MASS-HDC groups with $n_m=9-12$. We used those
groups with 9-10 members for which the morphological type of no more than
one member galaxy was unknown, and groups with 11-12 members for which the
morphological types of no more than two galaxies were unknown. We took
morphological types of member galaxies from the NASA/IPAC Extragalactic
Database (NED).
The total number of groups used is 33 and their
$\sigma_v-f_{E/S0}$ scatter diagram is presented in the left panel of
Fig. 7 (we also show in this plot with an empty dot the excluded No. 1218 group, suspected
of being contaminated by multiple groups; see section 2.1) and it is
evident that $f_{E/S0}$ strongly increases with increasing $\sigma_v$.
The Pearson correlation coefficient and random probability are $R=0.54$ and
${\cal P}=3\times 10^{-4}$, respectively, showing that the $\sigma_v-f_{E/S0}$
correlation is indeed very significant.
During virialization the groups should become more compact and their major axes
should decrease. If this is so, then the $\sigma_v-f_{E/S0}$ correlation implies
that there should also be an $a-f_{E/S0}$ correlation.
Indeed, the right panel of Fig. 7 shows the
corresponding correlation with coefficient $R=0.44$ and random probability ${\cal P}=0.01$.
The $q-f_{E/S0}$ correlation is weak and not significant which could well be due to
the influence of the large dispersion of the group minor axes, $b$.
\subsubsection{Minor Axes \& Group Projected Size}
In the orientation paradigm, the increase of the velocity dispersion
takes place with corresponding decrease of the projected major axis of
a group, while the minor axis remains unchanged (see Fig.2).
Meanwhile, during virialization both the projected major and minor
axes, and thus the projected surface ($S$),
of a self-gravitating system should decrease, the system becoming more
compact. Indeed, we have found that the projected minor axes of most
of the analysed samples of groups decrease with increasing velocity dispersion. Note however
that projection effects and the presence of interlopers will affect
more the projected minor axis, with respect to the major axis, of an
intrinsically elongated group, and thus weaken any true correlations
between $b$ and $\sigma_v$. This, as well as low-number statistics,
could be the reasons why the low-mass 2MASS-HDC groups (see Fig. 5)
do not show a negative $b-\sigma_v$ correlation.
Obviously, the $a-\sigma_v$ and
$b-\sigma_v$ correlations translate into a $S-\sigma_v$ correlation
with a rate of variation, in the virialization case, which is
higher than that caused by orientation (in which case only $a$
decreases with increasing $\sigma_v$).
In Fig. 8 we present the $S-\sigma_v$ correlations
for the mass-defined 2MASS-HDC and Tago-SDSS groups. It is evident
that, depending on group virial mass, the group surface varies
by 5-8 times within the range covered by the group velocity dispersion. The
corresponding variation in the case of the orientation paradigm is
expected to be $\sim 1/q$,
which means that for the observed groups (which have $\langle q \rangle \sim 0.5$)
the expected surface variation, within the indicated velocity
dispersion range, is $\sim 2$ (as seen also in the right panel of Fig.2),
significantly smaller than what is observed.
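The quoted factor follows from simple geometry; a minimal check (our own arithmetic, assuming $S \propto a_{\rm proj}\,b$ with $b$ fixed, as in the pure-orientation model of Fig. 2):

```python
# In the pure-orientation model the projected surface S is proportional
# to a_proj * b with b = q * a fixed, so S varies only through a_proj,
# which ranges from q*a (major axis along the line of sight) to a
# (major axis in the sky plane): a maximal variation of 1/q.
q_mean = 0.5                       # mean observed axial ratio <q>
orientation_variation = 1.0 / q_mean   # factor ~2 from orientation alone
observed_variation = (5, 8)            # range quoted in the text
```

Since even the smallest observed factor exceeds the orientation-only expectation, the comparison favors virialization as the dominant driver.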
\begin{inlinefigure}
\centering\leavevmode
\epsscale{1.02}
\plotone{f8.eps}
\figcaption{Dependence of the projected group surface ($S$)
on the group velocity dispersion, $\sigma_v$ for subsamples of
different mass. {\em Left Panel:} 2MASS-HDC groups (symbols as in Fig. 5)
and {\em Right Panel:} SDSS-Tago groups (symbols as in Fig.6).}
\end{inlinefigure}
\section{Conclusions}
If a family of cosmic structures is fully virialized, there
is no reason to find any significant morphology-dynamics
correlation.
Such a correlation may be expected in two cases:
(a) if the galaxy group members have a net motion predominantly along the group elongation,
as expected in dynamically young groups which form by anisotropic accretion
of matter along filamentary large-scale structures, then due to
projection there must be a positive correlation between the group
axial ratio $q$ and the group radial velocity dispersion, and a negative
correlation between the projected group major axis and the group radial
velocity dispersion, and (b) if virialization is
currently at work, which will tend to compactify and sphericalize the initial volume
from which the structures form, as well as increase its velocity dispersion.
We searched for such
correlations using 2MASS-HDC (Crook et al. 2007) and SDSS Data Release
5 (Tago et al. 2008) catalogs of groups. In order to avoid
discreteness and interloper contamination effects we have performed two analyses, one
based solely on group samples defined by their apparent multiplicity ($n_m=9-12$)
and one based on samples defined in bins of group virial mass.
We found significant negative $a-\sigma_v$ and positive $q-\sigma_v$
correlations, although the observed correlations could have been
substantially weakened by many effects, among which is a
distance-dependent bias by which, at larger redshifts,
the detected group length and velocity dispersion increase artificially.
However, we have verified that the analysed groups, due to the specific care taken by the authors
that constructed them, do not suffer significantly of this effect.
We have also found a positive and significant correlation between the early type galaxy
content and the group velocity dispersion, as well as a negative correlation between the
early type galaxy content and the group major axis. These correlations
indicate that the cause of the group morphology-velocity dispersion
trend should be attributed mostly to the dynamical evolution of structures and not to
their orientation with respect to the line-of-sight.
Our final conclusion, based on all available evidence regarding the observed
group morphology-dynamics and morphological content-dynamics relations,
is that the groups of galaxies in the local universe
do not constitute a family of objects in dynamical equilibrium, but rather a family of cosmic
structures that are presently at various stages of their virialization process. We also
expect that the observed group morphology-dynamics correlations are affected by the group
orientation with respect to the line-of-sight, but such an effect works in the same direction
as the virialization process.
\section* {Acknowledgments}
MP acknowledges funding by CONACyT grant 2005-49878. This
research has made use of the NASA/IPAC
Extragalactic Database (NED) which is operated by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with
the National Aeronautics and Space Administration.
We thank Cinthia Ragone-Figueroa, Gary Mamon and Manuel Merch\'an
for useful suggestions and comments. |
1409.0955 | \section{Introduction}
\label{s:Intro}
Several mechanical systems
are described by ODE or PDE systems of the type:
\begin{subequations}
\label{intro:rip-limit}
\begin{align}
\label{intro:rip-limit-1}
&
\mathrm{D}_u \enet{t}{u(t)}{z(t)} =0 && \text{in } \mathcal{U}^*, && \text{for a.a.\,}\, t \in (0,T),
\\
&
\label{intro:rip-limit-2}
\partial\mathcal{R}_0(z'(t)) + \mathrm{D}_z \enet{t}{u(t)}{z(t)} \ni 0 && \text{in } \mathcal{Z}^*, && \text{for a.a.\,}\, t \in (0,T),
\end{align}
\end{subequations}
where $\mathcal{U}$, $\mathcal{Z}$ are Banach spaces, and $\cE : [0,T] \times \mathcal{U} \times \mathcal{Z} \to \R$ is an energy functional.
For example, within the ansatz of generalized standard materials, $u$ is the displacement at equilibrium,
while changes in the elastic behavior due to dissipative effects are
described in terms of an internal variable $z$ in some state space $\mathcal{Z}$. In several mechanical phenomena \cite{Miesurvey}, dissipation due to
inertia and viscosity is negligible, and the system is governed by rate-independent evolution,
which means that the (convex, nondegenerate) dissipation potential $\mathcal{R}_0 : \mathcal{Z} \to [0,\infty) $ is \emph{positively homogeneous of degree $1$}.
Thus system \eqref{intro:rip-limit-2} is invariant under time rescalings.
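This invariance can be verified directly from the positive $1$-homogeneity of $\mathcal{R}_0$; a brief sketch (the reparametrization $\alpha$ is our notation, not introduced in the text):

```latex
% Since \mathcal{R}_0 is convex and positively 1-homogeneous, its
% subdifferential is 0-homogeneous:
\partial\mathcal{R}_0(\lambda v)=\partial\mathcal{R}_0(v)
\qquad\text{for all }\lambda>0 .
% Hence, for a smooth, strictly increasing reparametrization \alpha of
% [0,T] and \tilde z := z\circ\alpha, the chain rule gives
\partial\mathcal{R}_0(\tilde z'(s))
=\partial\mathcal{R}_0\bigl(\alpha'(s)\,z'(\alpha(s))\bigr)
=\partial\mathcal{R}_0\bigl(z'(\alpha(s))\bigr),
% so \tilde z satisfies the same inclusion at the rescaled times.
```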
It is well known that, if the map $z\mapsto \enet tuz$ is not uniformly convex, one cannot expect the existence of absolutely continuous solutions to
system \eqref{intro:rip-limit}.
This fact has motivated the development of various
weak solvability concepts
for \eqref{intro:rip-limit}, starting with the well-established notion of \emph{energetic solution}. The latter
dates back to \cite{Mie-Theil99} and was further developed in \cite{Mielke-Theil04} (see \cite{DaFrTo05QCGN}, as well, in the context of crack growth),
cf.\ also \cite{Miesurvey}, \cite{Miel08?DEMF}
and the references therein.
Despite the many good features of the energetic formulation, it is known that, when the energy $z \mapsto \enet tuz$ is nonconvex,
the global stability condition
may lead to
jumps of $z$ as a function of time that are not motivated by, or in accord with, the mechanics of the system, cf.\ e.g. the discussions
in \cite[Ex.\,6.1]{Miel03EFME},
\cite[Ex.\,6.3]{KnMiZa07?ILMC}, and
\cite[Ex.\,1]{MRS09}.
Over the last years,
an alternative selection criterion of mechanically feasible weak solution concepts for the rate-independent system \eqref{intro:rip-limit}
has been developed,
moving from the finite-dimensional analysis in \cite{ef-mie06}.
It is
based on the interpretation of
\eqref{intro:rip-limit}
as originating in the vanishing-viscosity limit of the \emph{viscous} system
\begin{subequations}
\label{van-visco-intro}
\begin{align}
\label{van-visc-1-intro}
&
\mathrm{D}_u \enet{t}{u(t)}{z(t)} =0 && \text{in } \mathcal{U}^*, && \text{for a.a.\,}\, t \in (0,T),
\\
&
\label{van-visc-2-intro}
\partial\mathcal{R}_0(z'(t)) +\varepsilon \partial\vpotname{z} (z'(t)) + \mathrm{D}_z \enet{t}{u(t)}{z(t)} \ni 0 && \text{in } \mathcal{Z}^*, && \text{for a.a.\,}\, t \in (0,T),
\end{align}
\end{subequations}
where $\vpotname{z} : \mathcal{Z} \to [0,\infty)
$ is a dissipation potential with \emph{superlinear} (for instance, quadratic) growth at infinity.
Observe that the existence of solutions for the \emph{generalized gradient system} \eqref{van-visco-intro} follows from \cite{ColliVisintin90,Colli92},
cf.\ also \cite{MRS-dne}.
This vanishing-viscosity approach leads to a notion of solution featuring a \emph{local}, rather than \emph{global}, stability condition
for the description of rate-independent
evolution, thus
avoiding ``too early'' and ``too long'' jumps.
Furthermore, it provides an accurate description of the energetic behavior of the system at jumps, in particular highlighting how viscosity,
neglected in the limit as $\varepsilon \downarrow 0$, comes back into the picture and governs the jump dynamics.
This has been demonstrated in \cite{MRS09,MRS10,mielke-rossi-savare2013}
within the frame of abstract, finite-dimensional and infinite-dimensional, rate-independent systems, and in \cite{Mielke-Zelik}
for a wide class of parabolic equations with a rate-independent term. This analysis has also been developed
in several applicative contexts, ranging from crack propagation \cite{ToaZan06, KnMiZa07?ILMC}, to plasticity \cite{DalMaso-DeSimone-Solombrino2011,DalMaso-DeSimone-Solombrino2012,BabFraMor12, FraSte}, and to damage \cite{KnRoZa2011}, among others.
In this note, we shall perform the vanishing viscosity analysis of system \eqref{intro:rip-limit} by considering the viscous approximation of \eqref{intro:rip-limit-1}, in addition to the viscous approximation of \eqref{intro:rip-limit-2}.
More precisely, we will address the asymptotic analysis as $\varepsilon\downarrow 0$ of the system
\begin{subequations}
\label{intro:vv-limit}
\begin{align}
\label{intro:vv-limit-1}
&
\varepsilon^\alpha \partial\vpotname{u} (u'(t)) + \mathrm{D}_u \enet{t}{u(t)}{z(t)} =0 && \text{in } \mathcal{U}^*, && \text{for a.a.\,}\, t \in (0,T),
\\
&
\label{intro:vv-limit-2}
\partial\mathcal{R}_0(z'(t)) + \varepsilon \partial\vpotname{z} (z'(t)) + \mathrm{D}_z \enet{t}{u(t)}{z(t)} \ni 0 && \text{in } \mathcal{Z}^*, && \text{for a.a.\,}\, t \in (0,T),
\end{align}
\end{subequations}
where $\alpha >0$ and
$\vpotname{u}$ is a quadratic dissipation potential for the variable $u$. Observe that \eqref{intro:vv-limit} models
systems with (possibly) \emph{different} relaxation times. In fact, the parameter $\alpha>0$ determines which of the two variables $u$ and $z$ relaxes faster to
\emph{equilibrium} and to \emph{rate-independent} evolution, respectively.
Let us mention that
the analysis developed in this paper is in the mainstream of a series of recent papers
focused on the coupling between rate-independent and viscous systems. First and foremost, in
\cite{Roub09}
a wide class of rate-independent processes in viscous solids with inertia has been tackled, while the coupling with temperature has further been considered in \cite{Roub10}. In fact, in these systems
the evolution of the internal variable $z$ is purely rate-independent and no vanishing viscosity is added to the equation for $z$;
viscosity and inertia intervene only in the evolution of the displacement $u$. For these processes, the author
has proposed a notion of solution of energetic type consisting of the weakly formulated momentum equation for the displacements (and also of the weak heat equation
in \cite{Roub10}), of an energy balance, and of a \emph{semi-stability} condition. The latter reflects the mixed \emph{rate dependent/independent} character of the system. In
\cite{Roub09} and \cite{Roub13} a vanishing-viscosity analysis (in the momentum equation) has been performed. As discussed in \cite{Roub13} in the context of delamination, this approach leads to \emph{local solutions} (cf.\ also \cite{Miel08?DEMF}),
describing crack initiation (i.e., delamination) in a
physically feasible way.
In \cite{Racca}, the vanishing-viscosity approach has also been developed in the context
of a model for crack growth in the two-dimensional antiplane case, with a pre-assigned crack path, coupling
a viscoelastic momentum equation with a viscous flow rule for the crack tip; again, this procedure leads to solutions jumping later than energetic solutions. With a rescaling technique, a vanishing-viscosity analysis both in the flow rule, and in the momentum equation, has been
recently performed in \cite{DM-Scala} for perfect plasticity, recovering energetic solutions thanks to the convexity of the energy. In \cite{Scala}, the same analysis has led to
\emph{local solutions} for
a delamination system.
With the vanishing-viscosity analysis in this paper,
besides finding good \emph{local} conditions for the limit evolution,
we
also aim at
a thorough description of the energetic behavior of the solutions at jumps. This shall be deduced from an \emph{energy balance}. Moreover,
in comparison to the aforementioned contributions \cite{Racca,DM-Scala, Scala} a greater emphasis shall be put here on how the multi-rate character of system
\eqref{intro:vv-limit}
enters in the description of the jump dynamics.
In particular, we will
show that viscosity in $u$ and viscosity in $z$
are involved in the path followed by the system at jumps in (possibly) different ways, depending on whether the parameter $\alpha$ is strictly bigger than, or equal to, or strictly smaller than $1$.
To focus on this and to avoid overburdening the paper
with technicalities,
we shall keep to a simple functional analytic setting.
Namely, we shall consider the \emph{finite-dimensional} and \emph{smooth} case
\label{simpler-intro}
\begin{equation}
\label{simpler-intro-1}
\mathcal{U} = \R^n, \qquad \mathcal{Z} = \R^m, \qquad
\cE \in \mathrm{C}^1 ([0,T]\times \R^{n} \times
\R^m)\,.
\end{equation}
Obviously, this considerably simplifies the analysis, since the difficulties attached to nonsmoothness of the energy
and
to infinite-dimensionality are completely avoided. Still, even within such a simple setting
(where, however, we will allow for state-dependent dissipation potentials
$\mathcal{R}_0$,
$\vpotname z$, and $\vpotname u$), the key ideas
of our vanishing-viscosity approach
can be highlighted.
Let us briefly summarize our results, focusing on a further simplified version of
\eqref{intro:vv-limit}. In the setting of
\eqref{simpler-intro-1}, and with the choices
\[
\vpotname u (u') = \frac12 |u'|^2, \qquad
\vpotname z (z') = \frac12 |z'|^2,
\]
system \eqref{intro:vv-limit}
reduces to
the ODE system
\begin{subequations}
\label{eps-system-intro}
\begin{align}
\label{eq-u-intro}
&
\varepsilon^\alpha u'(t) + \mathrm{D}_u \enet{t}{u(t)}{z(t)} =0 && \text{in } (0,T),
\\
&
\label{eq-z-intro}
\partial\mathcal{R}_0(z'(t)) + \varepsilon z'(t) + \mathrm{D}_z \enet{t}{u(t)}{z(t)} \ni 0 && \text{in } (0,T).
\end{align}
\end{subequations}
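As a purely illustrative numerical sketch (not part of the analysis: the quadratic energy $\cE(t,u,z)=\frac12(u-z)^2+\frac12(z-2t)^2$, the loading, and all parameter values below are our own hypothetical choices), system \eqref{eps-system-intro} with $\mathcal{R}_0(z')=|z'|$ can be integrated by a semi-explicit Euler scheme, resolving the inclusion for $z'$ exactly by soft-thresholding:

```python
import numpy as np

def solve_viscous_system(eps=0.1, alpha=1.0, T=1.0, n_steps=20000):
    # Semi-explicit Euler scheme for the toy multi-rate system
    #   eps^alpha * u' = -D_u E(t,u,z),
    #   sign(z') + eps * z'  contains  -D_z E(t,u,z)   (R0(z') = |z'|),
    # with the hypothetical energy E(t,u,z) = (u-z)^2/2 + (z-2t)^2/2.
    dt = T / n_steps
    u, z = 0.0, 0.0
    us, zs = [u], [z]
    for k in range(n_steps):
        t = k * dt
        dEdu = u - z
        dEdz = -(u - z) + (z - 2.0 * t)
        # u relaxes toward the equilibrium D_u E = 0 at rate eps^(-alpha)
        u += dt * (-dEdu) / eps**alpha
        # the inclusion for z' is solved exactly by soft-thresholding:
        # z' = 0 while |D_z E| <= 1, viscous slip otherwise
        w = -dEdz
        zdot = np.sign(w) * max(abs(w) - 1.0, 0.0) / eps
        z += dt * zdot
        us.append(u)
        zs.append(z)
    return np.array(us), np.array(zs)
```

With these choices, $z$ stays at rest during an initial elastic phase and then evolves rate-independently, while $u$ tracks its equilibrium with a lag of order $\varepsilon^\alpha$.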
First of all, following \cite{MRS09,MRS10,mielke-rossi-savare2013}, and along the lines of the \emph{variational} approach to
gradient flows by
\textsc{E.\ De Giorgi} \cite{Ambrosio95, AGS08}, we will pass to the limit as $\varepsilon \downarrow 0$ in the \emph{energy-dissipation} balance associated (and equivalent, by Fenchel-Moreau duality and the chain rule for $\cE$) to
\eqref{eps-system-intro}, namely
\begin{equation}
\label{enid-eps-expl-intro}
\begin{aligned}
&
\enet t{u(t)}{z(t)} +
\int_s^t
\mathcal{R}_0 (z'(r)) +
\frac{\varepsilon}2 |z'(r)|^2 +\frac{\varepsilon^\alpha}2 |u'(r)|^2 \mathrm{d} r
\\ & \quad +
\int_s^t
\frac1{\varepsilon} \conjzname {z}{\mathrm{D}_z \enet r{u(r)}{z(r)}}
+\frac{1}{2\varepsilon^\alpha} |\mathrm{D}_u \enet r{u(r)}{z(r)}|^2
\mathrm{d} r
\\
&
= \enet s{u(s)}{z(s)}+ \int_s^t \partial_t \enet r{u(r)}{z(r)} \mathrm{d} r
\end{aligned}
\end{equation}
for all $0 \leq s \leq t \leq T$, where
$\mathcal{W}_{\mathsf{z}}^*$
is the Legendre transform of
$\mathcal{R}_0 + \vpotname z$.
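For instance, with the prototypical state-independent choices $\mathcal{R}_0(z')=|z'|$ and $\vpotname z(z')=\frac12 |z'|^2$, a direct computation gives
\[
\mathcal{W}_{\mathsf{z}}^*(\xi)
= \sup_{v \in \R^m} \left( \langle \xi, v \rangle - |v| - \tfrac12 |v|^2 \right)
= \tfrac12 \left( \max\{|\xi|-1,0\} \right)^2,
\]
so that $\bigl(\mathcal{R}_0 + \varepsilon \vpotname z\bigr)^*(\xi) = \tfrac1\varepsilon \mathcal{W}_{\mathsf{z}}^*(\xi)$: the first term in the second integral of \eqref{enid-eps-expl-intro} thus penalizes, with weight $\tfrac1\varepsilon$, the violation of the local stability constraint $-\mathrm{D}_z \enet t{u}{z} \in \partial \mathcal{R}_0(0)$.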
As we will see in Section \ref{s:3}, \eqref{enid-eps-expl-intro}
is well-suited to unveiling the role played by viscosity in the description of the energetic behavior of the system at jumps. Indeed, it
reflects the competition between the tendency of the system to be governed by \emph{viscous} dissipation both for the variable $z$ and for the variable $u$
(with different rates if $\alpha \neq 1$),
and its tendency to be \emph{locally stable} in $z$ and at equilibrium in $u$, cf.\ also the discussion in Remark \ref{rmk:switch}.
Secondly,
to develop the analysis as $\varepsilon \downarrow 0$ for a family of
curves
$(u_\varepsilon,z_\varepsilon)_\varepsilon \subset H^1 (0,T; \R^n \times \R^m)$ fulfilling \eqref{enid-eps-expl-intro}
we will adopt a by now well-established technique from
\cite{ef-mie06}. Namely,
to capture the viscous transition paths at jump points,
we will reparameterize the curves $(u_\varepsilon,z_\varepsilon)$, for instance by their arc-length. Hence we will address the analysis as
$\varepsilon \downarrow 0$ of the \emph{parameterized curves}
$(\mathsf{t}_\varepsilon,\mathsf{u}_\varepsilon,\mathsf{z}_\varepsilon)_\varepsilon $ defined on the interval $[0,S]$ with values in the extended phase
space $[0,T]\times\R^n \times \R^m$, with $\mathsf{t}_\varepsilon$ the rescaling functions and $\mathsf{u}_\varepsilon:= u_\varepsilon \circ \mathsf{t}_\varepsilon$, $\mathsf{z}_\varepsilon:= z_\varepsilon \circ \mathsf{t}_\varepsilon$.
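The reparameterization step can be sketched numerically as follows (an illustrative snippet with scalar $u$ and $z$; the discretization and function names are our own):

```python
import numpy as np

def reparameterize_by_arclength(t, u, z, n_out=200):
    # Reparameterize a sampled curve (t, u(t), z(t)) by its arc length
    # in the extended phase space [0,T] x R x R.
    # Incremental arc length: ds = sqrt(dt^2 + du^2 + dz^2).
    ds = np.sqrt(np.diff(t)**2 + np.diff(u)**2 + np.diff(z)**2)
    s = np.concatenate([[0.0], np.cumsum(ds)])
    # uniform grid in the arc-length variable s on [0, S]
    s_grid = np.linspace(0.0, s[-1], n_out)
    # linear interpolation of each component as a function of s
    return (np.interp(s_grid, s, t),
            np.interp(s_grid, s, u),
            np.interp(s_grid, s, z))
```

Viscous jump transitions, traversed in vanishing external time, are stretched out over intervals of positive length in the variable $s$.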
Under suitable conditions
it can be proved that, up to a subsequence, the curves $(\mathsf{t}_\varepsilon,\mathsf{u}_\varepsilon,\mathsf{z}_\varepsilon)_\varepsilon $
converge to a triple $(\mathsf{t},\mathsf{u},\mathsf{z}) \in \mathrm{AC} ([0,S]; [0,T]\times \R^n \times \R^m)$. Its evolution is described by an energy-dissipation balance
obtained
by passing to the limit in the reparameterized version of \eqref{enid-eps-expl-intro}, cf.\ Theorem \ref{th:main}.
We will refer to $(\mathsf{t},\mathsf{u},\mathsf{z})$ as a \emph{parameterized Balanced Viscosity} solution to the rate-independent system
$(\R^n \times \R^m, \cE, \mathcal{R}_0 + \varepsilon \vpotname z + \varepsilon^\alpha \vpotname u) $.
The main result of this paper, Theorem \ref{prop:diff-incl}, provides a more transparent reformulation of the energy-dissipation
balance defining a parameterized Balanced Viscosity solution $(\mathsf{t},\mathsf{u},\mathsf{z})$.
It is given
in terms of a system of subdifferential inclusions fulfilled
by the curve
$(\mathsf{t},\mathsf{u},\mathsf{z})$, namely
\begin{equation}
\label{diff-syst-intro}
\begin{aligned}
&
\thn u(s) \mathsf{u}'(s) + (1-\thn{u} (s)) \mathrm{D}_u \enet{\mathsf{t}(s)}{\mathsf{u}(s)}{\mathsf{z}(s)} = 0 && \text{for a.a.\,}\, s \in (0,S),
\\
&
(1-\thn{z} (s)) \partial\mathcal{R}_0 (\mathsf{q}(s),\mathsf{z}'(s)) +
\thn z(s) \mathsf{z}'(s) + (1-\thn{z} (s)) \mathrm{D}_z \enet{\mathsf{t}(s)}{\mathsf{u}(s)}{\mathsf{z}(s)} \ni 0 && \text{for a.a.\,}\, s \in (0,S),
\end{aligned}
\end{equation}
where the Borel functions $\thn u,\, \thn z : [0,S] \to [0,1]$ fulfill
\begin{equation}
\label{switching-intro}
\mathsf{t}'(s) \thn u(s) = \mathsf{t}'(s) \thn z (s) =0 \qquad \text{for a.a.\,}\, s \in (0,S).
\end{equation}
The latter condition reveals that the viscous terms $\mathsf{u}'(s) $ and $\mathsf{z}'(s) $ may contribute to \eqref{diff-syst-intro}
only at jumps of the system, corresponding to $\mathsf{t}'(s)=0$ as the function $\mathsf{t}$ records the (slow) external time scale.
In this respect,
\eqref{diff-syst-intro}--\eqref{switching-intro} is akin to the (parameterized) subdifferential inclusion
\begin{equation}
\label{diff-single-intro}
\begin{aligned}
&
\mathrm{D}_u \enet{\mathsf{t}(s)}{\mathsf{u}(s)}{\mathsf{z}(s)} = 0 && \text{for a.a.\,}\, s \in (0,S),
\\
&
\partial\mathcal{R}_0 (\mathsf{z}'(s)) +
\theta (s) \mathsf{z}'(s) + \mathrm{D}_z \enet{\mathsf{t}(s)}{\mathsf{u}(s)}{\mathsf{z}(s)} \ni 0 && \text{for a.a.\,}\, s \in (0,S),
\end{aligned}
\end{equation}
with the Borel function $\theta: [0,S] \to [0,\infty)$
fulfilling
\begin{equation}
\label{switching-intro-2}
\mathsf{t}'(s) \theta (s) = 0 \qquad \text{for a.a.\,}\, s \in (0,S).
\end{equation}
Indeed, \eqref{diff-single-intro} is the
subdifferential reformulation for the parameterized Balanced Viscosity solutions
obtained by taking the limit as $\varepsilon \downarrow 0$ in \eqref{van-visco-intro}, where viscosity is added only to the flow rule.
However, note that \eqref{diff-syst-intro} has a much more complex structure than \eqref{diff-single-intro}. In addition to the switching condition
\eqref{switching-intro}, the functions $\thn u$ and $\thn z$ fulfill additional constraints, cf.\ Theorem \ref{prop:diff-incl}.
They
differ in the three cases $\alpha>1$, $\alpha=1$, and $\alpha \in (0,1)$
and show that viscosity in $u$ and $z$ pops back into the description of the system behavior at jumps, in a way depending on whether
$u$ relaxes faster to equilibrium than $z$,
$u$ and $z$ have the same relaxation rate, or $z$
relaxes faster to local stability than $u$.
\paragraph{\bf Plan of the paper} In Section \ref{ss:2.1} we set up all the basic assumptions on the dissipation potentials $\mathcal{R}_0$,
$\vpotname u$, and $\vpotname z$. Section \ref{s:2} is devoted to the generalized gradient system
driven by $\cE$ and the ``viscous'' potential $\mathcal{R}_\varepsilon := \mathcal{R}_0+ \varepsilon \vpotname z + \varepsilon^\alpha \vpotname u$. In particular, we establish a series of estimates on the viscous solutions $(u_\varepsilon,z_\varepsilon)$ which will be at the core of the vanishing viscosity analysis, developed in Section \ref{s:3} with Theorem
\ref{th:main}. In Section \ref{s:4} we will prove Theorem \ref{prop:diff-incl} and explore the mechanical interpretation of parameterized Balanced Viscosity solutions. Finally, in Section \ref{s:5} we will illustrate this solution notion, focusing on how it varies in the cases $\alpha>1$, $\alpha=1$,
$\alpha \in (0,1)$, in two different examples.
\paragraph{\bf Notation}
In what follows, we will denote by
$\langle \cdot, \cdot \rangle$ and by
$|\cdot|$ the scalar product and the norm in any Euclidean space $\R^d$, with $d=n,\, m,\, n+m, \, \ldots$.
Moreover, we will use the same symbol $C$ to denote a positive constant depending on data, and possibly varying from line to line.
\section{Setup}
\label{ss:2.1}
As mentioned in the introduction,
we are going to address a more general version of system \eqref{eps-system-intro},
where the
$1$-positively homogeneous dissipation potential $\mathcal{R}_0$, as well as the
quadratic potentials
$\vpotname u$ and $\vpotname z$
for $u'$ and $z'$, are also depending on the state
variable
\[
q:= (u,z)\in \mathcal{Q}:= \R^{n} \times \R^{m}.
\]
Hence, the rate-independent system is
\begin{equation}
\label{rip-syst}
\partial_{q'} \mathcal{R}_0(q(t),z'(t)) +\mathrm{D}_q \cE(t,q(t)) \ni 0 \qquad \text{in } (0,T),
\end{equation}
namely
\begin{subequations}
\label{rip-limit}
\begin{align}
\label{rip-limit-1}
&
\mathrm{D}_u \enet{t}{u(t)}{z(t)} =0 && \text{for a.a.\,}\, t \in (0,T),
\\
&
\label{rip-limit-2}
\partial\mathcal{R}_0(q(t), z'(t)) + \mathrm{D}_z \enet{t}{u(t)}{z(t)} \ni 0 && \text{for a.a.\,}\, t \in (0,T).
\end{align}
\end{subequations}
We
approximate it with
the following
generalized gradient system
\begin{equation}
\label{gen-grad-syst}
\partial_{q'} \mathcal{R}_\varepsilon(q(t),q'(t)) +\mathrm{D}_q \cE(t,q(t)) \ni 0 \qquad \text{in } (0,T),
\end{equation}
where the overall dissipation potential $\mathcal{R}_\varepsilon$ is of the form
\begin{equation}
\label{form-calR-eps}
\mathcal{R}_\varepsilon (q,q')= \mathcal{R}_\varepsilon (q,(u',z')):= \mathcal{R}_0 (q,z') +\varepsilon\vpot zq{z'} +\varepsilon^\alpha\vpot uq{u'} \quad \text{with } \alpha >0.
\end{equation}
In what follows, let us specify our assumptions on the dissipation potentials $\mathcal{R}_0$, $\vpotname z$, and $\vpotname u$.
\begin{description}
\item[\textbf{Dissipation}] We require that
\begin{equation}
\label{ass:dissip-pot-R}
\tag{$\mathrm{{R}_0}$}
\begin{aligned}
&
\mathcal{R}_0 \in \mathrm{C}^0 (\mathcal{Q} \times \R^m ), \quad \forall\, q \in \mathcal{Q} \ \mathcal{R}_0 (q,\cdot) \text{ is convex and $1$-positively homogeneous, and }
\\ & \exists\, C_{0,R}, \, C_{1,R}>0 \ \forall\, (q,z')\in \mathcal{Q} \times \R^m \, : \qquad
C_{0,R} |z'|\leq \mathcal{R}_0 (q,z') \leq C_{1,R}|z'|,
\end{aligned}
\end{equation}
\begin{equation}
\label{ass:dissip-pot-Vz}
\tag{$\mathrm{{V}_z}$}
\begin{gathered}
\vpotname z : \mathcal{Q} \times \R^m \to [0,\infty) \text{ is of the form }
\vpot zq{z'} = \frac12 \langle \vcof zq{z'}, z' \rangle \quad \text{with }
\\
\vcofname z \in \mathrm{C}^0 (\mathcal{Q};\R^{m \times m}) \quad\text{and}\quad \exists\, C_{0,V}, \, C_{1,V}>0 \ \forall\, q\in \mathcal{Q} \, : \qquad
C_{0,V} |z'|^2 \leq \vpot zq{z'} \leq C_{1,V}|z'|^2,
\end{gathered}
\end{equation}
\begin{equation}
\label{ass:dissip-pot-Vu}
\tag{$\mathrm{{V}_u}$}
\begin{gathered}
\vpotname u : \mathcal{Q} \times \R^n \to [0,\infty) \text{ is of the form }
\vpot uq{u'} = \frac12 \langle \vcof uq{u'}, u' \rangle \quad \text{with }
\\
\vcofname u \in \mathrm{C}^0 (\mathcal{Q};\R^{n\times n}) \quad\text{and}\quad \exists\, \widetilde{C}_{0,V}, \, \widetilde{C}_{1,V}>0 \ \forall\, q\in \mathcal{Q} \, : \qquad
\widetilde{C}_{0,V}|u'|^2 \leq \vpot uq{u'} \leq \widetilde{C}_{1,V}|u'|^2.
\end{gathered}
\end{equation}
\end{description}
For later use, let us recall that, due to the
$1$-homogeneity of $\mathcal{R}_0(q,\cdot)$,
for every $q\in \mathcal{Q}$ the convex analysis subdifferential $\partial \mathcal{R}_0(q,\cdot) : \R^m \rightrightarrows \R^m$
is characterized by
\begin{equation}
\label{charact-1-homog}
\zeta \in \partial \mathcal{R}_0(q,z') \quad \text{if and only if}\quad \begin{cases}
\langle \zeta, w\rangle \leq \mathcal{R}_0(q,w) & \text{for all } w \in \R^m,
\\
\langle \zeta, z'\rangle \geq \mathcal{R}_0(q,z')\,.
\end{cases}
\end{equation}
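Characterization \eqref{charact-1-homog} can be probed numerically; in the following sketch we take the illustrative choice $\mathcal{R}_0(q,z') = c\,|z'|$ with a constant $c>0$ (the function name and sampling strategy are our own):

```python
import numpy as np

def in_subdifferential(zeta, zp, c=1.0, tol=1e-9, n_dirs=500):
    # Test the two conditions of the characterization for R0(w) = c*|w|:
    #  (i)  <zeta, w> <= R0(w), probed on random directions w,
    #  (ii) <zeta, zp> >= R0(zp).
    rng = np.random.default_rng(0)
    ws = rng.standard_normal((n_dirs, len(zp)))
    cond_i = all(float(zeta @ w) <= c * np.linalg.norm(w) + tol for w in ws)
    cond_ii = float(zeta @ zp) >= c * np.linalg.norm(zp) - tol
    return cond_i and cond_ii
```

In particular, $\partial\mathcal{R}_0(q,0)$ is the whole closed ball of radius $c$, while for $z'\neq 0$ the subdifferential reduces to the single element $c\,z'/|z'|$.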
Furthermore,
observe that \eqref{ass:dissip-pot-Vz} and \eqref{ass:dissip-pot-Vu} ensure that for every $q \in \mathcal{Q}$ the matrices
$ \vcof zq{}\in \R^{m \times m}$ and $ \vcof uq{}\in \R^{n \times n}$ are positive definite, uniformly with respect to $q$. Furthermore, for later use we observe that the conjugate
\[
\moreau uq{\eta} = \sup_{v \in \R^n} \left( \langle \eta, v \rangle - \vpot uqv \right) = \frac12 \langle \vcofinv uq{\eta}, \eta \rangle
\]
fulfills
\begin{equation}
\label{inv-vcof-1}
\overline{C}_0 |\eta|^2 \leq \moreau uq{\eta} \leq \overline{C}_1 |\eta|^2
\end{equation}
for some $\overline{C}_0,\,\overline{C}_1>0$. We have the analogous coercivity and growth properties for $\mathcal{V}_{\mathsf{z}}^*$.
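The closed form of the conjugate displayed above can be verified numerically for a sample positive definite matrix (an illustrative sanity check; all names are our own):

```python
import numpy as np

def quadratic_conjugate(V, eta):
    # Legendre transform of v -> 0.5*<V v, v> for symmetric positive
    # definite V, in closed form: 0.5*<V^{-1} eta, eta>.
    return 0.5 * float(eta @ np.linalg.solve(V, eta))

def is_supremum(V, eta, n_trials=200, seed=1):
    # Sanity check: the maximizer of v -> <eta, v> - 0.5*<V v, v> is
    # v* = V^{-1} eta, and its value equals the closed form above.
    rng = np.random.default_rng(seed)
    f = lambda v: float(eta @ v - 0.5 * v @ (V @ v))
    vstar = np.linalg.solve(V, eta)
    ok_value = abs(f(vstar) - quadratic_conjugate(V, eta)) < 1e-10
    ok_max = all(f(vstar + rng.standard_normal(len(eta))) <= f(vstar)
                 for _ in range(n_trials))
    return ok_value and ok_max
```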
Our assumptions concerning the energy functional $\cE$, expounded below,
are typical of the \emph{variational approach} to gradient flows and generalized gradient systems. Since we
are in a finite-dimensional setting, to impose \emph{coercivity} it is sufficient to
ask for boundedness of energy sublevels. The power-control condition will allow us to bound $\partial_t \cE$ in the derivation of the basic energy estimate
on system \eqref{gen-grad-syst}, cf.\ Lemma \ref{lemma:2.1} later on.
The smoothness of $\cE$ guarantees the validity of two further, key properties, i.e.\ the continuity of $\mathrm{D}_q\cE$, and the chain rule
(cf.\ \eqref{chain-rule} below), which will play a crucial role for our analysis.
Later on, in Section \ref{s:2}, we will impose that $\cE$ is uniformly convex with respect to $u$. As we will see, this condition will be at the core of the proof of an estimate
for
$\|u'\|_{L^1(0,T;\R^n)}$, uniform with respect to
the parameter $\varepsilon$. Observe that, unlike for $z'$, such an estimate does not follow from the basic energy estimate on system \eqref{gen-grad-syst}, since the overall dissipation potential
$\mathcal{R}_\varepsilon$ is degenerate in $u'$ as $\varepsilon \downarrow 0$. It will require additional careful calculations.
\begin{description}
\item[\textbf{Energy}] we assume that $\cE \in \mathrm{C}^1 ([0,T]\times \mathcal{Q})$ and that it is bounded from below by a positive constant
(indeed by adding a constant we can always reduce to this case). Furthermore, we require that
\begin{equation}
\label{ass:E}
\tag{$\mathrm{E}$}
\begin{aligned}
&
\exists\, C_{0,E}\,, \widetilde{C}_{0,E}>0 \ \forall\, (t,q) \in [0,T]\times \mathcal{Q} \,: \quad && \ene tq \geq C_{0,E}|q|^2 - \widetilde{C}_{0,E} && \text{\textbf{(coercivity),}}
\\
&
\exists\, C_{1,E}>0 \ \forall\, (t,q) \in [0,T]\times \mathcal{Q}\, : \quad
&& |\partial_t \ene tq| \leq C_{1,E} \ene tq && \text{\textbf{(power control).}}
\end{aligned}
\end{equation}
\end{description}
In view of \eqref{form-calR-eps}, \eqref{ass:dissip-pot-Vz}, and \eqref{ass:dissip-pot-Vu},
the generalized gradient system
\eqref{gen-grad-syst} reads
\begin{subequations}
\label{eps-system}
\begin{align}
\label{eq-u}
&
\varepsilon^\alpha \vcof{u}{q(t)}{u'(t)} + \mathrm{D}_u \enet{t}{u(t)}{z(t)} =0 && \text{in } (0,T),
\\
&
\label{eq-z}
\partial\mathcal{R}_0(q(t),z'(t)) + \varepsilon \vcof{z}{q(t)}{z'(t)} + \mathrm{D}_z \enet{t}{u(t)}{z(t)} \ni 0 && \text{in } (0,T).
\end{align}
\end{subequations}
\paragraph{\bf Existence of solutions to the generalized gradient system \eqref{gen-grad-syst}.}
It follows from the results in \cite{ColliVisintin90,MRS-dne} that, under the present assumptions,
for every $\varepsilon>0$ there exists a solution $q_\varepsilon \in H^{1}(0,T;\mathcal{Q})$ to the Cauchy problem for
\eqref{gen-grad-syst}.
Observe that
$q_\varepsilon$ also fulfills
the energy-dissipation identity
\begin{equation}
\label{enid-eps}
\ene t{q_\varepsilon(t)} +
\int_s^t
\mathcal{R}_\varepsilon (q_\varepsilon(r), q_\varepsilon'(r)) + \mathcal{R}_\varepsilon^* (q_\varepsilon(r),-\mathrm{D}_q \cE(r,q_\varepsilon(r))) \mathrm{d} r
= \ene s{q_\varepsilon(s)} + \int_s^t \partial_t \ene r{q_\varepsilon(r)} \mathrm{d} r.
\end{equation}
In \eqref{enid-eps}, the dual dissipation potential
$\mathcal{R}_\varepsilon^* : \mathcal{Q} \times \R^{n+m} \to \R$ is the Fenchel-Moreau conjugate of $\mathcal{R}_\varepsilon$, i.e.
\begin{equation}
\label{calReps}
\mathcal{R}_\varepsilon^*(q,\xi):= \sup_{v \in \mathcal{Q}} \left(\langle \xi, v \rangle - \mathcal{R}_\varepsilon (q,v) \right).
\end{equation}
In fact, by the Fenchel equivalence the differential inclusion \eqref{gen-grad-syst}
can be reformulated as
\[
\mathcal{R}_\varepsilon (q_\varepsilon(t),q_\varepsilon'(t)) +\mathcal{R}_\varepsilon^* (q_\varepsilon(t), - \mathrm{D}_q \cE(t,q_\varepsilon(t)) ) =
\langle - \mathrm{D}_q \cE(t,q_\varepsilon(t)), q_\varepsilon'(t) \rangle \qquad \text{for a.a.\,}\, t \in (0,T).
\]
Combining this with the chain rule
\begin{equation}
\label{chain-rule}
\frac{\mathrm{d} }{\mathrm{d} t }\cE (t,q(t)) =\partial_t \cE(t,q(t)) + \langle \mathrm{D}_q \cE(t,q(t)), q'(t) \rangle \qquad \text{for a.a.\,}\, t \in (0,T)
\end{equation}
along any curve $q\in \mathrm{AC}([0,T]; \mathcal{Q})$ and integrating in time, we conclude \eqref{enid-eps}.
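Explicitly, substituting the Fenchel identity into \eqref{chain-rule} yields, along the solution $q_\varepsilon$,
\[
\frac{\mathrm{d}}{\mathrm{d} t} \cE(t,q_\varepsilon(t))
= \partial_t \cE(t,q_\varepsilon(t))
- \mathcal{R}_\varepsilon (q_\varepsilon(t),q_\varepsilon'(t))
- \mathcal{R}_\varepsilon^* (q_\varepsilon(t), - \mathrm{D}_q \cE(t,q_\varepsilon(t)))
\qquad \text{for a.a.\,}\, t \in (0,T),
\]
and an integration over $(s,t)$ gives precisely \eqref{enid-eps}.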
The energy balance \eqref{enid-eps} will play a crucial role in our analysis: indeed, after deriving in Sec.\ \ref{s:2} a series of a priori estimates, uniform with respect to the parameter $\varepsilon>0$,
we shall pass to the limit in the parameterized version of
\eqref{enid-eps} as $\varepsilon\downarrow 0$. We will thus obtain a (parameterized) energy-dissipation identity which encodes information on the behavior of the
limit
system for $\varepsilon=0$, in particular
at the jumps of the limit curve $q$ of the solutions $q_\varepsilon$ to \eqref{gen-grad-syst}.
\section{A priori estimates}
\label{s:2}
In this section, we consider a family $(q_\varepsilon)_\varepsilon \subset H^1 (0,T;\mathcal{Q})$ of solutions to
the Cauchy problem for \eqref{gen-grad-syst},
with a converging sequence of initial data $(q_\varepsilon^0)_\varepsilon$, i.e.
\begin{equation}
\label{bded-data}
q_\varepsilon^0 \to q^0
\end{equation}
for some $q^0 \in \mathcal{Q}$.
Our first result, Lemma \ref{lemma:2.1},
provides a series of basic estimates on the functions $(q_\varepsilon)$, as well as a bound for $\| z_\varepsilon'\|_{L^1 (0,T;\R^m)}$, uniform with respect to $\varepsilon$.
It holds under conditions \eqref{ass:dissip-pot-R},
\eqref{ass:dissip-pot-Vz},
\eqref{ass:dissip-pot-Vu}, \eqref{ass:E}, as well as \eqref{bded-data}.
Under a further property of the dissipation potential
$\vpotname u$ (cf.\ \eqref{need-esti} below),
assuming \emph{uniform convexity} of $\cE$ with respect to the variable $u$, and requiring an additional condition on
the initial data $(q_\varepsilon^0)_\varepsilon$ (see \eqref{well-prep}), in Proposition \ref{prop:aprio-eps} we
will derive the following crucial estimate,
uniform with respect to $\varepsilon$:
\begin{equation}
\label{L1-est}
\|q_\varepsilon'\|_{L^1 (0,T;\R^{n+m})} \leq C.
\end{equation}
We start with the following result, which does not require the above
mentioned enhanced conditions.
\begin{lemma}
\label{lemma:2.1}
Let $\alpha>0$. Assume \eqref{ass:dissip-pot-R},
\eqref{ass:dissip-pot-Vz}, \eqref{ass:dissip-pot-Vu}, \eqref{ass:E},
and \eqref{bded-data}. Then, there exists a constant $C>0$ such that
for every $\varepsilon>0$
\begin{subequations}
\begin{align}
\label{bound-energies}
&
\text{(a)} \quad \sup_{t \in [0,T]} \ene t{q_\varepsilon(t)}\leq C,
\\
&
\label{bound-q-eps}
\text{(b)} \quad
\sup_{t \in [0,T]} |q_\varepsilon(t)| \leq C,
\\
&
\label{est-z}
\text{(c)} \quad
\int_0^T |z_\varepsilon'(r)| \mathrm{d} r \leq C.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
We exploit the energy identity \eqref{enid-eps}.
Observe that $\mathcal{R}_\varepsilon^* (q,\xi) \geq 0$ for all $(q,\xi) \in\mathcal{Q} \times \R^{n+m}$.
Therefore, we deduce from \eqref{enid-eps}
that
\[
\ene t{q_\varepsilon(t)} \leq \ene 0{q_\varepsilon (0)} + \int_0^t \partial_t \ene r{q_\varepsilon(r)} \mathrm{d} r
\leq C+C_{1,E}\int_0^t \ene r{q_\varepsilon(r)} \mathrm{d} r,
\]
where we have
used the power control from \eqref{ass:E}
and
the fact that $\ene 0{q_\varepsilon (0)} \leq C$, since the sequence $(q_\varepsilon (0))_\varepsilon$ is bounded. The Gronwall Lemma then yields \eqref{bound-energies},
and \eqref{bound-q-eps} ensues from the coercivity of $\cE$.
Using again the power control,
we ultimately infer from \eqref{enid-eps} that
\begin{equation}
\label{en-dissip-est}
\int_0^T
\mathcal{R}_\varepsilon (q_\varepsilon(r), q_\varepsilon'(r)) + \mathcal{R}_\varepsilon^* (q_\varepsilon(r),-\mathrm{D}_q \cE(r,q_\varepsilon(r))) \mathrm{d} r \leq C.
\end{equation}
In particular,
$\int_0^T \mathcal{R}_0 (q_\varepsilon(r), z_\varepsilon'(r)) \mathrm{d} r \leq C$, whence \eqref{est-z} by \eqref{ass:dissip-pot-R}.
\end{proof}
An $L^1(0,T;\R^n)$-estimate for $(u_\varepsilon')_\varepsilon$
analogous to \eqref{est-z}
clearly does not follow from \eqref{enid-eps},
which only yields $\int_0^T \varepsilon^{\alpha} |u_\varepsilon'(r)|^2 \mathrm{d} r \leq C$ via \eqref{en-dissip-est}
and \eqref{ass:dissip-pot-Vu}.
Its derivation is indeed more involved,
and, as already mentioned, it strongly relies on the uniform convexity of $\cE$ with respect to $u$. Furthermore,
we are able to obtain it only under the simplifying condition that
the dissipation potential
$\vpotname u$ in fact \emph{does not} depend on the state variable $q$,
and under an additional well-preparedness condition on the data $(q_\varepsilon^0)_\varepsilon$,
ensuring that the forces $\mathrm{D}_u \ene{0}{q_\varepsilon^0}$ tend to zero, as $\varepsilon \downarrow 0$, with rate $\varepsilon^\alpha$.
\begin{proposition}
\label{prop:aprio-eps}
Let $\alpha>0$.
Assume \eqref{ass:dissip-pot-R},
\eqref{ass:dissip-pot-Vz},
\eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}.
In addition, suppose that
\begin{equation}
\label{need-esti}
\tag{$\mathrm{V}_{u,1}$}
\mathrm{D}_q \vcofname{u} (q) = 0 \quad \text{for all } q \in \mathcal{Q},
\end{equation}
\begin{equation}
\label{en-plus}
\tag{$\mathrm{E}_{1}$}
\begin{aligned}
&\cE \in \mathrm{C}^2 ([0,T]\times \mathcal{Q})
\quad \text{and}
\\
&\exists\, \mu>0 \ \forall\, (t,q) \in [0,T]\times \mathcal{Q}\, : \
\mathrm{D}_u^2 \ene tq\geq \mu \mathbb{I}_{\R^{n \times n}} \quad
\text{\textbf{(uniform convexity w.r.t.\ $u$),}}
\end{aligned}
\end{equation}
and that the initial data $(q_\varepsilon^0)_\varepsilon $ complying with \eqref{bded-data}
also fulfill
\begin{equation}
\label{well-prep}
|\mathrm{D}_u \ene{0}{q_\varepsilon^0}| \leq C \varepsilon^{\alpha}.
\end{equation}
Then,
there exists a constant $C>0$ such that for every $\varepsilon>0$
\begin{equation}
\label{est-u}
\|u_\varepsilon'\|_{L^1(0,T;\R^{n})} \leq C.
\end{equation}
\end{proposition}
\begin{proof}
It follows from \eqref{need-esti} that there exists a constant matrix $\overline{\mathbb{V}}_{\mathsf{u}} \in \R^{n\times n}$
such that
\begin{equation}
\label{yes-later}
\vcof uq{} \equiv \overline{\mathbb{V}}_{\mathsf{u}} \quad \text{for all $q\in\mathcal{Q}$,}
\end{equation}
so that
\begin{equation}
\label{v-const}
\vpot uq{u'} = \mathcal{V}_{\mathsf{u}} (u') :=
\frac12 \langle \overline{\mathbb{V}}_{\mathsf{u}} u',u' \rangle.
\end{equation}
Therefore \eqref{eq-u} reduces to
\begin{equation}
\label{simpler}
\varepsilon^\alpha \overline{\mathbb{V}}_{\mathsf{u}} u_\varepsilon'(t) + \mathrm{D}_u
\enet{t}{u_\varepsilon(t)}{z_\varepsilon(t)} =0
\qquad \text{for a.a.\,}\, t \in (0,T).
\end{equation}
We differentiate \eqref{simpler} in time,
and test the resulting equation by $u_\varepsilon'$. Thus we obtain for
almost all $t\in (0,T)$
\begin{equation}
\label{test-nu}
\begin{aligned}
0 & = \varepsilon^\alpha \langle \overline{\mathbb{V}}_{\mathsf{u}}
u_\varepsilon{''}(t), u_\varepsilon'(t) \rangle
+ \langle \mathrm{D}_u^2 \enet{t}{u_\varepsilon(t)}{z_\varepsilon(t)} [u_\varepsilon'(t)], u_\varepsilon'(t) \rangle
+ \langle \mathrm{D}_{u,z}^2 \enet{t}{u_\varepsilon(t)}{z_\varepsilon(t)} [ u_\varepsilon'(t)], z_\varepsilon'(t) \rangle
\\
&
\doteq S_1+S_2+S_3,
\end{aligned}
\end{equation}
where $ \mathrm{D}_{u,z}^2 $ denotes the second-order mixed derivative.
Observe that
\begin{align*}
&
S_1 = \varepsilon^\alpha \frac{\mathrm{d}}{\mathrm{d} t } \mathcal{V}_{\mathsf{u}} (u_\varepsilon'),
&&
S_2\geq \mu |u_\varepsilon'|^2 \geq \tilde{\mu} \mathcal{V}_{\mathsf{u}} (u_\varepsilon'),
\\
&
S_3 \geq - C |u_\varepsilon'||z_\varepsilon'| \geq -C \sqrt{\mathcal{V}_{\mathsf{u}} (u_\varepsilon')} |z_\varepsilon'|.
\end{align*}
Indeed, to estimate $S_2$ we have used the uniform convexity of
$\cE(t,\cdot,z)$, and the growth of $\vpotname u$ from
\eqref{ass:dissip-pot-Vu}. The estimate for $S_3$ follows from
$\sup_{t\in (0,T)} | \mathrm{D}_{u,z}^2 \enet{t}{u_\varepsilon(t)}{z_\varepsilon(t)}| \leq
C$, due to \eqref{bound-q-eps} and the fact that $\mathrm{D}_{u,z}^2 \cE $
is continuous on $ [0,T] \times \mathcal{Q}$, and again from
\eqref{ass:dissip-pot-Vu}. We thus infer from \eqref{test-nu} that
\[
\frac{\mathrm{d} }{\mathrm{d} t} \mathcal{V}_{\mathsf{u}} (u_\varepsilon'(t)) + \frac{\tilde{\mu}}{\varepsilon^\alpha} \mathcal{V}_{\mathsf{u}} (u_\varepsilon'(t)) \leq
\frac{C}{\varepsilon^\alpha} \sqrt{\mathcal{V}_{\mathsf{u}} (u_\varepsilon'(t))} |z_\varepsilon'(t)| \qquad \text{for a.a.\,}\, t \in (0,T),
\]
which rephrases as
\[
\nu_\varepsilon(t) \nu_\varepsilon'(t) + \frac{\tilde{\mu}}{\varepsilon^\alpha} \nu_\varepsilon^2(t) \leq \frac{C}{\varepsilon^\alpha}\nu_\varepsilon(t) |z_\varepsilon'(t)|
\]
where we have used the place-holder $\nu_\varepsilon(t) :=
\sqrt{\mathcal{V}_{\mathsf{u}} (u_\varepsilon'(t))}$. We now argue as in
\cite{Miel08?DEMF} and observe that, without loss of generality, we
may suppose that $\nu_\varepsilon(t) >0$ (otherwise, we replace it by
$\tilde{\nu}_\varepsilon = \sqrt{\nu_\varepsilon^2 +\delta}$, which satisfies the same
estimate, and then let $\delta \downarrow 0$). Hence, we deduce
\[
\nu_\varepsilon'(t) + \frac{\tilde{\mu}}{\varepsilon^\alpha} \nu_\varepsilon(t) \leq
\frac{C}{\varepsilon^\alpha}|z_\varepsilon'(t)|.
\]
Applying the Gronwall lemma
we obtain
\begin{equation}
\label{integrate-time}
\nu_\varepsilon(t) \leq C \exp\left(-\frac{\tilde \mu}{\varepsilon^\alpha}t
\right) \nu_\varepsilon(0) + \frac{C}{\varepsilon^\alpha} \int_0^t \exp
\left(-\frac{\tilde \mu}{\varepsilon^\alpha} (t-r)\right) |z_\varepsilon'(r)| \mathrm{d}
r \doteq a_1^{\varepsilon}(t) + a_2^{\varepsilon}(t)
\end{equation}
for all $t\in (0,T)$. We integrate the above estimate on $(0,T)$.
Now, observe that \eqref{well-prep} guarantees that $\nu_\varepsilon(0)=
\sqrt{\mathcal{V}_{\mathsf{u}} (u_\varepsilon'(0))} \leq C
|\overline{\mathbb{V}}_{\mathsf{u}} u_\varepsilon'(0)| = C \varepsilon^{-\alpha} |\mathrm{D}_u\cE
(0,q_\varepsilon(0))| \leq C $. Hence, we find $\|a_1^{\varepsilon}\|_{L^1(0,T)}
\leq C \nu_\varepsilon (0) \leq C_1$.
In order to estimate $a_2^\varepsilon$ we use the Young inequality for
convolutions, which yields
\[
\|a_2^{\varepsilon}\|_{L^1(0,T)} = \frac{C}{\varepsilon^\alpha} \int_0^T \int_0^t
\exp \left(-\frac{\tilde \mu}{\varepsilon^\alpha} (t-r) \right) |z_\varepsilon'(r)|
\mathrm{d} r \mathrm{d} t \leq \frac{C}{\varepsilon^\alpha} \left( \int_0^T \exp
\left(-\frac{\tilde \mu}{\varepsilon^\alpha} t \right) \mathrm{d} t \right)
\left( \int_0^T |z_\varepsilon'(t)| \mathrm{d} t \right) \leq C_2
\]
where we have exploited the a priori estimate \eqref{est-z} for
$z_\varepsilon'$. Thus, \eqref{integrate-time} implies \eqref{est-u},
and we are done.
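To see that the bound on $\|a_2^{\varepsilon}\|_{L^1(0,T)}$ is indeed uniform with respect to $\varepsilon$, one may evaluate the exponential integral explicitly (an elementary computation, recorded here for the reader's convenience):

```latex
\int_0^T \exp \left(-\frac{\tilde \mu}{\varepsilon^\alpha} t \right) \mathrm{d} t
= \frac{\varepsilon^\alpha}{\tilde \mu} \left( 1 - \exp \left(-\frac{\tilde \mu}{\varepsilon^\alpha} T \right) \right)
\leq \frac{\varepsilon^\alpha}{\tilde \mu},
```

so that the prefactor $\frac{C}{\varepsilon^\alpha}$ is compensated and $\|a_2^{\varepsilon}\|_{L^1(0,T)} \leq \frac{C}{\tilde \mu}\, \|z_\varepsilon'\|_{L^1(0,T)}$.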
\end{proof}
\section{Limit passage with vanishing viscosity}
\label{s:3}
In this section, we assume that we are given a sequence $(q_\varepsilon)_\varepsilon
\subset H^1(0,T;\mathcal{Q})$ of solutions to \eqref{gen-grad-syst},
satisfying the initial conditions $q_\varepsilon(0) = q_\varepsilon^0$, such that
estimate \eqref{L1-est} holds. As we have shown in Proposition
\ref{prop:aprio-eps}, the well-preparedness \eqref{well-prep} of the
initial data $(q_\varepsilon^0)_\varepsilon$, the condition that the dissipation
potential $\vpotname u$ does not depend on the state $q$, and the
uniform convexity \eqref{en-plus} of $\cE$ with respect to $u$
guarantee the validity of \eqref{L1-est}. However, these conditions
are not needed for the vanishing viscosity analysis.
Therefore, hereafter
we will no longer impose
\eqref{well-prep}, we will allow for a state-dependent dissipation potential $\vpotname u = \vpot uq{u'}$,
and we will stay with the basic conditions \eqref{ass:E} on $\cE$.
\paragraph{\bf The energy-dissipation balance.}
Following the variational approach of \cite{MRS09,MRS10,mielke-rossi-savare2013}, we will pass to the limit in (a
\emph{parameterized} version of) the energy identity \eqref{enid-eps}.
Preliminarily, let us
explicitly calculate the convex-conjugate of the dissipation potential $\mathcal{R}_\varepsilon$ \eqref{form-calR-eps}.
\begin{lemma}
\label{l:cvx-conj-Reps}
Assume \eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz}, and \eqref{ass:dissip-pot-Vu}. Then, the Fenchel-Moreau conjugate \eqref{calReps} of
$\mathcal{R}_\varepsilon$ is given by
\begin{equation}
\label{to-refer-to}
\mathcal{R}_\varepsilon^* (q,\xi)= \frac1{\varepsilon} \conjz {z}{q}{\zeta} + \frac1{\varepsilon^\alpha} \moreau u{q}{\eta} \qquad \text{for all } q \in \mathcal{Q} \text{ and } \xi = (\eta,\zeta) \in \R^{n+m},
\end{equation}
where $\moreau u{q}{\cdot}$ is the conjugate of $\vpot u{q}{\cdot}$, and
\begin{equation}
\label{conju-z}
\conjz {z}{q}{\zeta} = \min_{\omega \in \stab q} \moreau z{q}{\zeta-\omega} \qquad \text{with } \stab q:= \partial\mathcal{R}_0 (q,0),
\end{equation}
$\moreau z{q}{\cdot}$ is the conjugate of $\vpot z{q}{\cdot}$, while
$\mathcal{W}_{\mathsf{z}}^*$
is the conjugate of $\mathcal{R}_0 + \mathcal{V}_{\mathsf{z}}$.
\end{lemma}
\begin{proof}
Since $\mathcal{R}_\varepsilon(q,\cdot)$ is given by the sum of a contribution in the sole variable $z'$ and another in the sole
variable $u'$, we have
\[
\mathcal{R}_\varepsilon^* (q,\xi)=
(\varepsilon^\alpha \vpotname{u})^*(q,\eta) + \conjzspe {z}{\varepsilon}{q}{\zeta} \qquad
\text{for all $\xi = (\eta,\zeta) \in \R^{n + m}$}
\]
where we
have used the place-holder
$
\conjzspe {z}{\varepsilon}{q}{\zeta} := \left(\mathcal{R}_0 (q,\cdot) + \varepsilon \vpot z{q}{\cdot}\right)^* (\zeta).
$
Now, taking into account that $\vpotname u$ is quadratic, there holds
\[
(\varepsilon^\alpha \vpotname{u})^*(q,\eta) =
\varepsilon^\alpha \vpotname{u}^* \left(q, \frac1{\varepsilon^\alpha} \eta \right) =
\frac1{\varepsilon^\alpha} \moreau u{q}{\eta},
\]
whereas the $\inf$-$\sup$ convolution formula (see e.g.\
\cite{IofTih}) yields $\conjzspe {z}{\varepsilon}{q}{\zeta} = \frac1{\varepsilon}
\conjz {z}{q}{\zeta}$ with $\conjz {z}{q}{\cdot}$ from
\eqref{conju-z}.
\end{proof}
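As an illustration of \eqref{conju-z} (a model case, not used elsewhere in the text), consider the scalar setting $m=1$ with $\mathcal{R}_0(q,z') = \kappa |z'|$ and $\vpot z{q}{z'} = \frac12 |z'|^2$ for some $\kappa>0$. Then $\stab q = [-\kappa,\kappa]$ and the inf-convolution reduces to

```latex
\conjz {z}{q}{\zeta} = \min_{\omega \in [-\kappa,\kappa]} \frac12 |\zeta-\omega|^2
= \frac12 \bigl( |\zeta| - \kappa \bigr)_+^2,
```

which vanishes precisely when $\zeta \in \stab q$, in accordance with the local stability condition discussed below.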
In view of \eqref{to-refer-to}, the energy identity \eqref{enid-eps} rewrites as
\begin{equation}
\label{enid-eps-expl}
\begin{aligned}
&
\ene t{q_\varepsilon(t)} +
\int_s^t
\mathcal{R}_0 (q_\varepsilon(r), z_\varepsilon'(r)) +
\varepsilon \vpot{z}{q_\varepsilon(r)}{z_\varepsilon'(r)} +\varepsilon^\alpha \vpot{u}{q_\varepsilon(r)}{u_\varepsilon'(r)} \mathrm{d} r
\\ &\quad +
\int_s^t
\frac1{\varepsilon}\conjz {z}{q_\varepsilon(r)}{-\mathrm{D}_z \cE(r,q_\varepsilon(r))}
+\frac{1}{\varepsilon^\alpha}\moreau {u}{q_\varepsilon(r)}{-\mathrm{D}_u \cE(r,q_\varepsilon(r))}
\mathrm{d} r
\\
&
= \ene s{q_\varepsilon(s)} + \int_s^t \partial_t \ene r{q_\varepsilon(r)} \mathrm{d} r.
\end{aligned}
\end{equation}
In fact, the second and the third integral terms on the left-hand side of \eqref{enid-eps-expl} reflect the competition between the tendency of the system to be governed by \emph{viscous} dissipation both for the variable $z$ and for the variable $u$, and its tendency to fulfill the \emph{local stability} condition
\[
\conjz {z}{q(t)}{-\mathrm{D}_z \cE(t,q(t))} = 0 \quad \text{i.e.} \quad -\mathrm{D}_z \cE(t,q(t)) \in \stab {q(t)} \qquad \text{for a.a.\,}\, t \in (0,T)
\]
for $z$, and the equilibrium condition
\[
\moreau {u}{q(t)}{-\mathrm{D}_u \cE(t,q(t))} =0 \quad \text{i.e.} \quad -\mathrm{D}_u \cE(t,q(t))=0 \qquad \text{for a.a.\,}\, t \in (0,T)
\]
for $u$, cf.\ also the discussion in Remark \ref{rmk:switch}.
\paragraph{\bf The parameterized energy-dissipation balance.}
We now consider the parameterized curves
$(\mathsf{t}_\varepsilon,\mathsf{q}_\varepsilon) : [0,S_\varepsilon] \to [0,T] \times \mathcal{Q}$, where for every $\varepsilon>0$ the rescaling function
$\mathsf{t}_\varepsilon: [0,S_\varepsilon] \to [0,T]$ is strictly increasing, and
$\mathsf{q}_\varepsilon(s) = q_\varepsilon(\mathsf{t}_\varepsilon(s)).$
We shall suppose that $\sup_{\varepsilon>0} S_\varepsilon<\infty$, and that
\begin{equation}
\label{normalization}
\exists\, C>0 \quad \forall\, \varepsilon>0 \quad \forall\, s \in [0,S_\varepsilon]\, : \qquad
\mathsf{t}_\varepsilon'(s) + |\mathsf{q}_\varepsilon'(s)| \leq C.
\end{equation}
\begin{remark}
\upshape
\label{rmk:arclength}
For instance, as in \cite{ef-mie06, MRS09} we might
choose
\begin{equation}
\label{arclength-choice}
\mathsf{t}_\varepsilon:= \sigma_\varepsilon^{-1} \quad \text{ with } \sigma_\varepsilon(t):= \int_0^t \left( 1+|q_\varepsilon'(r)| \right) \mathrm{d} r,
\end{equation}
and set $S_\varepsilon:= \sigma_\varepsilon (T)$. In fact,
estimate \eqref{L1-est} ensures that $\sup_\varepsilon S_\varepsilon <\infty$. With the choice
\eqref{arclength-choice}
for $\mathsf{t}_\varepsilon$, the functions $(\mathsf{t}_\varepsilon,\mathsf{q}_\varepsilon)$
fulfill the normalization condition
\[
\mathsf{t}_\varepsilon'(s) + |\mathsf{q}_\varepsilon'(s)| =1 \qquad \text{for almost all } s\in (0,S_\varepsilon).
\]
\end{remark}
For the parameterized curves
$(\mathsf{t}_\varepsilon,\mathsf{q}_\varepsilon)$, the energy-dissipation balance \eqref{enid-eps-expl} reads
\begin{equation}
\label{enid-eps-param}
\begin{aligned}
&
\ene {\mathsf{t}_\varepsilon(s_2)}{\mathsf{q}_\varepsilon(s_2)} +\int_{s_1}^{s_2}
\Me{\varepsilon}{\mathsf{q}_\varepsilon(r)}{\mathsf{t}_\varepsilon'(r)}{\mathsf{q}'_\varepsilon(r)}{-\mathrm{D}_q\ene{\mathsf{t}_\varepsilon(r)}{\mathsf{q}_\varepsilon(r)}} \
\mathrm{d} r
\\
& = \ene {\mathsf{t}_\varepsilon(s_1)}{\mathsf{q}_\varepsilon(s_1)}
+ \int_{s_1}^{s_2} \partial_t \ene{\mathsf{t}_\varepsilon(r)}{\mathsf{q}_\varepsilon(r)} \mathsf{t}_\varepsilon'(r) \mathrm{d} r \qquad \text{for all } 0 \leq s_1 \leq s_2 \leq S_\varepsilon,
\end{aligned}
\end{equation}
where we have used the dissipation functional
\begin{equation}
\label{Me-def}
\begin{aligned}
\Me{\varepsilon}{q}{\tau}{q'}{\xi} & =
\Me {\varepsilon}{q}{\tau}{(u',z')}{(\eta,\zeta)}
\\ &
:=
\mathcal{R}_0 (q,z') +\frac{\varepsilon}{\tau} \vpot zq{z'} +
\frac{\varepsilon^\alpha}{\tau} \vpot uq{u'} +
\frac\tau\varepsilon \conjz {z}{q}{\zeta} + \frac\tau{\varepsilon^\alpha} \moreau uq\eta.
\end{aligned}
\end{equation}
The passage from \eqref{enid-eps-expl} to \eqref{enid-eps-param} follows from the change of variables
$t \to \mathsf{t}_\varepsilon (r)$, whence $\mathrm{d} t \to \mathsf{t}_\varepsilon'(r) \mathrm{d} r$, while $q_\varepsilon'(t) \to \frac{1}{ \mathsf{t}_\varepsilon'(r) } \mathsf{q}_\varepsilon'(r)$.
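To illustrate the change of variables, note that the $1$-homogeneity of $\mathcal{R}_0(q,\cdot)$ makes the rate-independent term invariant under reparameterization (here $s_1, s_2$ are such that $\mathsf{t}_\varepsilon(s_1)=s$, $\mathsf{t}_\varepsilon(s_2)=t$, and $\mathsf{z}_\varepsilon$ denotes the $z$-component of $\mathsf{q}_\varepsilon$):

```latex
\int_s^t \mathcal{R}_0 \bigl(q_\varepsilon(t'), z_\varepsilon'(t')\bigr)\, \mathrm{d} t'
= \int_{s_1}^{s_2} \mathcal{R}_0 \Bigl(\mathsf{q}_\varepsilon(r), \tfrac{1}{\mathsf{t}_\varepsilon'(r)}\, \mathsf{z}_\varepsilon'(r)\Bigr)\, \mathsf{t}_\varepsilon'(r)\, \mathrm{d} r
= \int_{s_1}^{s_2} \mathcal{R}_0 \bigl(\mathsf{q}_\varepsilon(r), \mathsf{z}_\varepsilon'(r)\bigr)\, \mathrm{d} r,
```

whereas the quadratic potentials $\vpotname z$ and $\vpotname u$, being $2$-homogeneous in the rate, pick up the factors $\frac{\varepsilon}{\tau}$ and $\frac{\varepsilon^\alpha}{\tau}$ appearing in \eqref{Me-def}.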
In order to pass to the limit in \eqref{enid-eps-param} as $\varepsilon\down0$, it is crucial to investigate the
$\Gamma$-convergence properties of the family of functionals $(\mathcal{M}_\varepsilon)_\varepsilon$. The following result reveals that
the $\Gamma$-limit of $(\mathcal{M}_\varepsilon)_\varepsilon$ depends on whether
the parameter $\alpha$ is above, equal, or below the threshold value $1$.
Let us point out that, for $\alpha\in (0,1)$, setting $\delta = \varepsilon^{\alpha}$ we rewrite $\mathcal{M}_\varepsilon$ as
\begin{equation}
\label{specular}
\Me {\varepsilon}{q}{\tau}{(u',z')}{(\eta,\zeta)} = \mathcal{R}_0 (q,z')
+\frac{\delta^{1/\alpha}}{\tau} \vpot zq{z'}
+
\frac{\delta}{\tau} \vpot uq{u'}
+\frac\tau{\delta^{1/\alpha}} \conjz {z}{q}{\zeta}
+ \frac\tau{\delta} \moreau uq\eta
\end{equation}
with $1/\alpha >1$.
It is thus natural to expect that the upcoming results will be specular in the cases $\alpha\in (0,1)$ and $\alpha>1$.
\begin{proposition}
\label{prop-Gamma-conv}
Assume \eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz}, \eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}.
Then,
the functionals $(\mathcal{M}_\varepsilon)_\varepsilon$ $\Gamma$-converge as $\varepsilon\downarrow 0$ to
$\mathcal{M}_0: \mathcal{Q}\times [0,\infty) \times \mathcal{Q} \times \R^{n+m} \to [0,\infty] $
defined by
\begin{equation}
\label{basic-one}
\Mo q\tau{(u',z')}{(\eta,\zeta)}:= \mathcal{R}_0 (q,z') + \Mored q\tau{(u',z')}{(\eta,\zeta)},
\end{equation}
where for $\tau>0$ we have
\begin{equation}
\label{Mored-basic}
\Mored q\tau{(u',z')}{(\eta,\zeta)} = \left\{ \begin{array}{ll}
0 & \text{if } \conjz {z}{q}{\zeta} =\moreau uq{\eta} =0,
\\
\infty & \text{if } \conjz {z}{q}{\zeta} +\moreau uq{\eta} >0,
\end{array}
\right.
\end{equation}
while for $\tau=0$ we have the following cases:
\begin{itemize}
\item For $\alpha>1$
\begin{equation}
\label{def-Mop1}
\Mored q0{(u',z')}{(\eta,\zeta)} =
\left\{
\begin{array}{ll}
2 \sqrt{\vpot{u}{q}{u'}}\, \sqrt{\moreau uq{\eta}} & \text{ if } \vpot{z}{q}{z'}=0,
\\
2\sqrt{\vpot{z}{q}{z'}} \, \sqrt{\conjz {z}{q}{\zeta}} & \text{ if } \moreau uq{\eta} =0,
\\
\infty & \text{ if } \vpot{z}{q}{z'}\, \moreau uq{\eta} >0,
\end{array}
\right.
\end{equation}
\item For $\alpha=1$
\begin{equation}
\label{def-Mou1}
\Mored q0{(u',z')}{(\eta,\zeta)} = 2 \sqrt{\vpot{z}{q}{z'} + \vpot{u}{q}{u'}}\,\sqrt{\conjz {z}{q}{\zeta} +\moreau uq{\eta} },
\end{equation}
\item For $\alpha\in (0,1)$
\begin{equation}
\label{def-Mom1}
\Mored q0{(u',z')}{(\eta,\zeta)} =
\left\{
\begin{array}{ll}
2 \sqrt{\vpot{u}{q}{u'}} \, \sqrt{\moreau uq{\eta}} & \text{ if } \conjz {z}{q}{\zeta}=0,
\\
2 \sqrt{\vpot{z}{q}{z'}} \, \sqrt{\conjz {z}{q}{\zeta}} & \text{ if } \vpot{u}{q}{u'} =0,
\\
\infty & \text{ if } \vpot{u}{q}{u'}\, \conjz {z}{q}{\zeta} >0.
\end{array}
\right.
\end{equation}
\end{itemize}
Moreover, if $(\tau_\varepsilon,q_\varepsilon') \rightharpoonup (\tau,q') $ in $L^1 (0,S;
(0,T)\times \mathcal{Q})$ and if $(q_\varepsilon,\xi_\varepsilon)\to (q,\xi) $ in $L^1
(0,S; \mathcal{Q} \times \R^{n+m})$,
then for every $0\leq s_1\leq s_2\leq S$
\begin{equation}
\label{ioffe-refined}
\liminf_{\varepsilon \downarrow 0}
\int_{s_1}^{s_2} \Me{\varepsilon}{q_\varepsilon(s)}{\tau_\varepsilon(s)}{q_\varepsilon'(s)}{\xi_\varepsilon(s)} \mathrm{d} s
\geq \int_{s_1}^{s_2} \Mo{q(s)}{\tau(s)}{q'(s)}{\xi(s)} \mathrm{d} s\,.
\end{equation}
\end{proposition}
\begin{remark}
\label{rmk:switch}
\upshape
Let us briefly comment on the expression \eqref{basic-one} of the $\Gamma$-limit $\mathcal{M}_0$. To do so, we rephrase
the constraints arising in the switching conditions for the reduced functional $\mathcal{M}_{0}^{\mathrm{red}}$, cf.\ \eqref{Mored-basic}, \eqref{def-Mop1}, and
\eqref{def-Mom1}. Indeed, it follows from \eqref{ass:dissip-pot-Vz} and \eqref{ass:dissip-pot-Vu} (cf.\ \eqref{inv-vcof-1}) that
\[
\begin{array}{lllllll}
& \vpot{z}{q}{z'} =0 & \Leftrightarrow & z'=0, \qquad & \vpot{u}{q}{u'} =0 & \Leftrightarrow & u'=0,
\\
&
\moreau uq{\eta} = 0 & \Leftrightarrow & \eta=0, \qquad & \conjz {z}{q}{\zeta}=0 & \Leftrightarrow & \zeta \in \stab q= \partial\mathcal{R}_0 (q,0).
\end{array}
\]
Therefore, from \eqref{Mored-basic} we read that for $\tau>0$ the functional $\Mored{q}{\tau}{\cdot}{\cdot}$ is finite (and indeed equal to $0$) only for
$\eta$ and $\zeta$ fulfilling
\[
\eta =0, \qquad
\zeta \in \stab q \,.
\]
For $\tau=0$, in the case $\alpha>1$, $\Mored{q}{0}{\cdot}{\cdot}$ is finite if and only if either $z'=0$ or $\eta =0$. As we will see when discussing the physical interpretation of our
vanishing-viscosity result, this means that, at a jump (i.e.\ when $\tau=0$), either $z'=0$, i.e.\ $z$ is frozen, or $u$
fulfills the equilibrium condition $\eta = -\mathrm{D}_u \ene t{q}=0$.
Also in view of \eqref{specular}, the switching conditions for $\alpha\in (0,1)$ are specular
to the ones for $\alpha>1$
in a generalized sense.
In fact, $\Mored{q}{0}{\cdot}{\cdot}$ is finite if and only if either
$u'=0$, i.e.\ $u$ is frozen, or $\zeta = -\mathrm{D}_z \ene t{q} \in \stab q$, meaning that $z$ fulfills the \emph{local stability} condition.
\end{remark}
\begin{proof}
Observe that
\[
\Me {\varepsilon}{q}{\tau}{(u',z')}{(\eta,\zeta)}=
\mathcal{R}_0 (q,z') +
\Mered q\tau{(u',z')}{(\eta,\zeta)}
\]
with $\Mered q\tau{(u',z')}{(\eta,\zeta)} : = \frac{\varepsilon}{\tau} \vpot zq{z'} +
\frac{\varepsilon^\alpha}{\tau} \vpot uq{u'} +
\frac\tau\varepsilon \conjz {z}{q}{\zeta} + \frac\tau{\varepsilon^\alpha} \moreau uq\eta$.
Since $\mathcal{R}_0$ is continuous with respect to both variables $q$ and $z$ and does not depend on $\varepsilon$, it is
clearly sufficient to prove that the functionals $\mathcal{M}_\varepsilon^{\mathrm{red}}$
$\Gamma$-converge to $\mathcal{M}_0^{\mathrm{red}}$, namely
\begin{align}
&
\label{liminf}
\begin{aligned}
&\Gamma\text{-}\liminf \text{ estimate: }
\\
&
\quad (q_\varepsilon, \tau_\varepsilon,u_\varepsilon',z_\varepsilon',\eta_\varepsilon,\zeta_\varepsilon) \to
(q,\tau,u',z',\eta,\zeta) \ \ \text{for }\varepsilon \to 0 \\
&\qquad \Longrightarrow \ \ \Mored q\tau{(u',z')}{(\eta,\zeta)} \leq
\liminf_{\varepsilon \downarrow 0} \Mered
{q_\varepsilon}{\tau_\varepsilon}{(u_\varepsilon',z_\varepsilon')}{(\eta_\varepsilon,\zeta_\varepsilon)}, &&
\end{aligned}
\\
&
\label{limsup}
\begin{aligned}
&
\Gamma\text{-}\limsup \text{ estimate: }
\\
& \quad
\forall\, (q,\tau,u',z',\eta,\zeta) \ \exists\,
(q_\varepsilon, \tau_\varepsilon,u_\varepsilon',z_\varepsilon',\eta_\varepsilon,\zeta_\varepsilon)_\varepsilon \, : \
\\
&\qquad \qquad
\begin{cases}
(q_\varepsilon,\tau_\varepsilon,u_\varepsilon',z_\varepsilon',\eta_\varepsilon,\zeta_\varepsilon) \to (q,\tau,u',z',\eta,\zeta) \qquad \text{and}
\\
\Mored q\tau{(u',z')}{(\eta,\zeta)} \geq \limsup_{\varepsilon \downarrow 0} \Mered {q_\varepsilon}{\tau_\varepsilon}{(u_\varepsilon',z_\varepsilon')}{(\eta_\varepsilon,\zeta_\varepsilon)}.
\end{cases}
\end{aligned}
\end{align}
Preliminarily, observe that minimizing with respect to $\tau$ we
obtain the lower bound
\begin{equation}
\label{crucial-observation}
\Mered q\tau{(u',z')}{(\eta,\zeta)}
\geq 2 \sqrt{\varepsilon\vpot zq{z'} +
\varepsilon^\alpha \vpot uq{u'} } \sqrt{
\frac1\varepsilon \conjz {z}{q}{\zeta} + \frac1{\varepsilon^\alpha} \moreau uq\eta}.
\end{equation}
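Indeed, \eqref{crucial-observation} is an instance of the elementary inequality, valid for all $a,b \geq 0$ and $\tau>0$,

```latex
\frac{a}{\tau} + b\,\tau \;\geq\; 2\sqrt{ab},
\qquad \text{with equality for } \tau = \sqrt{a/b} \ \text{ when } b>0,
```

applied with $a = \varepsilon\vpot zq{z'} + \varepsilon^\alpha \vpot uq{u'}$ and $b = \frac1\varepsilon \conjz {z}{q}{\zeta} + \frac1{\varepsilon^\alpha} \moreau uq\eta$; note that the resulting right-hand side is independent of $\tau$.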
In all the three cases $\alpha>1$, $\alpha=1$, and $\alpha\in (0,1)$, the expression \eqref{Mored-basic} of
$\mathcal{M}_{0}^{\mathrm{red}}$
for $\tau>0$ can be easily checked. Indeed,
for the $\Gamma$-$\liminf$ estimate, observe that it is trivial in the case $\conjz {z}{q}{\zeta} =\moreau uq{\eta} =0$, as
$\mathcal{M}_\varepsilon^{\mathrm{red}}$ takes nonnegative values for all $\varepsilon>0$. Suppose now that $\conjz {z}{q}{\zeta} +\moreau uq{\eta} >0$,
e.g.\ that $\moreau uq{\eta} >0$.
Now, $(q_\varepsilon,\eta_\varepsilon) \to (q,\eta)$ implies that $\moreau u{q_\varepsilon}{\eta_\varepsilon} \geq \bar c >0$ for sufficiently small $\varepsilon $,
and from \eqref{crucial-observation} we deduce that
\[
\liminf_{\varepsilon \downarrow 0} \Mered {q_\varepsilon}{\tau_\varepsilon}{(u_\varepsilon',z_\varepsilon')}{(\eta_\varepsilon,\zeta_\varepsilon)}=\infty= \Mored q\tau{(u',z')}{(\eta,\zeta)} \,.
\]
The $\Gamma$-$\limsup$ estimate follows by taking the constant recovery sequence $(q_\varepsilon,\tau_\varepsilon,u_\varepsilon',z_\varepsilon',\eta_\varepsilon,\zeta_\varepsilon) = (q,\tau,u',z',\eta,\zeta)$. In fact, if
$\conjz {z}{q}{\zeta} +\moreau uq{\eta} >0$, then the
$\limsup$-inequality in \eqref{limsup} is trivial. If $\conjz {z}{q}{\zeta} =\moreau uq{\eta} =0$, \eqref{limsup} can be checked straightforwardly.
For $\alpha=1$, in the case $\tau=0$,
\eqref{crucial-observation} clearly yields the $\Gamma$-$\liminf$ estimate, whereas the $\Gamma$-$\limsup$ one can be obtained with the recovery sequence
$(q_\varepsilon,\tau_\varepsilon,u_\varepsilon',z_\varepsilon',\eta_\varepsilon,\zeta_\varepsilon)=(q,\tau_\varepsilon^*,u',z',\eta,\zeta) $ with
\[
\tau_\varepsilon^* = \varepsilon \frac{\sqrt{\vpot zq{z'} +
\vpot uq{u'} } }{\sqrt{
\conjz {z}{q}{\zeta} + \moreau uq\eta}}.
\]
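A direct substitution confirms this choice: since for $\alpha=1$ both viscous terms carry the same power of $\varepsilon$, plugging $\tau = \tau_\varepsilon^*$ into $\mathcal{M}_\varepsilon^{\mathrm{red}}$ gives

```latex
\Mered q{\tau_\varepsilon^*}{(u',z')}{(\eta,\zeta)}
= \frac{\varepsilon}{\tau_\varepsilon^*}\bigl( \vpot zq{z'} + \vpot uq{u'} \bigr)
+ \frac{\tau_\varepsilon^*}{\varepsilon}\bigl( \conjz {z}{q}{\zeta} + \moreau uq\eta \bigr)
= 2 \sqrt{\vpot zq{z'} + \vpot uq{u'}}\, \sqrt{\conjz {z}{q}{\zeta} + \moreau uq\eta},
```

which is exactly \eqref{def-Mou1}, while $\tau_\varepsilon^* \to 0 = \tau$ as $\varepsilon \downarrow 0$ (here we tacitly assume that the conjugate terms do not both vanish; otherwise the estimate is immediate).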
For $\alpha>1$, in the case $\tau=0$, the $\Gamma$-$\liminf$ estimate follows taking into account that \eqref{crucial-observation} yields
\begin{equation}
\label{crucial-observation-bis}
\Mered q\tau{(u',z')}{(\eta,\zeta)}
\geq \frac{2}{\sqrt{\varepsilon^{\alpha-1}}} \sqrt{\vpot zq{z'} \moreau uq\eta}.
\end{equation}
Hence, if both $\vpot zq{z'} >0 $ and $\moreau uq\eta>0$, then $\liminf_{\varepsilon \downarrow 0} \Mered {q_\varepsilon}{\tau_\varepsilon}{(u_\varepsilon',z_\varepsilon')}{(\eta_\varepsilon,\zeta_\varepsilon)} =\infty$.
In the case when either $\vpot zq{z'} =0 $ or $\moreau uq\eta=0$, we deduce the $\Gamma$-$\liminf$ estimate from \eqref{crucial-observation}.
For the $\Gamma$-$\limsup$ estimate, we again take the recovery sequence $(q,\tau_\varepsilon^{**},u',z',\eta,\zeta) $, where now
\[
\tau_\varepsilon^{**} = \varepsilon \frac{\sqrt{\vpot zq{z'} +
\varepsilon^{\alpha-1} \vpot uq{u'} } }{\sqrt{
\conjz {z}{q}{\zeta} + \frac1{\varepsilon^{\alpha-1}}\moreau uq\eta}}.
\]
The discussion of the case $\alpha
\in (0,1)$ is completely analogous, also in view of \eqref{specular}.
Finally, in order to prove \eqref{ioffe-refined}, we apply the Ioffe Theorem \cite{Ioff77LSIF}. For this, we
introduce a functional $\overline{\mathcal{M}} : [0,\infty) \times \mathcal{Q}\times [0,\infty) \times \mathcal{Q} \times \R^{n+m} \to [0,\infty] $ subsuming
the functionals $\mathcal{M}_\varepsilon$ and
$\mathcal{M}_0$, viz.\
\[
\overline{\mathcal{M}} (\varepsilon; q,\tau,q',\xi) := \left\{
\begin{array}{ll}
\Me \varepsilon q \tau {q'} \xi & \text{if } \varepsilon>0,
\\
\Mo q \tau {q'} \xi & \text{if } \varepsilon=0.
\end{array}
\right.
\]
Arguing in the very same way as in the proof of \cite[Lemma 3.1]{MRS09}, it can be inferred that the functional $\overline{\mathcal{M}}$ is lower semicontinuous
on $[0,\infty) \times \mathcal{Q}\times [0,\infty) \times \mathcal{Q} \times \R^{n+m} $,
and that $(\tau,q') \mapsto \overline{\mathcal{M}} (\varepsilon; q,\tau,q',\xi) $ is convex for all $(\varepsilon,q,\xi) \in [0,\infty) \times \mathcal{Q} \times \R^{n+m} $. Hence, the Ioffe Theorem ensures that
\[
\liminf_{\varepsilon \downarrow 0}
\int_0^S \overline{\mathcal{M}}(\varepsilon;{q_\varepsilon(s)},{\tau_\varepsilon(s)},{q_\varepsilon'(s)},{\xi_\varepsilon(s)}) \mathrm{d} s
\geq \int_0^S \overline{\mathcal{M}}(0;{q(s)},{\tau(s)},{q'(s)},{\xi(s)}) \mathrm{d} s,
\]
whence \eqref{ioffe-refined}.
\end{proof}
Observe that the functional $\mathcal{M}_0$ \eqref{basic-one}
fulfills for all $(q,\tau) \in \mathcal{Q}\times [0,\infty)$
\begin{equation}
\label{mo-big}
\Mo{q}{\tau}{q'}{\xi} \geq \langle q', \xi \rangle = \langle u',\eta \rangle + \langle z', \zeta \rangle \qquad \text{for all } q' =(u',z') \in \mathcal{Q} \text{ and all }
\xi = (\eta,\zeta )\in \R^{n +m}.
\end{equation}
Indeed, for $\tau>0$, the inequality is trivial if either $\moreau uq\eta >0$ or $\conjz zq{\zeta}>0$. When both of them equal $0$, then
$\eta =0$ and
$
\langle q', \xi \rangle = \langle \zeta,z'\rangle \leq \mathcal{R}_0(q,z')= \Mo{q}{\tau}{q'}{\xi}$.
For $\tau =0$ and, e.g., $\alpha >1$, if $z'=0$ we have
\[
\langle q', \xi \rangle = \langle \eta,u'\rangle \leq \sqrt{\langle \vcof u{q}{u'},u'\rangle } \sqrt{\langle \vcofinv u{q}{\eta},\eta\rangle }
= \Mored{q}{\tau}{q'}{\xi} +0= \Mo{q}{\tau}{q'}{\xi}
\]
while, if $\eta=0$,
\[
\begin{aligned}
\langle q', \xi \rangle = \langle \zeta,z'\rangle & = \langle \zeta - \omega, z'\rangle + \langle \omega, z'\rangle
\\
& \leq \sqrt{\langle \vcof z{q}{z'},z'\rangle } \sqrt{\langle \vcofinv z{q}{(\zeta{-}\omega)},(\zeta{-}\omega)\rangle } + \mathcal{R}_0(q,z')
= \Mo{q}{\tau}{q'}{\xi}
\end{aligned}
\]
where we have chosen $\omega\in \stab q$ such that $ \conjz zq{\zeta} = \moreau{z}{q}{\zeta{ -}\omega} = \frac12 \langle \vcofinv{z}{q}{(\zeta{-}\omega)},(\zeta {-}\omega)\rangle$, and used the fact that $\langle \omega, z'\rangle \leq \mathcal{R}_0(q,z')$.
For the ensuing discussions, the set where \eqref{mo-big} holds as an equality shall play a crucial role.
We postpone its precise definition to right before the statement of Proposition \ref{prop:charact}, cf.\ \eqref{contact-set} ahead.
\paragraph{\bf The vanishing-viscosity result.}
Theorem \ref{th:main} below
states that, up to a subsequence, the
parameterized solutions $(\mathsf{t}_\varepsilon,\mathsf{q}_\varepsilon)_\varepsilon$ of the (Cauchy problems for the) viscous system \eqref{gen-grad-syst} converge to a
parameterized curve $(\mathsf{t},\mathsf{q})$ complying with the analog of the energy balance \eqref{enid-eps-param}, with $\mathcal{M}_0$ in place of $\mathcal{M}_\varepsilon$.
We postpone a thorough analysis of the notion of solution to the rate-independent system
\eqref{rip-limit} thus obtained until after the proof of Theorem \ref{th:main}. Let us instead mention in advance that the line of the argument for
proving the limiting parameterized energy balance \eqref{enid-param-lim} is by now quite standard, cf.\ the proofs of \cite[Thm.\,3.3]{MRS09},
\cite[Thm.\,5.5]{MRS10}. In fact, the \emph{upper energy estimate} (i.e.\ the inequality $\leq$ for \eqref{enid-param-lim}) shall follow from lower semicontinuity arguments, based on the application of the Ioffe Theorem \cite{Ioff77LSIF}. The \emph{lower energy estimate} $\geq $ will instead ensue from the chain rule \eqref{chain-rule}.
We also point out that, for the compactness argument it is actually not necessary to start from parameterized curves for which
estimate \eqref{normalization} holds, uniformly w.r.t.\ time. In fact, the uniform integrability of the sequence $(\mathsf{t}_\varepsilon', \mathsf{q}_\varepsilon')_\varepsilon$ is sufficient, cf.\ \eqref{uniform-integrab} below.
\begin{theorem}
\label{th:main}
Assume \eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz}, \eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}.
Let $(q_\varepsilon)_\varepsilon\subset H^1 (0,T;\mathcal{Q})$ be a sequence of solutions to the Cauchy problem for \eqref{gen-grad-syst}.
Choose nondecreasing surjective parameterizations $\mathsf{t}_\varepsilon : [0,S_\varepsilon] \to [0,T]$ and set
$
\mathsf{q}_\varepsilon(s)= (\mathsf{u}_\varepsilon(s),\mathsf{z}_\varepsilon(s) ): = q_\varepsilon(\mathsf{t}_\varepsilon(s))
$
for $s\in [0,S_\varepsilon]$. Suppose that $S_\varepsilon \to S$ as $\varepsilon \downarrow 0$ up to a subsequence, and that
there exist $q_0 \in \mathcal{Q}$ and $m \in L^1 (0,S)$ such that
$\mathsf{q}_\varepsilon (0) \to q_0$, and
\begin{equation}
\label{uniform-integrab}
m_\varepsilon:= \mathsf{t}_\varepsilon'+|\mathsf{q}_\varepsilon'| \rightharpoonup m \qquad
\text{in } L^1(0,S) \text{ as } \varepsilon \downarrow 0.
\end{equation}
Then,
there exist a (not-relabeled) subsequence
and a parameterized curve $(\mathsf{t},\mathsf{q}) \in \mathrm{AC} ([0,S]; [0,T]\times \mathcal{Q})$
such that as $\varepsilon\downarrow 0$
\begin{equation}
\label{ascoli}
(\mathsf{t}_\varepsilon,\mathsf{q}_\varepsilon) \to (\mathsf{t},\mathsf{q}) \text{ in } \mathrm{C}^0 ([0,S];[0,T]\times \mathcal{Q}),
\end{equation}
$\mathsf{t}'+|\mathsf{q}'| \leq m$ a.e.\ in $(0,S)$,
and $(\mathsf{t},\mathsf{q})$ fulfills the (parameterized) energy identity \begin{equation}
\label{enid-param-lim}
\begin{aligned}
&
\ene {\mathsf{t}(s_2)}{\mathsf{q}(s_2)} +\int_{s_1}^{s_2}
\Mo{\mathsf{q}(r)}{\mathsf{t}'(r)}{\mathsf{q}'(r)}{-\mathrm{D}_q\ene{\mathsf{t}(r)}{\mathsf{q}(r)}} \
\mathrm{d} r
\\
& = \ene {\mathsf{t}(s_1)}{\mathsf{q}(s_1)}
+ \int_{s_1}^{s_2} \partial_t \ene {\mathsf{t}(r)}{\mathsf{q}(r)} \mathsf{t}'(r) \mathrm{d} r \qquad \text{for all } 0 \leq s_1\leq s_2 \leq S.
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Up to a reparameterization, we may suppose that the
curves $(\mathsf{t}_\varepsilon,\mathsf{q}_\varepsilon)$ are defined on the fixed time interval $[0,S]$. We split the proof into three steps.
\noindent
\underline{Step $1$: compactness.} Observe that
for every $0 \leq s_1 \leq s_2 \leq S$
\begin{equation}
\label{analogue}
|\mathsf{q}_\varepsilon(s_1) - \mathsf{q}_\varepsilon (s_2)| \leq \int_{s_1}^{s_2} |\mathsf{q}_\varepsilon'(s)| \mathrm{d} s \leq \int_{s_1}^{s_2}m_\varepsilon (s) \mathrm{d} s \,.
\end{equation}
Since $(\mathsf{q}_\varepsilon (0))_\varepsilon$ is bounded, we deduce from \eqref{analogue} that $(\mathsf{q}_\varepsilon)_\varepsilon \subset \mathrm{C}^0 ([0,S];\mathcal{Q})$ is bounded as well.
What is more, as the family $(m_\varepsilon)_\varepsilon$
is uniformly integrable, cf.\ \eqref{uniform-integrab}, $(\mathsf{q}_\varepsilon)_\varepsilon$ complies with the equicontinuity condition of the Ascoli-Arzel\`a Theorem and so does
$(\mathsf{t}_\varepsilon)_\varepsilon$, by the analog of estimate \eqref{analogue}. Hence, \eqref{ascoli} follows.
Taking into account that $\cE \in \mathrm{C}^1 ([0,T]\times \mathcal{Q})$, we immediately conclude from \eqref{ascoli} that
\begin{equation}
\label{convergence-of-energies}
\ene{\mathsf{t}_\varepsilon}{\mathsf{q}_\varepsilon} \to \ene{\mathsf{t}}{\mathsf{q}}, \qquad
\mathrm{D}_q \ene{\mathsf{t}_\varepsilon}{\mathsf{q}_\varepsilon} \to \mathrm{D}_q\ene{\mathsf{t}}{\mathsf{q}},
\qquad
\partial_t \ene{\mathsf{t}_\varepsilon}{\mathsf{q}_\varepsilon} \to \partial_t\ene{\mathsf{t}}{\mathsf{q}} \quad \text{uniformly on } [0,S].
\end{equation}
Furthermore, \eqref{uniform-integrab} also yields that the sequences $(\mathsf{t}_\varepsilon')_\varepsilon$ and $(\mathsf{q}_\varepsilon')_\varepsilon$ are uniformly integrable. Thus,
by the Pettis Theorem,
up to a further extraction we find
\begin{equation}
\label{l1-conv}
\mathsf{t}_\varepsilon' \rightharpoonup \mathsf{t}' \qquad \text{in } L^1 (0,S), \qquad \mathsf{q}_\varepsilon' \rightharpoonup \mathsf{q}' \qquad \text{in } L^1 (0,S;\mathcal{Q}),
\end{equation}
whence $\mathsf{t}'+|\mathsf{q}'| \leq m$ a.e.\ in $(0,S)$.
\noindent
\underline{Step $2$: upper energy estimate. }
We now take the limit as $\varepsilon \downarrow 0$ of the (parameterized) energy-dissipation balance \eqref{enid-eps-param} for every $0 \leq s_1 \leq s_2 \leq S$:
\begin{equation}
\label{UEE}
\begin{aligned}
&
\ene {\mathsf{t}(s_2)}{\mathsf{q}(s_2)} +\int_{s_1}^{s_2}
\Mo{\mathsf{q}(r)}{\mathsf{t}'(r)}{\mathsf{q}'(r)}{-\mathrm{D}_q\ene{\mathsf{t}(r)}{\mathsf{q}(r)}}
\mathrm{d} r
\\
&
\stackrel{(1)}{\leq} \lim_{\varepsilon \downarrow 0}\ene {\mathsf{t}_\varepsilon(s_2)}{\mathsf{q}_\varepsilon(s_2)} +\liminf_{\varepsilon \downarrow 0}\int_{s_1}^{s_2}
\Me{\varepsilon}{\mathsf{q}_\varepsilon(r)}{\mathsf{t}_\varepsilon'(r)}{\mathsf{q}'_\varepsilon(r)}{-\mathrm{D}_q\ene{\mathsf{t}_\varepsilon(r)}{\mathsf{q}_\varepsilon(r)}} \
\mathrm{d} r
\\
& = \lim_{\varepsilon \downarrow 0}\ene {\mathsf{t}_\varepsilon(s_1)}{\mathsf{q}_\varepsilon(s_1)}
+ \lim_{\varepsilon \downarrow 0} \int_{s_1}^{s_2} \partial_t \ene{\mathsf{t}_\varepsilon(r)}{\mathsf{q}_\varepsilon(r)} \mathsf{t}_\varepsilon'(r) \mathrm{d} r
\\
&
\stackrel{(2)}{ =} \ene {\mathsf{t}(s_1)}{\mathsf{q}(s_1)}
+ \int_{s_1}^{s_2} \partial_t \ene{\mathsf{t}(r)}{\mathsf{q}(r)} \mathsf{t}'(r) \mathrm{d} r\,,
\end{aligned}
\end{equation}
where $(1) $ follows from the energy convergence in
\eqref{convergence-of-energies} and the previously proved
\eqref{ioffe-refined}, and (2) from \eqref{convergence-of-energies}, again, combined with the first of \eqref{l1-conv}.
This concludes the upper energy estimate.
\noindent
\underline{Step $3$: lower energy estimate.} We have for all $0 \leq s_1\leq s_2 \leq S$ that
\begin{equation}
\label{LEE}
\begin{aligned}
& \ene {\mathsf{t}(s_1)}{\mathsf{q}(s_1)}
+ \int_{s_1}^{s_2} \partial_t \ene{\mathsf{t}(r)}{\mathsf{q}(r)} \mathsf{t}'(r) \mathrm{d} r
\\
& \stackrel{(1)}{ = } \ene {\mathsf{t}(s_2)}{\mathsf{q}(s_2)} + \int_{s_1}^{s_2} \langle -\mathrm{D}_q \ene{\mathsf{t}(r)}{\mathsf{q}(r)} , \mathsf{q}'(r) \rangle \mathrm{d} r
\\
& \stackrel{(2)}{ \leq } \ene {\mathsf{t}(s_2)}{\mathsf{q}(s_2)} +\int_{s_1}^{s_2}
\Mo{\mathsf{q}(r)}{\mathsf{t}'(r)}{\mathsf{q}'(r)}{-\mathrm{D}_q\ene{\mathsf{t}(r)}{\mathsf{q}(r)}}
\mathrm{d} r\,,
\end{aligned}
\end{equation}
where (1) follows from the chain rule, and (2) is due to inequality
\eqref{mo-big}. In this way, we conclude \eqref{enid-param-lim}.
Finally, combining \eqref{UEE} and \eqref{LEE} it is easy to deduce
that
\[
\lim_{\varepsilon \downarrow 0} \int_{s_1}^{s_2}
\Me{\varepsilon}{\mathsf{q}_\varepsilon(r)}{\mathsf{t}_\varepsilon'(r)}{\mathsf{q}_\varepsilon'(r)}{- \mathrm{D}_q
\ene{\mathsf{t}_\varepsilon(r)}{\mathsf{q}_\varepsilon(r)}} \mathrm{d} r = \int_{s_1}^{s_2}
\Mo{\mathsf{q}(r)}{\mathsf{t}'(r)}{\mathsf{q}'(r)}{-\mathrm{D}_q\ene{\mathsf{t}(r)}{\mathsf{q}(r)}} \mathrm{d} r
\]
for all $0\leq s_1 \leq s_2 \leq S$, whence $ \int_{s_1}^{s_2} \mathcal{R}_0
(\mathsf{q}_\varepsilon(r), \mathsf{z}_\varepsilon'(r)) \mathrm{d} r \to \int_{s_1}^{s_2} \mathcal{R}_0
(\mathsf{q}(r), \mathsf{z}'(r)) \mathrm{d} r $.
\end{proof}
\paragraph{\bf Balanced Viscosity parameterized solutions.}
Let us now gain further insight into the notion of solution to system
\eqref{intro:rip-limit} arising from the vanishing-viscosity limit.
First of all, we fix its definition.
\begin{definition}\label{def:bv-param}
Let $(\mathcal{R}_0,\vpotname z,\vpotname u, \mathcal{E})$ comply with
\eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz},
\eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}. A curve $(\mathsf{t},\mathsf{q})
\in \mathrm{AC} ([0,S];[0,T]\times \mathcal{Q})$ is called a
\emph{parameterized Balanced Viscosity} ($\mathrm{pBV}$, for short) solution
to the rate-independent system $(\mathcal{Q}, \cE, \mathcal{R}_0 + \varepsilon
\vpotname z + \varepsilon^\alpha \vpotname u) $ if $\mathsf{t}:[0,S] \to [0,T]$
is nondecreasing, and the pair $(\mathsf{t},\mathsf{q})$ complies with the
energy-dissipation balance \eqref{enid-param-lim} for all $0\leq
s_1\leq s_2 \leq S$.
Furthermore, $(\mathsf{t},\mathsf{q})$ is called
\begin{compactitem}
\item \emph{non-degenerate}, if
\begin{equation}
\mathsf{t}'(s) + |\mathsf{q}'(s)| >0 \qquad \text{for a.a.\,}\, s \in (0,S);
\end{equation}
\item \emph{surjective}, if $\mathsf{t}: [0,S] \to [0,T]$ is surjective.
\end{compactitem}
\end{definition}
\begin{remark}
\label{rmk:nondeg}
\upshape Observe that, even in the case when the function $m$ in
\eqref{uniform-integrab} is a.e.\ strictly positive, Theorem
\ref{th:main} does not guarantee the existence of non-degenerate
$\mathrm{pBV}$ solutions. However, any degenerate $\mathrm{pBV} $ solution
$(\mathsf{t},\mathsf{q})$ can be reparameterized to a non-degenerate one
$(\tilde{\mathsf{t}}, \tilde{\mathsf{q}}) : [0,\tilde{S}] \to [0,T]\times \mathcal{Q}$,
even fulfilling the \emph{normalization condition}
\begin{equation}
\label{fake-norma}
\tilde{\mathsf{t}}'(\sigma) + |\tilde{\mathsf{q}}'(\sigma)|=1 \qquad \text{for a.a.\,}\,
\sigma \in (0,\tilde S)\,.
\end{equation}
Indeed, following \cite[Rmk.\ 2]{MRS09}, starting from a (possibly
degenerate) solution $(\mathsf{t},\mathsf{q})$, we set
\[
\sigma(s):= \int_0^s \big( \mathsf{t}'(r) + |\mathsf{q}'(r)| \big) \mathrm{d} r \qquad \text{and }
\tilde{S}:= \sigma (S),
\]
and define $(\tilde{\mathsf{t}}(\sigma), \tilde{\mathsf{q}}(\sigma)):=
(\mathsf{t}(s),\mathsf{q}(s))$ if $\sigma = \sigma(s)$. Then, the very same
calculations as in \cite[Rmk.\ 2]{MRS09} lead to \eqref{fake-norma}.
\end{remark}
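The reparameterization of Remark \ref{rmk:nondeg} is also straightforward to carry out numerically. The following sketch (in Python, on a purely illustrative curve; the sample curve and grid sizes are assumptions of this example, not taken from the paper) approximates $\sigma$ by trapezoidal quadrature, inverts it by piecewise-linear interpolation, and verifies the normalization \eqref{fake-norma} on the resampled curve.

```python
import numpy as np

# Illustrative degenerate parameterization: t'(0)=0, q a planar curve.
S = 1.0
s = np.linspace(0.0, S, 2001)
t = s**2
q = np.stack([np.sin(3 * s), np.cos(3 * s)], axis=1)

# sigma(s) = int_0^s ( t'(r) + |q'(r)| ) dr, by trapezoidal quadrature
ds = s[1] - s[0]
tp = np.gradient(t, ds)
qp = np.gradient(q, ds, axis=0)
speed = tp + np.linalg.norm(qp, axis=1)
sigma = np.concatenate([[0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * ds)])
S_tilde = sigma[-1]

# invert sigma and resample: (t~, q~)(sigma) := (t, q)(s(sigma))
sig_grid = np.linspace(0.0, S_tilde, 2001)
s_of_sig = np.interp(sig_grid, sigma, s)
t_new = np.interp(s_of_sig, s, t)
q_new = np.stack([np.interp(s_of_sig, s, q[:, k]) for k in range(2)], axis=1)

# check the normalization t~' + |q~'| = 1 away from the endpoints
dsig = sig_grid[1] - sig_grid[0]
tn = np.gradient(t_new, dsig)
qn = np.gradient(q_new, dsig, axis=0)
norm_speed = tn + np.linalg.norm(qn, axis=1)
```

For this curve $\sigma(s)=s^2+3s$, so $\tilde S = \sigma(S)=4$, and the discrete normalized speed stays close to $1$ in the interior of $[0,\tilde S]$.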
We conclude this section with a characterization of $\mathrm{pBV}$ solutions
in the same spirit as \cite[Prop.\ 2]{MRS09} and \cite[Prop.\
5.3]{MRS10}, \cite[Cor.\ 4.5]{mielke-rossi-savare2013}. We show that
the energy identity \eqref{enid-param-lim} defining the concept of
$\mathrm{pBV}$ solutions is equivalent to the corresponding energy inequality
on the interval $[0,S]$, and to the energy inequality in a
differential form. Finally, \eqref{char-contset} below provides a
further reformulation of this solution concept which involves the
\emph{contact set} (cf.\ \cite{MRS10, mielke-rossi-savare2013})
\begin{equation}
\label{contact-set}
\contset{q}:= \{ (\tau,q',\xi)\in [0,\infty) \times \mathcal{Q} \times
\R^{n+m} \, : \Mo{q}{\tau}{q'}{\xi} = \langle q',\xi\rangle \}\,.
\end{equation}
Observe that for all $q \in \mathcal{Q}$ the set $\contset{q}$ is closed, as
the functional $\Mo{q}{\cdot}{\cdot}{\cdot}$ is lower semicontinuous.
In Proposition \ref{l:cont-set} we will provide the explicit
representation of $\contset{q}$. This and \eqref{char-contset} will
be at the core of the reformulation of $\mathrm{pBV}$ solutions in terms of
subdifferential inclusions, which we will discuss in Sec.\ \ref{s:4}.
\begin{proposition}
\label{prop:charact}
Let $(\mathcal{R}_0,\vpotname z,\vpotname u, \mathcal{E})$ comply with
\eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz},
\eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}. A curve $(\mathsf{t},\mathsf{q})
\in \mathrm{AC} ([0,S];[0,T]\times \mathcal{Q})$, with $\mathsf{t}$ nondecreasing,
is a $\mathrm{pBV}$ solution to the rate-independent system $(\mathcal{Q},\cE,
\mathcal{R}_0 + \varepsilon \vpotname z+ \varepsilon^\alpha \vpotname u)$ if and only if
one of the following equivalent conditions is satisfied:
\begin{enumerate}
\item \eqref{enid-param-lim} holds as an inequality on $(0,S)$, i.e.
\[
\begin{aligned}
&
\ene {\mathsf{t}(S)}{\mathsf{q}(S)} +\int_{0}^{S}
\Mo{\mathsf{q}(r)}{\mathsf{t}'(r)}{\mathsf{q}'(r)}{-\mathrm{D}_q\ene{\mathsf{t}(r)}{\mathsf{q}(r)}} \
\mathrm{d} r
\\
& \leq \ene {\mathsf{t}(0)}{\mathsf{q}(0)}
+ \int_{0}^{S} \partial_t \ene {\mathsf{t}(r)}{\mathsf{q}(r)} \mathsf{t}'(r) \mathrm{d} r;
\end{aligned}
\]
\item the above energy inequality holds in the differential form
$\frac{\mathrm{d}}{\mathrm{d} s} \ene{\mathsf{t}}{\mathsf{q}} +
\Mo{\mathsf{q}}{\mathsf{t}'}{\mathsf{q}'}{-\mathrm{D}_q\ene{\mathsf{t}}{\mathsf{q}}} \leq \partial_t
\ene {\mathsf{t}}{\mathsf{q}} \mathsf{t}' $ a.e.\ in $(0,S)$;
\item the triple $(\mathsf{t}',\mathsf{q}',-\mathrm{D}_q \ene{\mathsf{t}}{\mathsf{q}})$ belongs to
the contact set, i.e.
\begin{equation}
\label{char-contset}
(\mathsf{t}'(s),\mathsf{q}'(s),-\mathrm{D}_q \ene{\mathsf{t}(s)}{\mathsf{q}(s)}) \in \contset{\mathsf{q}(s)} \qquad \text{for a.a.\,}\, s \in (0,S).
\end{equation}
\end{enumerate}
\end{proposition}
The proof of Proposition \ref{prop:charact} is omitted: it follows by
exploiting the chain rule \eqref{chain-rule}, with arguments akin to
those in the proof of Theorem \ref{th:main}, see also
\cite[Prop.\,2]{MRS09} and \cite[Prop.\,5.3]{MRS10},
\cite[Cor.\,4.5]{mielke-rossi-savare2013}.
\section{Physical interpretation}
\label{s:4}
The following result provides a thorough description of the (closed)
contact set $\contset{q}$, cf.\ \eqref{contact-set}. As we will see,
the representation of $\contset q$ is substantially different in the
three cases $\alpha>1$, $\alpha=1$, and $\alpha\in (0,1)$. That is
why, in Proposition \ref{l:cont-set} below we will use the notation
$\Sigma_{\alpha>1}(q)$, $\Sigma_{\alpha=1}(q)$, and $\Sigma_{\alpha
\in (0,1)}(q)$. We will prove that these sets are given by the
union of subsets describing the various evolution regimes for the
variables $u$ and $z$. The notation for these subsets will be of the
form
\[
\rgm{A}{r}{B}{s} \qquad \text{with } \mathrm{A}, \mathrm{B} \in \{ \mathrm{E}, \mathrm{R}, \mathrm{V}, \mathrm{B} \} \text{ and } \mathsf{r}, \mathsf{s} \in \{ \mathsf{u},
\mathsf{z} \}.
\]
The letters $\mathrm{E}, \mathrm{R}, \mathrm{V}, \mathrm{B}$ stand for \emph{Equilibrated}, \emph{Rate-independent},
\emph{Viscous}, and \emph{Blocked}, respectively. For instance,
$\rgm EuRz$ is the set of $(\tau,q',\xi)$ corresponding to equilibrium for $u$ and rate-independent evolution for $z$, cf.\ \eqref{ssu} below; we postpone more comments after the statement of
Proposition \ref{l:cont-set}.
Observe that all of these sets depend on the state variable $q$, as does $\contset q$. However, for simplicity
we will not highlight this in their notation.
In their description
we shall always refer to the representation
$q'=(u',z')$ for the velocity variable, and
$\xi =(\eta,\zeta)$ for the force variable.
\begin{proposition}
\label{l:cont-set}
Assume \eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz}, \eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}.
Then, for
\begin{description}
\item[\underline{$\alpha >1$}] the contact set is given by
\begin{equation}
\label{cont-set-ap}
\Sigma_{\alpha>1}(q) = \rgm EuRz \cup \rgm VuBz \cup \rgm EuVz
\end{equation}
where
\begin{align}
& \label{ssu}
\rgm EuRz: = \{ (\tau,q',\xi)\, :
\ \tau>0,\ \ q'=(u',z'), \ \xi = (0,\zeta) \text{ and
} \partial\mathcal{R}_0 (q,z') \ni \zeta \}, \\
&\label{fastu}
\rgm VuBz : = \{ (\tau,q',\xi)\, : \ (\tau,q',\xi)
=(0,(u',0),(\eta,\zeta)) \text{ and } \exists\, \thn u \in [0,1]\, :
\ \thn u \vcof u{q}u'= (1-\thn u) \eta \}, \\
&\label{equilu}
\begin{aligned}
\rgm EuVz:= \{ (\tau,q',\xi)\, : \ & \tau =0, \ q'=(u',z'), \ \xi
= (0,\zeta) \text{ and } \\
& \exists\, \thn z \in [0,1] \, : \
(1-\thn z) \partial\mathcal{R}_0 (q,z') + \thn z
\vcof z{q}z' \ni (1-\thn z) \zeta \}.
\end{aligned}
\end{align}
\item[\underline{$\alpha =1$}] the contact set is given by
\begin{equation}
\label{cont-set-au}
\Sigma_{\alpha=1}(q) = \rgm EuRz \cup \rgm VuVz
\end{equation}
where
\begin{align}
&\label{jumpset}
\rgm VuVz: =\left \{ (\tau,q',\xi)\, :
\ \tau=0, \text{ and } \exists\, \theta \in[0,1]\, : \
\Big\{ \begin{array}{l}
\theta \vcof u{q}u' = (1-\theta) \eta, \\
(1-\theta) \partial\mathcal{R}_0 (q,z') + \theta
\vcof z{q}z' \ni (1-\theta) \zeta
\end{array} \right\}.
\end{align}
\item[\underline{$\alpha \in (0,1)$}] the contact set is given by
\begin{equation}
\label{cont-set-am}
\Sigma_{\alpha \in (0,1)}(q) = \rgm EuRz \cup \rgm BuVz \cup \rgm VuRz
\end{equation}
with
\begin{align}
&\label{fastz}
\begin{aligned}
\rgm BuVz:= \{ (\tau,q',\xi): \ & \tau =0, \ q'=(0,z'), \ \xi =
(\eta,\zeta) \text{ and } \\
& \exists\, \thn z \in [0,1]: \;
(1{-}\thn z)\partial\mathcal{R}_0 (q,z') + \thn z
\vcof z{q}z' \ni (1{-}\thn z)\zeta \},
\end{aligned}
\\
& \label{equilz}
\rgm VuRz
:=\left \{ (\tau,q',\xi): \;
(\tau,q',\xi)=(0,(u',z'),(\eta,\zeta)) \text{ and }
\Big\{ \begin{array}{l}
\exists\, \thn u \in [0,1]: \; \thn u \vcof u{q}u'= (1{-}\thn u) \eta,
\\
\partial\mathcal{R}_0 (q,z') \ni \zeta
\end{array}
\right \}.
\end{align}
\end{description}
\end{proposition}
\noindent
As \eqref{char-contset} reveals, the contact set encompasses all the
relevant information on the evolution of a \emph{parameterized
Balanced Viscosity} solution. The form of the sets $\rgm EuRz, \,
\rgm VuBz, \ldots$ that constitute it is closely related to the
mechanical interpretation of $\mathrm{pBV}$ solutions which shall be explored
at the end of this section. Let us just explain here that
\begin{compactitem}
\item the set $\rgm EuRz$ corresponds to equilibrium for the variable
$u$ (as $\eta=0$), and a \emph{stick-slip} regime for $z$, which
evolves rate-independently as expressed by $ \partial\mathcal{R}_0 (q,z')
\ni \zeta$. Observe that the stationary state $u'=z'=0$ is also
encompassed.
\item The set $\rgm Vu Bz$ corresponds to the case in which the variable $u$
still has to relax to an equilibrium and thus is governed by a
\emph{fast} dynamics at a jump $\tau=0$, while $z$ is ``blocked by
viscosity'' and thus stays constant ($z'=0$).
\item The set $\rgm EuVz$ corresponds to the regime in which $z$
evolves according to viscosity at a jump $\tau=0$, and $u$ follows
$z$ in such a way that it is at an equilibrium ($\eta=0$).
\item The set $\rgm VuVz$ corresponds to the case where the evolution
of the system at a jump $\tau=0$ is governed by viscosity both in
$u$ and in $z$.
\item The set $ \rgm BuVz $ encompasses the case in which the variable
$z$ at a jump $\tau=0$ evolves according to viscosity, while $u$ is
blocked by viscosity ($u'=0$).
\item The set $ \rgm VuRz $ describes viscous evolution for $u$ and
rate-independent evolution for $z$.
\end{compactitem}
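For orientation, consider the scalar setting of Section \ref{s:5}, with $\mathcal{R}_0(q,z')=|z'|$ and trivial viscous potentials, so that $\vcof z{q}$ reduces to the identity (an illustrative specialization, not the general case). The inclusion defining $\rgm EuVz$ in \eqref{equilu} then reads
\[
\exists\, \thn z \in [0,1]\, : \quad
(1{-}\thn z)\, \mathop{\mathrm{Sign}}(z') + \thn z\, z' \ni (1{-}\thn z)\, \zeta\,.
\]
Here $\thn z =0$ recovers the rate-independent condition $\zeta \in \mathop{\mathrm{Sign}}(z')$, $\thn z=1$ forces $z'=0$, and $\thn z \in (0,1)$ yields the viscous law $\mathop{\mathrm{Sign}}(z') + \lambda z' \ni \zeta$ with $\lambda = \thn z/(1{-}\thn z)$, as in the proof of Proposition \ref{l:cont-set} below.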
\begin{remark}\upshape
\label{rmk:specular}
Let us stress once more that, as mentioned in advance, in the
vanishing-viscosity limit the evolution regimes for $\alpha>1$ and
$\alpha \in (0,1)$ mirror each other. Indeed, formulae
\eqref{cont-set-ap} and \eqref{cont-set-am} are specular, up to
observing that the analog of the equilibrium regime
$\mathrm{E}_{\mathsf{u}}$ is indeed the rate-independent regime
$\mathrm{R}_{\mathsf{z}}$, see also Figure \ref{fig:SwitchRegimes}.
\end{remark}
\begin{proof}[Proof of Proposition \ref{l:cont-set}]
In all the three cases $\alpha>1$, $\alpha =1$, and $\alpha \in (0,1)$,
for $\tau>0$ the contact condition
$\Mo{q}{\tau}{q'}{\xi} = \langle \xi, q'\rangle$
can hold only if the constraints
$\eta =0$ and $\zeta \in \stab q$ are satisfied. Then, $\Mo{q}{\tau}{q'}{\xi} = \langle \xi, q'\rangle$ reduces to
$\mathcal{R}_0 (q,z') = \langle \zeta,z'\rangle $. Since $\zeta \in \stab q$, this is equivalent to
$\zeta \in \partial\mathcal{R}_0 (q,z')$ by \eqref{charact-1-homog}.
This gives the set
$\rgm EuRz$, which contributes to the contact set $\contset q$
in the three cases $\alpha>1$, $\alpha=1$, and $\alpha \in (0,1)$.
For $\alpha=1$, observe that in the case $\tau=0$ the contact condition is
\begin{equation}
\label{case-alpha1}
\mathcal{R}_0(z') + 2\sqrt{\vpot zq{z'} + \vpot uq{u'} }\sqrt{\conjz
zq{\zeta} + \moreau uq{\eta} } = \langle \zeta, z'\rangle + \langle
\eta, u'\rangle.
\end{equation}
Let us first address the case in which $\sigma_1:= \sqrt{\vpot
zq{z'}+\vpot uq{u'} }=0$ or $\sigma_2:= \sqrt{\conjz
zq{\zeta}+\moreau uq{\eta} } =0$. The former case corresponds to the
stationary state $u'=z'=0$, which means $\theta=1$ in \eqref{jumpset}.
The latter corresponds to $\conjz zq\zeta =0$ (equivalently, $\zeta \in \stab q$)
and $\eta=0$. Hence \eqref{case-alpha1} becomes $\mathcal{R}_0(z')= \langle
\zeta, z'\rangle$, whence $\zeta \in \partial\mathcal{R}_0 (q,z')$ by
\eqref{charact-1-homog}, again. This corresponds to $\theta=0$ in
\eqref{jumpset}. If $\sigma_1 \sigma_2>0$, then we rewrite $2\sigma_1
\sigma_2$ as $\lambda \sigma_1^2 + \frac1\lambda \sigma_2^2$, with
$\lambda>0$ given by $\lambda = \frac{\sigma_2}{\sigma_1}$. With this
$\lambda$, \eqref{case-alpha1} rewrites as
\[
\mathcal{R}_0(z') + \lambda ( \vpot zq{z'} + \vpot uq{u'}) + \frac1{\lambda}
( \conjz zq{\zeta}+\moreau uq{\eta}) = \langle \zeta, z'\rangle +
\langle \eta, u'\rangle.
\]
Upon multiplying both sides by $\lambda$, using that $\vpotname z$ and
$\vpotname u$ are positively homogeneous of degree $2$, and
rearranging terms, we get
\[
\mathcal{R}_0(\lambda z') + \vpot zq{ \lambda z'} + \conjz zq{\zeta} - \langle \zeta,\lambda z'\rangle
= \langle \eta, \lambda u'\rangle - \vpot uq{\lambda u'} - \moreau uq{\eta}.
\]
By the Fenchel-Moreau equivalence, this gives
\[
\left\{
\begin{array}{l}
\vcof u{q}{(\lambda u')} = \eta, \\
\partial\mathcal{R}_0 (q, \lambda z') +
\vcof z{q}{(\lambda z')} \ni \zeta
\end{array}
\right.
\]
with $\lambda >0$. Then, \eqref{jumpset}
follows with $\theta \in (0,1)$ such that $\lambda = \frac{\theta}{1-\theta}$.
All in all, for
$\alpha =1$ we have proved that, if $(\tau,q',\xi) \in \Sigma_{\alpha=1}(q)$, then either $(\tau,q',\xi) \in \rgm EuRz$, or
$(\tau,q',\xi) \in \rgm VuVz$. This concludes the proof of \eqref{cont-set-au} for $\Sigma_{\alpha=1}(q)$.
In the case $\alpha>1$ and $\tau=0$, $\Mo{q}{\tau}{q'}{\xi} $ is finite if and only if either $z'=0$, or $\eta=0$. In the former case, the contact condition reduces to
$\sqrt{\langle \vcof uq{u'}, u' \rangle} \sqrt{\langle \vcofinv uq{\eta}, \eta \rangle} = \langle \eta, u'\rangle$, which is equivalent to the fact that there exists $\thn u \in [0,1]$ with $\thn u \vcof u{q}u'= (1-\thn u) \eta$. This yields the set $\rgm VuBz$. In the latter case,
the contact condition rephrases as
\[
\mathcal{R}_0(q,z') + \sqrt{\langle \vcof zq{z'}, z' \rangle} \sqrt{\langle \vcofinv zq{(\zeta{-}\omega)}, \zeta{-}\omega \rangle} = \langle \zeta , z'\rangle = \langle \omega,z'\rangle+
\langle \zeta{-}\omega,z'\rangle,
\]
with $\omega \in \stab q$ such that $\conjz zq{\zeta} = \frac12 \langle \vcofinv zq{(\zeta{-}\omega)}, \zeta{-}\omega \rangle$. It is immediate to check that the above chain of equalities implies
\[
\left\{
\begin{array}{ll}
\omega \in \partial \mathcal{R}_0(q,z'),
\\
(1-\thn z )(\zeta{-}\omega )= \thn z \vcof zq{z'} \quad \text{for some } \thn z \in [0,1].
\end{array}
\right.
\]
This yields the set $\rgm EuVz$.
All in all,
in the case $\alpha>1$
we have proved that, if $(\tau,q',\xi) \in \Sigma_{\alpha>1}(q)$, then either $(\tau,q',\xi) \in \rgm EuRz$, or
$(\tau,q',\xi) \in \rgm VuBz$, or $(\tau,q',\xi) \in \rgm EuVz$. This concludes the proof of \eqref{cont-set-ap}.
The proof of \eqref{cont-set-am} follows the very same lines and is thus omitted.
\end{proof}
The \underline{\bf main result} of this paper is the following
theorem, which is in fact a direct consequence of the characterization
\eqref{char-contset} of $\mathrm{pBV}$ solutions in terms of the contact set,
and of Proposition \ref{l:cont-set}. Observe that we confine
ourselves to \emph{non-degenerate} $\mathrm{pBV}$ solutions only. This is not
restrictive, in view of Remark \ref{rmk:nondeg}.
\begin{theorem}[Reformulation as a system of subdifferential inclusions]
\label{prop:diff-incl}
Assume \eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz},
\eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}. A curve $(\mathsf{t},\mathsf{q})
\in \mathrm{AC}([0,S]; [0,T]\times \mathcal{Q})$ with nondecreasing $\mathsf{t}$ is a
\emph{non-degenerate} parameterized Balanced Viscosity solution to the
rate-independent system $(\mathcal{Q},\cE, \mathcal{R}_0 + \varepsilon \vpotname z+
\varepsilon^\alpha \vpotname u)$ if and only if $\mathsf{t}' + |\mathsf{q}'|>0$ a.e.\ in
$(0,S)$ and there exist two Borel functions $\thn u, \, \thn z: [0,S]
\to [0,1]$ such that
the pair $(\mathsf{t},\mathsf{q})$ with $\mathsf{q}= (\mathsf{u},\mathsf{z})$ satisfies the
system of equations for a.a.\ $s\in (0,S)$:
\begin{equation}
\label{diff-syst}
\begin{aligned}
&
\thn u(s)\, \vcof u{\mathsf{q}(s)}{\mathsf{u}'(s)} + (1{-}\thn{u} (s))\, \mathrm{D}_u
\enet{\mathsf{t}(s)}{\mathsf{u}(s)}{\mathsf{z}(s)} \ni 0,
\\
&
(1{-}\thn{z} (s))\, \partial\mathcal{R}_0 (\mathsf{q}(s),\mathsf{z}'(s)) +
\thn z(s)\, \vcof z{\mathsf{q}(s)}{\mathsf{z}'(s)} + (1{-}\thn{z} (s))\, \mathrm{D}_z
\enet{\mathsf{t}(s)}{\mathsf{u}(s)}{\mathsf{z}(s)} \ni 0,
\end{aligned}
\end{equation}
with
\begin{equation}
\label{switching}
\mathsf{t}'(s)\, \thn u(s) = \mathsf{t}'(s) \, \thn z (s) =0
\end{equation}
and the following additional conditions depending on $\alpha$:
\begin{align}
&&&\text{\underline{$\alpha>1$:}}&
\label{add-con-pu} \thn u (s) \,(1 {-}\thn z (s)) =0;
&&\\
&&&\text{\underline{$\alpha=1$:}} &
\label{add-con-uu} \thn u(s) = \thn z (s); &&\\
&&&\text{\underline{$\alpha\in (0,1)$:}} &
\label{add-con-mu} \thn z (s)\, (1{-}\thn u (s)) =0 .&&
\end{align}
\end{theorem}
Figure \ref{fig:SwitchRegimes} displays the structure of the
allowed values for the parameters $(t',\thn u,\thn z)$ depending on $\alpha$.
\begin{figure}[h]
\centerline{\unitlength1cm
\begin{picture}(0,0)
\put(1.93,1){
\put(-1.2,-1){$t'$}\put(2.3,.1){$\thn u$}\put(-0.4,1.8){$\thn z$}
\put(-1.8,-0.4){$\rgm EuRz$} \put(0.6,-0.4){$\rgm VuRz$}
\put(1.85,1.1){$\rgm BuVz$} }
\put(7.3,1){
\put(-1.2,-1){$t'$}\put(2.3,.1){$\thn u$}\put(-0.4,1.8){$\thn z$}
\put(-1.8,-0.4){$\rgm EuRz$} \put(0.3,1.2){$\rgm VuVz$} }
\put(12.6,1){
\put(-1.2,-1){$t'$}\put(2.3,.1){$\thn u$}\put(-0.4,1.8){$\thn z$}
\put(-1.8,-0.4){$\rgm EuRz$} \put(0.15,0.75){$\rgm EuVz$}
\put(0.5,1.9){$\rgm VuBz$} }
\end{picture}%
\includegraphics[width=15\unitlength]{TwoRelaxRegimes}}
\caption{The switching between the different regimes, depending on the
cases $\alpha<1$, $\alpha=1$,
and $\alpha>1$, is displayed via the allowed combinations of the
triples $(t',\thn u,\thn z)$.}
\label{fig:SwitchRegimes}
\end{figure}
\begin{remark}
\label{once-more-spec}
\upshape Observe that the conditions \eqref{add-con-pu} and
\eqref{add-con-mu} are specular (cf.\ Remark \ref{rmk:specular}),
revealing once more that the evolution regimes for $\alpha>1$ and
$\alpha<1$ reflect each other. Nonetheless, a major difference occurs
in that, under suitable conditions, for $\alpha>1$ the regime $\rgm
VuBz$ only occurs at the beginning, when $u$ relaxes fast to
equilibrium, cf.\ Proposition \ref{prop:alex}.
\end{remark}
Finally, let us get further insight into the mechanical interpretation of system \eqref{diff-syst}, with the constraints
\eqref{switching} and
\eqref{add-con-pu}--\eqref{add-con-mu}.
Preliminarily, let us point out that, as in the case of parameterized solutions to the rate-independent system
\begin{equation}
\label{simple-rip}
\partial \mathcal{R}_0(z(t),z'(t)) +\mathrm{D}_z \mathcal{I}(t,z(t)) \ni 0 \qquad \text{in } (0,T),
\end{equation}
in the \emph{sole} variable $z$, $\mathsf{t}'(s)=0$ if and only if the
system is jumping in the (slow) external time scale. Therefore, from
\eqref{switching} we gather that, in all of the three cases
$\alpha>1$, $\alpha=1$, and $\alpha \in (0,1)$, when the system does
not jump, then it is either in the \emph{sticking} regime
(i.e. $\mathsf{u}'= \mathsf{z}'=0$), or in the \emph{sliding regime},
namely the evolution of $\mathsf{z}$ is purely rate-independent
(i.e. $\partial\mathcal{R}_0 (\mathsf{q},\mathsf{z}') + \mathrm{D}_z \ene
{\mathsf{t}}{\mathsf{q}} \ni 0$), and $\mathsf{u}$ follows $\mathsf{z}$ in
such a way that it is at an equilibrium (i.e. $-\mathrm{D}_u \ene
{\mathsf{t}}{\mathsf{q}}=0$). It is the description of the system behavior
at jumps that significantly differs for $\alpha>1$, $\alpha=1$, and
$\alpha\in (0,1)$. \medskip
\paragraph{\bfseries \underline{Case $\alpha>1$:} fast relaxation of $u$.}
Here $\mathsf{u}$ relaxes faster to equilibrium than $\mathsf{z}$.
With \eqref{switching} and \eqref{add-con-pu} we are imposing at a
jump that either $\mathsf{z}'=0$ (which follows from $\thn z=1 $,
i.e.\ $\rgm VuBz$) or $\mathsf{u}$ is at equilibrium (corresponding to
$\thn u =0$, i.e.\ $\rgm EuVz$). In fact, $\mathsf{z}$ cannot change
until $\mathsf{u}$ has relaxed to equilibrium.
When $\mathsf{u}$ has reached the equilibrium, then $\mathsf{z}$ may
have either a \emph{sliding jump} (i.e. $\thn z =0$), or a
\emph{viscous jump} ($\thn z \in (0,1)$).
Our next result shows that, in fact, under the condition that the
energy $\cE$ is uniformly convex with respect to the variable $u$
(cf.\ Proposition \ref{prop:aprio-eps}), after an initial phase in
which $\mathsf{z}$ is constant and $\mathsf{u}$ relaxes to an equilibrium
evolving by viscosity (i.e.\ the solution is in regime
$\rgm VuBz$), $\mathsf{u}$ never leaves the equilibrium afterwards. In
that case the evolution of the system is completely described by
$\mathsf{z}$, which turns out to be a parameterized Balanced Viscosity
solution to the rate-independent system driven by the \emph{reduced
energy functional} obtained minimizing out the variable $u$.
\begin{proposition}
\label{prop:alex}
Assume \eqref{ass:dissip-pot-R}, \eqref{ass:dissip-pot-Vz},
\eqref{ass:dissip-pot-Vu}, and \eqref{ass:E}. Additionally, suppose
that $\cE$ complies with \eqref{en-plus}, and denote by
$u=M(t,z)$ the unique solution of $\mathrm{D}_u\cE(t,u,z)=0$, i.e.\ the
minimizer of $\cE(t,\cdot,z)$. Let $(\mathsf{t},\mathsf{q}) \in \mathrm{AC} ([0,S];
[0,T]\times \mathcal{Q} )$ be a parameterized Balanced Viscosity solution to
the rate-independent system $(\mathcal{Q},\cE, \mathcal{R}_0 + \varepsilon \vpotname z+
\varepsilon^\alpha \vpotname u)$ with $\alpha>1$. Set
\begin{equation}
\label{eq:frakS}
\mathfrak S:= \{ s \in [0,S]\, : \ \mathrm{D}_u \ene {\mathsf{t} (s)}{\mathsf{q} (s)} =0\}.
\end{equation}
Then, $\mathfrak S$ is either empty or it has the form $[s_*, S]$ for
some $s_*\in [0,S]$.
(a) Assume $s_*>0$, then for $s\in [0,s_*)=[0,S]\setminus \mathfrak S$
we have $\mathsf{t}(s)=\mathsf{t}(0)$
and $\mathsf{z}(s)= \mathsf{z}(0)$, whereas $\mathsf{u}$ is a solution to the
reparameterized gradient flow for
$(\R^n,\cE(\mathsf{t}(0),\cdot,\mathsf{z}(0)), \mathbb{V}_\mathsf{u})$ (regime $\rgm VuBz$), namely
\begin{equation}
\label{eq:GS-u-only}
0 = \thn u(s) \, \mathbb{V}_\mathsf{u} (\mathsf{u}(s),\mathsf{z}(0))\,
\mathsf{u}'(s) + (1{-}\thn u(s))\, \mathrm{D}_u
\cE(\mathsf{t}(0),\mathsf{u}(s),\mathsf{z}(0)) \quad \text{with }\mathsf{u}(0) \neq
M(\mathsf{t}(0),\mathsf{z}(0)).
\end{equation}
(b) Assume $\mathfrak S=[s_*,S]$ with $s_*<S$, then for $s\in
[s_*,S]$ we have $\mathsf{u}(s)=M(\mathsf{t}(s),\mathsf{z}(s))$ whereas the pair
$(\mathsf{t},\mathsf{z})$ is a parameterized Balanced Viscosity
solution to the reduced rate-independent system $(\R^m, \mathcal{I}, \mathcal{R}_0 +
\varepsilon \vpotname z)$ with the
\emph{reduced energy functional } $\mathcal{I} : [0,T] \times \R^m \to
\R; (t,z) \mapsto \min_{u \in \R^n} \en
t{u}{z} = \cE(t,M(t,z),z)$, which corresponds to the regimes $\rgm
EuVz$ and $\rgm EuRz$.
\end{proposition}
\begin{proof}
To avoid overloaded notation we will often omit the state-dependence
of the functions $\mathbb{V}_\mathsf{u}$ and $\mathbb{V}_\mathsf{z}$. For easy reference we
repeat all the conditions for a $\mathrm{pBV}$ solution $(\mathsf{t},\mathsf{q})$ (cf.\
Theorem \ref{prop:diff-incl}), in the case $\alpha>1$:
\begin{align*}
&\text{(i)} \quad 0=\thn u \mathbb{V}_\mathsf{u} \mathsf{u}' + (1{-}\thn u) \mathrm{D}_u
\cE(\mathsf{t},\mathsf{u},\mathsf{z}),\qquad
\text{(ii)}\quad 0\in (1 {-}\thn z)\partial\mathcal{R}_0(\mathsf{q},\mathsf{z}')+ \thn z
\mathbb{V}_\mathsf{z} \mathsf{z}' + (1{-}\thn z) \mathrm{D}_z \cE(\mathsf{t},\mathsf{u},\mathsf{z}),\\
&\text{(iii)}\quad \mathsf{t}'\thn u=0,\qquad \text{(iv)} \quad \mathsf{t}'\thn z
=0, \qquad \text{(v)} \quad \thn u\,(1{-}\thn z)=0, \qquad
\text{(vi)} \quad \mathsf{t}'+|\mathsf{u}'|+|\mathsf{z}'|>0,
\end{align*}
which have to hold for a.a.\ $s\in (0,S). $
\underline{Step 1:} By the continuity of $(\mathsf{t} ,\mathsf{z})$ and
$\mathrm{D}_u\cE$ the set $\mathfrak S$ is closed, hence its complement
is relatively open. Consider an interval $(s_1,s_2)$ not
intersecting with $\mathfrak S$. Using (i) we find $\thn u>0$ a.e.\
in $(s_1,s_2)$.
Hence, (iii) implies $\mathsf{t}'=0$ a.e., and we obtain
$\mathsf{t}(s)=\mathsf{t}(s_1)$ for $s\in [s_1,s_2]$. By (v) we find $\thn z=1$
a.e.\ Now, (ii) implies $\mathsf{z}'=0$ a.e., which gives
$\mathsf{z}(s)=\mathsf{z}(s_1)$ for $s\in [s_1,s_2]$. From (vi) we conclude
$\mathsf{u}'\neq 0$ a.e. Thus, we summarize
\[
\mathsf{t}(s)=\mathsf{t}(s_1), \quad \mathsf{z}(s)=\mathsf{z}(s_1), \quad 0 =
\mathbb{V}_u(\mathsf{u}(s),\mathsf{z}(s_1)) \mathsf{u}'(s) + \lambda(s)
\mathrm{D}_u\cE(\mathsf{t}(s_1),\mathsf{u}(s), \mathsf{z}(s_1)),
\]
where $\lambda(s)=(1{-}\thn u(s))/\thn u(s) \in (0,\infty)$ a.e. In
particular, $\mathsf{u}$ satisfies \eqref{eq:GS-u-only}. From
$\mathsf{u}\in \mathrm{AC}([0,S];\R^n)$ and (i) we obtain $\lambda\in
L^1(s_1,s_2)$. Setting $\tau(s)=\int_{s_1}^s \lambda(\sigma)\mathrm{d} \sigma$
and defining the inverse $\hat s$ via $s=\hat{s}(\tau)$
we find $\hat s'(\tau)>0$ and
$\hat s \in W^{1,1}(0,\tau(s_2))$. Moreover, the function $\hat u:
\tau\mapsto \mathsf{u}(\hat s(\tau))$ is a solution of the gradient flow
\begin{equation}
\label{eq:GS-u-resc}
0 = \mathbb{V}_u(\hat u(\tau),\mathsf{z}(s_1)) \hat u{}'(\tau) +
\mathrm{D}_u\cE(\mathsf{t}(s_1),\hat u(\tau), \mathsf{z}(s_1)).
\end{equation}
Furthermore, we see that $s\mapsto \cE(\mathsf{t}(s_1),\mathsf{u}(s),\mathsf{z}(s_1))$ is
strictly decreasing on $[s_1,s_2]$, since its time derivative is given
by $-\langle \mathsf{u}'(s),\mathbb{V}_u\mathsf{u}'(s)\rangle/\lambda(s)$ which is
negative a.e.
\underline{Step 2:} Since $\mathfrak S$ is closed, its complement is an
at most countable disjoint union of intervals of the form $(s_1,S]$,
$(s_2,s_3)$, $[0,s_4)$, or $[0,S]$, which are maximal in the sense that
they cannot be extended without meeting $\mathfrak{S}$.
Thus, for the ``open'' endpoints $s_j$ this means $s_j \in \mathfrak S$.
In the first two cases this means $\mathsf{u}(s_j)=M(\mathsf{t}(s_j),\mathsf{z}(s_j))$,
i.e.\ we start a gradient flow with initial condition in the global
minimizer. Hence, the solution stays constant for all future times,
i.e.\ $\mathsf{u}(s)=\mathsf{u}(s_{1,2})$ for $s\in (s_1,S]$ or
$(s_2,s_3)$, respectively. But this contradicts the fact that
$s\mapsto \cE(\mathsf{t}(s_j),\mathsf{u}(s),\mathsf{z}(s_j))$ is strictly decreasing
(cf.\ Step 1). Hence, the first two cases cannot occur, and we
conclude $\mathfrak S=[s_*,S] $ with $s_*=s_4$ or $\mathfrak
S=\emptyset$. In particular, assertion (a) is established.
\underline{Step 3:} To show (b) assume $s\in \mathfrak S =[s_*,S]$,
then $\mathsf{u}(s)= M(\mathsf{t}(s),\mathsf{z}(s))$ by the definition of $\mathfrak S$.
Observe that $\mathrm{D}_z \mathcal{I}(t,z)= \mathrm{D}_z\cE(t,M(t,z),z)+ \mathrm{D}_z
M(t,z)^T\mathrm{D}_u\cE(t,M(t,z), z)= \mathrm{D}_z\cE(t,M(t,z),z)+0$. Thus,
$(\mathsf{t},\mathsf{z})$ solves
\[
\text{(ii)'}\quad
0\in (1 {-}\thn z)\partial\mathcal{R}_0(\mathsf{z},\mathsf{z}')+ \thn z
\mathbb{V}_\mathsf{z} \mathsf{z}' + (1{-}\thn z) \mathrm{D}_z \mathcal{I}(\mathsf{t},\mathsf{z}),\qquad
\text{(iv)'} \quad \mathsf{t}'\thn z
=0, \qquad \text{(vi)'} \quad \mathsf{t}'+|\mathsf{z}'|>0,
\]
which proves that $(\mathsf{t},\mathsf{z})$ is a $\mathrm{pBV}$ solution of the reduced
system. For the latter relation note that $\mathsf{t}'(s)+|\mathsf{z}'(s)|=0$
implies $\mathsf{u}'(s)=\frac{\mathrm{d}}{\mathrm{d} s} M(\mathsf{t}(s),\mathsf{z}(s))=0$ so that
(vi)' follows from (vi).
\end{proof}
Our approach in Step 1 of the above proof uses the qualitative ideas from
\cite{Zani07SPFD,AgRoSa14?TCG}, but our reduction to the simpler
convex case makes the analysis much easier.
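For the quadratic energy of Section \ref{ss:6.1} below, $\ene t{u,z}=\frac12(u-z)^2+ \frac12 z^2 -t u$, the reduction in Proposition \ref{prop:alex} is explicit:
\[
M(t,z) = z + t, \qquad \mathcal{I}(t,z) = \ene t{M(t,z),z} = \tfrac12 z^2 - tz - \tfrac12 t^2,
\]
so that $\mathrm{D}_z \mathcal{I}(t,z) = z - t$, which indeed equals $\mathrm{D}_z \ene t{u,z} = 2z - u$ evaluated at $u = M(t,z)$.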
\paragraph{\bf \underline{Case $\alpha=1$:} comparable relaxation
times.} Here $u$ and $z$ relax at the same rate.
At a jump, the system may switch to the viscous regime $\rgm VuVz$,
where \emph{both} in the evolution of $u$, and in the evolution for
$z$, viscous dissipation intervenes, modulated by the same coefficient
$\theta = \thn u= \thn z$.
\medskip
\paragraph{\bf \underline{Case $\alpha\in (0,1)$:} fast relaxation of
$z$.} Here $z$ relaxes faster than $u$, and jumps in the
$z$-component are faster than jumps in the $u$-component. If $z$
jumps (possibly governed by viscous dissipation), then $u$ stays
fixed, i.e.\ $u$ is blocked while $z$ moves viscously (regime
$\rgm BuVz$). But then $u$ still has to relax to equilibrium, and
it will do so on a faster scale than the rate-independent motion
of $z$, if $z$ stays in locally stable states (regime $\rgm
VuRz$). Finally, full rate-independent behavior in the regime
$\rgm EuRz$ will occur, where $\mathsf{t}'(s)>0$. Unlike in the case
$\alpha>1$, all three regimes may occur more than once
in the evolution of the system, see Section \ref{ss:6.2} for an
example.
\section{Examples}
\label{s:5}
To illustrate the difference between the three limit models (namely for $\alpha>1$, $\alpha=1$, and $\alpha \in (0,1)$),
we discuss two examples. The first one treats a quadratic
energy and emphasizes the different initial behavior before the
solution converges to a truly rate-independent regime. In the second
example we show that solutions that start in a rate-independent
regime and coincide for the three different limit models may separate
if viscous jumps start, leading to different rate-independent behavior
afterwards.
\subsection{Initial relaxation for a system with quadratic energy}
\label{ss:6.1}
We consider the energy functional $\ene t{u,z}=\frac12(u-z)^2+ \frac12 z^2 -t u$
and trivial viscous energies leading to the ODE system
\begin{align}
\label{eq:Exa1}
\left\{\begin{array}{ll}
0 = \varepsilon^\alpha \dot u + u - z - t, \\
0\in \mathop{\mathrm{Sign}}(\dot z) + \varepsilon \dot z + 2 z - u
\end{array} \right. \qquad \text{with } (u(0),z(0))=(2,-3/2).
\end{align}
We show simulations for the three cases \textcolor{blue}{$\alpha=2$
(blue)}, \textcolor{green}{$\alpha=1$ (green)}, and
\textcolor{red}{$\alpha=1/2$ (red)} with sufficiently small $\varepsilon$
(typically $0.001 \ldots 0.03$). The components $u$ and $z$ as
functions of time are depicted in Figure \ref{F1:uz}.
However, to detect different jump behavior at $t\approx 0$ it
is advantageous to look at the parameterized solutions, which are
depicted in Figure \ref{F3:Para}, showing
$(\mathsf{t},\mathsf{q})$ for the three different cases. The parameterization was
calculated using $\dot s(t)=\max\{0.5,|\dot u(t)|,|\dot z(t)| \}$.
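The qualitative picture can be reproduced with a few lines of code. The following is a minimal sketch (not the scheme used for the figures; the function name and step count are illustrative): it integrates \eqref{eq:Exa1} by operator splitting, where each substep, the relaxation of $u$ and the $\mathop{\mathrm{Sign}}$ inclusion for $z$, is solved exactly with the other variable frozen, so the stiffness for small $\varepsilon$ causes no instability.

```python
import math

def solve_exa1(alpha, eps=1e-3, T=2.0, n=200_000):
    """Splitting sketch for the viscous system (eq:Exa1):
         eps^alpha u' = -(u - z - t),
         0 in Sign(z') + eps z' + 2z - u,
       with (u(0), z(0)) = (2, -3/2)."""
    dt = T / n
    u, z = 2.0, -1.5
    for k in range(n):
        t = k * dt
        # exact exponential step: u relaxes toward its equilibrium z + t
        u = z + t + (u - z - t) * math.exp(-dt / eps**alpha)
        # Sign inclusion: z sticks while |u - 2z| <= 1; otherwise it
        # relaxes toward (u - sign(u - 2z))/2 at the viscous rate 2/eps
        g = u - 2.0 * z
        if abs(g) > 1.0:
            s = 1.0 if g > 0 else -1.0
            z_star = (u - s) / 2.0
            z = z_star + (z - z_star) * math.exp(-2.0 * dt / eps)
    return u, z
```

For large $t$ all three values of $\alpha$ approach the rate-independent solution $(u(t),z(t))=(2t{-}1,t{-}1)$, up to a tracking lag of order $\varepsilon^{\alpha}$ in $u$; the initial jump regimes differ as described below.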
\begin{figure}
\centering
\includegraphics[width=16em]{Plot_u}
{\unitlength1em
\begin{picture}(0,0)
\put(-15,9){$u(t)$}
\put(-1,3.8){$t$}
\end{picture}}%
\qquad
\includegraphics[width=16em]{Plot_z}
{\unitlength1em
\begin{picture}(0,0)
\put(-15,9){$z(t)$}
\put(-1,4.5){$t$}
\end{picture}}
\caption{Solutions for \eqref{eq:Exa1}
for the three cases \textcolor{blue}{$\alpha=2$
(blue)}, \textcolor{green}{$\alpha=1$ (green)}, and
\textcolor{red}{$\alpha=1/2$ (red)}.}
\label{F1:uz}
\end{figure}
\begin{figure}
\centering {\unitlength1cm
\includegraphics[width=5\unitlength]{Plot_a20}%
\begin{picture}(0,0)(5,0)
\put(-0.3,2.2){$\mathsf{u}$} \put(-0.3,1){$\mathsf{t}$} \put(-0.3,0){$\mathsf{z}$}
\put(2.07,0){\line(0,1){2.7}} \put(2.45,0){\line(0,1){2.7}}
\color{blue}\put(0.7,2.5){$\rgm VuBz$}
\put(1.85,2.8){$\rgm EuVz$} \put(3,2.5){$\rgm EuRz$}
\end{picture}
\quad
\includegraphics[width=5\unitlength]{Plot_a10}%
\begin{picture}(0,0)(5,0)
\put(-0.2,3.1){$\mathsf{u}$} \put(-0.2,1.3){$\mathsf{t}$} \put(-0.2,0){$\mathsf{z}$}
\put(2.1,0){\line(0,1){3.7}}
\color{green}\put(0.7,3.3){$\rgm VuVz$}
\put(2.6,3.3){$\rgm EuRz$}
\end{picture}
\quad
\includegraphics[width=5\unitlength]{Plot_a05}%
\begin{picture}(0,0)(5,0)
\put(-0.2,3){$\mathsf{u}$} \put(-0.2,1.3){$\mathsf{t}$} \put(-0.15,0){$\mathsf{z}$}
\put(1.8,0.5){\line(0,1){3.2}} \put(2.85,0.5){\line(0,1){3.2}}
\color{red}\put(0.7,3.5){$\rgm BuVz$}
\put(1.9,3.5){$\rgm VuRz$} \put(3.3,3.5){$\rgm EuRz$}
\end{picture}
}
\caption{Solutions $(\mathsf{t},\mathsf{u},\mathsf{z})$ for \eqref{eq:Exa1} with
dotted $\mathsf{t}$, full $\mathsf{u}$, and dashed $\mathsf{z}$: \textcolor{blue}{left
$\alpha=2$}, \textcolor{green}{middle $\alpha=1$},
\textcolor{red}{right $\alpha=1/2$}.}
\label{F3:Para}
\end{figure}
In the parameterized form we fully see the structure of the jump
for $t\approx 0$. For \underline{$\alpha=2$} we
first obtain a jump from the initial datum $(u,z)=(2,-1.5)$ to
$(u,z)=(-1.5,-1.5)$ on the timescale $\varepsilon^2$, which is the regime
$\rgm VuBz$. Then, $u$ is equilibrated, and a jump to $(-1,-1)$ along
the diagonal $u=z$ occurs on the timescale $\varepsilon$, which is the regime
$\rgm EuVz$. Finally, the
solution finds the rate-independent regime $\rgm EuRz$ with
$(u(t),z(t))=q_\mathrm{ri}(t):=(2t{-}1,t{-}1)$.
For \underline{$\alpha=1/2$} the
solution first jumps to $(2,0.5)$ on the time scale $\varepsilon$, which is
the regime $\rgm BuVz$. Next, there is a jump to
$(0.5,0.5)$ on the time scale $\varepsilon^{1/2}$, which is the regime $\rgm
VuRz$. Then, the rate-independent regime $\rgm EuRz$ starts, namely
via $(u(t),z(t))=(t{-}0.5, 0.5)$ for $t\in {]0,1.5]}$ and
$q_\mathrm{ri}$ for $t>1.5$.
The behavior for \underline{$\alpha=1$} is
intermediate: the jump occurs along a nonlinear curve in regime
$\rgm VuVz$, and
$q_\mathrm{ri}$ is joined for $t\geq t_*\approx 0.7$, which is regime
$\rgm EuRz$.
The different behavior and the different regimes are
also nicely seen by plotting the trajectories in the $(u,z)$-plane,
see Figure \ref{F2:uzPlane}, where the three different cases for
$\alpha$ are depicted again.
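As a quick consistency check (ours, in exact rational arithmetic), one can verify that $q_\mathrm{ri}(t)=(2t{-}1,t{-}1)$ solves the $\varepsilon=0$ limit of \eqref{eq:Exa1}: the $u$-equation reduces to $u-z-t=0$, and since $\dot z = 1 > 0$ the $z$-inclusion reduces to $1 + 2z - u = 0$:

```python
from fractions import Fraction as F

# q_ri(t) = (2t-1, t-1) against the eps = 0 limit of the system:
#   0 = u - z - t             (u-equation, viscosity dropped)
#   0 = 1 + 2z - u            (z-inclusion, since Sign(1) = {1})
for k in range(11):
    t = F(k, 10)
    u, z = 2*t - 1, t - 1
    assert u - z - t == 0
    assert 1 + 2*z - u == 0
```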
\begin{figure}
{\unitlength1cm
\centering \includegraphics[width=10cm]{Plot_uzPlane}
\begin{picture}(0,0)(4.1,-4.4)
\put(4.4,0.2){$u$} \put(-0.3,5.2){$z$}
\put(-5,0){\line(1,0){10}}
\put(0,-4){\line(0,1){9.5}}
{\color{blue}\put(-4.5,2){$\rgm VuBz$}
\put(-5,-2){\vector(2,-1){2}}
\put(-5.5,-1.9){$\rgm VuBz$}
\put(-2.8,-1.3){$\rgm EuRz$}}
{\color{green}\put(-2.5,3.3){$\rgm VuVz$}
\put(-1.2,2){$\rgm EuRz$}}
{\color{red}\put(-2,4.53){$\rgm BuVz$}
\put(0.62,2){$\rgm VuRz$}}
\put(1.4,5.1){$\rgm EuRz$}
\end{picture}}
\caption{{Solutions $(z(t),u(t))$ for \eqref{eq:Exa1}. The dotted line
is the diagonal $u=z$, while the yellow area is the locally
stable region $|2z{-}u|\leq 1$.} }
\label{F2:uzPlane}
\end{figure}
\subsection{Different jumps starting from the rate-independent regime}
\label{ss:6.2}
Finally we provide an example where the jumps start out of a
rate-independent motion, i.e.\ we first have the regime $\rgm
EuRz$, and then the system becomes unstable and develops a jump. For
this purpose we use the nonconvex energy
\begin{align*}
&\ene t{u,z}=\frac12 (u{-}g(z))^2 + F(z) - tu \quad \text{with }
g(z)=4z^3-4z\\
&\text{ and } F'(z)=-1 + (z{+}1)^2\big({-}40+10(z{+}1)^2+
38 \mathrm{e}^{-10(z+0.5)^2} \big).
\end{align*}
Using the standard viscous potentials as above, the ODE system
reads
\begin{align}
\label{eq:Exa2}
\left\{
\begin{array}{ll}
0 = \varepsilon^\alpha \dot u + u - g(z) - t,\\
0\in \mathop{\mathrm{Sign}}(\dot z) + \varepsilon \dot z + F'(z) +g'(z)(g(z){-}u)
\end{array}
\right. \quad \text{with }
(u(-0.2),z(-0.2))= (-2.4,-1.2).
\end{align}
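For reproducibility, the nonlinearities of \eqref{eq:Exa2} can be transcribed directly (a plain transcription of the formulas above; the helper names are ours):

```python
import math

def g(z):
    """Coupling term g(z) = 4z^3 - 4z."""
    return 4*z**3 - 4*z

def gp(z):
    """Derivative g'(z) = 12z^2 - 4."""
    return 12*z**2 - 4

def Fp(z):
    """F'(z) as given in the text."""
    return -1 + (z + 1)**2 * (-40 + 10*(z + 1)**2
                              + 38*math.exp(-10*(z + 0.5)**2))

def stability_residual(u, z):
    """Locally stable iff |F'(z) + g'(z)(g(z) - u)| <= 1."""
    return Fp(z) + gp(z)*(g(z) - u)
```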
\begin{figure}
\centering \includegraphics[width=16em]{NLPlot_u} \qquad
\includegraphics[width=16em]{NLPlot_z}
\caption{Solutions for \eqref{eq:Exa2}: left $u(t)$, right $z(t)$.}
\label{F4:uz}
\end{figure}
Figure \ref{F4:uz} shows simulation results of $u(t)$ and $z(t)$ for
the three cases \textcolor{blue}{$\alpha=2$
(blue)}, \textcolor{green}{$\alpha=1$ (green)}, and
\textcolor{red}{$\alpha=1/2$ (red)} with sufficiently small $\varepsilon$. We see
that the solutions stay together for $t\in [-0.2,-0.1]$, which is
exactly the time they stay in regime $\rgm EuRz$. Then, in all three
cases a jump develops, but this is quite different for different
$\alpha$. In Figure \ref{F6:Para} we provide graphics of the same
solutions, but now in the reparameterized form $(\mathsf{t},\mathsf{u},\mathsf{z})$ for
the three $\alpha$-values $2,\ 1$, and $1/2$, where again the
parameterization $s$ is chosen such that $\dot
s(t)=\max\{0.5,|\dot u(t)|,|\dot z(t)| \}$. However, for this
example numerical instabilities prevented us from taking $\varepsilon$ small
enough to obtain a better separation of time scales. Even in the
viscous regimes we still see $\mathsf{t}'>0$, though it is small. Nevertheless,
Figure \ref{F6:Para} clearly shows the different regimes.
\begin{figure}
\centering
{\unitlength1cm
\includegraphics[width=5\unitlength]{NLPlot_a20}%
\begin{picture}(0,0)(5,0)
\put(-0.2,0){$\mathsf{u}$} \put(-0.2,1.2){$\mathsf{z}$} \put(-0.2,2.1){$\mathsf{t}$}
\put(0.5,0){\line(0,1){4}} \put(3.99,1){\line(0,1){4}}
\color{blue} \put(-0.1,4){$\rgm EuRz$} \put(1.7,4){$\rgm EuVz$}
\put(4.3,3.64){$\rgm EuRz$}
\end{picture}
\quad
\includegraphics[width=5\unitlength]{NLPlot_a10}%
\begin{picture}(0,0)(5,0)
\put(-0.2,0){$\mathsf{u}$} \put(-0.2,1.2){$\mathsf{z}$} \put(-0.2,2.1){$\mathsf{t}$}
\put(0.5,0){\line(0,1){4}} \put(3.9,1){\line(0,1){4}}
\color{green} \put(-0.1,4){$\rgm EuRz$} \put(1.7,4){$\rgm VuVz$}
\put(4.2,3.64){$\rgm EuRz$}
\end{picture}
\quad
\includegraphics[width=5\unitlength]{NLPlot_a05}%
\begin{picture}(0,0)(5,0)
\put(-0.2,0){$\mathsf{u}$} \put(-0.2,0.8){$\mathsf{z}$} \put(-0.2,1.5){$\mathsf{t}$}
\put(0.5,0){\line(0,1){3}} \put(2.2,0){\line(0,1){3}}
\put(3.3,0){\line(0,1){2.9}} \put(3.5,0){\line(0,1){2.9}}
\color{red} \put(-0.1,3.5){$\rgm EuRz$} \put(1,3){$\rgm VuRz$}
\put(2.35,3.5){$\rgm BuVz$} \put(3,3){$\rgm VuRz$} \put(3.8,3.5){$\rgm EuRz$}
\end{picture}
}
\caption{Solutions $(\mathsf{t},\mathsf{u},\mathsf{z})$ for \eqref{eq:Exa2} with
dotted $\mathsf{t}$, full $\mathsf{u}$, and dashed $\mathsf{z}$: \textcolor{blue}{left
$\alpha=2$}, \textcolor{green}{middle $\alpha=1$},
\textcolor{red}{right $\alpha=1/2$}.}
\label{F6:Para}
\end{figure}
Figure \ref{F5:uzPlane} shows the trajectories in
the $(z,u)$-plane.
\begin{figure}
\centering \includegraphics[width=26em]{NLPlot_uzPlane}
\caption{Solutions $(z(t),u(t))$ for \eqref{eq:Exa2}. The dashed magenta line
is $u=g(z)$, while the black curves display the boundaries of the
locally stable domain $|F'(z) +g'(z)(g(z){-}u)| \leq 1$. }
\label{F5:uzPlane}
\end{figure}
\bibliographystyle{my_alpha}
\section{Introduction} \label{Dwreg1}
Let $\Omega \subset \mathds{R}^d$ be an open bounded set with boundary~$\Gamma$.
Throughout this paper we assume that $d \geq 2$.
The classical Dirichlet problem is to find for each $\varphi \in C(\Gamma)$
a function $u \in C(\overline \Omega)$
such that $u|_\Gamma = \varphi$ and $\Delta u = 0$ as a distribution on $\Omega$.
The set $\Omega$ is called {\bf Wiener regular} if for every
$\varphi \in C(\Gamma)$ there exists a unique $u \in C(\overline \Omega)$
such that $u|_\Gamma = \varphi$ and $\Delta u = 0$ as a distribution on $\Omega$.
The Dirichlet problem has been extended naturally to more general
second-order operators.
For all $k,l \in \{ 1,\ldots,d \} $ let $a_{kl} \colon \Omega \to \mathds{R}$
be a bounded measurable function and suppose that there exists a
$\mu > 0$ such that
\begin{equation}
\mathop{\rm Re} \sum_{k,l=1}^d a_{kl}(x) \, \xi_k \, \overline{\xi_l}
\geq \mu \, |\xi|^2
\label{eSwreg1;2}
\end{equation}
for all $x \in \Omega$ and $\xi \in \mathds{C}^d$.
Further, for all $k \in \{ 1,\ldots,d \} $ let $b_k,c_k,c_0 \colon \Omega \to \mathds{C}$
be bounded and measurable.
Define the map ${\cal A} \colon H^1_{\rm loc}(\Omega) \to {\cal D}'(\Omega)$ by
\[
\langle {\cal A} u,v \rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}
= \sum_{k,l=1}^d \int_\Omega a_{kl} \, (\partial_k u) \, \overline{\partial_l v}
+ \sum_{k=1}^d \int_\Omega b_k \, u \, \overline{\partial_k v}
+ \sum_{k=1}^d \int_\Omega c_k \, (\partial_k u) \, \overline v
+ \int_\Omega c_0 \, u \, \overline v
\]
for all $u \in H^1_{\rm loc}(\Omega)$ and $v \in C_c^\infty(\Omega)$.
Given $\varphi \in C(\Gamma)$, by a {\bf classical solution} of the
Dirichlet problem we understand a function
$u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$ satisfying ${\cal A} u = 0$
and $u|_\Gamma = \varphi$.
For the pure second-order case (that is $b_k = c_k = c_0 = 0$)
Littman--Stampacchia--Weinberger
\cite{LSW} proved that for all $\varphi \in C(\Gamma)$
there exists a unique classical solution~$u$.
Then Stampacchia \cite{Stam2} Th\'eor\`eme~10.2
added real-valued lower-order terms, under the condition
(see \cite{Stam2}, (9.2')) that there exists a
$\mu' > 0$
such that
\begin{equation}
\int_\Omega c_0 \, v + \sum_{k=1}^d \int_\Omega b_k \, \partial_k v
\geq \mu' \int_\Omega v
\label{eSwreg1;3.1}
\end{equation}
for all $v \in C_c^\infty(\Omega)^+$.
Gilbarg--Trudinger \cite{GT} Theorem~8.31 merely assume that
\begin{equation}
\int_\Omega c_0 \, v + \sum_{k=1}^d \int_\Omega b_k \, \partial_k v
\geq 0
\label{eSwreg1;3}
\end{equation}
for all $v \in C_c^\infty(\Omega)^+$ in order to obtain the same conclusion.
A consequence of these assumptions is a weak maximum principle,
which implies that $\|u\|_{C(\overline \Omega)} \leq \|\varphi\|_{C(\Gamma)}$
for all $u \in H^1_{\rm loc}(\Omega) \cap C(\overline \Omega)$ satisfying ${\cal A} u = 0$
and $u|_\Gamma = \varphi$.
We may consider (\ref{eSwreg1;3}) as a kind of submarkov condition since
it is equivalent to $-{\cal A} \mathds{1}_\Omega \leq 0$ in ${\cal D}'(\Omega)$.
The aim of this paper is to
show that the positivity condition (\ref{eSwreg1;3}) and the maximum principle
are not needed for the well-posedness of the Dirichlet problem.
In addition we allow the $b_k$ and $c_0$ to be complex valued.
In order to state the main results of this paper in a more
precise way we need a few definitions.
Define the form $\gothic{a} \colon H^1(\Omega) \times H^1(\Omega) \to \mathds{C}$ by
\begin{equation}
\gothic{a}(u,v)
= \sum_{k,l=1}^d \int_\Omega a_{kl} \, (\partial_k u) \, \overline{\partial_l v}
+ \sum_{k=1}^d \int_\Omega b_k \, u \, \overline{\partial_k v}
+ \sum_{k=1}^d \int_\Omega c_k \, (\partial_k u) \, \overline v
+ \int_\Omega c_0 \, u \, \overline v
.
\label{eSwreg1;4}
\end{equation}
Let $A^D$ be the operator in $L_2(\Omega)$ associated with the
form $\gothic{a}|_{H^1_0(\Omega) \times H^1_0(\Omega)}$.
In other words, $A^D$ is the realisation of the elliptic operator ${\cal A}$
in $L_2(\Omega)$ with Dirichlet boundary conditions.
This operator has a compact resolvent.
Moreover, if (\ref{eSwreg1;3}) is valid, then $\ker A^D = \{ 0 \} $ by \cite{GT}
Corollary~8.2.
Instead of (\ref{eSwreg1;3}) we assume the condition $\ker A^D = \{ 0 \} $,
which is equivalent to the uniqueness of the Dirichlet problem
(cf.\ Proposition~\ref{pwreg202.5} below).
The main result of this paper is the following well-posedness result
for the Dirichlet problem.
\begin{thm} \label{twreg101}
Let $\Omega \subset \mathds{R}^d$ be an open bounded Wiener regular set with $d \geq 2$.
For all $k,l \in \{ 1,\ldots,d \} $ let $a_{kl} \colon \Omega \to \mathds{R}$
be a bounded measurable function and suppose that there exists a
$\mu > 0$ such that
\[
\mathop{\rm Re} \sum_{k,l=1}^d a_{kl}(x) \, \xi_k \, \overline{\xi_l}
\geq \mu \, |\xi|^2
\]
for all $x \in \Omega$ and $\xi \in \mathds{C}^d$.
Further, for all $k \in \{ 1,\ldots,d \} $ let $b_k,c_0 \colon \Omega \to \mathds{C}$
and $c_k \colon \Omega \to \mathds{R}$
be bounded and measurable.
Let $A^D$ be as above.
Suppose $0 \not\in \sigma(A^D)$.
Then for all $\varphi \in C(\Gamma)$
there exists a unique $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
such that $u|_\Gamma = \varphi$ and ${\cal A} u = 0$.
Moreover, there exists a constant $c > 0$ such that
\[
\|u\|_{C(\overline \Omega)} \leq c \, \|\varphi\|_{C(\Gamma)}
\]
for all $\varphi \in C(\Gamma)$,
where $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
is such that $u|_\Gamma = \varphi$ and ${\cal A} u = 0$.
\end{thm}
Instead of the homogeneous equation ${\cal A} u = 0$ one can also consider
the inhomogeneous equation ${\cal A} u = f_0 + \sum_{k=1}^d \partial_k f_k$.
We shall do that in Theorem~\ref{twreg216}.
\smallskip
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
Define $P \colon C(\Gamma) \to C(\overline \Omega)$ by $P \varphi = u$,
where $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
is such that $u|_\Gamma = \varphi$ and ${\cal A} u = 0$.
Note that $P \varphi$ is the {\bf classical solution} of the Dirichlet problem.
If $\Omega$ has a Lipschitz boundary
(which implies Wiener regularity), then there is also a variational
solution of the Dirichlet problem, which we describe next.
Denote by ${\mathop{\rm Tr \,}} \colon H^1(\Omega) \to L_2(\Gamma)$ the trace operator.
Again let $a_{kl},b_k,c_k,c_0 \in L_\infty(\Omega)$ and
suppose that the ellipticity condition (\ref{eSwreg1;2}) is satisfied.
Further suppose that $0 \not\in \sigma(A^D)$.
Then for each $\varphi \in {\mathop{\rm Tr \,}} H^1(\Omega)$
there exists a unique $u \in H^1(\Omega)$, called the
{\bf variational solution}, such that
${\cal A} u = 0$ and ${\mathop{\rm Tr \,}} u = \varphi$ (cf.\ Lemma~\ref{lwreg202}).
Define $\gamma \colon {\mathop{\rm Tr \,}} H^1(\Omega) \to H^1(\Omega)$ by setting
$\gamma \varphi = u$.
The second result of this paper says that the variational
solution and the classical solution coincide, if both are defined.
\begin{thm} \label{twreg102}
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
Suppose that $\Omega$ has a Lipschitz boundary.
Let $\varphi \in C(\Gamma) \cap {\mathop{\rm Tr \,}} H^1(\Omega)$.
Then $P \varphi = \gamma \varphi$ almost everywhere on $\Omega$.
\end{thm}
The last main result of this paper concerns a parabolic equation.
Let $A_c$ denote the part of the operator $A^D$ in $C_0(\Omega)$.
So
\[
D(A_c) = \{ u \in D(A^D) \cap C_0(\Omega) : A^D u \in C_0(\Omega) \}
\]
and $A_c = A^D|_{D(A_c)}$.
\begin{thm} \label{twreg103}
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
Then $-A_c$ generates a holomorphic $C_0$-semigroup on $C_0(\Omega)$.
Moreover, $e^{-t A_c} u = e^{-t A^D} u$ for all $u \in C_0(\Omega)$
and $t > 0$.
\end{thm}
In Section~\ref{Swreg2} we prove Theorem~\ref{twreg101} via
an iteration argument.
Section~\ref{Swreg3new} is devoted to the comparison of the classical
and the variational solutions of the Dirichlet problem.
Theorem~\ref{twreg102} is proved there with the help of a deep
result of Dahlberg \cite{Dahlberg}.
We consider the semigroup on $C_0(\Omega)$ in Section~\ref{Swreg4new} and
prove Theorem~\ref{twreg103}.
\section{The Dirichlet problem} \label{Swreg2}
In this section we prove Theorem~\ref{twreg101} on the well-posedness
of the Dirichlet problem.
The technique is a reduction to the Stampacchia result mentioned in the
introduction.
For this reason we introduce the following two forms and operators.
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
For all $\lambda \in \mathds{R}$ define the forms
$\gothic{a}_\lambda, \gothic{b}_\lambda \colon H^1(\Omega) \times H^1(\Omega) \to \mathds{C}$
by
\begin{eqnarray*}
\gothic{a}_\lambda(u,v)
& = & \gothic{a}(u,v) + \lambda \, (u,v)_{L_2(\Omega)} \quad \mbox{and} \\
\gothic{b}_\lambda(u,v)
& = & \sum_{k,l=1}^d \int_\Omega a_{kl} \, (\partial_k u) \, \overline{\partial_l v}
+ \sum_{k=1}^d \int_\Omega c_k \, (\partial_k u) \, \overline v
+ \lambda \int_\Omega u \, \overline v
,
\end{eqnarray*}
where $\gothic{a}$ is as in (\ref{eSwreg1;4}).
Define similarly ${\cal A}_\lambda,{\cal B}_\lambda \colon H^1_{\rm loc}(\Omega) \to {\cal D}'(\Omega)$
and let $B^D$ be the operator associated with the sesquilinear form
$\gothic{b}_0|_{H^1_0(\Omega) \times H^1_0(\Omega)}$.
It follows from ellipticity that there exists a $\lambda_0 > 0$
such that
\[
\frac{\mu}{2} \, \|v\|_{H^1(\Omega)}^2
\leq \mathop{\rm Re} \gothic{a}_{\lambda_0}(v)
\quad \mbox{and} \quad
\frac{\mu}{2} \, \|v\|_{H^1(\Omega)}^2
\leq \mathop{\rm Re} \gothic{b}_{\lambda_0}(v)
\]
for all $v \in H^1(\Omega)$.
Note that ${\cal B}_\lambda$ satisfies the submarkovian condition
$- {\cal B}_\lambda \mathds{1}_\Omega \leq 0$, that is (\ref{eSwreg1;3}),
and even Stampacchia's condition (\ref{eSwreg1;3.1})
for all $\lambda > 0$.
So we can and will apply Stampacchia's result (in the proof of
Lemma~\ref{lwreg210}).
We first investigate the operator $A^D$ in $L_2(\Omega)$.
Note that $f_0 + \sum_{k=1}^d \partial_k f_k \in {\cal D}'(\Omega)$
for all $f_0,f_1,\ldots,f_d \in L_1(\Omega)$.
The next lemma is also valid if the $a_{kl}$ and $c_k$ are complex valued.
\begin{lemma} \label{lwreg202}
Let $f_1,\ldots,f_d \in L_2(\Omega)$.
Let $\tilde p \in (1,\infty)$ be such that
$\tilde p \geq \frac{2d}{d+2}$.
Further let $f_0 \in L_{\tilde p}(\Omega)$.
Then there exists a unique $u \in H^1_0(\Omega)$ such that
${\cal A} u = f_0 + \sum_{k=1}^d \partial_k f_k$.
\end{lemma}
\begin{proof}
There exists a unique $T \in {\cal L}(H^1_0(\Omega))$ such that
$(T u,v)_{H^1_0(\Omega)} = \gothic{a}(u,v)$
for all $u,v \in H^1_0(\Omega)$.
Then $T$ is injective because $\ker A^D = \{ 0 \} $.
Moreover, the inclusion $H^1_0(\Omega) \hookrightarrow L_2(\Omega)$ is compact.
Hence the operator $T$ is invertible by the Fredholm--Lax--Milgram lemma,
\cite{AEKS} Lemma~4.1.
Clearly $v \mapsto \sum_{k=1}^d (f_k, \partial_k v)_{L_2(\Omega)}$
is continuous from $H^1_0(\Omega)$ into $\mathds{C}$.
Define $F \colon C_c^\infty(\Omega) \to \mathds{C}$ by
$F(v) = \langle f_0,v \rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}$.
We claim that $F$
extends to a continuous function from $H^1_0(\Omega)$ into $\mathds{C}$.
If $d \geq 3$, then $H^1_0(\Omega) \subset L_r(\Omega)$, where
$r = \frac{2d}{d-2}$.
So $H^1_0(\Omega) \subset L_q(\Omega)$, where $q$ is the dual
exponent of $\tilde p$.
The last inclusion is also valid if $d = 2$.
So in any case the map $F$ extends to a continuous function from
$H^1_0(\Omega)$ into $\mathds{C}$.
Then the lemma follows.
\end{proof}
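The exponent bookkeeping in this proof can be checked mechanically (our check, not part of the argument): for $d \geq 3$ the H\"older conjugate of $\tilde p = \frac{2d}{d+2}$ is exactly the Sobolev exponent $r = \frac{2d}{d-2}$, so $H^1_0(\Omega) \subset L_r(\Omega) \subset L_q(\Omega)$ for any dual exponent $q$ of an admissible $\tilde p$:

```python
from fractions import Fraction as F

for d in range(3, 20):
    p_tilde = F(2*d, d + 2)          # the borderline exponent 2d/(d+2)
    q = p_tilde / (p_tilde - 1)      # Hoelder conjugate: 1/p + 1/q = 1
    assert q == F(2*d, d - 2)        # = r, the Sobolev exponent for H^1_0
```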
The next lemma is valid for a general bounded open set $\Omega$
and does not use the condition $0 \not\in \sigma(A^D)$.
It is an extension of \cite{ABenilan} Lemma~4.2.
\begin{lemma} \label{lwreg205}
Let $u \in C_0(\Omega) \cap H^1_{\rm loc}(\Omega)$ and
$f_1,\ldots,f_d \in L_2(\Omega)$.
Let $\tilde p \in (1,\infty)$ be such that
$\tilde p \geq \frac{2d}{d+2}$.
Further let $f_0 \in L_{\tilde p}(\Omega)$.
Suppose that ${\cal A} u = f_0 + \sum_{k=1}^d \partial_k f_k$.
Then $u \in H^1_0(\Omega)$.
\end{lemma}
\begin{proof}
As at the end of the previous proof, there exists an $M_0 > 0$ such that
$|\int_\Omega f_0 \, \overline v| \leq M_0 \, \|v\|_{H^1(\Omega)}$ for all
$v \in H^1_0(\Omega)$.
Set $M = M_0 + \sum_{k=1}^d \|f_k\|_2$.
Let $\varepsilon > 0$.
Set $v_\varepsilon = (\mathop{\rm Re} u-\varepsilon)^+$.
Then $\mathop{\rm supp} v_\varepsilon \subset \Omega$ is compact.
Hence there exists an open $\Omega_1 \subset \mathds{R}^d$ such that
$\mathop{\rm supp} v_\varepsilon
\subset \Omega_1 \subset \overline{\Omega_1} \subset \Omega$.
Then $v_\varepsilon \in H^1_0(\Omega_1)$.
Moreover,
\begin{eqnarray}
\lefteqn{
\sum_{k,l=1}^d \int_{\Omega_1} a_{kl} \, (\partial_k u) \, \overline{\partial_l v}
+ \sum_{k=1}^d \int_{\Omega_1} b_k \, u \, \overline{\partial_k v}
+ \sum_{k=1}^d \int_{\Omega_1} c_k \, (\partial_k u) \, \overline v
+ \int_{\Omega_1} c_0 \, u \, \overline v
} \hspace*{90mm} \nonumber \\*
& = & \int_{\Omega_1} f_0 \, \overline v
+ \sum_{k=1}^d \int_{\Omega_1} f_k \, \overline{\partial_k v}
\label{elwreg205;1}
\end{eqnarray}
for all $v \in C_c^\infty(\Omega_1)$.
Since $u|_{\Omega_1} \in H^1(\Omega_1)$ it follows that (\ref{elwreg205;1})
is valid for all $v \in H^1_0(\Omega_1)$.
Choosing $v = v_\varepsilon$ gives
\begin{eqnarray*}
\lefteqn{
\Big| \sum_{k,l=1}^d \int_\Omega a_{kl} \, (\partial_k u) \, \partial_l v_\varepsilon
+ \sum_{k=1}^d \int_\Omega b_k \, u \, \partial_k v_\varepsilon
+ \sum_{k=1}^d \int_\Omega c_k \, (\partial_k u) \, v_\varepsilon
+ \int_\Omega c_0 \, u \, v_\varepsilon \Big|
} \hspace*{60mm} \\*
& \leq & M_0 \, \|v_\varepsilon\|_{H^1(\Omega)}
+ \sum_{k=1}^d \|f_k\|_2 \, \|\partial_k v_\varepsilon\|_2
\leq M \, \|v_\varepsilon\|_{H^1(\Omega)}
.
\end{eqnarray*}
On the other hand,
$\partial_k v_\varepsilon
= \partial_k ((\mathop{\rm Re} u-\varepsilon)^+)
= \mathds{1}_{[\mathop{\rm Re} u > \varepsilon]} \, \partial_k \mathop{\rm Re} u$
for all $k \in \{ 1,\ldots,d \} $ by \cite{GT} Lemma~7.6.
Therefore
\begin{eqnarray*}
\lefteqn{
\mathop{\rm Re} \sum_{k,l=1}^d \int_\Omega a_{kl} \, (\partial_k u) \, \partial_l v_\varepsilon
+ \mathop{\rm Re} \sum_{k=1}^d \int_\Omega b_k \, u \, \partial_k v_\varepsilon
+ \mathop{\rm Re} \sum_{k=1}^d \int_\Omega c_k \, (\partial_k u) \, v_\varepsilon
+ \mathop{\rm Re} \int_\Omega c_0 \, u \, v_\varepsilon
} \hspace*{10mm} \\*
& = &
\sum_{k,l=1}^d \int_\Omega a_{kl} \, (\partial_k v_\varepsilon) \, \partial_l v_\varepsilon
+ \mathop{\rm Re} \sum_{k=1}^d \int_\Omega b_k \, u \, \partial_k v_\varepsilon
+ \sum_{k=1}^d \int_\Omega c_k \, (\partial_k \mathop{\rm Re} u) \, v_\varepsilon
+ \mathop{\rm Re} \int_\Omega c_0 \, u \, v_\varepsilon \\
& = & \mathop{\rm Re} \gothic{a}(v_\varepsilon)
+ \varepsilon \sum_{k=1}^d \int_\Omega (\mathop{\rm Re} b_k) \, \partial_k v_\varepsilon
- \sum_{k=1}^d \int_\Omega (\mathop{\rm Im} b_k) \, (\mathop{\rm Im} u) \, \partial_k v_\varepsilon
\\*
& & {} \hspace*{30mm}
+ \varepsilon \int_\Omega (\mathop{\rm Re} c_0) \, v_\varepsilon
- \int_\Omega (\mathop{\rm Im} c_0) \, (\mathop{\rm Im} u) \, v_\varepsilon \\
& \geq & \frac{\mu}{2} \, \|v_\varepsilon\|_{H^1(\Omega)}^2
- \lambda_0 \, \|v_\varepsilon\|_2^2
- \varepsilon \, M' \, |\Omega|^{1/2} \, \|v_\varepsilon\|_{H^1(\Omega)}
- M' \, \|u\|_2 \, \|v_\varepsilon\|_{H^1(\Omega)}
,
\end{eqnarray*}
where $M' = \|c_0\|_\infty + \sum_{k=1}^d \|b_k\|_\infty$.
Since
$\|v_\varepsilon\|_2
= \|(\mathop{\rm Re} u-\varepsilon)^+\|_2
\leq \|u\|_2
\leq |\Omega|^{1/2} \, \|u\|_{C_0(\Omega)}$,
it follows that
\[
\frac{\mu}{2} \, \|(\mathop{\rm Re} u-\varepsilon)^+\|_{H^1(\Omega)}^2
\leq M'' \, \|(\mathop{\rm Re} u-\varepsilon)^+\|_{H^1(\Omega)}
+ \lambda_0 \, |\Omega| \, \|u\|_{C_0(\Omega)}^2
\]
for all $\varepsilon \in (0,1]$, where
$M'' = M + M' \, |\Omega|^{1/2} \, (\|u\|_{C_0(\Omega)} + 1)$.
Therefore the sequence $((\mathop{\rm Re} u - 2^{-n})^+)_{n \in \mathds{N}_0}$ is bounded in
$H^1_0(\Omega)$.
Passing to a subsequence if necessary, we may assume without loss of generality
that there exists a $w \in H^1_0(\Omega)$ such that
$\lim (\mathop{\rm Re} u - 2^{-n})^+ = w$ weakly in $H^1_0(\Omega)$.
Then $\lim (\mathop{\rm Re} u - 2^{-n})^+ = w$ in $L_2(\Omega)$.
But $\lim (\mathop{\rm Re} u - 2^{-n})^+ = (\mathop{\rm Re} u)^+$ in $L_2(\Omega)$.
So $(\mathop{\rm Re} u)^+ = w \in H^1_0(\Omega)$.
Similarly $(\mathop{\rm Re} u)^-, (\mathop{\rm Im} u)^+, (\mathop{\rm Im} u)^- \in H^1_0(\Omega)$.
So $u \in H^1_0(\Omega)$.
\end{proof}
Lemma~\ref{lwreg205} together with the condition $0 \not\in \sigma(A^D)$
gives the uniqueness in Theorem~\ref{twreg101}.
\begin{prop} \label{pwreg202.5}
For all $\varphi \in C(\Gamma)$ there exists at most one
$u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
such that $u|_\Gamma = \varphi$ and ${\cal A} u = 0$.
\end{prop}
\begin{proof}
Let $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
and suppose that $u|_\Gamma = 0$ and ${\cal A} u = 0$.
Then $u \in C_0(\Omega)$.
Hence $u \in H^1_0(\Omega)$ by Lemma~\ref{lwreg205}.
Also ${\cal A} u = 0$.
Therefore $u \in D(A^D)$ and $A^D u = 0$.
But $0 \not\in \sigma(A^D)$.
So $u = 0$.
\end{proof}
In the next proposition we use that $\Omega$ is Wiener regular.
\begin{prop} \label{pwreg203}
Let $\lambda > \lambda_0$ and $p \in (d,\infty]$.
Let $f_0 \in L_{p/2}(\Omega)$ and $f_1,\ldots,f_d \in L_p(\Omega)$.
Then there exists a unique $u \in H^1_0(\Omega) \cap C_0(\Omega)$
such that ${\cal B}_\lambda u = f_0 + \sum_{k=1}^d \partial_k f_k$.
\end{prop}
\begin{proof}
Since $a_{kl}$ and $c_k$ are real valued for all $k,l \in \{ 1,\ldots,d \} $
we may assume that $f_0,\ldots,f_d$ are real valued.
By \cite{GT} Theorem~8.31 there exists a unique $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
such that ${\cal B}_\lambda u = f_0 + \sum_{k=1}^d \partial_k f_k$ and $u|_\Gamma = 0$.
Then $u \in C_0(\Omega)$ and the existence follows from Lemma~\ref{lwreg205}.
The uniqueness follows from Proposition~\ref{pwreg202.5}.
\end{proof}
\begin{cor} \label{cwreg206}
Let $\lambda > \lambda_0$ and $p \in (d,\infty]$.
Let $f_0 \in L_{p/2}(\Omega)$ and $f_1,\ldots,f_d \in L_p(\Omega)$.
Let $u \in H^1_0(\Omega)$ and suppose that ${\cal B}_\lambda u = f_0 + \sum_{k=1}^d \partial_k f_k$.
Then $u \in C_0(\Omega)$.
\end{cor}
\begin{proof}
By Proposition~\ref{pwreg203} there exists a $\tilde u \in H^1_0(\Omega) \cap C_0(\Omega)$
such that ${\cal B}_\lambda \tilde u = f_0 + \sum_{k=1}^d \partial_k f_k$.
Then ${\cal B}_\lambda(u - \tilde u) = 0$.
So $\gothic{b}_\lambda(u - \tilde u,v) = 0$ first for all $v \in C_c^\infty(\Omega)$ and
then by density for all $v \in H^1_0(\Omega)$.
Choose $v = u - \tilde u$.
Then $\frac{\mu}{2} \, \|u - \tilde u\|_{H^1(\Omega)}^2 \leq \mathop{\rm Re} \gothic{b}_\lambda(u - \tilde u) = 0$.
So $u = \tilde u \in C_0(\Omega)$.
\end{proof}
We next wish to add the other lower-order terms.
\begin{prop} \label{pwreg209}
There exists a $c > 0$ such that
for all $\Phi \in C^1(\mathds{R}^d)$ there exists a unique
$u \in H^1(\Omega) \cap C(\overline \Omega)$ such that
$u|_\Gamma = \Phi|_\Gamma$ and ${\cal A} u = 0$.
Moreover,
\[
\|u\|_{C(\overline \Omega)}
\leq c \, \|\Phi|_\Gamma\|_{C(\Gamma)}
. \]
\end{prop}
For the proof we need some lemmas.
In the next lemma we introduce a parameter $\delta$ in order
to avoid duplication of the proof.
\begin{lemma} \label{lwreg207}
Fix $\delta \in [0,\lambda_0 + 1]$.
\mbox{}
\begin{tabel}
\item \label{lwreg207-1}
For all $f \in L_2(\Omega)$ and $\lambda > \lambda_0$
there exists a unique $u \in H^1_0(\Omega)$ such that
\begin{equation}
\gothic{b}_\lambda(u,v)
= \sum_{k=1}^d (b_k \, f, \partial_k v)_{L_2(\Omega)}
+ \, ((c_0 - \delta \, \mathds{1}_\Omega) \, f, v)_{L_2(\Omega)}
\label{elwreg207;1}
\end{equation}
for all $v \in H^1_0(\Omega)$.
\end{tabel}
For all $\lambda > \lambda_0$ define
$R_\lambda \colon L_2(\Omega) \to L_2(\Omega)$ by $R_\lambda f = u$, where
$u \in H^1_0(\Omega)$ is as in~{\rm (\ref{elwreg207;1})}.
\begin{tabel}
\setcounter{teller}{1}
\item \label{lwreg207-1.5}
There exists a $c_1 > 0$ such that
\[
\|R_\lambda f\|_{L_q(\Omega)} \leq c_1 \, (\lambda - \lambda_0)^{-1/4} \, \|f\|_{L_2(\Omega)}
\]
for all $\lambda > \lambda_0$ and $f \in L_2(\Omega)$,
where $\frac{1}{q} = \frac{1}{2} - \frac{1}{4d}$.
\item \label{lwreg207-2}
There exists a $c_2 \geq 1$ such that
\[
\|R_\lambda f\|_{L_q(\Omega)} \leq c_2 \, \|f\|_{L_p(\Omega)}
\]
for all $\lambda \in [\lambda_0 + 1,\infty)$,
$p,q \in [2,\infty]$ and $f \in L_p(\Omega)$ with $\frac{1}{q} = \frac{1}{p} - \frac{1}{4d}$.
\item \label{lwreg207-3}
If $\lambda > \lambda_0$,
$p \in (d,\infty]$ and $f \in L_p(\Omega)$, then $R_\lambda f \in C_0(\Omega)$.
\end{tabel}
\end{lemma}
\begin{proof}
`\ref{lwreg207-1}'.
This follows from the Lax--Milgram theorem.
`\ref{lwreg207-1.5}'.
Define $M = \|c_0 - \delta \, \mathds{1}_\Omega\|_{L_\infty(\Omega)} + \sum_{k=1}^d \|b_k\|_{L_\infty(\Omega)}$.
Let $\lambda > \lambda_0$, $f \in L_2(\Omega)$ and set $u = R_\lambda f$.
Then
\begin{eqnarray*}
\frac{\mu}{2} \, \|u\|_{H^1(\Omega)}^2 + (\lambda - \lambda_0) \|u\|_{L_2(\Omega)}^2
& \leq & \mathop{\rm Re} \gothic{b}_{\lambda_0}(u) + (\lambda - \lambda_0) \|u\|_{L_2(\Omega)}^2 \\
& = & \mathop{\rm Re} \gothic{b}_\lambda(u) \\
& = & \mathop{\rm Re} \sum_{k=1}^d (b_k \, f, \partial_k u)_{L_2(\Omega)}
+ \mathop{\rm Re} ((c_0 - \delta \, \mathds{1}_\Omega) \, f, u)_{L_2(\Omega)} \\
& \leq & M \, \|f\|_{L_2(\Omega)} \, \|u\|_{H^1(\Omega)}
.
\end{eqnarray*}
So
$\|u\|_{H^1(\Omega)}
\leq 2 \mu^{-1} \, M \, \|f\|_{L_2(\Omega)}$
and
\[
\|R_\lambda f\|_{L_2(\Omega)}
= \|u\|_{L_2(\Omega)}
\leq \sqrt{\frac{2}{\mu (\lambda - \lambda_0)} } \, M \, \|f\|_{L_2(\Omega)}
. \]
By the Sobolev embedding theorem there exists a $c_1 > 0$ such that
$\|v\|_{L_{q_1}(\Omega)} \leq c_1 \, \|v\|_{H^1(\Omega)}$ for all $v \in H^1_0(\Omega)$,
where $\frac{1}{q_1} = \frac{1}{2} - \frac{1}{2d}$.
(The extra factor $2$ is to avoid a separate case for $d=2$.)
Then
$\|R_\lambda f\|_{L_{q_1}(\Omega)}
\leq 2 \mu^{-1} \, c_1 \, M \, \|f\|_{L_2(\Omega)}$.
Hence
\[
\|R_\lambda f\|_{L_q(\Omega)}
\leq \|R_\lambda f\|_{L_2(\Omega)}^{1/2} \, \|R_\lambda f\|_{L_{q_1}(\Omega)}^{1/2}
\leq c_2 \, (\lambda - \lambda_0)^{-1/4} \, \|f\|_{L_2(\Omega)}
, \]
where $c_2 = (2/\mu)^{3/4} \, c_1^{1/2} \, M$.
`\ref{lwreg207-2}'.
Apply Corollary~\ref{cwreg206} with $p = 4d$ and $\lambda = \lambda_0 + 1$.
It follows that $R_{\lambda_0 + 1} f \in C_0(\Omega)$ for all $f \in L_p(\Omega)$.
Clearly the map $R_{\lambda_0 + 1} |_{L_p(\Omega)} \colon L_p(\Omega) \to C_0(\Omega)$
has a closed graph.
Hence it is continuous.
In particular, there exists a $c_3 > 0$ such that
$\|R_{\lambda_0 + 1} f\|_{L_\infty(\Omega)}
= \|R_{\lambda_0 + 1} f\|_{C_0(\Omega)}
\leq c_3 \, \|f\|_{L_p(\Omega)}$
for all $f \in L_p(\Omega)$.
Let $\lambda \geq \lambda_0 + 1$ and $f \in L_2(\Omega)$.
Write $u = R_\lambda f$ and $u_0 = R_{\lambda_0 + 1} f$.
Then $\gothic{b}_\lambda(u,v) = \gothic{b}_{\lambda_0 + 1}(u_0,v)$
and therefore
$\gothic{b}_\lambda(u - u_0, v) = - (\lambda - \lambda_0 - 1) \, (u,v)_{L_2(\Omega)}$
for all $v \in H^1_0(\Omega)$.
Hence $u - u_0 \in D(B^D)$ and $(B^D + \lambda \, I) (u - u_0) = - (\lambda - \lambda_0 - 1) \, u_0$.
Consequently
\[
R_\lambda
= \Big( I - (\lambda - \lambda_0 - 1) \, (B^D + \lambda \, I)^{-1} \Big) R_{\lambda_0 + 1}
\]
for all $\lambda \geq \lambda_0 + 1$.
Since the semigroup generated by $-B^D$ has Gaussian bounds, there exists a
$c_4 \geq 1$ such that
$\|(B^D + \lambda \, I)^{-1}\|_{\infty \to \infty} \leq c_4 \, \lambda^{-1}$
for all $\lambda \geq \lambda_0 + 1$.
Then $\|R_\lambda f\|_{L_\infty(\Omega)} \leq 2 c_3 \, c_4 \, \|f\|_{L_p(\Omega)}$
for all $\lambda \geq \lambda_0 + 1$ and $f \in L_p(\Omega)$.
Finally let $p' \in (2,4d)$ and let $q' \in (2,\infty)$ be such that
$\frac{1}{q'} = \frac{1}{p'} - \frac{1}{4d}$.
There exists a $\theta \in (0,1)$ such that
$\frac{1}{p'} = \frac{1-\theta}{2} + \frac{\theta}{p}$.
Then $\frac{1}{q'} = \frac{1-\theta}{q}$, where
$\frac{1}{q} = \frac{1}{2} - \frac{1}{4d}$.
Let $c_1 > 0$ be as in Statement~\ref{lwreg207-1.5}.
The operator $R_\lambda$ is bounded from $L_2(\Omega)$ into
$L_q(\Omega)$ with norm at most~$c_1$ by Statement~\ref{lwreg207-1.5},
and we just proved that the operator $R_\lambda$ is bounded from $L_p(\Omega)$ into
$L_\infty(\Omega)$ with norm at most~$2 c_3 \, c_4$.
Hence by interpolation the operator $R_\lambda$ is bounded from
$L_{p'}(\Omega)$ into $L_{q'}(\Omega)$ with norm bounded by
$c_1^{1-\theta} \, (2 c_3 \, c_4)^\theta \leq c_1 + 2 c_3 \, c_4$,
which gives Statement~\ref{lwreg207-2}.
`\ref{lwreg207-3}'.
This is a special case of Corollary~\ref{cwreg206}.
\end{proof}
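The interpolation arithmetic at the end of the proof of Statement~\ref{lwreg207-2} can also be verified in exact arithmetic (our sketch, with $p = 4d$ and sample values of $p' \in (2,4d)$): if $\frac{1}{p'} = \frac{1-\theta}{2} + \frac{\theta}{p}$, then indeed $\frac{1}{q'} := \frac{1}{p'} - \frac{1}{4d} = \frac{1-\theta}{q}$ with $\frac{1}{q} = \frac{1}{2} - \frac{1}{4d}$:

```python
from fractions import Fraction as F

for d in (2, 3, 5):
    p = 4*d
    for pp_num in (3, 4, 5):                      # sample p' in (2, 4d)
        pp = F(pp_num)
        # theta solving 1/p' = (1-theta)/2 + theta/p
        theta = (F(1, 2) - 1/pp) / (F(1, 2) - F(1, p))
        assert 0 < theta < 1
        lhs = 1/pp - F(1, p)                      # 1/q'
        rhs = (1 - theta) * (F(1, 2) - F(1, p))   # (1-theta)/q
        assert lhs == rhs
```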
The main step in the proof of Proposition~\ref{pwreg209} is
the next lemma.
\begin{lemma} \label{lwreg210}
There exist $\lambda > \lambda_0$ and $c > 0$ such that
for all $\Phi \in C^1(\overline \Omega) \cap H^1(\Omega)$ there exists a unique
$u \in H^1(\Omega) \cap C(\overline \Omega)$ such that
$u|_\Gamma = \Phi|_\Gamma$ and ${\cal A}_\lambda u = 0$.
Moreover,
\[
\|u\|_{C(\overline \Omega)}
\leq c \, \|\Phi|_\Gamma\|_{C(\Gamma)}
. \]
\end{lemma}
\begin{proof}
Choose $\delta = 0$ in Lemma~\ref{lwreg207}.
Let $c_1$ and $c_2$ be as in Lemma~\ref{lwreg207}.
Let $\lambda \in (\lambda_0 + 1,\infty)$ be such that
$c_1 \, c_2^{2d-1} \, (\lambda - \lambda_0)^{-1/4} \, (1 + |\Omega|) \leq \frac{1}{2}$.
Let $R_\lambda$ be as in Lemma~\ref{lwreg207}.
Set $\varphi = \Phi|_\Gamma$.
There exist unique $w,\tilde w \in H^1_0(\Omega)$ such that
$\gothic{a}_\lambda(w,v) = \gothic{a}_\lambda(\Phi,v)$ and
$\gothic{b}_\lambda(\tilde w,v) = \gothic{b}_\lambda(\Phi,v)$
for all $v \in H^1_0(\Omega)$.
Then $\tilde w \in C_0(\Omega)$ by Corollary~\ref{cwreg206}.
Define $u = \Phi - w$ and $\tilde u = \Phi - \tilde w$.
Then $\tilde u \in H^1(\Omega) \cap C(\overline \Omega)$ and
$\tilde u|_\Gamma = \varphi$.
Moreover, $\gothic{a}_\lambda(u,v) = 0$ and $\gothic{b}_\lambda(\tilde u,v) = 0$
for all $v \in H^1_0(\Omega)$,
and $\|\tilde u\|_{C(\overline \Omega)} \leq \|\varphi\|_{C(\Gamma)}$
by the result of Stampacchia mentioned in the introduction
(\cite{Stam2} Th\'eor\`eme~3.8).
Let $v \in H^1_0(\Omega)$.
Then
\[
\gothic{b}_\lambda(\tilde u - u,v)
= \sum_{k=1}^d (b_k \, u, \partial_k v)_{L_2(\Omega)}
+ (c_0 \, u, v)_{L_2(\Omega)}
\]
and
$\tilde u - u = R_\lambda u$ by the definition of $R_\lambda$.
For all $n \in \{ 0,\ldots,2d \} $ define $p_n = \frac{4d}{2d-n}$.
Then $p_0 = 2$, $p_{2d-1} = 4d$, $p_{2d} = \infty$
and $\frac{1}{p_n} = \frac{1}{p_{n-1}} - \frac{1}{4d}$
for all $n \in \{ 1,\ldots,2d \} $.
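For example, if $d = 2$, then the ladder of exponents is
\[
p_0 = 2, \qquad p_1 = \tfrac{8}{3}, \qquad p_2 = 4, \qquad p_3 = 8, \qquad p_4 = \infty
, \]
so the exponent climbs from $2$ to $\infty$ in $2d = 4$ steps, each of size $\frac{1}{4d}$ on the scale of reciprocals.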
So $\|\tilde u - u\|_{L_{p_n}(\Omega)} \leq c_2 \, \|u\|_{L_{p_{n-1}}(\Omega)}$
for all $n \in \{ 2,\ldots,2d \} $ and
$\|\tilde u - u\|_{L_{p_1}(\Omega)}
\leq c_1 \, (\lambda - \lambda_0)^{-1/4} \, \|u\|_{L_2(\Omega)}$
by Lemma~\ref{lwreg207}\ref{lwreg207-2} and \ref{lwreg207-1.5}.
Then
\[
\|u\|_{L_{p_1}(\Omega)}
\leq c_1 \, (\lambda - \lambda_0)^{-1/4} \, \|u\|_{L_2(\Omega)}
+ (1 + |\Omega|) \, \|\tilde u\|_{L_\infty(\Omega)}
\]
and
\[
\|u\|_{L_{p_n}(\Omega)}
\leq c_2 \, \|u\|_{L_{p_{n-1}}(\Omega)}
+ (1 + |\Omega|) \, \|\tilde u\|_{L_\infty(\Omega)}
\]
for all $n \in \{ 2,\ldots,2d \} $.
It follows by induction on $n$ that
\[
\|u\|_{L_{p_n}(\Omega)}
\leq c_1 \, c_2^{n-1} \, (\lambda - \lambda_0)^{-1/4} \, \|u\|_{L_2(\Omega)}
+ (1 + |\Omega|) \sum_{k=0}^{n-1} c_2^k \, \|\tilde u\|_{L_\infty(\Omega)}
\]
for all $n \in \{ 2,\ldots,2d \} $.
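Indeed, the case $n = 2$ follows by inserting the estimate for $\|u\|_{L_{p_1}(\Omega)}$ into the estimate for $\|u\|_{L_{p_2}(\Omega)}$, and the induction step reads
\[
\|u\|_{L_{p_n}(\Omega)}
\leq c_2 \Big( c_1 \, c_2^{n-2} \, (\lambda - \lambda_0)^{-1/4} \, \|u\|_{L_2(\Omega)}
+ (1 + |\Omega|) \sum_{k=0}^{n-2} c_2^k \, \|\tilde u\|_{L_\infty(\Omega)} \Big)
+ (1 + |\Omega|) \, \|\tilde u\|_{L_\infty(\Omega)}
, \]
which regroups to the stated bound since $c_2 \sum_{k=0}^{n-2} c_2^k + 1 = \sum_{k=0}^{n-1} c_2^k$.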
So $u \in L_{p_{2d-1}}(\Omega) = L_{4d}(\Omega)$ and
$\tilde u - u = R_\lambda u \in C_0(\Omega)$ by Lemma~\ref{lwreg207}\ref{lwreg207-3}.
In particular $u \in C(\overline \Omega)$.
Moreover,
\begin{eqnarray*}
\|u\|_{L_\infty(\Omega)}
& = & \|u\|_{L_{p_{2d}}(\Omega)} \\
& \leq & c_1 \, c_2^{2d-1} \, (\lambda - \lambda_0)^{-1/4} \, \|u\|_{L_2(\Omega)}
+ 2 d \, (1 + |\Omega|) \, c_2^{2d-1} \, \|\tilde u\|_{L_\infty(\Omega)} \\
& \leq & c_1 \, c_2^{2d-1} \, (\lambda - \lambda_0)^{-1/4} \, (1 + |\Omega|) \, \|u\|_{L_\infty(\Omega)}
+ 2 d \, (1 + |\Omega|) \, c_2^{2d-1} \, \|\tilde u\|_{L_\infty(\Omega)} \\
& \leq & \frac{1}{2} \, \|u\|_{L_\infty(\Omega)}
+ 2 d \, (1 + |\Omega|) \, c_2^{2d-1} \, \|\tilde u\|_{L_\infty(\Omega)}
\end{eqnarray*}
by the choice of $\lambda$.
So
\[
\|u\|_{L_\infty(\Omega)}
\leq 4 d \, (1 + |\Omega|) \, c_2^{2d-1} \, \|\tilde u\|_{L_\infty(\Omega)}
\leq 4 d \, (1 + |\Omega|) \, c_2^{2d-1} \, \|\varphi\|_{C(\Gamma)}
\]
and the proof of the lemma is complete.
\end{proof}
We next wish to remove the $\lambda$ in Lemma~\ref{lwreg210}.
For future purposes, we consider the full inhomogeneous problem.
\begin{prop} \label{pwreg214}
Let $p \in (d,\infty]$, $f_0 \in L_{p/2}(\Omega)$
and let $f_1,\ldots,f_d \in L_p(\Omega)$.
Let $u \in H^1_0(\Omega)$ be such that
${\cal A} u = f_0 + \sum_{k=1}^d \partial_k f_k$.
Then $u \in C_0(\Omega)$.
\end{prop}
\begin{proof}
Without loss of generality we may assume that $p \in (d,4d)$.
Choose $\lambda = \delta = \lambda_0 + 1$ in Lemma~\ref{lwreg207} and in Proposition~\ref{pwreg203}.
By Proposition~\ref{pwreg203} there exists a unique $\tilde u \in H^1_0(\Omega) \cap C_0(\Omega)$
such that ${\cal B}_\lambda \tilde u = f_0 + \sum_{k=1}^d \partial_k f_k$.
If $v \in C_c^\infty(\Omega)$, then
\begin{eqnarray*}
\gothic{b}_\lambda(\tilde u,v)
& = & \langle f_0 + \sum_{k=1}^d \partial_k f_k,v \rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)} \\
& = & \gothic{a}(u,v) \\
& = & \gothic{b}_\lambda(u,v)
+ \sum_{k=1}^d (b_k \, u, \partial_k v)_{L_2(\Omega)}
+ ((c_0 - \delta \, \mathds{1}_\Omega) \, u, v)_{L_2(\Omega)}
.
\end{eqnarray*}
So
\[
\gothic{b}_\lambda(\tilde u - u,v)
= \sum_{k=1}^d (b_k \, u, \partial_k v)_{L_2(\Omega)}
+ ((c_0 - \delta \, \mathds{1}_\Omega) \, u, v)_{L_2(\Omega)}
\]
and, by density, the same identity holds for all $v \in H^1_0(\Omega)$.
Hence $u - \tilde u = R_\lambda u$, where $R_\lambda$ is as in Lemma~\ref{lwreg207}.
For all $n \in \{ 0,\ldots,2d-1 \} $ define $p_n = \frac{4d}{2d-n}$.
Then $u - \tilde u \in L_2(\Omega) = L_{p_0}(\Omega)$.
It follows by induction on $n$ that $u \in L_{p_{n-1}}(\Omega)$ and
$u - \tilde u \in L_{p_n}(\Omega)$ for all $n \in \{ 1,\ldots,2d-1 \} $, where
the last part follows from Lemma~\ref{lwreg207}\ref{lwreg207-2}.
Hence $u - \tilde u \in L_{p_{2d-1}}(\Omega) = L_{4d}(\Omega)$
and $u \in L_p(\Omega)$.
Then Lemma~\ref{lwreg207}\ref{lwreg207-3} gives
$u - \tilde u = R_\lambda u \in C_0(\Omega)$ and therefore $u \in C_0(\Omega)$.
\end{proof}
\begin{cor} \label{cwreg208}
Let $p \in (d,\infty]$.
Then $(A^D)^{-1} (L_p(\Omega)) \subset C_0(\Omega)$.
\end{cor}
\begin{cor} \label{cwreg211}
There exists a $c' > 0$ such that
$\|(A^D)^{-1} f\|_{L_\infty(\Omega)} \leq c' \, \|f\|_{L_\infty(\Omega)}$
for all $f \in L_\infty(\Omega)$.
\end{cor}
\begin{proof}
The operator $(A^D)^{-1}$ maps $L_\infty(\Omega)$ into $C_0(\Omega) \subset L_\infty(\Omega)$ by Corollary~\ref{cwreg208}.
Since $\Omega$ is bounded, convergence in $L_\infty(\Omega)$ implies convergence in $L_2(\Omega)$;
together with the boundedness of $(A^D)^{-1}$ on $L_2(\Omega)$ this shows that
$(A^D)^{-1} \colon L_\infty(\Omega) \to L_\infty(\Omega)$ has closed graph, so the closed graph theorem gives the bound.
\end{proof}
\begin{proof}[{\bf Proof of Proposition~\ref{pwreg209}.}]
Let $c,\lambda > 0$ be as in Lemma~\ref{lwreg210}
and let $c' > 0$ be as in Corollary~\ref{cwreg211}.
By Lemma~\ref{lwreg210} there exists a unique
$\tilde u \in H^1(\Omega) \cap C(\overline \Omega)$
such that $\tilde u|_\Gamma = \Phi|_\Gamma$ and ${\cal A}_\lambda \tilde u = 0$.
By Lemma~\ref{lwreg202} there exists a unique $w \in H^1_0(\Omega)$ such that
$\gothic{a}(w,v) = \gothic{a}(\Phi|_\Omega,v)$ for all $v \in H^1_0(\Omega)$.
Set $u = \Phi|_\Omega - w$ and $\tilde w = \Phi|_\Omega - \tilde u$.
Then
\begin{eqnarray*}
\gothic{a}(w,v)
& = & \gothic{a}(\Phi|_\Omega,v)
= \gothic{a}_\lambda(\Phi|_\Omega,v) - \lambda \, (\Phi,v)_{L_2(\Omega)}
= \gothic{a}_\lambda(\tilde w,v) - \lambda \, (\Phi,v)_{L_2(\Omega)} \\
& = & \gothic{a}(\tilde w,v)
+ \lambda \, (\tilde w,v)_{L_2(\Omega)}
- \lambda \, (\Phi,v)_{L_2(\Omega)}
= \gothic{a}(\tilde w,v)
- \lambda \, (\tilde u,v)_{L_2(\Omega)}
\end{eqnarray*}
for all $v \in H^1_0(\Omega)$.
So
\[
\gothic{a}(\tilde u - u, v)
= \gothic{a}(w - \tilde w,v)
= - \lambda \, (\tilde u,v)_{L_2(\Omega)}
. \]
Since $\tilde u - u \in H^1_0(\Omega)$ it follows that
$A^D(\tilde u - u) = - \lambda \, \tilde u$.
Consequently, $u = \tilde u + \lambda \, (A^D)^{-1} \tilde u \in C_0(\Omega)$
by Corollary~\ref{cwreg208}.
Moreover,
\begin{eqnarray*}
\|u\|_{C(\overline \Omega)}
& = & \|u\|_{L_\infty(\Omega)}
\leq \|\tilde u\|_{L_\infty(\Omega)} + \lambda \, \|(A^D)^{-1} \tilde u\|_{L_\infty(\Omega)} \\
& \leq & (1 + c' \, \lambda) \, \|\tilde u\|_{L_\infty(\Omega)}
\leq (1 + c' \, \lambda) \, c \, \|\Phi|_\Gamma\|_{C(\Gamma)}
\end{eqnarray*}
and the proof of Proposition~\ref{pwreg209} is complete.
\end{proof}
Define $|||\cdot||| \colon H^1_{\rm loc}(\Omega) \to [0,\infty]$ by
\[
|||u|||
= \sup_{\delta > 0}
\sup_{\scriptstyle \Omega_0 \subset \Omega \; {\rm open} \atop
\scriptstyle d(\Omega_0,\Gamma) = \delta}
\delta \Big( \int_{\Omega_0} |\nabla u|^2 \Big)^{1/2}
. \]
Finally we need the following Caccioppoli inequality.
\begin{prop} \label{pwreg213}
There exists a $c' \geq 1$ such that
$|||u||| \leq c' \, \|u\|_{L_2(\Omega)}$
for all $u \in H^1(\Omega)$ such that ${\cal A} u = 0$.
\end{prop}
\begin{proof}
See \cite{GiM} Theorem~4.4.
\end{proof}
Now we are able to prove Theorem~\ref{twreg101}.
\begin{proof}[{\bf Proof of Theorem~\ref{twreg101}.}]
The uniqueness is already proved in Proposition~\ref{pwreg202.5}.
Let $c > 0$ and $c' \geq 1$ be as in Propositions~\ref{pwreg209} and \ref{pwreg213}.
Let $\Phi \in C^1(\mathds{R}^d) \cap H^1(\mathds{R}^d)$.
By Proposition~\ref{pwreg209} there exists a unique
$u \in H^1(\Omega) \cap C(\overline \Omega)$ such that
$u|_\Gamma = \Phi|_\Gamma$ and ${\cal A} u = 0$.
Moreover,
\begin{eqnarray}
\|u\|_{C(\overline \Omega)} + |||u|||
& \leq & \|u\|_{C(\overline \Omega)} + c' \, \|u\|_{L_2(\Omega)} \nonumber \\
& \leq & (2 + |\Omega|) \, c' \, \|u\|_{C(\overline \Omega)} \nonumber \\
& \leq & (2 + |\Omega|) \, c \, c' \, \|\Phi|_\Gamma\|_{C(\Gamma)}
.
\label{etwreg101;4}
\end{eqnarray}
It follows from (\ref{etwreg101;4}) that we can define a linear map
$F \colon \{ \Phi|_\Gamma : \Phi \in C^1(\mathds{R}^d) \cap H^1(\mathds{R}^d) \} \to H^1(\Omega) \cap C(\overline \Omega)$
by $F(\Phi|_\Gamma) = u$, where $u \in H^1(\Omega) \cap C(\overline \Omega)$
is such that
$u|_\Gamma = \Phi|_\Gamma$ and ${\cal A} u = 0$.
Now let $\varphi \in C(\Gamma)$.
By the Stone--Weierstra\ss\ theorem there are
$\Phi_1,\Phi_2,\ldots \in C^1(\mathds{R}^d) \cap H^1(\mathds{R}^d)$ such that
$\lim \Phi_n|_\Gamma = \varphi$ in $C(\Gamma)$.
Set $u_n = F(\Phi_n|_\Gamma)$ for all $n \in \mathds{N}$.
Then it follows from (\ref{etwreg101;4}) that $(u_n)_{n \in \mathds{N}}$
is a Cauchy sequence in $C(\overline \Omega)$.
Let $u = \lim u_n$ in $C(\overline \Omega)$.
Also $(u_n)_{n \in \mathds{N}}$ is a Cauchy sequence in $H^1_{\rm loc}(\Omega)$ by
(\ref{etwreg101;4}).
So $u \in H^1_{\rm loc}(\Omega)$.
Since ${\cal A} u_n = 0$ for all $n \in \mathds{N}$, one deduces that
${\cal A} u = 0$.
Moreover, $u|_\Gamma = \lim u_n|_\Gamma = \lim \Phi_n|_\Gamma = \varphi$.
This proves existence.
Finally,
\[
\|u\|_{C(\overline \Omega)}
= \lim \|u_n\|_{C(\overline \Omega)}
\leq \lim (2 + |\Omega|) \, c \, c' \, \|\Phi_n|_\Gamma\|_{C(\Gamma)}
= (2 + |\Omega|) \, c \, c' \, \|\varphi\|_{C(\Gamma)}
. \]
This completes the proof of Theorem~\ref{twreg101}.
\end{proof}
Theorem~\ref{twreg101} has the following extension.
\begin{thm} \label{twreg216}
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
Let $\varphi \in C(\Gamma)$, $p \in (d,\infty]$, $f_0 \in L_{p/2}(\Omega)$
and let $f_1,\ldots,f_d \in L_p(\Omega)$.
Then there exists a unique $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
such that $u|_\Gamma = \varphi$ and ${\cal A} u = f_0 + \sum_{k=1}^d \partial_k f_k$.
\end{thm}
\begin{proof}
The uniqueness follows as in the proof of Proposition~\ref{pwreg202.5}.
By Lemma~\ref{lwreg202} there exists a $u_0 \in H^1_0(\Omega)$ such that
${\cal A} u_0 = f_0 + \sum_{k=1}^d \partial_k f_k$.
Then $u_0 \in C_0(\Omega)$ by Proposition~\ref{pwreg214}.
By Theorem~\ref{twreg101} there exists a $u_1 \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$
such that $u_1|_\Gamma = \varphi$ and ${\cal A} u_1 = 0$.
Define $u = u_0 + u_1$.
Then $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$.
Moreover, $u|_\Gamma = \varphi$ and ${\cal A} u = f_0 + \sum_{k=1}^d \partial_k f_k$.
\end{proof}
We conclude this section with some results for the classical solution.
They will be used in Section~\ref{Swreg3new} and are of independent interest.
Recall that $P \colon C(\Gamma) \to C(\overline \Omega)$ is given by
$P \varphi = u$, where $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$ is
the classical solution, so $u|_\Gamma = \varphi$ and ${\cal A} u = 0$.
\begin{prop} \label{pwreg240}
Let $\Phi \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$.
Suppose there exists a $w \in H^1_0(\Omega)$ such that ${\cal A} \Phi = {\cal A} w$.
Then $w \in C(\overline \Omega)$ and $P(\Phi|_\Gamma) = \Phi - w$.
\end{prop}
\begin{proof}
Write $\tilde w = \Phi - P(\Phi|_\Gamma)$.
Then $\tilde w \in C_0(\Omega) \cap H^1_{\rm loc}(\Omega)$
and ${\cal A} \tilde w = {\cal A} \Phi = {\cal A} w = f_0 + \sum_{k=1}^d \partial_k f_k$,
where $f_0 = c_0 \, w + \sum_{l=1}^d c_l \, \partial_l w \in L_2(\Omega)$
and $f_k = - \sum_{l=1}^d a_{lk} \, \partial_l w - b_k \, w \in L_2(\Omega)$
for all $k \in \{ 1,\ldots,d \} $.
So $\tilde w \in H^1_0(\Omega)$ by Lemma~\ref{lwreg205}.
Hence ${\cal A}(\tilde w - w) = 0$ and $\tilde w - w \in \ker A^D = \{ 0 \} $.
So $w = \tilde w = \Phi - P(\Phi|_\Gamma)$.
\end{proof}
We need the dual map of ${\cal A}$.
Define the map ${\cal A}^t \colon H^1_{\rm loc}(\Omega) \to {\cal D}'(\Omega)$ by
\[
\langle {\cal A} u,v \rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}
= \sum_{k,l=1}^d \int_\Omega a_{lk} \, (\partial_k u) \, \overline{\partial_l v}
- \sum_{k=1}^d \int_\Omega \overline{c_k} \, u \, \overline{\partial_k v}
- \sum_{k=1}^d \int_\Omega \overline{b_k} \, (\partial_k u) \, \overline v
+ \int_\Omega \overline{c_0} \, u \, \overline v
\]
for all $u \in H^1_{\rm loc}(\Omega)$ and $v \in C_c^\infty(\Omega)$.
\begin{cor} \label{cwreg241}
Suppose that $a_{kl}, b_k, c_k \in W^{1,\infty}(\Omega)$ for all
$k,l \in \{ 1,\ldots,d \} $.
Let $\Phi \in C(\overline \Omega)$.
Suppose there exists a $w \in H^1_0(\Omega)$ such that
\[
\langle \Phi, {\cal A}^t v\rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}
= \gothic{a}(w,v)
\]
for all $v \in C_c^\infty(\Omega)$.
Then $w \in C(\overline \Omega)$ and $P(\Phi|_\Gamma) = \Phi - w$.
\end{cor}
\begin{proof}
By assumption one has $\langle \Phi - w, {\cal A}^t v\rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)} = 0$
for all $v \in C_c^\infty(\Omega)$.
Hence $\Phi - w \in H^1_{\rm loc}(\Omega)$ by elliptic regularity.
So $\Phi \in H^1_{\rm loc}(\Omega)$ and
\[
\langle {\cal A} \Phi, v\rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}
= \langle \Phi, {\cal A}^t v\rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}
= \gothic{a}(w,v)
= \langle {\cal A} w, v\rangle_{{\cal D}'(\Omega) \times {\cal D}(\Omega)}
\]
for all $v \in C_c^\infty(\Omega)$.
Therefore ${\cal A} \Phi = {\cal A} w$ and the result follows from
Proposition~\ref{pwreg240}.
\end{proof}
The last corollary takes a very simple form for the Laplacian.
\begin{cor} \label{cwreg242}
Let $\Phi \in C(\overline \Omega)$.
Suppose that $\Delta \Phi \in H^{-1}(\Omega)$.
Let $w \in H^1_0(\Omega)$ be such that $\Delta \Phi = \Delta w$ in the sense of distributions.
Then $w \in C(\overline \Omega)$ and $P(\Phi|_\Gamma) = \Phi - w$.
\end{cor}
This corollary is a special case of \cite{AD2} Theorem~1.1.
\section{Variational and classical solutions: comparison} \label{Swreg3new}
In this section we show that the variational and classical solutions
of the Dirichlet problem are the same.
For that we assume throughout this section that $\Omega$ is an open
set with Lipschitz boundary.
Moreover, we adopt the assumptions and notation of Theorem~\ref{twreg101}.
Recall that for all $\varphi \in C(\Gamma)$ we denote by $P \varphi \in C(\overline \Omega)$
the classical solution and for all $\varphi \in H^{1/2}(\Gamma)$,
we denote by $\gamma \varphi \in H^1(\Omega)$ the variational solution
of the Dirichlet problem.
We shall prove in this section that they coincide if both are defined.
The fact that they coincide for restrictions to $\Gamma$ of functions in
$C(\overline \Omega) \cap H^1(\Omega)$ is a consequence of Proposition~\ref{pwreg240}.
We state this as a proposition.
\begin{prop} \label{pwreg411}
Let $\Phi \in C(\overline \Omega) \cap H^1(\Omega)$.
Then $P (\Phi|_\Gamma) = \gamma (\Phi|_\Gamma)$ almost everywhere.
\end{prop}
So for the proof of Theorem~\ref{twreg102} it suffices to show that
the map $\Phi \mapsto \Phi|_\Gamma$ from $C(\overline \Omega) \cap H^1(\Omega)$
into $C(\Gamma) \cap H^{1/2}(\Gamma)$ is surjective.
This is surprisingly difficult to prove.
We first prove Theorem~\ref{twreg102} for the Laplacian with the help of
Proposition~\ref{pwreg411} and a deep result of Dahlberg.
As a consequence we obtain the desired surjectivity result.
Then as noticed earlier, Theorem~\ref{twreg102} follows for our general
elliptic operator.
\begin{thm} \label{tdtnc401}
Assume that $a_{kl} = \delta_{kl}$ and $b_k = c_k = c_0 = 0$
for all $k,l \in \{ 1,\ldots,d \} $.
Let $\varphi \in C(\Gamma) \cap H^{1/2}(\Gamma)$.
Then $P \varphi = \gamma \varphi$ almost everywhere.
\end{thm}
\begin{proof}
Let $x \in \Omega$.
By Dahlberg \cite{Dahlberg} Theorem~1 there exists a unique $k_x \in L_1(\Gamma)$ such that
$(P \varphi)(x) = \int_\Gamma k_x \, \varphi \, d\sigma$
for all $\varphi \in C(\Gamma)$.
Now let $\varphi \in C(\Gamma) \cap H^{1/2}(\Gamma)$.
Without loss of generality we may assume that $\varphi$ is real valued.
Then there exists a $u \in H^1(\Omega,\mathds{R})$ such that $\varphi = {\mathop{\rm Tr \,}} u$.
Since $H^1(\Omega) \cap C(\overline \Omega)$ is dense in $H^1(\Omega)$, there
exist $u_1,u_2,\ldots \in H^1(\Omega,\mathds{R}) \cap C(\overline \Omega)$ such that
$\lim u_n = u$ in $H^1(\Omega)$.
Define $v_n = (-\|\varphi\|_{L_\infty(\Gamma)}) \vee u_n \wedge \|\varphi\|_{L_\infty(\Gamma)}$
for all $n \in \mathds{N}$.
Then $v_n \in H^1(\Omega) \cap C(\overline \Omega)$.
Write $\varphi_n = v_n|_\Gamma = {\mathop{\rm Tr \,}} v_n \in C(\Gamma) \cap H^{1/2}(\Gamma)$
for all $n \in \mathds{N}$.
Then $P \varphi_n = \gamma \varphi_n$ almost everywhere for all $n \in \mathds{N}$
by Proposition~\ref{pwreg411}.
Note that
\[
\lim \varphi_n
= \lim {\mathop{\rm Tr \,}} v_n
= (-\|\varphi\|_{L_\infty(\Gamma)}) \vee {\mathop{\rm Tr \,}} u \wedge \|\varphi\|_{L_\infty(\Gamma)}
= \varphi
\]
in $H^{1/2}(\Gamma)$.
So by continuity of $\gamma$ one deduces that
$\gamma \varphi = \lim \gamma \varphi_n$ in $H^1(\Omega)$
and in particular in $L_2(\Omega)$.
Passing to a subsequence, if necessary, we may assume that
\[
(\gamma \varphi)(x) = \lim (\gamma \varphi_n)(x)
\]
for almost all $x \in \Omega$.
Using again that $\lim \varphi_n = \varphi$ in $H^{1/2}(\Gamma)$
and therefore also in $L_2(\Gamma)$,
we may assume that
$\lim \varphi_n = \varphi$ almost everywhere on $\Gamma$.
Hence if $x \in \Omega$, then
\[
(P \varphi)(x)
= \int_\Gamma k_x \, \varphi \, d\sigma
= \lim \int_\Gamma k_x \, \varphi_n \, d\sigma
= \lim (P \varphi_n)(x)
\]
by the Lebesgue dominated convergence theorem.
Since $P \varphi_n = \gamma \varphi_n$ almost everywhere for all $n \in \mathds{N}$
one concludes that $(P \varphi)(x) = (\gamma \varphi)(x)$ for almost all $x \in \Omega$.
\end{proof}
The desired surjectivity result is the following corollary
of Theorem~\ref{tdtnc401}.
\begin{cor} \label{cdtnc402}
Let $\Omega \subset \mathds{R}^d$ be a bounded open set with Lipschitz boundary.
Let $\varphi \in C(\Gamma) \cap H^{1/2}(\Gamma)$.
Then there exists a $u \in H^1(\Omega) \cap C(\overline \Omega)$
such that $\varphi = u|_\Gamma$.
\end{cor}
\begin{proof}[{\bf Proof of Theorem~\ref{twreg102}.}]
This follows from Corollary~\ref{cdtnc402} and Proposition~\ref{pwreg411}.
\end{proof}
\begin{cor} \label{cwreg305}
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
Suppose that $\Omega$ has a Lipschitz boundary.
Let $u \in C(\overline \Omega) \cap H^1_{\rm loc}(\Omega)$ and
suppose that ${\cal A} u = 0$.
Then $u \in H^1(\Omega)$ if and only if $u|_\Gamma \in H^{1/2}(\Gamma)$.
\end{cor}
\begin{proof}
`$\Rightarrow$' is trivial.
`$\Leftarrow$'.
Suppose $u|_\Gamma \in H^{1/2}(\Gamma)$.
Then $u = P(u|_\Gamma) = \gamma (u|_\Gamma) \in H^1(\Omega)$ by Theorem~\ref{twreg102}.
\end{proof}
\section{Semigroup and holomorphy on $C_0(\Omega)$} \label{Swreg4new}
In this section we prove Theorem~\ref{twreg103}.
Throughout this section we adopt the notation and assumptions of Theorem~\ref{twreg101}.
We need several lemmas.
\begin{lemma} \label{lwreg310}
The operator $A_c$ is invertible and $(A_c)^{-1} = (A^D)^{-1}|_{C_0(\Omega)}$.
\end{lemma}
\begin{proof}
If $v \in C_0(\Omega)$, then $(A^D)^{-1} v \in C_0(\Omega)$ by
Corollary~\ref{cwreg208}.
Moreover, $A^D ((A^D)^{-1} v) = v$.
So $(A^D)^{-1} v \in D(A_c)$ and $A_c ((A^D)^{-1} v) = v$.
Hence $A_c$ is surjective.
Since $A^D$ is injective, also $A_c$ is injective.
Therefore $A_c$ is invertible and
$(A_c)^{-1} = (A^D)^{-1}|_{C_0(\Omega)}$.
\end{proof}
The next proof is inspired by arguments in \cite{ABenilan} Theorem~4.4.
\begin{lemma} \label{lwreg311}
The domain $D(A_c)$ of the operator $A_c$ is dense in $C_0(\Omega)$.
\end{lemma}
\begin{proof}
Let $\rho \in M(\Omega)$, the Banach space of all
complex measures on $\Omega$, and suppose that $\int_\Omega v \, d\rho = 0$
for all $v \in D(A_c)$.
There exist $w_1,w_2,\ldots \in L_2(\Omega)$ such that
$\sup \|w_n\|_{L_1(\Omega)} < \infty$ and
$\lim \int_\Omega v \, \overline{w_n} = \int_\Omega v \, d\rho$
for all $v \in C_0(\Omega)$.
Choose $p = d+2$ and let $q \in (1,2)$ be the dual exponent of $p$.
It follows from Proposition~\ref{pwreg214} that the operator $(A^D)^{-1}$
extends to a continuous operator from $W^{-1,p}(\Omega)$ into $C_0(\Omega)$.
Hence the operator $(A^D)^{-1*}$
extends to a continuous operator from $M(\Omega)$ into $W^{1,q}_0(\Omega)$.
In particular, there exists a $c > 0$ such that
$\|(A^D)^{-1*} w\|_{W^{1,q}_0(\Omega)} \leq c \, \|w\|_{L_1(\Omega)}$
for all $w \in L_2(\Omega)$.
For all $n \in \mathds{N}$ set $u_n = (A^D)^{-1*} w_n$.
We emphasise that $u_n \in D((A^D)^*)$.
Then $\sup \|u_n\|_{W^{1,q}_0(\Omega)} < \infty$.
Note that $W^{1,q}_0(\Omega)$ is reflexive.
Hence passing to a subsequence if necessary,
there exists a $u \in W^{1,q}_0(\Omega)$
such that $\lim u_n = u$ weakly in $W^{1,q}_0(\Omega)$.
Let $v \in C_c^\infty(\Omega)$.
Then $(A^D)^{-1} v \in D(A_c)$ by Lemma~\ref{lwreg310}.
Therefore
\begin{eqnarray*}
0
= \int_\Omega (A^D)^{-1} v \, d\rho
& = & \lim \int_\Omega \Big( (A^D)^{-1} v \Big) \, \overline{w_n} \\
& = & \lim (v , (A^D)^{-1*} w_n)_{L_2(\Omega)}
= \lim (v , u_n)_{L_2(\Omega)}
= \lim \int_\Omega v \, \overline{u_n}
= \int_\Omega v \, \overline u
.
\end{eqnarray*}
Hence $u = 0$.
Again let $v \in C_c^\infty(\Omega)$.
Then
\[
\int_\Omega v \, d\rho
= \lim \int_\Omega v \, \overline{w_n}
= \lim (v, (A^D)^* u_n)_{L_2(\Omega)}
= \lim \gothic{a}(v, u_n)
= 0
, \]
where we used (\ref{eSwreg1;4}).
So $\rho = 0$ and $D(A_c)$ is dense in $C_0(\Omega)$.
\end{proof}
Now we prove that $-A_c$ generates a holomorphic $C_0$-semigroup.
\begin{proof}[{\bf Proof of Theorem~\ref{twreg103}.}]
Let $S$ be the semigroup generated by $-A^D$.
Then $S$ has a kernel with Gaussian upper bounds by \cite{Ouh5} Theorem~6.10
(see also \cite{Daners} Theorem~6.1 for operators with
real valued coefficients and \cite{AE1} Theorems~3.1 and 4.4).
Hence the semigroup $S$ extends consistently
to a semigroup $S^{(p)}$ on $L_p(\Omega)$
for all $p \in [1,\infty]$.
Choose $p \in (d,\infty]$.
Let $t > 0$ and $u \in L_2(\Omega)$.
Since $S$ is a holomorphic semigroup, one deduces that $S_t u \in D(A^D)$
and $A^D \, S_t u \in L_2(\Omega)$.
Next the Gaussian kernel bounds imply that $S_t$ maps $L_2(\Omega)$
into $L_p(\Omega)$.
So $A^D \, S_{2t} u = S_t \, A^D \, S_t u \in L_p(\Omega)$
and
\begin{equation}
S_{2t} u \in (A^D)^{-1} (L_p(\Omega)) \subset C_0(\Omega)
\label{etwreg103;10}
\end{equation}
by Corollary~\ref{cwreg208}.
Hence $S_t C_0(\Omega) \subset C_0(\Omega)$ for all $t > 0$.
For all $t > 0$ let $S^c_t = S_t|_{C_0(\Omega)} \colon C_0(\Omega) \to C_0(\Omega)$.
Then $(S^c_t)_{t > 0}$ is a semigroup on $C_0(\Omega)$.
Moreover, using again the Gaussian kernel bounds
there exists an $M \geq 1$ such that
$\|S^c_t\| \leq \|S^{(\infty)}_t\| \leq M$
for all $t \in (0,1]$.
Let $t \in (0,1]$ and $u \in D(A_c)$.
Then
\[
\|(I - S^c_t) u\|_{C_0(\Omega)}
= \Big\| \int_0^t S^c_s \, A_c u \, ds \Big\|_{C_0(\Omega)}
\leq \int_0^t M \, \|A_c u\|_\infty \, ds
= M \, t \, \|A_c u\|_\infty
. \]
So $\lim_{t \downarrow 0} S^c_t u = u$ in $C_0(\Omega)$.
Since $D(A_c)$ is dense in $C_0(\Omega)$ by Lemma~\ref{lwreg311},
one deduces that $\lim_{t \downarrow 0} S^c_t u = u$ in $C_0(\Omega)$
for all $u \in C_0(\Omega)$.
So $S^c$ is a $C_0$-semigroup.
Finally, using once more the Gaussian kernel bounds,
it follows that the semigroup $S^c$ is holomorphic (see \cite{AE1} Theorem~5.4).
\end{proof}
We conclude this section by establishing the existence of kernels with Gaussian bounds which are
continuous up to the boundary.
For this we use the following special case of \cite{AE8} Theorem~2.1.
\begin{prop} \label{pwreg350}
Suppose that $|\partial \Omega| = 0$.
Let $T$ be a semigroup in $L_2(\Omega)$ such that
$T_t L_2(\Omega) \subset C(\overline \Omega)$ and
$T_t^* L_2(\Omega) \subset C(\overline \Omega)$
for all $t > 0$.
Then for all $t > 0$ there exists a unique $k_t \in C(\overline \Omega \times \overline \Omega)$
such that
\[
(T_t u)(x) = \int_\Omega k_t(x,y) \, u(y) \, dy
\]
for all $u \in L_2(\Omega)$ and $x \in \Omega$.
\end{prop}
We continue to denote by $S$ the semigroup generated by $-A^D$
and we also denote by $S$ the holomorphic extension.
For all $\theta \in (0,\pi]$ let
$\Sigma(\theta) = \{ z \in \mathds{C} \setminus \{ 0 \} : |\arg z| < \theta \} $
be the open sector with (half)angle~$\theta$.
\begin{thm} \label{twreg351}
Adopt the notation and assumptions of Theorem~\ref{twreg101}.
In addition assume that $b_k$ is real valued for all $k \in \{ 1,\ldots,d \} $.
Let $\theta$ be the holomorphy angle of $S$.
Then for all $z \in \Sigma(\theta)$ there exists a unique
$k_z \in C(\overline \Omega \times \overline \Omega)$ such that the following is
valid.
\begin{tabelR}
\item \label{twreg351-1}
$(S_z u)(x) = \int_\Omega k_z(x,y) \, u(y) \, dy$
for all $z \in \Sigma(\theta)$, $u \in L_2(\Omega)$ and $x \in \overline \Omega$.
\item \label{twreg351-2}
$k_z(x,y) = 0$ for all $z \in \Sigma(\theta)$ and
$x,y \in \overline \Omega$ with $x \in \partial \Omega$ or $y \in \partial \Omega$.
\item \label{twreg351-3}
The map $z \mapsto k_z$ is holomorphic from $\Sigma(\theta)$ into
$C(\overline \Omega \times \overline \Omega)$.
\item \label{twreg351-4}
For all $\theta' \in (0,\theta)$ there exist $b,c,\omega > 0$ such that
\[
|k_z(x,y)|
\leq c \, |z|^{-d/2} \, e^{\omega |z|} \, e^{- b \frac{|x-y|^2}{|z|}}
\]
for all $z \in \Sigma(\theta')$ and $x,y \in \overline \Omega$.
\end{tabelR}
\end{thm}
\begin{proof}
It follows from (\ref{etwreg103;10}) that $S_z L_2(\Omega) \subset C_0(\Omega)$
for all $z \in \Sigma(\theta)$.
Since the coefficients $b_k$ are real, also the adjoint operator
satisfies the conditions of Theorem~\ref{twreg101}.
Therefore $S_z^* L_2(\Omega) \subset C_0(\Omega)$
for all $z \in \Sigma(\theta)$.
It follows from Proposition~\ref{pwreg350} that for all
$z \in \Sigma(\theta)$ there exists a unique
$k_z \in C(\overline \Omega \times \overline \Omega)$ such that
$(S_z u)(x) = \int_\Omega k_z(x,y) \, u(y) \, dy$
for all $u \in L_2(\Omega)$ and $x \in \overline \Omega$.
Since $S_z u \in C_0(\Omega)$ one deduces that $k_z(x,y) = 0$
for all $z \in \Sigma(\theta)$, $x \in \partial \Omega$ and
$y \in \overline \Omega$.
Considering adjoints the same is valid with $x$ and $y$ interchanged.
If $v,w \in C_0(\Omega)$, then the map
\[
z \mapsto
\langle k_z, v \otimes \overline w
\rangle_{C(\overline \Omega \times \overline \Omega) \times C(\overline \Omega \times \overline \Omega)^*}
= (S_z w, v)_{L_2(\Omega)}
\]
is holomorphic on $\Sigma(\theta)$.
Therefore Statement~\ref{twreg351-3} is a consequence of \cite{ArN} Theorem~3.1.
The Gaussian bounds of Statement~\ref{twreg351-4} follow from
\cite{AE1} Theorem~5.4.
\end{proof}
\subsection*{Acknowledgements}
The second-named author is most grateful for the hospitality extended
to him during a fruitful stay at the University of Ulm.
He wishes to thank the University of Ulm for financial support.
Part of this work is supported by an
NZ-EU IRSES counterpart fund and the Marsden Fund Council from Government funding,
administered by the Royal Society of New Zealand.
Part of this work is supported by the
EU Marie Curie IRSES program, project `AOS', No.~318910.
\section*{Acknowledgments}
Sven Buechel would like to thank his doctoral advisor Udo Hahn, JULIE Lab, for funding his research visit at the University of Pennsylvania.
\section{Introduction}
\label{sec:intro}
Over two decades after the seminal work by \newcite{Picard97} the quest of \textit{Affective Computing}, to ease the interaction with computers by giving them a sense of how emotions shape our perception and behavior, is still far from being fulfilled. Undoubtedly, major progress has been made in NLP, with sentiment analysis being one of the most vivid and productive areas in recent years \cite{Liu15}.
However, the vast majority of contributions have focused on \textit{polarity prediction}, typically only distinguishing between positive and negative feelings or evaluations, usually in social media postings or product reviews \cite{Rosenthal17,Socher13}. Only very recently have researchers started exploring more sophisticated models of human emotion on a larger scale \cite{Wang16acl,Abdul17acl, Mohammad17wassa, Buechel17eacl, Buechel18coling, Buechel18naacl}. Yet such approaches, often rooted in psychological theory, have also turned out to be more challenging with respect to annotation and modeling \cite{Strapparava07}.
Surprisingly, one of the most valuable affective phenomena for improving human-machine interaction has received little attention: \textit{Empathy}. Prior work focused mostly on {\it spoken dialogue}, commonly addressing conversational agents, psychological interventions, or call center applications \cite{Mcquiggan2007,Fung16,Perez-Rosas17,Alam17}.
In contrast, to the best of our knowledge, only three contributions \cite{Xiao12,Gibson15,Khanpour17} previously addressed {\it text-based} empathy prediction\footnote{
Psychological studies commonly distinguish between \textit{state} and \textit{trait} empathy. While the former construct describes the amount of empathy a person experiences as a direct result of encountering a given stimulus, the latter refers to how empathetic one is on average and across situations. This study exclusively addresses \textit{state empathy}. For a contribution addressing \textit{trait empathy} from an NLP perspective, see \newcite{Abdul2017icwsm}.
}
(see Section \ref{sec:modeling} for details). Yet, all of them are limited in three ways: (a) none of their corpora are available, leaving the NLP community without shared data; (b) empathy ratings were provided by people other than the person actually experiencing the empathy, which qualifies only as a weak form of ground truth; and (c) their notion of empathy is quite basic, falling short of current and past theory.
In this contribution we present the first publicly available gold standard for text-based empathy prediction.
It is constructed using a novel annotation methodology which reliably captures empathy assessments via multi-item scales. The corpus as well as our work as a whole is also unique in being---to the best of our knowledge---the first computational approach differentiating \textit{multiple types of empathy}, empathic concern and personal distress, a distinction well recognized throughout psychology and other disciplines.\footnote{Data and code are available at: \url{https://github.com/wwbp/empathic_reactions}}
\section{Corpus Design and Methodology}
\label{sec:corpus}
\textbf{Background. }
Most psychological theories of empathic states are focused on reactions to negative rather than positive events. Empathy for positive events remains less well understood and is thought to be regulated differently \cite{morelli2015emerging}. Thus we focus on empathetic reactions to
need or suffering.
Despite the fact that everyone has an immediate, implicit understanding of empathy, research has been vastly inconsistent in its definition and operationalization \cite{cuff2016empathy}.
%
There is agreement, however, that there are multiple forms of empathy (see below).
%
%
The by far most widely cited state empathy scale
is Batson's Empathic Concern -- Personal Distress Scale \cite{batson1987distress}, henceforth {\it empathy} and {\it distress}.
%
Distress is a self-focused, negative affective state that occurs when one feels upset due to witnessing an entity's suffering or need, potentially via ``catching'' the suffering target's negative emotions.
Empathy is a warm, tender, and compassionate feeling for a suffering target. It is other-focused, retains self-other separation, and is marked by relatively more positive affect
\cite{batson1991evidence,goetz2010compassion,Mikulincer10,sober1997unto}.
\textbf{Selection of News Stories. }
Two research interns (psychology undergraduates), after being briefed on the goal and background of this study, collected a total of 418 articles from popular online news platforms, selecting articles likely to evoke empathic reactions.
These articles were then used to elicit empathic responses in participants.
\textbf{Acquiring Text and Ratings. }
The corpus acquisition was set up as a crowdsourcing task on \texttt{MTurk.com} pointing to a \texttt{Qualtrics.com} questionnaire. The participants completed background measures on demographics and personality, and then proceeded to the main part of the survey where they read a random selection of five of the news articles.
After reading each of the articles, participants were asked to rate their level of empathy and distress before describing their thoughts and feelings about it in writing.
In contrast to previous work, this set-up allowed us to acquire empathy scores from the actual {\it writer} of a text, instead of having to rely on an external evaluation by third parties (often student assistants with a background in computer science). Arguably, our proposed annotation methodology yields more appropriate gold data, yet it also leads to more variance in the relationship between linguistic features and empathic state ratings. That is because each rating reflects a single individual's feelings rather than a more stable average assessment by multiple raters. To account for this, we use {\it multi-item scales}, as is common practice in psychology: participants give ratings for multiple items measuring the same construct (e.g., empathy), which are then averaged to obtain more reliable results.
As far as we know, this is the first time that multi-item scales are used in sentiment analysis.\footnote{ Here, we use \textit{sentiment} as an umbrella term subsuming semantic orientation, emotion, as well as highly related concepts such as empathy.}
In our case, participants used Batson's Empathic Concern -- Personal Distress Scale (see above), i.e., rating 6 items for empathy (e.g., {\it warm, tender, moved}) and 8 items for distress (e.g., {\it troubled, disturbed, alarmed}) using a 7-point scale for each of those (see Appendix for details).
After rating their empathy, participants were asked to share their feelings about the article as they would with a friend in a private message or with a group of friends as a social media post in 300 to 800 characters.
Our final gold standard consists of these {\it messages} combined with the numeric ratings for empathy and distress.
In sum, 403 participants completed the survey. Median completion time was 32 minutes and each participant received 4 USD as compensation.
\textbf{Post-Processing. }
Each message was manually reviewed by the authors. Responses which deviated from the task description (e.g., mere copying from the articles on display) were removed (31 responses, 155 messages), leading to a total of 1860 messages in our final corpus. Gold ratings for empathy and distress were derived by averaging the respective items of the two multi-item scales.
\section{Corpus Analysis}
\label{sec:analysis}
For a first impression of the language of our new gold standard, we provide illustrative examples in Table \ref{tab:examples}. The participant in Example (1) displays higher empathy than distress, (2) displays higher distress than empathy, and (3) shows neither empathic state, but employs sarcasm, colloquialisms and social-media-style acronyms to express lack of emotional response to the article. As can be seen, the language of our corpus is diverse and authentic, featuring many phenomena of natural language which render its computational understanding difficult, thus constituting a sound but challenging gold standard for empathy prediction.
{\bf Token Counts.} We tokenized the 1860 messages using NLTK tools \cite{Bird06}. In total, our corpus amounts to $173,686$ tokens. Individual message length varies between 52 and 198 tokens, the median being 84.
See Appendix for details.
{\bf Rating Distribution. }
\input{figs/scatter_empathy_distress.tex}
Figure \ref{fig:empathy_vs_distress} displays the bivariate distribution of empathy and distress ratings. As can be seen, the two target variables are linearly related, yet show only a moderate Pearson correlation of $r{=}.451$, similar to what was found in prior research \cite{batson1987distress,batson1997empathy}.
This finding supports the view that the two scales capture distinct affective phenomena and underscores the importance of our decision to describe empathic states in terms of {\it multiple} target variables, constituting a clear advancement over previous work.
Both kinds of ratings show good coverage over the full range of the scales.
{\bf Reliability of Ratings. } Since each message is annotated by only one rater, its author, typical measures of inter-rater agreement are not applicable.
Instead, we compute {\it split-half reliability} (SHR), a standard approach in psychology \cite{cronbach1947test} which is also becoming increasingly popular in sentiment analysis \cite{Mohammad17wassa,Buechel18coling}. SHR is computed by randomly splitting the ratings for the individual scale items (e.g., {\it warm}, {\it tender}, etc. for empathy) of all participants into two groups, averaging the individual item ratings for each group and participant, and then measuring the correlation between both groups. This process is repeated 100 times with random splits, before again averaging the results.
Doing so for empathy and distress, we find very high\footnote{
For a comparison against previously reported SHR values for different emotional categories, see \newcite{Mohammad17starsem}.
}
SHR values of $r{=}.875$ and $.924$, respectively.
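The SHR procedure described above can be sketched in a few lines of numpy. This is a toy illustration only: the simulated participants, the six-item scale, and the noise level are invented for the example, not taken from our data.

```python
import numpy as np

def split_half_reliability(ratings, n_iter=100, seed=0):
    """ratings: (n_participants, n_items) array of raw item responses."""
    rng = np.random.default_rng(seed)
    n_items = ratings.shape[1]
    rs = []
    for _ in range(n_iter):
        # randomly split the scale items into two halves
        perm = rng.permutation(n_items)
        half_a = ratings[:, perm[: n_items // 2]].mean(axis=1)
        half_b = ratings[:, perm[n_items // 2:]].mean(axis=1)
        # correlate the per-participant half averages
        rs.append(np.corrcoef(half_a, half_b)[0, 1])
    return float(np.mean(rs))  # average over random splits

# toy data: one latent empathy level per participant plus item noise
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
items = latent + 0.5 * rng.normal(size=(200, 6))  # six empathy items
shr = split_half_reliability(items)
```

With a reliable scale (item noise small relative to the latent signal), the resulting value is close to 1, mirroring the high SHR observed for our corpus.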
\section{Modeling Empathy and Distress}
\label{sec:modeling}
In this section, we provide experimental results for modeling empathy and distress ratings based on the participants' messages (see Section \ref{sec:corpus}). We examine three different types of models, varying in design complexity. Distinct models were trained for empathy and distress prediction.
First, ten percent of our newly created gold standard was randomly sampled to be used in development experiments. Then, the main experiment was conducted using 10-fold cross-validation (CV), providing each model with identical train-test splits to increase reliability. The dev set was excluded from the CV experiment.
Model performance is measured in terms of Pearson correlation $r$ between predicted values and the human gold ratings. Thus, we phrase the prediction of empathy and distress as regression problems.
The input to our models
is based on word embeddings, namely the publicly available FastText embeddings
which were trained on Common Crawl (${\approx}600$B tokens) \cite{Bojanowski17,Mikolov18advances}.
{\bf Ridge. } Our first approach is Ridge regression, an $\ell^2$-regularized version of linear regression. The centroid of the word embeddings of the words in a message is used as features (embedding centroid). The regularization coefficient $\alpha$ is automatically chosen from $\{1, .5, .1, ..., .0001\}$ during training.
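The embedding-centroid features and the closed-form Ridge solution can be sketched as follows. The tiny embedding lexicon and the two "messages" are made up for illustration, and the regularization coefficient is fixed to the smallest value of the grid rather than tuned:

```python
import numpy as np

def embedding_centroid(tokens, emb, dim=3):
    # average the word vectors of in-vocabulary tokens (zero vector if none)
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def ridge_fit(X, y, alpha):
    # closed-form l2-regularized least squares: w = (X'X + alpha I)^{-1} X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# toy 3-dimensional "embeddings" and two messages with gold ratings
emb = {"sad":   np.array([1.0, 0.0, 0.0]),
       "moved": np.array([0.0, 1.0, 0.0]),
       "lol":   np.array([0.0, 0.0, 1.0])}
X = np.stack([embedding_centroid(m.split(), emb)
              for m in ["sad moved", "lol lol"]])
y = np.array([6.0, 1.0])        # gold empathy ratings (invented)
w = ridge_fit(X, y, alpha=1e-4)
pred = X @ w                    # near-perfect fit on this toy data
```

In the actual experiments the centroids are 300-dimensional FastText vectors and $\alpha$ is selected on held-out data.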
{\bf FFN. } Our second approach is a Feed-Forward Net with two hidden layers (256 and 128 units, respectively) with ReLU activation. Again, the embedding centroid is used as features.
{\bf CNN. } The last approach is a Convolutional Neural Net.\footnote{
Recurrent models did not perform well during development due to high sequence length.
}
We use a single convolutional layer with filter sizes 1 to 3, each with 100 output channels, followed by an average pooling layer and a dense layer of 128 units. ReLUs were used for the convolutional and again for the dense layer.
Both deep learning models were trained using the Adam optimizer \cite{Kingma15} with a fixed learning rate of $10^{-3}$ and a batch size of 32. We trained for a maximum of 200 epochs yet applied early stopping if the performance on the validation set did not improve for 20 consecutive epochs. We applied dropout with probabilities of $.2$, $.5$ and $.5$ on input, dense and pooling layers, respectively. Moreover, $\ell^2$ regularization of $.001$ was applied to the weights of conv and dense layers. Word embeddings were not updated during training.
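A minimal numpy sketch of one forward pass through an architecture of this shape (random, untrained weights; the 300-dimensional embeddings and 84-token message length match the setting above, but the weight scaling is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, seq_len, channels = 300, 84, 100

def relu(x):
    return np.maximum(x, 0.0)

def conv1d(X, W, b):
    # X: (seq_len, emb_dim); W: (width, emb_dim, channels)
    width = W.shape[0]
    return np.stack([
        np.tensordot(X[i:i + width], W, axes=([0, 1], [0, 1])) + b
        for i in range(X.shape[0] - width + 1)
    ])  # -> (seq_len - width + 1, channels)

X = rng.normal(size=(seq_len, emb_dim))      # stand-in for word embeddings
pooled = []
for width in (1, 2, 3):                      # filter sizes 1 to 3
    W = 0.01 * rng.normal(size=(width, emb_dim, channels))
    b = np.zeros(channels)
    pooled.append(relu(conv1d(X, W, b)).mean(axis=0))  # average pooling
features = np.concatenate(pooled)            # (300,) pooled feature map
W_dense = 0.01 * rng.normal(size=(300, 128))
hidden = relu(features @ W_dense)            # dense layer, 128 ReLU units
W_out = 0.01 * rng.normal(size=128)
prediction = float(hidden @ W_out)           # scalar empathy/distress score
```

Training (Adam, dropout, early stopping) is omitted; the sketch only shows how the three filter widths are pooled and concatenated before the dense layer.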
\input{tabs/performance.tex}
The results are provided in Table \ref{tab:performance}. As can be seen, all of our models achieve satisfying performance figures ranging between $r{=}.379$ and $.444$, given the assumed difficulty of the task (see Section \ref{sec:analysis}).
On average over the two target variables, the CNN performs best, followed by Ridge and the FFN. While the CNN significantly outperforms the other models in every case, the differences between Ridge and the FFN are not statistically significant for either empathy or distress.\footnote{We use a two-tailed $t$-test for paired samples based on the results of the individual CV runs; $p<.05$.}
The improvements of the CNN over the other two approaches are much more pronounced for distress than for empathy.
Since only the CNN is able to capture semantic effects from composition and word order, our data suggest that these phenomena are more important for predicting distress, whereas lexical features alone already perform quite well for empathy.
{\bf Discussion. }
In comparison to closely related tasks such as emotion prediction \cite{Mohammad17wassa}, our performance figures for empathy and distress prediction are generally lower. However, given the small amount of previous work on the problem at hand, we argue that our results are actually quite strong. This becomes apparent in comparison with emotion analysis, where early work achieved correlation values of around $r{=}.3$ at most \cite{Strapparava07}, yet state-of-the-art performance has doubled over the last decade \cite{Beck17}, in part due to much larger training sets.
Comparison to the limited body of previous work in text-based empathy prediction is difficult for a number of reasons, e.g., differences in domain, evaluation metric, as well as methodology and linguistic level of annotation.
\newcite{Khanpour17} annotate and model empathy in online health communities on the \textit{sentence}-level, whereas the instances in our corpus are much longer and comprise multiple sentences. In contrast to our work, they treat empathy prediction as a classification problem. Their best performing model, a CNN-LSTM, achieves an F-score of .78.
\newcite{Gibson15} predict therapists' empathy in motivational interviews. Each therapy session transcript received one numeric score. Thus, each prediction is based on much more language data than our individual messages comprise.
Their best model achieves a Spearman rank correlation of $.61$ using $n$-gram and psycholinguistic features.
Our contribution goes beyond both of these studies by, first, enriching empathy prediction with personal distress and, second, by annotating and modeling the empathic state actually felt by the writer, instead of relying on external assessments.
\section{Conclusion}
This contribution was the first to attempt empathy prediction in terms of {\it multiple} target variables, empathic concern and personal distress.
We proposed a novel annotation methodology
capturing empathic states actually felt by the author of a statement, instead of relying on third-party assessments.
To ensure high reliability in this single-rating setting, we employ multi-item scales in line with best practices in psychology.
Hereby we create the first publicly available gold standard for empathy prediction in written language, our survey being set up and supervised by an expert psychologist.
Our analysis shows that the data set offers high rating reliability and authentic, diverse language, rich in challenging phenomena such as sarcasm. We provide experimental results for three different predictive models, with our CNN turning out superior.
\subsection*{Details on Stimulus and Instructions}
Before being used in our survey, the selected news articles were categorized by the research interns who gathered them in terms of their intensity of suffering (major or minor), cause of suffering (political, human, nature or other), patient of suffering (humans, animals, environment, or other) and scale of suffering (individual or mass). Research interns also provided a short list of key words for each article. This additional information was gathered to examine the influence of these factors on empathy elicitation and modeling performance in later studies.
At the beginning of the survey participants completed background items covering general demographics (including age, gender, and ethnicity), the most commonly used {\it trait} empathy scale, the Interpersonal Reactivity Index \cite{davis1980interpersonal}, a brief assessment of the Big 5 personality traits \cite{gosling2003very}, life satisfaction \cite{diener1985satisfaction}, as well as a brief measure of generalized trust.
After reading each of the articles, participants rated their level of empathic concern and personal distress using multi-item scales. {\bf Figure \ref{fig:scales}} shows a cropped screenshot of the survey hosted on \texttt{Qualtrics.com}.
The first six items ({\it warm, tender, sympathetic, softhearted, moved}, and {\it compassionate}) refer to empathy. The last eight items ({\it worried, upset, troubled, perturbed, grieved, disturbed, alarmed}, and {\it distressed}) refer to distress.
\input{figs/scales.tex}
After completing the rating items, participants were instructed to describe their reactions in writing as follows:
{\it Now that you have read this article, please write a message to a friend or friends about your feelings and thoughts regarding the article you just read. This could be a private message to a friend or something you would post on social media. Please do not identify your intended friend(s) --- just write your thoughts about the article as if you were communicating with them. Please use between 300 and 800 characters.}
\subsection*{Further Corpus Analyses}
The word clouds in {\bf Figure \ref{fig:wordcloud_empathy}} and {\bf Figure \ref{fig:wordcloud_distress}} show 1-grams of our corpus which correlate significantly (Benjamini-Hochberg corrected $p < .05$) with high empathy and high distress ratings, respectively. In the word clouds, larger size indicates higher correlation and the color scale, gray-blue-red, indicates word frequency, dark red being most prevalent. The Differential Language Analysis Toolkit \cite{schwartz2017dlatk} was utilized for this analysis.
As can be seen, the word clouds display high face-validity, giving further evidence for the soundness of our acquisition methodology.
\input{figs/empathy_pos.tex}
\input{figs/distress_pos.tex}
{\bf Figure \ref{fig:hist_token_counts}} displays the distribution of message lengths in our corpus in tokens. As can be seen, the majority of messages contain between 60 and 100 tokens, yet outliers go up to almost 200. The introduction of a character cap for the writing task proved successful in comparison to a pilot study where this measure had not been in place. In the latter case, the maximum number of tokens was nearly twice as high due to even stronger outliers.
\input{figs/hist_token_count.tex}
\section{Introduction}
It has recently been demonstrated that the Einstein-Rosen bridge of the eternal AdS black hole can be made traversable via a particular double-trace deformation of the boundary CFTs \cite{Gao:2016bin}. Before the deformation, the two boundary theories were non-interacting and placed in a specific entangled state $|{\Psi_{\rm tfd}}\rangle = \sum_E {e^{-{\beta E \over 2}}\over \sqrt{Z}}|E,E\rangle$. The deformation creates shockwaves in the bulk, with negative average null energy, which shrink the horizon of the black hole a little, allowing a particle to traverse the wormhole from one asymptotic region to the other. The deformation can also be formulated as a quantum teleportation protocol between the two CFTs \cite{Gao:2016bin,Maldacena:2017axo}. This setup has provided evidence for the smoothness of the horizon of the eternal black hole and for the ER=EPR proposal \cite{Maldacena:2013xja}.
In this paper we consider a similar experiment on a large class of states with different details in the entanglement between the CFTs. These states are of the form $|\Psi_T \rangle= e^{i H_R T} |{\Psi_{\rm tfd}}\rangle$, where $T$ is a parameter controlling the entanglement.
While these states are as entangled as $|{\Psi_{\rm tfd}}\rangle$,
they are {\it different} quantum states. We argue that the double-trace deformation (and the quantum teleportation protocol) can be modified to apply to each one of the states from this class. This provides evidence
that they all have a smooth horizon.
This simple observation has some interesting implications. First, states of this family with $T>0$ can in principle be used in a lab setup to allow an observer crossing the wormhole to travel far into the future in a finite amount of proper time. During the
trip the observer is mostly in free fall. Second, for states with $T<0$, the bulk observer experiences evolution by finite proper time, while the elapsed time in the lab can become very small. Finally, the fact that we can establish the smoothness of this class of time-shifted states is of some interest for the firewall paradox \cite{Almheiri:2012rt, Almheiri:2013hfa, Marolf:2013dba} and the state-dependent proposal of \cite{Papadodimas:2012aq,Papadodimas:2013b,Papadodimas:2013,Papadodimas:2013kwa,Papadodimas:2015xma,Papadodimas:2015jra}.
We emphasize that the CFT correlators needed to support the claims of this paper are isomorphic to those relevant for \cite{Gao:2016bin, Maldacena:2017axo}. Hence, proving the traversability of the wormhole in the time-shifted states is equivalent to the same proof for the TFD.
This paper is organized as follows: in section 2, we discuss the basic setup used by \cite{Gao:2016bin,Maldacena:2017axo}, which is the basis of the traversable wormhole. In section 3, we argue that time-shifted wormholes can be made traversable in a similar manner. In sections 4 and 5 we discuss this setup from a laboratory and a quantum-teleportation point of view. Finally, in section 6 we discuss some connections to the firewall paradox and state-dependence.
\section{\bf Traversable AdS wormholes}
In \cite{Maldacena:2001kr} it was proposed that two non-interacting copies of the same holographic CFT placed in the ``thermofield double'' (TFD) entangled state
$$
|{\Psi_{\rm tfd}}\rangle = \sum_E {e^{-{\beta E \over 2}} \over \sqrt{Z}} |E\rangle \otimes |E\rangle
$$
are dual to the eternal black hole in AdS. This gravitational background can also be thought of as a wormhole connecting two asymptotic AdS regions. However, in this setup the wormhole is non-traversable, which is important for consistency
given that the boundary CFTs are non-interacting and hence no information can be exchanged between them.
It was realized in \cite{Gao:2016bin} that the wormhole can become traversable by coupling the two CFTs with a double-trace perturbation $e^{i g {\cal O}_L {\cal O}_R}$, which is turned on for a short time around $t=0$. Here ${\cal O}_{L/R}$ is a simple operator in the two corresponding CFTs.
By selecting the sign of $g$ appropriately, the perturbation creates negative null energy shockwaves falling into the black hole from both sides, see figure \ref{fig1}. This shrinks the horizon a little. As a result, an observer who dives from the left CFT at $t=t_{\rm in}<0$ towards the black hole emerges on the right side and reaches close to the right boundary at $t=t_{\rm out}>0$. Here both $|t_{\rm in}|$ and $t_{\rm out}$ are of the order of the scrambling time $\beta \log S$. The details may depend on the theory and the form of the shockwave.
This setup is interesting because it allows us to probe the space-time in the interior of the wormhole purely in terms of 2-sided correlators of standard CFT operators. Directly probing the black hole interior from the CFT is more difficult,
because we first have to define approximately local operators in AdS, which is non-trivial, especially behind the horizon. The setup of \cite{Gao:2016bin} bypasses the need to define these local bulk operators, as it probes the interior indirectly.
The observer is created by a local CFT operator $\phi_L(t_{\rm in})$, the perturbation is generated by CFT operators ${\cal O}_L(0){\cal O}_R(0)$ and the outgoing observer is detected by a local CFT operator $\phi_R(t_{\rm out})$. So the question of what happens to an observer falling through this wormhole can be translated into a computation of correlators of local CFT operators, the analogue of an S-matrix element in AdS. These well-defined (though difficult to compute in practice\footnote{In \cite{Maldacena:2017axo} relevant correlators were computed in the SYK model.}) CFT correlators can in principle provide evidence for the smoothness of the horizon of the eternal black hole and of the proposal \cite{Maldacena:2001kr}.
\begin{figure}[t]
\centering
\includegraphics{BasicShockPenrose}
\caption{A double-trace deformation creates a shockwave which displaces the probe $\phi$, allowing it to escape from the black hole. The coordinates are discontinuous at the shockwave, while the path of the probe is smooth.}
\label{fig1}
\end{figure}
In \cite{Gao:2016bin} it was pointed out that this protocol is related to quantum-teleportation, and in \cite{Maldacena:2017axo} this quantum-teleportation protocol was made more explicit. First the observer is placed in the left CFT at $t=t_{\rm in}$. At $t=0$ the operator ${\cal O}_L$ is measured. Depending on the resulting eigenvalue $o_L$, the unitary $e^{i g o_L {\cal O}_R}$ is applied on the right CFT. Then the quantum state of the system at time $t=t_{\rm out}$ contains the observer emerging from the black hole into the right asymptotic AdS region. See also \cite{Susskind:2017nto}.
\section{Time-shifted wormholes}
In this paper we will work under the assumption that the TFD state can be made traversable for a semi-classical observer, as argued in \cite{Gao:2016bin, Maldacena:2017axo}. Using this as our starting postulate, we point out that there is a large class of other states with similar behaviour. These are states of the form
\begin{equation}
\label{tfdshift}
|\Psi_T \rangle \equiv e^{i H_R T} |{\Psi_{\rm tfd}}\rangle = \sum_E {e^{-{\beta E \over 2}} \over \sqrt{Z}} e^{i E T}|E\rangle \otimes |E\rangle
\end{equation}
It is important to realize that these are different quantum states from $|{\Psi_{\rm tfd}}\rangle$, due to the energy-dependent phases.
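Both statements, that $|\Psi_T\rangle$ carries the same entanglement as the TFD yet is a different state, can be checked numerically in a toy model with a handful of energy levels (the levels and the values of $\beta$, $T$ below are arbitrary):

```python
import numpy as np

E = np.array([0.0, 1.0, 2.5])           # toy energy levels
beta, T = 1.0, 0.7
w = np.exp(-beta * E / 2)
w /= np.linalg.norm(w)                  # amplitudes e^{-beta E/2}/sqrt(Z)

# write |psi> = sum_E c_E |E>_L |E>_R as a matrix M with M[i,j] = c_i delta_ij;
# the reduced density matrix of the right factor is then rho_R = M^dagger M
M_tfd = np.diag(w.astype(complex))
M_T = np.diag(w * np.exp(1j * E * T))   # e^{i H_R T} adds the phases e^{i E T}

rho_tfd = M_tfd.conj().T @ M_tfd
rho_T = M_T.conj().T @ M_T
same_entanglement = np.allclose(rho_tfd, rho_T)     # phases drop out of rho_R
overlap = abs(np.vdot(M_tfd.ravel(), M_T.ravel()))  # |<Psi_tfd|Psi_T>| < 1
```

The reduced density matrix, and hence the entanglement spectrum, is insensitive to the energy-dependent phases, while the overlap with the TFD is strictly below one, confirming that $|\Psi_T\rangle$ is a genuinely different global state.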
The bulk interpretation of these time-shifted states is that they are related to the usual eternal black hole by a large diffeomorphism, see for example \cite{Papadodimas:2015xma}. This is a diffeomorphism which acts as a time translation on the right boundary, but trivially on the left boundary. Since this is a large diffeomorphism (allowed by the boundary conditions), we are not supposed to mod out by it. Instead, it maps a physical state to a different physical state. The states \eqref{tfdshift} can be represented as the usual eternal AdS black hole, but with the wormhole ``anchored'' at different points in time on the two boundaries.
\vskip10pt
\noindent{\bf i) ${\bf T>0}$}
\vskip10pt
We argue that for every choice of $T>0$ the traversable wormhole protocol of \cite{Gao:2016bin} can be implemented: at $t=t_{\rm in}<0$ an observer jumps into the left CFT. At $t=0$ we briefly couple the two CFTs by the operator $e^{i g {\cal O}_L(0) X_R(0)}$, where now $X_R(0)=e^{iH_RT}\mathcal{O}_R e^{-iH_RT}$ is the ``precursor'' of the operator ${\cal O}_R(T)$. Then at time $t=T + t_{\rm out}$ the observer will come out in the right CFT, in exactly the same form as in the original experiment on the state $|{\Psi_{\rm tfd}}\rangle$. See figure \ref{fig2}. Alternatively, we could have used a precursor on the left, i.e. coupling the two CFTs with $e^{i g Y_L(T) \mathcal{O}_R(T)}$ where $Y_L(0)=e^{-iH_LT}\mathcal{O}_L e^{iH_LT}$, or some combination of left and right precursors at time $t$ satisfying $t_{\rm in} < t<T$. The details of how the result of this experiment is isomorphic
to that of the TFD are explained in appendix \ref{AppCTW}.
\begin{figure}[ht]
\centering\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\textwidth]{RetarShockPenrose}
\caption{ }
\label{fig2a}
\end{subfigure}
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\textwidth]{LeftShockPenrose}
\caption{ }
\label{fig2b}
\end{subfigure}
\caption{a) In the time-shifted wormhole, with $T>0$, we need to act with a more complicated operator $X_R$ to receive the probe. b) Similar results can be achieved by using a precursor on the left CFT. Note that the Penrose diagrams can be misleading for precursors, because they may have a more involved bulk interpretation, see for example \cite{Heemskerk:2012mn}. However, the quantum state on the boundary after the end of the experiment can be reliably predicted.}
\label{fig2}
\end{figure}
We emphasize that this statement is {\it exact}, even if $T$ is appreciably large. In other words, provided we accept that the protocol leads to a smooth traversable wormhole for the observer falling into the TFD, then the same can happen for all the other states, without any approximation. By tuning $T$ we can arrange for the observer to emerge significantly later in the future. Moreover, the quantum state of the observer after emerging in the right CFT will be exactly the same --- simply displaced in time. In particular her memories, and the amount of proper time that she will think has elapsed, will be the same\footnote{If we consider a big black hole in AdS whose radius $R_{\rm bh} \sim R_{\rm AdS}$ then the elapsed proper time will also be of order $R_{\rm AdS}$.} and independent of $T$.
\begin{figure}
\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\textwidth]{PerturbShockLPenrose}
\caption{ }
\label{fig3a}
\end{subfigure}
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\textwidth]{PerturbShockPenrose}
\caption{ }
\label{fig3b}
\end{subfigure}
\caption{a) The memory of the probe can be modified by sending an early perturbation from the right. b) The same setup for the thermofield state.}
\label{fig3}
\end{figure}
While we can show that the observer emerges in the right CFT with memories of a smooth crossing of the wormhole, there is a logical possibility that the following scenario took place: during the crossing the observer actually experienced some unpleasant parts --- for example a firewall --- which killed her upon impact. Then the dynamics of the system ``resurrected'' the observer on the right side, in a state with memories corresponding to a smooth crossing. This scenario may sound unnatural, but in some sense it is not so difficult to realize mathematically: for instance, imagine an observer living inside a quantum system, and suppose we act at $t=t_0$ with a unitary $U$ which kills him. At a later time $t_1$ we act with the precursor $e^{-i H (t_1-t_0)} U^{-1} e^{i H (t_1-t_0)}$. Then the quantum state of the system for $t>t_1$ is the same as what it would have been, had we not acted with the first unitary which killed the observer. In this sense the sequence of $U$ at $t_0$ and its inverse precursor at $t=t_1$ kills and resurrects the observer. Moreover, when resurrected the observer has no memories of the fact that he had been killed. Notice that the unitary $U$ and its inverse precursor do not have to be fine-tuned with respect to the initial state of the observer in order to resurrect him.
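The kill-and-resurrect sequence is a simple identity of linear algebra: acting with $U$ at $t_0$, evolving, and acting with the precursor $e^{-iH(t_1-t_0)}\,U^{-1}\,e^{iH(t_1-t_0)}$ at $t_1$ reproduces the freely evolved state exactly. A toy numpy check with a random Hermitian Hamiltonian and a random unitary:

```python
import numpy as np

def evolve(H, t):
    # e^{-i H t} for Hermitian H, via eigendecomposition
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(-1j * vals * t)) @ vecs.conj().T

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                       # toy Hamiltonian
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)                     # observer's state at t0

dt = 1.3                                       # t1 - t0
Uev = evolve(H, dt)
precursor = Uev @ U.conj().T @ Uev.conj().T    # e^{-iH dt} U^{-1} e^{iH dt}

with_kill = precursor @ (Uev @ (U @ psi))      # kill at t0, resurrect at t1
free = Uev @ psi                               # undisturbed evolution
```

The two final states agree to machine precision, for any choice of $U$ and $|\psi\rangle$, illustrating that no fine-tuning to the observer's state is needed.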
In our setup the meaning of this question is whether 2-sided CFT correlators can exclude the possibility that the observer was killed when falling into the black hole from the left and resurrected when emerging in the right CFT.
In order to directly address this question we would have to define local bulk observables which would be able to tell us what really happened in the middle of the bulk spacetime. As an easier alternative, we can send early signals from the right to probe the path of the observer, see figure \ref{fig3}. These signals must be sufficiently weak, to avoid killing the observer or pushing the observer \cite{Shenker:2013yza, Shenker:2014} outside the window in which the coupling between the CFTs allows the extraction of the observer. We can then study if these signals modify the final quantum state of the outgoing observer in the way which is expected from effective field theory. If they do so, then we get additional evidence, though not definitive proof, that
nothing dramatic happened to the observer while crossing. Then this becomes again a statement of CFT correlators, which could in principle be computed.
We can see that these CFT correlators in the time-shifted TFD states are again isomorphic to the same correlators in the TFD state, provided that the signals from the right are sent with the appropriate time-shift. Hence if nothing strange happens to an observer crossing the TFD state as claimed by \cite{Gao:2016bin, Maldacena:2017axo}, then the same will be true for the time-shifted states. See appendix \ref{AppCTW} for some details.
\vskip10pt
\noindent{\bf ii) ${\bf T<0}$}
\vskip10pt
\begin{figure}
\centering
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\textwidth]{AdvanShockPenrose}
\caption{ }
\label{fig4a}
\end{subfigure}
\begin{subfigure}[b]{.48\textwidth}
\includegraphics[width=\textwidth]{PresentShockPenrose}
\caption{ }
\label{fig4b}
\end{subfigure}
\caption{a) In the time-shifted wormhole, with $T < 0$, we can still recover the probe provided that $T$ is not too large. b) The extreme case in which we receive the probe almost immediately.}
\label{fig4}
\end{figure}
We can also consider states with $T<0$, with $|T|<t_{\rm out}$. We can couple the two CFTs with the operator $e^{i g {\cal O}_L(0) X_R(0)}$, where again $X_R$ is the precursor of the operator ${\cal O}_R(T)$. Provided that $|T|<t_{\rm out}$, the observer will emerge in the right CFT at $t=T+t_{\rm out}>0$. Notice that in this setup the total lab time it takes for the observer to cross the wormhole is $T+ |t_{\rm in}| +t_{\rm out}$, which is less than $|t_{\rm in}|+t_{\rm out}$. So the crossing of the observer is accelerated for the lab frame, even though the proper time according to the observer is exactly the same.
Actually, we can shorten the lab time even more as follows. We throw the observer in from the left at $t=t_{\rm in}$ and we couple the two CFTs at $t=t_{\rm in} + \epsilon$ by the operator $e^{i g Y_L X_R}$, where both are precursors. Then the observer comes out at $t= T+t_{\rm out}$. Causality requires that $T+t_{\rm out} > t_{\rm in} + \epsilon$. This means we can push $T$ toward negative values all the way down to $T_{\rm min}= -|t_{\rm in}|-t_{\rm out} + \epsilon$. In this case the observer emerges in the right CFT almost immediately, even though according to her own experience the same (finite) amount of proper time as before has elapsed.
Notice that the full bulk interpretation of these protocols may be complicated due to the use of the precursors, which are complicated non-local operators. On the other hand we can reliably predict the exact quantum state of the observer --- with memories of smooth crossing and finite elapsed proper time --- as she emerges on the right CFT.
\section{Laboratory point of view}
In order to avoid possible confusions regarding the meaning of the time-shift in \eqref{tfdshift}, it is useful to think of the experiment in the following way.
We imagine a laboratory in a universe where gravity does not play an important role. We have two identical CFTs realized in some material in the laboratory. These are supposed to be holographic CFTs dual to gravity in AdS.
There is only one common time, the laboratory time $t$. Each of the CFTs evolve with its own Hamiltonian, but since the CFTs live in the laboratory we identify the CFT time with the lab time $t_L=t_R = t$.
The CFTs are prepared to be in a specific entangled state \eqref{tfdshift}. Hence it is more appropriate to think of the parameter $T$ as a ``dial'' that selects the initial state, rather than a time-shift. Of course preparing two CFTs in the TFD state or in one of its time-shifted cousins would be very difficult in practice, but possible in principle.
The laboratory technician can prepare a protocol where the observer is first injected into the left CFT at $t=t_{\rm in}$. At $t=0$ the lab technician couples the two CFTs by the operator mentioned previously. Then at $t=T+t_{\rm out}$ the observer emerges in the right CFT. From the point of view of the observer only a finite proper time elapses, which is independent of $T$, but from the point of view of the lab the elapsed time is $T + |t_{\rm in}|+t_{\rm out}$, where $T$ can be arbitrarily large. Moreover, throughout this experiment the observer is in free-fall, except for the (mild) interaction with the shockwave. We emphasize that the subjective experience of the observer is independent of the value of $T$ and in particular the strength of the interaction with the shockwave is also independent of $T$.
It is interesting to notice that from the boundary point of view, the quantum information of the observer jumps from the left to the right CFT at $t=0$ when we couple the two CFTs. Then it stays scrambled in the right CFT for a long time, until it emerges in simple form at $t=T+t_{\rm out}$. For instance, suppose that the observer in the left CFT carries a spin which is maximally entangled with some external reference spin. For $t<0$ the purification of the reference spin is in the left CFT. Just after $t=0$ the purification is in the right CFT, but in scrambled form. Eventually, at $t=T+t_{\rm out}$, the purification of the reference spin is in the right CFT in terms of a simple spin carried by the observer.
\section{Quantum teleportation}
The double-trace perturbation introduced in \cite{Gao:2016bin} can be slightly modified to be interpreted as a quantum teleportation protocol. In \cite{Maldacena:2017axo} this was described as follows:
we make use of the fact that anything we do on the left boundary after acting with the double trace perturbation cannot affect the right boundary. For example, we could measure $\mathcal{O}_L$ just after the perturbation. Because $\mathcal{O}_L$ and $e^{ig\mathcal{O}_L\mathcal{O}_R}$ commute it would be equivalent to
measure $\mathcal{O}_L$ just before the perturbation and then perturb by $e^{i g o_L {\cal O}_R}$, where $o_L$ is the eigenvalue measured. Therefore, instead of acting with the double trace perturbation, the lab technician can implement the following protocol. First he releases the probe at $t=t_{\rm in}$ in the left CFT. Then he measures ${\cal O}_L$ at $t=0$ and projects onto one of its eigenstates with resulting eigenvalue $o_L$. Then he
acts with a unitary $ e^{igo_L\mathcal{O}_R}$ at $t=0$ on the right CFT. The right CFT density matrix at the end of the teleportation protocol will be the same as the one in the double trace protocol, while the one on the left boundary will be different.
Notice that in the step of recording $o_L$ and selecting accordingly the unitary on the right we have the transfer of {\it classical} information from left to right, which is a part of a quantum teleportation protocol.
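The equivalence of the two orderings can be made explicit. Writing $P_{o_L}$ for the projector onto the $\mathcal{O}_L$-eigenspace with eigenvalue $o_L$ (a notation introduced here just for illustration), the fact that $\mathcal{O}_L$ commutes with the double-trace perturbation gives
\begin{equation}
P_{o_L}\,e^{ig\mathcal{O}_L\mathcal{O}_R}
=e^{ig\mathcal{O}_L\mathcal{O}_R}\,P_{o_L}
=e^{igo_L\mathcal{O}_R}\,P_{o_L},
\end{equation}
where in the last step $\mathcal{O}_L$ was replaced by its eigenvalue on the image of $P_{o_L}$. So measuring after the perturbation is equivalent to measuring first and then applying the simple unitary $e^{igo_L\mathcal{O}_R}$.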
The quantum teleportation protocol can be immediately realized for the time-shifted states: the lab technician first measures ${\cal O}_L$ at time $t=0$. Using the resulting eigenvalue $o_L$ he applies at $t=0$ the unitary $U=e^{i go_L X_R}$ on the right CFT. Here $X_R(t=0)$ is the complicated precursor corresponding to the simple operator ${\cal O}_R(t=T)$. Finally
at time $t=T+t_{\rm out}$ the density matrix of the right CFT will be the same as in the experiment on the TFD at time $t=t_{\rm out}$. This protocol is possible in principle, but it requires the use of the complicated operator $X_R$.
In the case that $T>0$ the lab technician can avoid having to use a complicated precursor, by performing an alternative ``time-delayed quantum teleportation protocol''.
First he releases the probe in the left CFT at $t=t_{\rm in}$. Then he projects onto an eigenstate of $\mathcal{O}_L$ at $t=0$, recording the eigenvalue $o_L$. Then he waits until $t=T$ and acts with the simple unitary $U=e^{igo_L\mathcal{O}_R}$ at $t=T$. Finally he considers the right CFT density matrix at $t=t_{\rm out}+T$.
This protocol has the advantage that we do not have to use complicated precursors. We notice that we cannot use this protocol when $T<0$, as we would need to apply a unitary before the measurement of ${\cal O}_L$.
\section{Comments on state-dependence and the firewall}
\label{statedep}
The firewall paradox can be understood in its most precise formulation in terms of {\it typical} pure states of a 1-sided black hole in AdS. The argument starts by assuming that typical pure states have a smooth interior. It is then assumed that there
should exist some fixed linear operators acting on the Hilbert space, which correspond to local semiclassical observables behind the horizon. It is then shown that, according to bulk effective field theory, these observables would have to obey an algebra which is inconsistent with the density of states in the CFT \cite{Almheiri:2013hfa, Marolf:2013dba}. For this argument to work, it is important that we demand a smooth interior for a large class of states, i.e. for typical states. If we only look at a small number of states, then the paradox becomes less
sharp. In \cite{Papadodimas:2012aq,Papadodimas:2013b,Papadodimas:2013,Papadodimas:2013kwa,Papadodimas:2015xma,Papadodimas:2015jra} it was proposed that the paradox in its strongest form, i.e. for typical states, can be resolved by allowing the interior operators to depend on the state.
The smoothness of the TFD state, as demonstrated by \cite{Gao:2016bin, Maldacena:2017axo}, does not disprove the firewall, as the TFD is one particular state, while the firewall paradox becomes relevant when we consider {\it many} states. However, in \cite{Papadodimas:2015xma} it was pointed out that a version of the firewall paradox can also be formulated if we consider the entire family of time-shifted TFD states $e^{i H_R T}|{\Psi_{\rm tfd}}\rangle$ for all $T\in {\mathbb R}$. It was shown in \cite{Papadodimas:2015xma} that if we demand smoothness for all of these states, then we run into a firewall-like paradox, unless we accept that the interior operators are state-dependent.
The argument of \cite{Papadodimas:2015xma} was based on the assumption of smoothness for all time-shifted TFD states. This seems very plausible from the bulk point of view, since they are all related to the TFD by a large diffeomorphism. However, it would be more satisfying if there were more direct evidence for the smoothness of the time-shifted TFD states.
\begin{figure}[t]
\centering
\includegraphics{LocalShortShockPenrose}
\caption{Local operators at points $P,Q$ are state-dependent.}
\label{fig5}
\end{figure}
In this paper we argued that by applying the teleportation protocol \cite{Gao:2016bin, Maldacena:2017axo} to the time-shifted states for all $T\in {\mathbb R}^+$, we find that all of them have a smooth interior. This disproves the firewall
within a class of states where one would naively expect some firewall-like behavior\footnote{The argument of \cite{Papadodimas:2015xma} leads to a firewall-like paradox, even if we restrict to the family of states with $T>0$. To formulate this paradox we need to be able to take $T$ up to a time scale of order $e^S$.}. A natural explanation is that the interior operators in these
states are indeed state-dependent.
The class of time-shifted TFD states, together with the perturbation which allows the particle to escape the horizon, raises an interesting aspect of state-dependence for observables {\it outside the horizon}. Consider a local bulk operator at point $P$ in figure \ref{fig5}. According to the infalling observer,
this point is reached by diving in from the left CFT at $t_{\rm in}$ and freely-falling for a fixed amount of proper-time. For the infaller this relational prescription of the point $P$ is the same for all states, independent of $T$. However, the measurement of the operator at $P$ takes place at laboratory time $t=T+t_{\rm out}$. So this local operator at $P$ can be represented as {\it the same} operator in the Schr{\"o}dinger picture, however --- depending on the microstate --- it is applied by the infalling observer on the Schr{\"o}dinger-picture Hilbert space corresponding to a different time.
We notice that the same property holds for local operators inside the horizon for this class of states, for example for a local operator at point $Q$. It is interesting to understand how this happens from the point of view of the infalling semiclassical observer, i.e. how does she naturally identify the correct moment in time where the operators have to be applied.
\section{Discussion}
We investigated an extension of the traversable-wormhole protocol of \cite{Gao:2016bin, Maldacena:2017axo}, which has interesting physical interpretations. We argued that the use of a larger class of entangled states, the time-shifted thermofield states, can lead to experiments involving time-travel in the lab.
General relativity allows time-travel to the future: by hovering near the horizon for a while and then flying away, an observer can travel to the future. However, this method of moving far into the future is not very pleasant, as it requires large proper accelerations. In this paper we described a more comfortable time-machine based on quantum entanglement. From the point of view of the observer the experience is pleasant, even if the desired time-difference is large.
We notice that when the time shift $T$ becomes of the order of, or larger than, the Poincar\'e recurrence time, the physical interpretation of the process must be done more carefully, since the observer may come out earlier than $T+t_{\rm out}$, in the ``previous Poincar\'e recurrence''.
We also argued that for certain states with $T<0$ we can retrieve the observer almost immediately. One might worry that we can create a very fast computer by sending a computer through the wormhole, while there are fundamental bounds on computation speeds \cite{Lloyd:2000}. However, the CFTs creating the wormhole should be included as part of the computer, which will presumably respect the bounds.
It would be interesting to investigate the traversable wormhole protocol for more general entangled states of two CFTs. One particular class of such states would be superpositions of time-shifted thermofield states. Finally it would be interesting to investigate the possibility of traversing a single-sided black hole. In the case of the SYK model this was discussed in \cite{Kourkoulou:2017zaj}. Another class of candidate states in general holographic CFTs, which could be used as a starting point was proposed in \cite{Papadodimas:2017qit}.
\begin{acknowledgments}
We would like to thank C. Bachas, J. de Boer, A. Gnecchi, M. Guica, D. Jafferis, A. Puhm, S. Raju, A. Stergiou, E. Verlinde for discussions and comments. R.v.B. was supported by NCCR SwissMAP of the Swiss National Science Foundation. K.P. would like to thank ENS, Paris and IHES for hospitality during completion of this work and the Royal Netherlands Academy of Sciences (KNAW).
\end{acknowledgments}
% quant-ph/0703070
\section{Introduction}
From the beginning of quantum field theory to the present day there has been
the hope that the occurrence of infinities in its formalism results from an
incomplete description of space-time at very small distances \cite{Schw}.
Bohr and Heisenberg were the first to suggest that
quantum field theories should be formulated on a space-time lattice, since it
would imply the existence of a smallest length \cite{Heis}. One of the
earliest and most serious attempts towards this goal was the concept of
'quantized space-time' due to Snyder \cite{Sny47, Yan47}. Of course there were
many other researchers who took up this idea over and over again (see for
example Refs. \cite{Fli48, Hill55, Das60, Gol63}) and each of the different
approaches has its own difficulties and advantages. However, with the arrival
of quantum groups and quantum spaces \cite{Ku83, Wor87, Dri85, Jim85, Drin86,
RFT90, Tak90, Man88, Maj95, ChDe96, KS97, Maj93-Int} in the eighties of the
last century a new, very promising method to discretize space and time seems
to be available \cite{FLW96, CW98}. The observation that it leads to very
realistic deformations of classical space-time symmetries nourishes the hope
for a new, powerful regularization scheme \cite{GKP96, MajReg, Oec99}.
In this paper attention is focused on q-deformed quantum spaces \cite{CSSW90,
PW90, SWZ91, Maj91, LWW97, Maj94-10}. (For other deformations of space-time
see Refs. \cite{Lu92, Cas93, Dob94, DFR95, ChDe95, ChKu04, Koch04}.)
Concretely, we deal with the so-called 'braided line' and the
three-dimensional q-deformed Euclidean space. The braided line can be viewed
as a deformation of the set of real numbers, while the three-dimensional
q-deformed Euclidean space is nothing other than a non-commutative version of
the classical three-dimensional Euclidean space. Essential for us is the
fact that on both spaces differential calculi exist which are recognized as
q-analogs of classical translational symmetry \cite{WZ91, CSW91, Song92,
OSWZ92, Maj93-2}. The aim of our paper is to show that this algebraic
framework is suitable to formulate a q-deformed version of non-relativistic
Schr\"{o}dinger theory.
Towards this end we first extend the coordinate algebras of braided line and
three-dimensional q-deformed Euclidean space by a time element in such a way
that it perfectly fits into the existing algebraic structures. Section
\ref{AlgSet} is devoted to this task. Our considerations will show us that the
time element is central in the corresponding space-time algebras and is
decoupled from space coordinates. In Sec.\thinspace\ref{EleqAn}\ these results
are combined with those of our previous work in Refs. \cite{WW01, BW01, Wac02,
Wac03, Wac04, Wac05, qAn} to give a q-deformed version of analysis to the
extended quantum spaces of braided line and three-dimensional q-deformed
Euclidean space. In doing so, we present expressions for calculating products,
partial derivatives, integrals, exponentials, translations, and braided
products on the quantum spaces under consideration. We will see that due to
its special algebraic properties the time element behaves like a commutative
coordinate and the expressions for its derivatives, integrals, and so on are
given by the classical ones. For the space coordinates, however, the situation
is somewhat different, since their derivatives, integrals, and so on lead to
one- and three-dimensional versions of the well-known q-calculus \cite{Kac00}.
In this manner we obtain space-time structures in which space is discretized
but time is still continuous.
Our results so far will then be used to introduce time evolution operators in
a consistent way. Towards this end we start in Sec.\thinspace\ref{SecTimEvo}
from q-deformed Taylor rules for the quantum spaces under consideration.
Exploiting the algebraic properties of space and momentum variables we finally
regain the well-known expressions for the time evolution operator. In
Sec.\thinspace\ref{SHPic} this result will enable us to formulate q-deformed
versions of the Schr\"{o}dinger and the Heisenberg picture in considerable
analogy to the undeformed case. Finally, Sec.\thinspace\ref{SecCon} closes the
considerations so far with a conclusion before we take them up in part II
of this paper.
\section{Algebraic set-up\label{AlgSet}}
This section is devoted to the algebras we are dealing with throughout the paper,
i.e. the braided line and the q-deformed Euclidean space in three dimensions.
It is our aim to enlarge both algebras by a time element and to derive
commutation relations for the generators of the extended algebras.
\subsection{Extended braided line \label{ExtBraAlg}}
Let us recall that the algebraic properties of quantum groups and quantum
spaces are completely encoded in their R-matrices, which we require to
satisfy the quantum Yang-Baxter equation \cite{Maj95, ChDe96, KS97}:%
\begin{equation}
\hat{R}_{12}\hat{R}_{23}\hat{R}_{12}=\hat{R}_{23}\hat{R}_{12}\hat{R}_{23}.
\end{equation}
If we want to extend the algebra of the braided line by a time element with
trivial braiding the R-matrix for this space should take the form%
\begin{equation}
\hat{R}_{kl}^{ij}=%
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & q
\end{pmatrix}
, \label{ExtRma}%
\end{equation}
where $q>1$ and $i,j,k,l\in\{0,1\}.$ Notice that rows and columns of the above
matrix are arranged in the order $00,$ $01,$ $10,$ and $11.$ We take the
convention that the indices $0$ and $1$ respectively correspond to a time and
a space coordinate. One can check that the above matrix indeed gives a
solution to the Yang-Baxter equation (I am very grateful to Alexander Schmidt
for doing this calculation with Mathematica.). Thus, we refer to it as the
R-matrix of the so-called \textit{extended braided line}.
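Since this is a finite matrix identity, it can also be verified with a few lines of code. The following sketch (an illustration added here, with the arbitrary numerical choice $q=2$) checks the braid form of the Yang-Baxter equation for the matrix (\ref{ExtRma}):

```python
import numpy as np

q = 2.0  # arbitrary numerical choice with q > 1

# R-matrix of the extended braided line; rows/columns ordered 00, 01, 10, 11
R = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, q]])

I2 = np.eye(2)
R12 = np.kron(R, I2)  # R acting on the first pair of tensor factors
R23 = np.kron(I2, R)  # R acting on the second pair of tensor factors

# Braid form of the quantum Yang-Baxter equation
assert np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23)
```

The same check goes through for any value of $q$, in agreement with the statement in the text.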
Next, we would like to find the commutation relations for the generators of
the extended braided line. Towards this end we first determine the eigenvalues
of the new R-matrix in (\ref{ExtRma}). They take on the values $1,$ $-1,$ and
$q.$ The projectors onto the corresponding eigenspaces are then given by%
\begin{gather}
P_{-}=\frac{(\hat{R}-\text{Id})(\hat{R}-q\text{Id})}{2(1+q)}=%
\begin{pmatrix}
0 & 0 & 0 & 0\\
0 & 1/2 & -1/2 & 0\\
0 & -1/2 & 1/2 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
,\\
P_{+}=\frac{(\hat{R}+\text{Id})(\hat{R}-q\text{Id})}{2(1-q)}=%
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1/2 & 1/2 & 0\\
0 & 1/2 & 1/2 & 0\\
0 & 0 & 0 & 0
\end{pmatrix}
,\\
P_{0}=\frac{(\hat{R}+\text{Id})(\hat{R}-\text{Id})}{(q+1)(q-1)}=%
\begin{pmatrix}
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 1
\end{pmatrix}
.
\end{gather}
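As a numerical sanity check (an illustration added here, again with $q=2$), one can verify that these expressions are orthogonal idempotents that resolve the identity and reproduce $\hat{R}$ as a spectral sum. To stay clear of sign conventions in the subscripts, the projectors below are named by their eigenvalue:

```python
import numpy as np

q = 2.0
Id = np.eye(4)
R = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, q]])

# Spectral projectors onto the eigenspaces of R, labeled by eigenvalue
P_m1 = (R - Id) @ (R - q * Id) / (2 * (1 + q))        # eigenvalue -1
P_p1 = (R + Id) @ (R - q * Id) / (2 * (1 - q))        # eigenvalue +1
P_q  = (R + Id) @ (R - Id) / ((q + 1) * (q - 1))      # eigenvalue  q

for P in (P_m1, P_p1, P_q):
    assert np.allclose(P @ P, P)                      # idempotent
assert np.allclose(P_m1 @ P_p1, np.zeros((4, 4)))     # mutually orthogonal
assert np.allclose(P_m1 + P_p1 + P_q, Id)             # resolution of identity
assert np.allclose(R, -P_m1 + P_p1 + q * P_q)         # spectral decomposition
```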
The projector $P_{-}$ can be recognized as a q-analog of an antisymmetrizer. For
this reason, it determines the commutation relations for the extended braided
line, i.e.%
\begin{equation}
(P_{-})_{kl}^{ij}\,X^{k}X^{l}=0\quad\Rightarrow\quad X^{0}X^{1}=X^{1}X^{0},
\label{QuaSpaRel1dim}%
\end{equation}
where summation over repeated indices is understood. The other two projectors
lead us to the commutation relations of the exterior algebra of the braided
line, since the differentials have to fulfill%
\begin{equation}
(P_{+})_{kl}^{ij}\,dX^{k}dX^{l}=0,\quad(P_{0})_{kl}^{ij}\,dX^{k}dX^{l}=0,
\end{equation}
or, more concretely,%
\begin{equation}
dX^{0}dX^{0}=0,\quad dX^{1}dX^{1}=0,\quad dX^{0}dX^{1}=-dX^{1}dX^{0}.
\label{DiffBrai}%
\end{equation}
Let us note that the commutation relations of the differentials can
alternatively be written as%
\begin{equation}
dX^{i}dX^{j}=(P_{-})_{kl}^{ij}\,dX^{k}dX^{l}=-\hat{R}_{kl}^{ij}\,dX^{k}dX^{l},
\end{equation}
which implies%
\begin{equation}
X^{i}dX^{j}=\hat{R}_{kl}^{ij}\,dX^{k}X^{l}. \label{VerDiffKoor}%
\end{equation}
As next step we introduce partial derivatives by
\begin{equation}
d=dX^{i}\partial_{i}, \label{DefPartDer}%
\end{equation}
where the exterior derivative $d$ obeys the usual properties of nilpotency and
Leibniz rule, i.e.
\begin{align}
d^{2} & =0,\nonumber\\
d(fg) & =(df)g+f(dg). \label{ExtDerivN}%
\end{align}
From (\ref{ExtDerivN}) together with (\ref{VerDiffKoor}) it can be shown that
we have as Leibniz rules%
\begin{equation}
\partial_{i}X^{j}=\delta_{i}^{j}+\hat{R}_{il}^{jk}X^{l}\partial_{k},
\label{DifCalExt}%
\end{equation}
or, more explicitly,%
\begin{align}
\partial_{0}X^{0} & =1+X^{0}\partial_{0},\nonumber\\
\partial_{0}X^{1} & =X^{1}\partial_{0},\label{Diff1dimUn1}\\[0.1in]
\partial_{1}X^{0} & =X^{0}\partial_{1},\nonumber\\
\partial_{1}X^{1} & =1+qX^{1}\partial_{1}. \label{Diff1dimUn2}%
\end{align}
For the sake of completeness, it should be noted that partial derivatives
satisfy the same commutation relations as quantum space coordinates, i.e.%
\begin{equation}
\partial_{0}\partial_{1}=\partial_{1}\partial_{0}.
\end{equation}
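The relation $\partial_{1}X^{1}=1+qX^{1}\partial_{1}$ is precisely what generates the one-dimensional q-calculus: applied repeatedly to a monomial $(X^{1})^{n}$ it gives $\partial_{1}(X^{1})^{n}=[n]_{q}\,(X^{1})^{n-1}$ with the q-number $[n]_{q}=1+q+\dots+q^{n-1}$. A minimal numerical sketch of this recursion (an illustration added here):

```python
def q_derivative_coeff(n, q):
    """Coefficient c_n in d(x^n) = c_n x^(n-1), obtained by pulling the
    derivative through one factor of x at a time via d x = 1 + q x d,
    which yields the recursion c_n = 1 + q * c_(n-1) with c_0 = 0."""
    c = 0
    for _ in range(n):
        c = 1 + q * c
    return c

def q_number(n, q):
    """Jackson q-number [n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q**k for k in range(n))

q = 2
for n in range(8):
    assert q_derivative_coeff(n, q) == q_number(n, q)
```

For $q\to 1$ the coefficient reduces to the classical $n$, as expected.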
Now, we would like to enrich the algebraic structure by adding a conjugation.
A consistent choice is given by%
\begin{equation}
\overline{X^{0}}=X^{0},\quad\overline{X^{1}}=X^{1},
\end{equation}
and%
\begin{equation}
\overline{\partial_{0}}=-\partial_{0},\quad\overline{\partial_{1}}%
=-\partial_{1}.
\end{equation}
Applying this conjugation to the relations in (\ref{Diff1dimUn1}) and
(\ref{Diff1dimUn2}) yields a second differential calculus. Its relations read%
\begin{align}
\hat{\partial}_{0}X^{0} & =1+X^{0}\hat{\partial}_{0},\nonumber\\
\hat{\partial}_{0}X^{1} & =X^{1}\hat{\partial}_{0}, \label{Diff1dim1}%
\\[0.1in]
\hat{\partial}_{1}X^{0} & =X^{0}\hat{\partial}_{1},\nonumber\\
\hat{\partial}_{1}X^{1} & =1+q^{-1}X^{1}\hat{\partial}_{1},
\label{Diff1dim2}%
\end{align}
where $\hat{\partial}_{0}\equiv\partial_{0}$ and $\hat{\partial}_{1}\equiv
q\partial_{1}.$ In analogy to (\ref{DifCalExt}) we have
\begin{equation}
\hat{\partial}_{i}X^{j}=\delta_{i}^{j}+(\hat{R}^{-1})_{il}^{jk}X^{l}%
\hat{\partial}_{k}.
\end{equation}
Last but not least, we would like to say a few words about the quantum group
coacting on the extended braided line. If we require for the coaction
\begin{equation}
\beta(X^{i})=T_{j}^{i}\otimes X^{j},
\end{equation}
to be compatible with the algebraic structure of the extended braided line the
quantum group generators have to be subject to the relations%
\begin{equation}
\hat{R}_{kl}^{ij}\,T_{m}^{k}T_{n}^{l}=T_{k}^{i}T_{l}^{j}\hat{R}_{mn}^{kl},
\end{equation}
from which it follows that%
\begin{equation}
T_{j}^{i}=%
\begin{pmatrix}
a & 0\\
0 & b
\end{pmatrix}
\qquad\text{with }ab=ba.
\end{equation}
If we have $\overline{a}=a$ and $\overline{b}=b$ the extended braided line
even becomes a comodule-$\ast$-algebra.
\subsection{Extended three-dimensional q-deformed Euclidean space}
As in the previous subsection we start our considerations from the R-matrix.
The R-matrix for the three-dimensional q-deformed Euclidean space extended by
a time element is of block-diagonal form. Its building-blocks read
\cite{LSW94}%
\begin{gather}%
\begin{tabular}
[c]{ccc}
& $++$ & $--$\\\cline{2-3}%
$++$ & \multicolumn{1}{|c}{$1$} & $0$\\
$--$ & \multicolumn{1}{|c}{$0$} & $1$%
\end{tabular}
\ ,\\[0.1in]%
\begin{tabular}
[c]{ccc}
& $+3$ & $3+$\\\cline{2-3}%
$+3$ & \multicolumn{1}{|c}{$0$} & $q^{-2}$\\
$3+$ & \multicolumn{1}{|c}{$q^{-2}$} & $q^{-2}\lambda\lambda_{+}$%
\end{tabular}
\ ,\\[0.1in]%
\begin{tabular}
[c]{ccc}
& $3-$ & $-3$\\\cline{2-3}%
$3-$ & \multicolumn{1}{|c}{$0$} & $q^{-2}$\\
$-3$ & \multicolumn{1}{|c}{$q^{-2}$} & $q^{-2}\lambda\lambda_{+}$%
\end{tabular}
\ ,\\[0.1in]%
\begin{tabular}
[c]{llll}
& $+-$ & $33$ & $-+$\\\cline{2-4}%
$+-$ & \multicolumn{1}{|l}{$0$} & $0$ & $q^{-4}$\\
$33$ & \multicolumn{1}{|l}{$0$} & $q^{-2}$ & $q^{-3}\lambda\lambda_{+}$\\
$-+$ & \multicolumn{1}{|l}{$q^{-4}$} & $q^{-3}\lambda\lambda_{+}$ &
$q^{-3}\lambda^{2}\lambda_{+}$%
\end{tabular}
\ ,\\[0.1in]%
\begin{tabular}
[c]{llllllll}
& $00$ & $0+$ & $03$ & $0-$ & $+0$ & $30$ & $-0$\\\cline{2-8}%
$00$ & \multicolumn{1}{|l}{$1$} & $0$ & $0$ & $0$ & $1$ & $0$ & $0$\\
$0+$ & \multicolumn{1}{|l}{$0$} & $0$ & $0$ & $0$ & $0$ & $1$ & $0$\\
$03$ & \multicolumn{1}{|l}{$0$} & $0$ & $0$ & $0$ & $0$ & $0$ & $1$\\
$0-$ & \multicolumn{1}{|l}{$0$} & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\\
$+0$ & \multicolumn{1}{|l}{$0$} & $1$ & $0$ & $0$ & $0$ & $0$ & $0$\\
$30$ & \multicolumn{1}{|l}{$0$} & $0$ & $1$ & $0$ & $0$ & $0$ & $0$\\
$-0$ & \multicolumn{1}{|l}{$0$} & $0$ & $0$ & $1$ & $0$ & $0$ & $0$%
\end{tabular}
\ .
\end{gather}
Notice that space coordinates are labeled by $+,3,$ or $-$, while the index
$0$ refers to the time element.
To get commutation relations for the coordinates in space and time we need to
know the eigenvalues of the above R-matrix. They are given by $1,$ $-q^{-4},$
$q^{-6},$ and $-1.$ Thus, the projectors onto the corresponding irreducible
subspaces can be calculated from the identities%
\begin{align}
P_{+} & =\frac{(\hat{R}+q^{-4}\text{Id})(\hat{R}-q^{-6}\text{Id})(\hat
{R}+\text{Id})}{2(1+q^{-4})(1-q^{-6})},\\[0.1in]
P_{-} & =\frac{(\hat{R}-\text{Id})(\hat{R}-q^{-6}\text{Id})(\hat
{R}+\text{Id})}{(1+q^{-4})(q^{-4}+q^{-6})(1-q^{-4})},\\[0.1in]
P_{0} & =\frac{(\hat{R}-\text{Id})(\hat{R}+q^{-4}\text{Id})(\hat
{R}+\text{Id})}{(q^{-6}-1)(q^{-6}+q^{-4})(q^{-6}+1)},\\[0.1in]
P^{\prime} & =\frac{(\hat{R}-\text{Id})(\hat{R}+q^{-4}\text{Id})(\hat
{R}-q^{-6}\text{Id})}{2(q^{-4}-1)(1+q^{-6})}.
\end{align}
The projectors $P_{-}$ and $P^{\prime}$ lead us to the defining\ relations of
the extended three-dimensional q-deformed Euclidean space:%
\begin{equation}
(P_{-})_{kl}^{ij}\,X^{k}X^{l}=0,\qquad(P^{\prime})_{kl}^{ij}\,X^{k}X^{l}=0.
\end{equation}
Written out explicitly, these relations become
\begin{gather}
X^{0}X^{+}=X^{+}X^{0},\quad X^{0}X^{-}=X^{-}X^{0},\quad X^{0}X^{3}=X^{3}%
X^{0},\nonumber\\
X^{-}X^{3}=q^{2}X^{3}X^{-},\quad X^{3}X^{+}=q^{2}X^{+}X^{3},\nonumber\\
X^{-}X^{+}-X^{+}X^{-}=\lambda X^{3}X^{3},
\end{gather}
where $\lambda=q-q^{-1}.$ It should be mentioned that under transformations of
the quantum group $SU_{q}(2)$ the quantum space coordinates $X^{+},$ $X^{3},$
and $X^{-}$ behave like components of a three-vector, while the time
coordinate $X^{0}$ transforms like a scalar. This situation is in complete
analogy to the undeformed case.
The exterior algebra to the extended q-deformed Euclidean space in three
dimensions is defined by the relations%
\begin{equation}
(P_{+})_{kl}^{ij}\,dX^{k}dX^{l}=0,\qquad(P_{0})_{kl}^{ij}\,dX^{k}dX^{l}=0.
\label{ExtAlg3dim}%
\end{equation}
From these relations we obtain%
\begin{align}
dX^{A}dX^{B} & =-q^{4}(\hat{R}_{SO_{q}(3)})_{CD}^{AB}\,dX^{C}dX^{D}%
\nonumber\\
& =-q^{-4}(\hat{R}_{SO_{q}(3)}^{-1})_{CD}^{AB}\,dX^{C}dX^{D},
\end{align}
and%
\begin{equation}
dX^{0}dX^{0}=0,\quad dX^{0}dX^{A}=-dX^{A}dX^{0},
\end{equation}
where $A,B\in\{+,3,-\}$. Notice that $\hat{R}_{SO_{q}(3)}$ stands for the
R-matrix of the three-dimensional q-deformed Euclidean space without a time
element. Furthermore, we take the convention that capital letters like $A,B,$
etc. run through $+,3,-$.
The commutation relations between differentials of coordinates require that
the braiding between quantum space coordinates and their differentials takes
the form%
\begin{equation}
X^{A}dX^{B}=q^{4}(\hat{R}_{SO_{q}(3)})_{CD}^{AB}\,dX^{C}X^{D},
\label{BraidXDif1}%
\end{equation}
or, alternatively,%
\begin{equation}
X^{A}dX^{B}=q^{-4}(\hat{R}_{SO_{q}(3)}^{-1})_{CD}^{AB}\,dX^{C}X^{D}.
\label{BraidXDif2}%
\end{equation}
If the time element is involved we additionally have%
\begin{equation}
X^{0}dX^{A}=dX^{A}X^{0},\quad X^{A}dX^{0}=dX^{0}X^{A},\quad X^{0}%
dX^{0}=dX^{0}X^{0}. \label{BraidX0}%
\end{equation}
We recommend Ref. \cite{MSW04} if the reader is unfamiliar with the reasoning
leading to the above relations.
Now, we have everything together to introduce partial derivatives. We start
from the Leibniz rules%
\begin{align}
dX^{A}. & =(dX^{A}).+X^{A}d.,\nonumber\\
dX^{0}. & =(dX^{0}).+X^{0}d., \label{LeibExtDer}%
\end{align}
where the dot serves as a placeholder for the element being acted upon. Then we substitute the
exterior derivative by $d=dX^{i}\partial_{i}$ and obtain%
\begin{align}
dX^{i}\partial_{i}X^{A} & =dX^{A}+X^{A}dX^{i}\partial_{i},\nonumber\\
dX^{i}\partial_{i}X^{0} & =dX^{0}+X^{0}dX^{i}\partial_{i}.
\end{align}
Using relations (\ref{BraidXDif1}) and (\ref{BraidX0}) we are able to switch
all differentials to the far left:%
\begin{align}
dX^{i}\partial_{i}X^{A} & =dX^{A}+dX^{0}X^{A}\partial_{0}+q^{4}(\hat
{R}_{SO_{q}(3)})_{CD}^{AB}\,dX^{C}X^{D}\partial_{B},\nonumber\\
dX^{i}\partial_{i}X^{0} & =dX^{0}+dX^{i}X^{0}\partial_{i}.
\end{align}
Since the differentials $dX^{i},$ $i\in\{+,3,0,-\},$ are linearly independent,
it follows from the above identities that%
\begin{align}
\partial_{B}X^{A} & =\delta_{B}^{A}+q^{4}(\hat{R}_{SO_{q}(3)})_{CD}%
^{AB}\,X^{C}\partial_{D},\nonumber\\
\partial_{0}X^{A} & =X^{A}\partial_{0},
\end{align}
and%
\begin{equation}
\partial_{0}X^{0}=1+X^{0}\partial_{0},\quad\partial_{A}X^{0}=X^{0}\partial
_{A}.
\end{equation}
If we apply (\ref{BraidXDif2}) instead of (\ref{BraidXDif1}) in the above
derivation, we arrive at a second differential calculus with relations%
\begin{align}
\hat{\partial}_{B}X^{A} & =\delta_{B}^{A}+q^{-4}(\hat{R}_{SO_{q}(3)}%
^{-1})_{CD}^{AB}\,X^{C}\hat{\partial}_{D},\nonumber\\
\hat{\partial}_{0}X^{A} & =X^{A}\hat{\partial}_{0},
\end{align}
and%
\begin{equation}
\hat{\partial}_{0}X^{0}=1+X^{0}\hat{\partial}_{0},\quad\hat{\partial}_{A}%
X^{0}=X^{0}\hat{\partial}_{A},
\end{equation}
where $\hat{\partial}_{A}\equiv q^{6}\partial_{A}$ for $A\in\{+,3,-\}$ and
$\hat{\partial}_{0}\equiv\partial_{0}.$ For the sake of completeness let us
note that partial derivatives now obey the same commutation relations\ as
covariant quantum space coordinates, i.e.%
\begin{gather}
\partial_{0}\partial_{+}=\partial_{+}\partial_{0},\quad\partial_{0}%
\partial_{-}=\partial_{-}\partial_{0},\quad\partial_{0}\partial_{3}%
=\partial_{3}\partial_{0},\nonumber\\
\partial_{+}\partial_{3}=q^{2}\partial_{3}\partial_{+},\quad\partial
_{3}\partial_{-}=q^{2}\partial_{-}\partial_{3},\nonumber\\
\partial_{+}\partial_{-}-\partial_{-}\partial_{+}=\lambda\partial_{3}%
\partial_{3}.
\end{gather}
It is rather instructive to write the Leibniz rules out. In doing so, we
obtain
\begin{align}
\partial_{+}X^{0} & =X^{0}\partial_{+},\nonumber\\
\partial_{+}X^{+} & =1+q^{4}X^{+}\partial_{+},\nonumber\\
\partial_{+}X^{3} & =q^{2}X^{3}\partial_{+},\nonumber\\
\partial_{+}X^{-} & =X^{-}\partial_{+},\label{Lei3dimExp1}\\[0.16in]
\partial_{3}X^{0} & =X^{0}\partial_{3},\nonumber\\
\partial_{3}X^{+} & =q^{2}X^{+}\partial_{3},\nonumber\\
\partial_{3}X^{3} & =1+q^{2}X^{3}\partial_{3}+q^{2}\lambda\lambda_{+}%
X^{+}\partial_{+},\nonumber\\
\partial_{3}X^{-} & =q^{2}X^{-}\partial_{3}+q\lambda\lambda_{+}X^{3}%
\partial_{+},\\[0.16in]
\partial_{-}X^{0} & =X^{0}\partial_{-},\nonumber\\
\partial_{-}X^{+} & =X^{+}\partial_{-},\nonumber\\
\partial_{-}X^{3} & =q^{2}X^{3}\partial_{-}+q\lambda\lambda_{+}X^{+}%
\partial_{3},\nonumber\\
\partial_{-}X^{-} & =1+q^{4}X^{-}\partial_{-}+q^{2}\lambda\lambda_{+}%
X^{3}\partial_{3}+q\lambda^{2}\lambda_{+}X^{+}\partial_{+},\\[0.16in]
\partial_{0}X^{0} & =1+X^{0}\partial_{0},\nonumber\\
\partial_{0}X^{+} & =X^{+}\partial_{0},\nonumber\\
\partial_{0}X^{3} & =X^{3}\partial_{0},\nonumber\\
\partial_{0}X^{-} & =X^{-}\partial_{0},
\end{align}
and%
\begin{align}
\hat{\partial}_{+}X^{0} & =X^{0}\hat{\partial}_{+},\nonumber\\
\hat{\partial}_{+}X^{-} & =X^{-}\hat{\partial}_{+},\nonumber\\
\hat{\partial}_{+}X^{3} & =q^{-2}X^{3}\hat{\partial}_{+}-q\lambda\lambda
_{+}X^{-}\hat{\partial}_{3},\nonumber\\
\hat{\partial}_{+}X^{+} & =1+q^{-4}X^{+}\hat{\partial}_{+}-q^{-2}%
\lambda\lambda_{+}X^{3}\hat{\partial}_{3}+q^{-1}\lambda^{2}\lambda_{+}%
X^{-}\hat{\partial}_{-},\\[0.16in]
\hat{\partial}_{3}X^{0} & =X^{0}\hat{\partial}_{3},\nonumber\\
\hat{\partial}_{3}X^{-} & =q^{-2}X^{-}\hat{\partial}_{3},\nonumber\\
\hat{\partial}_{3}X^{3} & =1+q^{-2}X^{3}\hat{\partial}_{3}-q^{-2}%
\lambda\lambda_{+}X^{-}\hat{\partial}_{-},\nonumber\\
\hat{\partial}_{3}X^{+} & =q^{-2}X^{+}\hat{\partial}_{3}-q^{-1}%
\lambda\lambda_{+}X^{3}\hat{\partial}_{-},\\[0.16in]
\hat{\partial}_{-}X^{0} & =X^{0}\hat{\partial}_{-},\nonumber\\
\hat{\partial}_{-}X^{+} & =X^{+}\hat{\partial}_{-},\nonumber\\
\hat{\partial}_{-}X^{3} & =q^{-2}X^{3}\hat{\partial}_{-},\nonumber\\
\hat{\partial}_{-}X^{-} & =1+q^{-4}X^{-}\hat{\partial}_{-},\\[0.16in]
\hat{\partial}_{0}X^{0} & =1+X^{0}\hat{\partial}_{0},\nonumber\\
\hat{\partial}_{0}X^{+} & =X^{+}\hat{\partial}_{0},\nonumber\\
\hat{\partial}_{0}X^{3} & =X^{3}\hat{\partial}_{0},\nonumber\\
\hat{\partial}_{0}X^{-} & =X^{-}\hat{\partial}_{0}. \label{Lei3dimExp2}%
\end{align}
where we set, for brevity, $\lambda_{+}\equiv q+q^{-1}.$
It remains to introduce a conjugation being compatible with the algebraic
structure presented so far. To this end, we need the quantum metric of the
three-dimensional q-deformed Euclidean space. Its explicit form can be read
off from the projector $P_{0},$ as it holds \cite{LWW97}%
\begin{equation}
(P_{0})_{CD}^{AB}=\frac{1}{g^{EF}g_{EF}}g^{AB}g_{CD}. \label{MetBes}%
\end{equation}
In this manner, one can verify that the non-vanishing entries of $g_{AB}$ and
$g^{AB}$ are given by
\begin{equation}
g^{+-}=g_{+-}=-q,\quad g^{-+}=g_{-+}=-q^{-1},\quad g^{33}=g_{33}=1.
\end{equation}
With the three-dimensional quantum metric at hand we are able to write down
the conjugation properties of coordinates and partial derivatives in a rather
compact form:%
\begin{equation}
\overline{X^{A}}=X_{A}=g_{AB}X^{B},\quad\overline{\partial_{A}}=-\partial
^{A}=-g^{AB}\partial_{B},
\end{equation}
and%
\begin{equation}
\overline{X^{0}}=X^{0},\quad\overline{\partial_{0}}=-\partial_{0}.
\end{equation}
\section{Elements of q-analysis\label{EleqAn}}
In this section a q-deformed version of analysis is developed for the
extended braided line and the extended q-deformed Euclidean space in three
dimensions. In particular, we present expressions for computing star products,
braided products, q-translations, operator representations of partial
derivatives, q-integrals, and q-exponentials. With this toolbox of essential
elements of q-analysis we are in a position to formulate quantum mechanics on
the quantum spaces under consideration. Finally, it should be noted that the
considerations in this section are mainly based on the ideas developed in
Refs. \cite{WW01, BW01, Wac02, Wac03, Wac04, Wac05, qAn, Maj93-Int, Maj93-5,
CSW93, Schir94, CHMW99, Maj94-10, Maj95, Maj95star}.
\subsection{Extended braided line\label{QAnBraid}}
First of all, let us mention that the product on the extended braided line is
the commutative one. This observation follows from a short look at the
commutation relations in Eq. (\ref{QuaSpaRel1dim}). However, if we want to
commute functions living in distinct quantum spaces, things become slightly
different. In this respect, let us recall that the commutation relations
between generators of different quantum spaces are determined by the R-matrix
or its inverse, i.e.%
\begin{equation}
X^{i}\odot_{\bar{L}}Y^{j}\equiv(1\otimes X^{i})(Y^{j}\otimes1)=\hat{R}%
_{kl}^{ij}Y^{k}\otimes X^{l},
\end{equation}
and alternatively%
\begin{equation}
X^{i}\odot_{L}Y^{j}\equiv(1\otimes X^{i})(Y^{j}\otimes1)=(\hat{R}^{-1}%
)_{kl}^{ij}Y^{k}\otimes X^{l}.
\end{equation}
These relations lead us to braided products for commutative functions,%
\begin{align}
f(x^{i})\odot_{\bar{L}}g(y^{j}) & =q^{\hat{n}_{y^{1}}\otimes\,\hat{n}%
_{x^{1}}}g(y^{j})\otimes f(x^{i}),\nonumber\\
f(x^{i})\odot_{L}g(y^{j}) & =q^{-\hat{n}_{y^{1}}\otimes\,\hat{n}_{x^{1}}%
}g(y^{j})\otimes f(x^{i}),
\end{align}
where we introduced the operator%
\begin{equation}
\hat{n}_{x^{1}}\equiv x^{1}\frac{\partial}{\partial x^{1}}. \label{N-Op}%
\end{equation}
Notice that the partial derivative in Eq. (\ref{N-Op}) is a commutative one.
At this place it should also be mentioned that throughout this paper we use
the convention to write generators of quantum spaces in capitals, while
commutative coordinates are written in small letters. (In the case of the
braided line the identification with a commutative algebra is rather trivial.
For q-deformed Euclidean space in three dimensions, however, such an
identification needs some more thought.)
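For illustration, the action of the operator $q^{\hat{n}_{y^{1}}\otimes\,\hat{n}_{x^{1}}}$ can be made explicit on monomials: commuting $(x^{1})^{m}$ past $(y^{1})^{n}$ merely produces an overall power $q^{nm}$, or $q^{-nm}$ for the braided product built from the inverse R-matrix. A minimal Python sketch (the encoding is ours, not part of the formalism):

```python
from fractions import Fraction

q = Fraction(3, 2)   # an arbitrary rational value of q, kept exact

def braided_factor(m, n, inverse=False):
    """Coefficient picked up when the monomial (x^1)^m is taken past
    (y^1)^n: q^{nm} for the braided product built from the R-matrix,
    q^{-nm} for the one built from its inverse."""
    return q**(-n*m) if inverse else q**(n*m)

# the two braided products differ by mutually inverse q-factors:
assert braided_factor(2, 3) * braided_factor(2, 3, inverse=True) == 1
assert braided_factor(2, 3) == q**6
```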
Next, we come to translations on the extended braided line. Translations on
quantum spaces are described by their Hopf structures. On quantum space
generators these Hopf structures become%
\begin{align}
\Delta_{L}(X^{0}) & =X^{0}\otimes1+1\otimes X^{0},\nonumber\\
\Delta_{L}(X^{1}) & =X^{1}\otimes1+\Lambda^{-1}\otimes X^{1}%
,\nonumber\\[0.1in]
S_{L}(X^{0}) & =-X^{0},\nonumber\\
S_{L}(X^{1}) & =-\Lambda X^{1},\nonumber\\[0.1in]
\epsilon_{L}(X^{i}) & =0, \label{HopStrBrai1}%
\end{align}
and
\begin{align}
\Delta_{\bar{L}}(X^{0}) & =X^{0}\otimes1+1\otimes X^{0},\nonumber\\
\Delta_{\bar{L}}(X^{1}) & =X^{1}\otimes1+\Lambda\otimes X^{1}%
,\nonumber\\[0.1in]
S_{\bar{L}}(X^{0}) & =-X^{0},\nonumber\\
S_{\bar{L}}(X^{1}) & =-\Lambda^{-1}X^{1},\nonumber\\[0.1in]
\epsilon_{\bar{L}}(X^{i}) & =0, \label{HopStrBrai2}%
\end{align}
where $\Lambda$ stands for a unitary scaling operator subject to%
\begin{gather}
\Lambda X^{0}=X^{0}\Lambda,\quad\Lambda X^{1}=qX^{1}\Lambda,\nonumber\\
\Lambda\partial_{0}=\partial_{0}\Lambda,\quad\Lambda\partial_{1}=q^{-1}%
\partial_{1}\Lambda. \label{ActLam}%
\end{gather}
This scaling operator and its inverse can be viewed as generators of a Hopf
algebra denoted by $\mathcal{H}$. The corresponding Hopf structure reads%
\begin{equation}
\Delta(\Lambda)=\Lambda\otimes\Lambda,\quad S(\Lambda)=\Lambda^{-1}%
,\quad\epsilon(\Lambda)=1.
\end{equation}
To proceed any further we need the algebra morphisms $\mathcal{W}_{L}^{-1}$
and $\mathcal{W}_{R}^{-1}$ defined by
\begin{align}
\mathcal{W}_{L}^{-1} & :\mathcal{A}_{q}\rtimes\mathcal{H}\longrightarrow
\mathcal{A}_{q},\nonumber\\
\mathcal{W}_{L}^{-1}((X^{0})^{n_{0}}(X^{1})^{n_{1}}\otimes h) & \equiv
(x^{0})^{n_{0}}(x^{1})^{n_{1}}\,\varepsilon(h),\\[0.1in]
\mathcal{W}_{R}^{-1} & :\mathcal{H}\ltimes\mathcal{A}_{q}\longrightarrow
\mathcal{A}_{q},\nonumber\\
\mathcal{W}_{R}^{-1}(h\otimes(X^{0})^{n_{0}}(X^{1})^{n_{1}}) &
\equiv\varepsilon(h)\,(x^{0})^{n_{0}}(x^{1})^{n_{1}}.
\end{align}
With these mappings at hand we are able to introduce the operations%
\begin{align}
f(x^{i}\oplus_{L/\bar{L}}y^{j}) & \equiv((\mathcal{W}_{L}^{-1}%
\otimes\mathcal{W}_{L}^{-1})\circ\Delta_{L/\bar{L}})(f),\\[0.16in]
f(\ominus_{L/\bar{L}}\,x^{i}) & \equiv(\mathcal{W}_{R}^{-1}\circ
S_{L/\bar{L}})(f).
\end{align}
Repeating the same steps already applied in Ref. \cite{Wac04} one can show
that%
\begin{align}
f(x^{i}\oplus_{L}y^{j}) & =\sum_{k,l=0}^{\infty}\frac{(x^{0})^{k}(x^{1}%
)^{l}}{k!\,[[l]]_{q^{-1}}!}\Big (\frac{\partial}{\partial y^{0}}%
\Big )^{k}(D_{q^{-1}}^{1})^{l}f(y^{j}),\label{TransL}\\[0.16in]
f(x^{i}\oplus_{\bar{L}}y^{j}) & =\sum_{k,l=0}^{\infty}\frac{(x^{0}%
)^{k}(x^{1})^{l}}{k!\,[[l]]_{q}!}\Big (\frac{\partial}{\partial y^{0}%
}\Big )^{k}(D_{q}^{1})^{l}f(y^{j}), \label{TransLq}%
\end{align}
and%
\begin{align}
f(\ominus_{L}\,x^{i}) & =q^{-\frac{1}{2}\hat{n}_{x^{1}}(\hat{n}_{x^{1}}%
-1)}f(-x^{i}),\\[0.16in]
f(\ominus_{\bar{L}}\,x^{i}) & =q^{\frac{1}{2}\hat{n}_{x^{1}}(\hat{n}_{x^{1}%
}-1)}f(-x^{i}).
\end{align}
Notice that the expressions in (\ref{TransL}) and (\ref{TransLq}) use the
so-called \textit{Jackson derivatives} \cite{Jack08}%
\begin{equation}
D_{q^{a}}^{i}f\equiv\frac{f(q^{a}x^{i})-f\left( x^{i}\right) }%
{(q^{a}-1)x^{i}},\qquad a\in\mathbb{C}.
\end{equation}
Furthermore, the so-called antisymmetric q-numbers are given by%
\begin{equation}
\left[ \left[ n\right] \right] _{q^{a}}\equiv\sum_{k=0}^{n-1}q^{ak}%
=\frac{1-q^{an}}{1-q^{a}},
\end{equation}
and their factorials are defined by%
\begin{equation}
\left[ \left[ n\right] \right] _{q^{a}}!\equiv\left[ \left[ 1\right]
\right] _{q^{a}}\left[ \left[ 2\right] \right] _{q^{a}}\ldots\left[
\left[ n\right] \right] _{q^{a}},\qquad\left[ \left[ 0\right] \right]
_{q^{a}}!\equiv1.
\end{equation}
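From the two definitions above it follows at once that $D_{q^{a}}(x^{i})^{n}=[[n]]_{q^{a}}(x^{i})^{n-1}$. A short Python check with exact rational arithmetic (the value of $q$ is arbitrary):

```python
from fractions import Fraction

q = Fraction(5, 3)                 # arbitrary rational value of q

def jackson_d(f, x, Q):
    """Jackson derivative D_Q f = (f(Qx) - f(x)) / ((Q - 1) x)."""
    return (f(Q*x) - f(x)) / ((Q - 1) * x)

def qnum(n, Q):
    """Antisymmetric q-number [[n]]_Q = 1 + Q + ... + Q^{n-1}."""
    return sum(Q**j for j in range(n))

def qfact(n, Q):
    """q-factorial [[n]]_Q! with [[0]]_Q! = 1."""
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= qnum(j, Q)
    return out

# D_{q^a} acting on the monomial x^n gives [[n]]_{q^a} x^{n-1} exactly:
n, a, x = 4, 2, Fraction(7, 2)
assert jackson_d(lambda t: t**n, x, q**a) == qnum(n, q**a) * x**(n - 1)
```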
Now, we would like to turn our attention to operator representations of
partial derivatives. From the q-deformed Leibniz rules in (\ref{Diff1dimUn1})
and (\ref{Diff1dimUn2}) as well as those in (\ref{Diff1dim1}) and
(\ref{Diff1dim2}) we can derive right and left actions of partial derivatives
on the algebra of quantum space coordinates. To this end, we repeatedly apply
the Leibniz rules to the product of a partial derivative with a normally
ordered monomial of coordinates, until we obtain an expression with all
partial derivatives standing to the right of all quantum space coordinates,
i.e.%
\begin{equation}
\partial^{i}(X^{0})^{k_{0}}(X^{1})^{k_{1}}=\big (\partial_{(1)}^{i}%
\triangleright(X^{0})^{k_{0}}(X^{1})^{k_{1}}\big )\partial_{(2)}^{i}.
\label{VerParX}%
\end{equation}
Taking the counit of all partial derivatives appearing on the right-hand side
finally yields the left action of $\partial^{i}$, since we have%
\begin{equation}
\big (\partial_{(1)}^{i}\triangleright(X^{0})^{k_{0}}(X^{1})^{k_{1}%
}\big )\varepsilon(\partial_{(2)}^{i})=\partial^{i}\triangleright
(X^{0})^{k_{0}}(X^{1})^{k_{1}}. \label{BerWirkPar}%
\end{equation}
Right actions of partial derivatives can be calculated in a similar way if we
start from a partial derivative standing to the right of a normally ordered
monomial and commute it to the left of all quantum space coordinates. Instead
of (\ref{VerParX}) and (\ref{BerWirkPar}) we have
\begin{equation}
(X^{0})^{k_{0}}(X^{1})^{k_{1}}\partial^{i}=\partial_{(2)}^{i}\big ((X^{0}%
)^{k_{0}}(X^{1})^{k_{1}}\triangleleft\partial_{(1)}^{i}\big ),
\end{equation}
and%
\begin{equation}
\varepsilon(\partial_{(2)}^{i})\big ((X^{0})^{k_{0}}(X^{1})^{k_{1}%
}\triangleleft\partial_{(1)}^{i}\big )=(X^{0})^{k_{0}}(X^{1})^{k_{1}%
}\triangleleft\partial^{i}.
\end{equation}
These reasonings show us a method to calculate explicit formulae for the
action of partial derivatives on normally ordered monomials. From these
results we can finally read off the operator representations
\begin{align}
\partial_{0}\triangleright f(x^{i}) & =\frac{\partial}{\partial x^{0}%
}f(x^{i}),\nonumber\\
\partial_{1}\triangleright f(x^{i}) & =D_{q}^{1}\,f(x^{i}%
),\label{ActPartBrai1}\\[0.16in]
\hat{\partial}_{0}\,\bar{\triangleright}\,f(x^{i}) & =\frac{\partial
}{\partial x^{0}}f(x^{i}),\nonumber\\
\hat{\partial}_{1}\,\bar{\triangleright}\,f(x^{i}) & =D_{q^{-1}}%
^{1}\,f(x^{i}),
\end{align}
and%
\begin{align}
f(x^{i})\triangleleft\hat{\partial}_{0} & =-\frac{\partial}{\partial x^{0}%
}f(x^{i}),\nonumber\\
f(x^{i})\triangleleft\hat{\partial}_{1} & =-D_{q^{-1}}^{1}\,f(x^{i}%
),\\[0.16in]
f(x^{i})\,\bar{\triangleleft}\,\partial_{0} & =-\frac{\partial}{\partial
x^{0}}f(x^{i}),\nonumber\\
f(x^{i})\,\bar{\triangleleft}\,\partial_{1} & =-D_{q}^{1}\,f(x^{i}).
\label{ActPartBraid2}%
\end{align}
With these formulae at hand it follows at once that
\begin{gather}
df(x^{i})=dx^{j}\partial_{j}\triangleright f(x^{i})=0\nonumber\\
\Leftrightarrow f(x^{i})\big |_{x^{0}=\,a}=f(x^{i})\big |_{x^{0}=\,b},\quad
f(x^{i})\big |_{x^{1}=\,a}=f(x^{i})\big |_{x^{1}=\,qa},
\end{gather}
for all $a,b\in\mathbb{C}$. Notice that the above condition characterizes
functions being constant from the point of view of q-deformation.
Next, we come to integrals on the extended braided line. (For the different
approaches to introduce integrals on q-deformed spaces see also Refs.
\cite{Wac02, qAn, Fio93, Sta96, Chry96, KM94, CSW93, WZ91, CHMW99}.) Integrals
can be recognized as operations being inverse to partial derivatives. Thus, we
first try to extend the algebra of partial derivatives by introducing inverse
elements. In doing so, we get as additional relations%
\begin{align}
(\partial_{0})^{-1}\partial_{0} & =\partial_{0}(\partial_{0})^{-1}%
=1,\nonumber\\
(\partial_{1})^{-1}\partial_{1} & =\partial_{1}(\partial_{1})^{-1}%
=1,\nonumber\\
(\partial_{0})^{-1}\partial_{1} & =\partial_{1}(\partial_{0})^{-1}%
,\nonumber\\
(\partial_{1})^{-1}\partial_{0} & =\partial_{0}(\partial_{1})^{-1}%
,\nonumber\\
(\partial_{0})^{-1}(\partial_{1})^{-1} & =(\partial_{1})^{-1}(\partial
_{0})^{-1}.
\end{align}
As a next step we search for representations of the inverse partial derivatives
that fulfill the above relations. It should be obvious that they are given by
\begin{align}
(\partial_{0})^{-1}\big |_{x^{0}=a}^{b}\triangleright f(x^{i}) & =\int
_{a}^{b}dx^{0}\,f(x^{i}),\nonumber\\
(\partial_{1})^{-1}\big |_{x^{1}=\,a}^{b}\triangleright f(x^{i}) &
=(D_{q}^{1})^{-1}\big |_{x^{1}=\,a}^{b}f(x^{i}),\\[0.16in]
(\hat{\partial}_{0})^{-1}\big |_{x^{0}=a}^{b}\,\bar{\triangleright}\,f(x^{i})
& =\int_{a}^{b}dx^{0}\,f(x^{i}),\nonumber\\
(\hat{\partial}_{1})^{-1}\big |_{x^{1}=\,a}^{b}\,\bar{\triangleright}%
\,f(x^{i}) & =(D_{q^{-1}}^{1})^{-1}\big |_{x^{1}=\,a}^{b}f(x^{i}),
\end{align}
and%
\begin{align}
f(x^{i})\triangleleft(\hat{\partial}_{0})^{-1}\big |_{x^{0}=\,a}^{b} &
=-\int_{a}^{b}dx^{0}\,f(x^{i}),\nonumber\\
f(x^{i})\triangleleft(\hat{\partial}_{1})^{-1}\big |_{x^{1}=\,a}^{b} &
=-(D_{q^{-1}}^{1})^{-1}\big |_{x^{1}=\,a}^{b}f(x^{i}),\\[0.16in]
f(x^{i})\,\bar{\triangleleft}\,(\partial_{0})^{-1}\big |_{x^{0}=\,a}^{b} &
=-\int_{a}^{b}dx^{0}\,f(x^{i}),\nonumber\\
f(x^{i})\,\bar{\triangleleft}\,(\partial_{1})^{-1}\big |_{x^{1}=\,a}^{b} &
=-(D_{q}^{1})^{-1}\big |_{x^{1}=\,a}^{b}f(x^{i}),
\end{align}
where $(D_{q}^{i})^{-1}$ denotes the \textit{Jackson integral} operator
\cite{Jack27}. For the sake of completeness we would like to give the
definition of the Jackson integral. For $a>0,$ $q>1,$ and $x^{i}>0,$ it
becomes
\begin{align}
(D_{q^{a}}^{i})^{-1}\big |_{0}^{x^{i}}f & =-(1-q^{a})\sum_{k=1}^{\infty
}(q^{-ak}x^{i})f(q^{-ak}x^{i}),\nonumber\\
(D_{q^{a}}^{i})^{-1}\big |_{x^{i}}^{\infty}f & =-(1-q^{a})\sum_{k=0}%
^{\infty}(q^{ak}x^{i})f(q^{ak}x^{i}),\nonumber\\
(D_{q^{-a}}^{i})^{-1}\big |_{0}^{x^{i}}f & =(1-q^{-a})\sum_{k=0}^{\infty
}(q^{-ak}x^{i})f(q^{-ak}x^{i}),\nonumber\\
(D_{q^{-a}}^{i})^{-1}\big |_{x^{i}}^{\infty}f & =(1-q^{-a})\sum
_{k=1}^{\infty}(q^{ak}x^{i})f(q^{ak}x^{i}), \label{Jackson1N}%
\end{align}
and, likewise for $a>0,$ $q>1,$ and $x^{i}<0,$
\begin{align}
(D_{q^{a}}^{i})^{-1}\big |_{x^{i}}^{0}f & =(1-q^{a})\sum_{k=1}^{\infty
}(q^{-ak}x^{i})f(q^{-ak}x^{i}),\nonumber\\
(D_{q^{a}}^{i})^{-1}\big |_{-\infty}^{x^{i}}f & =(1-q^{a})\sum_{k=0}%
^{\infty}(q^{ak}x^{i})f(q^{ak}x^{i}),\nonumber\\
(D_{q^{-a}}^{i})^{-1}\big |_{x^{i}}^{0}f & =-(1-q^{-a})\sum_{k=0}^{\infty
}(q^{-ak}x^{i})f(q^{-ak}x^{i}),\nonumber\\
(D_{q^{-a}}^{i})^{-1}\big |_{-\infty}^{x^{i}}f & =-(1-q^{-a})\sum
_{k=1}^{\infty}(q^{ak}x^{i})f(q^{ak}x^{i}). \label{Jackson2N}%
\end{align}
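The formulas in (\ref{Jackson1N}) can be checked numerically. The following Python sketch (floating-point, with arbitrary $q=1.5$ and $a=1$, and the infinite series truncated at 200 terms) verifies that the Jackson derivative undoes the Jackson integral, and that on monomials the integral yields $(x^{i})^{n+1}/[[n+1]]_{q^{a}}$:

```python
q, a = 1.5, 1.0        # q > 1 and a > 0, as required for Eq. (Jackson1N)

def jackson_int(f, x, terms=200):
    """(D_{q^a})^{-1}|_0^x f = -(1 - q^a) sum_{k>=1} q^{-ak} x f(q^{-ak} x)."""
    return -(1 - q**a) * sum(q**(-a*k) * x * f(q**(-a*k) * x)
                             for k in range(1, terms + 1))

def jackson_d(f, x):
    """Jackson derivative D_{q^a} f."""
    return (f(q**a * x) - f(x)) / ((q**a - 1) * x)

def qnum(n):
    return (1 - q**(a*n)) / (1 - q**a)   # [[n]]_{q^a}

f = lambda t: t**3 + 2*t
F = lambda s: jackson_int(f, s)          # the Jackson integral of f
x = 0.8
# differentiating the Jackson integral recovers the integrand ...
assert abs(jackson_d(F, x) - f(x)) < 1e-10
# ... and on monomials the integral yields x^{n+1} / [[n+1]]_{q^a}:
assert abs(jackson_int(lambda t: t**2, x) - x**3 / qnum(3)) < 1e-10
```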
In analogy to the undeformed case we have rules for integration by parts. To
derive them we start from the Leibniz rules for partial derivatives. These
Leibniz rules can be read off from the coproducts in (\ref{HopStrBrai1}) and
(\ref{HopStrBrai2}). In this manner, we find%
\begin{align}
\partial_{0}\triangleright(fg) & =(\partial_{0}\triangleright f)g+f(\partial
_{0}\triangleright g),\nonumber\\
\partial_{1}\triangleright(fg) & =(\partial_{1}\triangleright f)g+(\Lambda
\triangleright f)(\partial_{1}\triangleright g),\\[0.16in]
\hat{\partial}_{0}\,\bar{\triangleright}\,(fg) & =(\hat{\partial}_{0}%
\,\bar{\triangleright}\,f)g+f(\hat{\partial}_{0}\,\bar{\triangleright
}\,g),\nonumber\\
\hat{\partial}_{1}\,\bar{\triangleright}\,(fg) & =(\hat{\partial}_{1}%
\,\bar{\triangleright}\,f)g+(\Lambda^{-1}\triangleright f)(\hat{\partial}%
_{1}\,\bar{\triangleright}\,g),
\end{align}
and%
\begin{align}
(fg)\triangleleft\hat{\partial}_{0} & =f(g\triangleleft\hat{\partial}%
_{0})+(f\triangleleft\hat{\partial}_{0})g,\nonumber\\
(fg)\triangleleft\hat{\partial}_{1} & =f(g\triangleleft\hat{\partial}%
_{1})+(f\triangleleft\hat{\partial}_{1})(g\triangleleft\Lambda),\\[0.16in]
(fg)\,\bar{\triangleleft}\,\partial_{0} & =f(g\,\bar{\triangleleft
}\,\partial_{0})+(f\,\bar{\triangleleft}\,\partial_{0})g,\nonumber\\
(fg)\,\bar{\triangleleft}\,\partial_{1} & =f(g\,\bar{\triangleleft
}\,\partial_{1})+(f\,\bar{\triangleleft}\,\partial_{1})(g\triangleleft
\Lambda^{-1}).
\end{align}
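In the commutative representation, where $\partial_{1}\triangleright f=D_{q}^{1}f$ and $(\Lambda\triangleright f)(x)=f(qx)$, the Leibniz rule for $\partial_{1}$ can be verified directly; a small floating-point sketch (sample functions and the value of $q$ are arbitrary):

```python
q = 1.7                      # arbitrary numerical value of q

def D(f):                    # Jackson derivative, realizing d_1 acting from the left
    return lambda x: (f(q*x) - f(x)) / ((q - 1) * x)

def Lam(f):                  # scaling operator: (Lambda f)(x) = f(qx)
    return lambda x: f(q*x)

f = lambda x: x**3 + 1.0
g = lambda x: 2*x**2 - x
fg = lambda x: f(x) * g(x)
x = 0.9
# d_1 (fg) = (d_1 f) g + (Lambda f)(d_1 g):
assert abs(D(fg)(x) - (D(f)(x)*g(x) + Lam(f)(x)*D(g)(x))) < 1e-9
```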
Hitting the above equations with the corresponding integral operator and
rearranging terms, we get%
\begin{align}
(\partial_{0})^{-1}\big |_{x^{0}=\,a}^{b}\triangleright(\partial
_{0}\triangleright f)g & =fg\big |_{x^{0}=\,a}^{b}-(\partial_{0}%
)^{-1}\big |_{x^{0}=\,a}^{b}\triangleright f(\partial_{0}\triangleright
g),\nonumber\\
(\partial_{1})^{-1}\big |_{x^{1}=\,a}^{b}\,\triangleright(\partial
_{1}\triangleright f)g & =fg\big |_{x^{1}=\,a}^{b}-(\partial_{1}%
)^{-1}\big |_{x^{1}=\,a}^{b}\triangleright(\Lambda\triangleright
f)(\partial_{1}\triangleright g),\\[0.16in]
(\hat{\partial}_{0})^{-1}\big |_{x^{0}=\,a}^{b}\,\bar{\triangleright}%
\,(\hat{\partial}_{0}\,\bar{\triangleright}\,f)g & =fg\big |_{x^{0}=\,a}%
^{b}-(\hat{\partial}_{0})^{-1}\big |_{x^{0}=\,a}^{b}\,\bar{\triangleright
}\,f(\hat{\partial}_{0}\,\bar{\triangleright}\,g),\nonumber\\
(\hat{\partial}_{1})^{-1}\big |_{x^{1}=\,a}^{b}\,\bar{\triangleright}%
\,(\hat{\partial}_{1}\,\bar{\triangleright}\,f)g & =fg\big |_{x^{1}=\,a}%
^{b}-(\hat{\partial}_{1})^{-1}\big |_{x^{1}=\,a}^{b}\,\bar{\triangleright
}\,(\Lambda^{-1}\triangleright f)(\hat{\partial}_{1}\,\bar{\triangleright
}\,g),
\end{align}
and%
\begin{align}
f(g\triangleleft\hat{\partial}_{0})\triangleleft(\hat{\partial}_{0}%
)^{-1}\big |_{x^{0}=\,a}^{b} & =fg\big |_{x^{0}=\,a}^{b}-(f\triangleleft
\hat{\partial}_{0})g\triangleleft(\hat{\partial}_{0})^{-1}\big |_{x^{0}%
=\,a}^{b},\nonumber\\
f(g\triangleleft\hat{\partial}_{1})\triangleleft(\hat{\partial}_{1}%
)^{-1}\big |_{x^{1}=\,a}^{b} & =fg\big |_{x^{1}=\,a}^{b}-(f\triangleleft
\hat{\partial}_{1})(g\triangleleft\Lambda)\triangleleft(\hat{\partial}%
_{1})^{-1}\big |_{x^{1}=\,a}^{b},\\[0.16in]
f(g\,\bar{\triangleleft}\,\partial_{0})\,\bar{\triangleleft}\,(\partial
_{0})^{-1}\big |_{x^{0}=\,a}^{b} & =fg\big |_{x^{0}=\,a}^{b}-(f\,\bar
{\triangleleft}\,\partial_{0})g\,\bar{\triangleleft}\,(\partial_{0}%
)^{-1}\big |_{x^{0}=\,a}^{b},\nonumber\\
f(g\,\bar{\triangleleft}\,\partial_{1})\,\bar{\triangleleft}\,(\partial
_{1})^{-1}\big |_{x^{1}=\,a}^{b} & =fg\big |_{x^{1}=\,a}^{b}-(f\,\bar
{\triangleleft}\,\partial_{1})(g\triangleleft\Lambda^{-1})\,\bar
{\triangleleft}\,(\partial_{1})^{-1}\big |_{x^{1}=\,a}^{b}.
\end{align}
Before we can apply these formulae it remains to write down the explicit form
of the action of the scaling operator. A short glance at the identities in
(\ref{ActLam}) should tell us that%
\begin{align}
\Lambda\triangleright f(x^{i}) & =f(x^{i})\triangleleft\Lambda^{-1}%
=f(x^{0},qx^{1}),\nonumber\\
\Lambda^{-1}\triangleright f(x^{i}) & =f(x^{i})\triangleleft\Lambda
=f(x^{0},q^{-1}x^{1}).
\end{align}
We would like to close this subsection by dealing with dual pairings and
q-exponentials. In Ref. \cite{Maj93-5} it was shown that the algebra of
quantum space coordinates and that of the corresponding partial derivatives
are dual to each other. The dual pairings are given by%
\begin{align}
\big \langle f(\partial_{i}),g(x^{j})\big \rangle_{L,\bar{R}} &
\equiv(f(\partial_{i})\triangleright g(x^{j}))|_{x^{j}=\,0}=(f(\partial
_{i})\,\bar{\triangleleft}\,g(x^{j}))|_{\partial_{i}=\,0},\nonumber\\
\big \langle f(\hat{\partial}_{i}),g(x^{j})\big \rangle_{\bar{L},R} &
\equiv(f(\hat{\partial}_{i})\,\bar{\triangleright}\,g(x^{j}))|_{x^{j}%
=\,0}=(f(\hat{\partial}_{i})\triangleleft g(x^{j}))|_{\partial_{i}%
=\,0},\\[0.16in]
\big \langle f(x^{i}),g(\partial_{j})\big \rangle_{L,\bar{R}} &
\equiv(f(x^{i})\,\bar{\triangleleft}\,g(\partial_{j}))|_{x^{i}=\,0}%
=(f(x^{i})\triangleright g(\partial_{j}))|_{\partial_{j}=\,0},\nonumber\\
\big \langle f(x^{i}),g(\hat{\partial}_{j})\big \rangle_{\bar{L},R} &
\equiv(f(x^{i})\triangleleft g(\hat{\partial}_{j}))|_{x^{i}=\,0}%
=(f(x^{i})\,\bar{\triangleright}\,g(\hat{\partial}_{j}))|_{\partial_{j}=\,0}.
\end{align}
On monomials we get
\begin{align}
\big \langle(\partial_{0})^{n_{0}}(\partial_{1})^{n_{1}},(X^{0})^{m_{0}}%
(X^{1})^{m_{1}}\big \rangle_{L,\bar{R}} & =n_{0}![[n_{1}]]_{q}%
!\,\delta^{n_{0},m_{0}}\delta^{n_{1},m_{1}},\nonumber\\
\big \langle(\hat{\partial}_{0})^{n_{0}}(\hat{\partial}_{1})^{n_{1}}%
,(X^{0})^{m_{0}}(X^{1})^{m_{1}}\big \rangle_{\bar{L},R} & =n_{0}%
![[n_{1}]]_{q^{-1}}!\,\delta^{n_{0},m_{0}}\delta^{n_{1},m_{1}},
\label{ExpDualAnf}%
\end{align}
and%
\begin{align}
\big \langle(X^{0})^{m_{0}}(X^{1})^{m_{1}},(\partial_{0})^{n_{0}}(\partial
_{1})^{n_{1}}\big \rangle_{L,\bar{R}} & =(-1)^{n_{0}+n_{1}}n_{0}%
![[n_{1}]]_{q}!\,\delta^{n_{0},m_{0}}\delta^{n_{1},m_{1}},\nonumber\\
\big \langle(X^{0})^{m_{0}}(X^{1})^{m_{1}},(\hat{\partial}_{0})^{n_{0}}%
(\hat{\partial}_{1})^{n_{1}}\big \rangle_{\bar{L},R} & =(-1)^{n_{0}+n_{1}%
}n_{0}![[n_{1}]]_{q^{-1}}!\,\delta^{n_{0},m_{0}}\delta^{n_{1},m_{1}}.
\label{ExplDualEnd}%
\end{align}
These equalities can easily be checked by the identities in
(\ref{ActPartBrai1})-(\ref{ActPartBraid2}).
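The first pairing in (\ref{ExpDualAnf}) can be checked by realizing $\partial_{1}$ as the Jackson derivative: acting with $(D_{q}^{1})^{n_{1}}$ on $(x^{1})^{m_{1}}$ and evaluating at the origin must give $[[n_{1}]]_{q}!\,\delta^{n_{1},m_{1}}$. A Python sketch with exact rational arithmetic (the trivial $x^{0}$-part is omitted):

```python
from fractions import Fraction

q = Fraction(4, 3)                       # arbitrary rational value of q

def qnum(n):
    return sum(q**j for j in range(n))   # [[n]]_q

def qfact(n):
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= qnum(j)
    return out

def pairing(n1, m1):
    """<(d_1)^{n1}, (X^1)^{m1}> realized as ((D_q)^{n1} (x^1)^{m1})|_{x^1=0};
    a monomial is tracked as (coeff, power)."""
    coeff, power = Fraction(1), m1
    for _ in range(n1):                  # D_q x^p = [[p]]_q x^{p-1}
        if power == 0:
            return Fraction(0)           # D_q annihilates constants
        coeff *= qnum(power)
        power -= 1
    return coeff if power == 0 else Fraction(0)   # evaluation at x^1 = 0

for n1 in range(5):
    for m1 in range(5):
        assert pairing(n1, m1) == (qfact(n1) if n1 == m1 else 0)
```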
Now, let us make contact with q-deformed exponentials. From an abstract point
of view an exponential is nothing other than an object whose dualization is
one of the above pairings. In this sense, the exponential is given by the
expression%
\begin{equation}
\exp(x^{i}|\partial_{j})\equiv\sum_{a}e^{a}\otimes f_{a}, \label{ExpAl1N}%
\end{equation}
or%
\begin{equation}
\exp(\partial_{i}|x^{j})\equiv\sum_{a}f_{a}\otimes e^{a}, \label{ExpAl2N}%
\end{equation}
where $\{e^{a}\}$ is a basis of the coordinate algebra and $\{f_{a}\}$ a dual
basis of the algebra of partial derivatives.
If we want to derive explicit formulae for q-deformed exponentials it is our
task to determine a basis of the coordinate algebra being dual to a given one
of the algebra of derivatives. Inserting the elements of the two bases into
the expressions (\ref{ExpAl1N}) and (\ref{ExpAl2N}) will then provide us with
formulae for q-deformed exponentials. It should be obvious that the two bases
being dually paired depend on the choice of the pairing. Thus, each pairing in
(\ref{ExpDualAnf}) and (\ref{ExplDualEnd}) leads to its own q-exponential:%
\begin{align}
& \big\langle f(\partial_{i}),g(x^{j})\big\rangle_{L,\bar{R}} & &
\Rightarrow & & \exp(x^{i}|\partial_{j})_{\bar{R},L},\nonumber\\
& \big\langle f(\hat{\partial}_{i}),g(x^{j})\big\rangle_{\bar{L},R} & &
\Rightarrow & & \exp(x^{i}|\hat{\partial}_{j})_{R,\bar{L}},\\[0.16in]
& \big\langle f(x^{i}),g(\partial_{j})\big\rangle_{L,\bar{R}} & &
\Rightarrow & & \exp(\partial_{i}|x^{j})_{\bar{R},L},\nonumber\\
& \big\langle f(x^{i}),g(\hat{\partial}_{j})\big\rangle_{\bar{L},R} & &
\Rightarrow & & \exp(\hat{\partial}_{i}|x^{j})_{R,\bar{L}}.
\end{align}
From the results in (\ref{ExpDualAnf}) and (\ref{ExplDualEnd}) we can rather
easily read off two dually paired bases. Proceeding in the above mentioned way
we find%
\begin{align}
\exp(x^{i}|\partial_{j})_{\bar{R},L} & =\sum_{n_{0},n_{1}=0}^{\infty}%
\frac{1}{n_{0}![[n_{1}]]_{q}!}(x^{0})^{n_{0}}(x^{1})^{n_{1}}\otimes
(\partial_{0})^{n_{0}}(\partial_{1})^{n_{1}}\nonumber\\
& =\exp(x^{0}\otimes\partial_{0})\cdot\exp_{q}(x^{1}\otimes\partial
_{1}),\label{Exp1dimAnf}\\[0.16in]
\exp(x^{i}|\hat{\partial}_{j})_{R,\bar{L}} & =\sum_{n_{0},n_{1}=0}^{\infty
}\frac{1}{n_{0}![[n_{1}]]_{q^{-1}}!}(x^{0})^{n_{0}}(x^{1})^{n_{1}}\otimes
(\hat{\partial}_{0})^{n_{0}}(\hat{\partial}_{1})^{n_{1}}\nonumber\\
& =\exp(x^{0}\otimes\hat{\partial}_{0})\cdot\exp_{q^{-1}}(x^{1}\otimes
\hat{\partial}_{1}),
\end{align}
and%
\begin{align}
\exp(\partial_{i}|x^{j})_{\bar{R},L} & =\sum_{n_{0},n_{1}=0}^{\infty}%
\frac{1}{n_{0}![[n_{1}]]_{q^{-1}}!}(\partial_{0})^{n_{0}}(\partial_{1}%
)^{n_{1}}\otimes(x^{0})^{n_{0}}(x^{1})^{n_{1}}\nonumber\\
& =\exp(\partial_{0}\otimes x^{0})\cdot\exp_{q^{-1}}(\partial_{1}\otimes
x^{1}),\\[0.16in]
\exp(\hat{\partial}_{i}|x^{j})_{R,\bar{L}} & =\sum_{n_{0},n_{1}=0}^{\infty
}\frac{1}{n_{0}![[n_{1}]]_{q}!}(\hat{\partial}_{0})^{n_{0}}(\hat{\partial}%
_{1})^{n_{1}}\otimes(x^{0})^{n_{0}}(x^{1})^{n_{1}}\nonumber\\
& =\exp(\hat{\partial}_{0}\otimes x^{0})\cdot\exp_{q}(\hat{\partial}%
_{1}\otimes x^{1}). \label{Exp1dimEnd}%
\end{align}
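By construction, the one-dimensional factor $\exp_{q}(x)=\sum_{n}x^{n}/[[n]]_{q}!$ appearing in (\ref{Exp1dimAnf}) is an eigenfunction of the Jackson derivative, since termwise $D_{q}x^{n}=[[n]]_{q}x^{n-1}$. A floating-point sketch with truncated series (the value of $q$ is arbitrary):

```python
q = 1.2                          # arbitrary q > 1

def qnum(n):
    return sum(q**j for j in range(n))

def qfact(n):
    out = 1.0
    for j in range(1, n + 1):
        out *= qnum(j)
    return out

def exp_q(x, terms=60):
    """Truncated q-exponential exp_q(x) = sum_n x^n / [[n]]_q!."""
    return sum(x**n / qfact(n) for n in range(terms))

def jackson_d(f, x):
    return (f(q*x) - f(x)) / ((q - 1) * x)

# exp_q reproduces itself under the Jackson derivative:
x = 0.7
assert abs(jackson_d(exp_q, x) - exp_q(x)) < 1e-9
```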
\subsection{Extended three-dimensional q-deformed Euclidean space}
In this subsection we collect the elements of q-analysis for the extended
three-dimensional q-deformed Euclidean space. Before we can do so, we first
have to answer the question of how to perform calculations on an algebra of
non-commutative coordinates, in the following denoted by $\mathcal{A}_{q}$.
This can be accomplished by a kind of pullback that transforms operations on
the non-commutative coordinate algebra into operations on a commutative one.
To make this clearer, one should realize that the non-commutative coordinate
algebra we are dealing with satisfies the \textit{Poincar\'{e}-Birkhoff-Witt
property}. It tells us that the dimension of a subspace of homogeneous
polynomials has to be the same as for commuting coordinates.
the deeper reason why monomials of a given normal ordering constitute a basis
of the non-commutative algebra $\mathcal{A}_{q}$. Due to this fact we can
establish a vector space isomorphism between $\mathcal{A}_{q}$ and a
commutative algebra $\mathcal{A}$ generated by ordinary coordinates
$x^{1},x^{2},\ldots,x^{n}$:
\begin{align}
\mathcal{W} & :\mathcal{A}\longrightarrow\mathcal{A}_{q},\nonumber\\
\mathcal{W}((x^{1})^{i_{1}}\ldots(x^{n})^{i_{n}}) & \equiv(X^{1})^{i_{1}%
}\ldots(X^{n})^{i_{n}}. \label{AlgIso}%
\end{align}
This vector space isomorphism can even be extended to an algebra isomorphism
by introducing a non-commutative product in $\mathcal{A}$, the so-called
\textit{star product} \cite{BFF78, Moy49, MSSW00}. This product is defined via
the relation
\begin{equation}
\mathcal{W}(f\circledast g)=\mathcal{W}(f)\cdot\mathcal{W}(g),
\end{equation}
being tantamount to%
\begin{equation}
f\circledast g\equiv\mathcal{W}^{-1}\left( \mathcal{W}\left( f\right)
\cdot\mathcal{W}\left( g\right) \right) ,
\end{equation}
where $f$ and $g$ are formal power series in $\mathcal{A}$.
In the case of the extended three-dimensional q-deformed Euclidean space it is
convenient to work with the normal orderings%
\begin{equation}
\mathcal{W}\left( (x^{0})^{n_{0}}(x^{+})^{n_{+}}(x^{3})^{n_{3}}(x^{-}%
)^{n_{-}}\right) =(X^{0})^{n_{0}}(X^{+})^{n_{+}}(X^{3})^{n_{3}}(X^{-}%
)^{n_{-}}, \label{sternid1}%
\end{equation}
and%
\begin{equation}
\widetilde{\mathcal{W}}\left( (x^{0})^{n_{0}}(x^{+})^{n_{+}}(x^{3})^{n_{3}%
}(x^{-})^{n_{-}}\right) =(X^{0})^{n_{0}}(X^{-})^{n_{-}}(X^{3})^{n_{3}}%
(X^{+})^{n_{+}}.
\end{equation}
The star product corresponding to the first choice takes the form \cite{WW01}%
\begin{align}
f(x^{i})\circledast g(x^{j})=\, & \sum_{k=0}^{\infty}\lambda^{k}\frac
{(x^{3})^{2k}}{[[k]]_{q^{4}}!}q^{2(\hat{n}_{x^{3}}\hat{n}_{y^{+}}+\,\hat
{n}_{x^{-}}\hat{n}_{y^{3}})}\nonumber\label{sternformel}\\
\, & \times\,\left. (D_{q^{4}}^{-})^{k}f(x^{i})\cdot(D_{q^{4}}^{+}%
)^{k}g(y^{j})\right\vert _{y\rightarrow x},
\end{align}
and likewise for the second choice,%
\begin{align}
\tilde{f}(x^{i})\circledast\tilde{g}(x^{j})=\, & \sum_{k=0}^{\infty}\left(
-\lambda\right) ^{k}\frac{(x^{3})^{2k}}{[[k]]_{q^{-4}}!}q^{-2(\hat{n}_{x^{3}%
}\hat{n}_{y^{+}}+\,\hat{n}_{x^{-}}\hat{n}_{y^{3}})}\nonumber\\
\, & \times\,\left. (D_{q^{-4}}^{+})^{k}\tilde{f}(x^{i})\cdot(D_{q^{-4}%
}^{-})^{k}\tilde{g}(y^{j})\right\vert _{y\rightarrow x}.
\end{align}
Notice that the tilde on top of the function symbols indicates that the star
product refers to the algebra homomorphism
$\widetilde{\mathcal{W}}$. Extending the three-dimensional q-deformed
Euclidean space by a time element does not really change the operator
expressions for the star product. This observation is a consequence of the
fact that the time element is central in the algebra of quantum space coordinates.
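On polynomials the $k$-sum in (\ref{sternformel}) terminates, so the star product can be tabulated exactly. In the following Python sketch (the dictionary encoding of the exponents $(n_{0},n_{+},n_{3},n_{-})$ of the ordering (\ref{sternid1}) is ours) one finds, for instance, $x^{-}\circledast x^{+}=x^{+}x^{-}+\lambda(x^{3})^{2}$, and the product comes out associative, as it must for a product induced by the algebra isomorphism $\mathcal{W}$:

```python
from fractions import Fraction

q = Fraction(3, 2)                 # arbitrary rational value of q
lam = q - 1/q                      # lambda = q - q^{-1}
Q4 = q**4

def qnum(n, Q):
    return sum(Q**j for j in range(n))

def qfact(n, Q):
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= qnum(j, Q)
    return out

def star(f, g):
    """Star product of Eq. (sternformel) on polynomials encoded as
    {(n0, n+, n3, n-): coeff} with respect to the ordering (sternid1)."""
    out = {}
    for (a0, ap, a3, am), cf in f.items():
        for (b0, bp, b3, bm), cg in g.items():
            for k in range(min(am, bp) + 1):   # (D^-)^k on f, (D^+)^k on g
                co = cf * cg * lam**k / qfact(k, Q4)
                for j in range(k):
                    co *= qnum(am - j, Q4) * qnum(bp - j, Q4)
                # number-operator factor on the differentiated monomials
                co *= q**(2*(a3*(bp - k) + (am - k)*b3))
                key = (a0 + b0, ap + bp - k, a3 + b3 + 2*k, am + bm - k)
                out[key] = out.get(key, Fraction(0)) + co
    return {t: c for t, c in out.items() if c != 0}

xm = {(0, 0, 0, 1): Fraction(1)}   # x^-
xp = {(0, 1, 0, 0): Fraction(1)}   # x^+
# x^- * x^+ in normally ordered form:
assert star(xm, xp) == {(0, 1, 0, 1): 1, (0, 0, 2, 0): lam}
# associativity on a sample triple:
assert star(star(xm, xm), xp) == star(xm, star(xm, xp))
```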
In very much the same way as was done for the braided line we can calculate
actions of partial derivatives on normally ordered monomials by applying the
commutation relations in (\ref{Lei3dimExp1})-(\ref{Lei3dimExp2}). By means of
the algebra isomorphisms (\ref{AlgIso}) these actions carry over to
commutative functions, i.e. we have
\begin{align}
\partial_{i}\triangleright\mathcal{W}(f) & =\mathcal{W}(\partial
_{i}\triangleright f),\quad f\in\mathcal{A}\text{,}\nonumber\\
\mathcal{W}(f)\triangleleft\partial_{i} & =\mathcal{W}(f\triangleleft
\partial_{i}),
\end{align}
or%
\begin{align}
\partial_{i}\triangleright f & \equiv\mathcal{W}^{-1}\left( \partial
_{i}\triangleright\mathcal{W}(f)\right) ,\nonumber\\
f\triangleleft\partial_{i} & \equiv\mathcal{W}^{-1}\left( \mathcal{W}%
(f)\triangleleft\partial_{i}\right) .
\end{align}
In Ref. \cite{BW01} we derived operator representations of
q-deformed partial derivatives by applying these ideas. The results for
the q-deformed three-dimensional Euclidean space can easily be modified to
include the time element $X^{0}$ and the corresponding partial derivative
$\partial_{0}$. In doing so we get%
\begin{align}
\partial_{0}\triangleright f & =\frac{\partial}{\partial x^{0}}f,\nonumber\\
\partial_{+}\triangleright f & =D_{q^{4}}^{+}f,\nonumber\\
\partial_{3}\triangleright f & =D_{q^{2}}^{3}f(q^{2}x^{+}),\nonumber\\
\partial_{-}\triangleright f & =D_{q^{4}}^{-}f(q^{2}x^{3})+\lambda
x^{+}(D_{q^{2}}^{3})^{2}f. \label{DarstAbl}%
\end{align}
The expressions for the other types of actions of partial derivatives follow
from the above formulae by applying the substitutions%
\begin{align}
\partial_{i}\triangleright f & \overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}\hat{\partial}_{\overline{i}}\,\bar{\triangleright
}\,\tilde{f},\nonumber\\
f\,\bar{\triangleleft}\,\partial_{i} & \overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}\tilde{f}\triangleleft\hat{\partial}_{\overline{i}},
\label{TransRule1}%
\end{align}
and%
\begin{align}
\partial_{i}\triangleright f & \overset{+\leftrightarrow-}%
{\longleftrightarrow}f\,\bar{\triangleleft}\,\partial_{\overline{i}%
},\nonumber\\
\hat{\partial}_{i}\,\bar{\triangleright}\,\tilde{f} & \overset
{+\leftrightarrow-}{\longleftrightarrow}\tilde{f}\triangleleft\hat{\partial
}_{\overline{i}}, \label{TransRule2}%
\end{align}
where the symbols $\overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}$ and $\overset{+\leftrightarrow-}{\longleftrightarrow
}$ respectively denote transitions via the substitutions%
\begin{equation}
D_{q^{a}}^{\pm}\rightarrow D_{q^{-a}}^{\mp},\quad x^{\pm}\rightarrow x^{\mp
},\quad q\rightarrow q^{-1},
\end{equation}
and%
\begin{equation}
D_{q^{a}}^{\pm}\rightarrow D_{q^{a}}^{\mp},\quad x^{\pm}\rightarrow x^{\mp}.
\end{equation}
Notice that in (\ref{TransRule1}) and (\ref{TransRule2}) we introduced a
conjugate index with%
\begin{equation}
\overline{(+,3,-,0)}=(-,3,+,0).
\end{equation}
Now, we come to integrals for the extended three-dimensional q-deformed
Euclidean space. To this end, we enhance the algebra of partial derivatives
by introducing inverse elements. The additional relations then read%
\begin{align}
(\partial_{i})^{-1}\partial_{i} & =\partial_{i}(\partial_{i})^{-1}%
=1,\nonumber\\
(\partial_{0})^{-1}\partial_{i} & =\partial_{i}(\partial_{0})^{-1}%
,\nonumber\\
(\partial_{i})^{-1}\partial_{0} & =\partial_{0}(\partial_{i})^{-1},\quad
i\in\{+,3,-,0\},\nonumber\\
(\partial_{3})^{-1}\partial_{\pm} & =q^{\pm2}\partial_{\pm}(\partial
_{3})^{-1},\nonumber\\
(\partial_{\pm})^{-1}\partial_{3} & =q^{\mp2}\partial_{3}(\partial_{\pm
})^{-1},\nonumber\\
(\partial_{+})^{-1}\partial_{-} & =\partial_{-}(\partial_{+})^{-1}%
-q^{-4}\lambda(\partial_{3})^{2}(\partial_{+})^{-2},\nonumber\\
\partial_{+}(\partial_{-})^{-1} & =(\partial_{-})^{-1}\partial_{+}%
-q^{-4}\lambda(\partial_{-})^{-2}(\partial_{3})^{2}.
\end{align}
As a next step we would like to find representations for the inverse partial
derivatives. From a short glance at (\ref{DarstAbl}) it should become obvious
that
\begin{align}
(\partial_{0})^{-1}\big |_{x^{0}=\,a}^{b}\triangleright f & =\int_{a}%
^{b}dx^{0}\,f,\nonumber\\
(\partial_{+})^{-1}\big |_{x^{+}=\,a}^{b}\triangleright f & =(D_{q^{4}}%
^{+})^{-1}\big |_{x^{+}=\,a}^{b}f,\nonumber\\
(\partial_{3})^{-1}\big |_{x^{3}=\,a}^{b}\triangleright f & =(D_{q^{2}}%
^{3})^{-1}\big |_{x^{3}=\,a}^{b}f(q^{-2}x^{+}).
\end{align}
It remains to derive the representation corresponding to\ $(\partial_{-}%
)^{-1}$. To this end, the representation of $\partial_{-}$ is divided up into
a classical part and corrections vanishing in the undeformed limit
$q\rightarrow1$, i.e.%
\begin{equation}
\partial_{-}\triangleright f=(\partial_{-})_{\text{cl}}f+(\partial
_{-})_{\text{cor}}f,
\end{equation}
where
\begin{equation}
(\partial_{-})_{\text{cl}}f=D_{q^{4}}^{-}f(q^{2}x^{3}),\quad(\partial
_{-})_{\text{cor}}f=\lambda x^{+}(D_{q^{2}}^{3})^{2}f.
\end{equation}
Then we can proceed as follows:%
\begin{align}
(\partial_{-})^{-1}\triangleright f & =\frac{1}{(\partial_{-})_{\text{cl}%
}+(\partial_{-})_{\text{cor}}}f=\frac{1}{(\partial_{-})_{\text{cl}}\left(
1+(\partial_{-})_{\text{cl}}^{-1}(\partial_{-})_{\text{cor}}\right)
}f\nonumber\\
& =\frac{1}{1+(\partial_{-})_{\text{cl}}^{-1}(\partial_{-})_{\text{cor}}%
}\cdot\frac{1}{(\partial_{-})_{\text{cl}}}f\nonumber\\
& =\sum_{k=0}^{\infty}\left( -1\right) ^{k}\left[ (\partial
_{-})_{\text{cl}}^{-1}(\partial_{-})_{\text{cor}}\right] ^{k}((\partial
_{-})_{\text{cl}})^{-1}f\nonumber\label{IntegralE3N}\\
& =\sum_{k=0}^{\infty}q^{2k(k+1)}(-\lambda x^{+})^{k}(D_{q^{2}}^{3}%
)^{2k}(D_{q^{4}}^{-})^{-(k+1)}f(q^{-2(k+1)}x^{3}).
\end{align}
In complete analogy to the correspondences in (\ref{TransRule1}) and
(\ref{TransRule2}) the other types of representations follow from the above
formulae by applying the transformation rules%
\begin{align}
(\partial_{i})^{-1}\triangleright f & \overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}(\hat{\partial}_{\overline{i}})^{-1}\,\bar
{\triangleright}\,\tilde{f},\nonumber\\
f\,\bar{\triangleleft}\,(\partial_{i})^{-1} & \overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}\tilde{f}\triangleleft(\hat{\partial}_{\overline{i}%
})^{-1},
\end{align}
and%
\begin{align}
(\partial_{i})^{-1}\triangleright f & \overset{+\leftrightarrow
-}{\longleftrightarrow}f\,\bar{\triangleleft}\,(\partial_{\overline{i}}%
)^{-1},\nonumber\\
(\hat{\partial}_{i})^{-1}\,\bar{\triangleright}\,\tilde{f} & \overset
{+\leftrightarrow-}{\longleftrightarrow}\tilde{f}\triangleleft(\hat{\partial
}_{\overline{i}})^{-1}.
\end{align}
Next, we would like to concern ourselves with q-translations on the
three-dimensional q-deformed Euclidean space. We already mentioned that the
Hopf structures on quantum space coordinates give rise to translations. For
the coproducts on quantum space coordinates we have%
\begin{align}
\Delta_{L}(X^{0})=\, & X^{0}\otimes1+1\otimes X^{0},\nonumber\\
\Delta_{L}(X^{-})=\, & X^{-}\otimes1+\Lambda^{-1/2}\tau^{-1/2}\otimes
X^{-},\nonumber\\
\Delta_{L}(X^{3})=\, & X^{3}\otimes1+\Lambda^{-1/2}\otimes X^{3}%
+\lambda\lambda_{+}\Lambda^{-1/2}L^{+}\otimes X^{-},\nonumber\\
\Delta_{L}(X^{+})=\, & X^{+}\otimes1+\Lambda^{-1/2}\tau^{-1/2}\otimes
X^{+}+q\lambda\lambda_{+}\Lambda^{-1/2}\tau^{1/2}L^{+}\otimes X^{3}\nonumber\\
\, & +\,q^{-2}\lambda^{2}\lambda_{+}\Lambda^{-1/2}\tau^{1/2}(L^{+}%
)^{2}\otimes X^{-},
\end{align}
and
\begin{align}
\Delta_{\bar{L}}(X^{0})=\, & X^{0}\otimes1+1\otimes X^{0},\nonumber\\
\Delta_{\bar{L}}(X^{+})=\, & X^{+}\otimes1+\Lambda^{1/2}\tau^{-1/2}\otimes
X^{+},\nonumber\\
\Delta_{\bar{L}}(X^{3})=\, & X^{3}\otimes1+\Lambda^{1/2}\otimes
X^{3}+\lambda\lambda_{+}\Lambda^{1/2}L^{-}\otimes X^{+},\nonumber\\
\Delta_{\bar{L}}(X^{-})=\, & X^{-}\otimes1+\Lambda^{1/2}\tau^{1/2}\otimes
X^{-}+q^{-1}\lambda\lambda_{+}\Lambda^{1/2}\tau^{1/2}L^{-}\otimes
X^{3}\nonumber\\
\, & +\,q^{-2}\lambda^{2}\lambda_{+}\Lambda^{1/2}\tau^{1/2}(L^{-}%
)^{2}\otimes X^{+}, \label{CoproConN}%
\end{align}
where\ $L^{+},$ $L^{-},$ and $\tau$ denote generators of $U_{q}(su_{2}),$
while $\Lambda$ plays the role of a scaling operator with ($A=\{+,3,-\}$),%
\begin{gather}
\Lambda X^{0}=X^{0}\Lambda,\quad\Lambda X^{A}=q^{4}X^{A}\Lambda,\nonumber\\
\Lambda\partial_{0}=\partial_{0}\Lambda,\quad\Lambda\partial_{A}%
=q^{-4}\partial_{A}\Lambda.
\end{gather}
The corresponding antipodes take the form%
\begin{align}
S_{L}(X^{0})=\, & -X^{0},\nonumber\\
S_{L}(X^{-})=\, & -\Lambda^{1/2}\tau^{1/2}X^{-},\nonumber\\
S_{L}(X^{3})=\, & -\Lambda^{1/2}X^{3}+q^{2}\lambda\lambda_{+}\Lambda
^{1/2}\tau^{1/2}L^{+}X^{-},\nonumber\\
S_{L}(X^{+})=\, & -\Lambda^{1/2}\tau^{-1/2}X^{+}+q\lambda\lambda_{+}%
\Lambda^{1/2}L^{+}X^{3}\nonumber\\
\, & -\,q^{4}\lambda^{2}\lambda_{+}\Lambda^{1/2}\tau^{1/2}(L^{+})^{2}X^{-},
\end{align}
and%
\begin{align}
S_{\bar{L}}(X^{0})= & -X^{0},\nonumber\\
S_{\bar{L}}(X^{+})= & -\Lambda^{-1/2}\tau^{1/2}X^{+},\nonumber\\
S_{\bar{L}}(X^{3})= & -\Lambda^{-1/2}X^{3}+q^{-2}\lambda\lambda_{+}%
\Lambda^{-1/2}\tau^{1/2}L^{-}X^{+},\nonumber\\
S_{\bar{L}}(X^{-})= & -\Lambda^{-1/2}\tau^{-1/2}X^{-}+q^{-1}\lambda
\lambda_{+}\Lambda^{-1/2}L^{-}X^{3}\nonumber\\
& -q^{-4}\lambda^{2}\lambda_{+}\Lambda^{-1/2}\tau^{1/2}(L^{-})^{2}X^{+}.
\end{align}
We see that coproduct and antipode become rather simple on the time element.
This observation is a direct consequence of the fact that the time element is
completely decoupled from position space. For the same reason the Hopf
structures on the subspace spanned by the coordinates $X^{+},$ $X^{3},$ and
$X^{-}$ are identical to those already presented in the work of Ref.
\cite{Wac04}.
It is not very difficult to modify the reasoning in Ref. \cite{Wac04} in such
a way that it takes account of the existence of the time element $X^{0}$. In
this manner, we can show that the above relations imply
\begin{align}
& f(x^{i}\oplus_{\bar{L}}y^{j})=\sum_{k_{0}=0}^{n_{0}}\sum_{k_{+}=0}^{n_{+}%
}\sum_{k_{3}=0}^{n_{3}}\sum_{k_{-}=0}^{n_{-}}\sum_{l=0}^{k_{3}}(q\lambda
\lambda_{+})^{l}\nonumber\\
& \qquad\times\frac{(x^{0})^{k_{0}}(x^{+})^{k_{+}}(x^{3})^{k_{3}-l}%
(x^{-})^{k_{-}+\,l}(y^{+})^{l}}{k_{0}![[2l]]_{q^{2}}!![[k_{+}]]_{q^{4}%
}![[k_{3}-l]]_{q^{2}}![[k_{-}]]_{q^{4}}!}\nonumber\\
& \qquad\times\,\Big ((D_{q^{4}}^{+})^{k_{+}}(D_{q^{2}}^{3})^{k_{3}%
+l}(D_{q^{4}}^{-})^{k_{-}}\Big (\frac{\partial}{\partial x^{0}}\Big )^{k_{0}%
}f\Big )(q^{2(k_{3}-l)}y^{+},q^{2k_{-}}y^{3}), \label{KomCoprDreiN}%
\end{align}
and
\begin{align}
& \hat{U}(f(\ominus_{\bar{L}}\,x^{i}))=\sum_{k=0}^{\infty}(q^{-1}%
\lambda\lambda_{+})^{k}q^{4k^{2}}\,\frac{(x^{+}x^{-})^{k}}{[[2k]]_{q^{2}}%
!!}\nonumber\\
& \qquad\times(D_{q^{2}}^{3})^{2k}\,q^{2(\hat{n}_{+}^{2}+\,\hat{n}_{-}%
^{2})+\hat{n}_{3}(2\hat{n}_{+}+\,2\hat{n}_{-}+\,\,\hat{n}_{3}-1)}%
\,f(-x^{0},-x^{+},-q^{-2k}x^{3},-x^{-}), \label{KomAntDreiN}%
\end{align}
where
\begin{equation}
\lbrack\lbrack2k]]_{q^{2}}!!=[[2k]]_{q^{2}}[[2(k-1)]]_{q^{2}}\ldots
\lbrack\lbrack2]]_{q^{2}}.
\end{equation}
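As a small numerical sketch of this definition, the snippet below computes $[[2k]]_{q^{2}}!!$, assuming the common convention $[[n]]_{q}=(q^{n}-1)/(q-1)$ for the q-numbers; in the limit $q\rightarrow1$ one recovers the ordinary double factorial $(2k)!!=2^{k}k!$.

```python
def q_number(n, q):
    """q-number [[n]]_q = (q^n - 1)/(q - 1); equals n at q = 1."""
    if q == 1:
        return float(n)
    return (q**n - 1) / (q - 1)

def q_double_factorial(k, q):
    """[[2k]]_q!! = [[2k]]_q [[2(k-1)]]_q ... [[2]]_q, with empty product 1."""
    result = 1.0
    for m in range(2, 2 * k + 1, 2):
        result *= q_number(m, q)
    return result

# Classical limit: [[2k]]_{q^2}!! -> (2k)!! = 2^k k! as q -> 1
print(q_double_factorial(3, 1))      # 2 * 4 * 6 = 48.0
print(q_double_factorial(3, 0.81))   # the q-deformed value at q^2 = 0.81
```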
The operator $\hat{U}$ in Eq. (\ref{KomAntDreiN}) transforms a function
written for the normal ordering $X^{+}X^{3}X^{-}$ into another function
representing the same element for the reversed ordering. Its explicit form was
presented in Ref.\ \cite{BW01}. The expressions corresponding to the other Hopf
structures are obtained from the formulae in (\ref{KomCoprDreiN}) and
(\ref{KomAntDreiN}) most easily by means of the transitions%
\begin{equation}
f(x^{i}\oplus_{\bar{L}}y^{j})\overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}\tilde{f}(x^{i}\,\oplus_{L}\,y^{j}),
\end{equation}
and
\begin{equation}
\hat{U}(f(\ominus_{\bar{L}}\,x^{i}))\overset{{%
\genfrac{}{}{0pt}{}{\pm}{q}%
}{%
\genfrac{}{}{0pt}{}{\rightarrow}{\rightarrow}%
}{%
\genfrac{}{}{0pt}{}{\mp}{1/q}%
}}{\longleftrightarrow}\hat{U}^{-1}(\tilde{f}(\ominus_{L}\,x^{i})).
\end{equation}
The tilde again reminds us of the fact that the function refers to reversed
normal ordering.
Next, we would like to say a few words about braided products on the extended
q-deformed Euclidean space in three dimensions. As we know, braided products
describe how elements of different quantum spaces commute. In this sense they
are an essential ingredient for formulating multiplication on tensor products
of quantum spaces. The entries $\mathcal{L}_{j}^{i}$ of the so-called L-matrix
determine the braiding of the quantum space coordinates $X^{i},$
$i\in\{0,+,3,-\}$ (if not stated otherwise summation over repeated indices is
to be understood):%
\begin{equation}
X^{i}\odot_{L}w=(\mathcal{L}_{j}^{i}\triangleright w)\otimes X^{j}.
\end{equation}
The explicit form of the L-matrix can be read off from the coproduct on
coordinates, since it holds%
\begin{equation}
\Delta_{L}(X^{i})=X^{i}\otimes1+\mathcal{L}_{j}^{i}\otimes X^{j}.
\end{equation}
In very much the same way we have%
\begin{equation}
X^{i}\odot_{\bar{L}}w=(\mathcal{\bar{L}}_{j}^{i}\triangleright w)\otimes
X^{j},
\end{equation}
and%
\begin{equation}
\Delta_{\bar{L}}(X^{i})=X^{i}\otimes1+\mathcal{\bar{L}}_{j}^{i}\otimes X^{j}.
\end{equation}
These considerations are consistent with the observation that the time
coordinate $X^{0}$ shows trivial braiding. In Ref. \cite{Wac05} we presented
operator expressions that realize braided products for the three-dimensional
q-deformed Euclidean space on a commutative coordinate algebra. Due to the
trivial braiding of the time coordinate these expressions carry over to the
extended three-dimensional q-deformed Euclidean space without any changes.
Last but not least, we come to dual pairings and q-exponentials. We already
recalled their definition in Sec.\thinspace\ref{QAnBraid}. With the results of
Ref. \cite{Wac03} it is not very difficult to show that%
\begin{align}
& \big \langle(\partial_{0})^{n_{0}}(\partial_{-})^{n_{-}}(\partial
_{3})^{n_{3}}(\partial_{+})^{n_{+}},(X^{0})^{m_{0}}(X^{+})^{m_{+}}%
(X^{3})^{m_{3}}(X^{-})^{m_{-}}\big \rangle_{L,\bar{R}}=\nonumber\\
& \qquad=\delta_{m_{-},n_{-}}\delta_{m_{3},n_{3}}\delta_{m_{+},n_{+}}%
\delta_{m_{0},n_{0}}m_{0}!\,[[m_{+}]]_{q^{4}}!\,[[m_{3}]]_{q^{2}}%
!\,[[m_{-}]]_{q^{4}}!,\\[0.16in]
& \big \langle(\hat{\partial}_{0})^{n_{0}}(\hat{\partial}_{+})^{n_{+}}%
(\hat{\partial}_{3})^{n_{3}}(\hat{\partial}_{-})^{n_{-}},(X^{0})^{m_{0}}%
(X^{-})^{m_{-}}(X^{3})^{m_{3}}(X^{+})^{m_{+}}\big \rangle_{\bar{L}%
,R}=\nonumber\\
& \qquad=\delta_{m_{-},n_{-}}\delta_{m_{3},n_{3}}\delta_{m_{+},n_{+}}%
\delta_{m_{0},n_{0}}m_{0}!\,[[m_{+}]]_{q^{-4}}!\,[[m_{3}]]_{q^{-2}}%
!\,[[m_{-}]]_{q^{-4}}!,
\end{align}
and%
\begin{align}
& \big \langle(X^{0})^{m_{0}}(X^{+})^{m_{+}}(X^{3})^{m_{3}}(X^{-})^{m_{-}%
},(\partial_{0})^{n_{0}}(\partial_{-})^{n_{-}}(\partial_{3})^{n_{3}}%
(\partial_{+})^{n_{+}}\big \rangle_{L,\bar{R}}=\nonumber\\
& \qquad=(-1)^{n_{0}+n_{+}+n_{3}+n_{-}}\delta_{m_{-},n_{-}}\delta
_{m_{3},n_{3}}\delta_{m_{+},n_{+}}\delta_{m_{0},n_{0}}\nonumber\\
& \qquad\hspace{0.16in}\times m_{0}!\,[[m_{+}]]_{q^{4}}!\,[[m_{3}]]_{q^{2}%
}!\,[[m_{-}]]_{q^{4}}!,\\[0.16in]
& \big \langle(X^{0})^{m_{0}}(X^{-})^{m_{-}}(X^{3})^{m_{3}}(X^{+})^{m_{+}%
},(\hat{\partial}_{0})^{n_{0}}(\hat{\partial}_{+})^{n_{+}}(\hat{\partial}%
_{3})^{n_{3}}(\hat{\partial}_{-})^{n_{-}}\big \rangle_{\bar{L},R}=\nonumber\\
& \qquad=(-1)^{n_{0}+n_{+}+n_{3}+n_{-}}\delta_{m_{-},n_{-}}\delta
_{m_{3},n_{3}}\delta_{m_{+},n_{+}}\delta_{m_{0},n_{0}}\nonumber\\
& \qquad\hspace{0.16in}\times m_{0}!\,[[m_{+}]]_{q^{-4}}!\,[[m_{3}]]_{q^{-2}%
}!\,[[m_{-}]]_{q^{-4}}!.
\end{align}
From these pairings we can read off the explicit form of q-exponentials for
the extended three-dimensional q-deformed Euclidean space:%
\begin{align}
& \exp(x^{i}|\partial_{j})_{\bar{R},L}=\nonumber\\
& \qquad=\sum_{\underline{n}=0}^{\infty}\frac{(x^{0})^{n_{0}}(x^{+})^{n_{+}%
}(x^{3})^{n_{3}}(x^{-})^{n_{-}}\otimes(\partial_{0})^{n_{0}}(\partial
_{-})^{n_{-}}(\partial_{3})^{n_{3}}(\partial_{+})^{n_{+}}}{n_{0}%
!\,[[n_{+}]]_{q^{4}}!\,[[n_{3}]]_{q^{2}}!\,[[n_{-}]]_{q^{4}}!}%
,\label{Exp3dimAnf}\\[0.1in]
& \exp(x^{i}|\hat{\partial}_{j})_{R,\bar{L}}=\nonumber\\
& \qquad=\sum_{\underline{n}=0}^{\infty}\frac{(x^{0})^{n_{0}}(x^{-})^{n_{-}%
}(x^{3})^{n_{3}}(x^{+})^{n_{+}}\otimes(\hat{\partial}_{0})^{n_{0}}%
(\hat{\partial}_{+})^{n_{+}}(\hat{\partial}_{3})^{n_{3}}(\hat{\partial}%
_{-})^{n_{-}}}{n_{0}!\,[[n_{+}]]_{q^{-4}}!\,[[n_{3}]]_{q^{-2}}!\,[[n_{-}%
]]_{q^{-4}}!}.
\end{align}
The expressions for the other exponentials follow from these formulae by
applying the transformations%
\begin{align}
& \exp(x^{i}|\partial_{j})_{\bar{R},L}{}\overset{+\leftrightarrow
-}{\longleftrightarrow}\exp(\partial_{i}|x^{j})_{\bar{R},L},\nonumber\\
& \exp(x^{i}|\hat{\partial}_{j})_{R,\bar{L}}{}\overset{+\leftrightarrow
-}{\longleftrightarrow}\exp(\hat{\partial}_{i}|x^{j})_{R,\bar{L}%
},\label{Exp3dimEnd}%
\end{align}
where the symbol $\overset{+\leftrightarrow-}{\longleftrightarrow}$ denotes a
transition between the two expressions via one of the following substitutions:%
\begin{align}
\text{a)}\quad X^{i} & \leftrightarrow-\partial_{i},\quad\partial
_{i}\leftrightarrow X^{i},\nonumber\\
\text{b)}\quad X^{i} & \leftrightarrow-\hat{\partial}_{i},\quad\hat{\partial
}_{i}\leftrightarrow X^{i}.
\end{align}
\section{Time evolution operator\label{SecTimEvo}}
In this section we discuss how wave functions on the quantum spaces under
consideration change in time. First of all, we recall that
translations on quantum spaces are generated by q-exponentials \cite{qAn,
Maj95, Maj93-5, Wac04, SW04}. This observation leads us to q-deformed Taylor
rules which take the form \cite{qAn}%
\begin{align}
\exp(x^{i}\oplus_{\bar{L}}(\ominus_{\bar{L}}\,y^{j})|\partial_{k})_{\bar{R}%
,L}\overset{\partial|y}{\triangleright}g(y^{l}) & =g(x^{i}),\nonumber\\
\exp(x^{i}\oplus_{L}(\ominus_{L}\,y^{j})|\hat{\partial}_{k})_{R,\bar{L}%
}\,\overset{\partial|y}{\bar{\triangleright}}\,g(y^{k}) & =g(x^{i}),
\label{q-TayRec}%
\end{align}
and%
\begin{align}
g(y^{l})\,\overset{y|\partial}{\bar{\triangleleft}}\,\exp(\partial
_{k}|(\ominus_{R}\,y^{j})\oplus_{R}x^{i})_{\bar{R},L} & =g(x^{i}%
),\nonumber\\
g(y^{l})\overset{y|\partial}{\triangleleft}\exp(\hat{\partial}_{k}%
|(\ominus_{\bar{R}}\,y^{j})\oplus_{\bar{R}}x^{i})_{R,\bar{L}} & =g(x^{i}).
\label{q-TayRecN}%
\end{align}
For a correct understanding of these expressions see also Refs. \cite{qAn,
Wac04}.
If the q-deformed Taylor rules are to describe translations in time only, they
have to be modified as follows:%
\begin{align}
\big [\exp(x^{i}\oplus_{\bar{L}}(\ominus_{\bar{L}}\,y^{j})|\partial_{k}%
)_{\bar{R},L}\overset{\partial|y}{\triangleright}g(y^{l})\big ]_{x^{A}%
=\,y^{A}} & =g(y^{i})\big |_{y^{0}=\,x^{0}},\nonumber\\
\big [\exp(x^{i}\oplus_{L}(\ominus_{L}\,y^{j})|\hat{\partial}_{k})_{R,\bar{L}%
}\,\overset{\partial|y}{\bar{\triangleright}}\,g(y^{l})\big ]_{x^{A}=\,y^{A}}
& =g(y^{i})\big |_{y^{0}=\,x^{0}},
\end{align}
and%
\begin{align}
\big [g(y^{l})\,\overset{y|\partial}{\bar{\triangleleft}}\,\exp(\partial
_{k}|(\ominus_{R}\,y^{j})\oplus_{R}x^{i})_{\bar{R},L}\big ]_{x^{A}=\,y^{A}}
& =g(y^{i})\big |_{y^{0}=\,x^{0}},\nonumber\\
\big [g(y^{l})\overset{y|\partial}{\triangleleft}\exp(\hat{\partial}%
_{k}|(\ominus_{\bar{R}}\,y^{j})\oplus_{\bar{R}}x^{i})_{R,\bar{L}}%
\big ]_{x^{A}=\,y^{A}} & =g(y^{i})\big |_{y^{0}=\,x^{0}},
\end{align}
where $A$ represents the indices $(+,3,-)$. In the above expressions we first
perform a general translation and then require that the space coordinates of
the translated function take on the same values as the original function.
Since space and time are completely decoupled from each other, the above
formulae simplify to
\begin{align}
\big [\exp(x^{0}\otimes\partial_{0})\overset{\partial|y}{\triangleright
}g(y^{i})\big ]_{y^{0}=\,0} & =g(y^{i})\big |_{y^{0}=\,x^{0}},\nonumber\\
\big [\exp(x^{0}\otimes\hat{\partial}_{0})\overset{\partial|y}{\triangleright
}g(y^{i})\big ]_{y^{0}=\,0} & =g(y^{i})\big |_{y^{0}=\,x^{0}},
\end{align}
and%
\begin{align}
\big [g(y^{i})\,\overset{y|\partial}{\bar{\triangleleft}}\,\exp(-\partial
_{0}\otimes x^{0})\big ]_{y^{0}=\,0} & =g(y^{i})\big |_{y^{0}=\,x^{0}%
},\nonumber\\
\big [g(y^{i})\overset{y|\partial}{\triangleleft}\exp(-\hat{\partial}%
_{0}\otimes x^{0})\big ]_{y^{0}=\,0} & =g(y^{i})\big |_{y^{0}=\,x^{0}}.
\end{align}
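In the undeformed limit these formulae reduce to the familiar statement that $\exp(x^{0}\partial_{0})$ shifts the time argument of a function. A minimal classical check on a polynomial, where the exponential series terminates, can be written as follows (the function name and coefficients are illustrative choices):

```python
import numpy as np

def shift_by_exponential(coeffs, a, order=10):
    """Apply exp(a d/dy) to a polynomial given by its coefficient list
    (lowest degree first), truncating the exponential series at `order`."""
    p = np.polynomial.Polynomial(coeffs)
    result = np.polynomial.Polynomial([0.0])
    term = p
    for n in range(order):
        result = result + term            # accumulates a^n p^{(n)} / n!
        term = term.deriv() * (a / (n + 1))
    return result

g = [1.0, -2.0, 0.5, 3.0]                 # g(y) = 1 - 2y + 0.5y^2 + 3y^3
a = 0.7
shifted = shift_by_exponential(g, a)
y = 1.3
print(shifted(y))
print(np.polynomial.Polynomial(g)(y + a))  # same value: g(y + a)
```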
In quantum mechanics the values a wave function takes in space at a given time
completely determine its behavior at all later times. This requires that time
derivatives acting on wave functions can be substituted by a linear operator
$\text{i}^{-1}H$ that acts on space coordinates only and has the same
algebraic properties as the time derivative $\partial_{0}$.
In this manner, it should be clear that for the time evolution operator we
have%
\begin{align}
\phi(x^{A},t) & =\mathcal{U}_{L}(t,t^{\prime}=0)\overset{x}{\triangleright
}\phi(x^{A},t^{\prime}=0)\nonumber\\
& =\mathcal{U}_{\bar{L}}(t,t^{\prime}=0)\overset{x}{\triangleright}\phi
(x^{A},t^{\prime}=0)\nonumber\\
& =\phi(x^{A},t^{\prime}=0)\overset{x}{\triangleleft}\mathcal{U}%
_{R}(t,t^{\prime}=0)\nonumber\\
& =\phi(x^{A},t^{\prime}=0)\overset{x}{\triangleleft}\mathcal{U}_{\bar{R}%
}(t,t^{\prime}=0), \label{TimEvoId1}%
\end{align}
where%
\begin{align}
\mathcal{U}_{L}(t,t^{\prime} & =0)=\mathcal{U}_{\bar{L}}(t,t^{\prime
}=0)\equiv\exp(-t\otimes\text{i}H),\\[0.08in]
\mathcal{U}_{R}(t,t^{\prime} & =0)=\mathcal{U}_{\bar{R}}(t,t^{\prime
}=0)\equiv\exp(\text{i}H\otimes t).
\end{align}
We see that the time evolution operator is of the same form as in the
undeformed case. In the remainder of this section we collect basic properties
of the time evolution operators. This is mainly done for the purpose of
providing consistent notation.
First of all, we are seeking operators $\mathcal{U}_{\gamma}^{-1}(t,t^{\prime
}=0),$ $\gamma\in\{L,\bar{L},R,\bar{R}\},$ with\
\begin{align}
& \mathcal{U}_{\gamma}(t,t^{\prime}=0)\,\mathcal{U}_{\gamma}^{-1}%
(t,t^{\prime}=0)=1,\nonumber\\
& \mathcal{U}_{\gamma}^{-1}(t,t^{\prime}=0)\,\mathcal{U}_{\gamma}%
(t,t^{\prime}=0)=1.
\end{align}
One readily checks that%
\begin{align}
& \mathcal{U}_{\alpha}^{-1}(t,t^{\prime}=0)\equiv\mathcal{U}_{\alpha
}(-t,t^{\prime}=0)=\exp(t\otimes\text{i}H),\\[0.08in]
& \mathcal{U}_{\beta}^{-1}(t,t^{\prime}=0)\equiv\mathcal{U}_{\beta
}(-t,t^{\prime}=0)=\exp(-\text{i}H\otimes t),
\end{align}
where $\alpha\in\{L,\bar{L}\}$ and $\beta\in\{R,\bar{R}\}.$ As a direct
consequence of these identities we have%
\begin{align}
\phi(x^{A},t^{\prime}=0) & =\,\mathcal{U}_{\alpha}^{-1}(t,t^{\prime
}=0)\triangleright\phi(x^{A},t)\nonumber\\
& =\,\phi(x^{A},t)\triangleleft\mathcal{U}_{\beta}^{-1}(t,t^{\prime}=0).
\label{TimEvoId2N}%
\end{align}
The operators $\mathcal{U}_{\gamma}^{-1}(t,t^{\prime}=0)$ describe particles
traveling backwards in time, since we have%
\begin{align}
\phi(x^{A},-t) & =\,\mathcal{U}_{\alpha}^{-1}(t,t^{\prime}=0)\triangleright
\phi(x^{A},t^{\prime}=0)\nonumber\\
& =\,\phi(x^{A},t^{\prime}=0)\triangleleft\mathcal{U}_{\beta}^{-1}%
(t,t^{\prime}=0).
\end{align}
Now, we are in a position to generalize the time evolution operators by
\begin{align}
& \mathcal{U}_{\alpha}(t,t^{\prime})\equiv\mathcal{U}_{\alpha}(t,t^{\prime
\prime}=0)\,\mathcal{U}_{\alpha}^{-1}(t^{\prime},t^{\prime\prime}%
=0)=\exp(-(t-t^{\prime})\otimes\text{i}H),\label{GenTimEvo1}\\[0.08in]
& \mathcal{U}_{\beta}(t,t^{\prime})\equiv\mathcal{U}_{\beta}^{-1}(t^{\prime
},t^{\prime\prime}=0)\,\mathcal{U}_{\beta}(t,t^{\prime\prime}=0)=\exp
(-\text{i}H\otimes(t^{\prime}-t)). \label{GenTimEvo2}%
\end{align}
The new operators tell us how wave functions change under a time displacement
$t^{\prime}\rightarrow t$:%
\begin{equation}
\phi(x^{A},t)=\mathcal{U}_{\alpha}(t,t^{\prime})\triangleright\phi
(x^{A},t^{\prime})=\phi(x^{A},t^{\prime})\triangleleft\mathcal{U}_{\beta
}(t,t^{\prime}). \label{TimDisp}%
\end{equation}
To prove these equalities one can apply the identities in (\ref{TimEvoId1})
and (\ref{TimEvoId2N}). An essential feature of the time evolution operators
in (\ref{GenTimEvo1}) and (\ref{GenTimEvo2}) is the composition property,%
\begin{align}
\mathcal{U}_{\alpha}(t,t^{\prime}) & =\mathcal{U}_{\alpha}(t,0)\,\mathcal{U}%
_{\alpha}^{-1}(t^{\prime},0)\nonumber\\
& =\mathcal{U}_{\alpha}(t,0)\,\mathcal{U}_{\alpha}^{-1}(t^{\prime\prime
},0)\,\mathcal{U}_{\alpha}(t^{\prime\prime},0)\,\mathcal{U}_{\alpha}%
^{-1}(t^{\prime},0)\nonumber\\
& =\mathcal{U}_{\alpha}(t,t^{\prime\prime})\,\mathcal{U}_{\alpha}%
(t^{\prime\prime},t^{\prime}),
\end{align}
and%
\begin{align}
\mathcal{U}_{\beta}(t,t^{\prime}) & =\mathcal{U}_{\beta}^{-1}(t^{\prime
},0)\,\mathcal{U}_{\beta}(t,0)\nonumber\\
& =\mathcal{U}_{\beta}^{-1}(t^{\prime},0)\,\mathcal{U}_{\beta}(t^{\prime
\prime},0)\,\mathcal{U}_{\beta}^{-1}(t^{\prime\prime},0)\,\mathcal{U}_{\beta
}(t,0)\nonumber\\
& =\mathcal{U}_{\beta}(t^{\prime\prime},t^{\prime})\,\mathcal{U}_{\beta
}(t,t^{\prime\prime}).
\end{align}
Next, we would like to consider operators $\mathcal{U}_{\gamma}^{-1}%
(t,t^{\prime})$ being subject to%
\begin{equation}
\mathcal{U}_{\gamma}(t,t^{\prime})\,\mathcal{U}_{\gamma}^{-1}(t,t^{\prime
})=\mathcal{U}_{\gamma}^{-1}(t,t^{\prime})\,\mathcal{U}_{\gamma}(t,t^{\prime
})=1.
\end{equation}
They are given by%
\begin{align}
\mathcal{U}_{\alpha}^{-1}(t,t^{\prime}) & \equiv\mathcal{U}_{\alpha
}(t^{\prime},t^{\prime\prime}=0)\,\mathcal{U}_{\alpha}^{-1}(t,t^{\prime\prime
}=0)=\mathcal{U}_{\alpha}(t^{\prime},t)\nonumber\\
& =\mathcal{U}_{\alpha}(-t,-t^{\prime})=\exp((t-t^{\prime})\otimes
\text{i}H),\\[0.16in]
\mathcal{U}_{\beta}^{-1}(t,t^{\prime}) & \equiv\mathcal{U}_{\beta}%
^{-1}(t,t^{\prime\prime}=0)\,\mathcal{U}_{\beta}(t^{\prime},t^{\prime\prime
}=0)=\mathcal{U}_{\beta}(t^{\prime},t)\nonumber\\
& =\mathcal{U}_{\beta}(-t,-t^{\prime})=\exp(\text{i}H\otimes(t^{\prime}-t)).
\end{align}
These operators reverse the time displacement $t^{\prime}\rightarrow t$:
\begin{equation}
\phi(x^{A},t^{\prime})=\mathcal{U}_{\alpha}^{-1}(t,t^{\prime})\triangleright
\phi(x^{A},t)=\phi(x^{A},t)\triangleleft\mathcal{U}_{\beta}^{-1}(t,t^{\prime
}).
\end{equation}
Last but not least, we would like to mention some simplifications. Let us
recall that the time coordinate shows trivial braiding. Thus, the tensor
products in the expressions on the right-hand side of (\ref{GenTimEvo1}) and
(\ref{GenTimEvo2}) can be omitted. Concretely, we can make the identifications%
\begin{equation}
\exp(t\otimes\text{i}H)=\exp(\text{i}H\otimes t)=\exp(\text{i}Ht),
\end{equation}
and%
\begin{align}
\mathcal{U}(t,t^{\prime}) & =\mathcal{U}_{\alpha}(t,t^{\prime}%
)=\mathcal{U}_{\beta}(-t,-t^{\prime})\nonumber\\
& =\mathcal{U}_{\alpha}^{-1}(-t,-t^{\prime})=\mathcal{U}_{\beta}%
^{-1}(t,t^{\prime}),
\end{align}
where
\begin{equation}
\mathcal{U}(t,t^{\prime})=\exp(-\text{i}H(t-t^{\prime})).
\end{equation}
It is obvious that the operator $\mathcal{U}(t,t^{\prime})$ becomes unitary if
the Hamiltonian $H$ is assumed to be Hermitian:%
\begin{equation}
\mathcal{U}^{-1}(t,t^{\prime})=\mathcal{U}(-t,-t^{\prime})=\mathcal{U}%
(t^{\prime},t)=\mathcal{U}^{\dag}(t,t^{\prime}).
\end{equation}
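Since $\mathcal{U}(t,t^{\prime})$ takes the same form as in the undeformed case, its composition, inversion, and unitarity properties can be verified with an ordinary matrix exponential. The sketch below uses a generic (randomly chosen) Hermitian $H$ and the spectral decomposition $\exp(-\text{i}Hs)=Ve^{-\text{i}\lambda s}V^{\dagger}$:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (M + M.conj().T) / 2                  # generic Hermitian Hamiltonian

evals, V = np.linalg.eigh(H)

def U(t, tp):
    """exp(-iH(t - t')) via the spectral decomposition of H."""
    return (V * np.exp(-1j * evals * (t - tp))) @ V.conj().T

t, tp, tpp = 2.0, 0.5, 1.2
print(np.allclose(U(t, tp), U(t, tpp) @ U(tpp, tp)))    # composition property
print(np.allclose(np.linalg.inv(U(t, tp)), U(tp, t)))   # inverse reverses times
print(np.allclose(U(tp, t), U(t, tp).conj().T))         # unitarity
```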
\section{Schr\"{o}dinger and Heisenberg picture\label{SHPic}}
In the last section we found that the time evolution operator on the quantum
spaces under consideration is of the same form as its undeformed counterpart.
For this reason we should be able to introduce the Heisenberg and the
Schr\"{o}dinger picture on our quantum spaces along the same line of
reasonings as in the undeformed case (see for example Ref. \cite{Sak94}).
To begin with we derive differential equations for the time evolution
operators. We have
\begin{align}
\partial_{0}\overset{t}{\triangleright}\mathcal{U}_{L}(t,t^{\prime}) &
=\partial_{0}\overset{t}{\triangleright}\mathcal{U}_{L}(t,0)\,\mathcal{U}%
_{L}^{-1}(t^{\prime},0)\nonumber\\
& =\partial_{0}\overset{t}{\triangleright}\exp(-t\otimes\text{i}%
H)\,\mathcal{U}_{L}^{-1}(t^{\prime},0)\nonumber\\
& =\exp(-t\otimes\text{i}H)\,(-\text{i}H)\,\mathcal{U}_{L}^{-1}(t^{\prime
},0)\nonumber\\
& =-\text{i}H\exp(-t\otimes\text{i}H)\,\mathcal{U}_{L}^{-1}(t^{\prime
},0)\nonumber\\
& =-\text{i}H\,\mathcal{U}_{L}(t,0)\,\mathcal{U}_{L}^{-1}(t^{\prime
},0)\nonumber\\
& =-\text{i}H\,\mathcal{U}_{L}(t,t^{\prime}),
\end{align}
and, likewise,
\begin{align}
\mathcal{U}_{R}(t,t^{\prime})\overset{t}{\triangleleft}\hat{\partial}_{0} &
=\mathcal{U}_{R}^{-1}(t^{\prime},0)\,\mathcal{U}_{R}(t,0)\overset
{t}{\triangleleft}\hat{\partial}_{0}\nonumber\\
& =\mathcal{U}_{R}^{-1}(t^{\prime},0)\,\exp(\text{i}H\otimes t)\overset
{t}{\triangleleft}\hat{\partial}_{0}\nonumber\\
& =\mathcal{U}_{R}^{-1}(t^{\prime},0)\,(-\text{i}H)\,\exp(\text{i}H\otimes
t)\nonumber\\
& =\mathcal{U}_{R}^{-1}(t^{\prime},0)\,\exp(\text{i}H\otimes t)\,(-\text{i}%
H)\nonumber\\
& =\mathcal{U}_{R}^{-1}(t^{\prime},0)\,\mathcal{U}_{R}(t,0)\,(-\text{i}%
H)\nonumber\\
& =\mathcal{U}_{R}(t,t^{\prime})\,(-\text{i}H).
\end{align}
In this manner, we find\ that%
\begin{align}
\text{i}\partial_{0}\overset{t}{\triangleright}\mathcal{U}_{L}(t,t^{\prime})
& =H\,\mathcal{U}_{L}(t,t^{\prime}),\nonumber\\
\text{i}\hat{\partial}_{0}\,\overset{t}{\bar{\triangleright}}\,\mathcal{U}%
_{\bar{L}}(t,t^{\prime}) & =H\,\mathcal{U}_{\bar{L}}(t,t^{\prime}),
\label{SchoEq1}%
\end{align}
and%
\begin{align}
\mathcal{U}_{R}(t,t^{\prime})\overset{t}{\triangleleft}(\text{i}\hat{\partial
}_{0}) & =\mathcal{U}_{R}(t,t^{\prime})H,\nonumber\\
\mathcal{U}_{\bar{R}}(t,t^{\prime})\,\overset{t}{\bar{\triangleleft}%
}\,(\text{i}\partial_{0}) & =\mathcal{U}_{\bar{R}}(t,t^{\prime})H.
\label{SchroEq2}%
\end{align}
The above equations, which are often referred to as Schr\"{o}dinger equations
of the time evolution operator, correspond to different geometries. From the
considerations so far one might conclude that the equations in
(\ref{SchoEq1}) and (\ref{SchroEq2}) are not really different from each other,
so that the distinction seems unnecessary. This is not the case, however,
since the realization of the Hamiltonian often depends on the choice of
geometry.
It should also be mentioned that the differential equations in (\ref{SchoEq1})
and (\ref{SchroEq2}) are equivalent to the integral equations%
\begin{align}
\mathcal{U}_{\alpha}(t,t^{\prime}) & =1-\text{i}\int_{t^{\prime}}%
^{t}dt^{\prime\prime}\,H\,\mathcal{U}_{\alpha}(t^{\prime\prime},t^{\prime
}),\nonumber\\
\mathcal{U}_{\beta}(t,t^{\prime}) & =1+\text{i}\int_{t^{\prime}}%
^{t}dt^{\prime\prime}\,\mathcal{U}_{\beta}(t^{\prime\prime},t^{\prime})H,
\end{align}
if we require%
\begin{equation}
\mathcal{U}_{\gamma}(t,t)=1.
\end{equation}
Formal solutions are given by%
\begin{align}
\mathcal{U}_{\alpha}(t,t^{\prime}) & =1+\sum_{n=1}^{\infty}\text{i}^{-n}%
\int_{t^{\prime}}^{t}dt_{1}\int_{t^{\prime}}^{t_{1}}dt_{2}\ldots
\int_{t^{\prime}}^{t_{n-1}}dt_{n}\,H(t_{1})H(t_{2})\ldots H(t_{n}),\nonumber\\
\mathcal{U}_{\beta}(t,t^{\prime}) & =1+\sum_{n=1}^{\infty}\text{i}^{n}%
\int_{t^{\prime}}^{t}dt_{1}\int_{t^{\prime}}^{t_{1}}dt_{2}\ldots
\int_{t^{\prime}}^{t_{n-1}}dt_{n}\,H(t_{n})H(t_{n-1})\ldots H(t_{1}).
\end{align}
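For a time-independent Hamiltonian the nested integrals in these formal solutions evaluate to $(t-t^{\prime})^{n}/n!$, so the series collapses to the exponential series for $\exp(-\text{i}H(t-t^{\prime}))$. The following is a numerical check of this special case only, not of the general time-ordered situation:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (M + M.conj().T) / 2                  # constant Hermitian Hamiltonian
t, tp = 1.0, 0.2

# For constant H the n-fold nested integral gives (t - t')^n / n!, so the
# Dyson series reduces to the power series of exp(-iH(t - t')).
series = np.eye(3, dtype=complex)
term = np.eye(3, dtype=complex)
for n in range(1, 40):
    term = term @ (-1j * H * (t - tp)) / n
    series = series + term

evals, V = np.linalg.eigh(H)
exact = (V * np.exp(-1j * evals * (t - tp))) @ V.conj().T
print(np.max(np.abs(series - exact)))     # truncation error, essentially zero
```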
Let us recall that in the Schr\"{o}dinger picture wave functions vary with
time, while observables like $X^{i}$ and $P^{i}$ are fixed in time. We obtain
the equations of motion in the Schr\"{o}dinger picture by combining
(\ref{TimDisp}) with the equations in (\ref{SchoEq1}) and (\ref{SchroEq2}).
Proceeding in this manner yields%
\begin{align}
\text{i}\partial_{0}\overset{t}{\triangleright}\phi(x^{A},t) &
=\text{i}\partial_{0}\overset{t}{\triangleright}\mathcal{U}_{L}(t,t^{\prime
})\triangleright\phi(x^{A},t^{\prime})\nonumber\\
& =H\,\mathcal{U}_{L}(t,t^{\prime})\triangleright\phi(x^{A},t^{\prime
})=H\overset{x}{\triangleright}\phi(x^{A},t),\\[0.1in]
\text{i}\hat{\partial}_{0}\,\overset{t}{\bar{\triangleright}}\,\phi(x^{A},t)
& =\text{i}\hat{\partial}_{0}\,\overset{t}{\bar{\triangleright}}%
\,\mathcal{U}_{\bar{L}}(t,t^{\prime})\triangleright\phi(x^{A},t^{\prime
})\nonumber\\
& =H\,\mathcal{U}_{\bar{L}}(t,t^{\prime})\triangleright\phi(x^{A},t^{\prime
})=H\,\overset{x}{\bar{\triangleright}}\,\phi(x^{A},t),
\end{align}
and
\begin{align}
\phi(x^{A},t)\overset{t}{\triangleleft}(\text{i}\hat{\partial}_{0}) &
=\phi(x^{A},t^{\prime})\triangleleft\,\mathcal{U}_{R}(t,t^{\prime})\overset
{t}{\triangleleft}\hat{\partial}_{0}\nonumber\\
& =\phi(x^{A},t^{\prime})\triangleleft\,\mathcal{U}_{R}(t,t^{\prime}%
)H=\phi(x^{A},t)\overset{x}{\triangleleft}H,\\[0.1in]
\phi(x^{A},t)\,\overset{t}{\bar{\triangleleft}}\,(\text{i}\partial_{0}) &
=\phi(x^{A},t^{\prime})\triangleleft\,\mathcal{U}_{\bar{R}}(t,t^{\prime
})\,\overset{t}{\bar{\triangleleft}}\,(\text{i}\partial_{0})\nonumber\\
& =\phi(x^{A},t^{\prime})\triangleleft\,\mathcal{U}_{\bar{R}}(t,t^{\prime
})H=\phi(x^{A},t)\,\overset{x}{\bar{\triangleleft}}\,H.
\end{align}
Next, we would like to discuss the implications of these equations on the time
dependence of transition amplitudes and expectation values. To this end, we
first introduce sesquilinear forms on the quantum spaces under consideration.
In analogy to the undeformed case they can be defined by \cite{WQK1}%
\begin{align}
\big \langle f,g\big \rangle_{\gamma} & \equiv\int_{-\infty}^{+\infty
}d_{\gamma}^{n}x\,\overline{f(x^{A},t)}\overset{t,x}{\circledast}%
g(x^{B},t),\nonumber\\
\big \langle f,g\big \rangle_{\gamma}^{\prime} & \equiv\int_{-\infty
}^{+\infty}d_{\gamma}^{n}x\,f(x^{A},t)\overset{t,x}{\circledast}%
\overline{g(x^{B},t)}, \label{PraSes}%
\end{align}
where again $\gamma\in\{L,\bar{L},R,\bar{R}\}.$ For the integrals over the
whole space we have to insert the expressions \cite{Wac02, Wac04, qAn}
\begin{itemize}
\item[(i)] (braided line)%
\begin{align}
\int_{-\infty}^{+\infty}d_{L}x\,f(x^{A},t) & =(D_{q}^{1})^{-1}%
\big |_{-\infty}^{\infty}\,f\nonumber\\
& =-\int_{-\infty}^{+\infty}d_{\bar{R}}x\,f(x^{A},t),\\[0.1in]
\int_{-\infty}^{+\infty}d_{\bar{L}}x\,f(x^{A},t) & =(D_{q^{-1}}^{1}%
)^{-1}\big |_{-\infty}^{\infty}\,f\nonumber\\
& =-\int_{-\infty}^{+\infty}d_{R}x\,f(x^{A},t),
\end{align}
\item[(ii)] (q-deformed Euclidean space)%
\begin{align}
\int_{-\infty}^{+\infty}d_{L}^{3}x\,f(x^{A},t) & =\frac{q^{-6}}{4}(D_{q^{2}%
}^{+})^{-1}\big |_{-\infty}^{\infty}(D_{q^{2}}^{3})^{-1}\big |_{-\infty
}^{\infty}(D_{q^{2}}^{-})^{-1}\big |_{-\infty}^{\infty}\,f\nonumber\\
& =-\int_{-\infty}^{+\infty}d_{\bar{R}}^{3}x\,f(x^{A},t),\\[0.1in]
\int_{-\infty}^{+\infty}d_{\bar{L}}^{3}x\,f(x^{A},t) & =\frac{q^{6}}%
{4}(D_{q^{-2}}^{-})^{-1}\big |_{-\infty}^{\infty}(D_{q^{-2}}^{3}%
)^{-1}\big |_{-\infty}^{\infty}(D_{q^{-2}}^{+})^{-1}\big |_{-\infty}^{\infty
}\,f\nonumber\\
& =-\int_{-\infty}^{+\infty}d_{R}^{3}x\,f(x^{A},t),
\end{align}
However, there is one difficulty we have to overcome here. Due to the
conjugation properties of q-deformed integrals the sesquilinear forms in
(\ref{PraSes}) are not symmetrical \cite{qAn, WQK1}. To circumvent this
problem one can instead take the sesquilinear forms%
\begin{align}
\big \langle f,g\big \rangle_{1} & \equiv\frac{\text{i}^{n}}{2}%
\big (\big \langle f,g\big \rangle_{L}+\big \langle f,g\big \rangle_{\bar{R}%
}\big ),\nonumber\\
\big \langle f,g\big \rangle_{2} & \equiv\frac{\text{i}^{n}}{2}%
\big (\big \langle f,g\big \rangle_{\bar{L}}+\big \langle f,g\big \rangle_{R}%
\big ),\label{SymSes1}\\[0.16in]
\big \langle f,g\big \rangle_{1}^{\prime} & \equiv\frac{\text{i}^{n}}%
{2}\big (\big \langle f,g\big \rangle_{L}^{\prime}%
+\big \langle f,g\big \rangle_{\bar{R}}^{\prime}\big ),\nonumber\\
\big \langle f,g\big \rangle_{2}^{\prime} & \equiv\frac{\text{i}^{n}}%
{2}\big (\big \langle f,g\big \rangle_{\bar{L}}^{\prime}%
+\big \langle f,g\big \rangle_{R}^{\prime}\big ). \label{SymSes2}%
\end{align}
\end{itemize}
Clearly, all information on the time development of a sesquilinear form is
contained in the time dependence of its arguments. Normally, the time
evolution operators are unitary, so sesquilinear forms of two wave functions
should not vary with time. In complete analogy to the undeformed case we have
$(i=1,2)$%
\begin{align}
\big \langle\phi,\psi\big \rangle_{i} & \equiv\int_{-\infty}^{+\infty}%
d_{i}^{n}x\,\overline{\phi(x^{A},t)}\overset{t,x}{\circledast}\psi
(x^{B},t)\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,\overline{\mathcal{U}(t,t^{\prime
})\triangleright\phi(x^{A},t^{\prime})}\overset{t,x}{\circledast}%
(\mathcal{U}(t,t^{\prime})\triangleright\psi(x^{B},t^{\prime}))\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,(\,\overline{\phi(x^{A},t^{\prime}%
)}\triangleleft\mathcal{U}^{\dag}(t,t^{\prime}))\overset{t,x}{\circledast
}(\mathcal{U}(t,t^{\prime})\triangleright\psi(x^{B},t^{\prime}))\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,\overline{\phi(x^{A},t^{\prime}%
)}\overset{t^{\prime}\!,x}{\circledast}(\mathcal{U}^{-1}(t,t^{\prime
})\,\mathcal{U}(t,t^{\prime})\triangleright\psi(x^{B},t^{\prime}))\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,\overline{\phi(x^{A},t^{\prime}%
)}\overset{t^{\prime}\!,x}{\circledast}\psi(x^{B},t^{\prime})\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,\overline{\phi(x^{A},t^{\prime}%
=0)}\overset{x}{\circledast}\psi(x^{B},t^{\prime}=0)\nonumber\\
& =\big \langle\phi,\psi\big \rangle_{i}\big |_{t=0}, \label{CalTimDevSes}%
\end{align}
where, for brevity, we introduced%
\begin{align}
\int_{-\infty}^{+\infty}d_{1}^{n}x & \equiv\frac{\text{i}^{n}}{2}%
\Big (\int_{-\infty}^{+\infty}d_{L}^{n}x+\int_{-\infty}^{+\infty}d_{\bar{R}%
}^{n}x\Big ),\nonumber\\
\int_{-\infty}^{+\infty}d_{2}^{n}x & \equiv\frac{\text{i}^{n}}{2}%
\Big (\int_{-\infty}^{+\infty}d_{\bar{L}}^{n}x+\int_{-\infty}^{+\infty}%
d_{R}^{n}x\Big ). \label{SubInt}%
\end{align}
Similar arguments lead us to
\begin{align}
\big \langle\phi,\psi\big \rangle_{i}^{\prime} & \equiv\int_{-\infty
}^{+\infty}d_{i}^{n}x\,\phi(x^{A},t)\overset{t,x}{\circledast}\overline
{\psi(x^{B},t)}\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,\phi(x^{A},t^{\prime})\overset
{t^{\prime}\!,x}{\circledast}\overline{\psi(x^{B},t^{\prime})}\nonumber\\
& =\int_{-\infty}^{+\infty}d_{i}^{n}x\,\phi(x^{A},t^{\prime}=0)\overset
{x}{\circledast}\overline{\psi(x^{B},t^{\prime}=0)}\nonumber\\
& =\big \langle\phi,\psi\big \rangle_{i}^{\prime}\big |_{t=0}.%
\end{align}
We see that on the quantum spaces under consideration wave functions keep
their normalization, i.e. the equalities%
\begin{equation}
\big \langle\phi,\phi\big \rangle_{i,x}=1, \label{NorBed1}%
\end{equation}
or%
\begin{equation}
\big \langle\phi,\phi\big \rangle_{i,x}^{\prime}=1, \label{NorBed2}%
\end{equation}
remain unchanged as time goes by.
Next, we turn attention to matrix elements of observables and examine their
time development. With the same reasonings already applied in
(\ref{CalTimDevSes}) we obtain%
\begin{align}
\big \langle\phi,\hat{O}\triangleright\psi\big \rangle_{i,x} &
=\int_{-\infty}^{\infty}d_{i}^{n}x\,\overline{\phi(x^{A},t)}\overset
{t,x}{\circledast}(\hat{O}\triangleright\psi(x^{B},t))\nonumber\\
& =\int_{-\infty}^{\infty}d_{i}^{n}x\,\overline{\mathcal{U}(t,t^{\prime
})\triangleright\phi(x^{A},t^{\prime})}\overset{t,x}{\circledast}(\hat
{O}\triangleright(\mathcal{U}(t,t^{\prime})\triangleright\psi(x^{B},t^{\prime
})))\nonumber\\
& =\int_{-\infty}^{\infty}d_{i}^{n}x\,\overline{\phi(x^{A},t^{\prime}%
)}\overset{t^{\prime}\!,x}{\circledast}(\mathcal{U}^{-1}(t,t^{\prime}%
)\,\hat{O}\,\mathcal{U}(t,t^{\prime})\triangleright\psi(x^{B},t^{\prime
}))\nonumber\\
& =\int_{-\infty}^{\infty}d_{i}^{n}x\,\overline{\phi(x^{A},0)}\overset
{x}{\circledast}(\mathcal{U}^{-1}(t,0)\,\hat{O}\,\mathcal{U}%
(t,0)\triangleright\psi(x^{B},0)).
\end{align}
Repeating the same steps for the sesquilinear forms with apostrophe we get%
\begin{align}
\big \langle\phi\triangleleft\hat{O}^{\prime},\psi\big \rangle_{i,x}^{\prime}
& =\int_{-\infty}^{\infty}d_{i}^{n}x\,(\phi(x^{A},t^{\prime})\triangleleft
\mathcal{U}(t,t^{\prime})\,\hat{O}^{\prime}\,\mathcal{U}^{-1}(t,t^{\prime
}))\overset{t^{\prime},x}{\circledast}\overline{\psi(x^{B},t^{\prime}%
)}\nonumber\\
& =\int_{-\infty}^{\infty}d_{i}^{n}x\,(\phi(x^{A},0)\triangleleft
\mathcal{U}(t,0)\,\hat{O}^{\prime}\,\mathcal{U}^{-1}(t,0))\overset
{x}{\circledast}\overline{\psi(x^{B},0)}.
\end{align}
The above reasonings show us that the Heisenberg picture can indeed be
introduced in very much the same way as is done in the undeformed case, i.e.
we define the Heisenberg picture observable by%
\begin{equation}
\hat{O}_{H}\equiv\mathcal{U}^{-1}(t,0)\,\hat{O}\,\mathcal{U}(t,0),\qquad
\hat{O}_{H}^{\prime}\equiv\mathcal{U}(t,0)\,\hat{O}^{\prime}\,\mathcal{U}%
^{-1}(t,0),
\end{equation}
while the corresponding wave functions are independent of time:%
\begin{equation}
\phi_{H}(x^{A})\equiv\phi(x^{A},t=0).
\end{equation}
It should be obvious that this convention leads to the same matrix elements
and expectation values as in the Schr\"{o}dinger picture.
In the Heisenberg picture time evolution is assigned to observables and not to
wave functions. Thus, the equations of motion do not concern wave functions
but observables. Realizing that the time derivatives on the quantum spaces
under consideration coincide with those on commutative spaces we regain the
well-known Heisenberg equations of motion, i.e.%
\begin{align}
\frac{d\hat{O}_{H}}{dt} & =\frac{\partial\mathcal{U}^{-1}(t,0)}{\partial
t}\,\hat{O}\,\mathcal{U}(t,0)+\mathcal{U}^{-1}(t,0)\,\hat{O}\,\frac
{\partial\mathcal{U}(t,0)}{\partial t}\nonumber\\
& =\text{i}H\,\mathcal{U}^{-1}(t,0)\,\hat{O}\,\mathcal{U}(t,0)-\,\mathcal{U}%
^{-1}(t,0)\,\hat{O}\,\mathcal{U}(t,0)\,\text{i}H\nonumber\\
& =\text{i}[H,\hat{O}_{H}],\\[0.16in]
\frac{d\hat{O}_{H}^{\prime}}{dt} & =\frac{\partial\mathcal{U}(t,0)}{\partial
t}\,\hat{O}^{\prime}\,\mathcal{U}^{-1}(t,0)+\mathcal{U}(t,0)\,\hat{O}^{\prime
}\,\frac{\partial\mathcal{U}^{-1}(t,0)}{\partial t}\nonumber\\
& =-\text{i}H\,\mathcal{U}(t,0)\,\hat{O}^{\prime}\,\mathcal{U}^{-1}%
(t,0)+\,\mathcal{U}(t,0)\,\hat{O}^{\prime}\,\mathcal{U}^{-1}(t,0)\,\text{i}%
H\nonumber\\
& =\text{i}[\hat{O}_{H}^{\prime},H],
\end{align}
where we assumed the Hamiltonian to be time-independent.
\section{Conclusion\label{SecCon}}
Let us end with some comments on what we have done so far. In this article we
enhanced the algebras of braided line and q-deformed three-dimensional
Euclidean space by adding a time element. This was done in a way
consistent with the existing algebraic framework. We were then able to apply
our reasonings about constructing q-deformed analogs of classical analysis. We
saw that the time element is completely decoupled from space coordinates and
behaves like a commutative variable. In doing so, we arrived at mathematical
structures in which space is discretized while time is still continuous. The
clear distinction between space and time made it easy to develop the basics of
a q-deformed analog of non-relativistic Schr\"{o}dinger theory. Fortunately,
we could apply the same reasonings as in the undeformed case, to which our
results tend in the limit $q\rightarrow1$.
Especially, we found that the time evolution operators are of the same general
form as their undeformed counterpart, i.e. they can again be obtained by
exponentiation of a Hamiltonian. The Schr\"{o}dinger and the Heisenberg
picture could be developed in a rather straightforward way and apart from the
fact that we have different q-geometries we could regain the well-known
equations of motion, i.e. the Schr\"{o}dinger and the Heisenberg equations. In
this manner, we laid the foundations for discretized versions of
non-relativistic quantum mechanics that do not lack space-time symmetries.
Based on the reasonings of part I we will continue this program in part II of
our paper. In this respect, let us point out that compared to other quantum
spaces, like the q-deformed Minkowski space, extended braided line and
extended three-dimensional q-deformed Euclidean space provide a rather simple
arena for studying the implications of q-deformation on quantum mechanics and
quantum field theory.
Last but not least, we would like to say a few words about q-deformed
superanalysis on the braided line, since this subject has not been treated up
to now. To this end we have to consider the antisymmetrized space determined
by relation (\ref{DiffBrai}). (However, we are mainly interested in the
subspace that is spanned by the q-deformed Grassmann variable $\theta^{1}$
subject to $(\theta^{1})^{2}=0$.) In the work of Refs. \cite{MSW04, SW04} it
was described how to construct superanalysis on q-deformed quantum spaces. It
is rather easy to apply these ideas to the antisymmetrized braided line. In
what follows we give a short review of the results of this undertaking.
If we require for the differential $d\equiv d\theta^{i}(\partial_{\theta}%
)_{i}$ to hold%
\begin{align}
d^{2} & =0,\nonumber\\
d(fg) & =(df)g-f(dg),
\end{align}
the Leibniz rules for the two differential calculi on the\ antisymmetrized
braided line become%
\begin{align}
(\partial_{\theta})_{i}\theta^{j} & =\delta_{i}^{j}-\hat{R}_{il}%
^{jk}\,\theta^{l}(\partial_{\theta})_{k},\nonumber\\
(\hat{\partial}_{\theta})_{i}\theta^{j} & =\delta_{i}^{j}-(\hat{R}%
^{-1})_{il}^{jk}\,\theta^{l}(\hat{\partial}_{\theta})_{k},
\end{align}
where%
\begin{equation}
(\hat{\partial}_{\theta})_{0}=-(\partial_{\theta})_{0},\quad(\hat{\partial
}_{\theta})_{1}=-q(\partial_{\theta})_{1}.
\end{equation}
Especially, we have%
\begin{align}
(\partial_{\theta})_{1}\theta^{1} & =1-q\theta^{1}(\partial_{\theta}%
)_{1},\nonumber\\
(\hat{\partial}_{\theta})_{1}\theta^{1} & =1-q^{-1}\theta^{1}(\hat{\partial
}_{\theta})_{1}.
\end{align}
For supernumbers of the form%
\begin{equation}
f(\theta^{1})=f^{\prime}+f_{1}\theta^{1},\quad f^{\prime},f_{1}\in\mathbb{C},
\end{equation}
the actions of antisymmetric partial derivatives take the form%
\begin{equation}
(\partial_{\theta})_{1}\triangleright f(\theta^{1})=(\hat{\partial}_{\theta
})_{1}\,\bar{\triangleright}\,f(\theta^{1})=f_{1},
\end{equation}
and%
\begin{equation}
f(\theta^{1})\,\bar{\triangleleft}\,(\partial_{\theta})_{1}=f(\theta
^{1})\triangleleft(\hat{\partial}_{\theta})_{1}=-f_{1}.
\end{equation}
In complete analogy to the undeformed case integration and differentiation are
the same on q-deformed antisymmetrized spaces:%
\begin{align}
\int d_{L}\theta^{1}\,f(\theta^{1}) & =(\partial_{\theta})_{1}\triangleright
f(\theta^{1})=f_{1},\nonumber\\
\int d_{\bar{L}}\theta^{1}\,f(\theta^{1}) & =(\hat{\partial}_{\theta}%
)_{1}\,\bar{\triangleright}\,f(\theta^{1})=f_{1},\\[0.1in]
\int d_{R}\theta^{1}\,f(\theta^{1}) & =f(\theta^{1})\triangleleft
(\hat{\partial}_{\theta})_{1}=-f_{1},\nonumber\\
\int d_{\bar{R}}\theta^{1}\,f(\theta^{1}) & =f(\theta^{1})\,\bar
{\triangleleft}\,(\partial_{\theta})_{1}=-f_{1}.%
\end{align}
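The structure used above — a single Grassmann generator with $(\theta^{1})^{2}=0$, the derivative action $(\partial_{\theta})_{1}\triangleright f=f_{1}$, and Berezin-type integrals that coincide with differentiation up to a sign — can be mimicked by a small toy class. The following Python sketch is purely our own illustration (the class and function names are invented); it represents a supernumber $f=f^{\prime}+f_{1}\theta^{1}$ by its two coefficients, on which the left actions reduce to the classical Berezin rules:

```python
# Toy model of supernumbers f = f' + f1*theta on a single Grassmann
# generator theta with theta^2 = 0 (coefficients taken in C).
class SuperNumber:
    def __init__(self, body, soul):
        self.body = body   # f' (theta-independent part)
        self.soul = soul   # f1 (coefficient of theta)

    def __mul__(self, other):
        # (a0 + a1*theta)(b0 + b1*theta) = a0*b0 + (a0*b1 + a1*b0)*theta,
        # since theta^2 = 0 kills the a1*b1 term.
        return SuperNumber(self.body * other.body,
                           self.body * other.soul + self.soul * other.body)

def d_theta(f):
    """Left derivative: (d/dtheta) acting on f' + f1*theta gives f1."""
    return f.soul

def berezin_L(f):
    """Left Berezin integral: identical to differentiation, = f1."""
    return f.soul

def berezin_R(f):
    """Right integral: picks up the opposite sign, = -f1."""
    return -f.soul

f = SuperNumber(2.0, 3.0)   # f = 2 + 3*theta
g = SuperNumber(1.0, -1.0)  # g = 1 - theta
fg = f * g                  # = 2 + (2*(-1) + 3*1)*theta = 2 + theta
```

Note that on supernumbers of this form the deformation parameter $q$ drops out of the actions, in agreement with the formulas above, where the left and right actions give $f_{1}$ and $-f_{1}$ regardless of $q$.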
Following the ideas in Ref. \cite{SW04} translations of q-deformed Grassmann
variables are determined by their Hopf structures, for which we have%
\begin{align}
\Delta_{L}(\theta^{i}) & =\theta^{i}\otimes1+\tilde{\Lambda}^{-1}%
\otimes\theta^{i},\nonumber\\
S_{L}(\theta^{i}) & =-\tilde{\Lambda}^{-1}\theta^{i},\nonumber\\
\epsilon_{L}(\theta^{i}) & =0, \label{HopAntBra1}%
\end{align}
and
\begin{align}
\Delta_{\bar{L}}(\theta^{i}) & =\theta^{i}\otimes1+\tilde{\Lambda}%
\otimes\theta^{i},\nonumber\\
S_{\bar{L}}(\theta^{i}) & =-\tilde{\Lambda}\theta^{i},\nonumber\\
\epsilon_{\bar{L}}(\theta^{i}) & =0, \label{HopAntBra2}%
\end{align}
where the unitary scaling operator $\tilde{\Lambda}$ has to fulfill%
\begin{equation}
\tilde{\Lambda}\theta^{i}=-q^{\delta_{i1}}\theta^{i}\tilde{\Lambda}%
,\qquad\tilde{\Lambda}(\partial_{\theta})_{i}=-q^{-\delta_{i1}}(\partial
_{\theta})_{i}\tilde{\Lambda}.
\end{equation}
The above Hopf structures then induce the operations (for the details see Ref.
\cite{SW04})
\begin{equation}
f(\theta^{1}\oplus_{L}\psi^{1})=f(\theta^{1}\oplus_{\bar{L}}\psi
^{1})=f^{\prime}+f_{1}(\theta^{1}+\psi^{1}),
\end{equation}
and%
\begin{equation}
f(\ominus_{L}\,\theta^{1})=f(\ominus_{\bar{L}}\,\theta^{1})=f^{\prime}%
-f_{1}\theta^{1}.
\end{equation}
For the sake of completeness let us note that from the coproducts in
(\ref{HopAntBra1}) and (\ref{HopAntBra2}) we can read off the explicit form of
the L-matrices for the Grassmann variables $\theta^{i},$ $i=0,1$. As soon as
we know the action of the scaling operator $\tilde{\Lambda}$ on a given
element $w$ the L-matrices provide a simple method to calculate the braiding
of q-deformed Grassmann variables with $w$ (see also Ref. \cite{MSW04}).
In complete analogy to the symmetrized braided line we can introduce dual
pairings for the antisymmetrized braided line. For normally ordered monomials
we find as non-vanishing pairings
\begin{align}
\big \langle(\partial_{\theta})_{i},\theta^{j}\big \rangle_{L,\bar{R}} &
=\delta_{i}^{j},\nonumber\\
\big \langle(\hat{\partial}_{\theta})_{i},\theta^{j}\big \rangle_{\bar{L},R}
& =\delta_{i}^{j},\\[0.1in]
\big \langle\theta^{j},(\partial_{\theta})_{i}\big \rangle_{L,\bar{R}} &
=-\delta_{i}^{j},\nonumber\\
\big \langle\theta^{j},(\hat{\partial}_{\theta})_{i}\big \rangle_{\bar{L},R}
& =-\delta_{i}^{j},
\end{align}
and%
\begin{align}
\big \langle(\partial_{\theta})_{0}(\partial_{\theta})_{1},\theta^{1}%
\theta^{0}\big \rangle_{L,\bar{R}} & =1,\nonumber\\
\big \langle(\hat{\partial}_{\theta})_{1}(\hat{\partial}_{\theta})_{0}%
,\theta^{0}\theta^{1}\big \rangle_{\bar{L},R} & =1,\\[0.1in]
\big \langle\theta^{0}\theta^{1},(\partial_{\theta})_{1}(\partial_{\theta
})_{0}\big \rangle_{L,\bar{R}} & =1,\nonumber\\
\big \langle\theta^{1}\theta^{0},(\hat{\partial}_{\theta})_{0}(\hat{\partial
}_{\theta})_{1}\big \rangle_{\bar{L},R} & =1.
\end{align}
On the subspace spanned by $\theta^{1}$ these pairings correspond to the
exponentials%
\begin{align}
\exp(\theta^{1}|(\partial_{\theta})_{1})_{\bar{R},L} & =1+\theta^{1}%
\otimes(\partial_{\theta})_{1},\nonumber\\
\exp(\theta^{1}|(\hat{\partial}_{\theta})_{1})_{R,\bar{L}} & =1+\theta
^{1}\otimes(\hat{\partial}_{\theta})_{1},\\[0.1in]
\exp((\partial_{\theta})_{1}|\theta^{1})_{\bar{R},L} & =1-(\partial_{\theta
})_{1}\otimes\theta^{1},\nonumber\\
\exp((\hat{\partial}_{\theta})_{1}|\theta^{1})_{R,\bar{L}} & =1-(\hat
{\partial}_{\theta})_{1}\otimes\theta^{1},
\end{align}
which, in turn, give rise to the q-deformed delta functions%
\begin{align}
\delta_{L}^{1}(\eta_{1}) & =\int d_{L}\theta^{1}\,\exp(\theta^{1}|\eta
_{1})_{\bar{R},L}=\eta_{1},\nonumber\\
\delta_{\bar{L}}^{1}(\eta_{1}) & =\int d_{\bar{L}}\theta^{1}\,\exp
(\theta^{1}|\eta_{1})_{R,\bar{L}}=\eta_{1},\\
\delta_{R}^{1}(\eta_{1}) & =\int d_{R}\theta^{1}\,\exp(\eta_{1}|\theta
^{1})_{R,\bar{L}}=\eta_{1},\nonumber\\
\delta_{\bar{R}}^{1}(\eta_{1}) & =\int d_{\bar{R}}\theta^{1}\,\exp(\eta
_{1}|\theta^{1})_{\bar{R},L}=\eta_{1}.
\end{align}
\vspace{0.16in}
\noindent\textbf{Acknowledgements}
First of all I am very grateful to Eberhard Zeidler for very interesting and
useful discussions, special interest in my work and financial support.
Furthermore, I would like to thank Alexander Schmidt for useful discussions
and his steady support. Finally, I thank Dieter L\"{u}st for kind hospitality.
% hep-ex/0703034
\section{Introduction}
\label{sec:intro}
The experiments CDF and D\O, taking data at the Tevatron
proton-antiproton collider located at the Fermi National Accelerator
Laboratory, have made several direct experimental measurements of the
top-quark pole mass, \ensuremath{M_{\mathrm{t}}}. The pioneering measurements were based on about
$100~\ensuremath{\mathrm{pb}^{-1}}$ of \hbox{Run-I}\ (1992-1996) data~\cite{Mtop1-CDF-di-l-PRLa,
Mtop1-CDF-di-l-PRLb,
Mtop1-CDF-di-l-PRLb-E, Mtop1-D0-di-l-PRL, Mtop1-D0-di-l-PRD,
Mtop1-CDF-l+j-PRL, Mtop1-CDF-l+j-PRD, Mtop1-D0-l+j-old-PRL,
Mtop1-D0-l+j-old-PRD, Mtop1-D0-l+j-new1, Mtop1-CDF-all-j-PRL,
Mtop1-D0-all-j-PRL}
and include results from the \ensuremath{\ttbar\rightarrow\had}\ (all-j), the \ensuremath{\ttbar\rightarrow\ljt}\ (l+j), and the
\ensuremath{\ttbar\rightarrow\dil}\ (di-l) decay channels\footnote{Here $\ell=e$ or $\mu$. Decay
channels with explicit tau lepton identification are presently under
study and are not yet used for measurements of the top-quark mass.}.
Results using approximately $350~\ensuremath{\mathrm{pb}^{-1}}$ of \hbox{Run-II}\
(2001-present) data have been published in the l+j and di-l
channels~\cite{Mtop2-CDF-l+j-350PRL, Mtop2-CDF-l+j-350PRD,
Mtop2CDF-di-l-350PRL, Mtop2-D0-l+j-370PRD}. More recently results using
about $1~\ensuremath{\mathrm{fb}^{-1}}$ of \hbox{Run-II}\ data have also been published~\cite{Mtop2-CDF-di-l-1fbPRD, Mtop2-CDF-lxy-new}.
The \hbox{Run-II}\ measurements summarized here are the most recent results in the
l+j, di-l, and all-j channels using $700-1000~\ensuremath{\mathrm{pb}^{-1}}$ of data and improved
analysis techniques~\cite{
Mtop2-CDF-di-l-1fbPRD,
Mtop2-CDF-lxy-new,
Mtop2-CDF-l+j-new,
Mtop2-CDF-all-j-new,
Mtop2-D0-l+j-new,
Mtop2-D0-di-l-new}.
\vspace*{0.10in}
This note reports the world average top-quark mass obtained by
combining five published \hbox{Run-I}\ measurements~\cite{Mtop1-CDF-di-l-PRLb,
Mtop1-CDF-di-l-PRLb-E, Mtop1-D0-di-l-PRD, Mtop1-CDF-l+j-PRD,
Mtop1-D0-l+j-new1, Mtop1-CDF-all-j-PRL} with two published \hbox{Run-II}\ CDF
results~\cite{Mtop2-CDF-di-l-1fbPRD, Mtop2-CDF-lxy-new}, two preliminary
\hbox{Run-II}\ CDF results~\cite{Mtop2-CDF-l+j-new, Mtop2-CDF-all-j-new} and two
preliminary \hbox{Run-II}\ D\O\ results~\cite{Mtop2-D0-l+j-new,Mtop2-D0-di-l-new}.
The combination takes into
account the statistical and systematic uncertainties and their correlations
using the method of references~\cite{Lyons:1988, Valassi:2003} and supersedes
previous combinations~\cite{Mtop1-tevewwg04,Mtop-tevewwgSum05,
Mtop-tevewwgWin06,Mtop-tevewwgSum06}.
The most precise individual measurements of $\ensuremath{M_{\mathrm{t}}}$ are now the preliminary
measurements in the l+j channel from Run II. These are
$170.9\pm2.5\;\rm GeV/c^2$ (CDF, \cite{Mtop2-CDF-l+j-new}) and
$170.5 \pm 2.7\;\rm GeV/c^2$ (D\O, \cite{Mtop2-D0-l+j-new}). These
have weights in the new $\ensuremath{M_{\mathrm{t}}}$ combination of 39\% and 40\%, respectively.
\vspace*{0.10in}
The input measurements and error categories used in the combination are
detailed in Section~\ref{sec:inputs} and~\ref{sec:errors}, respectively.
The correlations used in the combination are discussed in
Section~\ref{sec:corltns} and the resulting world average top-quark mass
is given in Section~\ref{sec:results}. A summary and outlook are presented
in Section~\ref{sec:summary}.
\section{Input Measurements}
\label{sec:inputs}
For this combination eleven measurements of \ensuremath{M_{\mathrm{t}}}\ are used, five published
\hbox{Run-I}\ results, and two published plus four preliminary \hbox{Run-II}\ results.
In general,
the \hbox{Run-I}\ measurements all have relatively large statistical uncertainties
and their systematic uncertainty is dominated by the total jet energy scale
(JES) uncertainty. In \hbox{Run-II}\ both CDF and D\O\ take advantage of the larger
\ensuremath{t\overline{t}}\ samples available and employ new analysis techniques to reduce
both these uncertainties. In particular the JES is constrained
using an in-situ calibration based on the invariant mass of $W\rightarrow qq^{\prime}$
decays in the l+j and all-j channels. The \hbox{Run-II}\ D\O\ analysis in the l+j
channel constrains the response of light-quark jets using the in-situ
$W\rightarrow qq^{\prime}$ decays. Residual JES
uncertainties associated with $\eta-$ and $p_{T}$-dependencies as well as
uncertainties specific to the response of $b$-jets are treated separately.
Similarly, the \hbox{Run-II}\ CDF analyses in the l+j and all-j channels also
constrain the JES using the in-situ $W\rightarrow qq^{\prime}$ decays. Small residual
JES uncertainties arising from $\eta-$ and $p_{T}$-dependencies and the
modeling of $b$-jets are included in separate error categories. The \hbox{Run-II}\
CDF di-l measurement uses a JES determined from external calibration samples.
Some parts of the associated uncertainty are correlated with the \hbox{Run-I}\ JES
uncertainty as noted below.
\vspace*{0.10in}
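The in-situ JES idea can be illustrated with a deliberately oversimplified toy: if all jet energies carry a common scale factor $s$, the reconstructed $W\rightarrow qq^{\prime}$ mass peak shifts to roughly $s\,m_{W}$, so comparing the observed peak to the known $W$ mass determines $s$. The Python sketch below is our own one-parameter caricature of this idea, not the experiments' actual likelihood fit:

```python
import numpy as np

rng = np.random.default_rng(42)
M_W = 80.4  # known W-boson mass in GeV/c^2

# Toy sample: true dijet masses smeared by a detector-like resolution,
# then distorted by an unknown common jet-energy-scale factor.
true_scale = 1.05
m_reco = true_scale * rng.normal(M_W, 6.0, size=20000)

# One-parameter in-situ calibration: the fitted scale is the ratio of
# the observed peak position to the known W mass.
s_hat = np.mean(m_reco) / M_W
calibrated = m_reco / s_hat

print(f"fitted scale: {s_hat:.3f}")                       # close to 1.05
print(f"calibrated peak: {np.mean(calibrated):.1f} GeV")  # close to 80.4
```

In the real analyses the scale is a nuisance parameter fitted simultaneously with $\ensuremath{M_{\mathrm{t}}}$, so only the statistical part of the JES constraint (the iJES category below) is uncorrelated between measurements.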
In previous combinations the \hbox{Run-II}\ CDF l+j analysis used the JES determined from
the external calibration as an additional Gaussian constraint. This required us
to treat that measurement as two separate inputs in the combination in order to
accurately account for all the JES correlations. This Gaussian constraint is not used
in the present analysis as it does not significantly improve the sensitivity. Thus
we can treat this measurement as a single input in the same manner as all the other
measurements.
\vspace*{0.10in}
A new analysis technique from CDF is also included (lxy). This measurement uses
the mean decay length of $b$-tagged jets to determine the top-quark mass. While
the statistical sensitivity is not nearly as good as the more traditional methods,
this technique has the advantage that since it uses only tracking information, it
is almost entirely independent of JES uncertainties. Additionally, since it does
not require a full event reconstruction, it can use a more inclusive sample of
$t\overline{t}$ candidates (e.g. events with $\geq3$~jets). As the statistics
of this sample continue to grow, this method could offer a valuable cross-check of
the top-quark mass that is largely independent of the dominant JES systematic
uncertainty which plagues the other measurements. The statistical
correlation between this measurement and the \hbox{Run-II}\ CDF l+j measurement is
determined using Monte Carlo signal-plus-background pseudo-experiments which
correctly account for the sample overlap and is found to be consistent with
zero (to within $<1\%$) independent of the assumed top-quark mass.
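The size of such a statistical correlation can be estimated with simple pseudo-experiments. In the toy below (our own illustration, with invented numbers), two estimators are evaluated on partially overlapping event samples; because they are built from nearly independent per-event observables, the correlation of their results across pseudo-experiments comes out consistent with zero, mirroring the result quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pe, n_events = 1000, 800

res_full, res_lxy = [], []
for _ in range(n_pe):
    # Per-event observables: a reconstructed-mass proxy and a
    # decay-length proxy, taken as independent in this toy.
    mass_obs = rng.normal(172.0, 20.0, n_events)
    lxy_obs = rng.exponential(0.5, n_events)
    # Overlapping samples: the "full reconstruction" analysis uses a
    # tighter subset of the inclusive sample used by the lxy method.
    tight = rng.random(n_events) < 0.6
    res_full.append(mass_obs[tight].mean())
    res_lxy.append(lxy_obs.mean())

r = np.corrcoef(res_full, res_lxy)[0, 1]
print(f"pseudo-experiment correlation: {r:+.3f}")  # consistent with zero
```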
\vspace*{0.10in}
The inputs used in the combination are summarized in Table~\ref{tab:inputs}
with their uncertainties sub-divided into the categories described in the
next Section. The correlations between the inputs are described in
Section~\ref{sec:corltns}.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|l||rrr|rr||rrrr|rr|}
\hline
& \multicolumn{5}{|c||}{{\hbox{Run-I}} published} & \multicolumn{6}{|c|}{{\hbox{Run-II}} preliminary} \\ \cline{2-12}
& \multicolumn{3}{|c|}{ CDF } & \multicolumn{2}{|c||}{ D\O\ }
& \multicolumn{4}{|c|}{ CDF } & \multicolumn{2}{|c|}{ D\O\ } \\
& all-j & l+j & di-l & l+j & di-l & l+j & di-l & all-j & lxy & l+j & di-l \\
\hline
Lumi (\ensuremath{\mathrm{fb}^{-1}}) & 0.11 & 0.11 & 0.11 & 0.13 & 0.13 & 0.9 & 1.0 & 1.0 & 0.7 & 0.9 & 1.0 \\\hline
Result & 186.0 & 176.1 & 167.4 & 180.1 & 168.4 & 170.9 & 164.5 & 171.1 & 183.9 & 170.5 & 172.5 \\
\hline
\hline
iJES & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.4 & 0.0 & 2.4 & 0.0 & 0.0 & 0.0 \\
aJES & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.6 & 2.0 \\
bJES & 0.6 & 0.6 & 0.8 & 0.7 & 0.7 & 0.6 & 0.6 & 0.4 & 0.0 & 0.5 & 1.8 \\
cJES & 3.0 & 2.7 & 2.6 & 2.0 & 2.0 & 0.0 & 2.8 & 0.0 & 0.0 & 0.0 & 4.3 \\
dJES & 0.3 & 0.7 & 0.6 & 0.0 & 0.0 & 0.2 & 1.6 & 0.0 & 0.0 & 1.6 & 1.9 \\
rJES & 4.0 & 3.4 & 2.7 & 2.5 & 1.1 & 0.0 & 1.3 & 0.0 & 0.3 & 0.0 & 0.0 \\
Signal & 1.8 & 2.6 & 2.8 & 1.1 & 1.8 & 1.1 & 0.9 & 1.3 & 1.4 & 0.6 & 0.7 \\
BG & 1.7 & 1.3 & 0.3 & 1.0 & 1.1 & 0.2 & 0.7 & 1.0 & 2.3 & 0.3 & 0.6 \\
Fit & 0.6 & 0.0 & 0.7 & 0.6 & 1.1 & 0.4 & 0.9 & 0.7 & 4.8 & 0.4 & 0.9 \\
MC & 0.8 & 0.1 & 0.6 & 0.0 & 0.0 & 0.2 & 0.9 & 1.0 & 0.7 & 0.0 & 0.0 \\
UN/MI & 0.0 & 0.0 & 0.0 & 1.3 & 1.3 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\
\hline
Syst. & 5.7 & 5.3 & 4.9 & 3.9 & 3.6 & 1.9 & 3.9 & 3.2 & 5.6 & 2.0 & 5.6 \\
Stat. & 10.0 & 5.1 & 10.3 & 3.6 & 12.3 & 1.6 & 3.9 & 2.8 & 14.8 & 1.8 & 5.8 \\
\hline
\hline
Total & 11.5 & 7.3 & 11.4 & 5.3 & 12.8 & 2.5 & 5.6 & 4.3 & 15.8 & 2.7 & 8.0 \\ \hline\hline
\end{tabular}
\end{center}
\caption[Input measurements]{Summary of the measurements used to determine the
world average $\ensuremath{M_{\mathrm{t}}}$. All numbers are in $\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$. The error categories and
their correlations are described in the text. The total systematic uncertainty
and the total uncertainty are obtained by adding the relevant contributions
in quadrature.}
\label{tab:inputs}
\end{table}
\section{Error Categories}
\label{sec:errors}
We employ the same error categories as used for the previous world
average~\cite{Mtop-tevewwgSum06}. They include a detailed
breakdown of the various sources of uncertainty and aim to
lump together sources of systematic uncertainty that share the same or
similar origin. For example, the ``Signal'' category discussed below
includes the uncertainties from ISR, FSR, and PDF - all of which affect
the modeling of the \ensuremath{t\overline{t}}\ signal. Additional categories are included
in order to accommodate specific types of correlations. For example,
the jet energy scale (JES) uncertainty is sub-divided into several
components in order to more accurately accommodate our best estimate of
the relevant correlations. Each error category is discussed below.
\vspace*{0.10in}
\begin{description}
\item[Statistical:] The statistical uncertainty associated with the
\ensuremath{M_{\mathrm{t}}}\ determination.
\item[iJES:] That part of the JES uncertainty which originates from
in-situ calibration procedures and is uncorrelated among the
measurements. In the combination reported here it corresponds to
the statistical uncertainty associated with the JES determination
using the $W\rightarrow qq^{\prime}$ invariant mass in the CDF \hbox{Run-II}\
l+j and all-j measurements. Residual JES uncertainties, which arise
from effects
not considered in the in-situ calibration, are included in other
categories.
\item[aJES:] That part of the JES uncertainty which originates from
differences in detector $e/h$ response between $b$-jets and light-quark
jets. It is specific to the D\O\ \hbox{Run-II}\ measurements and is
taken to be uncorrelated with the D\O\ \hbox{Run-I}\ and CDF measurements.
\item[bJES:] That part of the JES uncertainty which originates from
uncertainties specific to the modeling of $b$-jets and which is correlated
across all measurements. For both CDF and D\O\ this includes uncertainties
arising from
variations in the semi-leptonic branching fraction, $b$-fragmentation
modeling, and differences in the color flow between $b$-jets and light-quark
jets. These were determined from \hbox{Run-II}\ studies but back-propagated
to the \hbox{Run-I}\ measurements, whose rJES uncertainties (see below) were
then corrected in order to keep the total JES uncertainty constant.
\item[cJES:] That part of the JES uncertainty which originates from
modeling uncertainties correlated across all measurements. Specifically
it includes the modeling uncertainties associated with light-quark
fragmentation and out-of-cone corrections.
\item[dJES:] That part of the JES uncertainty which originates from
limitations in the calibration data samples used and which is
correlated between measurements within the same data-taking period
(ie. Run~I or Run~II) but not between experiments. For CDF this
corresponds to uncertainties associated with the $\eta$-dependent JES
corrections which are estimated using di-jet data events. For D\O\
\hbox{Run-II}\ this corresponds to uncertainties associated
with the light-quark response as determined using the $W\rightarrow qq^{\prime}$
invariant mass in the l+j channel and propagated to the di-l channel.
The residual $\eta$-dependent and $p_{T}$-dependent uncertainties for the
D\O\ \hbox{Run-II}\ measurements are also included here since they are
constrained using \hbox{Run-II}\ $\gamma+$jet data samples.
\item[rJES:] The remaining part of the JES uncertainty which is
correlated between all measurements of the same experiment
independent of data-taking period, but is uncorrelated between
experiments. This is dominated by uncertainties in the calorimeter
response to light-quark jets. For CDF this also includes small
uncertainties associated with the multiple interaction and underlying
event corrections.
\item[Signal:] The systematic uncertainty arising from uncertainties
in the modeling of the \ensuremath{t\overline{t}}\ signal which is correlated across all
measurements. This includes uncertainties from variations in the ISR,
FSR, and PDF descriptions used to generate the \ensuremath{t\overline{t}}\ Monte Carlo samples
that calibrate each method. It also includes small uncertainties
from biases in the identification of $b$-jets.
\item[Background:] The systematic uncertainty arising from uncertainties
in modeling the dominant background sources and correlated across
all measurements in the same channel. These
include uncertainties on the background composition and shape. In
particular uncertainties associated with the modeling of the QCD
multi-jet background (all-j and l+j), uncertainties associated with the
modeling of the Drell-Yan background (di-l), and uncertainties associated
with variations of the fragmentation scale used to model W+jets
background (all channels) are included.
\item[Fit:] The systematic uncertainty arising from any source specific
to a particular fit method, including the finite Monte Carlo statistics
available to calibrate each method.
\item[Monte Carlo:] The systematic uncertainty associated with variations
of the physics model used to calibrate the fit methods and correlated
across all measurements. For CDF it includes variations observed when
substituting PYTHIA~\cite{PYTHIA4,PYTHIA5,PYTHIA6} (Run~I and Run~II)
or ISAJET~\cite{ISAJET} (Run~I) for HERWIG~\cite{HERWIG5,HERWIG6} when
modeling the \ensuremath{t\overline{t}}\ signal. Similar
variations are included for the D\O\ \hbox{Run-I}\ measurements. The D\O\
\hbox{Run-II}\ measurements use ALPGEN~\cite{ALPGEN} to model the \ensuremath{t\overline{t}}\ signal and the
variations considered are included in the Signal category above.
\item[UN/MI:] This is specific to D\O\ and includes the uncertainty
arising from uranium noise in the D\O\ calorimeter and from the
multiple interaction corrections to the JES. For D\O\ \hbox{Run-I}\ these
uncertainties were sizable, while for \hbox{Run-II}, owing to the shorter
integration time and in-situ JES determination, these uncertainties
are negligible.
\end{description}
These categories represent the current preliminary understanding of the
various sources of uncertainty and their correlations. We expect these to
evolve as we continue to probe each method's sensitivity to the various
systematic sources with ever improving precision. Variations in the assignment
of uncertainties to the error categories, in the back-propagation of the bJES
uncertainties to \hbox{Run-I}\ measurements, in the approximations made to
symmetrize the uncertainties used in the combination, and in the assumed
magnitude of the correlations all negligibly affect ($\ll 0.1\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$) the
combined \ensuremath{M_{\mathrm{t}}}\ and total uncertainty.
\section{Correlations}
\label{sec:corltns}
The following correlations are used when making the combination:
\begin{itemize}
\item The uncertainties in the Statistical, Fit, and iJES
categories are taken to be uncorrelated among the measurements.
\item The uncertainties in the aJES and dJES categories are taken
to be 100\% correlated among all \hbox{Run-I}\ and all \hbox{Run-II}\ measurements
on the same experiment, but uncorrelated between Run~I and Run~II
and uncorrelated between the experiments.
\item The uncertainties in the rJES and UN/MI categories are taken
to be 100\% correlated among all measurements on the same experiment.
\item The uncertainties in the Background category are taken to be
100\% correlated among all measurements in the same channel.
\item The uncertainties in the bJES, cJES, Signal, and Monte Carlo
 categories are taken to be 100\% correlated among all measurements.
\end{itemize}
Using the inputs from Table~\ref{tab:inputs} and the correlations specified
here, the resulting matrix of total correlation co-efficients is given in
Table~\ref{tab:coeff}.
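These rules translate directly into a covariance matrix: for each pair of measurements, every error category contributes $\rho\,\sigma_{i}\sigma_{j}$ with $\rho=0$ or $1$ according to the list above, and the total correlation coefficient is the summed covariance divided by the product of total uncertainties. A minimal Python sketch with two hypothetical measurements (the per-category numbers are invented for illustration, not the Table~1 inputs):

```python
import numpy as np

# Per-category uncertainties (GeV/c^2) for two hypothetical measurements.
cats = ["stat", "iJES", "bJES", "signal"]
sig_a = {"stat": 1.6, "iJES": 1.4, "bJES": 0.6, "signal": 1.1}
sig_b = {"stat": 1.8, "iJES": 0.0, "bJES": 0.5, "signal": 0.6}
# Correlation assumption per category: stat and iJES uncorrelated
# between measurements, bJES and signal fully correlated.
rho = {"stat": 0.0, "iJES": 0.0, "bJES": 1.0, "signal": 1.0}

def total(sig):
    """Total uncertainty: categories added in quadrature."""
    return np.sqrt(sum(sig[c] ** 2 for c in cats))

cov_ab = sum(rho[c] * sig_a[c] * sig_b[c] for c in cats)
corr_ab = cov_ab / (total(sig_a) * total(sig_b))
print(f"total correlation coefficient: {corr_ab:.2f}")
```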
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|ll||rrr|rr||rrrr|rr|}
\hline
& & \multicolumn{5}{|c||}{{\hbox{Run-I}} published} & \multicolumn{6}{|c|}{{\hbox{Run-II}} preliminary} \\ \cline{3-13}
& & \multicolumn{3}{|c|}{ CDF } & \multicolumn{2}{|c||}{ D\O\ }
& \multicolumn{4}{|c|}{ CDF } & \multicolumn{2}{|c|}{ D\O\ } \\
& & l+j & di-l & all-j & l+j & di-l & l+j & di-l & all-j & lxy & l+j & di-l \\
\hline
\hline
CDF-I & l+j & 1.00& & & & & & & & & & \\
CDF-I & di-l & 0.29& 1.00& & & & & & & & & \\
CDF-I & all-j & 0.32& 0.19& 1.00& & & & & & & & \\
\hline
D\O-I & l+j & 0.26& 0.15& 0.14& 1.00& & & & & & & \\
D\O-I & di-l & 0.11& 0.08& 0.07& 0.16& 1.00& & & & & & \\
\hline
\hline
CDF-II & l+j & 0.19& 0.13& 0.08& 0.14& 0.07& 1.00& & & & & \\
CDF-II & di-l & 0.36& 0.23& 0.26& 0.24& 0.12& 0.13& 1.00& & & & \\
CDF-II & all-j & 0.12& 0.10& 0.10& 0.08& 0.05& 0.17& 0.10& 1.00& & & \\
CDF-II & lxy & 0.07& 0.03& 0.02& 0.05& 0.01& 0.05& 0.03& 0.04& 1.00& & \\
\hline
D\O-II & l+j & 0.12& 0.07& 0.05& 0.10& 0.04& 0.16& 0.06& 0.09& 0.04& 1.00& \\
D\O-II & di-l & 0.25& 0.16& 0.17& 0.25& 0.11& 0.09& 0.32& 0.05& 0.01& 0.26& 1.00 \\
\hline
\end{tabular}
\end{center}
\caption[Global correlations between input measurements]{The resulting
matrix of total correlation coefficients used to determine the
world average top quark mass.}
\label{tab:coeff}
\end{table}
The measurements are combined using a program implementing a numerical
$\chi^2$ minimization as well as the analytic BLUE
method~\cite{Lyons:1988, Valassi:2003}. The two methods are
mathematically equivalent to each other and to the method used in an
older combination~\cite{TM-2084}, and give identical results for the
combination. In addition, the BLUE method yields the decomposition
of the error on the average in terms of the error categories specified
for the input measurements~\cite{Valassi:2003}.
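For a single combined parameter, the BLUE weights follow from the full covariance matrix, $w = C^{-1}\mathbf{1}/(\mathbf{1}^T C^{-1}\mathbf{1})$, which is equivalent to minimizing the $\chi^2$. A hedged sketch (not the combination program itself), with invented toy numbers:

```python
import numpy as np

def blue_combine(x, cov):
    """BLUE for one parameter: weights w = C^{-1} 1 / (1^T C^{-1} 1),
    estimate w.x, variance 1/(1^T C^{-1} 1); the same w minimize
    chi^2 = (x - m 1)^T C^{-1} (x - m 1) over m."""
    cinv = np.linalg.inv(cov)
    ones = np.ones(len(x))
    norm = ones @ cinv @ ones
    w = cinv @ ones / norm
    mean = w @ x
    chi2 = (x - mean) @ cinv @ (x - mean)
    return mean, np.sqrt(1.0 / norm), w, chi2

# Two correlated toy measurements (illustrative values only):
x = np.array([171.2, 170.3])
sig = np.array([2.0, 3.0])
rho = 0.3
cov = np.array([[sig[0] ** 2,           rho * sig[0] * sig[1]],
                [rho * sig[0] * sig[1], sig[1] ** 2          ]])
mean, err, w, chi2 = blue_combine(x, cov)
```

When the correlation between two inputs exceeds the ratio of the smaller to the larger total uncertainty, the weight of the less precise one turns negative, which is the situation discussed below for several \hbox{Run-I}\ measurements.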
\section{Results}
\label{sec:results}
The combined value for the top-quark mass is:
\begin{eqnarray}
\ensuremath{M_{\mathrm{t}}} & = & 170.9 \pm 1.8~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }\,,
\end{eqnarray}
with a $\chi^2$ of 9.2 for 10 degrees of freedom, which corresponds to
a probability of 51\%, indicating good agreement among all the input
measurements. The total uncertainty can be sub-divided into the
contributions from the various error categories as: Statistical ($\pm1.1$),
total JES ($\pm1.1$), Signal ($\pm0.9$), Background ($\pm0.3$), Fit
($\pm0.3$), Monte Carlo ($\pm0.2$), and UN/MI ($\pm0.1$), for a total
Systematic ($\pm1.5$), where all numbers are in units of \ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }.
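The quoted totals follow from adding the categories in quadrature; a quick check with the rounded numbers above reproduces the $\pm1.5$ systematic exactly and lands near the quoted $\pm1.8$ total (the residual difference comes from rounding of the individual inputs):

```python
import math

stat = 1.1
# JES, Signal, Background, Fit, Monte Carlo, UN/MI (GeV/c^2)
syst_parts = [1.1, 0.9, 0.3, 0.3, 0.2, 0.1]

syst = math.sqrt(sum(s ** 2 for s in syst_parts))  # total systematic
total = math.sqrt(stat ** 2 + syst ** 2)           # total uncertainty
```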
The pull and weight for each of the inputs are listed in Table~\ref{tab:stat}.
The input measurements and the resulting world average mass of the top
quark are summarized in Figure~\ref{fig:summary}.
\vspace*{0.10in}
The weights of many of the \hbox{Run-I}\ measurements are negative.
In general, this situation can occur if the correlation between two measurements
is larger than the ratio of their total uncertainties. This is indeed the case
here. In these instances the less precise measurement
will usually acquire a negative weight. While a weight of zero means that a
particular input is effectively ignored in the combination, a negative weight
means that it affects the resulting central value and helps reduce the total
uncertainty. See reference~\cite{Lyons:1988} for further discussion of
negative weights.
\begin{figure}[p]
\begin{center}
\includegraphics[width=0.8\textwidth]{w07_mt.eps}
\end{center}
\caption[Summary plot for the world average top-quark mass]
{A summary of the input measurements and resulting world average
mass of the top quark.}
\label{fig:summary}
\end{figure}
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|l||rrr|rr||rrrr|rr|}
\hline
& \multicolumn{5}{|c||}{{\hbox{Run-I}} published} & \multicolumn{6}{|c|}{{\hbox{Run-II}} preliminary} \\ \cline{2-12}
& \multicolumn{3}{|c|}{ CDF } & \multicolumn{2}{|c||}{ D\O\ }
& \multicolumn{4}{|c|}{ CDF } & \multicolumn{2}{|c|}{ D\O\ } \\
& l+j & di-l & all-j & l+j & di-l & l+j & di-l & all-j & lxy
& l+j & di-l\\
\hline
\hline
Pull & $+0.73$ & $-0.31$ & $+1.33$ & $+1.84$ & $-0.20$ & $-0.03$ & $-1.22$ & $+0.05$ & $+0.83$
& $-0.22$ & $+0.20$ \\
Weight [\%]
& $- 1.3$ & $- 0.4$ & $- 0.3$ & $+ 6.1$ & $+ 0.4$ & $+39.3$ & $+ 6.4$ & $+11.0$ & $+ 0.5$
& $+39.7$ & $-1.9$ \\
\hline
\end{tabular}
\end{center}
\caption[Pull and weight of each measurement]{The pull and weight for each of the
inputs used to determine the world average mass of the top quark. See
Reference~\cite{Lyons:1988} for a discussion of negative weights.}
\label{tab:stat}
\end{table}
Although the $\chi^2$ from the combination of all measurements indicates
that there is good agreement among them, and no input has an anomalously
large pull, it is still interesting to also fit for the top-quark mass
in the all-j, l+j, and di-l channels separately. We use the same methodology,
inputs, error categories, and correlations as described above, but fit for
the three physical observables, \ensuremath{\MT^{\mathrm{all-j}}}, \ensuremath{\MT^{\mathrm{l+j}}}, and \ensuremath{\MT^{\mathrm{di-l}}}.
The results of this combination are shown in Table~\ref{tab:three_observables}
and have $\chi^2$ of 5.8 for 8 degrees of freedom, which corresponds to a
probability of 60\%.
These results differ from a naive combination, where
only the measurements in a given channel contribute to the \ensuremath{M_{\mathrm{t}}}\
determination in that channel, since the combination here fully accounts
for all correlations, including those which cross-correlate the different
channels. Using the results of
Table~\ref{tab:three_observables} we calculate the chi-squared consistency
between any two channels, including all correlations, as
$\chi^{2}(dil-lj)=3.2$, $\chi^{2}(lj-allj)=0.1$, and
$\chi^{2}(allj-dil)=2.4$. These correspond to
chi-squared probabilities of 7\%, 75\%, and 12\%, respectively, and indicate
that the determinations of \ensuremath{M_{\mathrm{t}}}\ from the three channels are consistent with
one another.
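These pairwise values can be reproduced from Table~\ref{tab:three_observables} with the standard expression for the $\chi^2$ of the difference of two correlated measurements (one degree of freedom); a short check, not the combination code itself:

```python
import math

# Values, uncertainties, and correlations from the three-observable fit
vals = {"all-j": 172.2, "l+j": 171.2, "di-l": 163.5}
errs = {"all-j": 4.1,   "l+j": 1.9,   "di-l": 4.5}
corr = {("all-j", "l+j"): 0.21, ("all-j", "di-l"): 0.15,
        ("l+j", "di-l"): 0.30}

def chi2_pair(a, b):
    """chi^2 of the difference of two correlated measurements:
    (x_a - x_b)^2 / (s_a^2 + s_b^2 - 2 rho s_a s_b)."""
    rho = corr.get((a, b), corr.get((b, a)))
    d = vals[a] - vals[b]
    return d * d / (errs[a] ** 2 + errs[b] ** 2
                    - 2.0 * rho * errs[a] * errs[b])

def pvalue(chi2):
    """One-degree-of-freedom chi^2 probability, P(X > chi2)."""
    return math.erfc(math.sqrt(chi2 / 2.0))
```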
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.30}
\begin{tabular}{|l||c|rrr|}
\hline
Parameter & Value (\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }) & \multicolumn{3}{|c|}{Correlations} \\
\hline
\hline
$\ensuremath{\MT^{\mathrm{all-j}}}$ & $172.2\pm 4.1$ & 1.00 & & \\
$\ensuremath{\MT^{\mathrm{l+j}}}$ & $171.2\pm 1.9$ & 0.21 & 1.00 & \\
$\ensuremath{\MT^{\mathrm{di-l}}}$ & $163.5\pm 4.5$ & 0.15 & 0.30 & 1.00 \\
\hline
\end{tabular}
\end{center}
\caption[Mtop in each channel]{Summary of the combination of the nine
measurements by CDF and D\O\ in terms of three physical quantities,
the mass of the top quark in the all-jets, lepton+jets, and di-lepton channels. }
\label{tab:three_observables}
\end{table}
\section{Summary}
\label{sec:summary}
A preliminary combination of measurements of the mass of the top
quark from the Tevatron experiments CDF and D\O\ is presented.
The combination includes five published {\hbox{Run-I}} measurements and two published
plus four preliminary {\hbox{Run-II}} measurements. Taking into account the
statistical and systematic uncertainties and their correlations, the
preliminary world-average result is: $\ensuremath{M_{\mathrm{t}}}= 170.9 \pm 1.8~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$, where
the total uncertainty is obtained assuming Gaussian systematic uncertainties
and adding them plus the statistical uncertainty in quadrature.
\vspace*{0.10in}
The mass of the top quark is now known with a relative precision of 1.1\%,
limited
by the systematic uncertainties, which are dominated by the jet energy
scale uncertainty. This systematic is expected to improve as larger data sets
are collected since new analysis techniques constrain the jet
energy scale using in-situ $W\rightarrow qq^{\prime}$ decays. It can be reasonably
expected that with the full \hbox{Run-II}\ data set the top-quark mass will be
known to much better than 1\%. To reach this level of precision further work
is required to determine more accurately the various correlations present,
and to understand more precisely the $b$-jet modeling, Signal, and
Background uncertainties which may limit the sensitivity at larger data sets.
Limitations of the Monte Carlo generators used to calibrate each fit method
may also become important as the precision reaches the $\sim1~\ensuremath{\mathrm{ Ge\kern -0.1em V }\kern -0.2em /c^2 }$ level
and will warrant further study in the near future.
\clearpage
\bibliographystyle{tevewwg}

physics/0703105

\section{Introduction}
The effect of the underlying classical dynamics on the quantum
motion has been a recurrent topic of research since the early days
of the quantum theory. In recent years the use of experimental
techniques based on ultracold atoms in optical lattices
\cite{Raizen} has made it possible to study in great detail the role of
classical mechanics in simple quantum systems. In these
experiments a very dilute, almost free gas of atoms (Cs and Rb) is
cooled down to temperatures of the order of tens of $\mu$K and then
interacts with an optical lattice. In its simplest form, the
optical lattice consists of two laser beams prepared in such a way
that the resulting interference pattern is a stationary plane wave
in space. The laser frequency is tuned close to a resonance of the
atomic system in order to enhance the atom-laser coupling but not
too close to avoid spontaneous emission. In this limit the
laser-atom system can be considered as a point particle in a sine
potential, namely, the quantum pendulum. If the
laser is turned on and off in a series of short periodic pulses,
the resulting system is very well approximated by the so called
quantum kicked rotor (QKR) (see Ref. \cite{reviz} for a review)
extensively studied in the context of quantum chaos, \begin{eqnarray} {\cal H}=
\frac{p^2}{2} - K\cos(q)\sum_{n}\delta(t-Tn).\label{kr} \end{eqnarray} For
short time scales, quantum and classical motion agree. However,
quantum diffusion is eventually suppressed due to destructive
interference that localizes eigenstates in momentum space. This
counter-intuitive feature, usually referred to as dynamical
localization \cite{dyn}, was fully understood \cite{fishman} after
mapping the kicked rotor problem onto a short range one
dimensional disordered system where localization is well
established. The theoretical predictions of Ref. \cite{fishman}
were eventually confirmed experimentally \cite{Raizen} (see also
Ref. \cite{otherexp}) by using the cold atoms techniques mentioned
previously. The standard kicked rotor is thus an ideal
candidate for the study of the quantum properties of one dimensional
systems whose classical motion is diffusive. A natural question to
ask is whether this analysis can be extended to other types of
(anomalous) diffusive motion.
For values of the kick strength $K$ in Eq.(\ref{kr}) sufficiently
small, the classical phase space is composed of chaotic and
integrable parts and, for certain initial conditions, the
classical motion is well described by a process of anomalous
diffusion. The quantum transport properties in systems with a
mixed phase space \cite{Christensen}
depend strongly on the details of the Hamiltonian \cite{boh}.
Even for a given configuration many types of anomalous diffusion
are observed depending on the initial conditions or the time
scales studied \cite{Geisel}. This lack of universality makes it
difficult to precisely assess the effect of the underlying
classical anomalous diffusion on the quantum dynamics.
The situation is different if the smooth sinusoidal optical
potential is replaced by a potential with a logarithmic or
power-law singularity \cite{ant9}. The classical phase space is
homogeneous but still the classical motion is superdiffusive. In
the quantum realm, as a consequence of interference effects, the
particle still diffuses but at a slower rate. In fact, for certain
types of singularities, full dynamical localization never occurs
and diffusion persists at all times. In other cases exponential
localization is eventually observed but anomalous diffusion
different from the classical one is still observed for shorter
times. The classical density of probability $P(p,t)$ in this
region is accurately described by the solution of an anomalous
Fokker-Planck equation \cite{klafter}. We note that, unlike the
case of normal diffusion, the information obtained from the
knowledge of a few moments of the density of probability may not
be sufficient to fully characterize the classical motion
\cite{klafter}. Thus, for a correct understanding of these systems,
a detailed investigation of $P(p,t)$ is essential.
The models studied in Ref. \cite{ant9} are non-KAM but classical
anomalous diffusion can also exist in KAM systems. One example is
the kicked rotor with a smooth potential but subjected to {\em
pairs} of closely time-spaced kicks: the $2\delta$-kicked rotor
($2\delta$-KR) \cite{Jones,Tania}.
The dynamics of this model in a certain region of parameters has
already been investigated in the literature: theoretically in Ref.
\cite{Tania} and experimentally in Ref. \cite{Jones}. Unlike the
single kicked rotor the classical dynamics is strongly correlated.
The momentum space is divided into regions of fast momentum
diffusion separated by porous boundaries, i.e., narrow trapping
regions where classical trajectories `stick' for relatively long
periods.
The $2\delta$-KR trajectory spends considerable time trapped in a
cell before escaping to the next.
In this paper we provide a detailed account of the type of
classical and quantum anomalous diffusion associated with this
motion. We shall restrict ourselves to the one cell region,
namely, to time/momentum scales such that a particle initially
trapped manages to escape and eventually reaches a new trapping
region. We aim at a better understanding of how generic features of
the classical anomalous diffusion affect the quantum motion in a
KAM system. We restrict ourselves to a region of parameters such
that the dynamics is generic, namely, the classical phase space is
fully chaotic with no islands of stability. However, the
trapping-leaking mechanism of our model still causes strong
deviations from the single kicked rotor results. Another important
motivation to study the motion of a fully chaotic $2\delta$-KR is
that it mimics the effect of cantori in chaotic systems. Thus a
particle typically gets trapped in a cantorus for a long time
until it escapes to the chaotic sea. We argue that the findings of this
paper shed light on the role of classical cantori in quantum
mechanics \cite{Maitra,weiss}.
The organization of the paper is as follows: in the next section
we introduce the model, review some of its more relevant dynamical
features and discuss the region of parameters to be studied. In
section III, we study the classical and quantum transport
properties. Among other quantities, we analyze the classical and quantum
density of probability, the classical-quantum breaking time and
the return probability. We aim to describe how dynamical
localization arises in systems whose classical diffusion is
anomalous and how other relevant scales of the problem such as
the quantum-classical breaking time are affected by the underlying
classical dynamics.
In summary, our main new results are: 1) In the region of
interest, both classical and quantum dynamics are well characterized
by a process of anomalous diffusion. 2) We identify two routes to
dynamical localization as a function of $\hbar$: for $\hbar >
\hbar_c$ ($\hbar_c$ is a function of the kick strength and the
separation between pairs of kicks) standard dynamical localization
occurs in the trapping region and the particle does not escape;
for $\hbar < \hbar_c$ the central part of the probability density is
exponential and almost time independent. However, diffusion does
not stop since eventually the particle escapes from the trapping
region. True dynamical localization in this case occurs for time
scales much longer than the typical time to escape from the
trapping region. For intermediate times we find the quantum
diffusion is anomalous but slower than the classical one. 3) The
classical anomalous diffusion induces a fractional scaling with
$\hbar$ in different quantities of interest such as the
quantum-classical breaking time. 4) The $2\delta$-KR can be used
as a simplified model to study the effect of cantori in classical
and quantum mechanics.
\section{The model}
We consider a system with a Hamiltonian corresponding to a
sequence of closely spaced pairs of kicks:\begin{eqnarray} {\cal H}=
\frac{p^2}{2} - K \cos x \sum_n \left[ \delta(t-nT)+
\delta(t-nT+\epsilon) \right]\nonumber, \end{eqnarray} where $\epsilon \ll T$
is a short time interval and $K$ is the kick-strength.
\subsection{Classical dynamics}
The classical map for the $2\delta$-KR is a straightforward
extension of the Standard Map:
\begin{eqnarray}
p_{n+1}=p_n + K\sin x_n; &\ & p_{n+2}=p_{n+1} + K\sin x_{n+1}
\nonumber \\
x_{n+1}=x_n + \epsilon p_{n+1}; &\ & x_{n+2}=x_{n+1} + \tau
p_{n+2}
\nonumber \\
\label{eq2}
\end{eqnarray}
where $\epsilon$ is a very short time interval between two kicks
in a pair and $\tau = T -\epsilon$ is the (much longer) time interval
between the pairs.
Clearly, the limit $\epsilon=\tau$ or $0$ corresponds to the
Standard Map, which describes the classical dynamics of the
quantum kicked rotor:
\begin{eqnarray}
p_{i+1} &=& p_{i} + K \sin x_{i}, \nonumber \\
x_{i+1} &=& x_i + p_{i+1}. \label{eq3}
\end{eqnarray}
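As an illustration (not the code used in this work), the map of Eq.~(\ref{eq2}) can be iterated directly; the sketch below uses the parameter values of Fig.~1 and verifies numerically that an ensemble launched at the trapping momentum $p_0=\pi/\epsilon$ absorbs very little energy:

```python
import numpy as np

def two_delta_step(x, p, K=0.2, eps=0.25, tau=1.0e4):
    """One period of the 2-delta kicked rotor map, Eq. (2):
    kick, short free rotation eps, kick, long free rotation tau."""
    p = p + K * np.sin(x)
    x = x + eps * p
    p = p + K * np.sin(x)
    x = np.mod(x + tau * p, 2.0 * np.pi)   # angle defined mod 2*pi
    return x, p

# Ensemble at the center of a trapping region, p0 = pi/eps:
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 50_000)
p = np.full(50_000, np.pi / 0.25)
for _ in range(100):
    x, p = two_delta_step(x, p)
energy = np.mean(p ** 2)   # stays close to p0^2: kicks nearly cancel
```

Setting $\epsilon=\tau$ (or $\epsilon=0$) in the step above recovers the Standard Map, Eq.~(\ref{eq3}).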
Particles with momenta $p_0 \simeq (2m+1)\pi/\epsilon$ and
$m=0,\pm 1, \pm 2,\cdots$ (relative to the optical lattice) are
confined in momentum trapping regions and absorb little energy;
conversely, particles prepared near $p_0 \simeq 2m\pi/\epsilon$
experience rapid energy growth up to localization. The basic
mechanism of trapping is fairly intuitive: atoms for which $p_0 =
(2m+1)\pi/\epsilon$ experience an impulse $K\sin x$ followed by
another one $\simeq K\sin(x + \pi)$ which in effect cancels the
first. Over time, however, there is a gradual de-phasing of this
classical `anti-resonant' process. A theoretical study
\cite{Jones} of the classical diffusion in this system found
anomalous momentum diffusion for any $p_0$ and intermediate time
scales. Long-range corrections to the uncorrelated diffusion rate,
not present in the standard kicked rotor, were also observed.
Inside the trapping region diffusion is normal, but the
coefficient of diffusion $D$ has a dependence on $K$ as $D \sim
K^3$, similar to the one found in single kicked rotors in a region
of the classical phase space densely populated by cantori
\cite{weiss}. By contrast, if the phase space is fully chaotic one
expects $D \sim K^2$. This suggests that our model may
reproduce to a good approximation the effect of cantori in a KAM
system.
The size in momentum space of the trapping region $\delta p$
strongly depends on the parameters $\epsilon,K$ defining the
model.
For an accurate analysis of the trapping region it is necessary that:
i) the particle is initially trapped, ii) the particle dwells on
average a time long enough in this region before it escapes from
it. In Ref. \cite{Jones} it was concluded that this implies the
criterion $K \epsilon \ll 1$. However, if $K \epsilon$ is too
small, the phase space would be too regular. Since we are
interested in generic features of the motion, we take iii) $\tau
\gg 1$. This implies less correlations between successive space
positions of the particle. We thus restrict ourselves throughout the
paper to the range $K \epsilon \ll 1$ and $\tau \gg 1$ with
initial conditions inside the trapping region.
\subsection{Quantum dynamics}
The time evolution operator for this system can be written as
\begin{eqnarray}
\hat{U}^{\epsilon} = e^{-i \frac{\tau \hat{p}^2}{2\hbar}}
e^{i\frac{K}{\hbar} \cos x} e^{-i \frac{\epsilon
\hat{p}^2}{2\hbar}} \ e^{i\frac{K}{\hbar} \cos x}. \ \label{eq31}
\end{eqnarray}
In a basis of plane waves, $\hat{U}^{\epsilon}$ has matrix
elements
\begin{eqnarray}
U_{lm}^{\epsilon}=U_l^{free} \cdot U_{lm}^{2-kick} = e^{-i
\frac{l^2\hbar\tau}{2}} \ i^{l-m} \nonumber \\ \sum_k
J_{l-k}\left(K_{\hbar}\right) \ J_{k-m}\left(K_{\hbar}\right) \
e^{-i \frac{k^2\hbar \epsilon}{2}} \label{eq32}
\end{eqnarray}
where $K_{\hbar}= K/\hbar$ and the $J_n(K_{\hbar})$ are Bessel
functions of the first kind of integer order. It is easy to see that
$U_{lm}^{2-kick}$ is invariant if the products $K_\epsilon=
K\epsilon$ and $\hbar_\epsilon= \hbar\epsilon$ are kept constant;
while the free propagator $U_l^{free}=e^{-i
\frac{l^2\hbar\tau}{2}}$ simply contributes a near-random phase.
Thus the results are quite insensitive to the magnitude of $\tau =
T-\epsilon$ provided that $\tau \gg 1$. We will stick to $K
\epsilon \ll 1$ and $\tau \gg 1$ in all numerical calculations.
The result in Eq. (\ref{eq32}) may be compared with the one-kick
propagator of the standard QKR,
\begin{eqnarray}
U^{(0)}_{lm}= e^ {-i \frac{l^2 T \hbar}{2}} \
J_{l-m}\left(K_{\hbar}\right) . \label{eq33}
\end{eqnarray}
The one-kick matrix for the QKR has a well-studied band-structure:
since $J_{l-m}(x)\simeq 0$ for $|l-m| \gg x$, we can define a
bandwidth for $U^{(0)}$; i.e. $b= K_{\hbar}$ (this is strictly a
{\em half}-bandwidth) which is independent of the angular momenta
$l$ and $m$. However, this is {\em not} the case for the matrix of
$U^{\epsilon}$.
Assuming $|l-m|$ is small it was shown in Ref. \cite{Tania} that
\begin{equation}
U_{lm}^{\epsilon} \approx e^{-i \Phi} \ J_{l-m}\left[
2K_{\hbar} \cos \left ({l\hbar_\epsilon/2}\right) \right]
\label{eq34}
\end{equation}
where the phase $\Phi=(l^2T+ \epsilon l m + \epsilon l^2)\hbar/2+
\pi (l-m)/2$. Hence we infer a momentum dependent bandwidth,
$b(p)= 2K_{\hbar}\cos{(p \epsilon/2)}$.
While $U^{(0)}$ has a constant bandwidth, the bandwidth for the
matrix of $U^{\epsilon}$ oscillates with $l$ from a maximum value
$b_{max}=2K_{\hbar}$, equivalent to twice the bandwidth of
$U^{(0)}$, down to a minimum value $b_{min} \sim 0$.
$U^{\epsilon}$ is thus partitioned
into sub-matrices of dimension $N ={2\pi}/{\hbar_\epsilon}$
corresponding precisely to the momentum cells of width $\Delta p =
N \hbar$ observed in experiments \cite{Jones}.
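A small numerical illustration of this structure, using the bandwidth $b(p)= 2K_{\hbar}\cos(p\epsilon/2)$ of Eq.~(\ref{eq34}) and the parameter values used in our simulations below:

```python
import numpy as np

K, hbar, eps = 0.2, 1.0e-3, 0.25
K_h = K / hbar
N_cell = 2.0 * np.pi / (hbar * eps)       # sub-matrix dimension N = 2*pi/hbar_eps

p = np.linspace(0.0, 2.0 * np.pi / eps, 1001)    # one momentum cell
b = 2.0 * K_h * np.abs(np.cos(p * eps / 2.0))    # momentum-dependent half-bandwidth
```

The bandwidth indeed oscillates between $b_{max}=2K_{\hbar}$ at the cell center and $b_{min}\sim 0$ at the trapping momenta $p=(2m+1)\pi/\epsilon$.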
In this paper we aim to study both the evolution of an initially
given wave-packet and its low order moments. The structure of the
evolution operator $U^{\epsilon}$ allows an efficient and accurate
numerical calculation of these quantities. The action of the
operator $U^{\epsilon}$ on a quantum state can be decomposed into
four steps: two associated with the kicks, which are diagonal in
the position representation; and two associated with the free rotations
between neighboring kicks, which are diagonal in the momentum
representation. We thus alternate between the position and momentum
representations to facilitate the calculations. The transformation
between the representations is efficiently carried out with the
fast Fourier transform (FFT) algorithm. In our numerical
simulations we can set the size of the basis up to $2^{20}$, which
is large enough to guarantee double-precision accuracy.
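A hypothetical minimal implementation of this split-operator scheme (not the authors' code; the basis is kept small for illustration) alternates kick phases in position space with free-rotation phases in momentum space:

```python
import numpy as np

def evolve(psi, n_periods, K=0.2, eps=0.25, tau=1.0e4, hbar=1.0e-3):
    """Split-operator evolution of the 2-delta kicked rotor:
    kicks are diagonal in position, free rotations in momentum,
    so the state is moved between representations with the FFT."""
    N = psi.size
    x = 2.0 * np.pi * np.arange(N) / N     # position grid on [0, 2*pi)
    l = np.fft.fftfreq(N, d=1.0 / N)       # integer momentum indices, p = l*hbar
    kick = np.exp(1j * (K / hbar) * np.cos(x))
    rot_eps = np.exp(-1j * eps * hbar * l ** 2 / 2.0)
    rot_tau = np.exp(-1j * tau * hbar * l ** 2 / 2.0)
    for _ in range(n_periods):
        psi = np.fft.ifft(rot_eps * np.fft.fft(kick * psi))  # kick + short rotation
        psi = np.fft.ifft(rot_tau * np.fft.fft(kick * psi))  # kick + long rotation
    return psi

# Momentum eigenstate |l0> as the initial condition:
N, l0 = 2 ** 12, 64
psi0 = np.exp(1j * l0 * 2.0 * np.pi * np.arange(N) / N) / np.sqrt(N)
psi = evolve(psi0, 10)
norm = np.sum(np.abs(psi) ** 2)
```

Since every factor is a unit-modulus phase and the FFT is unitary, the norm is conserved to machine precision.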
\section{Results: Anomalous diffusion and dynamical localization}
In this section we investigate the classical and quantum dynamics
of the $2\delta$-KR. Our main motivation is to describe the effect
on the quantum motion of generic dynamical features of the
classical dynamics such as the trapping-escaping mechanism. We
mainly focus on time/momentum scales such that the motion is
confined inside one cell. Initial conditions are always chosen
within the trapping region. For a study in the multi-cell region
we refer to Ref. \cite{Tania,Jones}.
As was mentioned previously, we restrict ourselves to the window
of parameters $K \epsilon \ll 1$. In addition, we also impose
$\tau \gg 1$ in order to remove correlations that make the
dynamics non generic. In the numerical calculations $\tau, K$, and
$\epsilon$ are fixed within the above limits. Our main
conclusions do not depend on the specific value of the parameters.
Quantum effects are
investigated by varying $\hbar$.
\begin{figure}
\includegraphics[width=.95\columnwidth,clip]{fig1a.EPS}
\vspace{-.5cm}\label{fig1a}
\includegraphics[width=.95\columnwidth,clip]{fig1b.EPS}
\vspace{-.5cm}\label{fig1b} \caption{(Color online) Comparison of
quantum and classical energy diffusion (from bottom to top:
$\hbar=4\times 10^{-3},1\times 10^{-3},5\times 10^{-4}, 2.5\times
10^{-4}$ and classical respectively) for $K=0.2,\epsilon=0.25$ and
$\tau=10^4$. The two figures only differ in the scale of
time/momentum represented. Fig.(a) shows the effect of leaking
from the trapping region in longer time scales while Fig.(b)
describes diffusion inside the trapping region. The inset of Fig.(b)
shows the dependence of the quantum-classical breaking time $t_c$ on $\hbar$; the best fit (thin
solid line) suggests that $t_c\sim \hbar ^ {-1.39\pm 0.05}$ for
$\hbar<10^{-3}$. This is in contrast with the single kicked rotor
where $t_c \sim \hbar^{-2}$.} \label{fig1}
\end{figure}
\subsection{Energy diffusion}
The classical dynamics in the trapping region was investigated
analytically in Ref. \cite{Jones}. It was found that energy
diffusion $\langle p^2(t) \rangle \sim D t$ increases linearly
with time. However, the dependence of $D \sim K^3$ on the kicking
strength is different from the single kicked rotor prediction $D
\sim K^2$ in the chaotic region but similar to the one observed if
the classical phase space is populated by cantori \cite{Fish}. This
suggests that our model captures generic features of the effect of
cantori in classical mechanics.
Eventually the particle reaches the edge of the trapping region
and leaks outside. Diffusion becomes then anomalous until the
particle gets trapped again. As is shown in Fig. 1, the parameters
$K, \epsilon$ and $\tau$ were chosen such that the typical time
for considerable leaking is $t_{leak} \approx 200 \gg 1 $. Before
this time the leakage is still possible but with a negligible
probability. As the particle approaches the outer boundaries of
the two cells that sandwich the trapping region (at $t\approx
6\times 10^3$), the diffusion begins to slow down. The
time unit we take in all figures is the number of the kick pairs.
In the quantum case the dynamics is richer. If we restrict
ourselves to the one-cell region, three time scales can be
distinguished. In a first stage, $t < t_c$, the quantum averaged
energy agrees with its classical counterpart. As a result of the
peculiarities of the classical dynamics, scaling with $\hbar$ of
the classical-quantum breaking time $t_c$ may be different from
$t_c \sim \hbar^{-2}$, the result for a single kicked rotor.
For longer times, $t_c < t < t_d$, interference effects start to
be important. As a consequence we expect the particle still
diffuses, $\langle p^2(t) \rangle \approx D_{quan} t^{\gamma}$,
but at a rate that decreases as $\hbar$ increases and
is lower than the classical one; namely $\gamma \leq 1$ and
$D_{quan} \leq D_{clas}$. This is a novel regime caused by the
interplay of destructive interference and classical anomalous
diffusion. This stage lasts up to a time $t_d$ in which full
dynamical localization occurs and diffusion is totally arrested.
Our numerical results confirm the above qualitative picture.
Results for quantum and classical energy diffusion $\langle p^2(t)
\rangle$ ($p=\hbar k$ for the quantum case) as a function of time
$t$ for $\tau = 10^4$ and $K \epsilon=0.05 \ll 1$ for different
$\hbar$'s are shown in Fig. 1. In the classical case $2\times
10^7$ initial conditions uniformly distributed along
$p_0=\pi/\epsilon$ (located at the center of the trapping region)
have been utilized for the ensemble average. The quantum initial
condition is set as the momentum eigenstate $|l_0\rangle$ with
$l_0$ the integer closest to $p_0/\hbar$.
We restrict ourselves to time/momentum scales such that the motion
is confined to one cell. We observe that for short times both
classical and quantum results coincide up to a certain breaking
time $t_c$ which has a fractional scaling with $\hbar$, different
from the single kicked rotor case (see inset of Fig. 1b, where
$t_c$ is evaluated as the maximum time such that the deviation
between classical and quantum $\langle p^2(t)\rangle$ is below
$20\%$). For later times the quantum particle still diffuses
normally but with a smaller diffusion coefficient $D_{quan}$ (see
Tab. I). This is a direct consequence of destructive interference
effects. We relate this new region of weak dynamical localization
to the effect of classical trapping on the quantum dynamics. This
interpretation is reinforced by the observed dependence of
$D_{quan}$ on $\hbar$ in the trapping region (see Tab. I). For
$\hbar$ sufficiently small ($\hbar<10^{-4}$), $D_{quan}$ is close
to the classical prediction. For larger $\hbar$, $D_{quan}$ decreases
as $\hbar$ increases, though the exponent $\gamma \sim 1$
is not modified. This
situation lasts until a maximum $\hbar$, denoted as $\hbar_{max}$,
such that the breaking-time $t_c$ coincides with the time $t_d$
in which full dynamical localization starts. For the
parameters utilized $\hbar_{max} \approx 6\times 10^{-3}$.
We have observed an additional $\hbar$ scale, denoted as
$\hbar_c$, relevant for a precise characterization of the
trapping-leaking mechanism. It is defined by the smallest $\hbar$
such that $t_d < t_{leak}$. For the parameters used
$\hbar_c\approx 10^{-3}<\hbar_{max}$. For $\hbar_c < \hbar <
\hbar_{max}$, $t_d < t_{leak}$, $D_{quan}$ is close to zero, and
the wave-packet is well localized in the trapping region. In this
range as $\hbar$ approaches $\hbar_{max}$, $t_d$ tends to $t_c$.
For $\hbar < \hbar_c$, the time scale $t_d$ related to the
dynamical localization increases dramatically. However quantum
energy diffusion is still linear in time (like the classical one)
up to $t_{leak}$. After $t\approx t_{leak}$ the classical particle
starts to leave the trapping region and the classical motion
becomes superdiffusive. The quantum dynamics for $\hbar< \hbar_{c}$
is also superdiffusive, but with a smaller leaking rate. The
smaller the $\hbar$, the closer the leaking rate to the classical
one. Eventually ($t\approx 6\times 10^3$) a new trapping region is
approached. As a consequence, both classical and quantum motions
are slowed down again.
We note that the region of weak dynamical localization
characterized by a linear time dependence of the energy evolution
is not present in the standard kicked rotor where there is no
intermediate region between classical diffusion and full quantum
dynamical localization. This is an indication that even though the
quantum diffusion is still normal inside the trapping region the
$2\delta$-KR is essentially different from the single kicked
rotor.
\begin{table}
\vspace{0.2cm}
\begin{tabular}{cccc}
\hline
$\hbar $ & $~~~~~t_c~~$ & $~~~t_d$ &$~~~D_{quan}/D_{clas} $ \\
\hline
$8\times 10^{-5}$ & ~~~$95\pm 30$~ & $~~~>10^4$& $~~~0.85\pm 0.05$ \\
$1\times 10^{-4}$ & ~~~$64\pm 25$~ & $~~~>10^4$& $~~~0.74\pm 0.04$ \\
$2\times 10^{-4}$ & ~~$22\pm 6$~ & $~~~>10^4$& $~~~0.43\pm 0.04$ \\
$5\times 10^{-4}$ & ~~$7\pm 3$ & $~~~>10^4$ & $~~~0.07\pm 0.02$ \\
$1\times 10^{-3}$ & ~~$3\pm 1$ & $~~~>10^4$ & $~~~0.02\pm 0.01$ \\
$2\times 10^{-3}$ & ~~$3\pm 1$ & $ ~~~80\pm 30$ & $~~~0.02\pm 0.01$ \\
$3\times 10^{-3}$ & ~~$3\pm 1$ & $ ~~~50\pm 20$ & $~~~0.02\pm 0.01$ \\
$6\times 10^{-3}$ & ~~$3\pm 1$ & $ ~~~4\pm 2$ & $~~~-$ \\
\hline
\end{tabular}
\caption{Breaking time $t_c$, dynamical localization time $t_d$,
and classical/quantum diffusion coefficient against $\hbar$ for
$K=0.2,\epsilon=0.25$, and $\tau=10^4$. $D_{clas}\approx
1.83\times 10^{-5}$, which is evaluated over $t<t_{leak}\approx
200$. $D_{quan}$ is evaluated over the time range $t_c<t<t_{leak}$
for $\hbar\le \hbar_c\approx 10^{-3}$ and $t_c<t<t_d$ otherwise.}
\label{aa}
\end{table}
For longer times $t > t_d$ diffusion stops as a consequence of
standard dynamical localization similar to the one observed in the
single quantum kicked rotor. Our results thus suggest that in
order to observe genuine quantum anomalous diffusion the value of
$\hbar$ must be such that $t_d \gg t_c$. This corresponds to
$\hbar \leq \hbar_{c}$. We remark that this condition should be
met by any experiment aiming to confirm the results reported in
this paper.
Although not shown, we have confirmed that the dependence of the
classical diffusion constant is $D \sim K^3$, similar to the case
of a single kicked rotor in a region populated by classical cantori.
This together with the anomalous dependence with $\hbar$ also
found in studies of the role of cantori in quantum mechanics
\cite{Maitra} is a further indication that the $2\delta$-KR in the
region of parameters studied in this paper can be utilized as an
effective model to investigate the role of classical cantori in
quantum mechanics.
Finally, we note that $\langle p^2(t) \rangle \sim t$ is only a
necessary condition for normal diffusion; it by no means
assures that the density of probability is Gaussian-like
\cite{klafter}. Indeed, in the next section we will show that the
quantum and classical density of probability of our model strongly
deviate from the normal diffusion prediction.
\subsection{Density of probability}
The density of probability of finding a particle with momentum $p$
after a time $t$ from a given initial state
$|\psi(0)\rangle=|l_0\rangle$ is given by $P_q(p,t)\equiv
P_q(k,t)=|\langle k|\psi(t)\rangle |^2$ with $p=k\hbar$. The
parameter set $K=0.2$, $\epsilon = 0.05$ and $\tau = 10^4$ was chosen to
permit us to study generic features of the motion. By generic we
mean that the trapping region is large enough to be studied and the
classical phase space has no stability islands; namely, $K\epsilon
\ll 1$. In addition $\tau \gg 1$, so consecutive pairs of kicks
are uncorrelated. In all cases our initial state is located in the
trapping region.
\subsubsection{Classical density of probability}
The classical $P(p,t)$ is obtained by evolving the classical
equation of motion for $2\times 10^7$ different initial conditions
at the center of the trapping region. Positions are uniformly
distributed along the interval $(-\pi,\pi)$.
$P(p,t)$ is approximated as the fraction of the ensemble
that falls in $(p-\Delta p/2,p+\Delta p/2)$ at time $t$.
In the numerical calculations we set $\Delta p=0.02 \sqrt{\langle
p^2(t)\rangle}$.
We distinguish the following regions in the classical density of
probability (see Fig. 2a): For short times such that the particle
is well inside the trapping region the diffusion is normal and
$P(p,t)$ is Gaussian (see inset of Fig. 2a). As the boundary of
the trapping region is approached we observe a gradual crossover
from normal to anomalous (super) diffusion. For small momentum
$P(p,t)$ is still Gaussian as leaking to the outside region is
weak. As time approaches $t_{leak} \approx 200$, the typical time
to reach the edge of the trapping region, the central (small
momentum) Gaussian region becomes smaller and smaller. Meanwhile,
the outskirts bend down and a power-law behavior typical of
anomalous diffusion is observed.
\begin{figure}
\includegraphics[width=.95\columnwidth,clip]{fig2a.EPS}
\vspace{-.5cm}
\includegraphics[width=.95\columnwidth,clip]{fig2b.EPS}
\includegraphics[width=.95\columnwidth,clip]{fig2c.EPS}
\vspace{-.5cm} \caption{(Color online) Distribution of the
classical (a) and quantum probability density with $\hbar\approx
5\times 10^{-4}$ (b) and $3\times 10^{-3}$ (c) respectively.
Classical $P(p,t)$ is characterized by a Gaussian profile (inset
of Fig. (a)) before leaking from the trap occurs
$(t<t_{leak}\approx 200)$. For later times $t_{leak} < t < t_d$,
$P(p,t)$ develops power-law tails, $P(p,t)\sim p^{-\alpha}$,
typical of anomalous diffusion. The exponent $\alpha$ varies in
time ($\alpha\approx 4.5,2.9,2.3,1.3$ for $t=500,1000,2000$ and
$6000$ respectively). In the quantum case for small $\hbar$
leaking leads to power-law tails as well (see Fig. (b)). The best
fitting exponent, indicated by thin black lines, is $\alpha\approx
7.1,4.2,3.0,0.94$ for $t=500,1000,2000$ and $6000$, respectively.
For larger $\hbar$ (see Fig. (c)), dynamical localization
suppresses the diffusion before leaking occurs.
$K=0.2,\epsilon=0.25$ and $\tau=10^4$.}
\end{figure}
For longer times, $t > t_{leak}$, $P(p,t)\sim p^{-\alpha}$ with
$\alpha$ a decreasing function of time (Fig. 2a). For $t \to
\infty$, $\alpha \to 0$. This behavior is not surprising due to
the transient nature of the trapping-leaking mechanism. The
power-law decay is a direct consequence of the enhanced diffusion once
the particle has escaped from the trapped region. However,
precisely due to this fast diffusion, the particle soon approaches
a new trapped region where diffusion is slowed down again until
the particle leaks from the trap. As a consequence of this
bottleneck, the exponent $\alpha$ decreases and the density of
probability in the region of fast diffusion gradually increases
with time. In addition a sharp jump in $P(p,t)$ is observed
between the enhanced diffusion zone and the new trapping region
(see Fig. 2a for $t=6000$ where a steep drop of $P(p,t)$ happens
at around $|p-p_0|=2\pi/\epsilon$ corresponding to the new
trapping region). For long times $t \to \infty$, $P(p,t)$ resembles
a staircase, flat between trapping regions and discontinuous in
the trapping areas. This is a simple consequence of the fact that
the dwelling time in the trapping region is typically much longer
than the one needed to travel between two consecutive traps. We
shall see in the next section that quantum mechanically the
situation is different.
\subsubsection{Quantum density of probability}
The quantum density of probability $P_q(p,t)$ was calculated from
an initial condition $|\psi(0)\rangle=|l_0\rangle$ with
$l_0=\pi/\hbar_\epsilon$. For given values of $\hbar$ and
$\epsilon$ the right-hand side may not be an integer; in that case
the nearest value of $\hbar$ for which it is an integer was chosen. Thus
$p_0=\pi/\epsilon$ exactly for both classical and quantum
calculations. As in the classical case, $P_q(p,t)$ was evaluated
by summing up the probability of falling in a bin of width $\Delta
p$. We set $\Delta p = 0.0315$ for $\hbar\approx 5\times 10^{-4}$
(Fig. 2b) and $\Delta p = 0.006$ for $\hbar\approx 3\times
10^{-3}$ (Fig. 2c). Hence it is in fact a coarse-grained result
where part of the quantum fluctuations has been suppressed. It is
more instructive to start our analysis of the quantum probability
density with a general account of the expected effect of the
classical motion on the quantum dynamics. As in the single kicked
rotor we expect to observe dynamical localization for sufficiently
long times $t > t_d$ \cite{fishman}. However the trapping (and
eventual release) mechanism changes qualitatively the route to
dynamical localization in our model. In fact we distinguish two
different scales for full dynamical localization. For $\hbar>
\hbar_c$, full dynamical localization will occur well inside the
trapping region where the classical diffusion is normal. As a
consequence the particle will not have time to escape ($t_d \leq
t_{leak}\approx 200$ with our parameters) and the density of
probability will be time independent and will decay exponentially
as a function of the momentum $p$. These are typical signatures
of exponential localization similar to the one observed in a
single kicked rotor. The situation is different if $\hbar<
\hbar_{c}$ is small enough such that leaking to the enhanced
diffusion region is possible. In this case we may still observe
typical features of dynamical localization within the trapping
region. However for larger momenta the density of probability
develops time-dependent power law tails typical of anomalous
diffusion. Full dynamical localization will not typically take
place until the next trapping region is reached or later if
$\hbar$ is small enough. This revival of quantum diffusion is a
typical feature of our model that is not observed in the single
kicked rotor. Below we provide a more detailed account of the
relevant time and momentum scales (see Fig. 2b) for an accurate
description of the quantum density of probability:
1. For $t,p$ well within the trapping region: $t < t_{leak}$ and
$|p-p_0| < p_{leak} \sim 0.1$. After a narrow region in which
classical and quantum results fully agree (see inset Fig. 2a), we
observe in the quantum case a cusp for $p \sim p_0$ caused by the
incipient dynamical localization in the core of the trapping
region. However for larger momentum the distribution is Gaussian
in agreement with previous results (see Fig. 1) for the energy
diffusion.
2. For $p$ within the trapping region ($|p -p_0| < p_{leak}$) but
$t > t_{leak}$. The core of the quantum probability is still
exponentially localized (see Fig. 2c) but the outskirts $P_q(p,t)
\sim 1/p^\alpha$ develop a power-law tail (see Fig. 2b) typical
of anomalous diffusion. In this region the exponent $\alpha$
depends on $\hbar$ but is time independent.
3. For $p$ outside or close to the edge of the trapping region and
$t \geq t_{leak}$. The quantum probability $P_q(p,t) \sim
1/p^\alpha$ develops power-law tails but in this case the exponent
$\alpha$ decreases with time and eventually tends to zero for $t
\to \infty$ (see Fig. 2b). This is in principle surprising. It
is in general expected that destructive interference slows down
the quantum motion. As a consequence the quantum density of probability should
decay faster than the classical one. However there is a simple
explanation for this behavior: in a first stage, the quantum
exponent $\alpha$ is larger than the classical one as a
consequence of destructive interference effects. For later times,
the classical probability between two trapping regions becomes
smaller than the quantum one since the classical particle leaks
faster into the next cell. At the same time destructive
interference slows down the quantum motion in this region.
Eventually $P_q(p,t)$ saturates, $\alpha \to 0$ before the next
trapping region is reached (see $t=6000$ in Fig. 2a and Fig. 2b).
4. For sufficiently long times $t > t_{d}$ quantum destructive
interference effects dominate, standard dynamical localization
takes place, and diffusion stops. $P_q(p,t)$ becomes time
independent and decays exponentially as a function of the
momentum. As was mentioned previously, the value of $t_{d}$ depends
dramatically on whether dynamical localization occurs in the
trapping region or after the particle has escaped from it.
In the former case $t_d <t_{leak}\approx 200$ and in the latter
$t_d>10^4$, with no intermediate values.
\subsection{Return Probability}
Having studied diffusion in momentum for a fixed time $t$ we now
look at the explicit dependence in time. In order to proceed we
calculate return probabilities $P(t)$ of a wave-packet as a
function of time. We average over $N$ initial starting conditions
close to the center of the trapping region to suppress quantum
fluctuations,
\begin{equation}
P(t)= \frac{1}{N} \sum_{l=l_1}^{l_2} P_l(t),
\label{eq14a}
\end{equation}
where $P_l(t)= |\langle \psi_l(t)|\psi_l(t=0) \rangle |^2$ and
$|\psi_l(t)\rangle$ is the state evolved from the angular momentum
eigenstate $|\psi_l(t=0)\rangle = |l \rangle$. The results were
averaged from $l_1\approx \pi/\hbar_\epsilon-N/2$ to $l_2=l_1+N$.
In order to ensure the initial conditions are within the trapping
region, the value $N$ is set to be $N\approx \delta p/\hbar$ with
$\delta p$ the width of the trapping region.
The return probability provides valuable information about the
degree of localization of a system. Indeed it was already utilized
in the landmark paper by Anderson \cite{anderson} about
localization. A nonzero $P(t)$ in the $t \to \infty$ limit is a
signature of exponential localization. On the other hand $P(t)
\propto t^{-d/2}$ with $d$ the spatial dimensionality is typical
of fully delocalized eigenstates (normal diffusion). In the theory
of localization a power-law decay $P(t) \propto 1/t^{\gamma}$ with
$\gamma < d/2$ is a signature of localization typical of a
disordered conductor close to a localization transition. Such slow
decay has also been related to the effect of cantori in transport
\cite{Ketz2} properties.
\begin{figure}
\includegraphics[width=.95\columnwidth,clip]{fig3.EPS}
\vspace{-.5cm} \caption{(Color online) Return probability as a
function of time for $K=5, \epsilon=0.04$ and $\tau=10^4$. We present results
for different values of $\hbar$ together with the classical prediction. In the
trapping region ($t<t_{leak}\approx 30$ in this case) the return probability
decays as a power-law $P(t)\sim t^{-\gamma}$. The exponent
$\gamma$, given by the best linear fitting (indicated by a thin
black line), decreases as $\hbar$ increases. We have obtained
$\gamma \approx 0.50,0.46,0.40$ and $0.28$ for the classical and
quantum cases with $\hbar=2^{-7}, 2^{-6}$ and $2^{-5}$
respectively. } \label{figure3}
\end{figure}
Our results are summarized in Fig. 3. For the sake of comparison,
we first study the classical counterpart of $P(t)$. Again $2\times
10^7$ initial states uniformly located in space and with
$p_0=\pi/\epsilon$ were evolved. $P(t)$ was evaluated as the fraction
of states that fall in the momentum region $|p-p_0|<0.004$ at time
$t$. For the parameters ($K=5, \epsilon=0.04$ and $\tau=10^4$)
chosen for the calculation, the width of the trapping region
$\delta p\approx 4$, and $t_{leak}\approx 30$. For $t<t_{leak}$,
$P(t)\sim t^{-0.5}$, in agreement with the Gaussian probability
density distribution obtained in the previous section. For
$t>t_{leak}$, leaking gradually takes over and after a crossover
$P(t)$ undergoes a faster power-law decay, $P(t) \sim t^{-1.5}$
until the next trap is reached.
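The classical estimate just described can be sketched numerically. This is our illustration, not the authors' code: the classical $2\delta$-KR map is assumed to take the standard form (kick of strength $K$, drift over $\epsilon$, second kick, drift over $\tau-\epsilon$), and the ensemble size is reduced for speed.

```python
import numpy as np

def pair_kick_step(x, p, K, eps, tau):
    # One period of the assumed classical 2*delta-KR map.
    p = p + K * np.sin(x)
    x = x + eps * p
    p = p + K * np.sin(x)
    x = x + (tau - eps) * p
    return x, p

def classical_return_probability(n, steps, K=5.0, eps=0.04, tau=1.0e4,
                                 window=0.004, seed=0):
    """Fraction of the ensemble found in |p - p0| < window after each
    map period; p0 = pi/eps is the center of the trapping region."""
    rng = np.random.default_rng(seed)
    p0 = np.pi / eps
    x = rng.uniform(-np.pi, np.pi, n)   # positions uniform in (-pi, pi)
    p = np.full(n, p0)                  # all momenta start at p0
    out = np.empty(steps)
    for t in range(steps):
        x, p = pair_kick_step(x, p, K, eps, tau)
        out[t] = np.mean(np.abs(p - p0) < window)
    return out

Pt = classical_return_probability(n=10**5, steps=30)
```

Fitting $\log P(t)$ against $\log t$ over the trapping window then gives the classical exponent $\gamma$ quoted in the text.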
Within the trapping region the quantum $P(t)$ decays as
$t^{-\gamma}$ with $\gamma < 0.5$, a decreasing function of
$\hbar$. The fact that $\gamma < d/2$ is a clear indication that
destructive interference is already slowing down the quantum
diffusion. Not surprisingly, $\gamma$ approaches its classical
limit, $0.5$, as $\hbar \to 0$.
For sufficiently large $\hbar \geq 0.1$ the particle cannot
escape from the trapping region and $P(t)$ becomes constant (see
for example $P(t)$ for $\hbar=2^{-3}$ in Fig. 3). This is fully
consistent with previous results for the energy diffusion and the
density of probability.
For smaller $\hbar$ the particle escapes from the trapping region
and full dynamical localization occurs only for times much longer
than those shown in Fig. 3. For $t > t_{leak}$ we observe a rapid
decrease of $P(t)$. The decay is still power-law but the exponent
is larger than for $t < t_{leak}$ though smaller than the
classical result ($\gamma\approx 1.5$). This time scale can be
utilized to study the interplay between quantum effects and
classical anomalous diffusion. Destructive interference slows down
the quantum motion; however, diffusion is not arrested due to the
underlying classical anomalous (super)diffusion.
Finally we address the similarities of our model with those of a
generic Hamiltonian in a region of the phase space dominated by
cantori. Qualitatively the effect of the trapping region is
similar to that of cantori \cite{Geisel,weiss}. In both cases the
particle remains in a small region of the phase space for a long
time until it is eventually released into the chaotic sea. In addition
the dependence of the classical diffusion constant $D \sim K^3$
\cite{Tania} is identical in both cases. In the quantum realm
cantori \cite{borgonovi,casati,sirko} induce slow power-law decay
in quantities such as the return probability \cite{Ketz2} or the
eigenstates themselves \cite{borgonovi}. In addition cantori have
been linked to fractional scaling with $\hbar$ \cite{Maitra} and
anomalous dynamical localization \cite{zaslavs}. All these
features have also been observed in the model studied in this
paper. These similarities strongly suggest that the $2\delta$-KR
can be utilized as a toy model to study the effect of cantori in
classical and quantum mechanics.
\section{Conclusions}
We have studied the effect of classical anomalous diffusion on the
quantum dynamics of atoms exposed to pairs of $\delta$-kicks. Our
results are generic and do not depend on the specific parameters
utilized, provided $K\epsilon \ll 1$ and $\tau \gg 1$. We have identified a
regime of quantum diffusion where the motion is slowed down due to
destructive interference but full dynamical localization has not
occurred yet. This has been related to the effect of the
underlying classical trapping and releasing mechanism on the
quantum dynamics. We have argued that the $2\delta$-KR can be used
as a simplified model to study the effect of cantori in classical
and quantum mechanics.
\acknowledgments
The authors thank C.E. Creffield and T.S. Monteiro for helpful
discussions. AMG acknowledges financial support from a Marie Curie
Outgoing Action, contract MOIF-CT-2005-007300. JW acknowledges
support from the Defence Science and Technology Agency (DSTA) of
Singapore under agreement POD0613356.
\section{Introduction}
Let $K$ be a connected compact Lie group acting on a real symplectic
manifold $M$ by symplectomorphisms. Suppose there exists a moment
map $\mu:M\rightarrow \k^*$ (see, for instance, \cite{GS} for the
definition of moment maps). It is an important problem in symplectic
geometry to study properties of $\mu$. In fact, usually one studies
not the map $\mu$ itself, but some coarser map, which we call the
{\it invariant moment map}. It is constructed as follows. One
chooses a Weyl chamber $C\subset \k^*$. The inclusion
$C\hookrightarrow \k^*$ induces a homeomorphism $C\cong \k^*/K$ of
topological spaces. By definition, the invariant moment map $\psi$
is the composition of $\mu:M\rightarrow \k^*$ and the quotient map
$\k^*\rightarrow C$. It turns out that the map $\psi$ has the
following remarkable properties provided $M$ is compact:
\begin{itemize}
\item[(a)] The image of $\psi$ is a convex polytope in $C$.
\item[(b)] All fibers of $\psi$ are connected.
\item[(c)] $\psi$ is an open map onto its image.
\end{itemize}
(a) and (b) were proved by Kirwan in \cite{Kirwan}, (c) is due to
Knop, \cite{Knop_convex}. Since $\mu$ is $K$-equivariant, one can
extract some information about the image of $\mu$ from (a). From (b)
one derives that all fibers of $\mu$ are connected. Hamiltonian
$K$-manifolds satisfying (a)-(c) were called {\it convex} in
\cite{Knop_convex}. In fact, all interesting classes of Hamiltonian
manifolds (compact manifolds, Stein complex manifolds, cotangent
bundles) are convex, see \cite{Knop_convex} for details.
An algebraic analog of the category of smooth manifolds with an
action of a compact Lie group is the category of smooth {\it affine}
varieties acted on by a reductive algebraic group. Similarly to the
case of compact groups one can define the notion of a Hamiltonian
action of a reductive group, see Subsection \ref{SUBSECTION_Ham1}.
It is an interesting problem to understand:
\begin{enumerate}
\item what are algebraic analogs of properties (a)-(c)?
\item what varieties satisfy these properties?
\end{enumerate}
The study of these two questions was initiated by Knop in the early
90's (see the details below).
In the sequel all groups and varieties are defined over $\C$. First
of all, we need to define the invariant moment map in the algebraic
category. Let $X$ be a symplectic algebraic variety and $G$ a
reductive algebraic group acting on $X$ in a Hamiltonian way. Fix a
moment map $\mu_{G,X}:X\rightarrow \g^*$ for this action. In the
sequel it will be convenient to identify $\g$ and $\g^*$ by means of
a nondegenerate invariant symmetric form of $\g$ and consider
$\mu_{G,X}$ as a morphism $X\rightarrow \g$. By the invariant moment
map for $X$ we mean the morphism
$\psi_{G,X}:=\pi_{G,\g}\circ\mu_{G,X}$, where $\pi_{G,\g}$ denotes
the quotient morphism $\g\rightarrow \g\quo G$ for the adjoint
action $G:\g$. Note that the relation between $\mu_{G,X}$ and
$\psi_{G,X}$ is looser than in the case of compact groups. For
example, one cannot determine $\im\mu_{G,X}$ from $\im\psi_{G,X}$.
It turns out that the morphism $\psi_{G,X}$ does have some good
properties.
\begin{Thm}\label{Thm:1}
The morphism $\psi_{G,X}$ is equidimensional (i.e., all irreducible
components of nonempty fibers have the same dimension, equal,
obviously, to $\dim X-\dim\overline{\im\psi_{G,X}}$).
\end{Thm}
In fact, a more precise result holds, see Theorem \ref{Thm:4.0.1}.
However, $\psi_{G,X}$ does not seem to have other good properties.
For example, even its general fiber may be disconnected, see
\cite{Knop12}, Introduction. Therefore one needs to modify the
morphism $\psi_{G,X}$.
To this end we introduce a kind of Stein factorization of
$\psi_{G,X}$. Namely, let $A$ denote the integral closure of the
subalgebra $\psi_{G,X}^*(\C[\g]^G)$ in $\C[X]^G$. Set
$C_{G,X}:=\Spec(A)$. There are a natural $G$-invariant morphism
$\widetilde{\psi}_{G,X}: X\rightarrow C_{G,X}$ and a finite morphism
$\tau_{G,X}:C_{G,X}\rightarrow \g\quo G$ such that
$\tau_{G,X}\circ\widetilde{\psi}_{G,X}=\psi_{G,X}$. Note that at
least the general fibers of $\widetilde{\psi}_{G,X}:X\rightarrow
C_{G,X}$ are connected whenever $G$ is connected. The idea to
replace $\psi_{G,X}$ with $\widetilde{\psi}_{G,X}$ is due to F.
Knop, see \cite{Knop1}.
In \cite{Knop12} Knop proved that any fiber of
$\widetilde{\psi}_{G,X}$ is connected provided $X$ is the cotangent
bundle of some smooth irreducible (not necessarily affine)
$G$-variety. On the other hand, he constructed an example of a
four-dimensional affine Hamiltonian $\C^\times$-variety $X$ such
that $\widetilde{\psi}_{G,X}$ has a disconnected fiber.
On the other hand, Theorems 1.2.5 and 1.2.7 from \cite{alg_hamil}
describe the image of $\widetilde{\psi}_{G,X}$. This description is
particularly easy when $X$ satisfies some additional conditions that
can be described as the presence of a grading on $\C[X]$ compatible
with the structure of a Hamiltonian variety.
\begin{defi}\label{defi:2.2.1}
An affine Hamiltonian $G$-variety $X$ equipped with an action
$\C^\times:X$ commuting with the action of $G$ is said to be {\it
conical} if the following two conditions are fulfilled:
\begin{itemize}
\item[(Con1)] The morphism $\C^\times\times X\quo G\rightarrow X\quo
G, (t,\pi_{G,X}(x))\mapsto \pi_{G,X}(tx),$ can be extended to a
morphism $\C\times X\quo G\rightarrow X\quo G$.
\item[(Con2)] There exists a positive integer $k$ (called the {\it degree} of $X$) such that $t_*\omega=t^{-k}\omega$ and
$\mu_{G,X}(tx)=t^k\mu_{G,X}(x)$ for all $t\in \C^\times, x\in X$. Here $\omega$ denotes the symplectic form
on $X$ and $t_*\omega$ is the push-forward of $\omega$ under the automorphism of $X$ induced by $t$.
\end{itemize}
\end{defi}
For example, a symplectic $G$-module and the cotangent bundle of a
smooth affine $G$-variety are conical.
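For instance, for $X=T^*X_0$ one can take the fiberwise scaling action $t\cdot(y,\alpha)=(y,t\alpha)$; the following routine check is our sketch (the scaling actions are the natural ones, not spelled out in the text):

```latex
% Cotangent bundle with t.(y,\alpha) = (y, t\alpha):
\[
  \langle\mu_{G,X}(y,t\alpha),\xi\rangle
    = \langle t\alpha,\xi_y\rangle
    = t\,\langle\mu_{G,X}(y,\alpha),\xi\rangle,
  \qquad
  t_*\omega = t^{-1}\omega,
\]
% the second equality because t^*\lambda = t\lambda for the tautological
% 1-form \lambda with \omega = d\lambda. So (Con2) holds with degree k=1.
% Similarly, a symplectic G-module V with t.v = tv satisfies
% \mu_{G,V}(tv) = t^2\mu_{G,V}(v) and t_*\omega = t^{-2}\omega, i.e. k=2.
```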
If $X$ is conical, then $C_{G,X}$ is a quotient of a vector space by
a finite group and $\widetilde{\psi}_{G,X}$ is surjective, see
\cite{alg_hamil}, Theorem 1.2.7. More precisely, there is a subspace
$\a\subset \g$ (called the Cartan space of $X$) and a subgroup
$W\subset N_G(\a)/Z_G(\a)$ (the Weyl group) such that $C_{G,X}\cong
\a/W$ and the finite morphism $\tau_{G,X}:C_{G,X}\rightarrow \g\quo
G$ is induced by the embedding $\a\hookrightarrow \g$. So the
subspace $\a\subset \g$ and the group $W$ encode the difference
between $\widetilde{\psi}_{G,X}$ and $\psi_{G,X}$. This description
partially generalizes Knop's results for cotangent bundles and
symplectic vector spaces (\cite{Knop1},\cite{Knop6}).
We know of no examples of conical Hamiltonian $G$-varieties where
$\widetilde{\psi}_{G,X}$ has a disconnected fiber. We conjecture
that in this case all fibers of $\widetilde{\psi}_{G,X}$ are
connected and, more precisely, that $X$ enjoys the following
property:
\begin{itemize}
\item[(Irr)] Any fiber of $\widetilde{\psi}_{G,X}\quo G:X\quo G\rightarrow C_{G,X}$ is irreducible.
\end{itemize}
We are able to prove (Irr) only under another restriction on $X$.
\begin{defi}\label{Def:0.4}
An affine Hamiltonian $G$-variety $X$ is said to be {\it untwisted}
if
\begin{itemize}
\item[(Utw1)] $C_{G,X}$ is smooth.
\item[(Utw2)] The morphism $\widetilde{\psi}_{G,X}$ is smooth in
codimension 1 (that is, the complement to the set of smooth points
of $\widetilde{\psi}_{G,X}$ in $X$ has codimension at least 2).
\end{itemize}
\end{defi}
\begin{Thm}\label{Thm:0.5}
Let $G$ be connected and $X$ a conical Hamiltonian $G$-variety.
\begin{enumerate}
\item If $X$ is untwisted, then any fiber of $\widetilde{\psi}_{G,X}\quo G$ is a normal Cohen-Macaulay scheme.
\item If $X$ satisfies (Utw1) and all fibers of $\widetilde{\psi}_{G,X}\quo G$ are normal (as schemes), then
$X$ satisfies (Irr).
\item Suppose $X$ is algebraically simply connected. If $X$ satisfies (Irr), then $X$ is untwisted.
\end{enumerate}
\end{Thm}
The term ``untwisted'' is partially justified by Remark \ref{Rem:0.6}.
We recall that a smooth irreducible variety $X$ is called {\it
algebraically simply connected} if a finite \'{e}tale morphism
$\varphi:Y\rightarrow X$ is an isomorphism whenever $Y$ is
irreducible.
Note that a fiber of $\widetilde{\psi}_{G,X}\quo G$ can be thought
of as an algebraic analog of a Marsden-Weinstein reduction, \cite{MW}.
Now let us describe some classes of conical untwisted Hamiltonian
$G$-varieties. Knop showed in \cite{Knop2} that the cotangent bundle
of any smooth irreducible affine variety is untwisted. In the
present paper we give alternative proofs of this result and prove
that a symplectic $G$-module is untwisted.
Let us briefly describe the content of the paper. In Section
\ref{SECTION_prelim} we recall some known results concerning
Hamiltonian actions in the algebraic setting. Section
\ref{SECTION_dimension} is devoted to the proof of Theorem
\ref{Thm:1} (in fact, of a more precise statement). In Section
\ref{SECTION_Weyl} we prove some results concerning the Weyl groups
of Hamiltonian actions (see above). These results are used in the
proof of Theorem \ref{Thm:0.5}. Besides, they play a crucial role in
the computation of Weyl groups and root lattices of affine
$G$-varieties, the former is done in the preprint \cite{Weyl}.
Section \ref{SECTION_untwisted} is devoted to the proof of Theorem
\ref{Thm:0.5}. We also present there some classes of untwisted
varieties. In Section \ref{SECTION_open} we discuss some open
problems related to the subject of the paper. Finally, Section
\ref{SECTION_Notation} contains conventions and the list of notation
we use. At the beginning of Sections 2--5 their content is
described in more detail.
{\bf Acknowledgements.} Part of the work on this paper was done
during my visit to Ruhr University, Bochum, in July, 2005, in the
framework of Euler program. I would like to thank this institution
and especially Professor H. Flenner for hospitality. I also express
my gratitude to Professor F. Knop for his kind permission to use his
counterexample in Subsection \ref{SUBSECTION_counterexamples}.
Finally, I wish to thank the referees for useful remarks on an
earlier version of this text.
\section{Preliminaries}\label{SECTION_prelim}
In this section $G$ is a reductive algebraic group and $X$ is a
smooth variety equipped with a regular symplectic form $\omega$ and
an action of $G$ by symplectomorphisms.
In Subsection \ref{SUBSECTION_Ham1} we recall the definition of a
Hamiltonian action and give some examples. Subsection
\ref{SUBSECTION_Ham2} is devoted to conical Hamiltonian varieties
introduced in \cite{alg_hamil}. In Subsection
~\ref{SUBSECTION_local} we study a local structure of Hamiltonian
actions. At first, we recall the theory of cross-sections of
Hamiltonian actions (Proposition \ref{Prop:1.1}) tracing back to
Guillemin-Sternberg, \cite{GS}. Next, in this subsection we recall
the symplectic slice theorem from \cite{slice}. These two results
are key ingredients of most proofs in this paper. Finally, in
Subsection \ref{SUBSECTION_recall} we recall some results from
\cite{alg_hamil}, \cite{Comb_Ham}. The most important ones are
Propositions \ref{Lem:4.4.1}, \ref{Thm:2.2}.
\subsection{Hamiltonian actions}\label{SUBSECTION_Ham1}
Let $U$ be an open subset of $X$ and $f$ a regular
function on $U$.
The {\it skew-gradient} $v(f)$ of $f$ is, by definition, the regular vector field on $U$
given by the equality
\begin{equation*}
\omega_x(v(f),\eta)=\langle d_xf, \eta\rangle, x\in U, \eta\in T_xX.
\end{equation*}
For $f,g\in \C[U]$ one defines their Poisson bracket
$\{f,g\}\in \C[U]$ by
\begin{equation*}
\{f,g\}=\omega(v(f),v(g)).
\end{equation*}
Clearly, $\{f,g\}=L_{v(f)}g$, where $L$ denotes the Lie derivative.
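In Darboux coordinates $(q_i,p_i)$ these objects take the familiar form; the following standard computation is our addition, with the sign convention fixed by the defining equality above under the assumption $\omega=\sum_i dp_i\wedge dq_i$:

```latex
\[
  v(f)=\sum_i\left(
    \frac{\partial f}{\partial q_i}\frac{\partial}{\partial p_i}
   -\frac{\partial f}{\partial p_i}\frac{\partial}{\partial q_i}\right),
  \qquad
  \{f,g\}=\sum_i\left(
    \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}
   -\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}\right).
\]
```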
To any element $\xi\in\g$ one associates the velocity vector field
$\xi_*$. Suppose there is a linear map $\g\rightarrow \C[X],
\xi\mapsto H_\xi,$ satisfying the following two conditions:
\begin{itemize}
\item[(H1)] The map $\xi\mapsto H_\xi$ is $G$-equivariant.
\item[(H2)] $v(H_\xi)=\xi_*$.
\end{itemize}
\begin{defi}\label{Def:2.1.1}
The action $G:X$ equipped with a linear map $\xi\mapsto H_\xi$
satisfying (H1),(H2) is said to be {\it Hamiltonian} and $X$ is
called a Hamiltonian $G$-variety.
\end{defi}
\begin{Rem}\label{Rem:2.1.2}
Very often the definition of a Hamiltonian action is given in a
slightly different way. Namely, for a connected group $G$ condition
(H1) is replaced by the condition $\{H_\xi,H_\eta\}=H_{[\xi,\eta]}$.
However, these two conditions are equivalent provided (H2) is
fulfilled. Note also that one can consider Hamiltonian actions on arbitrary Poisson varieties,
see, for example, \cite{alg_hamil}.
\end{Rem}
For a Hamiltonian action $G:X$ we define the morphism
$\mu_{G,X}:X\rightarrow \g^*$ by the formula
\begin{equation*}
\langle \mu_{G,X}(x),\xi\rangle= H_{\xi}(x),\xi\in\g,x\in X.
\end{equation*}
This morphism is called the {\it moment map} of the Hamiltonian
$G$-variety $X$.
Conditions (H1),(H2) are equivalent, respectively, to
\begin{itemize}
\item[(M1)] $\mu_{G,X}$ is $G$-equivariant.
\item[(M2)] $\langle d_x\mu_{G,X}(v),\xi\rangle=\omega_x(\xi_x,v),$ for all $ x\in X,v\in
T_xX,\xi\in\g$.
\end{itemize}
Here and below we write $\xi_x$ instead of $\xi_{*x}$.
Any two maps $\mu_{G,X}:X\rightarrow \g^*$ satisfying conditions
(M1),(M2) differ by an element of $\g^{*G}$. Moreover,
$H_{[\xi,\eta]}=\{H_\xi, H_\eta\} =\omega(\xi_*,\eta_*)$ (see, for
example, \cite{GS},\cite{Vinberg}). Conversely, for any
$\eta\in\g^{*G}$ there exists a unique Hamiltonian $G$-variety
$X_{\eta}$ coinciding with $X$ as a symplectic $G$-variety and such
that $\mu_{G,X_\eta}=\mu_{G,X}+\eta$.
Let us choose some effective $G$-module $V$ and put
$(\xi,\eta)=\tr_V(\xi\eta)$ for $\xi,\eta\in\g$. The form $(\cdot,\cdot)$ is
$G$-invariant, symmetric and its restriction to the Lie algebra of
any reductive subgroup of $G$ is nondegenerate. Using this form, we
identify $\g$ and $\g^*$. In particular, we may consider $\mu_{G,X}$
as a morphism from $X$ to $\g$.
Let us now give some examples of Hamiltonian $G$-varieties.
\begin{Ex}[Cotangent bundles]\label{Ex:2.1.4} Let $X_0$ be a smooth $G$-variety, $X:=T^*X_0$ the cotangent
bundle of $X_0$. $X$ is a symplectic algebraic variety (the
symplectic form is presented, for example, in
\cite{GS},\cite{Vinberg}). The action of $G$ on $X$ is Hamiltonian.
The moment map is given by $\langle\mu_{G,X}((y,\alpha)),
\xi\rangle=\langle \alpha, \xi_{y}\rangle$. Here $y\in X_0,
\alpha\in T^*_yX_0,\xi\in\g$.
\end{Ex}
\begin{Ex}[Symplectic vector spaces]\label{Ex:2.1.5} Let $V$ be a vector space equipped with a non-degenerate
skew-symmetric bilinear form $\omega$. Then $V$ is a symplectic
variety. Let $G$ act on $V$ by linear symplectomorphisms. Then the
action $G:V$ is Hamiltonian. The moment map $\mu_{G,V}$ is given by
$\langle\mu_{G,V}(v), \xi\rangle = \frac{1}{2}\omega(\xi v,v),
\xi\in\g, v\in V$.
\end{Ex}
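A minimal concrete instance of the previous example (our illustration, not taken from the text): let $V=\C^2$ with $\omega(v,w)=v_1w_2-v_2w_1$ and $G=\C^\times$ acting by $t\cdot(v_1,v_2)=(tv_1,t^{-1}v_2)$, so that $\xi\in\g\cong\C$ acts by $\xi\cdot(v_1,v_2)=(\xi v_1,-\xi v_2)$. Then

```latex
\[
  \langle\mu_{G,V}(v),\xi\rangle
   = \tfrac{1}{2}\,\omega\bigl((\xi v_1,-\xi v_2),(v_1,v_2)\bigr)
   = \tfrac{1}{2}\bigl(\xi v_1v_2+\xi v_2v_1\bigr)
   = \xi\, v_1v_2,
\]
```

so the moment map identifies with the invariant $v_1v_2$; since the adjoint action of $\C^\times$ is trivial, here $\psi_{G,V}$ is simply $v\mapsto v_1v_2$.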
\begin{Ex}[Model varieties]\label{Ex:2.1.6}
This example generalizes the previous one.
Let $H$ be a reductive subgroup of $G$, $\eta\in \g^H$, $V$ a symplectic $H$-module. Put $U=(\z_\g(\eta)/\h)^*$. Let us
equip the homogeneous vector bundle $X=G*_H(U\oplus V)$ with a
certain closed 2-form. Let $\eta_n,\eta_s$ denote nilpotent and
semisimple parts of $\eta$, respectively. If $\eta_n\neq 0$, choose
an $\sl_2$-triple $(\eta_n,h,f)$ in $\z_\g(\eta_s)^H$ (where $h$ is
semisimple and $f$ is nilpotent). If $\eta_n=0$, we set $h=f=0$. The
$H$-module $U$ can be identified with $\z_\g(f)\cap \h^\perp$. Fix a
point $x=[1,(u,v)]\in X$. The tangent space $T_xX$ is naturally
identified with $\h^\perp\oplus U\oplus V$, where $U\oplus V$ is the
tangent space to the fiber of the projection $G*_H(U\oplus
V)\rightarrow G/H$ and the embedding $\h^\perp\hookrightarrow T_xX$
is given by $\xi\mapsto \xi_x$. Put
\begin{equation*}
\begin{split}
\omega_x(u_1+v_1+\xi_1,&u_2+v_2+\xi_2)=\omega_V(v_1,v_2)+(\xi_1,u_2)-(\xi_2,u_1)+(\eta+u+\mu_{H,V}(v),[\xi_1,\xi_2]),\\
& u_1,u_2\in U, v_1,v_2\in V, \xi_1,\xi_2\in \h^\perp.
\end{split}
\end{equation*}
The corresponding map $\omega: U\oplus V\rightarrow \bigwedge^2
(\h^\perp\oplus U\oplus V)^*$ is $H$-equivariant. Thus $\omega$ can
be extended to a unique $G$-invariant 2-form on $X$, also denoted
by $\omega$. It turns out that $\omega$ is closed and
nondegenerate at every point of the zero section $G/H$
(\cite{slice}, assertion 1 of Proposition 1). If $\eta$ is nilpotent, then $\omega$
is nondegenerate on the whole variety $X$. In the general case the
subset $X_r=\{x\in G*_H(U\oplus V)\mid \omega_x \text{ is
nondegenerate}\}$ is affine. The action $G:X_r$ is Hamiltonian. The moment
map is given by (see \cite{slice}, assertion 3 of Proposition 1)
$$\mu_{G,X_r}([g,(u,v)])=\mathop{\rm Ad}\nolimits(g)(\eta+u+\mu_{H,V}(v)).$$
We denote the Hamiltonian variety $X_r$ by
$M_{G}(H,\eta,V)\index{mghev@$M_G(H,\eta,V)$}$ and call it a {\it
model variety}.
\end{Ex}
\begin{Rem}\label{Rem:2.1.7}
The Hamiltonian structure on $M_G(H,\eta,V)$ depends on the choice
of an $\sl_2$-triple $(\eta_n,h,f)$ in $\z_\g(\eta_s)^H$ (if
$\eta_n\neq 0$). However, Hamiltonian varieties corresponding to
different choices of $h,f$ are isomorphic (see Remark 1
from~\cite{slice}). In the sequel we say that $(\eta_n,h,f)$ is an
$\sl_2$-triple {\it generating} $M_G(H,\eta,V)$.
\end{Rem}
\begin{Rem}\label{Rem:2.1.8}
For $\eta_0\in \g^G$ the Hamiltonian $G$-varieties
$M_G(H,\eta+\eta_0,V), M_G(H,\eta,V)_{\eta_0}$ are naturally
identified. They even coincide as subsets in $G*_H(U\oplus V)$.
\end{Rem}
Now we consider two constructions with Hamiltonian varieties.
\begin{Ex}[Restriction to a subgroup]\label{Ex:2.1.9} Let $H$ be a reductive subgroup
of $G$ and $X$ a Hamiltonian $G$-variety. Then $X$ is a Hamiltonian
$H$-variety with the moment map $\mu_{H,X}=p\circ\mu_{G,X}$. Here
$p$ denotes the restriction map $\g^*\twoheadrightarrow \h^*$.
\end{Ex}
\begin{Ex}[Products]\label{Ex:2.1.10}
Suppose $X_1,X_2$ are Hamiltonian $G$-varieties. Being the product
of symplectic varieties, the variety $X_1\times X_2$ has a natural
symplectic structure. The action $G:X_1\times X_2$ is Hamiltonian.
The moment map is given by the formula $\mu_{G, X_1\times
X_2}(x_1,x_2)=\mu_{G,X_1}(x_1)+\mu_{G,X_2}(x_2)$ for $x_1\in
X_1,x_2\in X_2$.
\end{Ex}
\begin{Rem}\label{Rem:2.1.11}
It follows directly from the construction of a model variety that if
$(H,\eta,V)$ is the same as in Example~\ref{Ex:2.1.6} and $V_0$ is a
trivial symplectic $H$-module, then the Hamiltonian $G$-varieties
$M_G(H,\eta,V\oplus V_0)\cong M_G(H,\eta,V)\times V_0$ are
isomorphic (the action $G:V_0$ is assumed to be trivial).
\end{Rem}
Now we define some important numerical invariants of an irreducible
Hamiltonian $G$-variety $X$. For an action of $G$ on an algebraic
variety $Y$ we denote by $m_G(Y)\index{mgy@$m_G(Y)$}$ the maximal
dimension of a $G$-orbit on $Y$. The number
$m_G(X)-m_G(\overline{\im\mu_{G,X}})$ is called the {\it defect} of
$X$ and is denoted by $\mathop{\rm def}\nolimits_G(X)\index{defgx@$\mathop{\rm def}\nolimits_G(X)$}$. The
number $\dim X-\mathop{\rm def}\nolimits_G(X)-m_G(X)$ is called the {\it corank} of $X$
and is denoted by $\mathop{\rm cork}\nolimits_G(X)\index{corkgx@$\mathop{\rm cork}\nolimits_G(X)$}$.
Equivalently, $\mathop{\rm cork}\nolimits_G(X)=\td \C(X)^G-\mathop{\rm def}\nolimits_G(X)$. An irreducible
Hamiltonian $G$-variety $X$ such that $\mathop{\rm cork}\nolimits_G(X)=0$ is called {\it
coisotropic}.
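As an illustration, take $G=\mathrm{SL}_2$ acting on $X=\C^2$ as in Example~\ref{Ex:2.1.5}. The image of the moment map is the nilpotent cone in $\sl_2$, so $m_G(X)=m_G(\overline{\im\mu_{G,X}})=2$, whence $\mathop{\rm def}\nolimits_G(X)=0$ and $\mathop{\rm cork}\nolimits_G(X)=\dim X-0-2=0$; thus this Hamiltonian variety is coisotropic.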
It follows from the standard properties of the moment map (see, for
example, \cite{GS},\cite{Vinberg}) that the defect and the corank
of $X$ coincide, respectively, with $\dim\ker\omega|_{\g_*x}$,
$\mathop{\rm rk}\nolimits\omega|_{(\g_*x)^\skewperp}$ for a point $x\in X$ in general
position. Further, the following statement holds, see
\cite{alg_hamil}, Proposition 3.1.7.
\begin{Lem}\label{Lem:2.1.12}
$\dim C_{G,X}=\dim\overline{\im\psi_{G,X}}=\mathop{\rm def}\nolimits_{G}(X)$.
\end{Lem}
\begin{defi}\label{defi:3.5} Let $X_1,X_2$ be Hamiltonian
$G$-varieties. A morphism $\varphi:X_1\rightarrow X_2$ is called
{\it Hamiltonian} if it is an \'{e}tale $G$-equivariant
symplectomorphism intertwining the moment maps.
\end{defi}
Note that a Hamiltonian morphism $\varphi:X_1\rightarrow X_2$
induces a unique morphism $\varphi_0:C_{G,X_1}\rightarrow C_{G,X_2}$
such that
$\widetilde{\psi}_{G,X_2}\circ\varphi=\varphi_0\circ\widetilde{\psi}_{G,X_1}$.
\begin{Rem}\label{Rem:2.1.13}
One can similarly define Hamiltonian actions on complex analytic
manifolds. The definitions of the corank and the defect can be
extended to this case without any noticeable modifications.
\end{Rem}
\subsection{Conical Hamiltonian varieties}\label{SUBSECTION_Ham2}
The definition of a conical Hamiltonian variety was given in
Introduction, Definition \ref{defi:2.2.1}.
\begin{Ex}[Cotangent bundles]\label{Ex:2.2.2}
Let $X_0,X$ be as in Example \ref{Ex:2.1.4}. The variety $X$ is a
vector bundle over $X_0$. The action $\C^\times:X$ by fiberwise
multiplication turns $X$ into a conical variety of degree 1.
\end{Ex}
\begin{Ex}[Symplectic vector spaces]\label{Ex:2.2.3}
The symplectic $G$-module $V$ equipped with the action $\C^\times:V$
given by $(t,v)\mapsto tv$ is conical of degree 2.
\end{Ex}
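Indeed, the action $(t,v)\mapsto tv$ extends to a morphism $\C\times V\rightarrow V$, while $\omega(tv,tw)=t^2\omega(v,w)$ and
\begin{equation*}
\langle\mu_{G,V}(tv),\xi\rangle=\frac{1}{2}\omega(t\xi v,tv)=t^2\langle\mu_{G,V}(v),\xi\rangle,
\end{equation*}
so both the symplectic form and the moment map are homogeneous of degree 2.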
\begin{Ex}[Model varieties]\label{Ex:2.2.4}
This example generalizes the previous one. Let $H,\eta,V$ be as in
Example \ref{Ex:2.1.6} and $X=M_G(H,\eta,V)$. Suppose that $\eta$ is
nilpotent. Here we define an action $\C^\times:X$ turning $X$ into a
conical Hamiltonian variety of degree 2. Let $(\eta,h,f)$ be the
$\sl_2$-triple in $\g^H$ generating $X$. As a $G$-variety,
$X=G*_H(U\oplus V)$, where $U=\z_\g(f)\cap \h^\perp$. Note that $h$
is an image of a coroot under an embedding of Lie algebras.
In particular, there
exists a one-parameter subgroup $\gamma:\C^\times\rightarrow G$ with
$\frac{d}{dt}|_{t=1}\gamma(t)=h$. Since $[h,\h]=0, [h,f]=-2f$, we see
that $\gamma(t)(\h^\perp)=\h^\perp,\gamma(t)(U)=U$. Define a
morphism $\C^\times\times X\rightarrow X$ by formula
\begin{equation}\label{eq:2.4}
(t, [g,(u,v)])\mapsto [g\gamma(t),(t^2\gamma(t)^{-1}u,tv)], \quad t\in
\C^\times, g\in G, u\in U, v\in V.
\end{equation}
One checks directly that the morphism (\ref{eq:2.4}) is
well-defined and determines an action of $\C^\times$ on $X$
commuting with the action of $G$. Let us check that $X$ with this
action is a conical Hamiltonian variety. The action of $\C^\times$
on $X\quo G$ coincides with that induced by the action $\C^\times:X$
given by
\begin{equation}\label{eq:2.2:1} (t,[g,(u,v)])\mapsto
[g,(t^2\gamma(t)^{-1}u,tv)].\end{equation} The eigenvalues of $\mathop{\rm ad}\nolimits(h)$
on $\z_\g(f)$ are not positive. Thus the morphism (\ref{eq:2.2:1}) can
be extended to a morphism $\C\times X\rightarrow X$. This yields
(Con1). (Con2) for $k=2$ is verified directly using the construction
of Example~\ref{Ex:2.1.6}.
\end{Ex}
\begin{Rem}\label{Rem:2.2.5}
Let $X$ be as in the previous example. The action $\C^\times:X$
induces a non-negative grading on $\C[X]^G$. In the notation of the
previous example $\C[X]^G\cong \C[U\oplus V]^H$. The grading on
$\C[U\oplus V]^H$ is induced from the following grading on
$\C[U\oplus V]$:
all elements of $V^*\subset \C[U\oplus V]$ have degree 1. The
$H$-module
$U^*$ is naturally identified with $\z_\g(\eta)\cap
\h^\perp$. Put $\g_i=\{\xi\in \g| [h,\xi]=i \xi\}$. All elements of $\z_\g(\eta)\cap\h^\perp\cap\g_i$
have degree $i+2$.\end{Rem}
\begin{Lem}[\cite{alg_hamil}, Lemma 3.3.6]\label{Lem:2.2.6}
Let $X$ be a conical Hamiltonian $G$-variety of degree $k$. Then
\begin{enumerate}
\item $0\in \im\psi_{G,X}$.
\item Assume that $X$ is irreducible and normal. Then the subalgebra $\C[C_{G,X}]\subset
\C[X]^G$ is $\C^\times$-stable. The morphisms
$\widetilde{\psi}_{G,X}:X\rightarrow C_{G,X},
\tau_{G,X}:C_{G,X}\rightarrow \g\quo G$ are $\C^\times$-equivariant,
where the action $\C^\times:\g\quo G$ is induced from the action
$\C^\times:\g$ given by $(t,x)\mapsto t^kx, t\in \C^\times, x\in\g$.
\item Under the assumptions of assertion 2, there is a unique point $\lambda_0\in C_{G,X}$ such that
$\tau_{G,X}(\lambda_0)=0$. For any point $\lambda\in C_{G,X}$ the
limit $\lim_{t\rightarrow 0}t\lambda$ exists and is equal to
$\lambda_0$.
\end{enumerate}
\end{Lem}
\subsection{Local structure of Hamiltonian actions}\label{SUBSECTION_local}
Firstly, we review the algebraic variant of the Guillemin-Sternberg
local cross-section theory, see \cite{Knop2}, Section 5,
\cite{alg_hamil}, Subsection 5.1. Let $L$ be a Levi subgroup of $G$
and $\lfr$ the corresponding Lie algebra. Put
$\lfr^{pr}=\{\xi\in\lfr| \z_\g(\xi_s)\subset \lfr\}$.
\begin{Prop}[\cite{Knop2}, Theorem 5.4 and \cite{alg_hamil},
Corollary 5.1.3, Propositions 5.1.2, 5.1.4, 5.1.7]\label{Prop:1.1}
Let $x\in X,
\lfr=\z_\g(\mu_{G,X}(x)_s),Y=\mu_{G,X}^{-1}(\lfr^{pr})$. Then
\begin{enumerate}
\item
$T_yX=\lfr^\perp_*y\oplus T_yY$
is a skew-orthogonal direct sum for any $y\in Y$. In particular,
$Y$ is a smooth subvariety of $X$ and the restriction of $\omega$ to
$Y$ is nondegenerate. Thus $Y$ is equipped with a symplectic
structure.
\item The action $N_G(L):Y$ is Hamiltonian with the
moment map $\mu_{G,X}|_Y$. \item The natural morphism
$G*_{N_G(L)}Y\rightarrow X$ is \'{e}tale. Its image is saturated.
\item If $x$ is in general position, then the natural morphism $G*_{N_G(L)}Y\rightarrow
X$ is an open embedding and $N_G(L)$ permutes the connected
components of $Y$ transitively.
\end{enumerate}
\end{Prop}
A subset $Z^0$ of a $G$-variety $Z$ is said to be {\it saturated} if
there exist a $G$-invariant morphism $\varphi:Z\rightarrow Z_0$ and
a subset $Z_0^0\subset Z_0$ such that $Z^0=\varphi^{-1}(Z_0^0)$.
\begin{defi}\label{defi:1.2}
An irreducible (=connected) component of $\mu_{G,X}^{-1}(\lfr^{pr})$
equipped with the structure of a Hamiltonian $L$-variety obtained by
restriction of the Hamiltonian structure from
$\mu_{G,X}^{-1}(\lfr^{pr})$ is called an {\it $L$-cross-section} of
$X$.
\end{defi}
\begin{defi}\label{defi:1.3}
The Levi subgroup $L=Z_G(\mu_{G,X}(x)_s)$, where $x\in X$ is in
general position, is said to be the {\it principal centralizer} of
$X$.
\end{defi}
Note that the principal centralizer is determined uniquely up to
$G$-conjugacy.
\begin{Lem}\label{Lem:2.3.5}
Let $L$ be the principal centralizer and $X_L$ an $L$-cross-section
of $X$. Then the following conditions are equivalent:
\begin{enumerate}
\item $m_G(X)=\dim G$.
\item $\mathop{\rm def}\nolimits_G(X)=\mathop{\rm rk}\nolimits G$.
\item $\overline{\im \mu_{G,X}}=\g$.
\item $L$ is a maximal torus in $G$ and $m_L(X_L)=\mathop{\rm def}\nolimits_L(X_L)=\mathop{\rm rk}\nolimits G$.
\item The stabilizer in general position for the action $G:X$ is finite.
\end{enumerate}
Under these conditions, $\mathop{\rm cork}\nolimits_G(X)=\dim X-\dim G-\mathop{\rm rk}\nolimits G$.
\end{Lem}
\begin{proof}
The equivalence of conditions (1)-(4) was proved in \cite{Comb_Ham},
Lemma 4.5. The equality for $\mathop{\rm cork}\nolimits_G(X)$ follows from (1) and (2). It is well-known that (5) is equivalent to
(1).
\end{proof}
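As a basic illustration, take $X=T^*G$ with $G$ acting by left translations (Example~\ref{Ex:2.1.4}). The action is free and $\overline{\im\mu_{G,X}}=\g$, so conditions (1)--(5) hold; here $\mathop{\rm def}\nolimits_G(X)=\dim G-m_G(\g)=\mathop{\rm rk}\nolimits G$ and $\mathop{\rm cork}\nolimits_G(X)=2\dim G-\dim G-\mathop{\rm rk}\nolimits G=\dim G-\mathop{\rm rk}\nolimits G$.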
\begin{Lem}\label{Lem:2.3.11}
Let $L$ be the principal centralizer and $X_L$ an $L$-cross-section
of $X$. Suppose that the stabilizer in general position $L_0$ for
the action $L:X_L$ is reductive and that $0\in
\overline{\im\psi_{G,X}}$. Then
$\overline{\im\mu_{G,X}}=\overline{G(\lfr\cap\lfr_0^\perp)}$.
\end{Lem}
\begin{proof}
From Proposition \ref{Prop:1.1} it follows that
$\overline{\im\mu_{G,X}}=\overline{G\im\mu_{L,X_L}}$. Since $0\in
\overline{\im\psi_{G,X}}$, we see that
$0\in\overline{\im\mu_{L,X_L}}$. By Theorem 4.1.1 from
\cite{alg_hamil}, $(L,L)\subset L_0$. Therefore
$\overline{\im\mu_{L,X_L}}=\lfr\cap\lfr_0^\perp$.
\end{proof}
Now we turn to the problem of describing the structure of an {\it
affine} Hamiltonian $G$-variety in some neighborhood of a point with
closed $G$-orbit. A neighborhood is taken with respect to the
complex topology (in the sequel we call such neighborhoods {\it
analytical}).
At first, we define some invariants of the triple $(G,X,x)$. Put
$H=G_x, \eta=\mu_{G,X}(x)$. The subgroup $H\subset G$ is reductive
and $\eta\in \g^H$. Put $V=(\g_*x)^\skewperp/(\g_*x\cap
\g_*x^\skewperp)$. This is a symplectic $H$-module. We say that
$(H,\eta,V)$ is the {\it determining triple} of $X$ at $x$. For
example, the determining triple of $X=M_G(H,\eta,V)$ at
$x=[1,(0,0)]$ is $(H,\eta,V)$, see \cite{slice}, assertion 4 of
Proposition 1.
As the name suggests, a determining triple should determine the
structure of the Hamiltonian $G$-variety $X$ near $x$. In fact, a
slightly stronger claim holds.
\begin{defi}\label{defi:4.3.1}
Let $X_1,X_2$ be affine Hamiltonian $G$-varieties, $x_1\in X_1,
x_2\in X_2$ be points with closed $G$-orbits. The pairs
$(X_1,x_1),(X_2,x_2)$ are called {\it analytically equivalent}, if
there are saturated open analytical neighborhoods $O_1,O_2$ of
$x_1\in X_1, x_2\in X_2$, respectively, that are isomorphic as
complex-analytical Hamiltonian $G$-manifolds.
\end{defi}
\begin{Rem}\label{Rem:4.3.2}
An open saturated analytical neighborhood in $X$ is the inverse
image of an {\it open} analytical neighborhood in $X\quo G$ under
$\pi_{G,X}$. See, for example, \cite{slice}, Lemma 5.
\end{Rem}
\begin{Prop}[Symplectic slice theorem, \cite{slice}]\label{Prop:4.3.3}
Let $X$ be an affine Hamiltonian $G$-variety, $x\in X$ a point
with closed $G$-orbit, $(H,\eta,V)$ the determining triple of $X$
at $x$. Then the pair $(X,x)$ is analytically equivalent to the pair
$(M_{G}(H,\eta,V), [1,(0,0)])$.
\end{Prop}
Now we prove two lemmas, which will be used in Subsection
\ref{SUBSECTION_affham6}.
We have two approaches to the local study of affine Hamiltonian
varieties: the cross-sections theory and the symplectic slice
theorem. Let us establish a connection between them.
\begin{Lem}\label{Lem:4.3.4} Let $x\in X$ be a point with closed $G$-orbit
and $(H,\eta,V)$ the determining triple of $X$ at $x$. Put
$M=Z_{G}(\eta_s)$. Denote by $X_M$ the unique $M$-cross-section of $X$
containing $x$. Then the following assertions hold:
\begin{enumerate}\item $Mx$ is closed in $X_M$ and $(H,\eta,V)$
is the determining triple of $X_M$ at $x$.
\item There exists an affine saturated open (with respect to the Zariski topology) neighborhood
$X_M^0\subset X_M$ of $x$ such that the following conditions are
satisfied:
\begin{enumerate}
\item the natural morphism $X_M^0\quo M\rightarrow X\quo G,
\pi_{M,X_M}(z)\mapsto \pi_{G,X}(z)$ is \'{e}tale;
\item
for any $z\in X^0_M$ the orbit
$Mz$ is closed in $X_M^0$ (equivalently, in $X_M$) iff $Gz$ is closed in $X$.
\end{enumerate}
\end{enumerate}
\end{Lem}
\begin{proof}
The morphism $\varphi:G*_MX_M\rightarrow X, [g,x]\mapsto gx,$ is
\'{e}tale (assertion 3 of Proposition~\ref{Prop:1.1}). Since $Gx$ is
closed in $X$, we see that $G[1,x]$ is closed in $G*_MX_M$,
equivalently, $Mx$ is closed in $X_M$. Since $G_z\subset
Z_G(\mu_{G,X}(z))\subset Z_G(\mu_{G,X}(z)_s)=M$, we have $G_z=M_z$
for $z\in X_M$. By construction of $\mu_{M,X_M}$,
$\mu_{M,X_M}(z)=\mu_{G,X}(z)$. Assertion 1 will follow if we check
that the $H$-modules $\g_*x^\skewperp/(\g_*x^\skewperp\cap \g_*x)$
and $\m_*x^\skewperp/(\m_*x^\skewperp\cap \m_*x)$ are isomorphic.
Here the skew-orthogonal complement to $\g_*x$ (resp., to $\m_*x$)
is taken in $T_xX$ (resp., in $T_xX_M$). The existence of an
isomorphism stems from $\g_*x=\m^{\perp}_*x\oplus \m_*x$ and
assertion 1 of Proposition~\ref{Prop:1.1}.
By the above, the orbits $G[1,x],Gx$ are closed and isomorphic via
$\varphi$. It follows from Luna's fundamental lemma, \cite{Luna1},
that for some open affine neighborhood $U$ of the point
$\pi_{M,X_M}(x)$ in $X_M\quo M\cong (G*_MX_M)\quo G$ the morphism
$\varphi\quo G:U\rightarrow X\quo G$ is \'{e}tale and
\begin{equation}\label{eq:4.3:1}\pi_{G,G*_MX_M}^{-1}(U)\cong
U\times_{X\quo G}X.\end{equation}
Clearly, $\pi_{G,G*_MX_M}^{-1}(U)\cong G*_M\pi_{M,X_M}^{-1}(U)$.
Thanks to (\ref{eq:4.3:1}), we see that for all $z\in
X_M^0:=\pi^{-1}_{M,X_M}(U)$ the orbit $G[1,z]$ is closed in $
G*_M\pi_{M,X_M}^{-1}(U)$ iff $Gz$ is closed in $X$.
\end{proof}
The next lemma studies the behavior of determining triples under
replacing $G$ with some connected subgroup $G^1\subset G$ containing
$(G,G)$.
\begin{Lem}\label{Lem:4.3.5}
Let $x\in X$ be a point with closed $G$-orbit and $(H,\eta,V)$ the
determining triple of $X$ at $x$. Then $G^1x$ is closed in $X$ and
the determining triple of the Hamiltonian $G^1$-variety $X$ at $x$
has the form $(H\cap G^1, \eta_0,V\oplus V_0)$, where $V_0$ is a
trivial $H\cap G^1$-module and $\eta_0$ is the projection of $\eta$
to $\g^1$.
\end{Lem}
\begin{proof}
Since $G^1$ is a normal subgroup of $G$, all $G^1$-orbits in $Gx$
have the same dimension and are therefore closed.
Obviously, $G^1_x=G^1\cap H, \mu_{G^1,X}(x)=\eta_0$. Clearly,
$\g^1_*x\subset \g_*x$ and $\g_*x^\skewperp\subset
\g^1_*x^\skewperp$. Therefore we have a natural embedding
$\g_*x^\skewperp/(\g_*x^\skewperp\cap \g^1_*x)\hookrightarrow
\g^1_*x^\skewperp/(\g^1_*x^\skewperp\cap \g^1_*x)$ and a natural
projection $\g_*x^\skewperp/(\g_*x^\skewperp\cap
\g^1_*x)\twoheadrightarrow \g_*x^\skewperp/(\g_*x^\skewperp\cap
\g_*x)$. The cokernel of the former is a quotient of the $H\cap
G^1$-module $\g^1_*x^\skewperp/\g_*x^\skewperp\cong
(\g_*x/\g^1_*x)^*$, while the kernel of the latter is a submodule in
$\g_*x/\g^1_*x$. Since $\g_*x/\g^1_*x$ is a trivial $H\cap
G^1$-module, we are done.
\end{proof}
\subsection{Some results concerning
$\widetilde{\psi}_{G,X},C_{G,X}$}\label{SUBSECTION_recall} Let us,
at first, define two important invariants of a Hamiltonian variety:
its Cartan space and Weyl group. The proofs of the facts below
concerning these invariants can be found in \cite{alg_hamil},
Subsection 5.2.
Let $L$ be the principal centralizer and $X_L$ an $L$-cross-section
of a Hamiltonian $G$-variety $X$. It turns out that
$\overline{\im\mu_{Z(L)^\circ,X_L}}$ is an affine subspace in
$\z(\lfr)$. We denote this affine subspace by $\a_{G,X}^{(X_L)}$
and call it the {\it Cartan space} of $X$. It intersects the Lie
algebra of the inefficiency kernel for the action $Z(L)^\circ:X_L$
in exactly one point (by the inefficiency kernel of a group action
$\Gamma:Y$ we mean the kernel of the corresponding homomorphism
$\Gamma\rightarrow \Aut(Y)$). Taking this point as the origin in
$\a_{G,X}^{(X_L)}$ we may (and will) consider $\a_{G,X}^{(X_L)}$ as
a vector space.
The group $N_G(L,X_L)$ acts linearly on $\a_{G,X}^{(X_L)}$. We
denote the image of $N_G(L,X_L)$ in $\GL(\a_{G,X}^{(X_L)})$ by
$W_{G,X}^{(X_L)}$ and call it the {\it Weyl group} of $X$. If $G$ is
connected, then $W_{G,X}^{(X_L)}$ is naturally identified with
$N_G(L,X_L)/L$.
Note that, in a suitable sense, the pair
$(\a_{G,X}^{(X_L)},W_{G,X}^{(X_L)})$ does not depend, up to
$G$-conjugacy, on the choice of $L,X_L$. When a particular choice
of $L,X_L$ does not matter, we write $\a^{(\cdot)}_{G,X}$ for
$\a_{G,X}^{(X_L)}$ and $W^{(\cdot)}_{G,X}$ for $W_{G,X}^{(X_L)}$.
Note that $\im\psi_{L,X_L}\subset\a_{G,X}^{(X_L)}\hookrightarrow
\lfr\quo L$. There is a unique $G$-invariant morphism
$\widehat{\psi}_{G,X}:X\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ coinciding with
$\psi_{N_G(L,X_L),X_L}$ on $X_L$. The morphism
$\psi_{G,X}:X\rightarrow \g\quo G$ is the composition of
$\widehat{\psi}_{G,X}$ and the finite morphism
$\tau_{G,X}^1:\a_{G,X}^{(\cdot)}/W^{(\cdot)}_{G,X}\rightarrow \g\quo G$ induced
by the embedding $\a_{G,X}^{(X_L)}\hookrightarrow \g$. So
$\widehat{\psi}_{G,X}$ factors through $\widetilde{\psi}_{G,X}$ and
the respective morphism $\tau_{G,X}^2:C_{G,X}\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is finite and dominant.
\begin{Lem}\label{Lem:2.51}
Assume, in addition, that $X$ is conical of degree $k$. Then
$\a_{G,X}^{(X_L)}$ is a vector subspace of $\g$ so one can equip
$\a_{G,X}^{(X_L)}$ with the action of $\C^\times$ given by
$(t,\xi)\mapsto t^k\xi$. Let us equip
$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ with the induced action. Then
the morphisms $\tau_{G,X}^1,\tau_{G,X}^2$ are
$\C^\times$-equivariant.
\end{Lem}
\begin{proof}
Note that $X_L$ is $\C^\times$-stable and the morphism $\mu_{L,X_L}:X_L\rightarrow \lfr$ is $\C^\times$-equivariant
(here $\C^\times$ acts on $\lfr$ by $(t,\xi)\mapsto t^k\xi$). Now everything follows
directly from the definitions of $\a_{G,X}^{(X_L)}$ and the morphisms
$\tau_{G,X}^1,\tau_{G,X}^2$.
\end{proof}
Now we want to describe the behavior of $\widehat{\psi}_{G,X}$ under
some simple modifications of the pair $(G,X)$. To do this we need to
recall some results obtained in \cite{Comb_Ham}. The proofs of
these results are mostly straightforward.
Let $X,L,X_L$ be as above. Let $M$ be a Levi subgroup of $G$
containing $L$, let $G^1$ be a connected subgroup of $G$ containing
$(G,G)$, put $L^1:=G^1\cap L$, and let $G_1,\ldots,G_k$ be all the
simple normal subgroups of $G$, so that $G=Z(G)^\circ G_1\ldots G_k$
is the decomposition into
the locally direct product. Finally, let $X'$ be another affine
irreducible Hamiltonian $G$-variety and $\varphi:X\rightarrow X'$ a
generically finite dominant $G$-equivariant morphism such that
$\mu_{G,X'}\circ\varphi=\mu_{G,X}$.
By Lemma 6.9 from \cite{Comb_Ham},
$\a_{G,X}^{(X_L)}=\a_{G^\circ,X}^{(X_L)}$ and $W_{G^\circ,X}^{(X_L)}$ is
a normal subgroup of $W_{G,X}^{(X_L)}$.
Suppose $G$ is connected. Recall, \cite{Comb_Ham}, Lemmas 4.6,6.10,
that there exists a unique $M$-cross-section $X_M$ of $X$ containing
$X_L$ and $\a_{M,X_M}^{(X_L)}=\a_{G,X}^{(X_L)}, W_{M,X_M}^{(X_L)}=
W_{G,X}^{(X_L)}\cap M/L$.
By Lemma 4.6 from
\cite{Comb_Ham}, $L$ is the principal centralizer of $X'$ and there
exists a unique $L$-cross-section $X_L'$ of $X'$ such that
$\varphi(X_L)\subset X_L'$. Further, by Lemma 6.11 from
\cite{Comb_Ham}, $\a_{G,X}^{(X_L)}=\a_{G,X'}^{(X_L')}$,
$W_{G,X}^{(X_L)}\subset W_{G,X'}^{(X_L')}$.
Suppose, in addition, that $0\in \overline{\im\psi_{G,X}}$. Recall,
\cite{Comb_Ham}, Lemma 4.6, that $L^1$ is the principal centralizer
and $X_L$ is an $L^1$-cross-section of the Hamiltonian $G^1$-variety
$X$. Further, by \cite{Comb_Ham}, Lemma 6.13,
$\a_{G,X}^{(X_L)}\cap\g^1\subset \a_{G^1,X}^{(X_L)}$, the groups
$W_{G^1,X}^{(X_L)},W_{G,X}^{(X_L)}$ are naturally identified, and
the orthogonal projection $\g\twoheadrightarrow \g^1$ induces the
$W_{G,X}^{(X_L)}$-equivariant epimorphism
$\a_{G,X}^{(X_L)}\twoheadrightarrow \a_{G^1,X}^{(X_L)}$.
Finally, suppose $X$ satisfies the equivalent conditions of Lemma
\ref{Lem:2.3.5}. Put $T=L, T_i=L\cap G_i$. Recall, \cite{Comb_Ham},
Lemma 4.6, that $T_i$ is the principal centralizer of the
Hamiltonian $G_i$-variety $X$ and there is a unique
$T_i$-cross-section $X_{T_i}$ of $X$ containing $(\prod_{j\neq
i}G_j)X_T$. Further, Lemma 6.14 from \cite{Comb_Ham} implies that
$\a_{G_i,X}^{(X_{T_i})}=\t_i$, $W_{G,X}^{(X_T)}\subset \prod_{i=1}^k
W_{G_i,X}^{(X_{T_i})}$ and the projection of $W_{G,X}^{(X_T)}$ to
$\GL(\t_i)$ coincides with $W_{G_i,X}^{(X_{T_i})}$.
\begin{Lem}\label{Lem:1.9}
Let $G,X,X_L,M,G^1,L^1,G_1,\ldots,G_k,
X',\varphi,X_M,X_L',T,T_i,X_{T_i}$ be as above.
\begin{enumerate}
\item $\widehat{\psi}_{G,X}$ is the composition of
$\widehat{\psi}_{G^\circ,X}$ and the natural morphism of quotients
$\a^{(X_L)}_{G^\circ,X}/W^{(X_L)}_{G^\circ,X}\rightarrow
\a_{G,X}^{(X_L)}/W_{G,X}^{(X_L)}$ induced by the inclusion
$W_{G^\circ,X}^{(X_L)}\subset W_{G,X}^{(X_L)}$.
\item Suppose $G$ is connected. Then the following diagram is
commutative.
\begin{picture}(80,30)
\put(2,2){$X$}\put(2,22){$X_M$}\put(27,2){$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$}
\put(27,22){$\a_{M,X_M}^{(\cdot)}/W_{M,X_M}^{(\cdot)}$}
\put(65,2){$\g\quo G$}\put(65,22){$\m\quo M$}
\put(4,20){\vector(0,-1){14}} \put(34,20){\vector(0,-1){14}}
\put(68,20){\vector(0,-1){14}} \put(6,4){\vector(1,0){20}}
\put(6,24){\vector(1,0){20}} \put(45,4){\vector(1,0){19}}
\put(48,24){\vector(1,0){16}} \put(12,5){\tiny
$\widehat{\psi}_{G,X}$} \put(12,25){\tiny $\widehat{\psi}_{M,X_M}$}
\put(52,6){\tiny $\tau^1_{G,X}$} \put(52,26){\tiny $\tau^1_{M,X_M}$}
\end{picture}
Here the morphism $X_M\rightarrow X$ is the inclusion, the morphism
$\a_{M,X_M}^{(\cdot)}/W_{M,X_M}^{(\cdot)}\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is given by $
W_{M,X_M}^{(X_L)}\xi\mapsto W_{G,X}^{(X_L)}\xi$, and the morphism
$\m\quo M\rightarrow \g\quo G$ is induced by the restriction of
functions from $\g$ to $\m$.
\item The following
diagram is commutative.
\begin{picture}(60,30)
\put(2,2){$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$}\put(9,22){$X$}
\put(42,2){$\a_{G,X'}^{(\cdot)}/W_{G,X'}^{(\cdot)}$}\put(49,22){$X'$}
\put(11,20){\vector(0,-1){13}}\put(51,20){\vector(0,-1){13}}
\put(25,5){\vector(1,0){15}} \put(15,24){\vector(1,0){32}}
\end{picture}
\item Suppose $G$ is connected and $0\in \overline{\im\psi_{G,X}}$.
Then the
following
diagram is commutative.
\begin{picture}(50,30)
\put(30,20){$X$} \put(5,2){$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$}
\put(42,2){$\a_{G^1,X}^{(\cdot)}/W_{G^1,X}^{(\cdot)}$}
\put(25,4){\vector(1,0){15}}\put(29,19){\vector(-1,-1){12}}
\put(33,19){\vector(1,-1){12}}
\end{picture}
\item Suppose $G$ is connected and $X$ satisfies the equivalent
conditions of Lemma \ref{Lem:2.3.5}. Then the following diagram,
where the map $\a_{G,X}^{(\cdot)}/W^{(\cdot)}_{G,X}\rightarrow
\a_{G_i,X}^{(\cdot)}/W_{G_i,X}^{(\cdot)}$ is induced by the natural
epimorphism $\g\rightarrow \g_i$, is commutative.
\begin{picture}(50,30)
\put(30,20){$X$} \put(5,2){$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$}
\put(42,2){$\a_{G_i,X}^{(\cdot)}/W_{G_i,X}^{(\cdot)}$}
\put(25,4){\vector(1,0){15}}\put(29,19){\vector(-1,-1){12}}
\put(33,19){\vector(1,-1){12}}
\end{picture}
\end{enumerate}
\end{Lem}
\begin{proof}
The proofs of assertions 1,3,4 follow directly from the definition
of $\widehat{\psi}_{\bullet,\bullet}$.
Let us prove assertion 2. The commutativity of the right square of
the diagram follows directly from the definition of
$\tau^1_{\bullet,\bullet}$. To prove the commutativity of the left
square we note that both morphisms $X_M\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ from the diagram are
$M$-invariant and their restrictions to $X_L$ coincide with
$\widehat{\psi}_{N_G(L,X_L),X_L}$. To complete the proof it remains
to recall that $MX_L$ is dense in $X_M$.
We proceed to assertion 5. The morphism
$\widehat{\psi}_{G_i,X}|_{X_{T_i}}$ is $Z(G)^\circ\prod_{j\neq
i}G_j$-invariant. It follows that $\widehat{\psi}_{G_i,X}$ is
$G$-invariant. It remains to note that both
morphisms $X\rightarrow \a_{G_i,X}^{(\cdot)}/W_{G_i,X}^{(\cdot)}$
coincide on $X_T$.
\end{proof}
Now we are going to quote some properties of
$C_{G,X},\widetilde{\psi}_{G,X},\widehat{\psi}_{G,X}$ proved in
\cite{alg_hamil}.
\begin{Prop}\label{Lem:4.4.1}
The morphism $\widehat{\psi}_{G,X}\quo G:X\quo G\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is equidimensional and open.
Further, for any closed subvariety $Y\subset
\im\widehat{\psi}_{G,X}$ and any irreducible component $Z$ of
$(\widehat{\psi}_{G,X}\quo G)^{-1}(Y)$ the subset
$(\widehat{\psi}_{G,X}\quo G)(Z)$ is dense in $Y$.
\end{Prop}
\begin{proof}
Note that $\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is a normal variety
of dimension $\mathop{\rm def}\nolimits_G(X)$. Thanks to Theorem 1.2.3 from
\cite{alg_hamil}, $\widehat{\psi}_{G,X}\quo G$ is equidimensional.
The openness stems from \cite{Chevalley}, Proposition 3 in Section
5.5. The last assertion of the proposition is an easy corollary of
the fact that $\widehat{\psi}_{G,X}\quo G$ is equidimensional.
\end{proof}
\begin{Prop}[\cite{alg_hamil}, Theorem 1.2.7]\label{Thm:2.2}
Suppose $X$ is conical. Then $C_{G,X}\cong \a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ and
$\widetilde{\psi}_{G,X}=\widehat{\psi}_{G,X}$. Further, the algebra $\C[C_{G,X}]$
coincides with the intersection of $\C[X]$ and the Poisson center of $\C(X)^G$.
\end{Prop}
\section{Dimensions of fibers}\label{SECTION_dimension}
Throughout the section $G$ is a connected reductive group and $X$ is
a Hamiltonian $G$-variety with symplectic form $\omega$.
In Subsection \ref{SUBSECTION_affham2} we prove a variant of the
Luna-Richardson restriction theorem (\cite{LR}) for Hamiltonian
varieties. This allows us to reduce a general affine Hamiltonian
$G$-variety to one satisfying the equivalent conditions of
Lemma~\ref{Lem:2.3.5}.
Subsection~\ref{SUBSECTION_affham4} deals with a stratification of
fibers of the morphism $\psi_{G,X}\quo G:X\quo G\rightarrow \g\quo
G$. A stratum consists of the images of all points with closed
$G$-orbit and the same determining triple. The main results of the
subsection are the proof that any stratum is smooth and the formula
for the dimensions of the strata (Proposition~\ref{Prop:4.4.2}).
The main part of this section is Subsection
\ref{SUBSECTION_affham5}. There we prove the following result that
strengthens Theorem \ref{Thm:1}.
\begin{Thm}\label{Thm:4.0.1}
The morphisms $\psi_{G,X},\widetilde{\psi}_{G,X},$
$\widehat{\psi}_{G,X}$ are equidimensional. The morphisms
$\widehat{\psi}_{G,X},\widetilde{\psi}_{G,X}$ are open. For any
closed irreducible subvariety $Y\subset \im\widehat{\psi}_{G,X}$
and any irreducible component $\widetilde{Y}\subset
\widehat{\psi}_{G,X}^{-1}(Y)$ the subvariety
$\pi_{G,X}(\widetilde{Y})\subset X\quo G$ is an irreducible
component of $(\widehat{\psi}_{G,X}\quo G)^{-1}(Y)$.
\end{Thm}
The proof uses the stratification introduced in
Subsection~\ref{SUBSECTION_affham4} and the estimate on dimensions
of fibers of $\pi_{G,X}$ obtained in Proposition~\ref{Prop:4.5.1}.
\subsection{A Hamiltonian version of the Luna-Richardson theorem}\label{SUBSECTION_affham2}
Let $H$ be a reductive subgroup of $G$. The subvariety $X^H\subset
X$ is smooth (see \cite{VP}, Subsection 6.5) and $N_G(H)$-stable.
Let us equip $X^H$ with a structure of a Hamiltonian
$N_G(H)$-variety.
\begin{Prop}\label{Lem:4.2.1}
\begin{enumerate}
\item $\omega|_{X^H}$ is nondegenerate, thus
$X^H$ is equipped with the symplectic structure.
\item The action
$N_G(H):X^H$ is Hamiltonian with the moment map
$\mu_{N_G(H),X^H}=\mu_{G,X}|_{X^H}$.
\end{enumerate}
\end{Prop}
\begin{proof}
For a symplectic vector space $V$ and a reductive subgroup
$H\subset\Sp(V)$ the $H$-modules $V^{H}$ and $V/(V^H)^{\skewperp}$
are isomorphic. Thus $\omega|_{V^H}$ is nondegenerate. Since
$T_x(X^H)=(T_xX)^H$, see \cite{VP}, Subsection 6.5, we see that
$\omega|_{X^H}$ is nondegenerate.
Note that the Lie algebra of $N_G(H)$ coincides with $\g^H+\h$.
Since $\mu_{G,X}$ is $G$-equivariant, we have $\mu_{G,X}(X^H)\subset
\g^H$. Clearly, $\mu_{G,X}|_{X^H}$ is $N_G(H)$-equivariant. It
remains to check that
\begin{equation}\label{eq:4.2:1}v(H_{\xi}|_{X^H})_x=\xi_x\end{equation} for all $\xi\in
\g^H+\h,x\in X^H$. Obviously, $v(H_\xi)_x=\xi_x=0$ for all
$\xi\in\h,x\in X^H$. Thus (\ref{eq:4.2:1}) holds for $\xi\in\h$. Now
let $\xi\in \g^H$. Then $H_\xi\in\C[X]^H$, and $v(H_\xi)_x$ is an
$H$-invariant vector for $x\in X^H$. It follows from the
construction of the symplectic form on $X^H$ that
$v(H_\xi)_x=v(H_\xi|_{X^H})_x$.
\end{proof}
Now we will apply the previous construction to a special choice of
$H$.
Let $L$ be the principal
centralizer of $X$ and $X_L$ an $L$-cross-section. By Corollary
4.2.3 from~\cite{alg_hamil}, the restriction of
$\pi_{(L,L),X_L}:X_L\rightarrow X_L\quo (L,L)$ to
$X_L^{(L,L)}\subset X_L$ is an isomorphism. Denote by $L_0$ the unit
component of the inefficiency kernel of the action $L:X_L\quo
(L,L)\cong X_L^{(L,L)}$.
It follows from Theorem 4.2.1, \cite{alg_hamil}, that
$L_0=(L,L)T_0$, where $T_0$ is the unit component of the
inefficiency kernel for the action $Z(L):X_L$. Let $X_0$ be the unique connected component of $X^{L_0}$ containing $X_L^{(L,L)}$.
Put $\widetilde{G}_0=N_G(L_0,X_0)$ (the stabilizer of $X_0$ under
the action of $N_G(L_0)$), $ G_0=\widetilde{G}_0/L_0$. We identify
$\g_0$ with $\g^{L_0}\cap\lfr_0^\perp$. It follows from
Proposition~\ref{Lem:4.2.1} that the action $\widetilde{G}_0:X_0$ is
Hamiltonian with moment map $\mu_{G,X}|_{X_0}$. By Remark 3.1.2
from \cite{alg_hamil}, the action $G_0:X_0$ is Hamiltonian with the
moment map $\mu_{G_0,X_0}:=p\circ\mu_{\widetilde{G}_0,X_0}$, where
$p$ denotes the natural projection
$\widetilde{\g}_0\rightarrow\g_0$.
The following proposition is what we mean by a ``Hamiltonian version
of the Luna-Richardson theorem''.
\begin{Prop}\label{Prop:4.2.2}
In the notation introduced above, the following statements hold.
\begin{enumerate}
\item
The morphism $X_0\quo G_0\rightarrow X\quo G$ induced by the
restriction of functions is an isomorphism. \item
$m_{G_0}(X_0)=\dim G_0$,
$\mathop{\rm def}\nolimits_{G_0}(X_0)=\mathop{\rm def}\nolimits_G(X),\mathop{\rm cork}\nolimits_G(X)=\mathop{\rm cork}\nolimits_{G_0}(X_0)$.
\item $L/L_0$ is the principal centralizer of $X_0$. The subvariety $X_L^{L_0}$
is dense in the unique
$L/L_0$-cross-section $X_{0L}$ of $X_0$,
$\a_{G_0,X_0}^{(X_{L})}=\a_{G,X}^{(X_{0L})}-\xi_0$, where $\xi_0\in
\lfr_0\cap\a_{G,X}^{(X_{L})},$ and
$W_{G_0,X_0}^{(X_{0L})}=W_{G,X}^{(X_{L})}$.
\item $\widehat{\psi}_{G,X}|_{X_0}=\widehat{\psi}_{G_0,X_0}$.
\end{enumerate} \end{Prop}
In the proof we will use some notions of the theory of algebraic
transformation groups. Let $Y$ be an irreducible affine variety
acted on by a reductive group $H$. It is known, see \cite{VP},
Theorem 7.12, that there exists an open subset $Y_0\subset Y\quo H$
such that for any $y\in Y_0$ the closed orbit in $\pi_{H,Y}^{-1}(y)$
is isomorphic to $H/C$, where $C$ is a reductive subgroup of $H$.
\begin{defi}\label{defi:4.2.3} Such a subgroup $C$ (determined uniquely up to $H$-conjugacy)
is called the {\it principal isotropy subgroup} for the action
$H:Y$.
\end{defi}
The action $H:Y$ is called {\it stable} if its general orbit is
closed and {\it locally free} if $m_H(Y)=\dim H$.
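For instance, for the action of $H=\C^\times$ on $Y=\C^2$ given by
$t.(x,y)=(tx,t^{-1}y)$ one has $\C[Y]^H=\C[xy]$; a general fiber
$\{xy=c\}$, $c\neq 0$, of $\pi_{H,Y}$ is a single closed orbit with
trivial stabilizer. So this action is stable and locally free, and
its principal isotropy subgroup is trivial.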
\begin{proof}[Proof of Proposition \ref{Prop:4.2.2}]
The action $Z(L)^\circ:X_L^{(L,L)}\cong X_L\quo (L,L)$ is stable
(\cite{alg_hamil}, Proposition 4.5.1). Thus $L_0$ is the unit
component of the principal isotropy subgroup for the action $L:X_L$.
Since the natural morphism $G*_LX_L\rightarrow X$ is \'{e}tale and
its image is saturated, we see that the group $L_0$ is the unit
component of the principal isotropy subgroup for the action $G:X$
and that the morphism $X_0\quo G_0\rightarrow X\quo G$ is dominant.
By the Luna-Richardson theorem (\cite{LR}), the morphism $X_0\quo
G_0\rightarrow X\quo G$ is an isomorphism and the action of $G_0$ on
$X_0$ is locally free.
The latter yields $\mathop{\rm def}\nolimits_{G_0}(X_0)=\mathop{\rm rk}\nolimits
G_0=\mathop{\rm rk}\nolimits G-\mathop{\rm rk}\nolimits L_0=\mathop{\rm def}\nolimits_G(X)$. By Theorem 1.2.9 from
\cite{alg_hamil}, $\C(X)^G=\Quot(\C[X]^G),
\C(X_0)^{G_0}=\Quot(\C[X_0]^{G_0})$. So
$$\mathop{\rm cork}\nolimits_G(X)=\td \C(X)^G-\mathop{\rm def}\nolimits_G(X)=\td \C(X_0)^{G_0}-\mathop{\rm def}\nolimits_{G_0}(X_0)=\mathop{\rm cork}\nolimits_{G_0}(X_0).$$
We proceed to assertion 3. Since $m_{G_0}(X_0)=\dim G_0$, the
maximal torus $L/L_0\subset G_0$ is the principal centralizer of
$X_0$ (see Lemma \ref{Lem:2.3.5}) and
$\a_{G_0,X_0}^{(X_{0L})}=\lfr\cap\lfr_0^{\perp}=\a_{G,X}^{(X_L)}-\xi_0$
for any $L/L_0$-cross-section $X_{0L}$ of $X_0$. The natural
morphism $X_L\quo L\rightarrow X\quo G$ is dominant and quasifinite,
therefore so is the natural morphism $(X_L^{L_0})\quo
(L/L_0)\rightarrow X_0\quo G_0$. It follows from \cite{alg_hamil},
Theorem 1.2.9, that the actions $L/L_0:X_L^{L_0},G_0:X_0$ are
stable. It follows that $\dim X_L^{L_0}=\dim (X_L^{L_0})\quo
(L/L_0)+\dim L/L_0=\dim X_0\quo G_0+\dim L/L_0=\dim X_0-\dim
G_0+\dim L/L_0$. Since
$\mu_{G,X}(X_L^{L_0})\subset\mu_{G,X}(X_L)\cap \g_0\subset \lfr^{pr}\cap\g_0
\subset (\lfr/\lfr_0)^{pr}$ (the last subset is taken w.r.t. the Lie algebra
$\g_0$), we see that $X_L^{L_0}$ lies in the unique
$L/L_0$-cross-section $X_{0L}$ of $X_0$. Comparing the dimensions,
we see that $X_L^{L_0}$ is dense in $X_{0L}$. The equality for the
Weyl groups stems from $N_G(L,X_L)/L_0\subset G_0,
N_G(L,X_L^{L_0})=N_G(L,X_L)$.
Finally, both morphisms in assertion 4 are $G_0$-invariant and
their restrictions to $X_L^{L_0}$ are equal to the restriction of
$\widehat{\psi}_{N_G(L,X_L),X_L}$.
\end{proof}
\subsection{A stratification of a fiber of $\psi_{G,X}\quo G$}\label{SUBSECTION_affham4}
In this subsection we introduce a stratification of the fibers of the
morphism $\psi_{G,X}\quo G:X\quo G\rightarrow \g\quo G$ (we consider
these fibers as algebraic varieties). Namely, let
$\eta\in \g$, $H$ be a reductive subgroup of $G_\eta$ and $V$ a
symplectic $H$-module. We put
\begin{equation*}S_{G,X}(H,\eta,V)\index{sgxfev@$S_{G,X}(H,\eta,V)$}=\{\pi_{G,X}(x)| Gx \text{ is
closed},(H,\eta,V) \text{ is the determining triple of }X\text{ at
}x\}.\end{equation*} Clearly,
$S_{G,X}(H_1,\eta_1,V_1)=S_{G,X}(H_2,\eta_2,V_2)$ iff there exist
$g\in G$ and a linear isomorphism $\iota:V_1\rightarrow V_2$ such
that $\mathop{\rm Ad}\nolimits(g)\eta_1=\eta_2$, $gH_1g^{-1}=H_2$ and
$(ghg^{-1})\iota(v)=\iota(hv)$ for all $h\in H_1$ and $v\in V_1$.
The main result of this subsection is the following
\begin{Prop}\label{Prop:4.4.2}
Let $X,G, H,\eta,V$ be as above, $\lambda=\pi_{G,\g}(\eta)$. Then
$S_{G,X}(H,\eta,V)$ is a locally-closed smooth subvariety of pure
codimension $\mathop{\rm cork}\nolimits_G(X)-\dim V^H$ in $(\psi_{G,X}\quo
G)^{-1}(\lambda)$.
\end{Prop}
\begin{proof}
Firstly, we show that $S_{G,X}(H,\eta,V)$ is a locally-closed
subvariety of $X\quo G$. Denote by $Y$ the set of all points $x\in
X$ such that $Gx$ is closed, $G_x=H$, and $T_xX/\g_*x\cong V\oplus
(\g_\eta/\h)^*$. It follows from the Luna slice theorem applied to
any point of $Y$ that $Y$ is a locally-closed subvariety in $X$.
Therefore $Y_\eta=Y\cap \mu_{G,X}^{-1}(\mathop{\rm Ad}\nolimits(G)\eta)$ is a
locally-closed subvariety of $X$. Since all orbits in $Y_\eta$ are closed in
$X$, we see that $Y_{\eta}$ is an open saturated subvariety of
$\overline{Y_\eta}$. Thus $S_{G,X}(H,\eta,V)=\pi_{G,X}(Y_\eta)$ is
open in $\overline{Y_\eta}\quo G$.
Applying Proposition~\ref{Prop:4.3.3}, we reduce the codimension and
smoothness claims to the case $X=M_{G}(H,\eta,V)$. Put
$\s=\z_\g(\eta_s)$. Choose an $\sl_2$-triple $(\eta_n,h,f)$ in
$\s^H$ generating $M_G(H,\eta,V)$. Denote by $U$ the $H$-module
$\z_\s(f)\cap\h^\perp$.
\begin{Lem}\label{Lem:4.4.3}
In the above notation $\eta$ is an isolated point of
$(\eta+\z_\s(f))\cap \overline{\mathop{\rm Ad}\nolimits(G)\eta}$.
\end{Lem}
\begin{proof}[Proof of Lemma~\ref{Lem:4.4.3}]
Note that $T_\eta(\eta+\z_\s(f))=\z_\s(f),
T_\eta\overline{\mathop{\rm Ad}\nolimits(G)\eta}=[\g,\eta]$. It is enough to show
$\z_\s(f)\cap [\g,\eta]=\{0\}$. The equality $\s=\z_\g(\eta_s)$
yields $[\g,\eta]=[\s^{\perp},\eta]+ [\s,\eta]=\s^{\perp}\oplus
[\s,\eta_n]$. Thanks to the representation theory of $\sl_2$ (any
finite dimensional module $M$ over the triple $(\eta_n,h,f)$
decomposes as $M=\eta_n M\oplus\ker_M f$; apply this to the adjoint
action on $\s$), we have $\s=[\s,\eta_n]\oplus\z_\s(f)$, whence the
required equality.
\end{proof}
By virtue of Remark~\ref{Rem:2.1.11}, we may assume that
$V^H=\{0\}$. Put $x:=[1,(0,0)]$. Everything will follow if we check
that $\pi_{G,X}(x)$ is an isolated point in $S_{G,X}(H,\eta,V)$.
Indeed, by Proposition~\ref{Lem:4.4.1}, $\mathop{\rm cork}\nolimits_G(X)=\dim X\quo
G-\mathop{\rm def}\nolimits_G(X)=\dim_{\pi_{G,X}(x)}(\psi_{G,X}\quo G)^{-1}(\lambda)$.
There exists a neighborhood
$O'$ of $\eta$ in $\eta+\z_\s(f)$ such that $O'\cap
\overline{\mathop{\rm Ad}\nolimits(G)\eta}=\{\eta\}$. Replacing $O'$ with $HO'$, if
necessary, we may assume that $O'$ is $H$-stable. Set
$O:=\{[g,(u,v)]\in M_G(H,\eta,V)| \eta+u+\mu_{H,V}(v)\in O'\}$. By
definition, $O$ is an open $G$-subvariety of $X$ containing $x$. It
is enough to show that any point $x_1\in O$ with closed $G$-orbit
and the determining triple
$(H,\eta,V)$ is $G$-conjugate to $x$. Assume the
converse. Put $x_1=[g,(u,v)]$, $u\in U, v\in V, (u,v)\neq 0$. Recall
that $\mu_{G,X}(x_1)=\mathop{\rm Ad}\nolimits(g)(\eta+u+\mu_{H,V}(v))$. Since
$\mu_{G,X}(x_1)=\eta$, Lemma~\ref{Lem:4.4.3} implies that
$u+\mu_{H,V}(v)=0$. Since $U\cap \h=\{0\}$, we have $u=0$. The
subgroup $H_v\subset H$ is conjugate to $H$ in $G$. Thus $v\in
V^H=\{0\}$. Contradiction.
\end{proof}
\subsection{The proof of Theorem~\ref{Thm:4.0.1}}\label{SUBSECTION_affham5}
At first, we obtain an estimate for the dimension of a fiber of
$\pi_{G,X}$.
\begin{Prop}\label{Prop:4.5.1}
The dimension of any fiber of $\pi_{G,X}:X\rightarrow X\quo G$ does
not exceed $\dim X-\mathop{\rm def}\nolimits_G(X)-\frac{\mathop{\rm cork}\nolimits_G(X)}{2}$.
\end{Prop}
\begin{proof}
The proof is carried out in two steps. Firstly, we consider the case
when $X$ satisfies the equivalent conditions of
Lemma~\ref{Lem:2.3.5} and then deduce the general case from this
one.
{\it Step 1. } Suppose $X$ satisfies the equivalent conditions of
Lemma~\ref{Lem:2.3.5}. Then
$$\mathop{\rm def}\nolimits_G(X)+\frac{\mathop{\rm cork}\nolimits_G(X)}{2}=\frac{\dim
X-\dim G+\mathop{\rm rk}\nolimits G}{2}.$$
Let $y\in X\quo G$, $x$ be a point from the unique closed $G$-orbit
in $\pi_{G,X}^{-1}(y)$, $H=G_x$, $\eta=\mu_{G,X}(x)$,
$U=(\z_\g(\eta)/\h)^*, V=(\g_*x)^{\skewperp}/(\g_*x\cap
(\g_*x)^{\skewperp})$. The $H$-modules $U\oplus V$ and $T_xX/\g_*x$
are isomorphic.
Using the Luna slice theorem, we see that it is enough to check
\begin{equation}\label{eq:4.5:0}\dim\pi_{H,U\oplus V}^{-1}(0)\leqslant \dim U+\dim V-\frac{\dim
X-\dim G+\mathop{\rm rk}\nolimits G}{2}\end{equation}
\begin{Lem}[\cite{Schwarz}, Proposition 2.10]\label{Lem:4.5.2}
Let $H$ be a reductive group, $T_H$ a maximal torus of $H$, and
$V$ a self-dual $H$-module. Then
\begin{equation*}
\dim\pi_{H,V}^{-1}(0)\leqslant \frac{1}{2}(\dim V-\dim V^{T_H}+\dim
H-\dim T_H).
\end{equation*}
\end{Lem}
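For instance, the estimate of Lemma~\ref{Lem:4.5.2} is sharp already
for $H=T_H=\C^\times$ acting on the self-dual module $V=\C^2$ with
weights $1,-1$: here
\begin{equation*}
\pi_{H,V}^{-1}(0)=\{(x,y)|\,xy=0\},
\end{equation*}
which has dimension $1=\frac{1}{2}(2-0+1-1)$.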
\begin{Lem}\label{Lem:4.5.3}
$U\oplus V$ is a self-dual $H$-module.
\end{Lem}
\begin{proof}[Proof of Lemma~\ref{Lem:4.5.3}]
Note that the $H$-modules $U\oplus V$ and $T_xX/\g_*x$ are
isomorphic. The module $T_xX$ is symplectic, while the module
$\g_*x\cong \g/\h\cong\h^\perp$ is orthogonal. Hence both these
modules are self-dual. Therefore the quotient module $U\oplus V$ is
self-dual too.
\end{proof}
We see that the $H$-module $U\oplus V$ satisfies the assumptions of
Lemma~\ref{Lem:4.5.2}. Let $T_H$ be a maximal torus of $H$. Let us
show that $\dim U^{T_H}\geqslant \mathop{\rm rk}\nolimits\g-\mathop{\rm rk}\nolimits\h$. Since $\dim
\h^{T_H}=\mathop{\rm rk}\nolimits\h$, it is enough to show that
$\dim\z_\g(\xi)^{T_H}\geqslant \mathop{\rm rk}\nolimits\g$ for any $\xi\in\g^H$. Moreover, it
suffices to check the last inequality for $\xi\in \g^H$ in general
position. But in this case $\xi$ is semisimple. Hence $\z_\g(\xi)$
is a Levi subalgebra of $\g$ and everything is clear.
By Lemma~\ref{Lem:4.5.2}, we have the following inequalities
\begin{equation}\label{eq:4.5:1}
\begin{split}
&\dim\pi_{H,U\oplus V}^{-1}(0)\leqslant \frac{1}{2}(\dim U+\dim
V-\dim U^{T_H}-\dim V^{T_H}+\dim \h-\mathop{\rm rk}\nolimits\h)\\ &\leqslant
\frac{1}{2}(\dim U+\dim V-(\mathop{\rm rk}\nolimits\g-\mathop{\rm rk}\nolimits\h)+\dim\h-\mathop{\rm rk}\nolimits\h).
\end{split}
\end{equation}
One may check directly that the last expression in (\ref{eq:4.5:1})
coincides with the r.h.s. of (\ref{eq:4.5:0}).
{\it Step 2.} Now we consider the general case. Let $X_0,G_0$ be as
in Subsection~\ref{SUBSECTION_affham2}.
By Proposition~\ref{Prop:4.2.2}, $\mathop{\rm cork}\nolimits_G(X)=\mathop{\rm cork}\nolimits_{G_0}(X_0),
\mathop{\rm def}\nolimits_G(X)=\mathop{\rm def}\nolimits_{G_0}(X_0)$. The proposition will follow if we show
that
\begin{equation}\label{eq:4.5:11}\codim_X\pi_{G,X}^{-1}(y)\geqslant
\codim_{X_0}\pi_{G_0,X_0}^{-1}(y), \end{equation} for any $y\in
X\quo G$. It follows from Proposition~\ref{Prop:4.2.2} that $X_0\quo
G_0\cong X\quo G$, $\pi_{G_0,X_0}^{-1}(y)=\pi_{G,X}^{-1}(y)\cap
X_0$. Now (\ref{eq:4.5:11}) stems from the following general fact of
algebraic geometry:
$\dim_x(Y\cap Z)\geqslant \dim_x Y+\dim_x Z-\dim X$ for any subvarieties $Y,Z$ of a smooth irreducible variety $X$
and any $x\in Y\cap Z$.
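For instance, two distinct planes $Y,Z$ passing through a point $x$
of the smooth variety $X=\C^3$ satisfy
\begin{equation*}
\dim_x(Y\cap Z)=1=\dim_x Y+\dim_x Z-\dim X,
\end{equation*}
so the estimate is attained.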
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm:4.0.1}]
Clearly, $\widetilde{\psi}_{G,X},\psi_{G,X}$ are equidimensional
provided $\widehat{\psi}_{G,X}$ is. As we mentioned above, any
equidimensional morphism to a normal variety is open.
To prove the theorem it remains to check that for all
$\lambda\in\g\quo G$ and any irreducible component $Z$ of
$\psi_{G,X}^{-1}(\lambda)$ the equality $\dim\pi_{G,X}(Z)=\dim X\quo
G-\mathop{\rm def}\nolimits_G(X)$ and the inequality $\dim Z\leqslant \dim X-\mathop{\rm def}\nolimits_G(X)$
take place (the opposite inequality holds automatically, since
$\mathop{\rm def}\nolimits_G(X)=\dim\overline{\im\psi_{G,X}}$). The former equality will
imply
\begin{equation}\label{eq:4.5:2}\dim\pi_{G,X}(Z)=\dim
X\quo G-\mathop{\rm def}\nolimits_G(X)+\dim Y\end{equation}
for an irreducible component $Z$ of
$\widehat{\psi}_{G,X}^{-1}(Y)$, where $Y\subset
\im\widehat{\psi}_{G,X}$ is an arbitrary closed irreducible
subvariety (recall that, by Proposition~\ref{Lem:4.4.1},
$\im\widehat{\psi}_{G,X}=\im(\widehat{\psi}_{G,X}\quo G)$ is an open
subvariety in $\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$). Thanks to
Proposition \ref{Lem:4.4.1}, (\ref{eq:4.5:2}) holds iff
$\pi_{G,X}(Z)$ is an irreducible component in
$(\widehat{\psi}_{G,X}\quo G)^{-1}(Y)$.
Choose a subvariety $S_{G,X}(H,\eta,V)\subset (\psi_{G,X}\quo
G)^{-1}(\lambda)$ (see Subsection~\ref{SUBSECTION_affham4}) such
that $\pi_{G,X}(Z)\cap S_{G,X}(H,\eta,V)$ is dense (and so, by
virtue of Proposition~\ref{Prop:4.4.2}, open) in $\pi_{G,X}(Z)$.
Further, choose a point $x\in Z\cap
\pi_{G,X}^{-1}(S_{G,X}(H,\eta,V))$ with closed $G$-orbit. Applying
Proposition~\ref{Prop:4.3.3} to $x$, we may replace $X$ with
$M_G(H,\eta,V)$. Thanks to Remark~\ref{Rem:2.1.11}, we may assume
that $V^H=0$. From Proposition~\ref{Prop:4.4.2} it follows that
$\pi_{G,X}(Z)$ is a point. By Proposition~\ref{Prop:4.5.1}, $\dim
Z\leqslant \dim X-\mathop{\rm def}\nolimits_G(X)-\frac{1}{2}\mathop{\rm cork}\nolimits_G(X)$. It follows that
$\mathop{\rm cork}\nolimits_G(X)=0, \dim (\psi_{G,X}\quo G)^{-1}(\lambda)=0, \dim Z=\dim
X-\mathop{\rm def}\nolimits_G(X)$. This verifies the claim at the beginning of the
previous paragraph and completes the proof.
\end{proof}
\begin{Cor}\label{Cor:4.5.2}
For any $\lambda\in \im\psi_{G,X}$ and any irreducible component
$Z$ of $\psi_{G,X}^{-1}(\lambda)$ there exists an open subset
$Z_0\subset Z\quo G$ such that $Z_0$ is smooth (as a variety),
$\codim_{Z\quo G}((Z\quo G)\setminus Z_0)\geqslant 2$, and for any
$z\in Z_0$ and any point $x\in\pi_{G,X}^{-1}(z)$ with closed
$G$-orbit the following condition holds:
\begin{enumerate}
\item[(*)] $M_G(H,\eta,V/V^H)$ is coisotropic, where $(H,\eta,V)$ is
the determining triple of $X$ at $x$.
\end{enumerate}
Moreover, $M_G(H,\eta,V/V^H)$ does not depend (up to an isomorphism)
on the choice of $z$.
\end{Cor}
\begin{proof}
(*) is equivalent to $\mathop{\rm cork}\nolimits_G(X)=\mathop{\rm cork}\nolimits_G(M_G(H,\eta,V))=\dim
V^H$. It follows from Theorem~\ref{Thm:4.0.1} that $Z$ maps
dominantly, whence, by the standard properties of quotient
morphisms, surjectively, onto some irreducible component of
$(\psi_{G,X}\quo G)^{-1}(\lambda)$. The required claims now follow
from Proposition~\ref{Prop:4.4.2}.
\end{proof}
\begin{Cor}\label{Cor:4.5.3}
Let $Y$ be a closed irreducible subvariety in
$\im\widehat{\psi}_{G,X}$. Then
$\overline{\widehat{\psi}_{G,X}(\widetilde{Y})}=Y$ for any
irreducible component $\widetilde{Y}$ of
$\widehat{\psi}_{G,X}^{-1}(Y)$.
\end{Cor}
\begin{proof}
According to Theorem~\ref{Thm:4.0.1}, $\pi_{G,X}(\widetilde{Y})$ is
an irreducible component of $(\widehat{\psi}_{G,X}\quo
G)^{-1}(Y)\subset X\quo G$. It remains to apply
Proposition~\ref{Lem:4.4.1}.
\end{proof}
\begin{Cor}\label{Cor:4.5.4}
A simply connected affine conical Hamiltonian $G$-variety satisfies
(Utw1).
\end{Cor}
\begin{proof}
Thanks to Proposition \ref{Thm:2.2},
$\tau^2_{G,X}:C_{G,X}\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is an isomorphism. By
Theorem~\ref{Thm:4.0.1}, the morphism
$\widetilde{\psi}_{G,X}:X\rightarrow C_{G,X}$ is equidimensional.
Since $G$ is connected, the subalgebra $\C[X]^G$ is integrally
closed in $\C[X]$. Thus $\C[C_{G,X}]$ is integrally closed in
$\C[X]$. In other words, a general fiber of $\widetilde{\psi}_{G,X}$
is connected. Summarizing, we see that $\widetilde{\psi}_{G,X}$ is
an equidimensional morphism with a connected general fiber from a
simply connected variety $X$ to $C_{G,X}\cong
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$. The rest of the proof
is based on an idea of Panyushev~\cite{Panyushev} and is completely
analogous to that of Theorem 7.2 in~\cite{Knop6}.
\end{proof}
\section{Some results concerning Weyl groups}\label{SECTION_Weyl}
Throughout the section $G,X,\omega$ have the same meaning as in the
previous section.
In this section we study the structure of the Weyl group
$W_{G,X}^{(\cdot)}$. Subsection \ref{SUBSECTION_affham6} contains
three technical propositions, which play a crucial role in the
subsequent exposition. Propositions \ref{Prop:4.6.1},
\ref{Prop:4.6.5} allow one to reduce the study of an arbitrary
affine Hamiltonian $G$-variety to the study of a coisotropic conical
model variety. Proposition \ref{Prop:4.6.3} describes the behavior
of Weyl groups under this reduction.
Using results of Subsection \ref{SUBSECTION_affham6}, in Subsection
\ref{SUBSECTION_Weyl_aff1} we establish some properties of Weyl
groups of varieties satisfying the equivalent conditions of Lemma
\ref{Lem:2.3.5}. In particular, we get some restrictions on
varieties with a ``small'' Weyl group (Proposition \ref{Prop:5.2.1},
Corollary \ref{Cor:5.2.3}) and show that a Weyl group cannot be ``too
small'' (Corollary \ref{Cor:5.2.5}). As a consequence of Corollary
\ref{Cor:5.2.5} we get some explicit restrictions on Weyl groups
for simple $G$ of types $A-E$ in Proposition \ref{Prop:5.2.6},
Corollary \ref{Cor:5.2.7}.
Finally, in Subsection \ref{SUBSECTION_Weyl_computation} we compute
the Weyl groups of linear actions of simple groups satisfying some
additional restrictions. This computation will be used in Subsection
\ref{SUBSECTION_Utw3} to check that any symplectic $G$-module is an
untwisted Hamiltonian variety.
\subsection{Some technical propositions}\label{SUBSECTION_affham6}
\begin{Prop}\label{Prop:4.6.1}
Let $L$ be the principal centralizer and $X_L$ an
$L$-cross-section of $X$, $\xi\in\a_{G,X}^{(X_L)}$,
$\alpha=\pi_{W_{G,X}^{(X_L)},\a_{G,X}^{(X_L)}}(\xi)$, $M=Z_G(\xi)$.
Suppose $\alpha\in \im\widehat{\psi}_{G,X}$. Choose an irreducible
component $Z$ of $\widehat{\psi}_{G,X}^{-1}(\alpha)$. Then there
exists $x\in X$ possessing the following properties:
\begin{itemize}
\item[(a)] $x\in Z$.
\item[(b)] $\mu_{G,X}(x)_s\in \z(\m)\cap\m^{pr}$.
\item[(c)] A unique $M$-cross-section $X_M$ of $X$
containing $x$ contains $X_L$ and
$\widehat{\psi}_{M,X_M}(x)=\pi_{W_{M,X_M}^{(X_L)},\a_{M,X_M}^{(X_L)}}(\xi)$.
\item[(d)] $Gx$ is closed in $X$.
\item[(e)] Let $(H,\eta,V)$ be the determining triple of $X$ (or, equivalently, of $X_M$) at $x$ and
$\widehat{G}$ be a connected subgroup of $M$ containing
$(M,M)H^\circ$. The orbit $\widehat{G}x$ is closed in $X_M$ and the
Hamiltonian $\widehat{G}$-variety
$\widehat{X}:=M_{\widehat{G}}(H\cap \widehat{G},\eta_n,V/V^H)$
is coisotropic.
\end{itemize}
\end{Prop}
\begin{Rem}\label{Rem:4.6.2}
If $X$ satisfies the equivalent conditions of Lemma \ref{Lem:2.3.5},
then so does the Hamiltonian $\widehat{G}$-variety $\widehat{X}$.
This stems easily from Proposition \ref{Prop:4.3.3}.
\end{Rem}
\begin{proof}[Proof of Proposition \ref{Prop:4.6.1}]
Choose a point $z\in Z$ with closed $G$-orbit. Let us show that $gz$
satisfies (b),(c) for some $g\in G$. Put $M_1=Z_G(\mu_{G,X}(z)_s)$.
Since $\pi_{G,\g}(\mu_{G,X}(z)_s)= \pi_{G,\g}(\xi)$, we have
$M_1\sim_G M$. Let $X_{M_1}$ be an $M_1$-cross-section of $X$
containing $z$, $L_1$ be the principal centralizer and $X_{L_1}$ an
$L_1$-cross-section of $X_{M_1}$. Replacing $z$ with $gz$ for an
appropriate element $g\in G$, we may assume that $L_1=L,
X_{L_1}=X_L$. Next, replacing $z$ with $mz$ for some $m\in M_1$, one
obtains
$\mu_{G,X}(z)_s\in\a_{M_1,X_{M_1}}^{(X_L)}=\a_{G,X}^{(X_L)}$. By the
commutative diagram of assertion 2 of Lemma \ref{Lem:1.9}, for some
$n\in N_G(L,X_L)$ the following equality holds
\begin{equation}\label{eq:4.6:2}
\widehat{\psi}_{M_1,X_{M_1}}(z)=\pi_{W_{M_1,X_{M_1}}^{(X_L)},\a_{G,X}^{(X_L)}}(n\xi).
\end{equation}
Note that $\psi_{M_1,X_{M_1}}(z)\in
\z(\m_1)\hookrightarrow \m_1\quo M_1$. From (\ref{eq:4.6:2}) it
follows that $\pi_{M_1,\m_1}(n\xi)\in\z(\m_1)\hookrightarrow
\m_1\quo M_1$ whence $n\xi\in\z(\m_1)$. On the other hand, $n\xi\in
\z(\mathop{\rm Ad}\nolimits(n)\m)\cap (\mathop{\rm Ad}\nolimits(n)\m)^{pr}$ and so $\m_1\subset \mathop{\rm Ad}\nolimits(n)\m$. We
have seen above that $M_1\sim_G M$ whence $M_1=nMn^{-1}$. Replacing
$z$ with $n^{-1}z$, we get the point $z$ satisfying (a)-(c). Put
$\alpha'=\pi_{W_{M,X_M}^{(X_L)},\a_{M,X_M}^{(X_L)}}(\xi)$.
According to Lemma~\ref{Lem:4.3.4}, there exists an
open affine $M$-saturated subvariety $X_M^0\subset X_M$ containing
$z$ such that for any $x\in X_M^0$ the orbit $Gx$ is closed in $X$
iff $Mx$ is closed in $X_M$. Further, by Lemma~\ref{Lem:4.3.5},
$\widehat{G}x\subset X_M$ is closed whenever $Mx$ is closed.
From assertion 2 of Lemma \ref{Lem:1.9}, Theorem \ref{Thm:4.0.1} and
the fact that the natural morphism $G*_{M}X_M\rightarrow X$ is
\'{e}tale we get $\dim Z\cap
X_M=\dim\widehat{\psi}_{M,X_M}^{-1}(\alpha')$. Hence there is an
irreducible component $Z'$ of $\psi_{M,X_M}^{-1}(\alpha')$
containing $z$ and contained in $Z\cap X_M$. By Corollary
\ref{Cor:4.5.2}, there is an open subset $Y^0\subset
\pi_{M,X_M}(Z')$ such that any point $x\in \pi_{M,X_M}^{-1}(Y^0)$
with closed $M$-orbit satisfies (a)-(d) and (e) for $\widehat{G}=M$.
When $\widehat{G}\neq M$, there is a covering
$T_0\times\widehat{G}\twoheadrightarrow M$ and a finite Hamiltonian
morphism $T^*(T_0)\times \widehat{X}\rightarrow M_M(H\cap
\widehat{G},\eta_n,V/V^H)$, where $T_0$ is a torus. Since
$H^\circ\subset \widehat{G}$, we are done.
\end{proof}
\begin{Prop}\label{Prop:4.6.5}
Let $X,L,X_L$ be as in Proposition \ref{Prop:4.6.1}, $T_0$ denote
the unit component of the inefficiency kernel of the action
$Z(L)^\circ:X_L$, $\xi_0\in\a_{G,X}^{(X_L)}$, $M=Z_G(\xi_0)$.
Suppose $0\in\im\widehat{\psi}_{G,X}$. Put
$\z:=\z(\m)\cap\a_{G,X}^{(X_L)},
\underline{Z}:=\pi_{W_{G,X}^{(X_L)},\a_{G,X}^{(X_L)}}(\z)$. Choose
an irreducible component $\widetilde{Z}$ of
$\widehat{\psi}_{G,X}^{-1}(\underline{Z})$. Let $\xi\in \z$ be a
point in general position. Then there is a component $Z$ of
$\widehat{\psi}_{G,X}^{-1}(\pi_{W_{G,X}^{(X_L)},\a_{G,X}^{(X_L)}}(\xi))$
lying in $\widetilde{Z}$ and a point $x\in Z$ satisfying the
conditions (b)-(e) of Proposition \ref{Prop:4.6.1} and
\begin{itemize}
\item[(f)] $G_x^\circ\subset (M,M)T_0$.
\end{itemize}
\end{Prop}
\begin{Rem}\label{Rem:4.6.4}
Under the assumptions of Proposition \ref{Prop:4.6.5} one may
assume that $\widehat{G}$ defined in (e) coincides with $(M,M)T_0$.
If $X$ satisfies the equivalent conditions of Lemma~\ref{Lem:2.3.5},
then one can take $(M,M)$ for $\widehat{G}$.
\end{Rem}
\begin{proof}[Proof of Proposition \ref{Prop:4.6.5}]
The morphism $\widehat{\psi}_{G,X}$ is open by Theorem
\ref{Thm:4.0.1}, so $Z,\widetilde{Z}$ do exist. Choose a point
$z\in Z$ satisfying conditions (a)-(e) and such that
$\pi_{G,X}(\widetilde{Z})$ is the only component of
$(\widehat{\psi}_{G,X}\quo G)^{-1}(\underline{Z})$ (see Theorem
\ref{Thm:4.0.1}) containing $\pi_{G,X}(z)$.
Let $X_M$ be as in (c). By the choice of $z$, any irreducible
component $\widetilde{Z}'$ of $\psi_{M,X_M}^{-1}(\z)$ containing $z$
is contained in $\widetilde{Z}\cap X_M$, compare with the proof of
Proposition \ref{Prop:4.6.1}. As in that proof, there is an open
subset $Y^0\subset\pi_{M,X_M}(\widetilde{Z}')$ such that any $x\in
\pi_{M,X_M}^{-1}(Y^0)$ with closed $M$-orbit satisfies conditions
(a)-(e) (for
appropriate $\xi$).
It remains to prove that
$M_x^\circ\subset (M,M)T_0$ for a general point $x\in
\widetilde{Z}'$ with closed $M$-orbit. Recall (see the discussion
preceding Proposition \ref{Prop:4.2.2}) that $L_0:=(L,L)T_0$ is the
unit component of the principal isotropy group for the action
$M:X_M$.
Let $C$ denote the principal isotropy
subgroup for the action $M:\widetilde{Z}'$, so $L_0\subset C$. By
the definition of $C$, there exists an irreducible component $X_1$
of $X_M^C$ such that $\pi_{M,X_M}(X_1\cap \widetilde{Z}')$ is dense
in $\pi_{M,X_M}(\widetilde{Z}')$.
By Lemma~\ref{Lem:4.2.1}, the action $N_M(C,X_1):X_1$ is Hamiltonian
with moment map $\mu_{N_M(C,X_1),X_1}=\mu_{M,X_M}|_{X_1}$. Since
$0\in \overline{\im\psi_{G,X}}$, we get $0\in
\overline{\psi_{M,X_M}(\widetilde{Z}')}$, equivalently,
$\overline{\mu_{M,X_M}(X_1)}$ contains a nilpotent element. Since
$C$ acts trivially on $X_1$, we get
\begin{equation}\label{eq:4.6:3}\mu_{M,X_M}(X_1)\subset \m^C\cap(\xi+\c^{\perp})\end{equation}
for any $\xi\in \im\mu_{M,X_M}(X_1)$. Since there is a nilpotent
element in $\overline{\mu_{M,X_M}(X_1)}$, we see that the r.h.s. of
(\ref{eq:4.6:3}) coincides with $\m^C\cap\c^{\perp}$. For brevity,
put $\s=\m^C\cap\c^\perp$. This is an ideal in $\m^C$.
Choose $x\in \widetilde{Z}'\cap X_1$ and put $\eta=\mu_{M,X_M}(x)$.
Then $\eta_s\in\z$ and $(\eta_s-\xi)+\eta_n\in \s$. Clearly,
$\c^C\subset \z(\m^C)$. Thus $[\eta_s-\xi,\eta_n]=0$ whence
$\eta_s-\xi=(\eta-\xi)_s\in \s$ and
\begin{equation}\label{eq:4.6:4}\mu_{M,X_M}(x)_s\in
\z\cap \c^\perp,\forall x\in \widetilde{Z}'\cap X_1.\end{equation}
\begin{Lem}\label{Lem:4.6.6}
$\m=\z+\t_0+[\m,\m]$.
\end{Lem}
\begin{proof}
It is enough to check that
\begin{equation}\label{eq:4.6:5}\t=\z+\t_0+\t_1,\t_1:=\t\cap [\m,\m],\end{equation} where $\t$ denotes a
Cartan subalgebra of $\lfr$. Recall that
\begin{align*}
&\t=\z(\m)\oplus \t_1,\\%\label{eq:4.6:7}
&\z=\z(\m)\cap\a_{G,X}^{(X_L)}=\z(\m)\cap
(\z(\lfr)\cap\t_0^\perp)=\z(\m)\cap\t_0^\perp.
\end{align*}
Since $\z(\m),\t_1,\t_0$ are the Lie algebras of algebraic groups,
we see that $(\cdot,\cdot)$ is nondegenerate on
$\z(\m),\t_1,\t_0,\z$. To prove (\ref{eq:4.6:5}) it is enough to
note that $\t_0+\t_1=\z^\perp$.
\end{proof}
If $\c\not\subset[\m,\m]+\t_0$, then, thanks to Lemma
\ref{Lem:4.6.6}, the r.h.s. of (\ref{eq:4.6:4}) is a proper subspace
in $\z$. Hence
$\psi_{M,X_M}(\widetilde{Z}')=\psi_{M,X_M}(\widetilde{Z'}\cap X_1)$
is not dense in $\z$. Since $\z\cap\im\psi_{M,X_M}$ is an open
subset in $\z$, we get a contradiction with
Corollary~\ref{Cor:4.5.3}.
\end{proof}
\begin{Prop}\label{Prop:4.6.3}
Let $X,L,X_L,M,X_M,\widehat{G}$ be as in
Proposition~\ref{Prop:4.6.1}, $\widehat{L}=L\cap \widehat{G}$. Let
$x\in X$ satisfy conditions (a)-(d) of Proposition~\ref{Prop:4.6.1}
(for some $Z$) and $\widehat{X}$ be the model variety constructed by
$x$ as in (e). Then $\widehat{L}$ is the principal centralizer of
$\widehat{X}$ and there is an $\widehat{L}$-cross-section
$\widehat{X}_{\widehat{L}}$ of $\widehat{X}$ such that
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}$ is a
$W_{G,X}^{(X_L)}\cap M/L$-stable subspace of $\a_{G,X}^{(X_L)}$ and
$W_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}$ lies in
the image of $ W_{G,X}^{(X_L)}\cap M/L$ in
$\GL(\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})})$.
\end{Prop}
\begin{proof}
Recall, see Lemma \ref{Lem:1.9}, that $\widehat{L}$ is the principal
centralizer and $X_L$ is an $\widehat{L}$-cross-section of the
Hamiltonian $\widehat{G}$-variety $X_M$. Let $(H,\eta,V)$ denote the
determining triple of $X$ at $x$. Thanks to
Lemmas~\ref{Lem:4.3.4},\ref{Lem:4.3.5}, $(H\cap
\widehat{G},\eta_n,V/V^H\oplus V_0)$ is the determining triple of
the Hamiltonian $\widehat{G}$-variety $X_M$ at $x$, where $V_0$ is a
trivial $H\cap\widehat{G}$-module. Put
$\widehat{X}':=M_{\widehat{G}}(H\cap \widehat{G},\eta_n,V/V^H\oplus
V_0)\cong \widehat{X}\times V_0$. It is enough to prove the analogue
of the assertion of the proposition for $\widehat{X}'$.
By Proposition~\ref{Prop:4.3.3}, there is a $\widehat{G}$-saturated
analytical open neighborhood $O$ of $[1,(0,0)]$ in $\widehat{X}'$,
that is isomorphic (as a Hamiltonian $\widehat{G}$-manifold) to a
saturated analytical neighborhood of $x$ in $(X_M)_{-\eta_s}$. One
may assume additionally that $O$ is connected. By \cite{slice},
Lemma 5, $O_1:=\pi_{\widehat{G},\widehat{X}'}(O)$ is an open
neighborhood of $\pi_{\widehat{G},\widehat{X}'}([1,(0,0)])$ in
$\widehat{X}'\quo \widehat{G}$. Further, according to Example
\ref{Ex:2.2.4}, $\widehat{X}'$ is a conical Hamiltonian variety.
Replacing $O$ with a smaller neighborhood, we may assume that
$t.O\subset O$ for $0\leqslant t\leqslant 1$. Note that
$\widehat{L}$ is the principal centralizer of the Hamiltonian
$\widehat{G}$-variety $\widehat{X}'$. Since $\widehat{G}X_L$ is an
open subvariety of $X_M$ (in Zariski topology), we have $X_L\cap
O\neq \varnothing$. Choose an $\widehat{L}$-cross-section
$\widehat{X}'_{\widehat{L}}$
of $\widehat{X}'$ such that some connected component of $X_L\cap O$
is contained in $\widehat{X}'_{\widehat{L}}\cap O$.
\begin{Lem}\label{Lem:4.6.4}
The manifold $\widehat{X}'_{\widehat{L}}\cap O$ is connected.
\end{Lem}
\begin{proof}[Proof of Lemma~\ref{Lem:4.6.4}]
Let $(\eta_n,h,f)$ be an $\sl_2$-triple in $\widehat{\g}^{H\cap
\widehat{G}}$ generating the model variety $\widehat{X}'$. Note that
the action $\C^\times:\widehat{X}'$ preserves
$\widehat{X}'_{\widehat{L}}$. Let $Y^0,Y^1$ be two distinct
connected components of $\widehat{X}'_{\widehat{L}}\cap O$, $y^i\in
Y^i, i=0,1,$ and $y^t, 0\leqslant t\leqslant 1,$ a continuous curve
connecting $y^0,y^1$ in $\widehat{X}'_{\widehat{L}}$. There is a
positive real $\tau<1$ such that $\tau y^t\in O$ for all
$t,0\leqslant t\leqslant 1$. Finally, note that $\tau_1y^i\in Y^i$
for all real $\tau_1$ such that $\tau\leqslant \tau_1\leqslant 1$
and $i=0,1$. Therefore $t\mapsto \tau y^t$ is a continuous curve in
$\widehat{X}'_{\widehat{L}}\cap O$ connecting points from $Y^0,Y^1$.
Contradiction.
\end{proof}
Now we can complete the proof of the proposition. One easily deduces
from Proposition \ref{Prop:4.3.3} that
$\a_{\widehat{G},\widehat{X}'}^{(\widehat{X}'_{\widehat{L}})}=\a_{\widehat{G},X_M}^{(X_L)}$.
The equalities
$W_{\widehat{G},X_M}^{(X_L)}=W_{M,X_M}^{(X_L)}=W_{G,X}^{(X_L)}\cap
M/L$ hold, see the discussion preceding Lemma \ref{Lem:1.9}. By
Lemma~\ref{Lem:4.6.4},
$N_{\widehat{G}}(\widehat{L},\widehat{X}'_{\widehat{L}})=N_{\widehat{G}}(\widehat{L},\widehat{X}'_{\widehat{L}}\cap
O)$. It remains to recall that $\widehat{X}'_{\widehat{L}}\cap
O\hookrightarrow X_L$ whence
$N_{\widehat{G}}(\widehat{L},\widehat{X}'_{\widehat{L}}\cap O)\subset
N_{\widehat{G}}(\widehat{L},X_L)$.
\end{proof}
\begin{Rem}\label{Rem:2.8}
We use the notation of Proposition \ref{Prop:4.6.1}. Let
$\widehat{X}', \widehat{X}'_{\widehat{L}},O$ be as in the proof of
Proposition \ref{Prop:4.6.3}. It can be checked using the
definitions of the morphisms $\widehat{\psi}_{\bullet,\bullet}$ that
the following diagram is commutative:
\begin{picture}(165,40)
\put(2,32){$\widehat{X}$}\put(30,32){$\widehat{X}'$}\put(46,32){$O$}\put(60,32){$X_M$}
\put(95,32){$G*_MX_M$}\put(138,32){$X$}
\put(7,4){$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}/
W_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}$}
\put(50,4){$\a_{\widehat{G},X_M}^{(X_L)}/
W_{\widehat{G},X_M}^{(X_L)}$} \put(90,4){$\a_{M,X_M}^{(X_L)}/
W_{M,X_M}^{(X_L)}$} \put(130,4){$\a_{G,X}^{(X_L)}/ W_{G,X}^{(X_L)}$}
\put(28,34){\vector(-1,0){20}}\put(44,34){\vector(-1,0){8}}
\put(50,34){\vector(1,0){9}}\put(67,34){\vector(1,0){25}}
\put(113,34){\vector(1,0){25}}\put(30,6){\vector(1,0){18}}
\put(89,6){\vector(-1,0){14}}\put(116,6){\vector(1,0){13}}
\put(5,31){\vector(1,-3){6.5}}\put(30,31){\vector(-1,-3){6.5}}
\put(63,31){\vector(0,-1){21}}\put(67,32){\vector(1,-1){23}}
\put(105,31){\vector(0,-1){21}}\put(140,31){\vector(0,-1){21}}
\put(141,22){\footnotesize $\widehat{\psi}_{G,X}$}
\put(79,22){\footnotesize $\widehat{\psi}_{M,X_M}$}
\put(64,17){\footnotesize $\widehat{\psi}_{\widehat{G},X_M}$}
\put(29,22){\footnotesize
$\widehat{\psi}_{\widehat{G},\widehat{X}'}$}\put(9,22){\footnotesize
$\widehat{\psi}_{\widehat{G},\widehat{X}}$}
\end{picture}
Let us explain the meaning of unmarked arrows. The morphism
$\widehat{X}'\cong \widehat{X}\times V^H \rightarrow \widehat{X}$ is
the projection along $V^H$. The maps $O\rightarrow \widehat{X}',X_M$
are open embeddings of complex analytical manifolds. The morphism
$X_M\rightarrow G*_MX_M$ is the embedding $x\mapsto [1,x]$, and the
morphism $G*_MX_M\rightarrow X$ is given by $[g,x]\mapsto gx$; it is
\'{e}tale by Proposition \ref{Prop:1.1}. One easily sees that
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}=\a_{\widehat{G},X_M}^{(X_L)}$,
$\a_{M,X_M}^{(X_L)}=\a_{G,X}^{(X_L)}$, and
$\a_{\widehat{G},X_M}^{(X_L)}$ is the image of $\a_{M,X_M}^{(X_L)}$
under the orthogonal projection $\m\twoheadrightarrow\widehat{\g}$.
Moreover, it follows from Lemma \ref{Lem:1.9} and Proposition
\ref{Prop:4.6.3} that
$W_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}\subset
W_{\widehat{G},X_M}^{(X_L)}$, $W_{M,X_M}^{(X_L)}=
W_{G,X}^{(X_L)}\cap M/L$, $W_{\widehat{G},X_M}^{(X_L)}\cong
W_{M,X_M}^{(X_L)}$. The morphism
$\a_{M,X_M}^{(X_L)}/W_{M,X_M}^{(X_L)}\rightarrow\a_{G,X}^{(X_L)}/W_{G,X}^{(X_L)}$
is the natural morphism of quotients. The morphism
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}/W_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}\rightarrow
\a_{\widehat{G},X_M}^{(X_L)}/W_{\widehat{G},X_M}^{(X_L)}$ is the
composition of the natural morphism of quotients and the translation
by the projection of $\eta_s$ to $\a_{\widehat{G},X_M}^{(X_L)}$,
where $\eta$ is as in (e) of Proposition \ref{Prop:4.6.1}. The
morphism $\a_{M,X_M}^{(X_L)}/W_{M,X_M}^{(X_L)}\rightarrow
\a_{\widehat{G},X_M}^{(X_L)}/W_{\widehat{G},X_M}^{(X_L)}$ is induced
by the projection $\a_{M,X_M}^{(X_L)}\twoheadrightarrow
\a_{\widehat{G},X_M}^{(X_L)}$.
\end{Rem}
\subsection{The structure of Weyl groups of affine Hamiltonian varieties}\label{SUBSECTION_Weyl_aff1}
In this subsection $G$ is a connected reductive group, $T$ is a
maximal torus of $G$, $X$ is a conical affine Hamiltonian
$G$-variety satisfying the equivalent conditions of
Lemma~\ref{Lem:2.3.5}, and $X_T$ is a $T$-cross-section of $X$. The
goal of this subsection is to obtain some information about
$W_{G,X}^{(\cdot)}$ and some restrictions on a Hamiltonian
$G$-variety $X$ with a given Weyl group. All results are based on
Propositions \ref{Prop:4.6.1},\ref{Prop:4.6.5},\ref{Prop:4.6.3}.
These propositions allow one to reduce the study of
$W_{G,X}^{(\cdot)}$ to the case when $G$ is semisimple and $X$ is
a model Hamiltonian variety $M_G(H,\eta,V)$ such that
$\mathop{\rm cork}\nolimits_G(X)=0$ and $\eta$ is nilpotent. First of all, we need to
find out when the Weyl group of such a model variety is trivial.
\begin{Prop}\label{Prop:5.2.1}
Let $G$ be a connected reductive group, $H$ a reductive subgroup,
$\eta$ an $H$-invariant nilpotent element of $\g$, and $V$ a
symplectic $H$-module. Suppose $X:=M_G(H,\eta,V)$ satisfies the
equivalent conditions of Lemma \ref{Lem:2.3.5} and $\mathop{\rm cork}\nolimits_G(X)=0$.
\begin{enumerate}
\item
If $W_{G,X}^{(\cdot)}=\{1\}$, then
\begin{itemize}
\item[(*)] $\eta=0, (G,G)\subset H$,
$(G,G)\cong G_1\times \ldots\times G_k$ for some $k$, where $G_i\cong
\SL_2$. Moreover, the $G$-modules $V/V^{(G,G)}$ and $ V_1\oplus
V_2\oplus\ldots\oplus V_k$ are isomorphic, where $V_i$ is the direct
sum of two copies of the two-dimensional irreducible $G_i$-module.
\end{itemize}
\item Conversely, if $G$ is semisimple, and $X$ satisfies (*), then
$W_{G,X}^{(\cdot)}=\{1\}$.
\end{enumerate}
\end{Prop}
\begin{proof}
Suppose, at first, that $G$ is semisimple. Let us prove the first
assertion.
Since $\mathop{\rm cork}\nolimits_G(X)=0$, we see that the field $\C(X)^G$ is Poisson commutative, compare with \cite{Vinberg}, Section 2.3.
It follows from Proposition \ref{Thm:2.2} that
$\widehat{\psi}_{G,X}\quo G:X\quo G\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is an isomorphism. By Lemma \ref{Lem:2.51}, $\C[X]^G$
and $\C[\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}]$ are isomorphic as graded algebras (where all elements
of $\a_{G,X}^{(X_L)*}$ are supposed to have degree 2, the grading on
$\C[X]^G$ is described in Remark \ref{Rem:2.2.5}). Thus the
equality $W_{G,X}^{(\cdot)}=\{1\}$ is equivalent to the condition
that $\C[X]^G$ is generated by elements of degree 2.
The morphism $M_G(H^\circ,\eta,V)\rightarrow M_G(H,\eta,V),
[g,(u,v)]\mapsto [g,(u,v)],$ satisfies the assumptions of assertion
4 of Lemma \ref{Lem:1.9}. Thus
$W_{G,M_G(H^\circ,\eta,V)}^{(\cdot)}=\{1\}$ and we may assume that
$H$ is connected.
Put $U=(\z_\g(\eta)\cap \h^\perp)^*$. Let us equip the algebra
$\C[U\oplus V]$ with the grading described in
Remark~\ref{Rem:2.2.5}. The algebra $\C[U\oplus V]^H$ is generated
by elements of degree 2. Recall that $\eta\in U^*$ has degree 4.
Therefore $\eta=0$. Any element from $U^*\cong \h^\perp$ has degree
2. Therefore $U\quo H\cong U^H$, equivalently, $\C[U/U^H]^H=\C$. But
the $H$-module $U/U^H$ is orthogonal (that is, possesses a
nondegenerate $H$-invariant symmetric form) because $U$ is
orthogonal. Hence $U=U^H$. Equivalently, $\g=\h+\g^\h$. In
particular, $\h$ is an ideal of $\g$. By Lemma \ref{Lem:2.3.5},
\begin{equation}\label{eq:5.2:1} \dim X= \dim\g+\mathop{\rm rk}\nolimits \g.\end{equation}
But $\dim X=2\dim \g-2\dim\h+\dim V$. Note that $m_H(V)=\dim H$, for
$m_G(X)=\dim G$ and $\g/\h$ is a trivial $\h$-module. Since $V$ is
a symplectic $H$-module, we have
\begin{equation}\label{eq:5.2:2}\dim V=
\dim\h+\mathop{\rm rk}\nolimits\h+\mathop{\rm cork}\nolimits_H(V).\end{equation} From (\ref{eq:5.2:1}),
(\ref{eq:5.2:2}) it follows that
\begin{equation}\label{eq:5.2:3}\dim \g+\mathop{\rm rk}\nolimits\g= \dim X=2\dim G/H+\dim
V\geqslant 2\dim\g-\dim\h+\mathop{\rm rk}\nolimits\h.\end{equation} We deduce from
(\ref{eq:5.2:3}) that $\dim \g-\mathop{\rm rk}\nolimits\g\leqslant \dim\h-\mathop{\rm rk}\nolimits\h$.
Since $\h$ is an ideal of $\g$, the last inequality is equivalent
to $\g=\h$.
Let $G= G_1\ldots G_k$ be the decomposition into the locally direct
product of simple subgroups. By the discussion before Lemma
\ref{Lem:1.9}, $W_{G_i,V}^{(\cdot)}=\{1\}$. By
Propositions~\ref{Prop:4.6.1}, \ref{Prop:4.6.5}, there exists a
point $x\in \psi_{G_i,V}^{-1}(0)$ satisfying the conditions (a)-(f)
of those propositions (with $G,X$ replaced with $G_i,V$). Let
$(H_0,\eta_0,V_0)$ be the determining triple of the $G_i$-variety
$V$ at $x$. Thanks to Proposition~\ref{Prop:4.6.3},
$W_{G_i,M_{G_i}(H_0,\eta_0,V_0)}^{(\cdot)}=\{1\}$. By assertion 1,
$\eta_0=0, H_0=G_i$ whence $V_0=V$. Further, $V/V^{G_i}$ is a
coisotropic $G_i$-module of dimension $\dim\g_i+\mathop{\rm rk}\nolimits\g_i$. Using
the classification of coisotropic modules obtained
in~\cite{cois},\cite{Knop7}, we get $G_i=\SL_2$. Since
$W_{G_i,V/V^{G_i}}^{(\cdot)}=\{1\}$, we see that $\C[V/V^{G_i}]$ is
generated by an element of degree 2. One easily deduces from this
that $V/V^{G_i}$ is isomorphic to the direct sum of two copies of
the irreducible 2-dimensional $\SL_2$-module.
Since $V/V^{G_i}\cong V^{G_i\skewperp}$ is a symplectic $G$-module
and $\Sp(V/V^{G_i})^{G_i}$ is a torus, the group $\prod_{j\neq
i}G_j$ acts trivially on $V/V^{G_i}$. Note that
$(\bigoplus_{i=1}^k V^{G_i\skewperp})^{\skewperp}=\bigcap_{i=1}^k
V^{G_i}=V^{G}=0$. The last equality holds because $\mathop{\rm cork}\nolimits_G(V)=0$. To
complete the proof of assertion 1 note that $\prod_{i=1}^k G_i$ acts
on $V$ effectively. It follows that the natural epimorphism
$\prod_{i=1}^k G_i\rightarrow G$ is an isomorphism.
Now suppose that $X$ is of the form indicated in (*). It is enough
to check the equality $W_{G,X}^{(\cdot)}=\{1\}$ for $k=1$. Here the
equality follows from the observation that $\C[X]^G$ is generated by
an element of degree 2.
We proceed to the case when $G$ is not necessarily semisimple.
Let $x$ be a point of $X$ satisfying conditions
(a)-(f) of Propositions \ref{Prop:4.6.1},\ref{Prop:4.6.3} for $M=G$,
$\widehat{G}=(G,G)$, and let $\widehat{X}$ be the model variety
constructed from $\widehat{G},x$ in (e). By Proposition
\ref{Prop:4.6.3}, $W_{\widehat{G},\widehat{X}}^{(\cdot)}=\{1\}$.
Therefore $(G,G)=\prod_{i=1}^k G_i, \widehat{X}=\bigoplus_{i=1}^k
V_i$. Since any stabilizer of a point with closed $G$-orbit is
conjugate to a subgroup of $H$, we have $(G,G)\subset H,\eta=0$. By the above, there is a point $x\in V^{(G,G)}\hookrightarrow X$ such that
$V/\g_*x\cong \bigoplus_{i=1}^k V_i$. This observation completes the
proof.
\end{proof}
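To make condition (*) concrete, consider its smallest instance (a sketch added for illustration; the degree count is the one used at the beginning of the proof):

```latex
% G = H = \SL_2, \eta = 0, V = \C^2\oplus\C^2:
X=M_{\SL_2}(\SL_2,0,\C^2\oplus\C^2)\cong\C^2\oplus\C^2,\qquad
\dim X=4=\dim\g+\mathop{\rm rk}\nolimits\g.
```

Here $\C[X]^{\SL_2}$ is generated by the single invariant $(u,v)\mapsto u_1v_2-u_2v_1$, which has degree $2$, so $W_{G,X}^{(\cdot)}=\{1\}$ by the criterion that $W_{G,X}^{(\cdot)}=\{1\}$ iff $\C[X]^G$ is generated in degree $2$.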
Now we are going to obtain a sufficient condition for
$W_{G,X}^{(\cdot)}$ to intersect any subgroup of $W(\g)$ conjugate
to a certain fixed subgroup. To state the corresponding assertion we
need some definitions.
\begin{defi}\label{defi:5.2.2}
A subset $A\subset \Delta(\g) $ is called {\it completely
perpendicular} if the following two conditions hold:
\begin{enumerate}
\item $(\alpha,\beta)=0$ for any $\alpha,\beta\in A$.
\item $\Span_{\R}(A)\cap \Delta(\g) =A\cup -A$.
\end{enumerate}
\end{defi}
For example, any one-element subset of $\Delta(\g) $ is completely
perpendicular.
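The definition is easy to test mechanically. The following self-contained Python sketch (our illustration; all names are ours) checks it for the roots of $A_3$ realized in $\R^4$ as $\varepsilon_i-\varepsilon_j$: the set $\{\varepsilon_1-\varepsilon_2,\varepsilon_3-\varepsilon_4\}$ is completely perpendicular, while $\{\varepsilon_1-\varepsilon_2,\varepsilon_2-\varepsilon_3\}$ is not (condition 1 already fails).

```python
# Check complete perpendicularity for A_3, with roots e_i - e_j in R^4.

def roots_a3():
    """All 12 roots e_i - e_j (i != j) of A_3 as 4-tuples."""
    rs = []
    for i in range(4):
        for j in range(4):
            if i != j:
                v = [0] * 4
                v[i], v[j] = 1, -1
                rs.append(tuple(v))
    return rs

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def in_span(v, basis):
    """Whether v lies in the span of a pairwise-orthogonal basis."""
    proj = [0.0] * len(v)
    for a in basis:
        c = dot(v, a) / dot(a, a)
        proj = [p + c * x for p, x in zip(proj, a)]
    return all(abs(p - x) < 1e-9 for p, x in zip(proj, v))

def completely_perpendicular(A, roots):
    # Condition 1: distinct elements of A are pairwise orthogonal.
    if any(dot(a, b) != 0 for a in A for b in A if a != b):
        return False
    # Condition 2: Span_R(A) meets the root system exactly in A u -A.
    span_roots = {r for r in roots if in_span(r, A)}
    return span_roots == set(A) | {tuple(-x for x in a) for a in A}

A = [(1, -1, 0, 0), (0, 0, 1, -1)]   # {e1-e2, e3-e4}
B = [(1, -1, 0, 0), (0, 1, -1, 0)]   # {e1-e2, e2-e3}, not orthogonal
print(completely_perpendicular(A, roots_a3()))  # True
print(completely_perpendicular(B, roots_a3()))  # False
```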
\begin{defi}\label{Def:1.4.1}
A pair $(\h,V)$, where $\h$ is a reductive subalgebra of $\g$ and
$V$ is an $\h$-module, is said to be a $\g$-stratum. Two $\g$-strata
$(\h_1,V_1)$, $(\h_2,V_2)$ are called {\it equivalent} if there
exist $g\in G$ and a linear isomorphism
$\varphi:V_1/V_1^{\h_1}\rightarrow V_2/V_2^{\h_2}$ such that
$\mathop{\rm Ad}\nolimits(g)\h_1=\h_2$ and $(\mathop{\rm Ad}\nolimits(g)\xi)\varphi(v_1)=\varphi(\xi v_1)$ for
all $\xi\in\h_1, v_1\in V_1/V_1^{\h_1}$.
\end{defi}
\begin{defi}\label{Def:1.4.2} Let $Y$ be a smooth affine variety and
$y\in Y$ a point with closed $G$-orbit. The pair $(\g_y,
T_yY/\g_*y)$ is called the $\g$-stratum of $y$. We say that $(\h,V)$
is a $\g$-stratum of $Y$ if $(\h,V)$ is equivalent to a $\g$-stratum of
a point of $Y$. In this case we write $(\h,V)\rightsquigarrow_\g Y$.
\end{defi}
\begin{Rem}\label{Rem:1.4.3} Let us justify the terminology. The pairs $(\h,V)$
do define a stratification of $Y\quo G$ by varieties with
quotient singularities. Besides, analogous objects were called
``strata'' in \cite{Schwarz3}, from which the term is borrowed.
\end{Rem}
Let $A$ be a nonempty completely perpendicular subset of $\Delta(\g)
$. By $S^{(A)}$ we denote the $\g$-stratum $(\g^{(A)},
\sum_{\alpha\in A}V^{\alpha})$, where $V^\alpha$ is, by definition,
the direct sum of two copies of the two-dimensional irreducible
$\g^{(A)}/\g^{(A\setminus\{\alpha\})}$-module.
\begin{Cor}\label{Cor:5.2.3}
If $W_{G,X}^{(X_T)}\cap W(\g^{(A)})=\{1\}$, then
$S^{(A)}\rightsquigarrow_\g X$.
\end{Cor}
\begin{proof}
Put $M=Z_G(\bigcap_{\alpha\in A}\ker\alpha)$. We remark that
$G^{(A)}=(M,M)$. Choose a point $x\in X$ satisfying conditions
(a)-(f) of Propositions~\ref{Prop:4.6.1},\ref{Prop:4.6.5} for
general $\xi\in\z(\m)$. Let $(H,\eta,V)$ be the determining triple
of $X$ at $x$ and $\widehat{X}=M_{G^{(A)}}(H\cap
G^{(A)},\eta_n,V/V^H)$. By Proposition~\ref{Prop:4.6.3},
$W_{G^{(A)},\widehat{X}}^{(\cdot)}=\{1\}$. Using
Proposition~\ref{Prop:5.2.1}, we see that $S^{(A)}$ is equivalent
to the $\g$-stratum of $x$.
\end{proof}
Now we obtain some restriction on $W_{G,X}^{(\cdot)}$, namely, we
check that $W_{G,X}^{(\cdot)}$ is large in the sense of the
following definition.
\begin{defi}\label{defi:5.2.4}
A subgroup $\Gamma\subset W(\g)$ is said to be {\it large} if for
any two roots $\alpha,\beta\in \Delta(\g) $ such that $\beta\neq
\pm\alpha, (\alpha,\beta)\neq 0$ there exists $\gamma\in
\R\alpha+\R\beta$ with $s_\gamma\in \Gamma$.
\end{defi}
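In rank 2 the definition unwinds completely; the following computation (an illustration under the notation above) treats type $A_2$:

```latex
% Type A_2: any two non-proportional roots are non-orthogonal, and
% any two of them span the whole plane containing all six roots:
\Delta(\g)=\{\pm\alpha,\pm\beta,\pm(\alpha+\beta)\},\qquad
(\alpha,\beta)\neq 0,\qquad \Delta(\g)\subset\R\alpha+\R\beta.
```

Hence a subgroup $\Gamma\subset W(\g)\cong S_3$ is large if and only if it contains at least one reflection.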
\begin{Cor}\label{Cor:5.2.5}
The subgroup $W_{G,X}^{(X_T)}\subset W(\g)$ is large.
\end{Cor}
\begin{proof}
Assume the converse. Choose $\alpha,\beta\in \Delta(\g) $ such that
$\beta\neq \pm\alpha, (\alpha,\beta)\neq 0$ but $s_\gamma\not\in
W_{G,X}^{(X_T)}$ for all $\gamma\in \Delta(\g)\cap
(\R\alpha+\R\beta)$. Put $M=Z_G(\ker\alpha\cap\ker\beta)$. Note that
$(M,M)=G^{(\alpha,\beta)}$. Let $x\in X$ satisfy conditions (a)-(f)
of Propositions \ref{Prop:4.6.1}, \ref{Prop:4.6.5} for general
$\xi\in\z(\m)$. Let $(H,\eta,V)$ be the determining triple of $X$ at
$x$. Put $\widehat{X}:=M_{G^{(\alpha,\beta)}}(H\cap
G^{(\alpha,\beta)},\eta_n,V/V^H)$. It follows from
Proposition~\ref{Prop:4.6.3} that
$W_{G^{(\alpha,\beta)},\widehat{X}}^{(\cdot)}$ contains no
reflection. Let $\widetilde{G}$ denote the simply connected covering
of $G^{(\alpha,\beta)}$. It is a simple simply connected group of
rank 2. Further, denote by $\widetilde{H}$ the connected normal
subgroup of $\widetilde{G}$ corresponding to $\h$. Put
$\widetilde{X}=M_{\widetilde{G}}(\widetilde{H},\eta_n,V/V^H)$. It is
a coisotropic variety. There is a natural morphism
$\widetilde{X}\rightarrow \widehat{X}$ satisfying the assumptions of
the fourth assertion of Lemma \ref{Lem:1.9}. Therefore the group
$W_{\widetilde{G},\widetilde{X}}^{(\cdot)}$ contains no reflection.
On the other hand, by Corollary~\ref{Cor:4.5.4}, the group
$W_{\widetilde{G},\widetilde{X}}^{(\cdot)}$ is generated by
reflections. Therefore
$W_{\widetilde{G},\widetilde{X}}^{(\cdot)}=\{1\}$. By
Proposition~\ref{Prop:5.2.1}, $\widetilde{G}$ is isomorphic to the
direct product of several copies of $\SL_2$. Since $\widetilde{G}$
is simple and of rank 2, this is absurd.
\end{proof}
Now let us describe large subgroups of $W(\g)$ for simple groups $G$
of types $A$--$E$.
Firstly, we consider the situation when $\g$ is simple and has type
$A,D,E$, in other words, when all elements of $\Delta(\g) $ are of
the same length.
Recall the classification of maximal proper root subsystems in
$\Delta(\g) $ (see~\cite{Dynkin}). We fix a system
$\alpha_1,\ldots,\alpha_r\in\Delta(\g) $ of simple roots. Let
$\alpha_0$ be the minimal (=lowest) root and $n_1,\ldots, n_r$
(uniquely determined) nonnegative integers satisfying
$\alpha_0+n_1\alpha_1+\ldots+n_r\alpha_r=0$. A proper root subsystem
$\Delta_0\subset\Delta(\g) $ is maximal iff it is $W(\g)$-conjugate
to one of the following root subsystems.
\begin{itemize}
\item[(a)] $\Span_\Z(\alpha_1,\ldots,\alpha_{i-1},\alpha_{i+1},\ldots,
\alpha_r)\cap\Delta(\g) $ for $n_i=1$.
\item[(b)]
$\Span_\Z
(\alpha_0,\alpha_1,\ldots,\alpha_{i-1},\alpha_{i+1},\ldots,
\alpha_r)\cap\Delta(\g) $ for prime $n_i$.
\end{itemize}
The number $n_i$ depends only on $\Delta_0$. We will call this
number the {\it characteristic} of $\Delta_0$.
For a proper subgroup $\Gamma\subset W(\g)$ let $\Delta_\Gamma$
denote the set of all $\alpha\in \Delta(\g) $ such that
$s_\alpha\in \Gamma$.
\begin{Prop}\label{Prop:5.2.6}
Let $\g$ be a simple Lie algebra of type $A,D,E$, $\mathop{\rm rk}\nolimits\g>1$, and
$\Gamma$ a proper subgroup in $W(\g)$. Then $\Gamma$ is large iff
$\Delta_\Gamma$ is a maximal proper root subsystem in
$\Delta(\g) $ of characteristic 1 or 2.
\end{Prop}
\begin{Lem}\label{Lem:5.2.9}
Let $\g$ be a simple Lie algebra of type $A,D,E$. Then
$\Delta_\Gamma$ is a root subsystem in $\Delta(\g)$ for any subgroup
$\Gamma\subset W(\g)$.
\end{Lem}
\begin{proof}
Let $\alpha,\beta\in \Delta(\g)$ with $\beta\neq\pm\alpha$. Since all
roots of $\Delta(\g) $ are of the same length, we see that
$\alpha+\beta\in \Delta(\g) $ (resp., $\alpha-\beta\in \Delta(\g) $)
iff $(\alpha,\beta)< 0$ (resp., $(\alpha,\beta)>0$).
We need to check that $\alpha\in \Delta_{\Gamma}$ implies
$-\alpha\in \Delta_\Gamma$ and that $\alpha,\beta\in
\Delta_\Gamma,\alpha+\beta\in \Delta(\g) $ imply $\alpha+\beta\in
\Delta_\Gamma$. The first implication follows directly from the definition of $\Delta_\Gamma$. To
prove the second one we note that $\alpha+\beta=s_{\alpha}\beta$
whenever $\alpha,\beta,\alpha+\beta\in\Delta(\g)$, while $s_{s_\alpha\beta}=s_\alpha s_\beta s_\alpha\in \Gamma$
provided $\alpha,\beta\in \Delta_\Gamma$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{Prop:5.2.6}]
The subgroup $\Gamma\subset W(\g)$ is large iff
\begin{itemize}
\item[(A)] $\{\alpha,\beta,\alpha+\beta\}\cap \Delta_\Gamma\neq\varnothing$
for all $\alpha,\beta\in\Delta(\g)$ such that
$\alpha+\beta\in\Delta(\g)$.
\end{itemize}
One checks directly that a maximal root subsystem
$\Delta_\Gamma\subset \Delta(\g)$ of characteristic 1 or 2
satisfies (A).
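This direct check can also be done by machine. The following Python sketch (ours; indices are 0-based in the code) verifies (A) for one characteristic-1 example: the subsystem $\Delta_\Gamma=\{\varepsilon_i-\varepsilon_j\,|\,i,j\in I\text{ or }i,j\not\in I\}$ of $A_3$ with $I=\{1,2\}$, as in the first row of Table~\ref{Tbl:5.2.8}.

```python
# Verify condition (A) for a maximal root subsystem of A_3.

def roots_a3():
    """All roots e_i - e_j (i != j) of A_3 as 4-tuples."""
    rs = []
    for i in range(4):
        for j in range(4):
            if i != j:
                v = [0] * 4
                v[i], v[j] = 1, -1
                rs.append(tuple(v))
    return rs

def satisfies_A(subsystem, roots):
    """Condition (A): {a, b, a+b} meets the subsystem whenever
    a, b, a+b are all roots."""
    rset = set(roots)
    for a in roots:
        for b in roots:
            s = tuple(x + y for x, y in zip(a, b))
            if s in rset and not ({a, b, s} & subsystem):
                return False
    return True

I = {0, 1}  # the index set I (0-based)
roots = roots_a3()
# e_i - e_j with support contained in I or disjoint from I.
delta_gamma = {r for r in roots
               if {k for k, x in enumerate(r) if x != 0} <= I
               or {k for k, x in enumerate(r) if x != 0}.isdisjoint(I)}
print(len(delta_gamma))               # 4 roots: +-(e0-e1), +-(e2-e3)
print(satisfies_A(delta_gamma, roots))  # True
```

The positive answer matches the pigeonhole argument: if $\alpha,\beta,\alpha+\beta$ are all roots, two of the three indices involved lie on the same side of the partition, so one of the three roots lies in $\Delta_\Gamma$.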
Now let $\Delta_\Gamma$ be a root subsystem of $\Delta(\g)$
satisfying (A). At first, assume that $\Delta_\Gamma$ is not
maximal. Let $\Delta_1$ be a maximal proper root subsystem of
$\Delta(\g) $ containing $\Delta_\Gamma$. Choose
$\alpha\in\Delta_1\setminus \Delta_\Gamma$. We see that
$\alpha+\beta\not\in\Delta(\g) $ for all $\beta\not\in\Delta_1$.
Otherwise
$\{\alpha,\beta,\alpha+\beta\}\cap\Delta_\Gamma=\varnothing$,
contradicting (A).
Analogously, $\alpha-\beta\not\in\Delta(\g)$. Therefore $\alpha\perp
\Delta\setminus\Delta_1$. Since the root system $\Delta$ is
irreducible, there is $\gamma\in \Delta$ such that
$(\alpha,\gamma)\neq 0, \gamma\not\perp \Delta\setminus\Delta_1$. By the above, any such $\gamma$ necessarily lies in $\Delta_\Gamma$.
Without loss of generality, we may assume that $(\alpha,\gamma)=-1$
whence $\alpha+\gamma\in \Delta$. Then, automatically,
$\alpha+\gamma\in \Delta_1\setminus\Delta_\Gamma$. It follows that
$\alpha+\gamma\perp\Delta\setminus\Delta_1$, which contradicts the
choice of $\gamma$.
It remains to show that the characteristic of $\Delta_\Gamma$ is
less than 3. Assume that
$\Delta_\Gamma=\Span_\Z\{\alpha_0,\ldots,\alpha_{i-1},\alpha_{i+1},\ldots,\alpha_r\}\cap\Delta(\g)
$, where $n_i>2$. Let $\pi_i^\vee$ denote the dual fundamental
weight corresponding to $\alpha_i$. The subset $\Delta_\Gamma\subset
\Delta(\g) $ coincides with the set of all $\alpha\in\Delta$ such
that $n_i$ divides $\pi_i^\vee(\alpha)$. So it is enough to check
that there are $\alpha,\beta\in \Delta(\g) $ such that
$\langle\pi_i^{\vee},\alpha\rangle=\langle\pi_i^{\vee},\beta\rangle=1$,
and $\alpha+\beta\in\Delta(\g) $. There is $\gamma\in \Delta(\g) $
with $\langle\pi_i^{\vee},\gamma\rangle=2$. Choose such an element
$\gamma=\sum_{j=1}^r m_j\alpha_j$ such that $\sum m_j$ is minimal
possible. One sets $\alpha:=\alpha_i,\beta:=\gamma-\alpha_i\in
\Delta(\g) $. Then
$\langle\pi_i^{\vee},\alpha\rangle=\langle\pi_i^{\vee},\beta\rangle=1$,
$\langle\pi_i^{\vee},\alpha+\beta\rangle=2$, and, since $n_i>2$, none
of $\alpha,\beta,\alpha+\beta$ lies in $\Delta_\Gamma$, contradicting
(A).
\end{proof}
\begin{Cor}\label{Cor:5.2.7}
Suppose $\g$ is a simple classical Lie algebra. Then $\Gamma\subset
W(\g)$ is large iff $\Delta_\Gamma$ is listed in
Table~\ref{Tbl:5.2.8}.
\end{Cor}
\begin{longtable}{|c|l|}\caption{Subsets $\Delta_\Gamma$
for large subgroups
$\Gamma\subset W(\g)$ when $\g$ is classical}\label{Tbl:5.2.8}\\\hline $\g$&$\Delta_\Gamma$\\\hline
$A_l,l\geqslant 2$& $\{\varepsilon_i-\varepsilon_j|i\neq j, i,j\in I\text{
or }i,j\not\in I\}, I\subsetneq \{1,\ldots,l+1\},
I\neq\varnothing$\\\hline $B_l,l\geqslant 3$&(a) $\{\pm
\varepsilon_i\pm\varepsilon_j|i\neq j, i,j\in I \text{ or }i,j\not\in
I\}\cup \{\pm\varepsilon_i| i\in I\},
I\subsetneq\{1,\ldots,l\}$\\&(b) $\{\pm
\varepsilon_i\pm\varepsilon_j|i\neq j, i,j\in I \text{ or }i,j\not\in
I\}\cup \{\pm\varepsilon_i| i\in \{1,2,\ldots,l\}\},
I\subsetneq\{1,\ldots,l\},I\neq\varnothing$\\&(c)
$\{\varepsilon_i-\varepsilon_j|i\neq j, i,j\in I \text{ or } i,j\not\in
I\}\cup \{\pm(\varepsilon_i+\varepsilon_j)| i\in I,j\not\in I\},
I\subset \{1,\ldots,l\}$
\\\hline $C_l,l\geqslant 2$&(a) $\{\pm \varepsilon_i\pm\varepsilon_j|i\neq j, i,j\in
I \text{ or }i,j\not\in I\}\cup \{\pm 2\varepsilon_i| i\in I\},
I\subsetneq\{1,\ldots,l\}$\\&(b) $\{\pm
\varepsilon_i\pm\varepsilon_j|i\neq j, i,j\in I \text{ or }i,j\not\in
I\}\cup \{\pm 2\varepsilon_i| i\in \{1,2,\ldots,l\}\},
I\subsetneq\{1,\ldots,l\}, I\neq\varnothing$\\&(c)
$\{\varepsilon_i-\varepsilon_j|i\neq j, i,j\in I \text{ or } i,j\not\in
I\}\cup \{\pm(\varepsilon_i+\varepsilon_j)| i\in I,j\not\in I\},
I\subset \{1,\ldots,l\}$ \\\hline $D_l,l\geqslant 3$& (a) $\{\pm
\varepsilon_i\pm\varepsilon_j|i\neq j, i,j\in I \text{ or }i,j\not\in I\},
I\neq\{1,\ldots,l\},\varnothing$\\&(b)
$\{\varepsilon_i-\varepsilon_j|i\neq j, i,j\in I \text{ or } i,j\not\in
I\}\cup \{\pm(\varepsilon_i+\varepsilon_j)| i\in I,j\not\in I\},
I\subset \{1,\ldots,l\}$
\\\hline
\end{longtable}
Note that some subsets $\Delta_\Gamma$ appear in
Table~\ref{Tbl:5.2.8} more than once.
\begin{proof}
For $\g$ of type $A_l$ or $D_l$ the required assertion stems
directly from Proposition~\ref{Prop:5.2.6}.
Suppose $\g\cong \sp_{2l},l\geqslant 2$. If $l=2$, then
$\Gamma\subset W(\g)$ is large iff $\Delta_\Gamma\neq \varnothing$.
All nonempty subsets $\Delta_\Gamma\subset\Delta(\sp_4) $ do appear
in Table~\ref{Tbl:5.2.8}. Now suppose $l>2$. Let $\Delta_0$ denote
the subset of all short roots in $\Delta(\g) $ and $W_0$ the
subgroup of $W(\g)$ generated by $s_\alpha,\alpha\in\Delta_0$. Note
that $W_0$ is the Weyl group of the root system $D_l$. By the
definition of a large subgroup, the subgroup $\Gamma_0$ generated by
$s_\alpha,\alpha\in \Delta_\Gamma\cap \Delta_0,$ is large in $W_0$.
If $\Delta_0\cap\Delta_\Gamma$ is of type (a) (see Table
\ref{Tbl:5.2.8}), then $\Gamma$ is large in $W(\g)$ iff
$\Delta_\Gamma$ contains a long root. If $\Delta_0\cap
\Delta_\Gamma$ is of type (b) or $\Delta_0\subset\Delta_\Gamma$,
then $\Gamma_0$ is large in $W(\g)$. Since $\Gamma\subset
N_{W(\g)}(\Gamma_0)$, we see that large subgroups in $W(\g)$ are
precisely those presented in Table \ref{Tbl:5.2.8}.
The proof for $\g\cong\so_{2l+1},l>2,$ follows easily from the
duality between the root systems $B_l,C_l$.
\end{proof}
\subsection{Examples of computation of Weyl
groups}\label{SUBSECTION_Weyl_computation} In this subsection we
classify pairs $(G,V)$, where $G$ is a simple algebraic group and
$V$ is a symplectic $G$-module such that $\mathop{\rm def}\nolimits_G(V)=\mathop{\rm rk}\nolimits G$ and
$W^{(\cdot)}_{G,V}\neq W(\g)$. The computation for $V\cong U\oplus
U^*$ (and, more generally, for $X=T^*(G*_HV)$) is made in \cite{Weyl},
Section 5, so here we consider only the case $V\not\cong U\oplus
U^*$.
\begin{Lem}\label{Lem:5.9}
Let $G$ be a simple group, $X:=M_G(H,\eta,V)$, where $\eta$ is
nilpotent, satisfy the equivalent conditions of Lemma
\ref{Lem:2.3.5}. If $s_\alpha\not\in W_{G,X}^{(\cdot)}$ for some
$\alpha\in \Delta(\g)$, then there exists a subalgebra $\s\subset \h$
such that $\s\sim_G \g^{(\alpha)}$ and
\begin{equation}\label{eq:5.9:1}\frac{\tr_{U\oplus
V}(h^2)}{\tr_\h(h^2)}=1-\frac{4}{\tr_\h(h^2)}.\end{equation} Here
$U:=\z_\g(\eta)/\h$ and $h$ is a coroot in $\s$.
\end{Lem}
\begin{proof}
By Corollary \ref{Cor:5.2.3}, $S^{(\alpha)}\rightsquigarrow_\g X$.
Equivalently, there is a subalgebra $\s\subset \h$ such that
$\s\sim_G\g^{(\alpha)}$ and $(\s,\C^2\oplus \C^2)\rightsquigarrow_\h
U\oplus V$. The last condition implies that the $\s$-modules
$\h/\s\oplus (\C^2)^{\oplus 2}$ and $U\oplus V$ differ by a trivial
summand. Comparing the traces of $h^2$ on these two modules, we get
the claim.
\end{proof}
Here is the main result of this subsection.
\begin{Prop}\label{Prop:5.10}
Let $G$ be a simple group and $V$ a symplectic $G$-module satisfying
the equivalent conditions of Lemma \ref{Lem:2.3.5} such that
$V\not\cong U\oplus U^*$ for any $G$-module $U$. Then
$W_{G,V}^{(\cdot)}\neq W(\g)$ iff $V$ is contained in Table
\ref{Tbl:5.11}. The group $W_{G,V}^{(\cdot)}$ is presented in the fourth
column of the table.
\end{Prop}
\begin{longtable}{|c|c|c|c|}
\caption{$G$-modules $V$ such that $W_{G,V}^{(\cdot)}\neq
W(\g)$}\label{Tbl:5.11}\\\hline
N&$\g$&$V$&$W_{G,V}^{(\cdot)}$\\\endfirsthead\hline
N&$\g$&$V$&$W_{G,V}^{(\cdot)}$\\\endhead\hline 1&$\g=\sl_6$&$
V=V(\pi_3)\oplus V(\pi_1)^{\oplus 2}\oplus V(\pi_5)^{\oplus
2}$&$A_1\times A_3$\\\hline 2& $\g=\sp_4$&$ V=V(\pi_1)\oplus
V(\pi_2)^{\oplus 2}$&$C_1\times C_1$\\\hline 3&$\g=\sp_6$&$
V=V(\pi_3)\oplus V(\pi_1)^{\oplus 2}$&$C_1\times C_2$\\\hline 4&
$\g=\so_{11}$&$ V=V(\pi_5)\oplus V(\pi_1)^{\oplus 4}$&$B_1\times
B_4$\\\hline 5& $\g=\so_{13}$&$ V=V(\pi_6)\oplus V(\pi_1)^{\oplus
2}$&$B_2\times B_4$\\\hline
\end{longtable}
In the fourth column we indicate the type of a root subsystem in
$\Delta(\g)$ such that the reflections corresponding to its roots
generate $W^{(\cdot)}_{G,V}$. By $B_1$ (resp., $C_1$) we mean a root
subsystem containing two opposite short (resp., long) roots in $B_n$
(resp., $C_n$). Root subsystems indicated in column 4 are determined
uniquely up to $W(\g)$-conjugacy.
\begin{proof}[Proof of Proposition \ref{Prop:5.10}]
By Corollary \ref{Cor:5.2.3}, $S^{(\alpha)}\rightsquigarrow_\g V$
for some $\alpha\in\Delta(\g)$. There is an $\SL_2$-stable prime
divisor $D'$ on $\C^2\oplus \C^2$ such that $m_{\SL_2}(D')=2$.
Applying the Luna slice theorem, we see that there is a prime
$G$-stable divisor $D$ on $V$
such that $m_G(D)<\dim G$. All $G$-modules $V$ with $m_G(V)=\dim G$ possessing
such a divisor $D$ were classified by Knop and Littelmann,
\cite{KL}. All such symplectic modules $V$ such that $V\not\cong
U\oplus U^*$ are presented in Table \ref{Tbl:5.11}. Let us show that
for these modules the inequality $W_{G,V}^{(\cdot)}\neq W(\g)$ does
hold.
{\it Case 1. $\g=\sl_6, V=V(\pi_3)\oplus V(\pi_1)^{\oplus 2}\oplus
V(\pi_5)^{\oplus 2}$.} We can consider $V$ as a symplectic
$\widetilde{G}:=\SL_6\times\SL_2$-module, where $\SL_2$ acts on
$V(\pi_1)^{\oplus 2}\oplus V(\pi_5)^{\oplus 2}$ as on $\C^2\otimes
(V(\pi_1)\oplus V(\pi_5))$. This module has a finite stabilizer in
general position and is coisotropic, see \cite{Knop7},\cite{cois}.
The Weyl group $W_{\widetilde{G},V}^{(\cdot)}$ was computed in
\cite{Knop7}, Table 12; it corresponds to the root system $A_1\times
A_1\times A_3$. By the discussion preceding Lemma \ref{Lem:1.9},
$W^{(\cdot)}_{\widetilde{G},V}=W_{\SL_2,V}^{(\cdot)}\times
W_{G,V}^{(\cdot)}$. It follows that $W_{G,V}^{(\cdot)}=A_1\times
A_3$.
{\it Case 2. $\g=\sp_4, V=V(\pi_1)\oplus V(\pi_2)^{\oplus 2}$.} We
can consider $V$ as a symplectic
$\widetilde{G}:=\Sp_4\times\C^\times$-module. Here $\C^\times$ acts
trivially on $V(\pi_1)$ and as $\SO_2$ on $V(\pi_2)^{\oplus 2}\cong
\C^2\otimes V(\pi_2)$. Again, this module is coisotropic and has a
finite stabilizer in general position. Using tables obtained in
\cite{Knop7}, we see that $W_{\widetilde{G},V}^{(\cdot)}\cong
A_1\times A_1$. But
$W_{G,V}^{(\cdot)}=W_{\widetilde{G},V}^{(\cdot)}$, see assertion 3
of Lemma \ref{Lem:1.9}. Using Lemma \ref{Lem:5.9}, we see that
$s_\alpha\in W_{G,V}^{(\cdot)}$ for all long roots $\alpha$.
{\it Case 3. $\g=\sp_6, V=V(\pi_3)\oplus V(\pi_1)^{\oplus 2}$.} One
argues exactly as in the previous case.
Before proceeding to the remaining two cases let us make some
remarks.
Firstly, $s_\alpha\in W_{G,V}^{(\cdot)}$ for all short roots
$\alpha$. One checks this using Lemma \ref{Lem:5.9}: the fraction on
the left-hand side of (\ref{eq:5.9:1}), which is the index of the
$G$-module $V$, can be computed using Table 1 of \cite{AEV}.
Since $s_\alpha\in W_{G,V}^{(\cdot)}$ for any short root $\alpha$,
it follows from Corollary~\ref{Cor:5.2.5}, Proposition
\ref{Prop:5.2.6} that $W_{G,V}^{(\cdot)}$ is either the whole Weyl
group $W(\g)$ or is maximal among all proper subgroups generated by
reflections. The latter holds iff $\C[C_{G,V}]\cong \z(\C[V]^G)$
(the center of the Poisson algebra $\C[V]^G$) contains two linearly
independent elements of degree 4.
{\it Case 4. $\g=\so_{11}, V=V(\pi_5)\oplus V(\pi_1)^{\oplus 4}$.}
Note that $\Sp(V)^G\cong \Sp_4$. We consider the $\Z^2$-grading on
$\C[V]$ induced by the degrees with respect to $V(\pi_5)$ and
$V(\pi_1)^{\oplus 4}$. By \cite{Schwarz}, $\C[V]^G$ is freely
generated by a 21-dimensional subspace $U\subset \C[V]^G$ such that
\begin{enumerate}
\item $U$ is an $\Sp_4$-submodule in $\C[V]^G$.
\item There is the decomposition $U=U_1\oplus U_2\oplus U_3\oplus
U_4$, where $U_1\cong S^2\C^4, U_2\cong \bigwedge^2\C^4, U_3\cong
\C^4, U_4\cong\C$ (isomorphisms of $\Sp_4$-modules).
\item $U_i, i=\overline{1,4},$ is homogeneous with respect to the
$\Z^2$-grading on $\C[V]$. The degrees of $U_1,U_2,U_3,U_4$ are
$(2,0),(2,2),(1,2),(0,4)$, respectively.
\end{enumerate}
Let $P_1,P_2$ denote the Poisson bivectors on $V(\pi_5)$ and
$V(\pi_1)^{\oplus 4}$ respectively. Now let $f_1,f_2$ be homogeneous
elements of $\C[V]$ of bidegrees, say $(d_1,d_1'),(d_2,d_2')$. Then
$\{f_1,f_2\}=\langle P_1,df_1\wedge df_2\rangle+\langle
P_2,df_1\wedge df_2\rangle$. The bidegrees of the first and the
second summand are, respectively,
$(d_1+d_2-2,d_1'+d_2'),(d_1+d_2,d_1'+d_2'-2)$. For instance, if
$f_1\in \C[V(\pi_5)]$, then $\{f_1,f_2\}$ is homogeneous of bidegree
$(d_1+d_2-2,d_2')$.
Let us check that $U_4\subset \z(\C[V]^G)$. Since $V(\pi_5)$ and
$V(\pi_1)^{\oplus 4}$ are skew-orthogonal, we get $\{U_4,U_1\}=0$.
Suppose $\{U_4,U_2\}\neq \{0\}$. Since $U_4\subset
\C[V]^{G\times\Sp_4}$, we see that $\{U_4,U_2\}$ is isomorphic (as
an $\Sp_4$-module) to a submodule of $\bigwedge^2\C^4$. But
$\{U_4,U_2\}$ consists of homogeneous elements of bidegree $(2,4)$
whence $\{U_4,U_2\}\subset U_1U_4+U_3^2$. Both summands are
isomorphic to $S^2\C^4$. This contradicts $\{U_4,U_2\}\neq \{0\}$.
Finally, the bidegree of $\{U_3,U_4\}$ equals $(1,4)$ whence
$\{U_3,U_4\}=0$.
Let $q$ denote a homogeneous element in $\C[\g]^G$ corresponding to
an invariant nondegenerate form. Then $\mu_{G,V}^*(q)$ is a
homogeneous element of $\C[V]^G$ of degree 4. It remains to check
that $\mu_{G,V}^*(q)\not\in U_4$. One checks easily that
$\im\mu_{G,V(\pi_1)^{\oplus 4}}$ contains an element $\xi$ such that
$(\xi,\xi)\neq 0$. If $v\in V(\pi_1)^{\oplus 4}\hookrightarrow V$ is
such that $\mu_{G,V}(v)=\xi$, then $q(\mu_{G,V}(v))\neq 0, f(v)=0$
for any $f\in U_4$. The last equality holds because $U_4\subset \C[V(\pi_5)]$.
By Corollary \ref{Cor:5.2.7}, $W_{G,V}^{(\cdot)}$ corresponds
either to $B_1\times B_4$ or to $B_2\times B_3$. Thanks to
Corollary \ref{Cor:5.2.3}, it remains to prove that
$S^{(A)}\not\rightsquigarrow_\g V$, where
$A=\{\varepsilon_1-\varepsilon_2,\varepsilon_3-\varepsilon_4\}$. If
$S^{(A)}\rightsquigarrow_\g V$, then
\begin{equation}\label{eq:5.9:2}\dim
V^{\g^{(A)}}+\dim\g-\dim\n_\g(\g^{(A)})= \dim V-8.\end{equation} But
$\dim V(\pi_1)^{\g^{(A)}}=3, \dim V(\pi_5)^{\g^{(A)}}=8$ (recall
that the weight system of $V(\pi_5)$ consists of all weights of the
form $\pm\frac{1}{2}(\varepsilon_1\pm\ldots\pm\varepsilon_5)$
without multiplicities;
$V(\pi_5)_\lambda\subset V(\pi_5)^{\g^{(A)}}$ iff $(\lambda,\varepsilon_1-\varepsilon_2)=
(\lambda,\varepsilon_3-\varepsilon_4)=0$),
$\dim\n_\g(\g^{(A)})=15$. So (\ref{eq:5.9:2}) does not hold.
{\it Case 5. $\g=\so_{13}, V=V(\pi_6)\oplus V(\pi_1)^{\oplus 2}$.}
By \cite{Schwarz}, the algebra $\C[V]^G$ is freely generated by 12
elements $f_{(2,0,0)},f_{(1,1,0)},f_{(0,2,0)}, f_{(0,0,4)},
f_{(0,0,8)},f_{(1,0,4)},f_{(0,1,4)},f_{(2,0,4)},f_{(1,1,4)},f_{(0,2,4)},
f_{(1,1,2)}, f_{(1,1,6)}$, where the lower index indicates the
grading with respect to the decomposition $V=V(\pi_1)\oplus
V(\pi_1)\oplus V(\pi_6)$. Note that $\Sp(V)^G\cong \SL_2$. The
elements $f_{(0,0,4)},f_{(0,0,8)},f_{(1,1,2)},f_{(1,1,6)}$ are
$\SL_2$-invariant, $\Span_\C (f_{(2,0,0)},f_{(1,1,0)},f_{(0,2,0)}),
\Span_\C( f_{(2,0,4)},f_{(1,1,4)},f_{(0,2,4)})\cong S^2\C^2,
\Span(f_{(1,0,4)},f_{(0,1,4)})\cong \C^2$.
Analogously to the previous case (i.e., using the grading and the
$\SL_2$-module structure of $\C[V]^G$), we check that $f_{(0,0,4)},
f_{(0,0,8)}\in \z(\C[V]^G)$. Let $\mu_1,\mu_2,\mu$ denote the moment
maps for the actions $G:V(\pi_6)$, $G:V(\pi_1)^{\oplus 2}$, $G:V$. Clearly,
$\mu=\mu_1+\mu_2$. Further, put $f_2(\xi)=\tr(\xi^2),
f_4(\xi)=\tr(\xi^4), \xi\in\g$ (the traces are taken in the
tautological $\so_{13}$-module). We have shown that
$\mu_1^*(f_2),\mu_1^*(f_4)\in \z(\C[V]^G)$. On the other hand,
$\mu^*(f_2),\mu^*(f_4)\in \z(\C[V]^G)$. Let us check that
$\mu_1^*(f_2),\mu_1^*(f_4),\mu^*(f_2),\mu^*(f_4)$ are algebraically
independent. Analogously to the previous case one checks that
$\mu_1^*(f_2),\mu^*(f_2)$ are independent. It remains to check that
the equality
\begin{equation}\label{eq:5.9:3}
\begin{split}
&a\tr(\xi_1^4)+b\tr((\xi_1+\xi_2)^4)+c\tr(\xi_1^2)^2+d\tr(\xi_1^2)\tr((\xi_1+\xi_2)^2)+
e\tr((\xi_1+\xi_2)^2)^2=0,\\& \forall \xi_i\in \overline{\im\mu_i},
i=1,2,
\end{split}
\end{equation}
implies $a=b=0$. The isotropy subalgebras in general position for
$V(\pi_6),V(\pi_1)^{\oplus 2}$ are
$\g^{(\alpha_1,\alpha_2,\alpha_4,\alpha_5)},\g^{(\alpha_2,\ldots,\alpha_6)}$,
respectively. By Lemma \ref{Lem:2.3.11},
$\overline{\im\mu_1}=\overline{G\Span_\C(\varepsilon_1+\varepsilon_2+\varepsilon_3,
\varepsilon_4+\varepsilon_5+\varepsilon_6)},$ $
\overline{\im\mu_2}=\overline{G\Span_\C(\varepsilon_1)}$. Since
$\tr(\xi_1^4)$ and $\tr(\xi_1^2)^2$ are not proportional, we get
$a+b=0$. Writing down the terms of (\ref{eq:5.9:3}) of bidegree
(3,1) with respect to $(\xi_1,\xi_2)$, we get
$$4b\tr(\xi_1^3\xi_2)+2d\tr(\xi_1^2)\tr(\xi_1\xi_2)+4e\tr(\xi_1^2)\tr(\xi_1\xi_2)=0.$$
Putting
$\xi_1=\varepsilon_1+\varepsilon_2+\varepsilon_3+i(\varepsilon_4+\varepsilon_5+\varepsilon_6),
\xi_2=\varepsilon_1$, we see that $b=0$.
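The final evaluation can be spelled out (an added verification; the traces are computed on the eigenvalues of the chosen elements in the tautological module):

```latex
% With \xi_1=\varepsilon_1+\varepsilon_2+\varepsilon_3
%          +i(\varepsilon_4+\varepsilon_5+\varepsilon_6)
% and \xi_2=\varepsilon_1, the eigenvalues of \xi_1 in the tautological
% module are \pm 1,\pm 1,\pm 1,\pm i,\pm i,\pm i,0, so
\[
\tr(\xi_1^2)=2(1+1+1+i^2+i^2+i^2)=0,\qquad
\tr(\xi_1^3\xi_2)=1^3\cdot 1+(-1)^3\cdot(-1)=2.
\]
% Substituting into the bidegree (3,1) identity kills the terms
% containing \tr(\xi_1^2) and leaves 4b\cdot 2=0, i.e. b=0.
```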
To prove that the group $W_{G,V}^{(\cdot)}$ has the form indicated
in Table \ref{Tbl:5.11} it is enough to check that
$S^{(A)}\not\rightsquigarrow_\g V$ for
$A=\{\varepsilon_1-\varepsilon_2,\varepsilon_3-\varepsilon_4,\varepsilon_5-\varepsilon_6\}$.
This is done analogously to the previous case.
\end{proof}
\section{Fibers of $\widehat{\psi}_{G,X}$ and untwisted
varieties}\label{SECTION_untwisted} Throughout this section $G,X$
are as in the previous one.
The goal of this section is to prove Theorem \ref{Thm:0.5} and
establish some examples of untwisted varieties. Subsection
\ref{SUBSECTION_Utw1} contains some technical results used in the
proof of Theorem \ref{Thm:0.5}. The proof itself is given in
Subsection \ref{SUBSECTION_Utw2}. In Subsection
\ref{SUBSECTION_Utw3} we describe some classes of untwisted
Hamiltonian varieties. We state a result by Knop that the cotangent
bundle of an affine variety is untwisted and show that any
symplectic module is untwisted. Finally, in Subsection
\ref{SUBSECTION_counterexamples} we give two counterexamples: of a
Hamiltonian variety not satisfying (Irr) and of a conical
coisotropic model variety not satisfying (Utw2). The former
counterexample is due to F. Knop.
\subsection{Reducedness of fibers of $\widehat{\psi}_{G,X}$}\label{SUBSECTION_Utw1}
\begin{Prop}\label{Prop:3.1}
Let $\xi\in\a_{G,X}^{(\cdot)}$ be such that
$(W_{G,X}^{(\cdot)})_\xi=\{1\}$. Then the fiber
$\widehat{\psi}_{G,X}^{-1}(\pi_{W_{G,X}^{(\cdot)},\a_{G,X}^{(\cdot)}}(\xi))$
is reduced.
\end{Prop}
\begin{proof}
We preserve the notation of Proposition \ref{Prop:4.6.1} and Remark
\ref{Rem:2.8} and put $\widehat{G}=M$. The image of $\xi$ in
$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is a smooth point. Thanks to
Theorem \ref{Thm:4.0.1}, the schematic fiber of interest is a local
complete intersection. So to verify that this fiber is reduced it is
enough to prove that it is generically reduced (see, for example,
\cite{Eisenbud}, Propositions 18.13,18.15). In other words, we need
to show that $\widehat{\psi}_{G,X}$ is smooth at any point $x\in X$
satisfying conditions (a)-(e) of Proposition \ref{Prop:4.6.1} for
any irreducible component $Z$. By Remark \ref{Rem:2.8},
$W_{M,X_M}^{(\cdot)}=\{1\}$. Using the commutative diagram of Remark
\ref{Rem:2.8}, we see that it is enough to prove the proposition in
the case when $\xi=0$ and $X=M_G(H,\eta,V)$, where $\eta$ is
nilpotent, $\mathop{\rm cork}\nolimits_G(X)=0$, and $W_{G,X}^{(\cdot)}=\{1\}$. In this
case $\a_{G,X}^{(\cdot)}/W^{(\cdot)}_{G,X}\cong X\quo G$. Let
$X_L,L,G_0,X_0,L_0$ be as in Proposition \ref{Prop:4.2.2}. An
element of $N_G(L_0)$ acting trivially on $\a_{G,X}^{(X_L)}$ lies in
$Z_G(\a_{G,X}^{(X_L)})=L$. Thus (see the discussion preceding Lemma
\ref{Lem:1.9})
$W_{G_0,X_0}^{(\cdot)}/W_{G^\circ_0,X_0}^{(\cdot)}\cong
G_0/G_0^\circ$. So $G_0$ is connected. There is a point $y\in X_0$
of the form $[g,0]$; we may assume that $g=e$. It follows directly
from Example \ref{Ex:2.1.6} that $X_0\cong
M_{G_0}(N_H(L_0)/L_0,\eta,V^{L_0})$. Note that $\eta$ is a nilpotent
element of $ \g^{L_0}$ so $\eta\in \g^{L_0}\cap\lfr_0^\perp\cong
\g_0$. By assertion 1 of Proposition \ref{Prop:4.2.2}, it is enough
to check that the fiber $\pi_{G_0,X_0}^{-1}(0)$ is generically
reduced. Replacing $(G,X)$ with $(G_0,X_0)$ we may assume, in
addition, that $X$ satisfies the equivalent conditions of Lemma
\ref{Lem:2.3.5}. It follows that $X$ satisfies condition (*) of
Proposition \ref{Prop:5.2.1}.
We may replace $G$ with a covering and assume that $G=T_0\times
H^\circ$, where $T_0$ is a torus. Further, by assertion 4 of Lemma
\ref{Lem:1.9}, $W_{G,\widetilde{X}}^{(\cdot)}=\{1\}$, where
$\widetilde{X}:=M_G(H^\circ,\eta,V)$. So we may replace $X$ with
$\widetilde{X}$ and assume that $H$ is connected. Since in this case
$X=T^*(T_0)\times V$, we reduce to the case $H=G,X=V$. Changing $G$
by a covering again, we may assume that $G\cong (G,G)\times Z$,
where $Z$ is a torus.
Recall that $\a_{G,V}\cong V\quo G$. The required claim will follow
if we show that the zero fibers of the morphisms
$\pi_{(G,G),V},\pi_{Z,V\quo (G,G)}$ are reduced. For the former
morphism this stems easily from the decomposition $V\cong
V^{(G,G)}\oplus\bigoplus_{i=1}^k V_i$. Put, for brevity,
$U_1:=V^{(G,G)},U_2:=\bigoplus_{i=1}^k V_i$. Note that
$\Sp(U_2)^{(G,G)}$ is a torus of dimension $k$ acting trivially on
$U_2\quo (G,G)$. So it remains to prove that $\pi_{Z,U_1}^{-1}(0)$
is reduced. Since $\mathop{\rm cork}\nolimits_G(V)=0$, we have $\dim V=4k+2\dim Z$. It
follows that $\dim U_2=2\dim Z$. Further, by the above, $k+\dim
Z=\dim V\quo G=\dim U_1\quo Z+\dim U_2\quo (G,G)$, whence $\dim
U_1\quo Z=\dim Z$. Since $\C(U_1)^Z=\Quot(\C[U_1]^Z)$ (see
\cite{alg_hamil}, Theorem 1.2.9, for the proof in the general case),
it follows that $Z$ acts on $U_1$ locally effectively. Thus the
weight system of $Z$ in $U_1$ coincides with
$\lambda_1,\ldots,\lambda_r,-\lambda_1,\ldots,-\lambda_r$, where
$\lambda_1,\ldots,\lambda_r$ form a basis in $\z^*$. Now the claim
is easy.
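A sketch of the final step (added here; the coordinates $x_i,y_i$ on the weight spaces are our notation):

```latex
% Choose coordinates x_1,\ldots,x_r,y_1,\ldots,y_r on U_1 in which Z
% acts with weights \lambda_1,\ldots,\lambda_r,
% -\lambda_1,\ldots,-\lambda_r. Since the \lambda_i form a basis of
% \z^*, a monomial x^ay^b is Z-invariant iff a=b, so
\[
\C[U_1]^Z=\C[x_1y_1,\ldots,x_ry_r],
\]
% and the zero fiber of \pi_{Z,U_1} is cut out by
% x_1y_1=\ldots=x_ry_r=0. The ideal (x_1y_1,\ldots,x_ry_r) is the
% intersection of the prime ideals (z_1,\ldots,z_r) with
% z_i\in\{x_i,y_i\}, hence radical, so the fiber is reduced.
```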
\end{proof}
\begin{Prop}\label{Prop:3.3}
Suppose $0\in\im\psi_{G,X}$, so that $0\in\a_{G,X}^{(X_L)}$. Let
$s\in W_{G,X}^{(\cdot)}$ be a reflection. Put
$\Gamma_s:=(\a_{G,X}^{(\cdot)})^s$ (the fixed point hyperplane of
$s$), $D_s=\pi_{W_{G,X}^{(\cdot)},\a_{G,X}^{(\cdot)}}(\Gamma_s)$.
Let $\widetilde{Z}$ be an irreducible component of
$\widehat{\psi}_{G,X}^{-1}(D_s)$. Let $\xi$ be a general point in
$\Gamma_s$, $M:=Z_G(\xi)$, and $x\in \widetilde{Z}$ satisfy
conditions (a)-(f) of Propositions \ref{Prop:4.6.1},
\ref{Prop:4.6.5}. In the notation of those propositions put
$\widehat{G}=T_0(M,M)$ and let $\widehat{X}$ be as in (e) of
Proposition \ref{Prop:4.6.1}. Then the multiplicity of
$\widetilde{Z}$ in $\widehat{\psi}_{G,X}^{-1}(D_s)$ equals 1 or 2
and the latter holds iff
$W_{\widehat{G},\widehat{X}}^{(\cdot)}=\{1\}$.
\end{Prop}
\begin{proof}
We use the notation of Propositions
\ref{Prop:4.6.1},\ref{Prop:4.6.5} and Remark \ref{Rem:2.8}. From
the choice of $M$ it follows that $s\in
W_{\widehat{G},X_M}^{(X_L)}$. Put
$\Gamma'_s=(\a_{\widehat{G},X_M}^{(X_L)})^s,$ and let $D_s'$ be the
image of $\Gamma_s'$ in
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}/
W_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}$. Let
$\widetilde{Z}_M$ be an irreducible component of $\widetilde{Z}\cap
X_M$ containing $x$. Also $\widetilde{Z}_M$ is an irreducible
component of
$\widehat{\psi}_{\widehat{G},X_M}^{-1}(\pi_{W_{\widehat{G},X_M}^{(X_L)},\a_{\widehat{G},X_M}^{(X_L)}}(\Gamma_s'))$.
Let $\widehat{Z}'$ denote an irreducible component of
$\widehat{\psi}^{-1}_{\widehat{G},\widehat{X}'}(D_s')$ containing a
connected component of $O\cap \widetilde{Z}_M$, where $O$ is as
in the proof of Proposition \ref{Prop:4.6.3}. Clearly,
$\widehat{Z}'=\widehat{Z}\times V^H$ for some subvariety
$\widehat{Z}\subset \widehat{X}$, which is an irreducible component
of $\widehat{\psi}_{\widehat{G},\widehat{X}}^{-1}(D_s')$. From the
commutative diagram of Remark \ref{Rem:2.8} one deduces that
precisely one of the following possibilities holds:
\begin{enumerate}
\item $W_{\widehat{G},\widehat{X}}^{(\cdot)}\cong\Z_2$.
The multiplicity of $\widetilde{Z}$ in
$\widehat{\psi}_{G,X}^{-1}(D_s)$ equals the multiplicity of
$\widehat{Z}$ in
$\widehat{\psi}_{\widehat{G},\widehat{X}}^{-1}(D_s')$.
\item $W_{\widehat{G},\widehat{X}}^{(\cdot)}=\{1\}$. By Proposition
\ref{Prop:3.1}, the multiplicity of $\widehat{Z}$ in
$\widehat{\psi}_{\widehat{G},\widehat{X}}^{-1}(D_s')$ is one.
Further, the multiplicity of $\widetilde{Z}$ in
$\widehat{\psi}_{G,X}^{-1}(D_s)$ is 2.
\end{enumerate}
It remains to consider the first possibility. Note that, by
definition of $M$, one gets $\Gamma_s\subset \z(\m)$ whence
$\Gamma_s'\subset \z(\widehat{\g})$. Since
$W_{\widehat{G},\widehat{X}}^{(\cdot)}\neq \{1\}$, it follows that
$\dim\a_{\widehat{G},\widehat{X}}^{(\cdot)}\cap
[\widehat{\g},\widehat{\g}]=1$. Further, note that $T_0$ acts
trivially on $\widehat{X}_{\widehat{L}}$. Since
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}=\z(\widehat{\lfr})\cap\t_0^\perp$
and $\t_0$ projects surjectively to $\z(\widehat{\g})$ (recall that
$\widehat{\g}=\t_0+[\m,\m]$), we see that
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}\cap\z(\widehat{\g})=\{0\}$.
Therefore
$\a_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}\subset
[\widehat{\g},\widehat{\g}],\dim\a_{\widehat{G},\widehat{X}}^{(\cdot)}=1,
Z(\widehat{G})^\circ\subset T_0$. From the last inclusion it follows
that $Z(\widehat{G})^\circ$ acts trivially on $\widehat{X}$.
Replacing $(G,X)$ with $((\widehat{G},\widehat{G}),\widehat{X})$, we
reduce the problem to the proof of the following claim.
\begin{itemize}
\item[(**)] Suppose that $X=M_G(H,\eta,V)$, $G$ is semisimple, $\eta$ is nilpotent,
$\mathop{\rm cork}\nolimits_G(X)=0$, $\dim\a_{G,X}^{(\cdot)}=1,
W_{G,X}^{(\cdot)}=\{1,s\}$, where $s$ is a reflection.
Then the fiber
$\widehat{\psi}_{G,X}^{-1}(0)$ is reduced.
\end{itemize}
As in the proof of Proposition \ref{Prop:3.1}, we see that
$\widehat{\psi}_{G,X}=\pi_{G,X}$. So it is enough to check that
$\pi_{H,U\oplus V}^{-1}(0)$ is reduced, where
$U=(\z_\g(\eta)/\h)^*$. Note that $\C[U\oplus V]^H$ is generated by
an element of degree 4. Recall that $\eta\in (U^*)^H\subset
\C[U\oplus V]^H$ is of degree 4. So if $\eta\neq 0$, we are done. If
$\g\neq\h$, then there is an element $q\in\C[\g/\h]^H$
corresponding to an $H$-invariant quadratic form on $\g/\h\cong
\h^{\perp}$. It has degree 4. Any fiber of $q$ is reduced, since
$\dim\g/\h>1$. Therefore it remains to consider the case $H=G,X=V$.
The reducedness of fibers in this case follows from the observation
that a homogeneous generator of $\C[V]^G$ is irreducible.
\end{proof}
\begin{Prop}\label{Prop:3.4}
Again, we keep the notation of Proposition \ref{Prop:4.6.1} and
Remark \ref{Rem:2.8}. Suppose $X$ is untwisted and $0\in
\im\widehat{\psi}_{G,X}$. Let $x$ be a point satisfying conditions
(a)-(d) of Proposition \ref{Prop:4.6.1} for a point $\xi_0\in
\a_{G,X}^{(X_L)}$. Then the Hamiltonian $\widehat{G}$-variety
$\widehat{X}$ is untwisted and
$W_{\widehat{G},\widehat{X}}^{(\widehat{X}_{\widehat{L}})}=(W_{G,X}^{(X_L)})_{\xi_0}$.
\end{Prop}
\begin{proof}
At first, we consider the case when $\widehat{G}=M$. Let $s$ be a
reflection lying in $(W_{G,X}^{(X_L)})_{\xi_0}$. Let $\Gamma_s,D_s$
have the same meaning as in Proposition \ref{Prop:3.3} and $D_s'$
denote the image of $\Gamma_s$ in
$\a_{M,\widehat{X}}^{(\widehat{X}_L)}/W_{M,\widehat{X}}^{(\widehat{X}_L)}$
(we write $\widehat{X}_L$ instead of $\widehat{X}_{\widehat{L}}$
because $L=\widehat{L}$). By Proposition \ref{Prop:3.1}, it is
enough to show that $\widehat{\psi}_{M,\widehat{X}}^{-1}(D'_s)$ is
reduced. Note that the last scheme is non-empty because
$\widehat{\psi}_{M,\widehat{X}}(x)\in D_s'$. Choose a component
$\widehat{Z}$ of $\widehat{\psi}_{M,\widehat{X}}^{-1}(D'_s)$. Note
that $\widehat{\psi}_{G,X}(\widehat{Z}\cap O)\subset D_s$. Choose a
component $Z$ of $\widehat{\psi}_{G,X}^{-1}(D_s)$ containing a
connected component of $\widehat{Z}\cap O$. Since $\widehat{G}=M$,
we see that the morphism
$\a_{M,X_M}^{(X_L)}/W_{M,X_M}^{(X_L)}\rightarrow
\a_{\widehat{G},X_M}^{(X_L)}/W_{\widehat{G},X_M}^{(X_L)}$ from the
commutative diagram of Remark \ref{Rem:2.8} can be inverted. So one
can consider the morphism $\widehat{\psi}:\widehat{X}\rightarrow
\a_{G,X}^{(X_L)}/W_{G,X}^{(X_L)}$ from this diagram. It follows from
the diagram that the multiplicities of $\widehat{Z}$ in
$\widehat{\psi}^{-1}(D_s)$ and of $Z$ in
$\widehat{\psi}_{G,X}^{-1}(D_s)$ coincide. By (Utw2) the latter is
$1$. It follows that the multiplicity of $\widehat{Z}$ in
$\widehat{\psi}_{M,\widehat{X}}^{-1}(D_s')$ is 1 and that the
morphism
$\a_{M,\widehat{X}}^{(\widehat{X}_L)}/W_{M,\widehat{X}}^{(\widehat{X}_L)}\rightarrow
\a_{G,X}^{(X_L)}/W_{G,X}^{(X_L)}$ is unramified along $D_s'$. The
latter is equivalent to $s\in W_{M,\widehat{X}}^{(\widehat{X}_L)}$.
Since $W_{G,X}^{(X_L)}$ is generated by reflections, so is
$(W_{G,X}^{(X_L)})_{\xi_0}$. This completes the proof when
$\widehat{G}=M$.
We proceed to the general case.
\begin{Lem}\label{Lem:3.6}
Let $X_1,X_2$ be Hamiltonian $G$-varieties and
$\varphi:X_1\rightarrow X_2$ a Hamiltonian $G$-morphism. If $X_2$ is
untwisted, then so is $X_1$ and a natural morphism
$\varphi_0:C_{G,X_1}\rightarrow C_{G,X_2}$ induced by $\varphi$ is
\'{e}tale.
\end{Lem}
\begin{proof}[Proof of Lemma \ref{Lem:3.6}]
The morphism $\varphi_0$ is finite and dominant. The morphism
$\widetilde{\psi}_{G,X_2}\circ\varphi$ is smooth in codimension 1.
Therefore $\varphi_0$ is \'{e}tale in codimension 1. Since
$C_{G,X_2}$ is smooth, we can apply the Zariski-Nagata theorem on
the purity of branch locus. We see that $C_{G,X_1}$ is smooth and
$\varphi_0$ is \'{e}tale.
\end{proof}
Changing $M$ by a covering, we may assume that $M=\widehat{G}\times
T_0$, where $T_0$ is a torus. There is a natural Hamiltonian
morphism $\rho:T^*(T_0)\times \widehat{X}\twoheadrightarrow
M_M(H,\eta,V)$. By Lemma \ref{Lem:3.6}, $T^*(T_0)\times \widehat{X}$
is untwisted and the natural morphism $C_{M, T^*(T_0)\times
\widehat{X}}\rightarrow C_{M, M_M(H,\eta,V)}$ is \'{e}tale. From
Proposition \ref{Thm:2.2} it follows that the Weyl groups of
$M_M(H,\eta,V)$ and $T^*(T_0)\times \widehat{X}$ coincide. This
implies all required claims.
\end{proof}
\subsection{Proof of Theorem \ref{Thm:0.5}}\label{SUBSECTION_Utw2}
\begin{Lem}\label{Lem:4.1}
If $X$ is an affine Hamiltonian variety satisfying (Utw1), then all schematic
fibers of $\widetilde{\psi}_{G,X}\quo G$ are Cohen-Macaulay.
\end{Lem}
\begin{proof}
By the Hochster-Roberts theorem (see, for instance, \cite{VP},
Theorem 3.19), $X\quo G$ is Cohen-Macaulay. Since $C_{G,X}$ is
smooth and $\widetilde{\psi}_{G,X}\quo G$ is equidimensional (from
Proposition \ref{Lem:4.4.1}), we see that any fiber of
$\widetilde{\psi}_{G,X}\quo G$ is a locally complete intersection in
a Cohen-Macaulay scheme whence Cohen-Macaulay (\cite{Eisenbud},
Proposition 18.13).
\end{proof}
\begin{proof}[Proof of assertion 1]
Recall that $C_{G,X}\cong \a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ by
Proposition \ref{Thm:2.2}.
Thanks to Lemma \ref{Lem:4.1}, it remains to prove that any fiber of
$\widehat{\psi}_{G,X}\quo G$ is smooth in codimension 1. Since $X$
is conical (of degree, say, $k$), there are actions $\C^\times:X\quo G,\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$
such that the former extends to a morphism $\C\times (X\quo G)\rightarrow X\quo
G$, the latter is induced by $\C^\times\times\a_{G,X}\rightarrow \a_{G,X}, (t,v)\mapsto t^k
v$, and the morphism $\widehat{\psi}_{G,X}\quo G$ is
$\C^\times$-equivariant. Applying a standard argument, we see
that it is enough to prove that $(\widehat{\psi}_{G,X}\quo
G)^{-1}(0)$ is smooth in codimension 1.
Let us use the notation of Corollary \ref{Cor:4.5.2}. Put
$\lambda:=0$ and choose $z\in Z_0$ and $x\in \pi_{G,X}^{-1}(z)$ with
closed $G$-orbit. Put $\widehat{X}:=M_{G}(H,\eta,V/V^H)$. By
Proposition \ref{Prop:3.4},
$\a_{G,\widehat{X}}^{(\cdot)}/W_{G,\widehat{X}}^{(\cdot)}\cong
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$. Since
$\mathop{\rm cork}\nolimits_{\widehat{G}}(\widehat{X})=0$, we have $\widehat{X}\quo
G\cong \a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$. Taking quotients in
the commutative diagram of Remark \ref{Rem:2.8}, we get the
following commutative diagram
\begin{picture}(80,30)
\put(2,22){$V^H\times \widehat{X}\quo G$}\put(40,22){$O\quo
G$}\put(66,22){$X\quo
G$}\put(35,4){$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$}
\put(38,23){\vector(-1,0){13}}\put(52,23){\vector(1,0){13}}
\put(22,20){\vector(1,-1){13}} \put(67,20){\vector(-1,-1){13}}
\put(30,24){\footnotesize $\hookleftarrow$}\put(56,24){\footnotesize
$\hookrightarrow$}\put(23,12){\footnotesize $\operatorname{pr}_2$}
\put(64,12){\footnotesize $\widehat{\psi}_{G,X}\quo G$}
\end{picture}
It follows that $\widehat{\psi}_{G,X}\quo G$ is smooth at $z$. Since
one may take an arbitrary point of $Z_0$ for $z$ we are done by
Proposition \ref{Prop:4.2.2}.
\end{proof}
\begin{proof}[Proof of assertion 2]
Choose a nonzero point $\xi\in\a_{G,X}^{(\cdot)}$ and put
$\a_0:=\C\xi$. Further, put $Y:=(X\quo
G)\times_{\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}}\a_{G,X}^{(\cdot)},
Y_0:=(X\quo G)\times_{\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}}\a_0$
(here $\a_0$ maps to $\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ via the
composition $\a_0\hookrightarrow \a_{G,X}^{(\cdot)}\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$).
Let us check that $Y$ is normal (as a scheme) and Cohen-Macaulay.
Indeed, the morphism $\a_{G,X}^{(\cdot)}\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ is flat, since $X$ satisfies
(Utw1). Therefore the morphism $Y\rightarrow X\quo G$ is flat. But,
as we have already remarked, $X\quo G$ is Cohen-Macaulay. By
Corollary 18.17 from \cite{Eisenbud}, $Y$ is Cohen-Macaulay. Note
that $X\quo G$ is smooth in codimension 1 over
$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$. Hence $Y$ is smooth in
codimension 1 over $\a_{G,X}^{(\cdot)}$ hence normal.
Again, being a complete intersection in a Cohen-Macaulay variety,
$Y_0$ is Cohen-Macaulay. Similarly to the previous paragraph, $Y_0$
is normal.
Let us show that $\C[\a_0]$ is integrally closed in $\C[Y_0]$. Let
$\widetilde{\a}_0$ denote the spectrum of the integral closure of
$\C[\a_0]$ in $\C[Y]$. There is an action of $\C^\times$ on $Y_0$
lifted from $\C^\times:X\quo G$. The morphism $Y_0\rightarrow \a_0$
is $\C^\times$-equivariant. Therefore there is an action
$\C^\times:\widetilde{\a}_0$ contracting $\widetilde{\a}_0$ to the unique point over $0\in\a_0$. It follows that
$\widetilde{\a}_0\cong\C$. Since the zero fiber of the morphism
$Y\rightarrow \a_0$ is normal, we see that the morphism
$\widetilde{\a}_0\rightarrow \a_0$ is \'{e}tale at $0$. From the
$\C^\times$-equivariance it follows that it is an isomorphism.
Thus a general fiber of the morphism $Y_0\rightarrow \a_0$ is
irreducible. Thanks to the presence of the $\C^\times$-action, the same
is true for any fiber but the zero one. It follows easily from
(Con1),(Con2) that $(\widehat{\psi}_{G,X}\quo G)^{-1}(0)$ is
connected. Since $(\widehat{\psi}_{G,X}\quo G)^{-1}(0)$ is normal, it
is irreducible.
\end{proof}
\begin{proof}[Proof of assertion 3]
By Proposition \ref{Thm:2.2}, $X$ satisfies (Utw1). Assume that $X$
does not satisfy (Utw2). By Proposition \ref{Prop:3.1}, there is
$s\in W_{G,X}^{(\cdot)}$ such that some irreducible component
$\widetilde{Z}\subset \widehat{\psi}_{G,X}^{-1}(D_s)$ (where, as
above, $D_s$ denotes the image of $(\a_{G,X}^{(\cdot)})^s$ in
$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$) is of multiplicity 2. Put
$Y=\pi_{G,X}(\widetilde{Z})$. By Proposition \ref{Lem:4.4.1}, $Y$ is
an irreducible component of $(\widehat{\psi}_{G,X}\quo
G)^{-1}(D_s)$. It follows from (Irr) that
$Y=(\widehat{\psi}_{G,X}\quo G)^{-1}(D_s)$. Thanks to Theorem
\ref{Thm:4.0.1}, the set of closed orbits of any two components
$\widetilde{Z}_1,\widetilde{Z}_2\subset
\widehat{\psi}_{G,X}^{-1}(D_s)$ is the same. By Proposition
\ref{Prop:3.3}, the multiplicity of any component $\widetilde{Z}_1$
in $\widehat{\psi}_{G,X}^{-1}(D_s)$ is 2. Let $f\in
\C[\a_{G,X}^{(\cdot)}]^{W_{G,X}^{(\cdot)}}$ be such that $(f)=D_s$.
Let us remark that $f$ is not a square in $\C(X)$. Assume the
contrary: let $f=f_1^2, f_1\in \C(X)$. Then $f_1\in
\C[C_{G,X}]=\C[\a_{G,X}^{(\cdot)}]^{W_{G,X}^{(\cdot)}}$ which is
absurd.
Put $\widetilde{A}:=\C[X][t]/(t^2-f)$. There is a natural morphism
of schemes $\rho:\Spec(\widetilde{A})\rightarrow X$. This morphism
is unramified over $X\setminus \widehat{\psi}_{G,X}^{-1}(D_s)$. Note
also that the group $\Z_2$ acts on $\Spec(\widetilde{A})$ so that
$\rho$ is the quotient for this action. Hence the restriction of
$\rho$ to any irreducible component of $\Spec(\widetilde{A})$ is
dominant. Since $f$ is not a square in $\C[X]$, we see that
$\Spec(\widetilde{A})$ is irreducible. Recall that $(f)=2D$ for
some $D$. It follows that $\rho$ is unramified over
$\widehat{\psi}_{G,X}^{-1}(D_s)$. So $\rho$ is \'{e}tale in
codimension
1.
Let $\widetilde{X}$ denote the normalization of
$\Spec(\widetilde{A})$ and $\widetilde{\rho}$ the natural
morphism $\widetilde{X}\rightarrow X$. Since $X$ is smooth, we see
that $\widetilde{\rho}$ is also \'{e}tale in codimension 1. Besides,
$\widetilde{\rho}$ is finite. Applying the Zariski-Nagata theorem
to $\widetilde{\rho}$, we obtain that $\widetilde{\rho}$ is
\'{e}tale. However, by our assumptions, $X$ is simply connected
whence $\widetilde{\rho}$ is an isomorphism. It follows that the
image of $t$ in $\C[\widetilde{X}]$ lies in $\C(X)$. This
contradicts the condition that $f$ is not a square in $\C(X)$.
\end{proof}
\begin{Rem}\label{Rem:0.6}
In fact, assertion 3 can be generalized to non-simply-connected
varieties. Namely, suppose $X$ satisfies (Irr). Then there exists an
untwisted conical Hamiltonian $G$-variety $\widetilde{X}$ and a free
action of a finite group $\Gamma$ on $\widetilde{X}$ by Hamiltonian
automorphisms (see Definition \ref{defi:3.5}) such that $X\cong
\widetilde{X}/\Gamma$ and $\pi_{\Gamma,X}:\widetilde{X}\rightarrow
X$ is a Hamiltonian morphism. The proof of this claim is similar to
that of assertion 3.
\end{Rem}
\subsection{Some classes of untwisted
varieties}\label{SUBSECTION_Utw3}
\begin{Prop}\label{Prop:5.0}
Let $X$ be coisotropic, simply connected and conical (e.g. a
symplectic vector space). Then $X$ is untwisted.
\end{Prop}
\begin{proof}
Thanks to Proposition \ref{Thm:2.2}, $X$ satisfies (Utw1).
Furthermore, $X$ obviously satisfies (Irr). Applying assertion 3 of
Theorem \ref{Thm:0.5}, we complete the proof.
\end{proof}
\begin{Thm}\label{Thm:5.1}
Let $X_0$ be a smooth irreducible affine $G$-variety. Then
$X:=T^*X_0$ is an untwisted Hamiltonian $G$-variety.
\end{Thm}
\begin{proof}
(Utw1) is checked in Satz 6.6 of \cite{Knop1}. (Utw2) follows from
\cite{Knop2}, Corollary 7.6.
\end{proof}
As the preprint \cite{Knop2} is not published, below we present
alternative proofs of Theorem \ref{Thm:5.1}.
\begin{Thm}\label{Thm:5.2}
Let $V$ be a symplectic $G$-module. Then $V$ is an untwisted
Hamiltonian $G$-variety.
\end{Thm}
We will prove this theorem after some auxiliary considerations.
\begin{Prop}\label{Prop:5.3}
Let $X$ be a conical Hamiltonian $G$-variety and let $G_0,X_0$ be
as in the discussion preceding Proposition \ref{Prop:4.2.2}. If the
Hamiltonian $G_0^\circ$-variety $X_0$ satisfies (Irr), then so does
$X$.
\end{Prop}
\begin{proof}
The action $\C^\times:X$ preserves $X_0$ and so gives rise to the
structure of a conical Hamiltonian $G_0$-variety on $X_0$. By
Proposition \ref{Prop:4.2.2}, the following diagram, where the
horizontal arrows are quotient morphisms for the actions
$G_0/G_0^\circ$ on $X_0\quo G_0^\circ,
\a_{G,X}^{(\cdot)}/W_{G_0^\circ,X_0}^{(\cdot)}$, is commutative.
\begin{picture}(60,30)
\put(4,22){$X_0\quo G_0^\circ$}\put(45,22){$X\quo G$}
\put(2,4){$\a_{G,X}^{(\cdot)}/W_{G_0^\circ,X_0}^{(\cdot)}$}\put(42,4){$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$}
\put(19,24){\vector(1,0){24}}\put(25,6){\vector(1,0){15}}
\put(9,20){\vector(0,-1){12}}\put(50,20){\vector(0,-1){12}}
\put(10,13){\footnotesize $\widehat{\psi}_{G_0^\circ,X_0}$}
\put(51,13){\footnotesize $\widehat{\psi}_{G,X}$}
\end{picture}
Choose $\xi\in\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$ and a point
$\xi'\in\a_{G_0,X_0}^{(\cdot)}/W_{G_0^\circ,X_0}^{(\cdot)}$ mapping
to $\xi$. By the previous commutative diagram,
$(\widehat{\psi}_{G,X}\quo G)^{-1}(\xi)$ is the quotient of
$(\widehat{\psi}_{G_0^\circ,X_0}\quo G_0^{\circ})^{-1}(\xi')$ by some
finite group. In particular, $(\widehat{\psi}_{G,X}\quo G)^{-1}(\xi)$
is irreducible.
\end{proof}
\begin{Prop}\label{Prop:5.4}
Let $X$ be an irreducible conical affine Hamiltonian $G$-variety
such that $\dim \a_{G,X}^{(\cdot)}=\mathop{\rm rk}\nolimits G$. Let $G=Z(G)^\circ
G_1\ldots G_k$ be the decomposition of $G$ into the locally direct
product of simple normal subgroups and the unit component of the
center. If $X$ satisfies (Utw1) and is untwisted as a Hamiltonian
$G_i$-variety for any $i$, then $X$ is untwisted as a Hamiltonian
$G$-variety. Conversely, if $X$ is untwisted as a Hamiltonian
$G$-variety, then so is it as a Hamiltonian $G_i$-variety.
\end{Prop}
\begin{proof}
By Proposition \ref{Prop:3.1}, if $\widehat{\psi}_{G,X}^{-1}(D)$ is
not reduced for some divisor $D$ of
$\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$, then (in the notation of
Proposition \ref{Prop:3.3}) $D=D_s$ for some reflection $s\in
W_{G,X}^{(\cdot)}$. By assertion 5 of Lemma \ref{Lem:1.9}, if $s$ is
a reflection in $W_{G,X}^{(\cdot)}$, then $s$ is also a reflection
in $W_{G_i,X}^{(\cdot)}$ for some $i$ and vice versa. If
$W_{G,X}^{(\cdot)}$ is generated by reflections, then
$\widehat{\psi}_{G,X}$ is identified with the product of the
morphisms $\widehat{\psi}_{G_i,X}$. Now the proof is
straightforward.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:5.2}]
Applying Proposition \ref{Prop:5.3}, we reduce the proof to the
case when $V$ satisfies the equivalent conditions of Lemma
\ref{Lem:2.3.5}. Further, thanks to Proposition \ref{Prop:5.4}, we
may (and will) assume that $G$ is simple.
Suppose $V$ is not untwisted. By Proposition \ref{Thm:2.2},
$\widehat{\psi}_{G,V}$ is not smooth in codimension 1. By
Proposition \ref{Prop:3.3}, in the notation of that proposition, for
some $s\in W_{G,V}^{(\cdot)}$ there is a point $x\in
\widehat{\psi}_{G,V}^{-1}(D_s)$ satisfying conditions (a)-(f) with
$W_{\widehat{G},\widehat{X}}^{(\cdot)}=\{1\}$. From Remark
\ref{Rem:4.6.4} it follows that one can take $(M,M)$ for
$\widehat{G}$. Note that $\widehat{\g}\cong \sl_2$. By Proposition
\ref{Prop:5.2.1}, $\g_x=\sl_2, V/(\g_*x+V^{\g_x})\cong
\C^2\oplus\C^2$ (here $\C^2$ denotes the irreducible two-dimensional
$\sl_2$-module). In the proof of Proposition \ref{Prop:5.10} we have
seen that \begin{itemize}\item[(A)] all modules $V$ containing such
a point $x$ are presented in Table \ref{Tbl:5.11}; \item[(B)] if
$\alpha\in\Delta(\g)$ is such that $S^{(\alpha)}\rightsquigarrow_\g
V$, then $s_{w\alpha}\not\in W_{G,V}^{(\cdot)}$ for some $w\in
W(\g)$.
\end{itemize}
Let us choose a point $x\in \widehat{\psi}_{G,V}^{-1}(0)$ satisfying
conditions (a)-(e) of Proposition \ref{Prop:4.6.1}. Let
$\widehat{X}$ be the corresponding model $G$-variety. Let us check
that $(\widehat{\psi}_{G,V}\quo G)^{-1}(0)$ is irreducible. Clearly,
$(\widehat{\psi}_{G,V}\quo G)^{-1}(0)$ is connected. From
Proposition \ref{Prop:4.4.2} it follows that
$(\widehat{\psi}_{G,V}\quo G)^{-1}(0)$ is smooth in codimension 2
(as a variety). Applying the Hartshorne connectedness theorem, we
see that $(\widehat{\psi}_{G,V}\quo G)^{-1}(0)$ is irreducible.
Therefore $\widehat{X}$ does not depend on the choice of $x$. If
$S^{(\alpha)}\rightsquigarrow_\g \widehat{X}$ for some $\alpha\in
\Delta(\g)$, then $S^{(\alpha)}\rightsquigarrow_\g V$. By Proposition
\ref{Prop:4.6.3}, $W_{G,\widehat{X}}^{(\cdot)}$ is $W(\g)$-conjugate
to a subgroup in $W_{G,V}^{(\cdot)}$. Let us check that these two
groups are, in fact, $W(\g)$-conjugate. By (B) and Corollary
\ref{Cor:5.2.3}, if $s_{w\alpha}\in W_{G,V}^{(\cdot)}$ for all $w\in
W(\g)$, then the same inclusions hold for $\widehat{X}$. But both
$W_{G,V}^{(\cdot)},W_{G,\widehat{X}}^{(\cdot)}$ are contained in
Table \ref{Tbl:5.2.8}. Therefore they are $W(\g)$-conjugate.
It follows from the
commutative diagram of Remark \ref{Rem:2.8} that the fiber
$(\widehat{\psi}_{G,V}\quo G)^{-1}(0)$ is smooth in $x$. By
Proposition \ref{Prop:4.2.2}, this fiber is smooth in codimension 1.
Proceeding as in the proof of assertion 1 of Theorem \ref{Thm:0.5},
we see that any fiber of $\widehat{\psi}_{G,V}\quo G$ is normal.
Applying assertions 2,3 of Theorem \ref{Thm:0.5}, we see that $V$ is
untwisted.
\end{proof}
\begin{proof}[First alternative proof of Theorem \ref{Thm:5.1}]
Suppose that (Utw2) does not hold. Let $Y$ be a prime divisor in $X$
consisting of singular points of $\widehat{\psi}_{G,X}$.
{\it Step 1.}
Note that $Y$ is $\C^\times$-stable. Therefore there is a point
$x\in X_0\cap Y$ with closed $G$-orbit. Set $H:=G_x,
V_0:=T_xX_0/\g_*x, X'_0:=G*_HV_0, X':=T^*X'_0$. As we noted in
\cite{slice}, $X'\cong M_G(H,0,V_0\oplus V_0^*)$. By \cite{Knop1},
Satz 6.5, $W_{G,X}^{(\cdot)}$ is conjugate to $W_{G,X'}^{(\cdot)}$.
Using Remark \ref{Rem:2.8}, we see that $X'$ does not satisfy
(Utw2). So we may assume that $X_0\cong G*_HV_0$. Also \cite{Knop1},
Satz 6.5, implies that $T^*\widetilde{X}_0$, where
$\widetilde{X}_0:=G*_{H^\circ}V_0$, is not untwisted. So we may
assume that $X$ is simply connected.
{\it Step 2.} Analogously to the proof of Theorem \ref{Thm:5.2}, we
may assume that $m_G(X)=\dim G$ and $G$ is simple. In \cite{Weyl},
Section 5 (see especially Lemma 5.4.1), we checked that condition
(B) of the proof of Theorem \ref{Thm:5.2} (with $V$ replaced by $X$)
holds. Now we can proceed as in that proof. Note that \cite{Weyl}
uses results of Section \ref{SECTION_Weyl} but not of Section
\ref{SECTION_untwisted}.
\end{proof}
Let us also present a proof that does not use the classification
results of \cite{Weyl}.
\begin{proof}[Second alternative proof of Theorem \ref{Thm:5.1}]
Again, we may assume that $X$ is simply connected. Recall the notation $Y$ from
the previous proof.
Set
$D:=\overline{\widetilde{\psi}_{G,X}(Y)}$. Let us check that any
irreducible component of $\widetilde{\psi}_{G,X}^{-1}(D)$ is of
multiplicity 2. We may assume that $Y\neq
\widetilde{\psi}_{G,X}^{-1}(D)$. It follows from the connectedness theorem
of \cite{Knop12}, that $\widetilde{\psi}_{G,X}^{-1}(z)$ is connected
for any $z\in D$. Applying the Hartshorne connectedness theorem to
the Cohen-Macaulay scheme $(\widetilde{\psi}_{G,X}\quo G)^{-1}(D)$
and using the Knop connectedness theorem, we may assume (possibly after
replacing $Y$ with another component of multiplicity 2) that there is an
irreducible component $Y_1$ of $\widetilde{\psi}_{G,X}^{-1}(D)$ such
that
\begin{itemize}
\item
$\codim_{Y\quo G} (Y_1\quo G)\cap (Y\quo G)=1$,
\item
$\widetilde{\psi}_{G,X}(Y_1\cap Y)$ is dense in $D$,
\item $Y_1$ is of multiplicity 1.
\end{itemize}
Choose a general point $\alpha\in D$. It follows from Proposition
\ref{Prop:4.4.2} that there is a point $x\in Y\cap Y_1$ satisfying
the conditions (a)-(f) of Propositions
\ref{Prop:4.6.1},\ref{Prop:4.6.5}. By Proposition \ref{Prop:3.3},
$Y_1$ has multiplicity 2, contradiction.
So any component of $\widetilde{\psi}_{G,X}^{-1}(D)$ has
multiplicity 2. Recall that by our assumptions $X$ is simply
connected. Arguing as in the last paragraph of the proof of
assertion 3 of Theorem \ref{Thm:0.5}, we get a contradiction.
\end{proof}
\subsection{Some counterexamples}\label{SUBSECTION_counterexamples}
The following example of a Hamiltonian variety not satisfying (Irr)
is due to F. Knop.
\begin{Ex}\label{Ex:5.13}
Put $G=\C^\times, X=\C^\times\times\C^\times\times\C\times\C$.
Choose coordinates $x_1,\ldots,x_4$ on $X$ corresponding to the
above decomposition. Define the action $G:X$ by
$t(x_1,x_2,x_3,x_4)=(tx_1,t^{-1}x_2,x_3,x_4)$. Put
$\alpha:=(x_1x_2^2-x_1^{-1}x_3^2)dx_1+x_4dx_3$. Clearly, $\alpha$ is
$G$-invariant. Put $$\omega:=-d\alpha=2x_1x_2dx_1\wedge
dx_2-2x_1^{-1}x_3 dx_1\wedge dx_3-dx_3\wedge dx_4.$$ One checks
directly that $\omega$ is nondegenerate. The action $G:X$ is
Hamiltonian with $\mu_{G,X}(x)=\langle\alpha,
\frac{\partial}{\partial t}\rangle_x=x_1^2x_2^2-x_3^2$. It is clear
that $\mu_{G,X}^{-1}(a)$ is irreducible whenever $a\neq 0$. It
follows that $C_{G,X}\cong \g$. On the other hand,
$\mu_{G,X}^{-1}(0)$ has two connected components.
\end{Ex}
We remark that the Hamiltonian variety in Example \ref{Ex:5.13} is
the smallest one in the sense that both the group and the variety have the
smallest possible dimensions.
Now let us present an example of a coisotropic conical model
variety $X=M_G(H,\eta,V)$
that is twisted. An example where the group
$W_{G,X}^{(\cdot)}$ is not generated by reflections can be found in
\cite{alg_hamil}, Subsection 5.10. In the following example
$W_{G,X}^{(\cdot)}$ is generated by reflections but (Utw2) does not
hold. Note that this example is very similar to that from
\cite{alg_hamil}.
\begin{Ex}\label{Ex:5.14}
Put $G=\SL_2\times \C^\times, X:=M_G(\Z_2\times\SL_2,0,\C^2\oplus
\C^2)$, where $\C^2$ denotes the two-dimensional irreducible
$\SL_2$-module with the symplectic form given by $(u,v)\mapsto
\det(u,v)$ and the nontrivial element $\sigma\in \Z_2\subset
\C^\times$ acts on $\C^2\oplus \C^2$ as follows:
$\sigma(v_1,v_2)=(v_2,-v_1)$. One easily checks that
$W_{G,X}^{(\cdot)}=N_G(T)/T\cong \Z_2$ and (Utw2) does not hold.
\end{Ex}
\section{Some open problems}\label{SECTION_open}
Firstly, we state two conjectures concerning property (Irr). Below
$G$ is a connected reductive group.
\begin{Conj}\label{Conj:1}
Any conical irreducible Hamiltonian $G$-variety $X$ satisfies (Irr).
\end{Conj}
The following conjecture is a weaker version of the first one.
\begin{Conj}\label{Conj:2}
$X=M_G(H,\eta,V)$, where $\eta$ is nilpotent, satisfies (Irr).
\end{Conj}
By virtue of the local cross-section and symplectic slice theorems
(Propositions \ref{Prop:1.1}, \ref{Prop:4.3.3}) one can deduce from
Conjecture \ref{Conj:2} that any fiber of $\psi_{G,X}$ is normal
(as a variety).
Unlike the first conjecture, the second one can be reduced to some
case-by-case consideration. Let us sketch the scheme of this
reduction.
At first, one reduces the problem to the case when $X$ satisfies the
equivalent conditions of Lemma \ref{Lem:2.3.5} and then to the case
when $X$ is algebraically simply connected. Here one should check
that $X$ satisfies (Utw2). This will follow if one verifies the
following assertion:
\begin{itemize}
\item[(*)]
for any $\alpha\in \Delta(\g)$ such that
$S^{(\alpha)}\rightsquigarrow_\g X$ there is $w\in W(\g)$ such that
$s_{w\alpha}\not\in W_{G,X}^{(\cdot)}$.
\end{itemize}
Finally, it is enough to check (*) only for some special quadruples
$(G,H,\eta,V)$. By analogy with Section 7 of \cite{Weyl}, we call
such quadruples {\it quasiessential}. By definition, a quadruple
$(G,H,\eta,V)$ is quasiessential if $M_G(H,\eta,V)$ satisfies the
equivalent conditions of Lemma \ref{Lem:2.3.5} and for any ideal
$\h_1\subset \h$ there is $\alpha\in \Delta(\g)$ such that
$S^{(\alpha)}\rightsquigarrow_\g M_G(H,\eta,V)$ but
$S^{(\alpha)}\not\rightsquigarrow_\g M_G(H_1,\eta,V)$, where $H_1$
is a subgroup of $H$ corresponding to $\h_1$. It is not very
difficult to show that if $(G,H,\eta,V)$ is quasiessential, then $G$
is simple and $H$ is semisimple.
The next conjecture strengthens assertion 1 of Theorem
\ref{Thm:0.5}.
Note that any fiber of $\widetilde{\psi}_{G,X}\quo G$ has the
natural structure of a Poisson variety. The open stratum (in the
sense of Subsection \ref{SUBSECTION_affham4}) is symplectic.
\begin{Conj}\label{Conj:3}
Let $X=M_G(H,\eta,V)$, where $\eta$ is nilpotent, be untwisted. Then
any fiber $Y$ of $\widetilde{\psi}_{G,X}\quo G$ has symplectic
singularities. This means that there is a resolution of
singularities $\widetilde{Y}\rightarrow Y$ such that the symplectic
form on the smooth part of $Y$ extends to a regular form on
$\widetilde{Y}$.
\end{Conj}
Finally, we would like to propose a conjecture giving an estimate on
dimensions of fibers of $\mu_{G,X}$.
\begin{Conj}\label{Conj:4}
Let $X$ be an irreducible affine Hamiltonian $G$-variety. Then
$\dim\mu_{G,X}^{-1}(\eta)\leqslant \dim X-(m_G(X)+\mathop{\rm def}\nolimits_G(X)+\dim
G\eta)/2$.
\end{Conj}
If $X$ is the cotangent bundle of a (not necessarily affine)
$G$-variety $X_0$, this conjecture follows from Vinberg's theorem on
the modality of the action of a Borel subgroup of $G$ on $X_0$, see
\cite{Vinberg_complexity}.
\section{Notation and conventions}\label{SECTION_Notation}
For an algebraic group denoted by a capital Latin letter we denote
its Lie algebra by the corresponding small German letter. For roots
and weights of semisimple Lie algebras we use the notation of
\cite{VO}.
\begin{longtable}{p{5.5cm} p{10.5cm}}
\\$\sim_G$& the equivalence relation induced by an action of a group
$G$.
\\ $C_{G,X}$& the spectrum of the integral closure of $\psi_{G,X}^*(\C[\g]^G)$ in
$\C[X]^G$.
\\ $\mathop{\rm cork}\nolimits_G(X)$& the corank of a Hamiltonian $G$-variety $X$.
\\ $\mathop{\rm def}\nolimits_G(X)$& the defect of a Hamiltonian $G$-variety $X$.
\\$e_\alpha$& a nonzero element of the root subspace $\g^\alpha\subset \g$.
\\ $(f)$& the zero divisor of a rational function $f$.
\\ $(G,G)$ (resp., $[\g,\g]$)& the commutant of a group $G$
(resp., of a Lie algebra $\g$).
\\ $G^{\circ}$& the identity component of an algebraic group $G$.
\\ $G*_HV$& the homogeneous bundle over $G/H$ with a fiber $V$.
\\ $[g,v]$& the equivalence class of $(g,v)$ in $G*_HV$.
\\ $G_x$& the stabilizer of $x\in X$ under an action
$G:X$.
\\ $\g^{\alpha}$& the root subspace of $\g$ corresponding to a root $\alpha$.
\\ $\g^{(A)}$ (resp., $G^{(A)}$)& the subalgebra of $\g$ generated by $\g^{\alpha}, \alpha\in A\cup -A$ (resp.,
the corresponding connected subgroup of $G$).
\\ $m_G(X)$&$=\max_{x\in X}\dim Gx$.
\\ $N_G(H)$ (resp., $N_G(\h),\n_\g(\h)$)& the normalizer of an algebraic subgroup $H$ in
an algebraic group $G$ (resp., of a subalgebra $\h\subset \g$ in
$G$, of a subalgebra $\h\subset \g$ in $\g$).
\\ $N_G(H,Y)$& the stabilizer of $Y$ under the action of $N_G(H)$.
\\ $\Quot(A)$& the fraction field of $A$.
\\ $\mathop{\rm rk}\nolimits(G)$& the rank of an algebraic group $G$.
\\
$s_\alpha$& the reflection in a Euclidean space corresponding to a vector $\alpha$.
\\ $\Span_{F}(A)$& the linear span of a subset $A$ of a module
over a field $F$.
\\ $\td A$& the transcendence degree of an algebra $A$.
\\ $U^{\skewperp}$& the skew-orthogonal complement to a subspace $U\subset V$ of
a symplectic vector space $V$.
\\ $V^\g$& $=\{v\in V| \g v=0\}$, where $\g$ is a Lie algebra and
$V$ is a $\g$-module.
\\ $V(\lambda)$& the irreducible module of the highest weight $\lambda$
over a reductive algebraic group or a reductive Lie algebra.\\
$W(\g)$& the Weyl group of a reductive Lie algebra $\g$.\\
$X^G$& the fixed point set for an action $G:X$.\\
$X\quo G$& the categorical quotient for an action $G:X$, where $G$
is a reductive group and $X$ is an affine $G$-variety.
\\ $Z(G)$ (resp., $\z(\g)$)& the center of an algebraic group $G$ (resp., of a Lie algebra $\g$).
\\ $Z_G(H)$ (resp., $Z_G(\h),\z_\g(\h)$)& the centralizer of a subgroup $H$
in an algebraic group $G$ (resp., of a subalgebra $\h\subset \g$ in
$G$, of a subalgebra $\h\subset \g$ in a Lie algebra $\g$).
\\ $\alpha^\vee$& the dual root of $\alpha\in\Delta(\g)$.
\\ $\Delta(\g)$& the root system of a reductive Lie algebra $\g$.
\\ $\mu_{G,X}$& the moment map for a Hamiltonian
$G$-variety $X$.
\\ $\xi_s$ (resp., $\xi_n$)& the semisimple (resp., nilpotent) part of
an element $\xi$ of an algebraic Lie algebra.
\\ $\xi_*$& the velocity vector field associated with $\xi\in\g$.
\\ $\Pi(\g)$& the system of simple roots for a reductive Lie algebra $\g$.
\\$\pi_{G,X}$& the (categorical) quotient morphism $X\rightarrow X\quo G$.
\\$\varphi\quo G$& the morphism of (categorical) quotients induced by a
$G$-equivariant morphism $\varphi$.
\\ $\varphi^*$& the homomorphism $\C[X_2]\rightarrow \C[X_1]$
induced by a morphism $\varphi:X_1\rightarrow X_2$.
\\ $\psi_{G,X}$& $:=\pi_{G,\g}\circ\mu_{G,X}$.
\\ $\widetilde{\psi}_{G,X}$& the natural morphism $X\rightarrow
C_{G,X}$.
\\ $\widehat{\psi}_{G,X}$& the natural morphism $X\rightarrow
\a_{G,X}^{(\cdot)}/W_{G,X}^{(\cdot)}$.
\end{longtable}
% math-ph/0703027
\section{Introduction}
The main motivation to study hermitian symplectic spaces---this
terminology follows \cite{Kost:Sch}---is the well-known connection
between the self-adjoint extensions of a symmetric operator and the
Lagrange planes of a hermitian symplectic space \cite{Pav,Kost:Sch,Nov3}.
This is based on the fact that the boundary form of a symmetric operator
is a hermitian symplectic form and the extensions of the operator may be
identified with isotropic subspaces in the associated hermitian
symplectic space. \\
In the first section of this paper we define and describe some of the
properties of hermitian symplectic spaces. By our definition hermitian
symplectic spaces (unlike symplectic spaces) need not be even dimensional
or admit a canonical basis. We show that when a hermitian symplectic
space admits a canonical basis, it has Lagrange planes and derive an
explicit parameterisation of the set of Lagrange planes in terms of the
set of unitary matrices ${\sf U} (n)$ where $n$ is half the dimension of the
space. \\
In the following section we consider connections to extension
theory of symmetric operators. It is observed that hermitian symplectic
spaces that do not admit a canonical basis, or Lagrange planes,
correspond to symmetric operators with unequal deficiency indices (in
this case the extensions are described by isotropic subspaces). On the
other hand, symmetric operators with equal deficiency indices correspond
to hermitian symplectic spaces with Lagrange planes and as is well known
these Lagrange planes may be used to describe the self-adjoint
extensions. The fact that the set of Lagrange planes, or self-adjoint
extensions, is isomorphic to ${\sf U} (n)$ is in accordance with the
parameterisation of self-adjoint extensions by a unitary map between the
deficiency subspaces as described by Neumann extension theory
\cite{Akh:Glz}. \\
We then consider the specific example of the
Schr\"{o}dinger operator on the graph. Our explicit parameterisation of
the Lagrange planes in terms of the unitary matrices allows us to
describe all self-adjoint boundary conditions at the origin for the
Schr\"{o}dinger operator on the graph with trivial compact part in terms
of a unitary matrix. Furthermore, we show that the asymptotics of the
scattering matrix may be written in terms of this unitary matrix and that
the boundary conditions do not contribute to the discrete spectrum iff
this unitary matrix is also hermitian. We also use a property of this
parameterisation, as well as the Wronskian, to show the unitarity of the
scattering matrix. \\
\section{Hermitian symplectic geometry}
Many of the basic ideas in this section can be found in any standard text
on symplectic geometry \cite{Arn,Arn:Giv,Fom,Mar:Rat}. However, the
concept of a canonical hermitian symplectic space and the details of the
parameterisation of Lagrange planes in a hermitian symplectic space
distinguish this construction from the standard symplectic case. In
particular the Lagrange planes in hermitian symplectic geometry are
parameterised by unitary matrices whereas they have different
parameterisations in the standard symplectic geometry. Also, by our
definition, a hermitian symplectic space need not be even dimensional or
admit a canonical basis---unlike the symplectic case. This is
seen to correspond to a symmetric operator with unequal deficiency
indices. \\
\begin{defn}
The two-form $\langle\cdot,\cdot\rangle$, linear in the second
argument and conjugate linear in the first argument, is a hermitian
symplectic form if
\[
\langle\phi,\psi\rangle=-\overline{\langle\psi,\phi\rangle} .
\]
\end{defn}
We recall that the standard symplectic form obeys
$\langle\phi,\psi\rangle=-\langle\psi,\phi\rangle$. We will use the
prefix `hermitian' to emphasise this distinction.
\begin{defn}
We say that an $m$-dimensional ($m <\infty$) vector space $H_{m}$ over $\mathbb{C}$
is a hermitian symplectic space if it has defined on it a
nondegenerate hermitian symplectic form. By nondegenerate we mean that if
$\phi$ obeys
\[
\langle\phi,\psi\rangle=0 \qquad \forall \psi\in H_{m}
\]
then $\phi=0$.
\end{defn}
Since $H_{m}$ is a vector space we can find a basis $\{ e_i \}^{m}_{i=1}$
for it and use this basis to express the hermitian symplectic
form as a matrix with entries
\begin{equation}\label{sym4}
\omega_{ij} = \langle e_i , e_j \rangle .
\end{equation}
By the definition of the form, the matrix $\omega$ is skew-hermitian,
$\omega = -\omega^{\star}$, and nondegenerate. Clearly the hermitian
symplectic form can be written
\begin{equation}\label{sym3}
\langle\phi,\psi\rangle=(\phi ,\omega \psi )
\end{equation}
where, on the right hand side, $\phi$ and $\psi$ are written as vectors in
$\mathbb{C}^{m}$ using the basis $\{ e_i \}^{m}_{i=1}$ and $(\cdot ,\cdot)$ is the
standard hermitian scalar product on $\mathbb{C}^{m}$, making it an
$m$-dimensional Hilbert space. \\
In the usual symplectic case $\omega$ is
skew-symmetric and hence, due to nondegeneracy, of even order. This
restriction does not apply to skew-hermitian matrices and hence there is
no obstruction to having hermitian symplectic spaces of odd dimension. \\
Hermitian symplectic
spaces differ from symplectic spaces in another important respect; given
any symplectic space it is always possible to find a {\em canonical
basis:}
\begin{defn}
A basis $\{ p_{i},q_{i}\}^{n}_{i=1}$ which has the following property
\begin{eqnarray*}
\langle p_{i},q_{j}\rangle= \delta_{ij}
=-\langle q_{j},p_{i}\rangle \\
\langle p_{i},p_{j}\rangle= 0
=\langle q_{i},q_{j}\rangle
\end{eqnarray*}
where $\delta_{ij}$ is the Kronecker delta, is known as a canonical
basis.
\end{defn}
Even an even-dimensional hermitian symplectic space, $H_{2n}$, need not
admit a canonical basis. Let us suppose that $H_{2n}$ has a basis $\{ e_i
\}^{2n}_{i=1}$ so that the skew-hermitian matrix
$\omega$ is
\[
\omega =\left( \langle e_i , e_j \rangle \right) = i \mathbb{I}_{(2n)} .
\]
We denote by $\mathbb{I}$ or $\mathbb{I}_{(n)}$ the $n\times n$
unit matrix. Such an $\omega$ is obviously prohibited in the symplectic
case but acceptable in the hermitian symplectic case. Now if it were
possible to find a canonical basis in this space then there would be a
non-singular transformation of the basis, $P$, such that $\omega$ would
be transformed to
\[
P^{\star} \omega P = {J}
\]
where ${J}$, known as the canonical symplectic structure, is
\[
{J}=\left( \begin{array}{cc}
0 & \mathbb{I} \\
-\mathbb{I} & 0
\end{array}
\right) .
\]
This is clearly not possible. Indeed, by the polar decomposition any
non-singular transformation can be written as the product of a unitary
and a positive hermitian matrix, $P= U H$. Consequently
\[
i H^2 = P^{\star} \omega P = J
\]
and the left hand side is a matrix with eigenvalues only on the imaginary
axis in the upper half plane. The right hand side, ${J}$, however, has
eigenvalues $\pm i$ equally distributed between the upper and lower half
planes.
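The eigenvalue obstruction can be illustrated numerically for $n=1$: for any non-singular $P$ the hermitian matrix $-iP^{\star}(i\mathbb{I})P = P^{\star}P$ is positive definite, so its determinant is positive, whereas $-i{J}$ has determinant $-1$ and hence eigenvalues of opposite sign. A pure-Python sketch (helper names are ours):

```python
# Illustration of the obstruction for n = 1 (2x2 matrices):
# -i P^* (iI) P = P^* P is positive definite (determinant > 0),
# while -iJ has determinant -1 (eigenvalues of opposite sign),
# so no non-singular P can transform omega = iI into J.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(a):
    return [[complex(a[j][i]).conjugate() for j in range(2)] for i in range(2)]

def det(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

J = [[0, 1], [-1, 0]]
P = [[1 + 2j, 0.5j], [3, 1 - 1j]]           # an arbitrary non-singular P
P_star_P = mat_mul(conj_transpose(P), P)    # equals -i P^* (iI) P

assert det(P_star_P).real > 0               # positive definite
minus_iJ = [[-1j * J[i][j] for j in range(2)] for i in range(2)]
assert abs(det(minus_iJ) - (-1)) < 1e-12    # indefinite signature
```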
\begin{defn}
We say that a hermitian symplectic space is canonical if it admits a
canonical basis.
\end{defn}
In the following we denote
\[
\mathbb{I}_{(n_+ ,n_- )}\equiv\left( \begin{array}{cc}
\mathbb{I}_{(n_+ )} & 0 \\
0 & -\mathbb{I}_{(n_- )}
\end{array}
\right) .
\]
\begin{lem}\label{charhs}
A hermitian symplectic space $H_m$ is, up to a non-singular
transformation of the basis, completely characterised by two integers,
$n_+$, $n_-$, $n_+ + n_- =m$. Specifically the matrix $\omega$
associated with the hermitian symplectic form can be
diagonalised to
\[
i \mathbb{I}_{(n_+ ,n_-)} .
\]
Furthermore $H_m$ is canonical iff $n_+ =n_-$.
\end{lem}
{\it Proof:} A hermitian symplectic space is
specified by the matrix $\omega$ up to a non-singular
transformation of the basis, $P$. The matrix $-i\omega$ is hermitian and
hence it can be diagonalised
\[
-i\omega = U D U^{\star}
\]
where $D$ is a real diagonal matrix without zeroes on the diagonal. Let
us choose the matrix $H$ as the {\em positive} diagonal matrix so that
$D^2 = H^4$. Then choosing the non-singular transformation of the basis,
$P= U H^{-1}$ we get
\[
P^{\star} \omega P = i H^{-1} U^{\star} U D U^{\star} U H^{-1} = i
\mathbb{I}_{(n_+ ,n_- )}
\]
where $n_{\pm}$ are the number of positive and negative eigenvalues of
$-i\omega$ respectively. Clearly, when $n_+ =n_- =n$ we can
find a canonical basis since we can transform $i \mathbb{I}_{(n,n)}$ to
${J}$. \hspace*{\fill} $\Box$ \\
\begin{defn}
We say that $\phi,\psi\in H_{m}$ are skew-orthogonal, denoted
$\phi\perp\psi$, if
\[
\langle\phi,\psi\rangle=0 .
\]
\end{defn}
\begin{defn}
Given a subspace $N\subset H_{m}$, we define the skew-orthogonal
complement, $N^{\perp}$, as the subspace
\[
N^{\perp}\equiv\{\phi; \;\phi\in H_{m},\,
\langle\phi,\psi\rangle=0 \; \forall \psi\in N \} .
\]
\end{defn}
\begin{defn}
The subspace $N\subset H_{m}$ is isotropic if
\[
N\subset N^{\perp} .
\]
\end{defn}
Let us assume that we have fixed some basis and found
the corresponding skew-hermitian matrix $\omega$ from equation
(\ref{sym4}) so that $H_m$ can be identified with the Hilbert space
$\mathbb{C}^m$ equipped with a hermitian symplectic form. The remaining lemmata in
this section all have analogous statements in symplectic geometry
\cite{Fom,Mar:Rat}.
\begin{lem}
The subspace $N\subset H_{m}$ is isotropic iff the subspaces
$N$ and
$\omega N$ are orthogonal in $\mathbb{C}^m$.
\end{lem}
{\it Proof:} Follows directly from equation (\ref{sym3}). \hspace*{\fill}
$\Box$ \\
\begin{lem}
The dimension, $k$, of an isotropic subspace $N\subset H_{m}$ never
exceeds $m/2$.
\end{lem}
{\it Proof:} Since the operator $\omega$ on $\mathbb{C}^{m}$ is nondegenerate,
the dimensions of $N$ and $\omega N$ are the same. Consequently $k+k\leq
m$. \hspace*{\fill} $\Box$ \\
\begin{defn}
An isotropic subspace $\Lp{n}\subset H_{2n}$ of maximal dimension, that
is dimension $n$, is called a Lagrange\index{Lp} plane.
\end{defn}
\begin{cor}
If $\Lp{n}\subset H_{2n}$ is a Lagrange plane then
$\Lp{n}^{\perp}=\Lp{n}$.
\end{cor}
{\it Proof:} $\Lp{n}$ and $\Lp{n}^{\perp}$ both have dimension
$n$ and $\Lp{n}\subset \Lp{n}^{\perp}$. \hspace*{\fill} $\Box$ \\
From the definition it is clear that Lagrange planes only exist in
even-dimensional hermitian symplectic spaces; in fact it is not difficult
to show that a hermitian symplectic space contains a Lagrange plane iff
it is canonical. First we need the basic lemma,
\begin{lem}\label{decomp}
Given a hermitian symplectic subspace $V\subset H_{m}$,
$V^{\perp}$ is also hermitian symplectic,
\[
V + V^{\perp} = H_{m}
\]
and these subspaces have trivial intersection.
\end{lem}
{\it Proof:} It is clear that the intersection $V\cap V^{\perp}$ is trivial.
Indeed, if there were a nonzero $v\in V\cap V^{\perp}$ then $v$ would be
skew-orthogonal to all the elements of $V$ and hence the form would be
degenerate on $V$, a contradiction. \\
Since the matrix $\omega_{ij}$ is nondegenerate the dimension of
$V^{\perp}$ is the codimension of $V$. But since these two spaces do not
intersect, by a simple argument of linear independence
\[
V + V^{\perp} = H_{m} .
\]
Now we suppose that the form is degenerate on $V^{\perp}$, so there is
some element $z\in V^{\perp}$ so that
\[
\langle z , u \rangle =0, \qquad \forall u\in V^{\perp}
\]
and
\[
\langle z , v \rangle =0, \qquad \forall v\in V .
\]
But this would imply that the form is degenerate on $H_{m}$ which is a
contradiction. \hspace*{\fill} $\Box$ \\
\begin{lem}\label{cbeLp}
An even-dimensional hermitian symplectic space $H_{2n}$ is canonical iff
it contains a Lagrange plane.
\end{lem}
{\it Proof:} It is clear that a canonical hermitian symplectic space
contains a Lagrange plane, viz. the span of the first $n$ elements of the
canonical basis. \\
We suppose that we have an even-dimensional hermitian symplectic space
$H_{2n}$ which contains a Lagrange plane $\Lp{n}$. Then we can find some
basis $\{ e_i \}^{2n}_{i=1}$ so that the first $n$ elements span
$\Lp{n}$. Let us pick $p_1 = e_1$. Since the form is nondegenerate,
there is an element $\hat{q}_1\not\in\Lp{n}$ such that $\langle p_1 ,
\hat{q}_1 \rangle \neq 0$ and hence we can normalise so that
\[
\langle p_1 , q_1 \rangle = 1 .
\]
We denote by $V_1$ the linear span of $\{ p_1 , q_1 \}$. Using the fact
that $\langle p_1 , p_1 \rangle = 0$ it is not difficult to see that
$V_1$ is a canonical hermitian symplectic space. \\
Applying lemma \ref{decomp} to $V_1$ we see that
$V^{\perp}_1$ is a hermitian symplectic space. Furthermore it has a
Lagrange plane given by the span of $\{ e_i \}^{n}_{i=2}$. Repeating this
process for $V^{\perp}_1$ allows us to construct a canonical basis for
$H_{2n}$. \hspace*{\fill} $\Box$ \\
\begin{defn}
A linear transformation is called ${J}$-unitary or hermitian symplectic
if it satisfies
\[
g^{\star}{J} g={J} .
\]
\end{defn}
Clearly such a transformation takes Lagrange planes to Lagrange planes.
Consider the set of all Lagrange planes of a canonical hermitian
symplectic space $H_{2n}$, the Lagrange
Grassmannian denoted $\Lambda_{n}$. We show that the Lagrange
Grassmannian is isomorphic to the set of unitary matrices.
\begin{lem}\label{Can6}
A given Lagrange plane $\Lp{0,n}$ can be made to coincide with
any other Lagrange plane $\Lp{n}$ by means of a hermitian symplectic
transformation of the form
\begin{equation}\label{hs1}
g=\left( \begin{array}{cc}
A & B \\
-B & A
\end{array}
\right) \hspace{5mm} A,B\in \mathbb{C}^{n\times n}
\end{equation}
where $A$ and $B$ satisfy
\begin{eqnarray}
A^{\star} A + B^{\star} B = \mathbb{I} \label{kri}\\
A^{\star} B = B^{\star} A . \label{krii}
\end{eqnarray}
Specifically, if we are given a canonical basis
$\{\xi_{0,i}\}^{2n}_{i=1}$, the first $n$ elements of which span the
Lagrange plane $\Lp{0,n}$, then there is a hermitian symplectic
transformation $g$ such that the first $n$ elements of the canonical
basis $\{\xi_{i}\}^{2n}_{i=1}$ given by
\[
\xi_{i} = \sum^{2n}_{j=1} g_{ij} \xi_{0,j}
\]
span $\Lp{n}$.
\end{lem}
{\it Proof:} As we are dealing with canonical spaces there always exists
a canonical basis $\{\xi_{0,i}\}^{2n}_{i=1}$ and we choose
$\Lp{0,n}$ to be the span of the first $n$ elements of this basis. In
terms of this canonical basis we can identify $H_{2n}$ with
$\mathbb{C}^{2n}$ where the two-form is given by $\omega ={J}$. \\
Consider another arbitrary Lagrange plane $\Lp{n}$. Using the
above identification, $\Lp{n}$ may be considered to be an
$n$-dimensional subspace of $\mathbb{C}^{2n}$. Consequently, we
can find a set of $n$ orthonormal vectors in $\mathbb{C}^{2n}$ which form
a basis for $\Lp{n}$---we denote this basis by $\{\xi_{i}\}^{n}_{i=1}$.
Since the
$\{\xi_{0,i}\}$ form a basis for $H_{2n}$ there are matrices $A$ and $B$
such that
\begin{equation}\label{st1}
\xi_{i}=\sum^{n}_{j=1}A_{ij}\xi_{0,j}+\sum^{n}_{j=1}B_{ij}\xi_{0,j+n}
\qquad \mbox{for} \;i=1,\ldots ,n .
\end{equation}
That is $A_{ij}=(\xi_i ,\xi_{0,j})$, $B_{ij}=(\xi_i ,\xi_{0,n+j})$ for
$j=1,\ldots ,n$. Furthermore, since we have assumed that the $\{\xi_{i}\}$
are orthonormal in $\mathbb{C}^{2n}$ we immediately have
equation (\ref{kri}). Using the fact that the $\{\xi_{i}\}$ form a
Lagrange plane in equation (\ref{sym3}) gives us
equation (\ref{krii}). Together these two equations imply that $g$ is a
hermitian symplectic transformation. \hspace*{\fill} $\Box$ \\
In fact, it is easy to see that equations (\ref{kri},\ref{krii}) imply
that $g$ is a hermitian symplectic matrix as well as a unitary matrix,
i.e. it preserves the hermitian symplectic form as well as the scalar
product in $\mathbb{C}^{2n}$.
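For $n=1$ this can be verified directly: real scalars $A=a$, $B=b$ with $a^2+b^2=1$ satisfy equations (\ref{kri}) and (\ref{krii}), and the resulting $g$ preserves both the scalar product and ${J}$. A numerical sketch (our own helper names; not part of the text):

```python
# For n = 1, A = a and B = b with a^2 + b^2 = 1 satisfy equations
# (kri) and (krii); the resulting g = [[a, b], [-b, a]] should then be
# both unitary (g^* g = I) and hermitian symplectic (g^* J g = J).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(a):
    return [[complex(a[j][i]).conjugate() for j in range(2)] for i in range(2)]

def close(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

a, b = 0.6, 0.8                       # a^2 + b^2 = 1
g = [[a, b], [-b, a]]
J = [[0, 1], [-1, 0]]
I = [[1, 0], [0, 1]]

assert close(mat_mul(conj_transpose(g), g), I)                 # unitary
assert close(mat_mul(mat_mul(conj_transpose(g), J), g), J)     # J-unitary
```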
\\ Let us denote by ${\cal G}$ the set of matrices of the form
\[
{\cal G}=\left\{ g=\left( \begin{array}{cc}
A & B \\
-B & A
\end{array}
\right); \; A,B\in \mathbb{C}^{n\times n}, \; g\in{\sf U} (2n) \right\}
\]
which occur in the above lemma; this set is clearly a group under matrix
multiplication. In order to classify $\Lambda_{n}$ we need to find the
stationary subgroup of ${\cal G}$, i.e. ${\cal H}\subset{\cal G}$ the elements of which
take the Lagrange plane $\Lp{0,n}$ into itself. But it is easy to see that
in the notation of the above lemma these are just those matrices with
$B=0$: the stationary subgroup ${\cal H}$ is therefore the set of matrices
\[
{\cal H}=\left\{ h=\left( \begin{array}{cc}
C & 0 \\
0 & C
\end{array}
\right); \; C\in \mathbb{C}^{n\times n}, \; h\in{\sf U} (2n) \right\} .
\]
\begin{lem}
The Lagrange Grassmannian $\Lambda_{n}$ is in one-to-one
correspondence with the unitary group.
\[
\Lambda_{n}\simeq{\cal G} / {\cal H}\simeq{\sf U} (n)
\]
\end{lem}
{\it Proof:} The first isomorphism follows from lemma \ref{Can6}. To see the
second isomorphism we use the unitary matrix
\[
W=\frac{1}{\sqrt{2}}\left( \begin{array}{cc}
\mathbb{I} & i\mathbb{I} \\
i\mathbb{I} & \mathbb{I}
\end{array}
\right) .
\]
Our choice of $W$ is motivated by the fact that it diagonalises in the
`blockwise' sense matrices of the form given by equation (\ref{hs1}).
Precisely
\[
WgW^{\star} = W \left( \begin{array}{cc}
A & B \\
-B & A
\end{array} \right) W^{\star}
= \left( \begin{array}{cc}
A-iB & 0 \\
0 & A+iB
\end{array} \right) .
\]
Since $g$ is unitary so is $WgW^{\star}$ and hence,
$A-iB$ and $A+iB$ must also be unitary. \\
Now instead of considering the groups ${\cal G}$ and ${\cal H}$, we consider the
unitarily equivalent groups
\[
\hat{{\cal G}}=W{\cal G} W^{\star}=\left\{ \hat{g}=\left( \begin{array}{cc}
S & 0 \\
0 & T
\end{array}
\right); \; S,T\in{\sf U} (n) \right\}
\]
and, since the elements of ${\cal H}$ are already in block diagonal form,
\[
\hat{{\cal H}}=W{\cal H} W^{\star}=\left\{ \hat{h}=\left( \begin{array}{cc}
C & 0 \\
0 & C
\end{array}
\right); \; C\in{\sf U} (n) \right\} .
\]
It is easy to see that we can represent the set of cosets
$\hat{{\cal G}} / \hat{{\cal H}}$ by the subgroup of $\hat{{\cal G}}$ consisting of
matrices where the bottom right block is of the form $T=\mathbb{I}$, that is
\[
\Lambda_n\simeq\hat{{\cal G}} / \hat{{\cal H}}\simeq
\left\{ \hat{g}=\left( \begin{array}{cc}
U & 0 \\
0 & \mathbb{I}
\end{array}
\right); \; U\in{\sf U} (n) \right\} .
\]
This gives the result. \hspace*{\fill} $\Box$ \\
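The blockwise diagonalisation by $W$ used in the proof can be checked numerically for $n=1$: conjugating $g=\left(\begin{smallmatrix} a & b \\ -b & a\end{smallmatrix}\right)$ by $W$ should produce $\mathrm{diag}(a-ib,\,a+ib)$. A sketch (helper names are ours):

```python
import math

# For n = 1, conjugating g = [[a, b], [-b, a]] by
# W = (1/sqrt(2)) [[1, i], [i, 1]] should give diag(a - ib, a + ib).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(a):
    return [[complex(a[j][i]).conjugate() for j in range(2)] for i in range(2)]

s = 1 / math.sqrt(2)
W = [[s, 1j * s], [1j * s, s]]
a, b = 0.6, 0.8
g = [[a, b], [-b, a]]

res = mat_mul(mat_mul(W, g), conj_transpose(W))
assert abs(res[0][1]) < 1e-12 and abs(res[1][0]) < 1e-12   # off-diagonal vanishes
assert abs(res[0][0] - (a - 1j * b)) < 1e-12
assert abs(res[1][1] - (a + 1j * b)) < 1e-12
```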
\begin{cor}\label{LPcorr}
A given Lagrange plane can be made to coincide with any other
Lagrange plane by means of a hermitian symplectic transformation of the
form
\begin{equation}\label{hs2}
g=W^{\star}\hat{g}W=W^{\star}\left( \begin{array}{cc}
U & 0 \\
0 & \mathbb{I}
\end{array}
\right) W=\frac{1}{2}\left( \begin{array}{cc}
U+\mathbb{I} & i(U-\mathbb{I}) \\
-i(U-\mathbb{I}) & U+\mathbb{I}
\end{array}
\right)
\end{equation}
where $U$ is a unitary matrix.
\end{cor}
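One can also verify numerically, for $n=1$ with $U=e^{i\theta}$, that the matrix $g$ of equation (\ref{hs2}) is hermitian symplectic, $g^{\star}{J}g={J}$. A sketch (helper names are ours):

```python
import cmath

# For n = 1 take U = e^{i theta}; the matrix g of equation (hs2) is
# g = (1/2) [[U + 1, i(U - 1)], [-i(U - 1), U + 1]]
# and should satisfy g^* J g = J.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj_transpose(a):
    return [[complex(a[j][i]).conjugate() for j in range(2)] for i in range(2)]

U = cmath.exp(0.7j)
g = [[(U + 1) / 2, 1j * (U - 1) / 2],
     [-1j * (U - 1) / 2, (U + 1) / 2]]
J = [[0, 1], [-1, 0]]

lhs = mat_mul(mat_mul(conj_transpose(g), J), g)
assert all(abs(lhs[i][j] - J[i][j]) < 1e-12 for i in range(2) for j in range(2))
```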
\section{Extension theory}
Here we consider the extension theory for a symmetric operator ${\cal L}_0$
on a Hilbert space \cite{Akh:Glz, Alb:Kur, Pav, Ree:Sim}. First we
recall some well known facts from operator theory. The domain of the
adjoint operator
${\cal L}^{\star}_0$ can be expressed
\[
\mbox{Dom} ({\cal L}^{\star}_0 )= \mbox{Dom} ({\cal L}_0 ) + {\cal N}_{+i} + {\cal N}_{-i}
\]
where these three subspaces are linearly independent. The eigenspaces
\[
{\cal N}_{\pm i}\equiv \ker ({\cal L}^{\star}_0 \pm i)
\]
are known as the deficiency subspaces and the deficiency indices $(n_+
,n_-)$ are the dimensions of the deficiency subspaces $n_{\pm} \equiv \dim
{\cal N}_{\pm i}$. In what follows we assume $n_{\pm}<\infty$. \\
Typically, the extensions of ${\cal L}_0$ are specified by a unitary map
between the deficiency subspaces \cite{Akh:Glz, Ree:Sim} and self-adjoint
extensions of ${\cal L}_0$ exist precisely when $n_+ =n_-$. Alternatively,
extensions may be described by consideration of the boundary form
\begin{equation}\label{bf0}
{\cal J}(f,g)\equiv ({\cal L}^{\star}_{0}f,g) - (f,{\cal L}^{\star}_{0}g) ,
\end{equation}
where $f,g\in \mbox{Dom} ({\cal L}^{\star}_{0})$---see \cite{Pav} for a detailed
account. The boundary form ${\cal J}(\cdot ,\cdot)$ is actually a
hermitian symplectic form and when restricted to ${\cal N}_{+i} + {\cal N}_{-i}$ is
nondegenerate, defining a hermitian symplectic space (the form is degenerate
on $\mbox{Dom}({\cal L}_0)$, a simple consequence of the fact that ${\cal L}_0$ is
symmetric).
\begin{prop}
The hermitian symplectic space formed by the boundary form
${\cal J}$ on ${\cal N}_{+i} + {\cal N}_{-i}$ is characterised, in the sense of
lemma \ref{charhs}, by the deficiency indices $n_{\pm}$.
\end{prop}
{\it Proof:} Suppose that we have orthonormal bases
$\{ \js{,i}\}^{n_+}_{i=1}$, $\{ \jsb{,i}\}^{n_-}_{i=1}$ for ${\cal N}_{+i}$ and
${\cal N}_{-i}$ respectively. We use these bases to write the boundary form as
a matrix: the cross terms ${\cal J}( \js{,i}, \jsb{,j} )$ vanish, while on
the diagonal blocks
\[
{\cal J}( \js{,i}, \js{,j} ) = 2 i \delta_{ij} , \qquad
{\cal J}( \jsb{,i}, \jsb{,j} ) = - 2 i \delta_{ij} ,
\]
so that $-i\omega$ has $n_+$ positive and $n_-$ negative eigenvalues.
This completes the proof. \hspace*{\fill} $\Box$ \\
In terms of this hermitian symplectic space it is not difficult to see
that the extensions of ${\cal L}_0$ correspond to isotropic subspaces and,
when the space is canonical (i.e. $n_+ =n_-$), that the self-adjoint
extensions correspond to Lagrange planes.
\subsection{The Schr\"{o}dinger operator on the graph with trivial compact
part}
Here we consider the non-compact graph consisting of $n$ semi-axes
connected at a single vertex; we denote such a graph by $\Gamma_n$.
Functions on $\Gamma_n$ may be represented by elements of the Hilbert space
\[
H(\Gamma_n)=\oplus^{n}_{i=1}L^{2}([0,\infty)) .
\]
The elements of $H(\Gamma_n)$ are $n$-dimensional vector functions and
the inner product on $H(\Gamma_n)$ is
\[
(\phi,\psi) = \sum^{n}_{i=1}(\phi_i,\psi_i)_{L^{2}([0,\infty))}
= \sum^{n}_{i=1} \int^{\infty}_0 \bar{\phi}_i (x) \psi_i (x) dx
\]
where $\phi_i$ are the components of $\phi$. \\
Let us consider the symmetric Schr\"{o}dinger operator,
${\cal L}_{0}$ in $H(\Gamma_n)$ which acts on components by
\begin{displaymath}
{\cal L}_{0}\psi_i\equiv -\frac{d^2\psi_i}{dx_i^2}+q_i \psi_i ,
\end{displaymath}
and has domain consisting of the smooth functions
with compact support in the open interval $(0,\infty)$
\begin{displaymath}
D({\cal L}_{0})=\oplus^{n}_{i=1}C^{\infty}_{0}((0,\infty)) .
\end{displaymath}
The potentials $q_i$ are supposed to be continuous real valued functions
which are integrable with finite first moment, i.e.
\begin{equation}\label{bcndxx}
\int^{\infty}_0 (1 + x) \vert q_i(x) \vert dx < \infty .
\end{equation}
It is easy to see that the deficiency indices of ${\cal L}_{0}$ are $(n,n)$.
Consequently we may consider the self-adjoint extensions of ${\cal L}_{0}$ and
indeed, using the results of Neumann extension theory \cite{Akh:Glz}
parameterise these extensions by the unitary matrices ${\sf U}(n)$. \\
The problem of finding self-adjoint {\em boundary conditions} for
such an operator is discussed in detail in
\cite{Kost:Sch,Exn:Seb}. In \cite{Kost:Sch} all self-adjoint boundary
conditions are parameterised non-uniquely in terms of two $n$-th order
matrices, $A$ $B$, such that $(A\, B)$ is of maximal rank and
$AB^{\star}=BA^{\star}$ is hermitian (in this paper the authors consider
graphs with trivial compact part as well as graphs with
non-trivial compact part). \\
Instead, here we will use the discussion of hermitian symplectic spaces
to parameterise all of the self-adjoint boundary conditions at the origin
in terms of a unitary matrix $U$. A simple calculation using integration
by parts shows that the boundary form for the Schr\"{o}dinger operator is
\begin{equation}\label{bf1}
({\cal L}^{\star}_{0}\psi,\phi) - (\psi,{\cal L}^{\star}_{0}\phi) =
\sum^{n}_{i=1}
\left. [ \bar{\psi}_i \phi_{i,x} - \bar{\psi}_{i,x} \phi_i ]\right|_{0} .
\end{equation}
This boundary form may be thought of as acting in the
$2n$-dimensional hermitian symplectic space, $H_{2n}$, of
boundary values at the origin. The boundary form can be written
\[
{\cal J}( \psi , \phi ) = (\psi ,{J}\phi )
\]
where on the right hand side we use the inner product in $\mathbb{C}^{2n}$ and
$\psi$, $\phi$ are vectors in $\mathbb{C}^{2n}$ of the form
\[
( \psi_{1}|_0,\ldots ,\psi_{n}|_0, \psi_{1,x}|_0,\ldots ,\psi_{n,x}|_0
)^T .
\]
Consequently this defines a canonical basis. Let us represent the canonical
basis elements explicitly as $\{\xi_{0,i}\}^{2n}_{i=1}\in H_{2n}$ where for
$i=1,\ldots ,n$, $\xi_{0,i}$ represents the boundary condition
$\left.\psi_{i}\right|_0 =1$; and for $i=n+1,\ldots ,2n$ it represents the
boundary condition $\left.\psi_{i,x}\right|_0 =1$. The first
$n$ and last $n$ elements of a canonical basis each span a Lagrange
plane---the first $n$ basis vectors specify self-adjoint Neumann boundary
conditions, and the last $n$ basis vectors specify self-adjoint Dirichlet
boundary conditions. \\
We fix a unitary matrix $U$ and consider the associated
self-adjoint boundary conditions specifying a Lagrange plane. From
corollary \ref{LPcorr} the basis for the Lagrange plane defined by $U$
is given by
\[
\xi_i = \sum^{2n}_{j=1} g_{ij} \xi_{0,j} \hspace{5mm} \mbox{for}\;
i=1,\ldots ,n ,
\]
where $g$ is defined by equation
(\ref{hs2}). Writing this in terms of boundary values we see that (up
to a transposition) the set of self-adjoint boundary values is
\[
(\psi_{1}|_0,\ldots ,\psi_{n}|_0, \psi_{1,x}|_0,\ldots ,\psi_{n,x}|_0 )^T
\in
\mbox{Ran} \left( \begin{array}{c}
\frac{1}{2}( U+\mathbb{I} )
\vspace{3mm} \\
\frac{i}{2}( U-\mathbb{I} )
\end{array} \right) .
\]
It is convenient to have the self-adjoint boundary
{\em conditions}, i.e. to have an expression in terms of the kernel
rather than the range of a matrix. This is possible if we note that
\[
\mbox{Ran} \left( \begin{array}{c}
\frac{1}{2}( U+\mathbb{I} )
\vspace{3mm} \\
\frac{i}{2}( U-\mathbb{I} )
\end{array} \right) = \ker \left( \frac{i}{2}(U^{\star}-\mathbb{I}), \;
\frac{1}{2}(U^{\star}+\mathbb{I}) \right)
\]
which follows from equation (\ref{krii}) and the fact that both of these
matrices are of rank $n$. Consequently, the boundary conditions may be
expressed
\begin{equation}\label{basicbc}
\frac{i}{2}(U^{\star}-\mathbb{I} ) \left. \psi \right|_0 +
\frac{1}{2}(U^{\star} +\mathbb{I} ) \left. \psi_{x} \right|_0 = 0 .
\end{equation}
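As a quick numerical sanity check (a sketch using NumPy; the dimension $n=3$ and the random unitary are arbitrary choices, not from the text), one can verify that $A=\frac{1}{2}(U+\mathbb{I})$ and $B=\frac{i}{2}(U-\mathbb{I})$ span an $n$-dimensional plane on which the boundary form vanishes, and that the columns of $(A;B)$ satisfy the boundary conditions above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# a random unitary matrix via QR decomposition of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
U = Q
I = np.eye(n)

A = 0.5 * (U + I)        # boundary values on the plane
B = 0.5j * (U - I)       # boundary derivatives on the plane

# the boundary form vanishes on the plane: A*B = B*A (Lagrange-plane condition)
assert np.allclose(A.conj().T @ B, B.conj().T @ A)
# each column of (A; B) satisfies the boundary conditions of the text
assert np.allclose(0.5j * (U.conj().T - I) @ A + 0.5 * (U.conj().T + I) @ B, 0)
# (A B) has maximal rank n, so the plane is n-dimensional
assert np.linalg.matrix_rank(np.hstack([A, B])) == n
```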
In the remainder of this subsection we will discuss how the matrix $U$,
used to describe the boundary conditions, appears in the asymptotics of
the scattering matrix. It is convenient to consider the
Schr\"{o}dinger operator on the graph with $n$ rays as a matrix
operator, with diagonal potential, see \cite{Har,Har1}. Let us consider the
matrix of $n$ solutions of the Schr\"{o}dinger equation ${\cal L}\Xi = \lambda\Xi$
on the graph satisfying the following boundary conditions at the origin
\begin{equation}\label{eSwo}
\left. \Xi \right|_0 = \frac{1}{2}( U + \mathbb{I} ) \equiv A ,\qquad
\left. \Xi_x \right|_0 = \frac{i}{2}( U -\mathbb{I} ) \equiv B .
\end{equation}
It is clear, from equation (\ref{krii}), that each column of
$\Xi$ satisfies the self-adjoint boundary conditions, i.e. equation
(\ref{basicbc}), and hence is (formally) an eigenfunction of the self-adjoint
Schr\"{o}dinger operator on the graph with boundary conditions prescribed by
$U$. \\
Likewise we can define the Jost solutions,
$\Jsd{}$, as the matrix of solutions of the homogeneous equation
${\cal L}\Jsd{} =\lambda\Jsd{}$, with asymptotic behaviour
\[
\lim_{x\rightarrow\infty}\Jsd{}(x,k) \sim e^{\pm ikx}\mathbb{I} .
\]
We denote $\lambda=k^2$. As the Jost solutions form a complete set of
solutions we can write
\begin{equation}\label{eSwJs}
\Xi (x,k) = \Jsb{}(x,k)\Mb{}{}(k) + \Js{}(x,k)\M{}{}(k) .
\end{equation}
In this notation we define the scattering wave solutions
\begin{eqnarray*}
\Psi (x,k) \equiv \Xi (x,k) \Mb{-1}{} = \Jsb{} + \Js{} S(k)
\end{eqnarray*}
where $S(k)$ is known as the scattering matrix. The
coefficients $\Md{}{}$ can be evaluated by taking the Wronskian of $\Xi$
and $\Js{}$ or $\Jsb{}$ \cite{Har}
\begin{equation}\label{sdmi}
\Md{}{} = \pm\frac{1}{2ik}\left[ \Jfad{} B - \Jfad{,x} A \right] ,
\end{equation}
where $\Jfd{}(k)\equiv\Jsd{}(0,k)$ are known as the Jost functions and
$\mbox{}^{\dagger}$ is the involution
$Y^{\dagger} (x,k)\equiv Y^{\star}(x,\bar{k})$.
The Wronskian of $\Xi^{\dagger}$ and $\Xi$
\[
W\{\Xi^{\dagger} ,\Xi \} = \left. \left[ \Xi^{\dagger} \Xi_x - \Xi^{\dagger}_x \Xi \right]
\right|_0 = A^{\star} B - B^{\star} A = 0 ,
\]
is always zero. Moreover, if we write
$\Xi$ in terms of the scattering wave solutions
\[
W\{\Xi^{\dagger} ,\Xi \} = \Ma{}{} W\{\Jsa{} + S^{\dagger} \Jsab{}
,\Jsb{} + \Js{} S \} \Mb{}{} = 2ik \Ma{}{} \left[ -\mathbb{I} + S^{\dagger} S
\right] \Mb{}{} ,
\]
we see, since $S^{\dagger}=S^{\star}$ for $k\in\mathbb{R}$, that
the scattering matrix is unitary for real $k$. \\
If we diagonalise $U$, and use the well-known asymptotics of the Jost
functions \cite{Agr:Mar,Har} in the above expression for $\Md{}{}$, we
see that the scattering matrix has the following asymptotic behaviour:
\begin{lem}\label{umhu}
Given the self-adjoint operator ${\cal L}$, with associated unitary matrix $U$
defining the boundary conditions of ${\cal L}$, the scattering matrix of ${\cal L}$
has the asymptotics
\[
\lim_{k\rightarrow\infty} S(k) \sim \hat{U}
\]
where $\hat{U}$ is a unitary hermitian matrix derived from $U$ by applying
the map
\[
z\mapsto \left\{ \begin{array}{cc}
1 & : z\in\mathbb{T} \setminus \{-1\} \\
-1 & : z = -1
\end{array} \right.
\]
to the spectrum of $U$. Here $\mathbb{T}$ is the unit circle in $\mathbb{C}$.
\end{lem}
{\it Proof:} Let us diagonalise the matrix $U$. In this basis, using
equation (\ref{sdmi}) and the asymptotics of the Jost
functions, the scattering matrix approaches
\[
\lim_{k\rightarrow\infty} - \left[ (e^{i\varphi_j} - 1 ) + k
(e^{i\varphi_j} + 1) \right]
\left[ (e^{i\varphi_j} - 1 ) - k (e^{i\varphi_j} + 1 )\right]^{-1}
\]
in the limit of large $k$. Here the $e^{i\varphi_j}$ are the unitary
eigenvalues of $U$. There are two cases; when $e^{i\varphi_j}=-1$, this
limit is $-1$, and when $e^{i\varphi_j}\neq-1$ the limit is
$1$. \hspace*{\fill} $\Box$ \\
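The eigenvalue-wise limit used in the proof is easy to confirm numerically (a sketch; the sample eigenphase $\varphi=0.3$ and the cutoff $k=10^{8}$ are arbitrary choices):

```python
import numpy as np

# S(k) eigenvalue for a unitary eigenvalue e^{i phi} of U, from the proof:
#   s(k) = -[(e^{i phi}-1) + k(e^{i phi}+1)] / [(e^{i phi}-1) - k(e^{i phi}+1)]
def s(phi, k):
    z = np.exp(1j * phi)
    return -((z - 1) + k * (z + 1)) / ((z - 1) - k * (z + 1))

k = 1e8
assert abs(s(0.3, k) - 1) < 1e-6      # e^{i phi} != -1  ->  limit +1
assert abs(s(np.pi, k) + 1) < 1e-6    # e^{i phi} == -1  ->  limit -1
```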
We note that boundary conditions defined by unitary matrices which are
in addition hermitian can be expressed in terms of projections---the
terms $\frac{1}{2}(U \pm \mathbb{I})$ are
really orthogonal projections
\[
P = \frac{1}{2}\left( U+\mathbb{I}\right),\qquad
P^{\perp} = \mathbb{I} - P = -\frac{1}{2}\left( U-\mathbb{I}\right) ,
\]
which follows simply from the fact that $U=U^{\star}=U^{-1}$. Using this
notation and orthogonality we can write the
boundary conditions, equation (\ref{basicbc}), as
\begin{equation}\label{Pbc}
P^{\perp} \left. \psi \right|_0 = 0, \qquad
P \left. \psi_x \right|_0 = 0 .
\end{equation}
Consequently these boundary conditions are characterised by the fact
that the conditions on the functions and the derivatives
of the functions at the origin are independently specified. \\
The associated scattering matrix has the form
\begin{equation}\label{Psm}
S(k) = - \left[ i \Jfab{} P^{\perp} + \Jfab{,x} P \right]
\left[ i \Jfa{} P^{\perp} + \Jfa{,x} P \right]^{-1} .
\end{equation}
In the case of zero potential so that the Jost solutions are
exponential functions we see that the
scattering matrix is constant
\begin{equation}\label{sh}
S(k) = - \left[ P^{\perp} - k P \right]
\left[ P^{\perp} + k P \right]^{-1} = - P^{\perp} + P = U .
\end{equation}
Therefore the scattering wave has no poles and
there are no discrete eigenvalues. \\
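The constant-scattering identity $S(k)=U$ for hermitian unitary $U$ can be confirmed numerically (a sketch; the rank-2 projection on $\mathbb{C}^{4}$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
X = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
Q, _ = np.linalg.qr(X)
P = Q @ Q.conj().T              # rank-2 orthogonal projection
Pp = np.eye(n) - P              # its orthogonal complement
U = P - Pp                      # hermitian unitary matrix

for k in (0.5, 1.0, 7.3):
    # S(k) = -(P_perp - k P)(P_perp + k P)^{-1} collapses to P - P_perp = U
    S = -(Pp - k * P) @ np.linalg.inv(Pp + k * P)
    assert np.allclose(S, U)
```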
In contrast if $U$ is not hermitian we will have discrete
eigenvalues, or alternatively resonances, when the potential is, apart
from at the origin, identically zero. This reproduces all cases---like,
for instance, a $\delta$- or $\delta^{\prime}$-interaction at the
origin---in which bound states or resonances appear for a zero-range
potential.
\section*{Acknowledgements}
The author would like to thank Prof B.S. Pavlov for his advice and
many useful conversations.
% quant-ph/0703228
\section{Introduction}
The integration of quantum mechanics and information theory gave
birth to the theory of quantum information. As another fundamental
part of modern physics theories, relativity theory also has
significant interrelationship with quantum mechanics and information
theory \cite{PTRMP,Terno}. One intriguing example is the
relativistic thermodynamics, which was renewed when quantum
properties of black holes \cite{qpbh} were discovered. The
thermodynamics of moving bodies \cite{RTMB} demonstrates that
probability distributions, which are relevant to Shannon entropy
information, depend on the inertial frame. Most recently, the
relationship of relativity theory to quantum information theory has
attracted increasing interest. Since Peres, Scudo and Terno found
that the spin von Neumann entropy is not Lorentz invariant
\cite{PST}, the effects of Lorentz boosts on quantum states and then
quantum entanglement
\cite{Czachor,Milburn,Pachos,Adami,Terno1,Alsing,Mann,Lamata} have
been widely investigated. Most of these works about relativistic
quantum information theory (RQIT) concentrated on the quantum state
itself. RQIT may be necessary in future practical experiments. It has
been shown that the fidelity of quantum teleportation with a
uniformly accelerated partner is reduced due to Davies-Unruh
radiation \cite{Alsing}. Other possible applications include quantum
clock synchronization \cite{Clock}, quantum-enhanced communication
\cite{EEC,Wilczewski} and global positioning \cite{global}.
Quantum decoherence is closely related to several fundamental
problems in quantum mechanics, e.g. quantum measurement and quantum
to classical transition \cite{Zurek4}. Therefore, the investigation
of decoherence in combination with special relativity is naturally
of interest and importance \cite{Breuer}, which may help gain new
insights into the fundamental issues of modern theoretic physics. It
is known that quantum systems coupled with an external environment
will suffer from inevitable decoherence, which is the most severe
obstacle to implementing quantum computation. A famous strategy to
fight against decoherence is quantum dynamical decoupling by
applying different control operations, including spin echo and
bang-bang control \cite{decoupling,spin-echo}.
Given a single spin-$1/2$ Dirac electron with the rest mass $m>0$,
we could realize the qubit by the spin up and down along the
$\hat{\mathbf{z}}$ direction. If the Dirac electron is coupled with
a noisy magnetic environment, the coherence of the spin-qubit will
be lost. In conventional quantum information theory (CQIT), people
always assume that the central quantum system is at rest. In this
paper, we present the time evolution of the spin state of a
\textit{moving} Dirac electron. It can be seen that the decoherence
properties are significantly modified by special relativity. We
demonstrate that the dressed environment induced by special
relativity will suppress the spin decoherence. One should note that,
the problem we consider here is different from the gravitational
decoherence \cite{GD}, which results from quantum metric
fluctuations and Unruh effect.
The structure of this paper is as follows. In Sec. II we demonstrate
the spin dephasing of a rest electron. In Sec. III the influences
of special relativity on the spin decoherence properties are
investigated by presenting its dynamical behavior in the
operator-sum representation form. Sec. IV contains conclusions and
some discussion.
\section{Spin dephasing of a rest electron}
We start by considering the decoherence process for the spin degree
of freedom of an electron with a background magnetic noise. In CQIT,
the electron is always assumed to be at rest. Since the spin
magnetic moment of the electron is $\mu =\frac{e\hslash }{2mc}$,
where $e$ is the magnitude of the electronic charge, its interaction
with a magnetic field $\mathbf{B}=\nabla \times \mathbf{A}$ is
described by the following simple Hamiltonian
\begin{equation}
H_{I}=\mu \hat{\mathbf{\sigma }}\mathbf{\cdot B}
\end{equation}%
where $\hat{\mathbf{\sigma }}=(\sigma _{x},\sigma _{y},\sigma _{z})$ are
Pauli matrices. If the noisy background magnetic field is $\mathbf{B=}B\hat{%
\mathbf{z}}$ in the $\hat{\mathbf{z}}$ direction with Gaussian probability
distribution $\eta (B)=\exp (-B^{2}/2\vartheta ^{2})/\sqrt{2\pi }\vartheta $,
and under quasi-static approximation \cite{Ithier}, we can write the spin
state of the electron at time $t$ as a completely positive map with an
operator-sum representation
\begin{equation}
\mathcal{E}(\rho )=\lambda _{0}(t)\rho +\lambda _{1}(t)\sigma _{z}\rho
\sigma _{z}
\end{equation}%
where $\rho $ is the initial spin state, and the parameters $\lambda
_{0}(t)+\lambda _{1}(t)=$ $1$, $\lambda _{0}(t)-\lambda _{1}(t)=$ $%
e^{-\gamma t^{2}}=\int_{-\infty }^{\infty }e^{-i2\mu Bt}\eta (B)dB$ with $%
\gamma =2\vartheta ^{2}\mu ^{2}$. Here we have set $\hbar =1$ for
simplicity. This kind of decoherence model, named dephasing, is very
important and has been widely investigated in CQIT \cite{Zurek3}. In the
dephasing process, the diagonal elements of the spin density matrix remain
unchanged. The decoherence is reflected by the off-diagonal elements, which
will decay exponentially as $\rho _{\uparrow \downarrow }(t)=\rho _{\uparrow
\downarrow }e^{-\gamma t^{2}}$ until $\rho _{\uparrow \downarrow
}(t)\rightarrow 0$ in the long time limit $\gamma t^{2}\gg 1$, i.e. the spin
state becomes classically mixed.
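The Gaussian average behind $e^{-\gamma t^{2}}$, namely $\int_{-\infty}^{\infty}e^{-i2\mu Bt}\eta(B)\,dB=e^{-2\vartheta^{2}\mu^{2}t^{2}}$, can be checked with Gauss--Hermite quadrature (a sketch; the values of $\mu$ and $\vartheta$ are arbitrary assumptions):

```python
import numpy as np

mu, vth = 1.3, 0.7                 # assumed values of mu and vartheta
gamma = 2 * vth**2 * mu**2
x, w = np.polynomial.hermite.hermgauss(60)   # nodes for weight e^{-x^2}
for t in (0.2, 0.8, 1.5):
    # substitute B = sqrt(2) * vth * x to map N(0, vth^2) onto the GH weight
    avg = np.sum(w * np.exp(-2j * mu * np.sqrt(2) * vth * x * t)) / np.sqrt(np.pi)
    assert abs(avg - np.exp(-gamma * t**2)) < 1e-10
```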
\begin{figure}[tbh]
\epsfig{file=figure1.eps,width=7cm}
\caption{(Color online) A spin-$1/2$ Dirac electron, at rest in the moving
inertial frame $R_{S}$ with the velocity $\mathbf{v}$, is coupled with the
background magnetic noise in the $\hat{\mathbf{z}}$ direction of the rest
frame $R_{E}$.}
\end{figure}
\section{Spin decoherence of a moving electron}
To deal with the relativistic Dirac electron moving at a constant
velocity $\mathbf{v}=(v\sin \theta \cos \varphi ,v\sin \theta \sin
\varphi ,v\cos \theta )$ relative to the rest frame $R_{E}$, we
should adopt the Dirac equation for the electron in external
homogeneous static fields. After choosing a suitable reference
frame $R$, the Dirac Hamiltonian in Foldy-Wouthuysen representation \cite%
{Greiner} is
\begin{eqnarray}
H_{D} &=&H_{p}+H_{SB} \notag \\
H_{p} &=&mc^{2}+\frac{1}{2m}(\hat{\mathbf{p}}+\frac{e}{c}\mathbf{A})^{2}-%
\frac{\hat{\mathbf{p}}^{4}}{8m^{3}c^{2}} \\
H_{SB} &=&\mu \hat{\mathbf{\sigma }}\mathbf{\cdot B}-\frac{\mu }{2mc}\hat{%
\mathbf{\sigma }}\mathbf{\cdot }(\mathbf{E\times }\hat{\mathbf{p}}) \notag
\end{eqnarray}%
where $\hat{\mathbf{p}}=-i\hslash \nabla $, and $\mathbf{E=-\nabla \phi }$
represents the electric field. The above Hamiltonian is a non-relativistic
expansion to order $v_{p}^{2}/c^{2}$, where $v_{p}$ is the relative velocity
of the electron in the reference frame $R$. It is convenient for us to
investigate the dynamical properties of the Dirac electron in the moving
inertial frame $R_{S}$ with the velocity $\mathbf{v}$ relative to the rest
frame $R_{E}$, in which the electron is at rest, i.e. $v_{p}=0$. By virtue
of the Foldy-Wouthuysen representation, we only need to consider the
positive energy states. For the spin and momentum eigenstate $|\mathbf{p}%
,s\rangle $, of which $|\mathbf{p}\rangle \sim e^{i\mathbf{p\cdot r}}$, $%
H_{p}e^{i\mathbf{p\cdot r}}=\varepsilon _{p}e^{i\mathbf{p\cdot r}}$. After
time $t$, the evolution of the state is $|\mathbf{p},s\rangle \rightarrow
e^{i\phi _{p}(t)}|\mathbf{p}\rangle e^{-iH_{SB}t}|\mathbf{s}\rangle $. The
momentum phase $\phi _{p}(t)$ becomes trivial when tracing out the momentum
degree of freedom.
In the rest frame $R_{E}$, the magnetic field is $\mathbf{B=}B\hat{\mathbf{z}%
}$ in the $\hat{\mathbf{z}}$ direction, thus the fields viewed in the moving
frame $R_{S}$ can be obtained according to the Lorentz transformations \cite%
{Greiner2} as follows
\begin{eqnarray}
E_{\bot }^{\prime } &=&\cosh \xi (E_{\bot }+\frac{\mathbf{v}}{c}\times
\mathbf{B})_{\bot },\text{ \ \ }E_{\shortparallel }^{\prime
}=E_{\shortparallel } \notag \\
B_{\bot }^{\prime } &=&\cosh \xi (B_{\bot }-\frac{\mathbf{v}}{c}\times
\mathbf{E})_{\bot },\text{ \ \ }B_{\shortparallel }^{\prime
}=B_{\shortparallel }
\end{eqnarray}%
where $\shortparallel $ and $\bot $ mean parallel and perpendicular to $%
\mathbf{v}$, the rapidity $\xi $ is defined as $\cosh \xi
=1/(1-v^{2}/c^{2})^{1/2}$. After some straightforward calculations, and noting
that $\mathbf{p=0}$, we get the effective Hamiltonian for the spin degree of
freedom of the Dirac electron $H_{SB}=\mu \hat{\mathbf{\sigma }}\mathbf{%
\cdot B}^{\prime }$, where $B_{x}^{\prime }=B(1-\cosh \xi )\cos \theta \sin
\theta \cos \varphi $, $B_{y}^{^{\prime }}=B(1-\cosh \xi )\cos \theta \sin
\theta \sin \varphi $, and $B_{z}^{^{\prime }}=B(\cos ^{2}\theta +\cosh \xi
\sin ^{2}\theta )$. The above effective interaction Hamiltonian means that
from the viewpoint of the moving Dirac electron, the environment is
different compared to the situation when the electron is at rest due to the
relativity of the magnetic field. In the following, we will investigate in
detail how this kind of effects will modify the dynamical properties of the
spin decoherence.
The interaction with the external magnetic field $\mathbf{B}^{\prime }$ will
introduce a rotation on the spin by $\delta $ about the $\hat{\mathbf{n}}%
=(n_{x},n_{y},n_{z})$ axis as
\begin{equation}
U(B,t)=\exp (-i\frac{\delta }{2}\hat{\mathbf{\sigma }}\mathbf{\cdot }\hat{%
\mathbf{n}}) \label{RT}
\end{equation}%
where $n_{i}=B_{i}^{\prime }/B^{^{\prime }}$, $i=x,y,z$, with $B^{^{\prime
}}=(B_{x}^{\prime 2}+B_{y}^{\prime 2}+B_{z}^{\prime 2})^{1/2}=\kappa B$, $%
\kappa =(\cos ^{2}\theta +\cosh ^{2}\xi \sin ^{2}\theta )^{1/2}$, and the
rotation angle is $\delta =2\mu tB^{^{\prime }}=2\kappa \mu tB$. In a
similar way, the noisy background magnetic field $B$ has a Gaussian
probability distribution, and under quasi-static approximation \cite{Ithier}%
, we can write the spin state of the moving Dirac electron at time $t$ as%
\begin{equation}
\rho (t)=\int_{-\infty }^{\infty }\rho (B,t)\eta (B)dB
\end{equation}%
where $\rho (B,t)=U(B,t)\rho U^{\dag }(B,t)$ and $\rho $ denotes the initial
spin state.
We first examine the diagonal elements by calculating $\rho _{\uparrow
\uparrow }(t)$. According to Eq.(\ref{RT}), it is easy for us to write $\rho
_{\uparrow \uparrow }(B,t)=\rho _{\uparrow \uparrow }-2\Delta \rho
_{\uparrow \uparrow }\sin ^{2}\frac{\delta }{2}+\frac{i}{2}(\rho _{\uparrow
\downarrow }e^{i\varphi }-\rho _{\downarrow \uparrow }e^{-i\varphi
})(n_{x}^{2}+n_{y}^{2})^{1/2}\sin \delta $, where $\Delta \rho _{\uparrow
\uparrow }=\frac{1}{2}[\eta (\rho _{\uparrow \uparrow }-\rho _{\downarrow
\downarrow })-\chi (\rho _{\uparrow \downarrow }e^{i\varphi }+\rho
_{\downarrow \uparrow }e^{-i\varphi })]$ with two modulation factors defined
as $\eta =(1-n_{z}^{2})$ and $\chi =n_{z}(n_{x}^{2}+n_{y}^{2})^{1/2}$. After
integrating over the magnetic field $B$ with Gaussian probability
distribution, we obtain
\begin{equation}
\rho _{\uparrow \uparrow }(t)=\rho _{\uparrow \uparrow }-\Delta \rho
_{\uparrow \uparrow }(1-e^{-\gamma ^{\prime }t^{2}}) \label{DE}
\end{equation}%
where $\gamma ^{\prime }=\kappa ^{2}\gamma =2\kappa ^{2}\vartheta ^{2}\mu
^{2}$. The first term is just the same as the dephasing process in CQIT,
i.e. $v=0$. However, the second item in Eq.(\ref{DE}) indicates that the
diagonal elements will change with time, which is a different decoherence
source introduced by the effects of special relativity. In the long time
limit $\gamma ^{\prime }t^{2}\gg 1$, the spin up population will decrease by $%
\Delta \rho _{\uparrow \uparrow }$.
Now we turn to the off-diagonal elements; in a similar way we can obtain $%
\rho _{\uparrow \downarrow }(B,t)=\rho _{\uparrow \downarrow }e^{-i\delta
}+2\Delta \rho _{\uparrow \downarrow }\sin ^{2}\frac{\delta }{2}+\frac{i}{2}%
[(\rho _{\uparrow \uparrow }-\rho _{\downarrow \downarrow
})(n_{x}-in_{y})+2\rho _{\uparrow \downarrow }(1-n_{z})]\sin \delta $, where
$\Delta \rho _{\uparrow \downarrow }=\frac{1}{2}[\chi (\rho _{\uparrow
\uparrow }-\rho _{\downarrow \downarrow })e^{-i\varphi }+\eta (\rho
_{\uparrow \downarrow }+\rho _{\downarrow \uparrow }e^{-i2\varphi })]$.
After integrating over the Gaussian magnetic field $B$, we obtain
\begin{equation}
\rho _{\uparrow \downarrow }(t)=\rho _{\uparrow \downarrow }e^{-\gamma
^{\prime }t^{2}}+\Delta \rho _{\uparrow \downarrow }(1-e^{-\gamma ^{\prime
}t^{2}}) \label{NDE}
\end{equation}%
The first term of $\rho _{\uparrow \downarrow }(t)$ has a similar form of
exponential decay, except that the decay rate changes to $\gamma ^{\prime
}=\kappa ^{2}\gamma \geq \gamma $. The effects of special relativity are also
reflected by the second term, which implies that when $\gamma ^{\prime
}t^{2}\gg 1$, $\rho _{\uparrow \downarrow }(t)\rightarrow \Delta \rho
_{\uparrow \downarrow }$, a nonzero saturation value, i.e.
the off-diagonal elements will not vanish.
From the evolution of diagonal and off-diagonal elements in Eqs.(\ref{DE},%
\ref{NDE}), we can establish a physical picture of the above analyses by
expressing the spin density matrix at time $t$ in the operator-sum
representation%
\begin{equation}
\mathcal{E}_{m}(\rho )=p_{0}\rho +p_{1}\sigma _{z}\rho \sigma
_{z}-\varepsilon \sigma _{z}\rho \sigma _{z}+\sum\limits_{i=1}^{2}F_{i}\rho
F_{i}^{\dag } \label{OSR}
\end{equation}
where $p_{0}=(1+e^{-\gamma ^{\prime }t^{2}})/2$, $p_{1}=(1-e^{-\gamma
^{\prime }t^{2}})/2$ and $\varepsilon =p_{1}(\eta +\chi )$. The operators $%
\left\{ F_{i}\right\} $ are $F_{1}=[p_{1}(\eta -\chi )]^{1/2}(\cos
\varphi \sigma _{x}+\sin \varphi \sigma _{y})$ and $F_{2}=(p_{1}\chi
)^{1/2}(\sigma _{z}+\cos \varphi \sigma _{x}+\sin \varphi \sigma
_{y})$. If the velocity of the electron $v=0$, the modulation factor
$\eta =\chi =0$, and the above results reduce to the pure dephasing
of a rest spin. When we consider a moving electron, the first two
terms in Eq.(\ref{OSR}) are similar to pure dephasing. However, the
decay rate of the off-diagonal elements is amplified by the factor
$\kappa ^{2}\geq 1$. The suppression of dephasing stems from the
third term $-\varepsilon \sigma _{z}\rho \sigma _{z}$. The other two
operators $F_{1}$ and $F_{2}$ in Eq.(\ref{OSR}) represent different
decoherence mechanisms which make the spin suffer from decoherence
other than pure dephasing. We could also formulate the
evolution of the spin state of the moving electron as in the dressed
environment
\begin{widetext}
\begin{equation}
\rho (t)=V^{\dagger }\left\{ \int_{-\infty }^{\infty }[\exp (-i\frac{\delta
_{0}}{2}\sigma _{z})]^{\kappa }(V\rho V^{\dagger })[\exp (i\frac{\delta _{0}%
}{2}\sigma _{z})]^{\kappa }\eta (B)dB\right\} V
\end{equation}
\end{widetext}
where $V=\exp [-i\frac{\phi }{2}\hat{\mathbf{\sigma }}\mathbf{\cdot }(\hat{%
\mathbf{n}}\times \hat{\mathbf{z}})]$ is the dressing
transformation, $\phi $ is the angle between the axes
$\hat{\mathbf{n}}$ and $\hat{\mathbf{z}}$. From the viewpoint of
qubit, after the control operation $V$, the initial state $\rho$
will undergo pure dephasing in the dressed Hilbert space, the final
reverse operation $V^{\dagger}$ will recover some coherence
information in the original Hilbert space.
\begin{figure}[tbh]
\epsfig{file=figure2.eps,width=7cm}
\caption{{}(Color online) Decoherence modulation factor as a function of the
moving rapidity and angle: $\protect\eta $\textit{\ vs. }$\protect\xi $ and $%
\protect\theta $. }
\end{figure}
The modulation factors $\eta $ and $\chi $ are the key ingredients
that characterize the influence of special relativity. Fig 2 shows
the modulation factor $\eta $ as a function of the rapidity $\xi $
and $\theta $. For a given rapidity $\xi $, the modulation factor
$\eta =(\cosh \xi -1)^{2}(1-\cos ^{2}2\theta )/2[(\cosh ^{2}\xi
+1)-(\cosh ^{2}\xi -1)\cos
2\theta ]$. We are interested in the maximum value of the modulation factor $%
\eta $. Thus we consider the first order partial differential equation $%
\partial \eta /\partial (\cos 2\theta )=0$, which leads to $\cos 2\theta
=(\cosh \xi -1)/(\cosh \xi +1)$, and the corresponding maximum value of $%
\eta $ is%
\begin{equation}
\eta _{\max }=(\frac{\cosh \xi -1}{\cosh \xi +1})^{2}
\end{equation}%
We plot the maximum value $\eta _{\max }$ for various rapidity $\xi $ in the
following Fig 3(a). It can be seen that $\eta _{\max }$ always increases
monotonically as the rapidity $\xi $ grows. In the limit $\xi \rightarrow
\infty $, i.e. the velocity is close to the light velocity $v\rightarrow c$,
the maximum modulation factor $\eta _{\max }$ reaches the constant value $1$%
.
\begin{figure}[tbh]
\epsfig{file=figure3.eps,width=8.5cm}
\caption{(Color online) \textit{a.} Maximum value of decoherence modulation
factor as a function of rapidity: $\protect\eta _{\max }$ \emph{vs.} $%
\protect\xi $; \textit{b. }Off-diagonal element as a function of time: $%
\protect\rho _{\uparrow \downarrow }(t)$ \textit{vs. }$\protect\gamma t^{2}$%
, and the rapidity $\protect\xi =2.5$ (Solid), $\protect\xi =0$ (Dashed).
The initial spin state is $|\protect\psi \rangle =\frac{1}{\protect\sqrt{2}}%
(|\uparrow \rangle +|\downarrow \rangle )$.}
\end{figure}
For the angle $\theta $ of the velocity $v$ that maximizes the value of $\eta
$, the corresponding value of the other modulation factor $\chi
=n_{z}(n_{x}^{2}+n_{y}^{2})^{1/2}$ is $\chi _{m}=2\cosh ^{1/2}\xi (\cosh \xi
-1)/(\cosh \xi +1)^{2}$. For a relativistic Dirac electron,
i.e. when the velocity $v$ is comparable to the light velocity $c$, $\chi _{m}\ll
\eta _{\max }$, so $\chi _{m}$ can be neglected.
\emph{Example }To explicitly demonstrate the influence of special relativity
on the spin decoherence of a moving Dirac electron, we consider a spin qubit
in a fully coherent initial state $|\psi \rangle =\frac{1}{\sqrt{2}}%
(|\uparrow \rangle +|\downarrow \rangle )$. We could have chosen a more
general initial spin state with different relative amplitudes. However, the
above state gives rise to all interesting physical features in the situation
considered here.
We set the velocity angle $\varphi =0$ to maximize the modulus of the
saturation value of off-diagonal elements, thus
\begin{eqnarray}
\rho _{\uparrow \uparrow }(t) &=&\frac{1}{2}[1+\chi (1-e^{-\gamma ^{\prime
}t^{2}})] \\
\rho _{\uparrow \downarrow }(t) &=&\frac{1}{2}[(1-\eta )e^{-\gamma ^{\prime
}t^{2}}+\eta ]
\end{eqnarray}%
We plot the decay of the off-diagonal elements of the spin density matrix in
Fig 3(b), and compare two cases when the rapidity $\xi =2.5$ and $\xi =0$.
The velocity angle $\theta $ is set to maximize the modulation factor $\eta $%
.
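These closed forms for the example can be reproduced numerically by averaging the rotated density matrix over the Gaussian field with Gauss--Hermite quadrature (a sketch; the values $\mu=1$, $\vartheta=0.5$ are assumed, with $\xi=2.5$, $\varphi=0$, the $\eta$-maximising $\theta$, and initial state $(|\uparrow\rangle+|\downarrow\rangle)/\sqrt{2}$ as in the text):

```python
import numpy as np

xi, mu, vth = 2.5, 1.0, 0.5                 # mu, vartheta are assumed values
c = np.cosh(xi)
cos2t = (c - 1) / (c + 1)                   # angle maximising eta
ct2, st2 = (1 + cos2t) / 2, (1 - cos2t) / 2 # cos^2(theta), sin^2(theta)
kappa = np.sqrt(ct2 + c**2 * st2)
nx = (1 - c) * np.sqrt(ct2 * st2) / kappa   # rotation axis (n_y = 0 for phi = 0)
nz = (ct2 + c * st2) / kappa
eta = 1 - nz**2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|

x, w = np.polynomial.hermite.hermgauss(80)  # quadrature for B ~ N(0, vth^2)

def rho_offdiag(t):
    """Average the rotated state U(B,t) rho U(B,t)^dag over the Gaussian field."""
    acc = 0j
    for xn, wn in zip(x, w):
        B = np.sqrt(2) * vth * xn
        a = kappa * mu * t * B              # half rotation angle, delta/2
        U = np.cos(a) * np.eye(2) - 1j * np.sin(a) * (nx * sx + nz * sz)
        acc += wn / np.sqrt(np.pi) * (U @ rho0 @ U.conj().T)[0, 1]
    return acc

gamma_p = 2 * kappa**2 * vth**2 * mu**2
for t in (0.5, 1.0, 3.0):
    closed = 0.5 * ((1 - eta) * np.exp(-gamma_p * t**2) + eta)
    assert abs(rho_offdiag(t) - closed) < 1e-8
```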
In the short time region, the off-diagonal elements decay much faster, which
is due to the amplification of $\gamma \rightarrow \gamma ^{\prime }\ $by
the factor $\kappa ^{2}=6.13229>1$. However, at longer times, the
decay of the off-diagonal elements will be much suppressed. In particular,
if the time is sufficiently long, for $\xi =0$, the spin degree of freedom
becomes completely classical; while for $\xi =2.5$, the off-diagonal
elements will reach a nonzero saturation value $\eta _{\max }/2=0.2589$, and
the spin state becomes steady, which suggests that the decoherence process
seems to halt.
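The quoted numbers follow from $\cosh (2.5)$: one can check that at the $\eta$-maximising angle the factor $\kappa^{2}=\cos^{2}\theta+\cosh^{2}\xi\sin^{2}\theta$ collapses to $\cosh\xi$ exactly. A quick check:

```python
import math

xi = 2.5
c = math.cosh(xi)
cos2t = (c - 1) / (c + 1)                 # eta-maximising angle
kappa2 = (1 + cos2t) / 2 + c**2 * (1 - cos2t) / 2
assert abs(kappa2 - c) < 1e-12            # kappa^2 = cosh(xi) at this angle
print(round(kappa2, 5))                   # 6.13229, the amplification factor
eta_max = ((c - 1) / (c + 1)) ** 2
print(round(eta_max / 2, 4))              # 0.2589, the saturation value
```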
The above discussions are extensible to the situation of several
spin-$1/2$ Dirac electrons coupled with one common Gaussian
background magnetic noise, i.e.
$H_{SB}=\mu\sum\limits_{i}\hat{\mathbf{\sigma}}^{i}\mathbf{\cdot B}$
\cite{Braun,cai2}. For example, we consider two electrons in the spin entangled state $%
|\psi\rangle =\frac{1}{\sqrt{2}}(|\uparrow\uparrow \rangle +|\downarrow
\downarrow \rangle )$ moving at the same velocity.
If the electrons are at rest, the two-qubit entanglement quantified
by the concurrence will decay exponentially as $\mathcal{C}(t)=\exp
(-4\gamma t^{2}) $ \cite{Lidar,cai}. In order to highlight the
effects of special relativity, we assume that $v\cong c$, thus the
modulation factors are $\eta \cong 1$ and $\chi \cong 0$. After
simple calculations, it is easy to obtain the evolution of the
two-qubit entanglement as $\mathcal{C}^{\prime }(t)=\exp (-4\gamma
^{\prime }t^{2})$. Therefore, unlike the situation of a
single electron, the effects of special relativity will make $\mathcal{C}%
^{\prime }(t)\rightarrow 0$ much more quickly.
\section{Conclusions}
The existing research in the field of RQIT focused on the effects of
special relativity on the \textit{static} properties of quantum
states, e.g. Von Neumann entropy and quantum entanglement. In this
work, we investigate the \textit{dynamic} properties, i.e.
decoherence process of a moving Dirac electron. The decoherence
mechanisms will be significantly modified due to the \textit{dressed
environment}, which leads to the intriguing phenomenon that the
coherent information will be preserved even when the electron stays in
the noisy environment for a sufficiently long time.
The extension of this work to general decoherence models will
enlarge the research scope of relativistic quantum information
theory, and may establish fundamental connections between special
relativity and quantum mechanics.
\section{Acknowledgments}
We thank Prof. Daniel A. Lidar and L. Lamata for valuable
discussions. This work was funded by National Fundamental Research
Program, NCET-04-0587, the Innovation funds from Chinese Academy of
Sciences, and National Natural Science Foundation of China (Grant
No. 60121503, 10574126).
% hep-ph/0703231
\section{Introduction} \setcounter{equation}{0}
Theories beyond the standard model which include
several new particles at the TeV scale and a new
discrete symmetry lead to cascade decays with interesting
signatures at colliders. At the same time, the discrete symmetry
reduces the contributions of new particles to electroweak
observables, allowing the new particles to be light enough such that
they can be copiously produced not only at the LHC, but perhaps even at the
Tevatron. Classic examples of such theories
include supersymmetric models with $R$-parity,
universal extra dimensions \cite{Appelquist:2000nn},
and Little Higgs models with $T$-parity \cite{Cheng:2003ju}.
Typically, the cascade decays in these models lead to observable
events with up to four leptons and missing transverse energy
\cite{Cheng:2002ab, Datta:2005zs}.
In this paper we show that more spectacular events,
with five or six leptons, or one photon and several leptons,
are predicted in the 6-dimensional standard model (6DSM).
This model \cite{Burdman:2006gy}, in which all
standard model particles propagate
in two universal extra dimensions compactified on the
chiral square \cite{Burdman:2005sr,Dobrescu:2004zi,Hashimoto:2004xz}, is motivated
by the prediction based on anomaly cancellations
that the number of fermion generations is a multiple of three
\cite{Dobrescu:2001ae},
and by the long proton lifetime enforced by a remnant of
6D Lorentz symmetry \cite{Appelquist:2001mj}.
The larger number of leptons and the presence of photons is due
to the existence of `spinless adjoint' particles, the
Kaluza-Klein (KK) modes of gauge bosons polarized
along extra dimensions. Compared to five-dimensional (5D) models
where such fields become the longitudinal components of the
KK vector bosons, in six-dimensional (6D) gauge theories there is
an additional field for each KK vector boson, which represents
a physical spin-0 particle transforming in the adjoint
representation of the gauge group \cite{Burdman:2006gy}.
The 6DSM has a KK parity corresponding to reflections with respect to
the center of the chiral square. Its consequences are similar to
the ones in the case of a single universal extra dimension \cite{Hooper:2007qk},
where KK parity is the symmetry under reflections with respect to the center of
the compact dimension. It is well known that in the 5D case KK parity
ensures the stability of the lightest KK particle (LKP). Furthermore,
loop corrections select the KK mode of the hypercharge boson to be the
LKP \cite{Cheng:2002iz}, and that is a viable dark matter candidate
\cite{Servant:2002aq}.
The same is true in the 6DSM, with the additional
twist that the LKP in that case is a spinless adjoint.
In fact,
one-loop mass corrections in this model lift the degeneracy of the
modes at each KK level, making all spinless adjoints lighter than
the corresponding
vector bosons \cite{Ponton:2005kx}.
Particles on the first KK level, having KK numbers (1,0),
are odd under KK parity. As a result, they may be produced only in pairs
at colliders,
and each of their cascade decays produces an LKP, which is seen as
missing transverse energy in the detector.
The goal of this paper is to determine the main
signatures of (1,0) particles at hadron colliders.
Particles on the second level, which have KK numbers (1,1)
and are even under KK parity, lead to a completely different set of
signatures, mainly involving resonances of top and bottom quarks
\cite{Burdman:2006gy}.
We review the 6DSM in
Section \ref{sec:couplings}, and then proceed in Section \ref{sec:decays} to
calculate decay widths for $(1,0)$ modes. We analyze the
production of these particles at the LHC and Tevatron in Section
\ref{sec:production}, and compute rates for events with leptons
and photons.
Several comments regarding our results are given in Section \ref{sec:conclusions}.
Feynman rules for this
model are given in Appendix A. Details of the calculations
of one-loop 2-body and tree-level 3-body decay widths for
spinless adjoints and vector bosons can be found in
Appendices B and C, respectively.
\bigskip
\section{Two universal extra dimensions}\label{sec:model}
\label{sec:couplings}
\setcounter{equation}{0}
We assume that all standard model fields propagate in two flat extra
dimensions, of coordinates $x_4$ and $x_5$, compactified on a square
of side $L=\pi R$ with adjacent sides identified in pairs (see Figure~\ref{fig:fprofile}). This
compactification predicts that the fermion zero modes are
chiral, and therefore may represent the observed quarks and
leptons. Furthermore, this `chiral square' is invariant under rotations by
$\pi$ about its center. The ensuing $Z_2$ symmetry,
known as KK parity, implies that the lightest KK-odd particle is stable.
\begin{figure}[t]
\vspace*{-4mm}
\centering
\includegraphics[width=.45 \textwidth]{chiralsquare.eps}
\includegraphics[width=.45 \textwidth]{zeromode.eps}
\caption{Chiral square compactification (left) and level-1 KK function $f_0^{({\bf 1})} (x_4,x_5)$
for standard model fields (right).}\bigskip
\label{fig:fprofile}
\end{figure}
Equality of the Lagrangian densities on adjacent sides of the square is
achieved by enforcing that bulk fields and their first derivatives
vary smoothly across the boundary. Applying these boundary
conditions to solve the 6D equations of motion for these fields,
by separation of variables, we
find that the dependence on $x_4$ and $x_5$ can be expressed in terms of one of
four complete and orthonormal sets of functions $f_n^{(j,k)}$ with
$n=0,1,2,3$, where
the KK numbers $(j,k)$ are integers and $j\geq 1$, $k\geq 0$ or
$j=k=0$. All $(j,k)$ modes have tree-level mass $\sqrt{j^2+k^2}/R$
before electroweak symmetry breaking.
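As a quick numerical illustration (not part of the original text; the benchmark $1/R = 500$ GeV used later in the paper is assumed), the tree-level relation $M_{(j,k)} = \sqrt{j^2+k^2}/R$ gives the lowest KK levels:

```python
import math

R_inv = 500.0  # 1/R in GeV, the benchmark point used later in the paper

def kk_mass(j, k, R_inv=R_inv):
    """Tree-level mass of the (j,k) KK level: sqrt(j^2 + k^2)/R."""
    return math.sqrt(j * j + k * k) * R_inv

# lowest levels before electroweak symmetry breaking and loop corrections
for j, k in [(1, 0), (1, 1), (2, 0), (2, 1)]:
    print(f"({j},{k}) level: {kk_mass(j, k):7.1f} GeV")
```

The KK-parity-odd (1,0) level sits at $1/R$, while the KK-parity-even (1,1) level sits at $\sqrt{2}/R$, as stated above.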
\bigskip \bigskip
\subsection{Interactions of the (1,0) modes}\label{sec:interactions}
We are
primarily interested in the phenomenology of the $(1,0)$ modes here.
We loosely refer to these as `level-1' modes because they are the
lightest nonzero KK modes. For notational brevity we will label
them using the superscript ${({\bf 1})} $.
The level-1 KK modes belonging to a tower that includes a zero mode
have the KK function
\begin{equation}
f^{({\bf 1})} _{0}(x_4,x_5)=\cos{\left(\frac{x_4}{R}\right)}+\cos{\left(\frac{x_5}{R}\right)}~,
\end{equation}
which is plotted in Figure \ref{fig:fprofile}.
This is the case for the KK modes of all spin-1 fields and fermions of the
same chirality as the observed quarks and leptons, as well as the
Higgs doublet. The spinless adjoint field, $A^{({\bf 1})} _H$, which is the uneaten
combination of the extra-dimensional polarizations of the 6D gauge field,
is associated with a KK function which is independent of $x_4$,
\begin{equation}
f^{({\bf 1})} _H= - \frac{1}{2}\left[ f^{({\bf 1})} _1(x_4,x_5)-f^{({\bf 1})} _3(x_4,x_5) \right]
= -\sin{\frac{x_5}{R}}~,
\end{equation}
while the longitudinal component of the vector KK modes is
associated with a KK function which is independent of $x_5$:
\begin{equation}
f^{({\bf 1})} _G= - \frac{i}{2}\left[f^{({\bf 1})} _1(x_4,x_5)+f^{({\bf 1})} _3(x_4,x_5) \right]
= \sin{\frac{x_4}{R}}~.
\end{equation}
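As a sanity check (an illustrative sketch, not from the paper; units with $R=1$ are assumed), one can verify numerically that the level-1 KK functions above are eigenfunctions of the two-dimensional Laplacian with eigenvalue $1/R^2$, consistent with the tree-level mass $\sqrt{j^2+k^2}/R$ for $(j,k)=(1,0)$:

```python
import math

R = 1.0  # radius parameter; masses measured in units of 1/R

def f0(x4, x5):
    """Level-(1,0) KK function of towers with a zero mode."""
    return math.cos(x4 / R) + math.cos(x5 / R)

def fH(x4, x5):
    """KK function associated with the spinless adjoint A_H^(1)."""
    return -math.sin(x5 / R)

def laplacian(f, x4, x5, h=1e-3):
    """Central finite-difference 2D Laplacian d4^2 f + d5^2 f."""
    return ((f(x4 + h, x5) - 2 * f(x4, x5) + f(x4 - h, x5))
            + (f(x4, x5 + h) - 2 * f(x4, x5) + f(x4, x5 - h))) / h**2

# check -(d4^2 + d5^2) f = [(j^2 + k^2)/R^2] f  with (j,k) = (1,0)
x4, x5 = 0.7, 1.3
for f in (f0, fH):
    print(f"{f.__name__}: {-laplacian(f, x4, x5):.6f} vs {f(x4, x5) / R**2:.6f}")
```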
KK modes of fermions come in vectorlike pairs with the component of
4D chirality opposite to the corresponding standard model fermion having KK
function $f_1$ or $f_3$,
depending on the 6D chirality.
Integrating
over the extra dimensional coordinates gives
the 4D effective Lagrangian, which contains kinetic and interaction
terms for all SM particles and their KK modes. We limit
ourselves to detailing in this section only the couplings
of the standard model fields with the level-1 KK modes; the latter are odd
under KK parity and so only appear in pairs.
The general Lagrangian
for all modes is derived in Refs.~\cite{Burdman:2005sr,Dobrescu:2004zi}, while the
couplings for all fermion modes can be found in Appendix B.
The $SU(3)_c$ gauge interactions
include the following tree-level couplings between zero modes and $(1,0)$
modes:
\begin{eqnarray}
\mathcal{L}_{\rm gauge} &\supset &
g_sf^{abc}\left[ G_\mu^{{({\bf 1})} a}\left(\partial^\mu G^{\nu{({\bf 1})} b}
- \partial^\nu G^{\mu{({\bf 1})} b}\right) G_\nu^{{({\bf 1})} c}
- G_\mu^{{({\bf 1})} a}G_\nu^{{({\bf 1})} b}\partial^\mu G^{\nu c}
+ G_H^{{({\bf 1})} a}\partial^\mu G_H^{{({\bf 1})} b} G_\mu^{c}\right]
\nonumber\\ [2mm]
&-& \frac{g_s^2}{2}\left[f^{abd}f^{ace}G_\mu^{{({\bf 1})} b}G^{\mu{({\bf 1})} c}G_\nu^dG^{\nu e}
+ \left(f^{abc}f^{ade}+f^{adc}f^{abe}\right)
G_\mu^{{({\bf 1})} b}G^{\mu d}G_\nu^{{({\bf 1})} c}G^{\nu e}\right]
\nonumber\\ [2mm]
&+& \frac{g_s^2}{2}f^{abc}f^{ade}G_H^{{({\bf 1})} c}G_H^{{({\bf 1})} e}G_\mu^{b} G^{\mu d} ~,
\eear
where $g_s$ is the
QCD gauge coupling, $f^{abc}$ are the $SU(3)_c$ structure constants,
and $G_{\mu}^{({\bf 1})} $ and $G_H^{({\bf 1})} $ are the level-1
vector and spinless adjoint KK modes of the gluon $G_\mu$. We have
suppressed all superscripts for zero modes.
There are also interactions of the quark modes with the QCD vector and
spinless modes:
\begin{equation}
\mathcal{L}_{\rm matter}\supset\!\!\!\displaystyle\sum_{\rm fermions}\!\!\!g_s
\overline{Q}_{\pm}^{({\bf 1})} G_\mu^a T^a\gamma^\mu Q^{({\bf 1})} _{\pm}+g_s\bigg[
\overline{Q}_{\pm}^{({\bf 1})} G_\mu^{{({\bf 1})} a}T^a\gamma^\mu P_{\substack{L\\R}}Q_{\pm}
- i\overline{Q}_{\pm}^{({\bf 1})} G_H^{{({\bf 1})} a}T^a P_{\substack{L\\R}}Q_{\pm}
+ {\rm H.c.}\bigg] ~,
\end{equation}
where fermions with 6D chirality $+$ contain left-handed zero modes, and
fermions with 6D chirality $-$ contain right-handed zero modes. The
$SU(2)_W$ and $U(1)_Y$ sectors are
analogous, with all the gauge self-couplings set to zero in the Abelian
case. The Higgs and ghost terms are given in Refs.~\cite{Burdman:2005sr,Dobrescu:2004zi}.
\bigskip
\subsection{Mass corrections}\label{sec:masscorrections}
Computing radiative corrections in this theory involves sums over KK modes, or momenta in the
extra dimensions, which Fourier transform to operators localized at
the corners of the chiral square, $(0,0)$, $(\pi R,\pi R)$ and $(0,\pi
R)\sim(\pi R,0)$. The most
general 4D effective Lagrangian must therefore allow for such localized terms~\cite{Ponton:2005kx}:
\begin{equation}
L_{\rm eff}=\int_0^L d x_4\int_0^L d x_5
\left[\mathcal{L}_{\rm bulk}+\bigg(\delta(x_4)\delta(x_5)+\delta(L-x_4)\delta(L-x_5)
\bigg)\mathcal{L}_1+\delta(L-x_5)\mathcal{L}_2\right]~,
\end{equation}
where $\mathcal{L}_1$ and $\mathcal{L}_2$ contain all localized operators. Note that
KK parity ensures the equality of the operators localized
at $(0,0)$ and $(L,L)$. Local operators
break 6D Lorentz invariance and hence give rise to mass corrections
for KK particles. Such terms are important for models of
flat extra dimensions since they allow for the decays of higher modes
into pairs of lower ones, a process which would otherwise be at
threshold at best, due to the quantization of KK mode masses. They
also make for a more interesting phenomenology by lifting the
degeneracy of states at each level.
The localized terms contain contributions from ultraviolet physics
as well as from running down from the cut-off. Being unable to
compute the former, we assume that they are generically smaller than the
logarithmically-enhanced one-loop terms which are calculable (for further
discussion see \cite{Ponton:2005kx,Cheng:2002iz}). Level-1 fermions
acquire the following mass corrections~\cite{Burdman:2006gy}:
\begin{eqnarray}
\delta(M_{Q_+}) &=& \left(\frac{16}{3}g_s^2+3g^2+\frac{1}{9}g'^2
+ \frac{5}{8}\lambda_{Q_+}^2\right)\frac{l_0}{R}
+ \frac{1}{2}m_{q}^2 R~,
\nonumber\\ [2mm]
\delta(M_{Q_-}) &=&
\left(\frac{16}{3}g_s^2+4g'^2 y^2+\frac{10}{8}\lambda_{Q_-}^2\right)\frac{l_0}{R}
+ \frac{1}{2}m_{q}^2 R ~,
\nonumber\\ [2mm]
\delta(M_{L_+})&=&\left(3g^2+g'^2\right)\frac{l_0}{R}~,
\nonumber\\ [2mm]
\delta(M_{E_-})&=&4 g'^2\,\frac{l_0}{R} ~,
\label{fermion-corr}
\eear
where $g_s$, $g$ and $g^\prime$ are the $SU(3)_c\times SU(2)_W\times U(1)_Y$
gauge couplings, $\lambda_{Q_\pm}$ are the Yukawa couplings of $Q_\pm$ to the Higgs
doublet, and $l_0$ is a common loop factor,
\begin{equation}
l_0=\frac{1}{16\pi^2}\ln \left(\Lambda R\right)^2 ~.
\end{equation}
An estimate of the cutoff of the effective theory, based on naive dimensional analysis,
gives $\Lambda \approx 10/R$
\cite{Burdman:2006gy}. The terms linear in $R$ shown in
Eq.~(\ref{fermion-corr})
are small corrections to the tree-level masses induced by the electroweak symmetry breaking masses $m_q$.
The (1,0) vector bosons also receive radiative corrections to their masses,
\begin{eqnarray}
\delta M_{G_\mu^{({\bf 1})} } &=& 10\, g_s^2\frac{l_0}{R} ~,
\nonumber\\ [2mm]
\delta M_{W_\mu^{({\bf 1})} } &=&\frac{123}{24}g^2 \frac{l_0}{R} ~,
\nonumber\\ [2mm]
\delta M_{B_\mu^{({\bf 1})} } &=&-\frac{165}{24}g'^2 \frac{l_0}{R} ~,
\eear
while only the spinless adjoints in the electroweak sector have mass corrections:
\begin{eqnarray}
\delta M_{G_H^{({\bf 1})} } &=& 0 ~,
\nonumber\\ [2mm]
\delta M_{W_H^{({\bf 1})} } &=& -\frac{51}{8}g^2 \frac{l_0}{R} +\frac{m_{W}^2 R}{2} ~,
\nonumber\\ [2mm]
\delta M_{B_H^{({\bf 1})} } &=& -\frac{307}{8}g'^2 \frac{l_0}{R} ~.
\eear
The above mass shifts include negative contributions from fermions
in loops, allowing for overall negative corrections to masses.
This is especially important when there are no self-interactions to compete
with the fermion interactions, as is the case for the hypercharge bosons.
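The entries of Table 1 that do not involve Yukawa couplings can be reproduced directly from the formulas above. The following numerical sketch (an illustration, assuming $\Lambda R = 10$ so that $l_0 = \ln 100/(16\pi^2) \approx 0.0292$, and neglecting the electroweak-symmetry-breaking terms) checks them:

```python
import math

# couplings quoted in the text at the scale 1/R = 500 GeV,
# cutoff Lambda ~ 10/R from naive dimensional analysis
gs, g, gp = 1.16, 0.65, 0.36
l0 = math.log(10.0**2) / (16 * math.pi**2)  # l0 = ln[(Lambda R)^2]/(16 pi^2)

# masses in units of 1/R; EWSB and Yukawa terms neglected,
# so "Q+" is a light-generation SU(2) doublet quark
MR = {
    "Wmu": 1 + (123 / 24) * g**2 * l0,
    "Bmu": 1 - (165 / 24) * gp**2 * l0,
    "WH":  1 - (51 / 8) * g**2 * l0,
    "BH":  1 - (307 / 8) * gp**2 * l0,
    "L+":  1 + (3 * g**2 + gp**2) * l0,
    "Q+":  1 + (16 / 3 * gs**2 + 3 * g**2 + gp**2 / 9) * l0,
    "U-":  1 + (16 / 3 * gs**2 + 4 * gp**2 * (2 / 3)**2) * l0,
    "D-":  1 + (16 / 3 * gs**2 + 4 * gp**2 * (1 / 3)**2) * l0,
}
for name, m in MR.items():
    print(f"{name:3s}: MR = {m:.3f}")
```

Each value agrees with the corresponding entry of Table 1 to the three digits shown.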
\begin{table}[t]
\hspace*{-.2cm}
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|c|c||c|c|}
\hline \ \hspace*{-.28cm}boson\hspace*{-.28cm} \ & $M R$ \
& \hspace*{-.16cm}fermion\hspace*{-.17cm} & $M R$ \
\rule{0mm}{5mm}\rule{0mm}{-22mm} \\ \hline\hline
$G_\mu^{({\bf 1})} $ & 1.392 & $Q_+^{{({\bf 1})} 3}$ &
$\!1.265 + \frac{1}{2}(m_t R)^2\!$ \\ \hline
$\! W_\mu^{({\bf 1})} $ & $1.063 + \frac{1}{2}(M_W R)^2\!$ & $T_-^{({\bf 1})} $ &
$\! 1.252 + \frac{1}{2}(m_t R)^2\!$ \\ \hline
$G_H^{({\bf 1})} $ & 1.0 & $Q_+^{({\bf 1})} $ & 1.247 \\ \hline
$B_\mu^{({\bf 1})} $ & 0.974 & $U_-^{({\bf 1})} $ & 1.216 \\ \hline
$\! W_H^{({\bf 1})} $ & $0.921 + \frac{1}{2}(m_W R)^2\!$ & $D_-^{({\bf 1})} $ & 1.211 \\ \hline
$B_H^{({\bf 1})} $ & 0.855 & $L_+^{({\bf 1})} $ & 1.041 \\ \hline
\multicolumn{2}{r|}{} & $E_-^{({\bf 1})} $ & 1.015 \\ \cline{3-4}
\end{tabular}
\psfrag{mass}[B]{\hspace*{3.5em} $M$ [GeV] } \small
\psfrag{gmu}[B]{\hspace*{6.5mm} $G^{({\bf 1})} _\mu$}
\psfrag{wmu}[B]{\hspace*{6.5mm} $W^{({\bf 1})} _\mu$}
\psfrag{bmu}[B]{\hspace*{6.5mm} $B^{({\bf 1})} _\mu$}
\psfrag{gh}[B]{\hspace*{-2.5mm} $G^{({\bf 1})} _H$}
\psfrag{wh}[B]{\hspace*{-2.5mm} $W^{({\bf 1})} _H$}
\psfrag{bh}[B]{\hspace*{-2.5mm} $B^{({\bf 1})} _H$}
\psfrag{qp3}[B]{\hspace*{.5mm} $Q_+^{3 {({\bf 1})} }$}
\psfrag{qp}[B]{\hspace*{.5mm} $Q_+^{({\bf 1})} $}
\psfrag{dm}[B]{\hspace*{.5mm} $D_-^{({\bf 1})} $}
\psfrag{tm}[B]{\hspace*{-2.5mm} $T_-^{{({\bf 1})} }$}
\psfrag{um}[B]{\hspace*{-2.1mm} $U_-^{{({\bf 1})} }$}
\psfrag{lp}[B]{\hspace*{-2.5mm} $L_+^{{({\bf 1})} }$}
\psfrag{em}[B]{\hspace*{-2.5mm} $E_-^{{({\bf 1})} }$}
\psfrag{RR}[T]{\hspace*{-1.9cm} $1/R = 500$ GeV}
\vspace*{-9.cm}
\hspace*{9.4cm}
\psfig{ file=mass.eps,width=6.1cm,angle=0}
\vspace*{-1.9cm}
\caption{Masses of the (1,0) particles in $1/R$ units (left). The (1,0) Higgs particles are
not included here because their masses are quadratically sensitive to the cutoff scale.
The right-hand panel shows the
spectrum for $1/R = 0.5$ TeV.}
\label{tab:mass}
\end{table}
The masses of the (1,0) particles are given in Table 1 in units of $1/R$.
The mass shifts are evaluated there for gauge couplings $g_s=1.16$, $g=0.65$ and
$g'=0.36$, which are the values obtained using the
standard model one-loop running up to the scale $1/R=500$ GeV.
We will use the masses from Table 1 throughout the paper, ignoring
further running of the gauge couplings above 500 GeV (note that the standard
model running of the gauge couplings between 500 GeV and 1 TeV results in
only a 3\% change in $g_s$ and negligible changes in $g$ and $g^\prime$;
however, above $\sim 1/R$ the running is accelerated by the presence of
the level-1 modes).
The KK modes of the Higgs doublet have mass-squared shifts which are quadratically
sensitive to the cutoff scale $\Lambda$ \cite{Cheng:2002iz}. Hence, the masses
of the (1,0) Higgs scalars may be treated as free parameters (determined by the
underlying theory above $\Lambda$, which is not specified in our framework).
Furthermore,
additional structures such as the Twin Higgs mechanism \cite{Chacko:2005pe}
may be used to cancel the quadratic divergences in models with universal extra dimensions
\cite{Burdman:2006jj}, potentially affecting the (1,0) Higgs sector.
We assume here that the (1,0) Higgs particles are heavier than $1/R$.
In that case, the hadron collider phenomenology is mostly independent of the exact
(1,0) Higgs masses.
\bigskip
\subsection{Loop-induced bosonic operators}
In addition to lifting the degeneracy of the $(1,0)$ masses, loop corrections also contribute to the following
dimension-5 operators that are of particular interest for computing the branching fractions of the $(1,0)$ bosons:
\begin{equation}
-\frac{R}{4} \Big ( \mathcal{C}_B \epsilon^{\mu\nu\alpha\beta} F_{\mu\nu} B^{({\bf 1})} _{\alpha\beta} B_H^{({\bf 1})}
+ \mathcal{C}_G \epsilon^{\mu\nu\alpha\beta} G_{\mu\nu} B^{({\bf 1})} _{\alpha\beta} G_H^{({\bf 1})} \Big )
~,
\label{operator}
\end{equation}
where $F_{\mu\nu}$ and $G_{\mu\nu}$ are the field strengths of the photon and gluon, respectively,
$B^{({\bf 1})} _{\alpha\beta}$ is the field strength
of the $(1,0)$ hypercharge vector boson $B^{({\bf 1})} _\alpha$, and $B_H^{({\bf 1})} $ is the
$U(1)_Y$ spinless adjoint. These operators account for the
only significant 2-body decay channels open to the
level-1 KK modes $G^{({\bf 1})} _H$ and $B^{({\bf 1})} _\mu$.
The analogous operator with the photon replaced by the $Z$ boson is less relevant
because the corresponding decay width is phase-space suppressed.
The coefficients of the above dimension-5 operators are computed in
Appendix B, with the result:
\begin{equation}
\mathcal{C}_B = \frac{g^{\prime 2} e}{8 \pi^2 R}
\frac{1}{M_{B_\nu^{({\bf 1})} }^2 - M_{B_H^{({\bf 1})} }^2} \sum_F \sigma_F \Big ( \frac{Y_F}{2} \Big )^2 Q_F {\cal E}_F
~,
\label{operator-coef}
\end{equation}
where $\sigma_F = \pm 1$ for a 6D fermion $F$ of chirality $\pm$, $Q_F$ is the electric charge,
$Y_F$ is the hypercharge normalized
to be twice the electric charge for $SU(2)_W$ singlets
and ${\cal E}_F$ is a function of the masses of $B_H^{({\bf 1})} $, $B_\nu^{({\bf 1})} $,
and of the (1,0) and (1,1) fermions given in Eq.~(\ref{ef}). $\mathcal{C}_G$ is given
by an analogous expression, but it is suppressed by the small mass difference between the initial-
and final-state $(1,0)$ bosons.
One might also naively expect higher-dimension operators of the form
\begin{equation}
G_{\mu\nu}\partial^\mu B_H^{({\bf 1})} \partial^\nu G_H^{({\bf 1})}
+ Z_{\mu\nu}\partial^\mu B_H^{({\bf 1})} \partial^\nu W_H^{{({\bf 1})} 3}
+ \left(W_{\mu\nu}^+\partial^\mu B_H^{({\bf 1})} \partial^\nu W_H^{{({\bf 1})} -} + {\rm H.c.}\right)
~,
\end{equation}
to be generated, where $W^{({\bf 1})} _H$ is the level-1 $SU(2)_W$ spinless adjoint and $W_{\mu\nu}$
and $Z_{\mu\nu}$ are the standard
model field strengths for the $W$ and $Z$ bosons. However, the first of these terms is
identically zero as can be seen after integrating by parts and using
the gluon field equation. By the same method one can see that the coefficients of the last two terms are
small, being proportional to $(m_W R)^2$, and furthermore the resulting decay widths for $W_H^{{({\bf 1})} }$ are
also phase-space suppressed.
\bigskip\bigskip
\section{Decays of the level-1 particles}
\label{sec:decays}
\setcounter{equation}{0}
KK parity allows any (1,0) particle to decay only into a lighter (1,0)
particle and one or more standard model particles. The lightest
(1,0) particle is stable. In this section we compute the branching fractions
of the (1,0) particles assuming that the generic features of the `one-loop'
mass spectrum, shown in Table~\ref{tab:mass}, are not modified by higher-order
corrections.
\subsection{Color-singlet $(1,0)$ particles}
The $W^{{({\bf 1})} }_H$ boson (the spinless adjoint of $SU(2)_W$) is the next-to-lightest (1,0)
particle, and therefore can decay only into a $B^{{({\bf 1})} }_H$
plus standard model particles. The dominant decay mode
of its electrically neutral component is the 3-body decay
$W^{{({\bf 1})} 3}_H \to B^{{({\bf 1})} }_{H} l\bar{l}$, where
$l$ are leptons. The width for this decay, computed in
Appendix~C,
is given by
\begin{equation}
\Gamma\left(W^{{({\bf 1})} 3}_H \to B^{{({\bf 1})} }_{H} e^+e^-\right)
= \,
\frac{\alpha^{2} \, M_{W_H^{{({\bf 1})} }} }{128\pi\cos^2\!\theta_w\,\sin^2\!\theta_w }
\, {\cal I}_+\!\left( M_{W_H^{({\bf 1})} }, M_{B_H^{({\bf 1})} }, M_{L_+^{({\bf 1})} }\right) ~,
\end{equation}
and is the same for any lepton pair. The dimensionless function ${\cal I}_+$ contains
phase space integrals for the decay and is defined in Eq.~(\ref{fpm}).
Expanding this to leading order in the mass difference $M_{W_H^{({\bf 1})} } - M_{B_H^{({\bf 1})} }$,
which is accurate to about 25\% for the mass spectrum in Table~\ref{tab:mass}
[see Eq.~(\ref{fpe}) in Appendix~C], we find that
the width of the $W^{{({\bf 1})} 3}_H$ decay into $B^{{({\bf 1})} }_{H}$ plus quarks
has a simple expression in terms of the decay width into
$B^{{({\bf 1})} }_{H}$ plus leptons:
\begin{equation}
\Gamma\left(W^{{({\bf 1})} 3}_H \to B^{{({\bf 1})} }_{H} q\overline{q} \right)
\approx \frac{1}{3} \left( \frac{ M_{L_+^{{({\bf 1})} }}^2 - M^2_{W_H^{{({\bf 1})} }} }{
M_{Q_+^{{({\bf 1})} }}^2 - M^2_{W_H^{{({\bf 1})} }} } \right)^{\!\! 4}
\Gamma\left(W^{{({\bf 1})} 3}_H \to B^{{({\bf 1})} }_{H} e^+e^-\right) ~,
\end{equation}
where we have not summed over quark flavors.
Given that $W_H^{{({\bf 1})} }$ is closer to $L_+^{{({\bf 1})} }$ in mass than to $Q_+^{{({\bf 1})} }$,
it follows that the decay into quarks is highly suppressed.
The ensuing branching fractions for the $W_H^{{({\bf 1})} \, 3} \to B^{{({\bf 1})} }_{H}$
transition are approximately 1/6 for each of the $e^+e^-$, $\mu^+\mu^-$
and $\tau^+\tau^-$ final states, 1/2 for $\nu\overline{\nu}$, and
$0.5$\% for the sum of all quark-antiquark pairs.
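Plugging the Table 1 masses into the suppression factor above confirms numerically that each quark-antiquark channel is at the few-per-mille level relative to $e^+e^-$ (an illustrative sketch; the leading-order expansion is only accurate to about 25\%, as noted above):

```python
# per-flavor suppression of W_H^(1)3 -> B_H^(1) q qbar relative to e+ e-,
# using the Table 1 masses in units of 1/R (EWSB shifts neglected)
ML, MQ, MWH = 1.041, 1.247, 0.921  # L_+^(1), light Q_+^(1), W_H^(1)

ratio = (1 / 3) * ((ML**2 - MWH**2) / (MQ**2 - MWH**2))**4
print(f"Gamma(q qbar)/Gamma(e+ e-) per flavor ~ {ratio:.4f}")
```

The per-flavor ratio comes out around $0.004$, consistent with the sub-percent total quark branching fraction quoted above.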
The electrically charged spinless adjoints of $SU(2)_W$, $W_H^{{({\bf 1})} \, \pm}$,
decay with a branching fraction of nearly 1/3 into each of the
$e^\pm\nu B^{{({\bf 1})} }_{H}$, $\mu^\pm\nu B^{{({\bf 1})} }_{H}$ and
$\tau^\pm\nu B^{{({\bf 1})} }_{H}$ final states, while the branching fraction into
$q\overline{q} B^{{({\bf 1})} }_{H}$ is again negligible.
The spin-1 boson $B^{{({\bf 1})} }_\mu$ may decay only into a $B^{{({\bf 1})} }_H$ or $W^{{({\bf 1})} }_H$
and standard model particles.
An important tree-level decay is into right-handed leptons and a $B^{{({\bf 1})} }_H$, with
a width:
\begin{equation}
\Gamma\left(B^{({\bf 1})} _\mu \to B^{{({\bf 1})} }_{H} e_R^+e_R^-\right)
= \frac{\alpha^2 M_{E_-^{{({\bf 1})} }}^2} {24\pi \cos^{4}\!\theta_{w} \, M_{B_\mu^{({\bf 1})} }} \,
{\cal I}_-\!\left( M_{B_\mu^{({\bf 1})} }, M_{B_H^{({\bf 1})} }, M_{E_-^{({\bf 1})} }\right) ~,
\label{BmutoBh}
\end{equation}
where ${\cal I}_-$ is another phase space integral defined in Eq.~(\ref{fpm}).
The width into left-handed leptons,
\begin{equation}
\Gamma\left(B^{{({\bf 1})} }_\mu \to B^{{({\bf 1})} }_{H} e_L^+e_L^-\right)
= \frac{\alpha^2 M_{L_+^{{({\bf 1})} }}^2} {384\pi \cos^{4}\!\theta_{w} \, M_{B_\mu^{({\bf 1})} }} \,
{\cal I}_-\!\left( M_{B_\mu^{({\bf 1})} }, M_{B_H^{({\bf 1})} }, M_{L_+^{({\bf 1})} }\right) ~,
\end{equation}
is suppressed due to the smaller
hypercharge and larger mass of the (1,0) fermion, which is $L_+^{({\bf 1})} $ in this case.
For the same reasons, the $B^{{({\bf 1})} }_\mu$ decay into a $B^{{({\bf 1})} }_{H}$
and $q\overline{q}$ pairs has a small decay width.
$B^{{({\bf 1})} }_\mu$ decays to $W^{{({\bf 1})} }_H$ plus fermion pairs
are highly suppressed due to
the dependence on the 7th power of the small difference between initial and
final (1,0) masses [see Eqs.~(\ref{AmutoAH}) and (\ref{fpe}) in Appendix C].
Besides these tree-level 3-body decays, $B^{{({\bf 1})} }_\mu$ also has 2-body decays
via the dimension-5 operator shown in Eq.~(\ref{operator}), which is
induced at one loop (see Appendix B). The decay width is given by
\begin{equation}
\Gamma\left(B^{{({\bf 1})} }_\mu \to B^{{({\bf 1})} }_{H} \gamma \right)
= \frac{\alpha^3}{96 \pi^2 \cos^{4}\!\theta_{w} } \frac{1}{M_{B_\mu^{({\bf 1})} }}
\left(1- \frac{M_{B_H^{({\bf 1})} }^2}{ M_{B_\mu^{({\bf 1})} }^2} \right)
\left( \sum_F \sigma_F \, \Big ( \frac{Y_F}{2} \Big )^2 \, Q_F \, {\cal E}_F \right)^2 ~,
\label{oneloopdecay}
\end{equation}
where the sum over $F$ includes all quarks and leptons,
$\sigma_F$ is +1 for $SU(2)_W$ doublets and $-1$ for $SU(2)_W$ singlets,
$Q_F$ is the electric charge,
$Y_F$ is the hypercharge normalized
to be twice the electric charge for $SU(2)_W$ singlets,
and ${\cal E}_F$ is given in Eq.~(\ref{ef}) and depends only on
the masses of $B_H^{({\bf 1})} $, $B_\nu^{({\bf 1})} $, and of the (1,0) and (1,1) fermions.
Using the values for the standard model gauge couplings given at the end of Section~\ref{sec:masscorrections},
{\it i.e.}, $\alpha = 1/127$ and $\sin^{2}\!\theta_{w} = 0.235$,
we find the following branching fractions for $B^{{({\bf 1})} }_\mu$:
\begin{eqnarray}
{\rm Br} \left(B^{{({\bf 1})} }_\mu \rightarrow B^{{({\bf 1})} }_{H} \gamma \right) \equiv b_{B\gamma}
\approx 34.0\% ~,
\nonumber \\ [ 2mm]
{\rm Br} \left(B^{{({\bf 1})} }_\mu \rightarrow B^{{({\bf 1})} }_{H} e^+e^- \right) \equiv b_{Be}
\approx 21.3\% ~.
\label{bmu-br}
\eear
The branching fractions into $e^+e^-B^{{({\bf 1})} }_{H}$, $\mu^+\mu^-B^{{({\bf 1})} }_{H}$
and $\tau^+\tau^-B^{{({\bf 1})} }_{H}$ are equal.
The fact that the tree-level 3-body decay and the one-loop 2-body decay have
comparable branching fractions in the case of $B^{{({\bf 1})} }_\mu$
is an accidental consequence of the mass spectrum given in Table 1.
The $B^{{({\bf 1})} }_\mu$
decays into $B^{{({\bf 1})} }_{H}$ plus neutrinos or quarks have small
branching fractions (1.4\% and 0.6\%, respectively) which may be
safely ignored in what follows.
The (1,0) leptons can decay into (1,0) modes of the
electroweak gauge bosons or spinless adjoints, and a standard model lepton.
The decay widths of the $SU(2)_W$-doublet (1,0) leptons,
$L_+^{{({\bf 1})} } \equiv (N_+^{{({\bf 1})} }, E_+^{{({\bf 1})} })$, to neutral (1,0) particles are given
at tree level by:
\begin{eqnarray}
\Gamma\left(L_+^{{({\bf 1})} }\!\to W_H^{{({\bf 1})} 3} l_{L}\right) & \! =\! &
\frac{\alpha }{32\sin^{2}\!\theta_{w}}M_{L^{{({\bf 1})} }}
\left(1-\frac{M_{W_H^{{({\bf 1})} }}^2}{M_{L^{{({\bf 1})} }}^2}\right)^{\! 2} ~,
\nonumber \\ [0.5em]
\hspace{.1mm}
\Gamma\left(L_+^{{({\bf 1})} }\!\to B^{{({\bf 1})} }_{\mu} l_L \right) & \! =\! &
\frac{\alpha }{16 \cos^2\!\theta_{w}} M_{L^{{({\bf 1})} }}
\left(1 - \frac{M_{B_\mu^{{({\bf 1})} }}^2}{M_{L^{{({\bf 1})} }}^2} \right)^{\! 2}
\left(1 + \frac{M_{L^{{({\bf 1})} }}^2}{2M_{B_\mu^{{({\bf 1})} }}^2}\right)~,
\nonumber \\ [0.5em]
\hspace{.1mm}
\Gamma\left(L_+^{{({\bf 1})} }\!\to B_H^{{({\bf 1})} } l_L \right) & \! =\! &
\frac{\alpha }{32\cos^{2}\!\theta_{w}} M_{L^{{({\bf 1})} }}
\left(1-\frac{M_{B_H^{{({\bf 1})} }}^2}{M_{L^{{({\bf 1})} }}^2}\right)^{\! 2} \, ,
\label{Ldecays}
\eear
where $l_L$ is the corresponding standard model weak doublet lepton.
The decays to charged (1,0) particles, $E_+^{{({\bf 1})} }\!\to W_H^{{({\bf 1})} -} \nu_L$
and $N_+^{{({\bf 1})} }\!\to W_H^{{({\bf 1})} +} e_L^-$, have a width twice as
large as the $L_+^{{({\bf 1})} }\!\to W_H^{{({\bf 1})} 3} l_{L}$ decay width.
The $L_+^{{({\bf 1})} }$ branching fractions are given by:
\begin{eqnarray}
&&
{\rm Br} \left[ (N_+^{{({\bf 1})} }, E_+^{{({\bf 1})} }) \to B_H^{{({\bf 1})} } (\nu_L,e_L) \right]
\equiv b_{l1} \approx 20.1\% ~.
\nonumber \\ [0.5em]
&&
\frac{1}{2}\, {\rm Br} \left[ (N_+^{{({\bf 1})} }, E_+^{{({\bf 1})} }) \to W_H^{{({\bf 1})} +} (e_L,\nu_L) \right]
=
{\rm Br} \left[ (N_+^{{({\bf 1})} }, E_+^{{({\bf 1})} }) \to W_H^{{({\bf 1})} 3} (\nu_L,e_L) \right]
\equiv b_{l2} \approx 23.5\% ~,
\nonumber \\ [0.5em]
&& {\rm Br} \left[ (N_+^{{({\bf 1})} }, E_+^{{({\bf 1})} }) \to B_\mu^{{({\bf 1})} }(\nu_L,e_L) \right]
\equiv b_{l3} \approx 9.3\% ~.
\label{onestep}
\eear
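The branching fractions in Eq.~(\ref{onestep}) follow from the widths in Eq.~(\ref{Ldecays}). The sketch below reproduces them numerically from the Table 1 masses (the common factor $\alpha M_{L^{({\bf 1})}}/32$ cancels in the ratios; the $(m_W R)^2/2$ shift of $W_H^{({\bf 1})}$ is included, assuming $1/R = 500$ GeV and $m_W = 80.4$ GeV):

```python
# masses in units of 1/R from Table 1
R_inv, mW = 500.0, 80.4
ML   = 1.041                              # L_+^(1)
MWH  = 0.921 + 0.5 * (mW / R_inv)**2      # W_H^(1), with EWSB shift
MBH  = 0.855                              # B_H^(1)
MBmu = 0.974                              # B_mu^(1)
s2w, c2w = 0.235, 0.765                   # sin^2, cos^2 of theta_w

# relative widths; the common factor alpha*M_L/32 is dropped
G_WH3 = (1 / s2w) * (1 - MWH**2 / ML**2)**2                 # -> W_H^(1)3 l
G_BH  = (1 / c2w) * (1 - MBH**2 / ML**2)**2                 # -> B_H^(1)  l
G_Bmu = (2 / c2w) * (1 - MBmu**2 / ML**2)**2 \
        * (1 + ML**2 / (2 * MBmu**2))                       # -> B_mu^(1) l

# the charged W_H channel has twice the W_H^3 width
total = G_BH + 3 * G_WH3 + G_Bmu
b_l1, b_l2, b_l3 = G_BH / total, G_WH3 / total, G_Bmu / total
print(f"b_l1 = {100*b_l1:.1f}%, b_l2 = {100*b_l2:.1f}%, b_l3 = {100*b_l3:.1f}%")
```

This reproduces $b_{l1} \approx 20.1\%$, $b_{l2} \approx 23.5\%$ and $b_{l3} \approx 9.3\%$.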
\TABLE[t]{
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{1-3}\cline{5-7}
\hspace{-0.1em} Final-state \hspace{-0.1em} &
\multicolumn{2}{c|}{$W_\mu^{{({\bf 1})} 3} \!\to ... \to B_H^{{({\bf 1})} } $} & \hspace*{0.17em} &
\hspace{-0.5em} Final-state \hspace{-0.7em} &
\multicolumn{1}{r}
{$W_\mu^{{({\bf 1})} +} \!\to ... \to B_H^{{({\bf 1})} } $} & \rule{0mm}{5mm}\rule{0mm}{-22mm}
\\ \cline{2-3}\cline{6-7}
$e,\mu,\gamma$ & Branching fractions & \% & & $e,\mu,\gamma$ & Branching fractions & \%
\\ \cline{1-3}\cline{5-7}
$X$ & $\frac{\textstyle 2}{\textstyle 3}( b_{l1} + b_{l2} + b_{l3}b_{Be} )$ & 30.4 & &
$X$ & $\frac{\textstyle 1}{\textstyle 3}(b_{l1} + 2 b_{l2} + b_{l3}b_{Be} )$ & 23.1
\\ \cline{1-3}\cline{5-7}
$(e^+ + e^-)X$ & $\frac{\textstyle 4}{\textstyle 9} b_{l2}$ & 10.5 & &
$e^+ \, X$ & $\frac{\textstyle 1}{\textstyle 3}(b_{l1} + 2 b_{l2} + b_{l3}b_{Be})$ & 23.1
\\ \cline{1-3}\cline{5-7}
$\!(e^+\!\mu^-\!\! + e^-\!\mu^+) X\!\!$ & $\frac{\textstyle 4}{\textstyle 9} b_{l2}$ & 10.5 & &
$e^+ e^- \, X $ & $\frac{\textstyle 1}{\textstyle 6} (b_{l2} + 2 b_{l3}b_{Be})$ & \ 4.6
\\ \cline{1-3}\cline{5-7}
$e^+e^- \, X$ &
$\frac{\textstyle b_{l1}}{\textstyle 6}+ \frac{\textstyle 4}{\textstyle 9} b_{l2}
+ \frac{\textstyle 5}{\textstyle 6}b_{l3}b_{Be}\!$ & 15.5 & &
$\!e^+e^- e^+ X\!$ & $\frac{\textstyle 1}{\textstyle 6} (b_{l2} + 2 b_{l3}b_{Be})$ & \ 4.6
\\ \cline{1-3}\cline{5-7}
$e^+e^- e^+e^-$ & $\frac{\textstyle 1}{\textstyle 36}(b_{l2} + 6 b_{l3} b_{Be} )$ & \ 1.0 & &
$\!e^+e^-\mu^+ X\!$ & $\frac{\textstyle 1}{\textstyle 6} (b_{l2} + 2 b_{l3}b_{Be} )$ & \ 4.6
\\ \cline{1-3}\cline{5-7}
$\!e^+e^- \mu^+\mu^-$ & $\frac{\textstyle 1}{\textstyle 18}(b_{l2} + 6 b_{l3}b_{Be})$ & \ 2.0 & &
$\gamma \, X$ & $\frac{\textstyle 1}{\textstyle 3} b_{l3}b_{B\gamma } $ & \ 1.1
\\ \cline{1-3}\cline{5-7}
$\gamma \, X$ & $\frac{\textstyle 2}{\textstyle 3} b_{l3}b_{B\gamma } $ & \ 2.1 & &
$\gamma \; e^+ \, X$ &
$\frac{\textstyle 1}{\textstyle 3} b_{l3}b_{B\gamma } $ & \ 1.1
\\ \cline{1-3}\cline{5-7}
$\gamma \; e^+e^- \, X$ & $\frac{\textstyle 1}{\textstyle 6} b_{l3} b_{B\gamma } $ & \ 0.5 &
\multicolumn{4}{c}{}
\\ \cline{1-3}
\end{tabular}
\medskip
\caption{Branching fractions for the complete cascade decays of $W_\mu^{{({\bf 1})} 3}$ and
$W_\mu^{{({\bf 1})} +}$. $X$ stands for a number of neutrinos or taus.
The branching fractions involving more muons than electrons (not shown) are
equal to the analogous ones involving more electrons than muons.
The branching fractions of $W_\mu^{{({\bf 1})} -}$ are the same as for $W_\mu^{{({\bf 1})} +}$
except for flipping the electric charges of the final state leptons.
The branching fractions for `one-step' decays, $b_{l1}$, $b_{l2}$, $b_{l3}$
and $b_{Be}$, $b_{B\gamma }$, are defined in Eqs.~(\ref{onestep}) and (\ref{bmu-br}).\label{tab:totalBRW}}
}
As opposed to the three spinless adjoints and $B_\mu^{{({\bf 1})} }$,
which at tree level have only 3-body decays,
the $W_\mu^{{({\bf 1})} }$ particles are heavier than the (1,0) leptons
and therefore decay with a branching fraction of almost 100\% into
one (1,0) lepton doublet and the corresponding standard model lepton
doublet.
Putting together the branching fractions for various decays of the electroweak
(1,0) bosons, we find the branching fractions for the
complete cascade decays of $W_\mu^{{({\bf 1})} 3}$ shown in Table \ref{tab:totalBRW}.
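The entries of Table~\ref{tab:totalBRW} are simple combinations of the one-step branching fractions. As an illustrative check (not an independent calculation), the $\gamma\,X$ row of the $W_\mu^{{({\bf 1})} 3}$ column, which arises when the cascade passes through $B_\mu^{{({\bf 1})} } \to B_H^{{({\bf 1})} }\gamma$, can be verified as:

```python
# one-step branching fractions quoted in Eqs. (onestep) and (bmu-br)
b_l3, b_Bgamma = 0.093, 0.340

# W_mu^(1)3 -> L_+^(1) l -> B_mu^(1) l l -> B_H^(1) gamma l l:
# factor 2/3 counts the cascades in which both leptons are neutrinos or taus
br_gammaX = (2 / 3) * b_l3 * b_Bgamma
print(f"Br(W_mu^3 -> gamma X) ~ {100 * br_gammaX:.1f}%")
```

This reproduces the $2.1\%$ entry of the table.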
\subsection{Colored (1,0) particles}
At tree level, the (1,0) spinless adjoint of $SU(3)_c$ has only 3-body decays
into a quark-antiquark pair and one of the electroweak (1,0) bosons.
The decay widths are derived in Appendix C, and take the following form:
\begin{equation}
\Gamma\left(G^{{({\bf 1})} }_H \to B^{{({\bf 1})} }_{H} u_R\overline{u}_R\right)
= \frac{y_{u_R}^2\alpha\alpha_s}{64\pi \cos^2\!\theta_w}\,
M_{G_H^{{({\bf 1})} }} \,
{\cal I}_+\!\left( M_{G_H^{({\bf 1})} }, M_{B_H^{({\bf 1})} }, M_{U_-^{({\bf 1})} }\right) ~,
\end{equation}
\begin{equation}
\Gamma\left(G^{{({\bf 1})} }_H \to B^{{({\bf 1})} }_\mu u_R\overline{u}_R\right)
\approx
\frac{y_{u_R}^2\alpha\alpha_s}{140\pi \cos^2\!\theta_w}\,
M_{G_H^{{({\bf 1})} }}^2 \, \frac{M_{U_-^{{({\bf 1})} }}^2}{M_{B_\mu^{{({\bf 1})} }}^2}
\frac{ \left( M_{G_H^{{({\bf 1})} }} - M_{B_\mu^{{({\bf 1})} }} \right)^7}
{( M_{U_-^{{({\bf 1})} }}^2 - M^2_{G_H^{{({\bf 1})} }} )^4} ~,
\end{equation}
for hypercharge (1,0) bosons in the final state, and
\begin{eqnarray} \hspace*{-3em}
\Gamma\left(G^{{({\bf 1})} }_H \to W^{{({\bf 1})} 3}_H u_L\overline{u}_L\right)
& \approx & \frac{\alpha\alpha_s}{420\pi \sin^2\!\theta_w}\,
M_{G_H^{{({\bf 1})} }}^2
\frac{ \left( M_{G_H^{{({\bf 1})} }} - M_{W_H^{{({\bf 1})} }} \right)^7}
{( M_{Q_+^{{({\bf 1})} }}^2 - M^2_{G_H^{{({\bf 1})} }} )^4} ~,
\nonumber \\ [0.6em] \hspace*{-3em}
\Gamma\left(G^{{({\bf 1})} }_H \to W^{{({\bf 1})} +}_H d_L \overline{u}_L \right)
& = & \Gamma\left(G^{{({\bf 1})} }_H \to W^{{({\bf 1})} -}_H u_L \overline{d}_L \right)
= 2 \, \Gamma\left(G^{{({\bf 1})} }_H \to W^{{({\bf 1})} 3}_H u_L \overline{u}_L \right) ~,
\eear
for $SU(2)_W$ (1,0) bosons.
Note that we have expanded the decay widths to leading order in the mass difference
of $G_H^{{({\bf 1})} }$ and the electroweak (1,0) boson [see Eq.~(\ref{fpe})]
in the case of $G_H\to B_\mu$ and $G_H\to W_H$ transitions, but not for
$G_H\to B_H$ where the mass difference is larger and the expansion does not provide
a good approximation.
$G^{{({\bf 1})} }_H$ also has a two-body decay into $B^{{({\bf 1})} }_\mu$ and a gluon,
via a dimension-5 operator shown in Eq.~(\ref{operator}), which is induced at one loop.
However, the width for this decay is highly suppressed because
$G^{{({\bf 1})} }_H$ and $B^{{({\bf 1})} }_\mu$ are almost degenerate.
After summing over all quark flavors, we find that
the dominant decay mode of $G^{{({\bf 1})} }_H$ is into $B^{{({\bf 1})} }_H q\overline{q}$,
with a total branching fraction of $b_{g1} \approx 96.5\%$.
The sum over all branching fractions of $G^{{({\bf 1})} }_H$ into $W^{{({\bf 1})} +}_H$ or
$W^{{({\bf 1})} -}_H$ plus a quark-antiquark pair is $b_{g2}^\prime \approx 2.3\%$.
The branching fraction for $G^{{({\bf 1})} }_H \to W^{{({\bf 1})} 3}_H q\overline{q}$
is $b_{g2} \approx 1.2\%$, while the decay into $B_\mu^{{({\bf 1})} }$ is highly suppressed due to the
very small mass difference involved in that case.
The branching fractions quoted here correspond to $1/R = 500$ GeV. For different
values of $1/R$, the branching fractions of $G^{{({\bf 1})} }_H$ change slightly due to
the dependence of $M_{T_\pm^{({\bf 1})} }R$ on $1/R$ shown in Table~\ref{tab:mass}.
For the coupling constants we use $\alpha_s = 0.107$,
$\alpha = 1/127$ and $\sin^2\theta_w = 0.235$, which are the standard model
values at 500 GeV.
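As a quick numerical cross-check of the quoted numbers (a minimal Python sketch, not part of the original analysis), the three decay classes of $G_H^{{({\bf 1})} }$ should saturate its width, since the decay into $B_\mu^{{({\bf 1})} }$ is negligible:

```python
# Cross-check: the quoted branching fractions of G_H^(1) should sum to ~100%,
# because the B_mu^(1) mode is highly phase-space suppressed.
b_g1 = 0.965        # G_H^(1) -> B_H^(1) q qbar
b_g2_prime = 0.023  # G_H^(1) -> W_H^(1)+/- plus q qbar', summed over charges
b_g2 = 0.012        # G_H^(1) -> W_H^(1)3 q qbar

total = b_g1 + b_g2_prime + b_g2
print(f"total = {total:.3f}")  # ~1.000
```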
The (1,0) quarks can decay into both vector and spinless
modes.
The largest decay width is into a $G_H^{{({\bf 1})} }$
and a standard model quark:
\begin{equation}
\Gamma\left(Q^{{({\bf 1})} }\!\to G_H^{{({\bf 1})} } q\right) = \frac{\alpha_s}{6}\,
M_{Q^{{({\bf 1})} }}
\left(1 - \frac{M_{G_H^{{({\bf 1})} }}^2}{M_{Q^{{({\bf 1})} }}^2} \right)^{\! 2} ~.
\label{Q-GHq}
\end{equation}
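To give a sense of scale, Eq.~(\ref{Q-GHq}) can be evaluated numerically; the masses below are hypothetical, chosen only for illustration, while $\alpha_s$ is the value quoted in the text:

```python
# Illustrative evaluation of Eq. (Q-GHq): Gamma = (alpha_s/6) M_Q (1 - M_GH^2/M_Q^2)^2
alpha_s = 0.107   # quoted standard model value at 500 GeV
M_Q = 600.0       # hypothetical (1,0)-quark mass in GeV
M_GH = 500.0      # hypothetical G_H^(1) mass in GeV

gamma = alpha_s / 6.0 * M_Q * (1.0 - M_GH**2 / M_Q**2) ** 2
print(f"Gamma(Q -> G_H q) ~ {gamma:.2f} GeV")  # of order 1 GeV for these masses
```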
The $SU(2)_W$-doublet (1,0) quarks can also decay into a standard-model quark
and an $SU(2)_W$ gauge boson or spinless adjoint.
Ignoring the standard-model quark mass, the decay width for the latter is
\begin{equation}
\Gamma\left(Q_+^{{({\bf 1})} }\!\to W_H^{{({\bf 1})} 3} q_{L}\right) =
\frac{\alpha }{32\sin^{2}\!\theta_{w}}M_{Q_+^{{({\bf 1})} }}
\left(1-\frac{M_{W_H^{{({\bf 1})} }}^2}{M_{Q_+^{{({\bf 1})} }}^2}\right)^{\! 2} ~,
\label{Q-WHq}
\end{equation}
and is twice as large in the case of $W_H^{{({\bf 1})} \pm}$.
The decays of (1,0) quarks into an $SU(2)_W$ (1,0) vector boson
and a standard model quark have a width
\begin{equation}
\Gamma\left(Q^{{({\bf 1})} }_+\!\to W^{{({\bf 1})} 3}_{\mu} q_{L} \right) =
\left(\frac{M_{Q_+^{{({\bf 1})} }}^2 - M_{W_\mu^{{({\bf 1})} }}^2}
{M_{Q_+^{{({\bf 1})} }}^2 - M_{W_H^{{({\bf 1})} }}^2}\right)^{\! 2}
\left(2 + \frac{M_{Q_+^{{({\bf 1})} }}^2}{M_{W_\mu^{{({\bf 1})} }}^2}\right)
\Gamma\left(Q_+^{{({\bf 1})} }\!\to W_H^{{({\bf 1})} 3} q_{L}\right) ~.
\label{Q-Wq}
\end{equation}
The width is twice as large for $Q_+^{{({\bf 1})} }\!\to W^{{({\bf 1})} \pm}_{\mu} q_{L}$.
All (1,0) quarks may also decay into (1,0) hypercharge bosons with widths
\begin{eqnarray}
\hspace{-1.1cm}
\Gamma\left(Q^{{({\bf 1})} }\!\to B_H^{{({\bf 1})} } q\right) & \! =\! &
\frac{Y^{2}_{q}\alpha }{32\cos^{2}\!\theta_{w}} M_{Q^{{({\bf 1})} }}
\left(1-\frac{M_{B_H^{{({\bf 1})} }}^2}{M_{Q^{{({\bf 1})} }}^2}\right)^{\! 2} ~,
\nonumber \\ [0.6em]
\hspace{-1.1cm}
\Gamma\left(Q^{{({\bf 1})} }\!\to B^{{({\bf 1})} }_{\mu} q \right) & \! =\! &
\left(\frac{M_{Q^{{({\bf 1})} }}^2 - M_{B_\mu^{{({\bf 1})} }}^2}
{M_{Q^{{({\bf 1})} }}^2 - M_{B_H^{{({\bf 1})} }}^2}\right)^{\! 2}
\left(2 + \frac{M_{Q^{{({\bf 1})} }}^2}{M_{B_\mu^{{({\bf 1})} }}^2}\right)
\Gamma\left(Q^{{({\bf 1})} }\!\to B_H^{{({\bf 1})} } q\right) ~,
\label{Q-Bq}
\eear
where $Y_{q}$ is the hypercharge of the quark $q$, normalized to be 1/3 for
$SU(2)_W$ doublets.
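The relative weight of the vector and spinless hypercharge modes in Eq.~(\ref{Q-Bq}) is easy to evaluate; the masses below are hypothetical, picked only to illustrate the enhancement of the vector mode over the spinless one:

```python
# Enhancement factor from Eq. (Q-Bq): Gamma(Q -> B_mu q) / Gamma(Q -> B_H q).
M_Q = 600.0    # hypothetical (1,0)-quark mass in GeV
M_Bmu = 505.0  # hypothetical B_mu^(1) mass in GeV
M_BH = 500.0   # hypothetical B_H^(1) mass in GeV

ratio = ((M_Q**2 - M_Bmu**2) / (M_Q**2 - M_BH**2)) ** 2 \
        * (2.0 + M_Q**2 / M_Bmu**2)
print(f"Gamma(Q -> B_mu q) / Gamma(Q -> B_H q) ~ {ratio:.2f}")  # ~3 here
```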
The branching fractions of the (1,0) quarks of the first and second generations
are shown in Table \ref{tab:BRquarks}.
The $B_-^{{({\bf 1})} }$ quark has the same branching fractions as $D_-^{{({\bf 1})} }$,
while those of the $Q_+^{{({\bf 1})} 3}=(T_+^{{({\bf 1})} },B_+^{{({\bf 1})} })$
quarks are more sensitive to $1/R$, as shown in Figure~\ref{fig:Brs},
because of the large top quark mass.
Finally, the KK mode of the $SU(2)_W$-singlet top quark, $T_-^{{({\bf 1})} }$,
has branching fractions highly sensitive to the mass of (1,0) Higgs
particles, with the decay into $b\, H^{{({\bf 1})} +}$ dominating over $t\, G_H^{({\bf 1})} $
if $H^{{({\bf 1})} +}$ is light. Because of this fact, and also because of their small
production cross section, third generation fermions do not result in many
multi-lepton events. Hence we will not give an expression for their branching fractions here.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|c|c|c|c|c|c|}
\cline{1-2} \cline{4-6}
\ $V^{{({\bf 1})} }$ \ & Br$\left(U_+^{{({\bf 1})} } \!\to q_L V^{{({\bf 1})} }\!\right)\!$ & \hspace*{0.5em} &
\ $V^{{({\bf 1})} }$ \ & Br$\left(U_-^{{({\bf 1})} } \!\to u_R V^{{({\bf 1})} }\!\right)\!\!$ &
Br$\left(D_-^{{({\bf 1})} } \!\to d_R V^{{({\bf 1})} }\!\right)\!\!\!\!$
\rule{0mm}{5mm}\rule{0mm}{-22mm} \\ \cline{1-2} \cline{4-6}
$G_H^{{({\bf 1})} }$ & $b_{q3} \approx 63.2 \%$ & &
$G_H^{{({\bf 1})} }$ & $b_{u3} \approx 82.1\%$ & $b_{d3} \approx 94.8\%$
\\ \cline{1-2} \cline{4-6}
$\!\!W_\mu^{{({\bf 1})} 3}$ ; $W_\mu^{{({\bf 1})} +}\!\!$ & $b_{q2} \approx 6.4\%$ ; $2b_{q2}$ & &
$B_\mu^{{({\bf 1})} }$ & $b_{u2} \approx 11.5\%$ & \ \ $b_{d2} \approx 3.3\%$
\\ \cline{1-2} \cline{4-6}
$\!\!W_H^{{({\bf 1})} 3}$ ; $W_H^{{({\bf 1})} +}\!\!$ & $b_{q1} \approx 5.6\%$ ; $2b_{q1}$ & &
$B_H^{{({\bf 1})} }$ & \ \ $b_{u1} \approx 6.4\%$ & \ \ $b_{d1} \approx 1.9\%$
\\ \cline{1-2} \cline{4-6}
$B_\mu^{{({\bf 1})} }$ & $b_{q0} \approx 0.55 \%$ & \multicolumn{3}{c}{} \\ \cline{1-2}
\end{tabular}
\medskip
\caption{Branching fractions of first and second generation (1,0) quarks, in percent.
$D_+^{({\bf 1})} $ has the same branching fractions as $U_+^{({\bf 1})} $
except for a flip of the electric charge of the (1,0) bosons.
The decay of $U_+^{({\bf 1})} $ into a $B_H^{{({\bf 1})} }$ and a quark is not shown
because its branching fraction is too small to be relevant.
}
\medskip
\label{tab:BRquarks}
\end{table}
\begin{figure}[t]
\centerline{
\psfig{file=BR_tL.ps,width=7.8cm,angle=0}
\psfig{file=BR_bL.ps,width=7.8cm,angle=0}
}
\vspace*{-1.4mm}
\caption{Branching fractions for the $SU(2)_W$-doublet (1,0) quarks of the third generation,
assuming that the (1,0) Higgs particles have a mass $M_{H^{({\bf 1})} } = 1.05/R$.
}
\label{fig:Brs}
\end{figure}
The (1,0) vector gluon decays into a standard model quark and a (1,0)
quark. The width in the case of $SU(2)_W$-singlet down-type quarks
is given by
\begin{equation}
\label{Gmu-width}
\Gamma\left(G_\mu^{{({\bf 1})} }\! \rightarrow \sum_{i=1,2,3} D_{-_R}^{{({\bf 1})} i}
d_R^i\right) =
\frac{\alpha_{s}}{2} M_{G_\mu^{{({\bf 1})} }}
\left(1-\frac{M_{D_-^{{({\bf 1})} }}^2}{M_{G_\mu^{{({\bf 1})} }}^2}\right)^{\!2}
\left(1+\frac{M_{D_-^{{({\bf 1})} }}^2}{2 M_{G_\mu^{{({\bf 1})} }}^2}\right) ~.
\end{equation}
The widths into all other (1,0) quarks except for the top have similar forms.
For $1/R \lae 1.3$ TeV the decays of the (1,0) vector gluon into
$t_L T_{+_L}^{{({\bf 1})} }$ or $t_R T_{-_R}^{{({\bf 1})} }$ have a highly suppressed
phase space, and the branching fractions of
$G_\mu^{{({\bf 1})} }$ into a quark plus $Q_{+_L}^{{({\bf 1})} i}$,
$U_{-_R}^{{({\bf 1})} i}$, or $D_{-_R}^{{({\bf 1})} i}$, summed over the index $i$ which labels
the three generations, are given by 36.7\%, 24.6\% and 38.7\%, respectively.
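As a numerical sanity check (a minimal sketch, not part of the original analysis), the three quoted $G_\mu^{{({\bf 1})} }$ branching fractions should saturate the width once the phase-space-suppressed top channels are neglected:

```python
# The quoted G_mu^(1) branching fractions into Q_+^(1), U_-^(1), D_-^(1) plus a
# quark (summed over generations) should add up to ~100% when the suppressed
# top channels are dropped.
br = {"Q_+": 0.367, "U_-": 0.246, "D_-": 0.387}
total = sum(br.values())
print(f"total = {total:.3f}")  # 1.000
```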
For the purpose of analyzing the capability of the LHC to test
this model, we need to compute the branching fractions of the complete
cascade decays of the (1,0) quarks and gluons into the LKP and a
number of charged leptons or photons.
It is useful to compute first the sums over branching fractions
of the cascade decays that do not involve any $e^\pm$, $\mu^\pm$, or $\gamma$
for $G_H^{{({\bf 1})} }$,
\begin{equation}
b_{gX} = b_{g1} + \frac{2}{3}b_{g2} + \frac{b_{g2}^\prime}{3} ~,
\end{equation}
and for $U_-^{({\bf 1})} $, $D_-^{({\bf 1})} $, $Q_+^{({\bf 1})} $, respectively:
\begin{eqnarray}
b_{uX} & = & b_{u1} + b_{u2}\, b_{Be} + b_{u3}\, b_{gX} ~,
\nonumber \\ [2.4mm]
b_{dX} & = & b_{d1}+ b_{d2}\, b_{Be} + b_{d3}\, b_{gX} ~,
\nonumber \\ [2.4mm]
b_{qX} & = & b_{Be} b_{q0} +
\frac{4}{3} b_{q1} + \frac{2}{3}\left(2 b_{l1} + 3 b_{l2} + 2 b_{l3}b_{Be} \right) b_{q2}
+ b_{q3}\, b_{gX} ~.
\eear
The right-hand sides of the above equations are sums over separate
cascade decays, whose branching fractions are written as products
of `one-step' decays. For example, in the case of $b_{qX}$ the first term
comes from the $Q_+^{({\bf 1})} \to W_H^{({\bf 1})} \to B_H^{({\bf 1})} $ cascade, the second term comes from the
sum over $Q_+^{({\bf 1})} \to W_\mu^{({\bf 1})} \to \cdots \to B_H^{({\bf 1})} $ cascades, and the last term
comes from the $Q_+^{({\bf 1})} \to G_H^{({\bf 1})} \to B_H^{({\bf 1})} $ cascade.
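These sums can be checked numerically; for instance, evaluating $b_{gX}$ with the branching fractions quoted earlier reproduces the $\approx 98.0\%$ entry of Table~\ref{tab:BRGH} (a minimal sketch):

```python
# b_gX = b_g1 + (2/3) b_g2 + b_g2'/3, using the values quoted in the text.
b_g1, b_g2, b_g2_prime = 0.965, 0.012, 0.023

b_gX = b_g1 + (2.0 / 3.0) * b_g2 + b_g2_prime / 3.0
print(f"b_gX ~ {b_gX:.3f}")  # ~0.98, consistent with Table tab:BRGH
```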
The $Q_-^{{({\bf 1})} }$ and $G_H^{{({\bf 1})} }$ cascade
decays lead to at most two charged leptons, with small branching
fractions, as shown in Table~\ref{tab:BRGH}.
By contrast, $Q_+^{{({\bf 1})} }$
have larger branching fractions for decays involving charged
leptons, and include up to four charged leptons (see Table~\ref{tab:BRGmu}).
However, the cascade decay with the largest branching fraction to a photon
is that of $U_-^{{({\bf 1})} }$.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|c||c|c|c|}
\hline
\hspace{-0.4em} Final-state $e,\mu,\gamma$ \hspace{-0.4em} &
$G_H^{{({\bf 1})} } \!\to ... \to B_H^{{({\bf 1})} } $
& $U_-^{{({\bf 1})} } \!\to ... \to B_H^{{({\bf 1})} } $ & $D_-^{{({\bf 1})} } \!\to ... \to B_H^{{({\bf 1})} } $
\rule{0mm}{5mm}\rule{0mm}{-22mm}
\\ \hline\hline
$X$ & $ b_{gX}
\approx 98.0\%$
& $b_{uX} \approx 89.4\%$ \ \ & $b_{dX} \approx 95.5\%$ \ \
\\ \hline
$e^+ \, (\mu^+) \, X$ & $\frac{\textstyle 1}{\textstyle 6} b_{g2}^\prime \approx 0.38\%$
& $\frac{\textstyle 1}{\textstyle 6}b_{u3}b_{g2}^\prime \approx 0.31\%$ &
$\frac{\textstyle 1}{\textstyle 6}b_{d3}b_{g2}^\prime \approx 0.36\%$
\\ \hline
$e^- \, (\mu^-) \, X$ & $\frac{\textstyle 1}{\textstyle 6} b_{g2}^\prime \approx 0.38\%$
& $\frac{\textstyle 1}{\textstyle 6}b_{u3}b_{g2}^\prime \approx 0.31\%$ &
$\frac{\textstyle 1}{\textstyle 6}b_{d3}b_{g2}^\prime \approx 0.36\%$
\\ \hline
$e^+e^- \, (\mu^+\mu^-) \, X$ & $\frac{\textstyle 1}{\textstyle 6}b_{g2} \approx 0.21\%$
& $b_{u2}b_{Be}
+ \frac{\textstyle b_{u3}}{\textstyle 6}b_{g2} \approx 2.6\%$ &
$b_{d2} b_{Be} + \frac{\textstyle b_{d3}}{\textstyle 6}b_{g2} \approx 0.90 \% $
\\ \hline
$\gamma \, X$ & $\approx 0$ & $b_{u2}b_{B\gamma} \approx 3.9\%$ & $b_{d2}b_{B\gamma} \approx 1.1\%$
\\ \hline
\end{tabular}
\medskip
\caption{Branching fractions for the complete cascade decays of $G_H^{({\bf 1})} $,
$U_-^{{({\bf 1})} }$ and $D_-^{{({\bf 1})} }$, with 0,1 or 2 charged leptons in the final state.
$X$ stands for a number of standard model fermions other than $e^{\pm}$
and $\mu^\pm$.
The branching fractions for $\overline{U}_-^{{({\bf 1})} }$ and $\overline{D}_-^{{({\bf 1})} }$
are the same as for $U_-^{{({\bf 1})} }$ and $D_-^{{({\bf 1})} }$.
}
\label{tab:BRGH}
\end{table}
\newpage
\vspace*{0.81cm}
\begin{table}[t!]
\centering
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{|c|c|}
\hline
\hspace{-0.1em} Final-state $e,\mu ,\gamma $ & $U_+^{{({\bf 1})} } \!\to ... \to B_H^{{({\bf 1})} } $
\rule{0mm}{5mm}\rule{0mm}{-22mm} \\ \hline\hline
$X$ & $b_{qX} \approx 74.5\%$
\\ \hline
$e^+ \, (\mu^+) \, X$ & $\frac{\textstyle 2}{\textstyle 3}b_{q1} +
\frac{\textstyle 2}{\textstyle 9}\left(3 b_{l1} + 7 b_{l2} + 3 b_{l3} b_{Be} \right)b_{q2}
+\frac{\textstyle 1}{\textstyle 6}b_{g2}^\prime b_{q3} \approx 7.3\% $
\\ \hline
$e^- \, (\mu^-) \, X$ & \ \ $\frac{\textstyle 2}{\textstyle 9}b_{l2}b_{q2}
+\frac{\textstyle 1}{\textstyle 6}b_{g2}^\prime b_{q3}\approx 0.58\% $
\\ \hline
$e^+e^- \, (\mu^+\mu^-) \, X$
& $b_{Be} b_{q0} +
\frac{\textstyle b_{q1}}{\textstyle 6} +
\frac{\textstyle 1}{\textstyle 18}(3 b_{l1} + 14 b_{l2} + 27 b_{l3} b_{Be} )b_{q2}
+ \frac{\textstyle b_{g2}}{\textstyle 6} b_{q3} \approx 2.6\% $
\\ \hline
$e^+\mu^- \, (e^-\mu^+) \, X$ & $\frac{\textstyle 2}{\textstyle 9}b_{l2}b_{q2} \approx 0.33\% $
\\ \hline
$e^+e^+e^- \, (\mu^+\mu^+\mu^-) \, X$
& $\frac{\textstyle 1}{\textstyle 3} (b_{l2} + 2 b_{l3} b_{Be} )b_{q2} \approx 0.58\%$
\\ \hline
$\mu^+e^+e^- \, (e^+\mu^+\mu^-) \, X$
& $\frac{\textstyle 1}{\textstyle 3} (b_{l2} + 2 b_{l3} b_{Be} )b_{q2} \approx 0.58\%$
\\ \hline
$\!e^+e^-e^+e^- \, (\mu^+\mu^-\mu^+\mu^-)X\!\!$
& $\frac{\textstyle 1}{\textstyle 36}(b_{l2} + 6 b_{l3} b_{Be})b_{q2} \approx 0.063\%$
\\ \hline
$e^+e^-\mu^+\mu^- \, X$
& $\frac{\textstyle 1}{\textstyle 18}(b_{l2} + 6 b_{l3} b_{Be})b_{q2} \approx 0.13\%$
\\ \hline
$\gamma \, X$ & $b_{B\gamma } b_{q0} +
\frac{\textstyle 4}{\textstyle 3} b_{l3} b_{B\gamma } b_{q2} \approx 0.38\% $
\\ \hline
$\gamma \, e^+ \, (\gamma \mu^+ ) \, X$ & $\frac{\textstyle 2}{\textstyle 3} b_{l3} b_{B\gamma } b_{q2} \approx 0.13\%$
\\ \hline
$\gamma \, e^+e^- \, (\gamma \mu^+\mu^- ) \, X$ &
$\frac{\textstyle 1}{\textstyle 6} b_{l3} b_{B\gamma } b_{q2} \approx 0.033\% $
\\ \hline
\end{tabular}
\medskip
\caption{Branching fractions for the complete cascade decays of
$U_+^{{({\bf 1})} }$ with up to four charged leptons or photons in the final state.
$X$ stands for a number of standard model fermions other than $e^{\pm}$
and $\mu^\pm$.
$\overline{D}_+^{{({\bf 1})} }$ has the same branching fractions as $U_+^{{({\bf 1})} }$, while
the branching fractions of $D_+^{{({\bf 1})} }$ and $\overline{U}_+^{{({\bf 1})} }$
are given by flipping the lepton charges in the first column.
The (1,0) top-quark doublet has branching fractions which are highly dependent
on $1/R$, and are not shown here.
}
\label{tab:BRGmu}
\end{table}
\section{Signatures of (1,0) particles at hadron colliders}
\label{sec:production}
In this section we discuss the prospects for discovery of (1,0) particles at the LHC and the Tevatron.
As shown in the previous section, a large number of leptons arises in the decays of
$W_\mu^{({\bf 1})} $ and
other (1,0) bosons, while photons arise in the decay of the $B_\mu^{({\bf 1})} $ vector boson.
We focus on computing the production cross sections of colored particles
and the number of events with leptons and photons resulting from their decays.
We will also include direct production of $W_\mu^{{({\bf 1})} }$ in our analysis although
this turns out to have a rather small effect.
\subsection{Pair production of level-1 particles}
We discuss the production of (1,0) particles in order of importance
for the lepton + photon signals under consideration.
This is more complicated than level-1 production in the case of
one universal extra dimension \cite{Macesanu:2002db}
because of the $G_H^{({\bf 1})} $ spinless adjoint, which is not present in the 5D theory,
and appears in the final state as well as in $s$- and $t$-channel exchanges.
We begin with the $SU(2)_W$-doublet quark $Q_+^{({\bf 1})} $, because
a large fraction of its cascade decays gives rise to charged leptons (see
Section~\ref{sec:decays}).
In addition, since it is lighter than the (1,0) vector gluon,
and because of its high multiplicity,
we expect $Q_+^{({\bf 1})} $ production to be the dominant source of multi-lepton signals.
We concentrate here on production mechanisms at the LHC, while
in Section 4.3 we adapt this discussion to the case of $p\bar{p}$ collisions at the
Tevatron.
Given that there are more quarks than anti-quarks involved in proton-proton collisions,
we first discuss quark-initiated pair production,
$qq \to Q_\pm^{({\bf 1})} {Q}_\pm^{({\bf 1})} $,
which is mediated by $G_\mu^{({\bf 1})} $ and $G_H^{({\bf 1})} $ exchange in the $t$ channel,
as shown in Fig.~\ref{fig:qqQQ}.
Two (1,0) quarks of different flavors ($Q_\pm^{({\bf 1})} {Q'}_\pm^{({\bf 1})} $),
and an $SU(2)_W$ doublet-singlet pair ($Q_+^{({\bf 1})} {Q'}_-^{({\bf 1})} $) are produced in
a similar way.
\begin{figure*}[b]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(350,80)(40,20)
\ArrowLine( 60,75)(110,75)
\ArrowLine(60,25)(110,25)
\ArrowLine(110,25)(160,25)
\ArrowLine(110,75)(160,75)
\Gluon(110,75)(110,25){5}{4}
\Text( 220,50)[r]{{\large +}}
\ArrowLine(260,25)( 310,25)
\ArrowLine(310,25)( 360,25)
\ArrowLine(310,75)( 360,75)
\ArrowLine(260,75)( 310,75)
\DashLine(310,25)(310,75){3}
\Text(188,25)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(188,75)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(390,25)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(390,75)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(55,25)[r]{$ q$}
\Text(55,75)[r]{$ q$}
\Text(255,25)[r]{$q$}
\Text(255,75)[r]{$ q$}
\Text(145,50)[r]{$ G_\mu^{({\bf 1})} $}
\Text(340,50)[r]{$ G_H^{({\bf 1})} $}
\end{picture}
\end{center}
\caption{Diagrams for $Q_\pm^{({\bf 1})} Q_\pm^{({\bf 1})} $ production from quark-quark ($qq$)
initial state.}
\label{fig:qqQQ}
\end{figure*}
For low $1/R$, the quark-antiquark and gluon-initiated production mechanisms
are also important.
Production from a quark-antiquark pair,
$q \bar{q}' \to Q_\pm^{({\bf 1})} \bar{Q'}_\pm^{({\bf 1})} $
and $q \bar{q}' \to Q_\pm^{({\bf 1})} \bar{Q'}_\mp^{({\bf 1})} $, is similar to the process shown
in Fig.~\ref{fig:qqQQ} with a fermion line replaced by an anti-fermion line.
When quarks in the initial state have a different flavor than the (1,0) quarks
in the final state, $q' \bar{q}' \to Q_\pm^{({\bf 1})} \bar{Q}_\pm^{({\bf 1})} $,
a single tree-level diagram with a gluon exchange in the $s$ channel contributes,
as shown in Fig.~\ref{fig:qqQQ'}.
The processes $q \bar{q} \to Q_\pm^{({\bf 1})} \bar{Q}_\pm^{({\bf 1})} $
(for which the initial and final states have same flavors) get contributions from
the two diagrams in Fig.~\ref{fig:qqQQ} with one of the fermion lines replaced by
an anti-fermion line,
and also from the diagram of Fig.~\ref{fig:qqQQ'} with $q'$ replaced by $q$.
\begin{figure*}[t!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(160,80)(0,20)
\ArrowLine( 20,20)( 60,50)
\ArrowLine( 60,50)( 20,80)
\Gluon(60,50)(100,50){4}{4}
\ArrowLine( 140,20)(100,50)
\ArrowLine(100,50)( 140,80)
\Text(166,20)[r]{$\bar{Q}_\pm^{({\bf 1})} $}
\Text(166,80)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(14,20)[r]{$ q'$}
\Text(14,80)[r]{$\bar{q}'$}
\Text(85,65)[r]{$ g$}
\end{picture}
\end{center}
\caption{$Q_\pm^{({\bf 1})} \overline{Q}_\pm^{({\bf 1})} $ production from $q'\bar{q}'$ initial state.}
\label{fig:qqQQ'}
\end{figure*}
$Q_\pm^{({\bf 1})} \bar{Q}_\pm^{({\bf 1})} $ can also be produced from two gluons in
the initial state, as shown in Fig.~\ref{fig:ggQQ}.
This production channel becomes increasingly important for smaller
(1,0) quark mass (smaller $1/R$)
due to the larger gluon flux in the parton distribution.
\begin{figure*}[h!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(440,80)(7,20)
\Gluon( 20,20)( 45,50){4}{4}
\Gluon( 45,50)( 20,80){4}{4}
\Gluon(45,50)(80,50){4}{4}
\ArrowLine( 113,20)(80,50)
\ArrowLine(80,50)( 113,80)
\Text(150,50)[r]{{\large +}}
\Gluon(170,75)(210,75){4}{4}
\Gluon(170,25)(210,25){4}{4}
\ArrowLine(210,25)(210,75)
\ArrowLine(210,75)(260,75)
\ArrowLine(260,25)(210,25)
\Text(305,50)[r]{{\large +}}
\Gluon(330,75)(370,75){4}{4}
\Gluon(330,25)(370,25){4}{4}
\ArrowLine(370,75)(370,25)
\ArrowLine(370,25)(420,25)
\ArrowLine(420,75)(370,75)
\Text(15,15)[r]{$g$}
\Text(15,85)[r]{$g$}
\Text(165,25)[r]{$ g$}
\Text(165,75)[r]{$ g$}
\Text(325,25)[r]{$g$}
\Text(325,75)[r]{$ g$}
\Text(65,65)[r]{$ g$}
\Text(135,20)[r]{$\bar{Q}_\pm^{({\bf 1})} $}
\Text(135,85)[r]{$ {Q}_\pm^{({\bf 1})} $}
\Text(285,25)[r]{$ \bar{Q}_\pm^{({\bf 1})} $}
\Text(285,80)[r]{$ {Q}_\pm^{({\bf 1})} $}
\Text(445,25)[r]{$\bar{Q}_\pm^{({\bf 1})} $}
\Text(445,80)[r]{${Q}_\pm^{({\bf 1})} $}
\Text(240,50)[r]{${Q}_\pm^{({\bf 1})} $}
\Text(400,50)[r]{${Q}_\pm^{({\bf 1})} $}
\end{picture}
\end{center}
\caption{Diagrams for $Q_\pm^{({\bf 1})} \bar{Q}_\pm^{({\bf 1})} $ production from gluon-gluon ($gg$)
initial state.}
\label{fig:ggQQ}
\end{figure*}
Since the $SU(3)_c$ (1,0) bosons, $G_\mu^{({\bf 1})} $ and $G_H^{({\bf 1})} $, decay to fewer leptons
than $Q_+^{({\bf 1})} $, we will next consider their associated production with $Q_+^{({\bf 1})} $.
The process $q g \to Q_\pm^{({\bf 1})} G_H^{({\bf 1})} $ is shown in Fig.~\ref{fig:qgQGH}.
Diagrams with a (1,0) vector gluon in the final state can be
obtained by replacing $G_H^{({\bf 1})} $ by $G_\mu^{({\bf 1})} $.
Similar diagrams, but with $G_H^{({\bf 1})} $ replaced by $W_\mu^{({\bf 1})} $ and an appropriate
flip between the up-type and down-type quarks, contribute to $q g \to Q_\pm^{({\bf 1})} W_\mu^{({\bf 1})} $
associated production.
\begin{figure*}[b!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(450,80)(3,20)
\Gluon(50,50)( 20,20){4}{4}
\ArrowLine( 20,80)( 50,50)
\ArrowLine(50,50)(90,50)
\DashLine( 120,20)(90,50){3}
\ArrowLine(90,50)( 120,80)
\Text(155,50)[r]{{\large +}}
\ArrowLine(180,75)(225,75)
\ArrowLine(225,75)(225,25)
\Gluon(225,25)(180,25){4}{4}
\DashLine(270,75)(225,75){3}
\ArrowLine(225,25)(270,25)
\Text(305,50)[r]{{\large +}}
\ArrowLine(330,75)(375,75)
\Gluon(375,25)(330,25){4}{4}
\DashLine(375,75)(375,25){3}
\DashLine(375,25)(420,25){3}
\ArrowLine(375,75)(420,75)
\Text(15,15)[r]{$ g$}
\Text(15,85)[r]{$ q$}
\Text(175,25)[r]{$ g$}
\Text(175,75)[r]{$ q$}
\Text(325,25)[r]{$ g$}
\Text(325,75)[r]{$ q$}
\Text(75,61)[r]{$ q$}
\Text(145,20)[r]{$ G_H^{({\bf 1})} $}
\Text(145,85)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(295,25)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(295,80)[r]{$ G_H^{({\bf 1})} $}
\Text(445,25)[r]{$ G_H^{({\bf 1})} $}
\Text(445,80)[r]{$ Q_\pm^{({\bf 1})} $}
\Text(253,50)[r]{$ {Q}_\pm^{({\bf 1})} $}
\Text(403,50)[r]{$ G_H^{({\bf 1})} $}
\end{picture}
\end{center}
\caption{Diagrams for $G_H^{({\bf 1})} Q_\pm^{({\bf 1})} $ production from quark-gluon initial state.}
\label{fig:qgQGH}
\end{figure*}
$G_H^{({\bf 1})} $ pair production is a
rather meager source of leptons or photons,
but for the sake of completeness we include here its diagrams:
quark-initiated production $q\bar{q} \to G_H^{({\bf 1})} G_H^{({\bf 1})} $, and
gluon-initiated production $gg \to G_H^{({\bf 1})} G_H^{({\bf 1})} $ are shown in
Figs.~\ref{fig:qqGHGH} and~\ref{fig:ggGHGH}, respectively.
$G_\mu^{({\bf 1})} $ pair production proceeds through the same diagrams
with all $G_H^{({\bf 1})} $ lines replaced by $G_\mu^{({\bf 1})} $ ones.
$G_\mu^{({\bf 1})} G_H^{({\bf 1})} $ associated production, $q\bar{q} \to G_H^{({\bf 1})} G_\mu^{({\bf 1})} $,
proceeds through four diagrams with $Q_+^{({\bf 1})} $ and $Q_-^{({\bf 1})} $ in the
$t$ and $u$ channels, similar to the second diagram in Fig.~\ref{fig:qqGHGH}.
There is no contribution from the $s$ channel because the coupling
$G_H^{({\bf 1})} g^\mu G_\mu^{({\bf 1})} $ does not exist at tree level due to gauge invariance.
\begin{figure*}[t!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(300,80)(0,20)
\ArrowLine( 20,20)( 60,50)
\ArrowLine( 60,50)( 20,80)
\Gluon(60,50)(100,50){4}{4}
\DashLine( 140,20)(100,50){3}
\DashLine(100,50)( 140,80){3}
\Text(165,50)[r]{{\large +}}
\ArrowLine(180,75)(230,75)
\ArrowLine(230,25)(180,25)
\ArrowLine(230,75)(230,25)
\DashLine(280,75)(230,75){3}
\DashLine(280,25)(230,25){3}
\Text(15,15)[r]{$ \bar{q}$}
\Text(15,85)[r]{$ q$}
\Text(175,25)[r]{$ \bar{q}$}
\Text(175,75)[r]{$ q$}
\Text(85,65)[r]{$ g$}
\Text(165,20)[r]{$ G_H^{({\bf 1})} $}
\Text(165,85)[r]{$ G_H^{({\bf 1})} $}
\Text(305,25)[r]{$ G_H^{({\bf 1})} $}
\Text(305,80)[r]{$ G_H^{({\bf 1})} $}
\Text(260,50)[r]{$ {Q}_\pm^{({\bf 1})} $}
\end{picture}
\end{center}
\caption{Diagrams for $G_H^{({\bf 1})} G_H^{({\bf 1})} $ production from $q\bar{q}$
(a $u$-channel diagram is not shown).}
\label{fig:qqGHGH}
\end{figure*}
\begin{figure*}[t!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(450,80)(0,20)
\Gluon( 20,20)( 60,50){4}{4}
\Gluon( 60,50)( 20,80){4}{4}
\Gluon(60,50)(100,50){4}{4}
\DashLine( 140,20)(100,50){3}
\DashLine(100,50)( 140,80){3}
\Text(165,50)[r]{{\large +}}
\Gluon(180,75)(230,75){4}{4}
\Gluon(180,25)(230,25){4}{4}
\DashLine(230,25)(230,75){3}
\DashLine(230,75)(280,75){3}
\DashLine(280,25)(230,25){3}
\Text(305,50)[r]{{\large +}}
\Gluon(360,50)(320,20){4}{4}
\Gluon(320,80)(360,50){4}{4}
\DashLine(362,50)(400,20){3}
\DashLine(362,50)(400,80){3}
\Text(15,15)[r]{$ g$}
\Text(15,85)[r]{$ g$}
\Text(175,25)[r]{$ g$}
\Text(175,75)[r]{$ g$}
\Text(315,25)[r]{$ g$}
\Text(315,75)[r]{$ g$}
\Text(85,65)[r]{$ g$}
\Text(165,20)[r]{$ G_H^{({\bf 1})} $}
\Text(165,85)[r]{$ G_H^{({\bf 1})} $}
\Text(305,25)[r]{$ G_H^{({\bf 1})} $}
\Text(305,80)[r]{$ G_H^{({\bf 1})} $}
\Text(260,50)[r]{$ G_H^{({\bf 1})} $}
\Text(430,15)[r]{$ G_H^{({\bf 1})} $}
\Text(430,85)[r]{$ G_H^{({\bf 1})} $}
\end{picture}
\end{center}
\caption{Diagrams for $G_H^{({\bf 1})} G_H^{({\bf 1})} $ production from $gg$
(a $u$-channel diagram is not shown).}
\label{fig:ggGHGH}
\end{figure*}
Finally we consider associated production of $G_{\mu}^{({\bf 1})} $ or $G_H^{({\bf 1})} $ with
an $SU(2)_W$ vector boson, $W_\mu^{{({\bf 1})} }$,
as shown in Fig.~\ref{fig:qqW+GH}
(with $G_H^{({\bf 1})} $ in the final state replaced by $G_\mu^{({\bf 1})} $ for
$q \bar{q}' \rightarrow G_\mu^{({\bf 1})} W_\mu^{({\bf 1})} $).
For $W_\mu^{{({\bf 1})} 3}$ in the final state, the initial state and the (1,0)
quarks are all of the same type.
Associated production with hypercharge bosons, $B_\mu^{({\bf 1})} $ or $B_H^{({\bf 1})} $, as
well as with the $SU(2)_W$ spinless adjoints $W_H^{({\bf 1})} $, is very small and
will be neglected;
we will also ignore production of (1,0) Higgs particles since their
phenomenology is highly model-dependent.
\begin{figure*}[h!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{0.8}
\begin{center}
\begin{picture}(350,80)(40,20)
\ArrowLine(80,75)(130,75)
\ArrowLine(130,25)(80,25)
\ArrowLine(130,75)(130,25)
\DashLine(180,75)(130,75){3}
\Photon(180,25)(130,25){3}{4}
\Text(230,50)[r]{{\bf\large +}}
\ArrowLine(260,75)(310,75)
\ArrowLine(310,25)(260,25)
\ArrowLine(310,75)(310,25)
\DashLine(360,25)(310,25){3}
\Photon(310,75)(360,75){3}{4}
\Text(75,25)[r]{$\bf \bar{q'}$}
\Text(75,75)[r]{$\bf q$}
\Text(255,25)[r]{$\bf \bar{q'}$}
\Text(255,75)[r]{$\bf q$}
\Text(220,25)[r]{$\bf W_\mu^{{({\bf 1})} +}$}
\Text(210,75)[r]{$\bf G_{H}^{({\bf 1})} $}
\Text(390,25)[r]{$\bf G_{H}^{({\bf 1})} $}
\Text(400,75)[r]{$\bf W_\mu^{{({\bf 1})} +}$}
\Text(160,50)[r]{$\bf {Q}_+^{({\bf 1})} $}
\Text(340,50)[r]{$\bf {Q'}_+^{({\bf 1})} $}
\end{picture}
\end{center}
\caption{Diagrams for $W_\mu^{{({\bf 1})} +} G_H^{({\bf 1})} $ production from $q\bar{q'}$.}
\label{fig:qqW+GH}
\end{figure*}
Given that there are many diagrams that need to be taken into account,
we have implemented the 6DSM detailed in Section 2
in {\tt CalcHEP}~\cite{Pukhov:1999gg,Pukhov:2004ca},
a tree-level Feynman diagram calculator (for a description of our
{\tt CalcHEP} files, see \cite{web}).
Consequently it is rather straightforward to compute production cross sections for
(1,0) particles at various colliders.
As a cross-check we have compared the {\tt CalcHEP} output for all 2-
and 3-body decay widths with the corresponding analytic expressions
in Section~\ref{sec:decays}.
We also checked cross sections for selected production channels
using {\tt MadGraph/MadEvent}~\cite{Stelzer:1994ta,Maltoni:2002qb}.
The cross sections at the LHC ($\sqrt{s}=14$ TeV) are graphed as a function of
$1/R$ in Fig.~\ref{fig:xsec_su3},
and have been summed over various channels.
We assume five partonic quark flavors in the proton along with the gluon, and
ignore electroweak production of colored particles.
We use the CTEQ6L parton distributions~\cite{Pumplin:2002vw}, and
choose the scale of the strong coupling
constant $\alpha_s$ to be equal to the parton-level center of mass energy.
\begin{figure}[t]
\centerline{
\psfig{file=xsection_LHC_SU3-1.ps,width=8cm,angle=0}
\psfig{file=xsection_LHC_SU3-2.ps,width=8cm,angle=0} }
\vspace*{0.1mm}
\caption{Tree-level production cross sections of (1,0) particles at the LHC: (a) quark pairs, and
(b) final states involving bosons. The cross sections have been summed over the first two generations of
KK quarks and antiquarks. The weak-doublet $Q^{{({\bf 1})} }_+$ includes
both up- and down-type (1,0) quarks. The cross section for $U^{({\bf 1})} _+ D^{({\bf 1})} _+$ production (not shown)
turns out to be nearly equal to that for $U^{({\bf 1})} _+ U^{({\bf 1})} _+$. Cross sections
for the weak-singlet quarks (6D chirality $-$) are almost the same as those
for weak-doublet quarks (6D chirality $+$) and are not plotted.}
\label{fig:xsec_su3}
\end{figure}
$Q_+^{({\bf 1})} Q_+^{({\bf 1})} $ production, which is responsible for most
of the multi-lepton events (as shown later in Section~\ref{sec:lhc2}),
is dominated
by (1,0) quarks of the first two generations (88\% at $1/R=500$ GeV, increasing to 98\% at $1/R=1$ TeV).
The gluon-gluon initial state contributes
only $\sim$10\% (3\%) of the total $Q_+^{({\bf 1})} Q_+^{({\bf 1})} $ cross section at $1/R=500$ GeV
($1$ TeV): first, the gluon flux in the proton at this
mass scale is small, and second, there is a large number of subprocesses
with $qq$ or $q\bar{q}$ initial states.
$G_H^{({\bf 1})} $ production is different in that the dominant contribution to this process
comes from the gluon initial state, with valence quarks making up
the remainder.
The production cross sections of the $SU(2)_W$ doublet and singlet (1,0)
quarks, $Q_+^{{({\bf 1})} }$ or $Q_-^{{({\bf 1})} }$, are almost equal,
since they are produced in exactly the same way (see Figs.~\ref{fig:qqQQ}-\ref{fig:qgQGH}).
The slightly higher mass of $Q_+^{{({\bf 1})} }$ lowers its production cross section, but this is a small effect.
As expected from the structure of the parton distribution function, the $G_\mu^{{({\bf 1})} }$
associated production cross sections drop off faster than others.
$Q_+^{({\bf 1})} U_-^{({\bf 1})} $ pair production, the main source of events containing
both photons and leptons,
proceeds through $G_\mu^{({\bf 1})} $ and $G_H^{({\bf 1})} $ exchange in the $t$-channel,
as in Fig.~\ref{fig:qqQQ} with one of the $Q_+^{({\bf 1})} $ quarks replaced by $U_-^{({\bf 1})} $.
Due to the partonic structure, the production with first-generation
quarks in the initial state are dominant, accounting for $\sim 50\%$
of all $Q_+^{({\bf 1})} U_-^{({\bf 1})} $ pairs produced for $1/R=500$ GeV.
As mentioned earlier, $W_\mu^{{({\bf 1})} }$ associated production, although small
compared to that for colored (1,0) particles, is not necessarily negligible because
of its large branching fraction into leptons. We have included the
cross section for the channel with the largest production rate,
$W_\mu^{{({\bf 1})} +} Q_+^{{({\bf 1})} }$, in Fig.~\ref{fig:xsec_su3}.
The dominant contribution to this process is from production with first generation (1,0) quarks.
$W_\mu^{{({\bf 1})} -}$ associated production is even smaller, by an extra factor of $\sim$3,
due to the partonic structure of the proton.
\subsection{Events with leptons and photons at the LHC}
\label{sec:lhc2}
Having determined the production rates of (1,0) particles, we now turn to a discussion
of their experimental signatures at the LHC.
First we consider the production of
(1,0) particles giving $n\ell + m\gamma + \rlap{\,/}E_T$ final states with $n\geq n_{min}$ and $0 \leq m \leq 2$,
where we do not count leptons from the decays of standard model particles.
We calculate the inclusive cross sections for these channels
in the following way.
There are 11 (1,0) particles with different branching fractions to multiple leptons
as discussed in Section~\ref{sec:decays}. We label these particles by $A_i^{({\bf 1})} $,
where $1 \leq i \leq 11$ is the particle type:
\begin{equation}
A_i^{({\bf 1})} = \Big (
W_\mu^{{({\bf 1})} },
G_\mu^{{({\bf 1})} }, G_H^{{({\bf 1})} }, T_+^{{({\bf 1})} }, B_+^{{({\bf 1})} }, T_-^{{({\bf 1})} },
U_-^{{({\bf 1})} }, D_-^{{({\bf 1})} },
Q_+^{{({\bf 1})} }
\Big ) \, .
\end{equation}
Their branching fractions, Br$(i,a,a')$, where $a$ is the number of leptons
($0 \leq a \leq 4$) and $a'$ is the number of photons ($0 \leq a' \leq 1$), are
given in Section \ref{sec:decays}. $Q_+^{{({\bf 1})} }$ and $U^{({\bf 1})} _-$ include
only the first two generations of weak doublets and up-type singlets.
One should keep in mind that the 3rd generation KK quarks and the KK quarks of
the first two generations have different branching fractions to leptons, so
they must be treated separately. For simplicity we use the same symbol here
for quarks and antiquarks.
The cross section for $n\ell+m\gamma + \rlap{\,/}E_T$ events with $n\geq n_{min}$ and $0 \leq m \leq 2$ is
\begin{equation}
\sigma (pp \to n\ell+m\gamma+\rlap{\,/}E_T~, n\geq n_{min} )
= \sum_{i=1}^{11}\sum_{j\geq i}^{11}
\sigma ( pp\to A_i^{({\bf 1})} A_j^{({\bf 1})} ) B_{ij} \, ,
\end{equation}
where $B_{ij}$ is a sum over products of branching fractions of the particles
$A_i^{({\bf 1})} $ and $ A_j^{({\bf 1})} $
\begin{equation}
B_{ij} = \sum_{\underset{a+b\geq n_{min}}{a,b=0}}^{4}
\sum_{\underset{a'+b' = m}{a',b'=0}}^{1}
{\rm Br}(i,a,a'){\rm Br}(j, b,b') \, .
\end{equation}
Note that the total numbers of photons ($m$) and leptons ($n$) from the decay of a pair
of (1,0) particles are constrained by
$0 \leq n + 2m \leq 8$. It is not possible to obtain $8\ell+2\gamma + \rlap{\,/}E_T$, for instance,
since the hypercharge gauge boson $B_\mu^{({\bf 1})} $ decays either into a photon or into a fermion pair,
in both cases together with $B_H^{({\bf 1})} $, so each photon comes at the expense of two leptons.
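To make the bookkeeping concrete, the double sum defining $\sigma$ and $B_{ij}$ above can be sketched as follows (a toy illustration; the particle content, cross sections and branching fractions below are hypothetical placeholders, not the actual 6DSM values):

```python
# Toy sketch of sigma(pp -> n l + m gamma + MET) as a sum over unordered
# pairs (i, j) weighted by B_ij. All numbers are hypothetical placeholders.

def inclusive_xsec(sigma, Br, n_min, m):
    """Sum sigma[i][j] * B_ij over pairs with j >= i, where B_ij collects
    all decay combinations giving a+b >= n_min leptons and a'+b' = m photons."""
    total = 0.0
    for i in range(len(sigma)):
        for j in range(i, len(sigma)):
            B_ij = sum(Br[i].get((a, ap), 0.0) * Br[j].get((b, bp), 0.0)
                       for a in range(5) for b in range(5)    # 0..4 leptons each
                       for ap in range(2) for bp in range(2)  # 0..1 photons each
                       if a + b >= n_min and ap + bp == m)
            total += sigma[i][j] * B_ij
    return total

# One particle type decaying to 2 leptons (Br 0.5) or to 1 photon (Br 0.5):
sigma_toy = [[100.0]]                   # pair-production cross section (fb)
Br_toy = [{(2, 0): 0.5, (0, 1): 0.5}]   # Br[(leptons, photons)]
print(inclusive_xsec(sigma_toy, Br_toy, 2, 1))   # -> 50.0
```

With the toy table, the $2\ell + 1\gamma$ rate receives equal contributions from the two ways of assigning the photon and the lepton pair between the two decay legs.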
Most (1,0) particles have branching fractions that are independent of $1/R$. However,
those for third generation quarks have variations due to
threshold effects (see Fig.~\ref{fig:Brs}).
We use values at large $1/R$, which
slightly underestimates the total number of events as branching fractions are larger at small $1/R$. Since the contribution from the third generation is small,
our approximation gives rise to negligible error.
\begin{figure}[t]
\centerline{
\psfig{file=3more.ps,width=12cm,angle=0}
}
\vspace*{1mm}
\caption{Sum over cross sections for (1,0) particle pair production at
the LHC times the branching fractions of
the cascade decays that give rise to $n\geq 3,4,5$ or 6 charged leptons
($\ell = e^\pm$ or $\mu^\pm$),
as a function of the compactification scale.}
\label{fig:nevt}
\end{figure}
\begin{figure}
\centerline{
\psfig{file=photon.ps,width=8cm,angle=0}
\psfig{file=eeee.ps,width=8cm,angle=0}
}
\vspace*{-4mm}
\caption{Cross sections for
(a) $m\gamma+n\ell+\rlap{\,/}E_T$ events with $n\geq n_{min}$ for $m=1,2$ and $1 \leq n_{min} \leq 4$
and (b) Lepton + photon events with two or more same-sign leptons, at the LHC as a function of $1/R$.}
\label{fig:eeee}
\end{figure}
Cross sections for multi-lepton events at the LHC
are shown in Fig.~\ref{fig:nevt} as a function of $1/R$.
Out of the total number of events with 5 leptons or more at $1/R=500$ GeV,
the majority arise from first- and second-generation
weak doublet quarks, either in pairs or in association with other particles;
$W_\mu^{({\bf 1})} $ pair production is responsible for around 10\%,
as is production including $SU(3)_c$ bosons, $G_{\mu ,H}^{({\bf 1})} $.
As parton distribution functions vary with the size of the extra dimensions,
so will the individual contributions,
although the sensitivity to the mass scale $1/R$ is small.
The results shown in Fig.~\ref{fig:nevt} include tree-level processes only.
We estimate that next-to-leading order effects will increase the cross sections by $\sim$30-50\%, especially due to initial state radiation.
A complete analysis of this effect is warranted, but is beyond the scope of this paper.
Also interesting are combined photon and lepton events which result from 1-loop decays of
the ${({\bf 1})} $ hypercharge gauge boson $B_\mu^{({\bf 1})} $ produced in the decay chain of $U_-^{({\bf 1})} $ quarks (see Fig.~\ref{fig:eeee}(a)).
Down-type quarks have smaller hypercharge and so couple less strongly, while quark doublets couple more strongly to weak bosons, resulting in a negligible
branching fraction into $B_\mu^{({\bf 1})} $. In Fig.~\ref{fig:ex-diagrams} we show typical diagrams for
$\ell^+ \ell^+ \ell^+ \ell^- \ell^-$ and $\gamma \ell^+ \ell^- $ signatures.
The rates for events with unusual
combinations of final states, such as two same-sign leptons and a photon,
$\gamma\ell^+ \ell^+$ ($\gamma\ell^- \ell^-$), or three
same-sign leptons and one of opposite sign, $\ell^+\ell^+\ell^+\ell^-$
($\ell^-\ell^-\ell^-\ell^+$), are plotted in Fig.~\ref{fig:eeee}(b). The
latter process accounts for around 10\% of the total rate for 4-lepton events,
and the largest single contribution to it is the decay of $U_+^{({\bf 1})} $
($D_+^{({\bf 1})} $) pairs. It arises only rarely in the standard model from $W^+W^+Z$ ($W^-W^-Z$) production.
\begin{figure}[b]
\centerline{
\psfig{file=d1.eps,width=7.5cm,angle=0}
\hspace{0.2cm}
\psfig{file=d2.eps,width=7.5cm,angle=0}
}
\vspace*{-1.4mm}
\caption{Representative processes that lead to
$5\ell + \rlap{\,/}E_T$ and $\gamma \ell^+\ell^- +\rlap{\,/}E_T$ events. Several other
production mechanisms as well as cascade decays contribute to these and related
signals.
\label{fig:ex-diagrams}}
\end{figure}
We expect that the small standard model backgrounds for these processes can be
eliminated by using a hard $\rlap{\,/}E_T$ cut in conjunction with a jet $p_T$
cut since the jets originating from the decay of (1,0) colored
particles should have a transverse momentum of the order of their mass
differences ($\sim 100$ GeV). One might also naively worry about
triggering issues due to the softness of leptons, since the cascade
decays giving rise to them occur between particles that are relatively
degenerate in mass. A preliminary analysis of a single leg of the
decay chain, keeping exact spin correlations, suggests that more than 90\% of
lepton pairs have enough $p_T$ to evade a 15 GeV cut,
and that the leptons
are far enough away in $\Delta R$ to be visible as individual tracks.
Hence we do not anticipate any triggering problems, although a
detailed analysis of these issues using a detector simulator might be beneficial.
\subsection{Cross sections at the Tevatron}
At the Tevatron, the production from a $q\bar{q}$ initial state,
shown in Figs.~\ref{fig:qqQQ'},~\ref{fig:qqGHGH} and~\ref{fig:qqW+GH}, dominates.
We summarize our results for (1,0) production cross sections, as well as multi-lepton and lepton plus photon signatures in
Fig.~\ref{fig:prod-TeV}.
The lower center-of-mass energy of this collider slightly increases $W_\mu^{({\bf 1})} $ production cross sections
as compared with the LHC. This process now contributes 16\% of the total number of events
with 4 or more leptons for $1/R=300$ GeV.
We can use data gathered from Tevatron Run II to place rough constraints
on the radius of the extra dimensions. One potential channel that has been
searched for in the context of the minimal supersymmetric standard model is the trilepton signal~\cite{CDFnote,D0note}. We apply the results of this analysis, which found no excess over standard model background, directly to our model. If we assume an efficiency of $\sim5\%$~\cite{CDFnote,D0note}, we see that $1/R$ must be larger
than $\sim 270$ GeV, otherwise we might have expected to observe at least 3 events.
The low statistics for this final state, in both the expected and observed events, make the limit less reliable than desired.
A more precise, though less stringent, constraint can be obtained by using Run II lepton + photon
data~\cite{Abulencia:2007zi}, which contain larger numbers of expected and observed events. The standard model prediction for the $\ell \gamma X$ channel,
for instance, is $150.6\pm 13$ events, with 163 observed. Assuming
that universal extra dimensions are responsible for the small excesses in this and the $\ell^+ \ell^- \gamma X$ channels allows us to obtain a limit on $1/R$
of around 240 GeV at 95\% C.L.
\begin{figure}[t]
\centerline{
\psfig{file=xsection_TeV_SU3.ps,width=8cm,angle=0}
\psfig{file=3more_TeV.ps,width=8cm,angle=0}}
\vspace*{-4mm}
\caption{(a) Production cross sections at the Tevatron and (b) Cross sections for multilepton
+ photon events, as a function of $1/R$.}
\label{fig:prod-TeV}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
\setcounter{equation}{0}
Despite the successful predictions of the 6DSM, the
hadron collider phenomenology of (1,0) KK modes has not
been previously studied due to the large number of mechanisms
that contribute to production cross sections.
Our inclusion in CalcHEP of the interactions between (1,0) particles and
standard model ones has allowed us to compute the
cross sections for (1,0) pair production at the LHC and the Tevatron.
The large cross sections (of almost $10^4$ fb at the LHC, for masses
around 500 GeV) show that cascade decays with small branching fractions
may be observed, leading to a variety of discovery channels.
These are particularly interesting because of the
presence in the 4D effective theory of a spinless adjoint particle for each
standard model gauge group.
One-loop corrections to the level-1 masses tend to make these spinless adjoints lighter
than matter fields~\cite{Ponton:2005kx}
(the same result~\cite{Azatov:2007fa} applies to other models~\cite{Mohapatra:2002ug}),
forcing them to undergo tree-level 3-body decays,
emitting two standard model fermions each time. This results in significant
numbers of events with five or more leptons.
Multi-lepton events are not unique to the 6DSM, although the rates at which they occur in other theories are typically smaller.
In its 5D counterpart for example, it is
necessary to produce level-2 KK particles to give rise to long enough
cascades; the rate for such processes is suppressed because the particles
produced are heavier ($m\sim 2/R$) \cite{Datta:2005zs}.
Another theory leading to multi-lepton signatures involves a warped extra dimension with custodial
symmetry~\cite{Dennis:2007tv}, but leptons in that case come from decays of $W$ and $Z$,
whose branching fractions are small.
In supersymmetric models, cascade decays of squarks such as $\tilde{q}'_L \to \tilde{\chi}_2^\pm q
\to W^\pm \tilde{\chi}_2^0 q (\tilde{\chi}_1^\pm Z q)$ can also give
multi-lepton signatures at the cost of small production cross sections due to spin-statistics
as well as a small branching fraction for $\tilde{q}'_L \to \tilde{\chi}_2^\pm q$.
Nevertheless, it should be rather straightforward to differentiate
among these models if a sufficiently large number of multi-lepton events is
observed at the LHC. The 6DSM has specific predictions for many observables.
In this paper we analyzed the rates for events with 3, 4, 5 and 6 leptons,
as well as the relative rates for events with three leptons of one charge
and one lepton of opposite charge. Other observables, such as the relative rates
for events with different numbers of electrons and muons, may be analyzed using
the branching fractions for complete cascade decays (see the tables in Section 3).
Another peculiarity of the 6DSM cascade decays is that they lead with reasonably large
branching fractions to events with photons. This is a consequence of the
2-body decay at one loop of the hypercharge (1,0) vector boson, which competes
successfully with its tree-level 3-body decays.
Events with leptons, photons and missing energy are also predicted in certain
supersymmetric extensions of the standard model, but again, there are several
different channels, and we expect that if such events are seen in large numbers,
it will be possible to differentiate between models.
One may wonder how robust our predictions are against variations in the
mass spectrum, which may get contributions from operators localized at
the fixed points of the chiral square, as well as from higher-order QCD effects.
In the case of a single universal extra dimension,
deviations from the one-loop corrected mass spectrum lead to a variety of
phenomenological implications \cite{Cembranos:2006gt}.
Within the 6DSM, we expect that the rates for
multi-lepton events remain relatively large when the (1,0) mass spectrum
is perturbed.
This is due to the large number of particles involved in a typical decay chain,
with a standard model quark or lepton being emitted at each stage.
The total rates computed here
are sums over many such cascade decays of several (1,0) particles.
However, the events with photons depend entirely on the branching fractions of
a single particle, the hypercharge vector boson, and thus are less
generic for different mass spectra.
A more general approach would be to lift the constraints on the mass spectrum.
If excess events with leptons, missing energy and possibly photons
are observed in certain channels at the LHC, then the (1,0) masses
would be determined by comparing a large set of observed rates with the
6DSM predictions.
One should also keep in mind that the predictions of the 6DSM are not
limited to collider signals. For example,
an interesting
feature is that the LKP has spin 0, with various implications for dark matter
\cite{next}.
\bigskip
{\bf Acknowledgments:} \ We would like to thank Hsin-Chia Cheng, Konstantin Matchev and Eduardo Ponton for
helpful conversations.
Fermilab is operated by Fermi Research Alliance, LLC under Contract No.
DE-AC02-07CH11359 with the United States Department of Energy.
\section*{Appendix A: \ Feynman rules for (1,0) modes}
\addcontentsline{toc}{section}{Appendix A: \ Feynman rules for (1,0) modes}
\label{app:diagrams}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
In this section we show Feynman rules that are relevant
for QCD production of (1,0) particles at hadron colliders.
Corresponding vertices involving electroweak gauge bosons can be easily inferred from those given below.
The vector-like nature of KK fermions allows for the usual QCD coupling to standard model gluons
seen in the $G_\mu Q^{{({\bf 1})} }\overline{Q}{^{({\bf 1})} }$ vertex below.
\begin{figure*}[h]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1}
\normalsize
\large
\begin{center}
\begin{picture}(350,80)(40,20)
\Gluon(30,50)(70,50){3.2}{4}
\Vertex(70,50){2}
\ArrowLine(70,50)(100,80)
\ArrowLine(100,20)(70,50)
\Text(25,50)[r]{$G_\mu^{a}$}
\Text(130,90)[r]{$Q^{({\bf 1})} _{\pm}$}
\Text(130,10)[r]{$Q^{({\bf 1})} _{\pm}$}
\Text(130,50)[l]{$ = - i g_s \gamma^\mu T^a$}
\Photon(240,50)(280,50){3}{4}
\Vertex(280,50){2}
\ArrowLine(280,50)(310,80)
\ArrowLine(310,20)(280,50)
\Text(235,50)[r]{$G_\mu^{{({\bf 1})} a}$}
\Text(340,90)[r]{$Q^{({\bf 1})} _{\pm}$}
\Text(340,10)[r]{$Q^{(0,0)}_{\pm}$}
\Text(340,50)[l]{$ = - i g_s \gamma^\mu P_{\substack{L \\ R}} T^a$ }
\end{picture}
\end{center}
\end{figure*}
The interaction of a level-1 quark and a level-1 gluon is chiral and
so its vertex contains projection operators,
although the chirality of the incoming fermion is conserved.
\begin{figure*}[h]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1}
\normalsize
\large
\begin{center}
\begin{picture}(350,80)(40,20)
\DashLine(30,50)(70,50){3}
\Vertex(70,50){2}
\ArrowLine(70,50)(100,80)
\ArrowLine(100,20)(70,50)
\Text(25,50)[r]{$G_H^{{({\bf 1})} a}$}
\Text(130,90)[r]{$Q^{({\bf 1})} _{\pm}$}
\Text(130,10)[r]{$Q^{(0,0)}_{\pm}$}
\Text(130,50)[l]{$= - g_s P_{\substack{L\\ R}} T^a$}
\Gluon(240,50)(280,50){3.2}{4}
\Vertex(280,50){2}
\DashLine(280,50)(310,80){3}
\DashLine(310,20)(280,50){3}
\put(305,85){\vector(-1,-1){20}}
\put(305,15){\vector(-1,1){20}}
\Text(297,85)[r]{$p$}
\Text(297,15)[r]{$q$}
\Text(235,50)[r]{$G_\mu^{a}$}
\Text(345,90)[r]{$G_H^{{({\bf 1})} b}$}
\Text(345,10)[r]{$G_H^{{({\bf 1})} c}$}
\Text(340,50)[l]{$ = g_s f^{abc} \big ( p - q \big )^\mu$ }
\end{picture}
\end{center}
\end{figure*}
However, the interaction of a spinless adjoint $G_H^{{({\bf 1})} a}$ with fermions changes the chirality of
the incoming fermion since $G_H^{{({\bf 1})} a}$ is a scalar.
Note that the Feynman rules for standard-model gluons are fixed by gauge invariance.
The 3 and 4-point interactions involving only (1,0) vector bosons and zero-mode gluons
are identical to those in the standard model.
\begin{figure*}[h]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1}
\normalsize
\large
\begin{center}
\begin{picture}(350,80)(40,20)
\Vertex(70,50){2}
\Gluon(70,50)(35,80){3.2}{4}
\Gluon(35,20)(70,50){3.2}{4}
\DashLine(70,50)(100,78){3}
\DashLine(100,22)(70,50){3}
\Text(35,86)[r]{$G_\mu^{b}$}
\Text(35,14)[r]{$G_\nu^{d}$}
\Text(135,86)[r]{$G_H^{{({\bf 1})} c}$}
\Text(135,14)[r]{$G_H^{{({\bf 1})} e}$}
\Text(130,50)[l]{$ = -i g_s^2 g^{\mu\nu} (f^{abc} f^{ade} + f^{abe}f^{adc})$}
\end{picture}
\end{center}
\vspace*{.3cm}
\end{figure*}
\vspace{0.2cm}
\begin{figure*}[h!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1}
\normalsize
\large
\begin{center}
\begin{picture}(350,80)(40,20)
\Gluon(30,50)(70,50){3.2}{4}
\Vertex(70,50){2}
\Photon(70,50)(100,80){3}{4}
\Photon(100,20)(70,50){3}{4}
\Text(25,50)[r]{$G_\nu^{b}$}
\Text(130,90)[r]{$G_\mu^{{({\bf 1})} a}$}
\Text(130,10)[r]{$G_\rho^{{({\bf 1})} c}$}
\Text(130,50)[l]{$= g_s f^{abc} \big [ (k-p)_\lambda g_{\mu\nu} +
(p-q)_\mu g_{\nu\rho} + (q-k)_\nu g_{\mu\rho} \big ] $}
\put(88,83){\vector(-1,-1){20}}
\put(106,28){\vector(-1,1){20}}
\put(35,40){\vector(1,0){30}}
\Text(77,77)[r]{$k$}
\Text(105,45)[r]{$q$}
\Text(55,30)[r]{$p$}
\end{picture}
\end{center}
\end{figure*}
\vspace{0.2cm}
\begin{figure*}[h!]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1}
\normalsize
\large
\begin{center}
\begin{picture}(350,80)(40,20)
\Vertex(69,50){2}
\Gluon(70,50)(38,77){3.2}{4}
\Gluon(38,23)(70,50){3.2}{4}
\Photon(70,50)(97,77){3}{3}
\Photon(97,23)(70,50){-3}{3}
\Text(38,84)[r]{$G_\mu^{a}$}
\Text(38,12)[r]{$G_\nu^{b}$}
\Text(131,84)[r]{$G_\rho^{{({\bf 1})} c}$}
\Text(131,14)[r]{$G_\sigma^{{({\bf 1})} d}$}
\Text(130,70)[l]{$ = -i g_s^2 \big [ f^{abe} f^{cde} (g^{\mu\rho} g^{\nu\sigma} - g^{\mu\sigma} g^{\nu\rho} ) + f^{ace} f^{bde} (g^{\mu\nu} g^{\rho\sigma} - g^{\mu\sigma} g^{\nu\rho} ) $ }
\Text(150,40)[l]{$+ f^{ade} f^{bce} (g^{\mu\nu} g^{\rho\sigma} - g^{\mu\rho} g^{\nu\sigma} ) \big ] $ }
\end{picture}
\end{center}
\end{figure*}
\vspace{0.35cm}
\section*{Appendix B: \ One-loop 2-body decays of (1,0) bosons}
\addcontentsline{toc}{section}{Appendix B: \ One-loop 2-body decays of (1,0) bosons}
\renewcommand{\theequation}{B.\arabic{equation}}
\setcounter{equation}{0}
We compute here the
amplitude for the process
$B_\nu^{{({\bf 1})} } \rightarrow B_H^{{({\bf 1})} } \gamma $, which proceeds through
one-loop diagrams with KK fermions running in the loop.
The couplings of the $B_\nu^{{({\bf 1})} }$ and $B_H^{{({\bf 1})} }$ bosons to the KK modes of a
6D chiral fermion $F_+$ are given by
\begin{eqnarray} \hspace*{-2cm}
\mathcal{L} \supset \frac{1}{4} g^\prime Y_{F_+} \overline{F}_+^{(j,k)} \hspace*{-0.2cm}
&& \left[ B_\nu^{{({\bf 1})} } \gamma^\nu \left( P_L \, d_{00}^{j,k;j^\prime\!,k^\prime}
- P_R \, d_{10}^{j,k;j^\prime\!,k^\prime}r_{jk}^*r_{j^\prime,k^\prime}\right) \right.
\nonumber \\ [2mm]
&& \left. \
- \, i B_H^{{({\bf 1})} }\left( P_R \, d_{01}^{j,k;j^\prime\!,k^\prime} r_{j^\prime,k^\prime}
- P_L \, d_{03}^{j^\prime \!,k^\prime\!;j,k}r_{jk}^*\right) \right] F_+^{j^\prime,k^\prime}
~.
\end{eqnarray}
Here we have defined
\begin{eqnarray}
d_{n n^\prime}^{j,k;j^\prime\!,k^\prime} & = & (-1)^{n} \delta_{k^\prime \!,k}
\left(\delta_{j^\prime \!, j-1} + (-1)^{n^\prime}\delta_{j^\prime \!, j+1} \right)
+ (-1)^{n} \delta_{j^\prime \!,j}\left( \delta_{k^\prime \!, k+1}
+ (-1)^{n^\prime}\delta_{k^\prime \!, k-1} \right)
\nonumber \\ [2mm]
&& + \, i^{n^\prime - n} \delta_{j,1}\delta_{k^\prime \!,0}\delta_{j^\prime \!,k}
+ i^{n+ 2n^\prime} \delta_{j^\prime \!,1}\delta_{k,0}\delta_{k^\prime \!,j} ~,
\end{eqnarray}
where $r_{j,k}$ are complex phases,
\begin{equation}
r_{j,k} = \frac{j+i k}{\sqrt{j^2+k^2}}
\end{equation}
and $Y_F$ is the hypercharge of the fermion, normalized to $-1$ for lepton doublets.
In the case of fermions with 6D chirality $-$, which contain right-handed zero modes,
the same formulas apply with the
$P_L$ and $P_R$ chirality projection operators interchanged.
\begin{figure}[t]
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1.}
\normalsize
\begin{center}
\begin{picture}(110,80)(40,20)
\Photon(5,40)(50,40){5}{5}
\Photon(100,0)(170,0){3}{5}
\DashLine(100,80)(170,80){8}
\Line(50,40)(100,80)
\Line(100,0)(50,40)
\Line(100,0)(100,80)
\Text(2,47)[r]{$B_\nu^{{({\bf 1})} }$}
\Text(180,80)[l]{$B_H^{{({\bf 1})} }$}
\Text(182,0)[c]{$A_\mu$}
\Text(83,70)[r]{$ F^{(j^\prime,k^\prime)}$}
\Text(83,8)[r]{$ F^{(j,k)}$}
\Text(105,40)[l]{$ F^{(j,k)}$}
\put(-20,25){\vector(1,0){30}}
\Text(5,15)[c]{$p$}
\put(140,10){\vector(1,0){30}}
\Text(165,20)[c]{$p-p^\prime$}
\put(140,70){\vector(1,0){30}}
\Text(165,60)[c]{$p^\prime$}
\put(76,27){\vector(-1,1){4}}
\Line(85,20)(75,28)
\Text(83,30)[c]{$l$}
\end{picture}
\end{center}
\vspace{0.5cm}
\caption{Dimension-5 operator induced by fermion loops.}
\label{fig:one-loop}
\end{figure}
Dimension-5 operators coupling a (1,0) vector boson to a (1,0)
spinless adjoint and a standard-model gauge boson are induced
at one loop by the diagram in Figure \ref{fig:one-loop}, with fermion KK modes running
in the loop.
The contribution of a fermion $F_+$ to the
amplitude for $B_\nu^{{({\bf 1})} } \rightarrow B_H^{{({\bf 1})} } \gamma_\mu$
is given by
\begin{equation}
{\cal M}\left( B_\nu^{({\bf 1})} \rightarrow B_H^{({\bf 1})} \gamma_\mu \right)_{F_+} =
- \frac{1}{4} \left(g^{\prime} \frac{Y_{F_+}}{2} \right)^{\! 2} e\, Q_{F_+}\,
\varepsilon_\mu^*(p-p^\prime) \, \varepsilon_\nu (p) \, I_{F_+}^{\mu\nu (j,k;j^\prime \!, k^\prime)} ~,
\end{equation}
where
\begin{equation}
I_{F_+}^{\mu\nu (j,k;j^\prime \!,k^\prime)} = \int \!\!\frac{d^4 l}{(2\pi)^4}
{\rm Tr}
\frac{m_F^{j,k;j^\prime \!, k^\prime}
\left[l\!\!/ \gamma^\mu + \gamma^\mu ( l\!\!/ + p\!\!\!/ - p^\prime \!\!\!\!/ )\right]
(l\!\!/ + p\!\!\!/)
- m_F^{j^\prime \!, k^\prime \!;j,k} l\!\!/\gamma^\mu
( l\!\!/ + p\!\!\!/ - p^\prime \!\!\!\!/) }
{\left(l^2 - M^2_{F^{(j,k)}}\right)\left[(l+p-p^\prime)^2 - M^2_{F^{(j,k)}}\right]
\left[(l+p)^2 - M^2_{F^{(j^\prime,k^\prime)}}\right]} \gamma^\nu\gamma_5
\end{equation}
and
\begin{equation}
m^{j,k;j^\prime \!, k^\prime}_F = M_{F^{(j,k)}} \, {\rm Re} \left[r_{jk}
\left(d_{00}^{j,k;j^\prime \!, k^\prime}
d_{01}^{j^\prime\!,k^\prime\!;j, k}
- d_{10}^{j^\prime\!,k^\prime\!;j, k}
d_{01}^{j,k;j^\prime\!, k^\prime} \right)\right] ~.
\end{equation}
After integrating over the loop momentum $l$, and summing over fermions, we find the
amplitude
\begin{equation}\label{equ:3bodydecay}
{\cal M}\left( B_\nu^{({\bf 1})} \rightarrow B_H^{({\bf 1})} \gamma_\mu \right) =
- \frac{g^{\prime 2} e}{8 \pi^2} \epsilon^{\mu\nu\alpha\beta}
\frac{\varepsilon_\mu^*(p-p^\prime) \varepsilon_\nu (p) p_\alpha p^\prime_\beta }{M_{B_\nu^{({\bf 1})} }^2
- M_{B_H^{({\bf 1})} }^2} \sum_F \sigma_F \, \left(\frac{Y_F}{2}\right)^2 \, Q_F \, {\cal E}_F ~,
\end{equation}
where $\sigma_F = \pm 1$ when $F$ has 6D chirality $\pm$, and
\begin{equation}
{\cal E}_F = \sum_{j,k;j^\prime\!, k^\prime}
m_F^{j,k;j^\prime \!, k^\prime} J_F^{j,k;j^\prime\!, k^\prime} ~,
\end{equation}
with $J_F$ given by an integral over a Feynman parameter:
\begin{equation}
J_F^{j,k;j^\prime\!, k^\prime} = \int_0^1 \frac{dx }{x}
\ln \left( 1 + \frac{x(1-x)
\left(M_{B_\nu^{({\bf 1})} }^2 - M_{B_H^{({\bf 1})} }^2\right)}
{(1-x)M_{F^{(j,k)}}^2 + x M_{F^{(j^\prime\!,k^\prime)}}^2 - x(1-x) M_{B_\nu^{({\bf 1})} }^2} \right) ~.
\end{equation}
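The integrand of $J_F$ is finite as $x \to 0$ (the logarithm vanishes linearly in $x$), so the integral is straightforward to evaluate numerically. A midpoint-rule sketch (the mass values below are illustrative placeholders, not the 6DSM one-loop spectrum):

```python
import math

def J_F(M_Bnu, M_BH, M_jk, M_jpkp, n=2000):
    """Midpoint-rule evaluation of the Feynman-parameter integral J_F.
    All masses are placeholder inputs, for illustration only."""
    dM2 = M_Bnu**2 - M_BH**2
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n   # midpoints avoid the (removable) x = 0 endpoint
        denom = (1 - x) * M_jk**2 + x * M_jpkp**2 - x * (1 - x) * M_Bnu**2
        total += math.log(1.0 + x * (1 - x) * dM2 / denom) / x
    return total / n

# Small (1,0) splitting, with a heavier (1,1) fermion in the loop:
print(J_F(M_Bnu=501.0, M_BH=500.0, M_jk=510.0, M_jpkp=720.0))
```

Note that $J_F$ vanishes identically when the two boson masses are equal, since the logarithm then vanishes for all $x$.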
The $m^{j,k;j^\prime \!, k^\prime}$ quantities vanish unless the set of KK numbers
$(j,k;j^\prime \!, k^\prime)$ is given by (1,0;1,1), (1,1;1,0) or (1,0; 0,0).
This is a consequence of the vectorlike nature of the fermion higher KK modes.
Therefore,
\begin{equation}
{\cal E}_F
= M_{F^{(1,0)}}\left( 2 J_F^{1,0; 0,0} + J_F^{1,0;1,1} \right)
+ \sqrt{2} M_{F^{(1,1)}} J_F^{1,1;1,0} ~.
\label{ef}
\end{equation}
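The selection rule quoted above, namely that $m^{j,k;j^\prime\!,k^\prime}$ is nonzero only for $(1,0;1,1)$, $(1,1;1,0)$ and $(1,0;0,0)$, can be checked by transcribing the definitions of $d_{nn^\prime}$ and $r_{j,k}$ and scanning the lowest KK levels (a sketch; the overall normalization of $m$ is not tracked):

```python
import math

def delta(a, b):
    return 1 if a == b else 0

def d(n, np, j, k, jp, kp):
    # Direct transcription of the definition of d_{n n'}^{j,k;j',k'}
    return ((-1)**n * delta(kp, k) * (delta(jp, j - 1) + (-1)**np * delta(jp, j + 1))
            + (-1)**n * delta(jp, j) * (delta(kp, k + 1) + (-1)**np * delta(kp, k - 1))
            + 1j**(np - n) * delta(j, 1) * delta(kp, 0) * delta(jp, k)
            + 1j**(n + 2 * np) * delta(jp, 1) * delta(k, 0) * delta(kp, j))

def m_reduced(j, k, jp, kp):
    # Combination whose real part enters m^{j,k;j',k'}; the overall mass
    # factor M_{F^{(j,k)}} is irrelevant for the selection rule.
    r = (j + 1j * k) / math.hypot(j, k)
    return (r * (d(0, 0, j, k, jp, kp) * d(0, 1, jp, kp, j, k)
                 - d(1, 0, jp, kp, j, k) * d(0, 1, j, k, jp, kp))).real

states = [(0, 0)] + [(j, k) for j in (1, 2) for k in range(j + 1)]
nonzero = {(s, sp) for s in states for sp in states
           if s != (0, 0) and abs(m_reduced(*s, *sp)) > 1e-9}
print(sorted(nonzero))   # only (1,0;1,1), (1,1;1,0) and (1,0;0,0) survive
```

The generic nearest-neighbor transitions cancel pairwise in this combination; only the terms generated at the fixed points (those with the explicit $\delta_{j,1}$ or $\delta_{j^\prime\!,1}$ factors) spoil the cancellation, which is why only transitions involving the (1,0) and (1,1) levels survive.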
Note that ${\cal E}_F$ depends only on the (1,0) masses and on the masses of the (0,0) and
(1,1) fermions. The mass corrections for (1,1) fermions,
$\big\{Q^3_+, T_-,Q^{1,2}_+,U^{1,2}_-,D^{1,2,3}_-,L_+$ and $E_-\big\}$,
are given by $\sqrt{2}/R$ multiplied by the coefficients
$\left\{1.33,1.31,1.31,1.27,1.26,
1.05,1.02\right\}$ respectively \cite{Burdman:2006gy},
ignoring electroweak symmetry breaking effects.
Note also that in the limit that all the fermions at each
KK level are degenerate, ${\cal E}_F$ becomes independent of $F$ and so can be taken out of the sum in
Eq.~(\ref{equ:3bodydecay}), which then vanishes identically by anomaly cancellation.
This completes the computation of the amplitude for
$B_\nu^{{({\bf 1})} } \rightarrow B_H^{{({\bf 1})} } \gamma $, which determines the coefficient of
the dimension-5 operator shown in Eq.~(\ref{operator}), and the decay width of
$B_\nu^{{({\bf 1})} }$ shown in Eq.~(\ref{oneloopdecay}).
\bigskip
\section*{Appendix C: \ Tree-level 3-body decays of (1,0) bosons}
\addcontentsline{toc}{section}{Appendix C: \ Tree-level 3-body decays of (1,0) bosons}
\label{app:3body}
\renewcommand{\theequation}{C.\arabic{equation}}
\setcounter{equation}{0}
\begin{figure*}[t]
\begin{center}
\unitlength=1.0 pt
\SetScale{1.0}
\SetWidth{1}
\begin{picture}(360,90)(25,30)
\DashLine(50.0,72.5)(90.0,72.5){3}
\Photon(50.0,68.0)(90.0,68.0){2.5}{6}
\Text(95.0,70.0)[r]{$\bullet$}
\ArrowLine(90.0,70.0)(130.0,90.0)
\ArrowLine(130.0,50.0)( 90.0,70.0)
\ArrowLine(170.0,30.0)(130.0,50.0)
\DashLine(170.0,72.0)(130.0,51.0){3}
\Photon(169.0,67)(133.0,48.0){2.5}{6}
\Text(134.0,49.0)[r]{$\bullet$}
\Text( 45.0,73.0)[r]{$A_2$}
\Text(190.0,70.0)[r]{$A_1$}
\Text(110.0,50.0)[r]{$F$}
\Text(140.0,95.0)[r]{$f$}
\Text(180.0,25.0)[r]{$\bar{f}$}
\Text( 210,50)[r]{+}
\DashLine(230.0,72.5)(270.0,72.5){3}
\Photon(230.0,68.0)(270.0,68.0){2.5}{6}
\Text(275.0,70.0)[r]{$\bullet$}
\ArrowLine(270.0,70.0)(310.0,90.0)
\ArrowLine(310.0,90.0)(350.0,110.0)
\ArrowLine(310.0,50.0)(270.0,70.0)
\DashLine(350.0,73.5)(315.0,91.0){3}
\Photon(350.5,69.0)(310.0,89.0){2.5}{6}
\Text(316.0,90.0)[r]{$\bullet$}
\Text(225.0,73.0)[r]{$A_2$}
\Text(365.0,110.0)[r]{$f$}
\Text(370.0,65.0)[r]{$A_1$}
\Text(297.0,90.0)[r]{$F$}
\Text(320.0,50.0)[r]{$\bar{f}$}
\end{picture}
\end{center}
\caption{The diagrams for 3-body decay of (1,0) particles.
$A_2$ and $A_1$ are heavy bosons of spin 0 or 1, $F$ is a heavier fermion,
and $f$ is a much lighter fermion.}
\label{fig:diagrams}
\end{figure*}
In this Appendix we compute the width for 3-body decays of (1,0) bosons.
Let us consider a generic 3-body decay of a boson
$A_2$ of mass $M_2$ into a boson $A_1$ of mass $M_1$
and a fermion-antifermion pair $f \bar{f}$,
via an off-shell fermion $F$, of mass $M_F > M_2 > M_1$.
There are two tree-level diagrams contributing to the process
$A_2 \to (F^\ast f) \to A_1 f \bar{f}$, as shown in Fig.~\ref{fig:diagrams}.
For simplicity, we assume that the final-state fermions are massless.
The decay width is given by
\begin{equation}
\Gamma (A_2 \to A_1 f \bar{f})
= \frac{1}{64\pi^3 M_2}
\int_{0}^{\mu_\circ} dE_f \int_{\mu_\circ-E_f}^{E_{\bar{f}}^{\rm max}} dE_{\bar{f}} \,
\, \overline{ \left|{\cal M}\right|^2 } ~,
\end{equation}
where ${\cal M}$ is the matrix element, $E_f$ and $E_{\bar{f}}$ are the
energies of the final-state fermions in the rest frame of $A_2$, and we defined
\begin{equation}
\mu_\circ \equiv \frac{M_2^2 - M_1^2}{2 M_2} \, .
\label{mu21}
\end{equation}
For a fixed $E_f$, the maximum value of $E_{\bar{f}}$ is
\begin{eqnarray}
E_{\bar{f}}^{\rm max} &=& \frac{ \mu_\circ - E_f }{ 1- 2E_f/M_2} \, .
\label{Emax}
\end{eqnarray}
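Eq.~(\ref{Emax}) corresponds to the configuration in which the two massless fermions are emitted back to back. A quick numerical sketch confirms that at $E_{\bar{f}} = E_{\bar{f}}^{\rm max}$ four-momentum conservation reproduces $M_1$ exactly (the mass values below are arbitrary illustrations):

```python
# Check of the E_fbar^max boundary: with massless f, fbar, the maximum
# antifermion energy at fixed E_f occurs for cos(theta_f,fbar) = -1, where
# momentum conservation must reproduce the A_1 mass exactly.

def efbar_max(E_f, M2, M1):
    mu0 = (M2**2 - M1**2) / (2.0 * M2)
    return (mu0 - E_f) / (1.0 - 2.0 * E_f / M2)

def m1sq_back_to_back(E_f, E_fb, M2):
    # invariant mass^2 of A_1 = (P_2 - P_f - P_fbar)^2 with cos(theta) = -1
    return M2**2 - 2.0 * M2 * (E_f + E_fb) + 4.0 * E_f * E_fb

M2, M1 = 1000.0, 900.0          # illustrative masses
mu0 = (M2**2 - M1**2) / (2.0 * M2)
for frac in (0.1, 0.5, 0.9):
    E_f = frac * mu0
    assert abs(m1sq_back_to_back(E_f, efbar_max(E_f, M2, M1), M2)
               - M1**2) < 1e-6 * M1**2
```

At $E_f = 0$ the upper limit reduces to $\mu_\circ$, so the inner integration region closes at both endpoints of the $E_f$ range.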
Let us first consider the case where both $A_1$ and $A_2$ have spin 0
(we label them by $A_{1H}$ and $A_{2H}$ in that case) and have pseudo-scalar
couplings to the fermions:
\begin{equation}
\left( g_1 A_{1 H} + g_2 A_{2 H} \right) i \overline{F}_L f_R + {\rm H.c.} ~,
\label{couplings-ps}
\end{equation}
where $g_{1,2}$ are real dimensionless couplings.
The matrix element squared, summed over the spins of $f$ and $\bar{f}$, is given by
\begin{equation}
\overline{\left|{\cal M}\right|^2} \left( A_{2H} \to f_R \bar{f}_R A_{1H} \right)
= 2\left( g_1 g_2 \right)^2
\Big [ 2 (P_f \cdot P_1)(P_{\bar{f}} \cdot P_1)
- M_2^2 (P_f \cdot P_{\bar{f}}) \Big ] \, \Delta^2 ~,
\end{equation}
where $P_1$, $P_f$ and $P_{\bar{f}}$ are the 4-momenta of $A_{1H}$, $f$
and $\bar{f}$, respectively. The quantity
\begin{equation}
\Delta = \frac{1}{ (P_1 + P_f)^2 - M_F^2} -
\frac{1}{ (P_1 + P_{\bar{f}})^2 - M_F^2} \, ,
\label{Delta}
\end{equation}
accounts for the propagators of the off-shell fermion in the two diagrams
of Fig.~\ref{fig:diagrams}.
The two diagrams have opposite sign, resulting in the sign between the two terms
in $\Delta$, because of the different momentum flow through the intermediate fermion line.
In the center-of-mass frame, the width becomes
\begin{equation}
\Gamma (A_{2H} \to A_{1H} f_R \bar{f}_R)
= \frac{\left( g_1 g_2 \right)^2 }{128\pi^3} M_2 \, {\cal I}_+(M_2, M_1, M_F)
\label{AHtoAH}
\end{equation}
where we defined
\begin{equation}
{\cal I}_\pm(M_2, M_1, M_F) =\int_{0}^{\mu_\circ} dE_f \int_{\mu_\circ-E_f}^{E_{\bar{f}}^{\rm max}}
\! dE_{\bar{f}} \; \frac{2E_f E_{\bar{f}} \pm M_2 \left(\mu_\circ - E_f-E_{\bar{f}}\right) }
{M_2^2(\mu_{\star} + E_f)^2 ( \mu_{\star} + E_{\bar{f}})^2} \left(E_f-E_{\bar{f}}\right)^2 ~.
\label{fpm}
\end{equation}
The function ${\cal I}_-$ is introduced for later convenience,
$\mu_\circ$ and $E_{\bar{f}}^{\rm max}$ are given in Eqs.~(\ref{mu21}) and (\ref{Emax}), respectively,
and
\begin{equation}
\mu_{\star} \equiv \frac{M_F^2 - M_2^2}{2 M_2} ~~ .
\end{equation}
Let us now study the case where $A_2$ has spin 1
(we label it by $A_{2\mu}$ in that case) and
couples to one chirality of the fermions:
\begin{equation}
g_2 A_{2 \mu} \overline{F}_R \gamma^\mu f_R + {\rm H.c.} ~.
\end{equation}
The matrix element squared, averaged over the polarizations of $A_{2 \mu}$
and summed over the spins of $f$ and $\bar{f}$, is given by
\begin{equation}
\overline{\left|{\cal M}\right|^2} \left( A_{2\mu} \to f_R \bar{f}_R A_{1H} \right)
= \frac{2}{3} \left( g_1 g_2 \right)^2
\left(\frac{M_F}{M_2}\right)^{\! 2} \Big [ 2 (P_f \cdot P_2)( P_{\bar{f}} \cdot P_2)
+ M_2^2 P_f \cdot P_{\bar{f}} \Big ] \Delta^2 ~,
\end{equation}
where $P_2$ is the 4-momentum of $A_{2\mu}$.
Again, the two diagrams have opposite signs, resulting in the form of $\Delta$
given in Eq.~(\ref{Delta}).
However, the sign difference in this case is due to the pseudo-scalar coupling.
The width in the center-of-mass frame is given by
\begin{equation}
\Gamma (A_{2\mu} \to A_{1H} f_R \bar{f}_R)
= \frac{\left( g_1 g_2 \right)^2 }{384\pi^3} \frac{M_F^2}{M_2} \, {\cal I}_-(M_2, M_1, M_F) ~,
\label{AmutoAH}
\end{equation}
where ${\cal I}_-$ is the phase-space integral shown in Eq.~(\ref{fpm}).
The only other case relevant for the decays of the (1,0) particles
discussed in Section~\ref{sec:decays} is that where $A_2$ has spin 0 and pseudo-scalar couplings
[see Eq.~(\ref{couplings-ps})], while $A_1$ has spin 1 and a coupling
\begin{equation}
g_1 A_{1 \mu} \overline{F}_R \gamma^\mu f_R + {\rm H.c.} ~.
\end{equation}
The matrix element squared, summed over the polarizations of $A_{1 \mu}$
and the spins of $f$ and $\bar{f}$, is given in this case by
\begin{equation}
\overline{\left|{\cal M}\right|^2} \left( A_{2H} \to f_R \bar{f}_R A_{1\mu} \right)
= 2 \left( g_1 g_2 \right)^2
\left(\frac{M_F}{M_1}\right)^{\! 2} \Big [ 2 (P_f \cdot P_1)( P_{\bar{f}} \cdot P_1)
+ M_2^2 P_f \cdot P_{\bar{f}} \Big ] \Delta^2 ~,
\end{equation}
where $\Delta$ is defined in Eq.~(\ref{Delta}).
The width in the center-of-mass frame is given by
\begin{equation}
\Gamma (A_{2H} \to A_{1\mu} f_R \bar{f}_R)
= \frac{\left( g_1 g_2 \right)^2 }{128\pi^3} M_2 \frac{M_F^2}{M_1^2} \,
\left[ \left(1 - \frac{2\mu_\circ}{M_2}\right) {\cal I}_-(M_2, M_1, M_F)
+ \frac{2\mu_\circ}{M_2}{\cal I}_+(M_2, M_1, M_F) \right] ~.
\label{AHtoAmu}
\end{equation}
If the heavy particles are approximately degenerate,
which is the case for the (1,0) particles studied in this paper, then
$\mu_\circ \ll M_2$ and $\mu_{\star} \ll M_2$ (which implies $\mu_\circ\approx M_2-M_1$ and $\mu_{\star}\approx M_F-M_2$),
and the double integrals of Eq.~(\ref{fpm}) may be performed analytically:
\begin{eqnarray}
\hspace*{-7em}
{\cal I}_+ (M_2, M_1, M_F) & = &
\frac{-8}{M_2^3}\left[ \rule{0mm}{5mm} \mu_{\star} \frac{\mu_\circ + \mu_{\star}}{\mu_\circ + 2 \mu_{\star}}
\left( \mu_\circ^2 + 5 \mu_\circ \mu_{\star} + 5 \mu_{\star}^2 \right)
\ln \left( 1 + \frac{\mu_\circ}{\mu_{\star}}\right) \right.
\nonumber \\ [0.7em]
&& \; \; \; - \left.
\frac{\mu_\circ}{12} \left( \mu_\circ^2 +30 \mu_\circ \mu_{\star} + 30 \mu_{\star}^2 \right) \rule{0mm}{5mm}\right]
\left[1+ O\left(\frac{\mu_\circ}{M_2} , \frac{\mu_{\star}}{M_2} \right) \right]~.
\eear
A simple relation between the ${\cal I}_\pm$ functions holds at leading order
in $1/M_2$:
\begin{equation}
{\cal I}_- = 3 {\cal I}_+ \left[1+ O\left(\frac{\mu_\circ}{M_2} , \frac{\mu_{\star}}{M_2} \right) \right]~.
\end{equation}
It is also useful to note that for $\mu_\circ \ll M_2$
{\it and} $\mu_\circ \ll \mu_{\star}$,
\begin{eqnarray}
{\cal I}_+(M_2, M_1, M_F) & = & \frac{\mu_\circ^7}{105 \, M_2^3 \mu_{\star}^4}
\left[1 -2 \frac{\mu_\circ}{\mu_{\star}} + \frac{\mu_\circ}{M_2}
+ O\left(\frac{\mu_\circ^2}{\mu_{\star}^2}, \, \frac{\mu_\circ^2}{M_2^2} \right)\right] ~,
\nonumber \\ [0.7em]
{\cal I}_-(M_2, M_1, M_F) & = & \frac{\mu_\circ^7}{35 \, M_2^3 \mu_{\star}^4}
\left[1 -2 \frac{\mu_\circ}{\mu_{\star}} + \frac{5\mu_\circ}{3M_2}
+ O\left(\frac{\mu_\circ^2}{\mu_{\star}^2}, \, \frac{\mu_\circ^2}{M_2^2} \right)\right] ~.
\label{fpe}
\eear
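As a numerical cross-check (ours, not part of the original text; the function names are illustrative), the closed form for ${\cal I}_+$ given above can be evaluated against the leading term of Eq.~(\ref{fpe}) in the regime $\mu_\circ \ll \mu_\star \ll M_2$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # high precision: the closed form involves a severe cancellation

def I_plus_closed(mu0, mus, M2):
    """Closed-form I_+ from the display above, with the [1 + O(mu/M2)] bracket set to 1."""
    A = mus * (mu0 + mus) / (mu0 + 2 * mus) \
        * (mu0**2 + 5 * mu0 * mus + 5 * mus**2) * (1 + mu0 / mus).ln()
    B = mu0 / 12 * (mu0**2 + 30 * mu0 * mus + 30 * mus**2)
    return Decimal(-8) / M2**3 * (A - B)

def I_plus_leading(mu0, mus, M2):
    """Leading term of Eq. (fpe): mu0^7 / (105 M2^3 mus^4)."""
    return mu0**7 / (105 * M2**3 * mus**4)

M2, mus = Decimal(1000), Decimal(1)              # mu_star << M2
mu0a, mu0b = Decimal("0.001"), Decimal("0.002")  # mu_0 << mu_star

# the closed form reproduces the leading asymptotic term
ratio = I_plus_closed(mu0a, mus, M2) / I_plus_leading(mu0a, mus, M2)
assert abs(ratio - 1) < Decimal("0.01")

# doubling mu_0 scales the integral by roughly 2^7 = 128
scale = I_plus_closed(mu0b, mus, M2) / I_plus_closed(mu0a, mus, M2)
assert abs(scale / 128 - 1) < Decimal("0.01")
```

The second assertion makes the $\mu_\circ^7$ suppression explicit: doubling the mass splitting enhances the phase-space integral, and hence the width, by roughly $2^7$.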
This very strong dependence on $\mu_\circ \approx M_2-M_1$
is somewhat surprising. The phase-space integrals of Eq.~(\ref{fpm}) give three powers of $\mu_\circ$,
and the matrix element squared appears at first sight to give only one more power
of $\mu_\circ$. However, the relative sign of the two diagrams forces a cancellation
of the leading term within $\Delta$ [see Eq.~(\ref{Delta})], so that $\Delta^2$
gives the $\left(E_f-E_{\bar{f}}\right)^2$ factor in Eq.~(\ref{fpm}),
which accounts for two more powers of $\mu_\circ$.
Furthermore, the integration over $E_{\bar{f}}$ cancels
the leading term in the $\mu_\circ$ expansion of the numerator of ${\cal I}_\pm$.
The resulting dependence on the 7th power of $\mu_\circ$ implies that
the decay width is extremely suppressed, if $A_2$ and $A_1$ are more
degenerate than the $F-A_2$ pair.
The decay widths given in Eqs.~(\ref{AHtoAH}) and (\ref{AHtoAmu}) are used in
Section~\ref{sec:decays} for computing the branching fractions of the
spinless adjoints, while the decay width of Eq.~(\ref{AmutoAH}) determines
the branching fractions of the (1,0) hypercharge vector boson.
% cond-mat/0703821
\section{Introduction}
\renewcommand{\theequation}{1.\arabic{equation}}
The handling of small liquid droplets in contact with a gas phase on top of a solid substrate,
or of liquid lenses at the
interface of two other fluid phases plays an important role in the context of microfluidics
(see, e.g., Refs. \cite{Mifl1,Mifl2,Mifl3,Mifl4}). For
droplets or lenses with linear extensions in the nanometer regime a proper
thermodynamic description requires accounting not only for
the bulk and surface properties but also for the special properties of matter in the vicinity of three-phase contact. In
order to capture the contribution of the three-phase-contact
region to the relevant thermodynamic potential, a certain excess contribution to the appropriate free energy that scales
with the linear extension of the system is associated with the three-phase-contact line, defined by a common intersection of
the interfaces meeting in the region of three-phase contact.
\par
The liquid lens is an example of an inhomogeneous system in which
three thermodynamic phases, say $\alpha$, $\beta$, and $\gamma$, coexist in a (constrained) equilibrium.
The thermodynamic coexistence of bulk phases is provided by specific choices of the thermodynamic state
of the system whereas their spatial coexistence follows from appropriate boundary conditions.
We consider the following set-up. A drop of the non-wetting $\beta$ phase is placed at the interface between
two other fluids. A microscopically thin equilibrium wetting film of $\beta$ phase is formed at the
interface and for suitably chosen substances and conditions a surplus of $\beta$ phase forms a lens
at the $\alpha$--$\gamma$ interface.
(In principle the exploration of the configuration space allows
for shifting the lens laterally along the interface; however, for the purposes of the present paper one may
disregard this degree of freedom.)
For such a system two basic scenarios are possible.
The lens ($\beta$ phase) can exchange matter with the surrounding phases, so that the chemical
potentials in all phases are equal. There are cases in which the lens is in a stable equilibrium
with the surrounding phases; in other cases the lens is unstable but could be stabilized by imposing
suitable constraints.
Alternatively, one may consider a nonvolatile liquid ($\beta$ phase), i.e., one constrains the volume of the liquid
while chemical equilibrium is not attained.
\\
In addition to the concept of
interfaces separating the bulk phases and of interfacial or surface tensions, the systematic thermodynamic description of such
systems leads to considering the contact line ${\cal{L}}_{\alpha\beta\gamma}$ along which the three interfaces meet. The
line tension $\tau$ is attributed to the
contact line ${\cal{L}}_{\alpha\beta\gamma}$. It is defined as
the ``line contribution'' to the grand canonical free energy $\Omega$ of the inhomogeneous system per unit length of the
contact line \cite{Gib1,Lang1,RowWid,Wid1,Wid5} which is the leading term left after subtracting volume
and surface or interface contributions from $\Omega$, i.e.,
\begin{multline}
\frac{\Omega -
(\sum\limits_{\kappa=\alpha,\beta,\gamma} V_{\kappa} \,\,\omega_{\kappa}) - A_{\alpha\beta}\,\sigma_{\alpha\beta} -
A_{\alpha\gamma} \,\sigma_{\alpha\gamma} -A_{\beta\gamma}\,\sigma_{\beta\gamma}}{L_{\alpha\beta\gamma}}
\\
= \, \tau \, + \, {\mathrm{ s.l.t.}} \quad.
\label{eta}
\end{multline}
With s.l.t. we denote subleading terms which vanish for $L_{\alpha\beta\gamma} \longrightarrow \infty$.
The symbol $\omega_{\kappa}$ denotes the grand canonical free energy density of the homogeneous bulk phase
$\kappa$ ($\kappa =\alpha, \beta, \gamma$), $V_{\kappa}$ is the volume assigned to this phase, $A_{\kappa\kappa'}$ is
the area of the $\kappa$--$\kappa'$ interface, $\sigma_{\kappa\kappa'}$ is the corresponding interfacial tension, and
$L_{\alpha\beta\gamma}$ is the length of the three-phase-contact line. The definition of $\tau$ in Eq. (\ref{eta})
refers to a reference state in which
uniform bulk phases are extrapolated right up to the interfaces, and analogously the interface or surface properties of
laterally homogeneous interfaces or surfaces are extrapolated right up to the contact line ${\cal{L}}_{\alpha\beta\gamma}$. In
Eq. (\ref{eta}) we do not take into account contributions to $\Omega$ related to the presence of walls enclosing the whole system;
the only inhomogeneities of the system which
are relevant in the present analysis are those related to the spatial coexistence of the three thermodynamic phases $\alpha$, $\beta$, and
$\gamma$. The above decomposition allows one to calculate the line tension for a given thermodynamic system provided
the quantities on the lhs of Eq. ({\ref{eta}}) are determined in separate preceding steps involving similar considerations of suitable
thermodynamic limits.
\par
A different type of inhomogeneous fluid system is a sessile drop on a solid substrate. In this case only two
thermodynamic phases
coexist whereas the substrate acts as an inert spectator phase. In such cases the three-phase-contact line
corresponds to the region where the interface between two coexisting thermodynamic phases meets the substrate
\cite{Pom1,JoGe1,PomVan1,Gen01,D01,Wid2t,Ind2t,Wid3,GD1,BD,Dob2,Dob3,White1,Wayn1,Dus1,AmirNeu}.
In the literature the term line energy or line elasticity is also frequently used in connection with an extra surface energy
associated with the deformation of the three-phase-contact line as the result, e.g., of interactions with
surface defects (see, e.g., Refs. \cite{JoGe1,PomVan1,Gen01,Raph1t,Pom1}).
This quantity has to be clearly distinguished from the
line tension as introduced via Eq. ({\ref{eta}}).
\par
In the literature the notion of line tension is also used to describe the one-dimensional interface of two coexisting, intrinsically
two-dimensional phases such as liquid- and vapor-like phases in Langmuir--Blodgett films. This, however, corresponds
to the lower-dimensional version of an interfacial tension and not to the coexistence of three phases as considered here.
\\
There is a growing body of literature describing both experimental and theoretical investigations of the line tension
and already a number of reviews have been published on that subject \cite{Ind2t,Drelich,AmirNeu,Tosh1e,Rusanov-0,Rusanov-2}.
Experimental investigations were carried out for drops on solid substrates
\cite{Law2e,Wang1,Pom2e,Pom3,Bue1,GaNe1,Tosh1,Dun1e,Dus1e,Pom1e,Amir3e,Herm1e,Herm2e,See1,Daillant,Checco,Hoorfar,StaRaSchu,Amirfazli,GaNe2,Wang2}, for liquid lenses at the interface of two fluids
\cite{Tosh1,StaRaSchu,Dussaud,Chen4,Chen5,Aveyard,Li1,Takata}, and for spherical particles or bubbles at the interface of two
fluids \cite{Schel,Plat1,Dim1,Ave1,JensLi,Iva1,Broch10,Butt1,Butt2,Butt3}; also the role of line tension in epitaxial growth
has been discussed \cite{Goe}. On the theoretical side the theory of capillarity has been extended in a phenomenological
way by taking into account line contributions and the consequences of these extensions have been explored
\cite{Wid1,Buk3,BorNeu,Pethica,Iva1,Chen3,Chen1,Marmur-n1,Marmur-n2,Babak,SoloWhite,Xia,KubNap1,GaNe2,Wid3,Li1,Brink,Brink-b,Brink-c,Gretz,Buff1,Guzz}.
Furthermore there are microscopic calculations of the line tension
\cite{Wid5,GD1,BD,Dob2,Dob3,WidCl1,WidWid1,VarRob1,Perk1,Buk1,Ker1,Nav1,Tar1,JoGe2,Dob1,Gand}
and many studies, in most cases based on microscopic theories as well, concentrate on the behavior of
the line tension in the vicinity of a wetting transition
\cite{Wid2t,Ind2t,Szl1,Ind1,IndBacLan1t,IndRob1t,VarRob2,DobInd1t,IndDob1t,RobInd0,Blos1,Perk2,Widom-n4}.
Theoretical studies are also devoted to the examination of the influence of the droplet size on the line tension
\cite{Jakub} and of the electrostatic contributions to the line tension which arise in the presence of surface charges
\cite{Chou1}.
The boundary tension between two coexisting wetting films of different thicknesses was studied as well
(see, e.g., Refs. \cite{Perk1,Perk2,Perk3,Erring}).
A drop-size dependence of the contact angle was studied in molecular dynamics simulations without deducing
values for the line tension (see, e.g., Ref. \cite{Guo}).
Finally, the line tension was studied also via molecular dynamics simulations (see, e.g., Refs. \cite{Bres4,Werder})
or via an analysis of probability distributions as obtained from Monte Carlo simulations at
three-phase contact \cite{Djikaev-n1}.
\par
A closer inspection of these results reveals that certain aspects of the line tension either give rise to conflicting
statements or
remain unaddressed, leading us to set out to clarify the following basic questions:
\begin{itemize}
\item { {In which sense is the concept of the line tension well defined? }}
\item { { Is it sufficient for the determination of equilibrium shapes to characterize the thermodynamics of the contact
line by a line tension only?}}
\end{itemize}
These questions arise because it is not obvious that the definition of $\tau$ via
Eq. (\ref{eta}) leads to a unique result. The reason lies in the arbitrariness in the definition of the position of the interfaces
between
adjacent phases. This arbitrariness is due to the smooth spatial transition between the adjacent phases.
Once the density distributions of the fluid components across the interfacial region are known, for example on the
basis of scattering experiments or atomic force microscopy \cite{Pom2e,Pom3,Bue1}, or theoretically by simulations or
density-functional calculations, a criterion has to be applied which fixes the interface
position somewhere in the transition region. However, there exists a multitude of sensible choices. The
arbitrariness in the definition of the interface positions leads to an arbitrariness in the definitions of the volumes,
areas, lengths,
and to some extent even interfacial tensions (see, e.g., Refs. \cite{RowWid,Kalikman,Tolman,Hill,Buff,Kondo,Hend}).
Although it was shown by Widom \cite{Wid1,RowWid} that
$\tau$ is independent of the particular choice of the dividing interfaces in the case of a straight three-phase-contact line in
a system with three thermodynamically coexisting fluid phases separated by planar interfaces, it is not clear that the same will
be true in other cases as well, e.g., if two fluid phases are in contact with an inert solid phase, or for systems in
which the
curvatures of the interfaces and of the contact line play a role.
A first discussion concerning the uniqueness of the line tension for the case
in which two fluid phases are in contact with and meet at an inert solid phase, which naturally raises these issues, is given in Sect. 3.
This discussion, however, turns out to be incomplete and it leads to a contradiction with one of
the results obtained in Sect. 5, which states that the line tension {\em as defined there} is unique.
A thorough discussion
and the resolution of the contradiction is given in Sect. 6.
In Sect. 5 we mainly investigate the three-phase-contact line in systems with curved interfaces.
On the other hand, just these systems
are studied experimentally if one attempts to determine the value of the line
tension, e.g., via measuring the dependence of the contact angle on droplet size (see, e.g., Refs. \cite{Pom1e,Pom2e,Herm1e,Herm2e,Drelich,Dussaud,Chen4,Chen5,Aveyard,Amirfazli,Law2e,Li1,Takata}).
\par
The curvature dependence of interfacial tensions leads to contributions to the free energy which scale with the linear
extension of the system, i.e., very much like the line contribution. Since in many cases interface curvature and the length of the three-phase-contact line
cannot be varied independently, these two contributions are not distinguishable per se. Thus, line tension and
curvature effects on the interfacial tensions have to be discussed simultaneously. Moreover, the curvature expansion
of the surface tension has to be known in advance before the line tension may be determined from Eq.
(\ref{eta}). \\
Since in the present context it is unavoidable to consider curvature effects on the interfacial tensions and because
it is known that a consistent
description of curved (spherical) interfaces must take into account --- for general dividing interfaces ---
derivatives of the surface
tensions with respect to their radii of curvature, i.e., their bending rigidities (see, e.g., Refs.
\cite{RowWid,Kondo,Hend,Kalikman,Tolman,Hill,Buff,RejNap,T1,T101,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T14}),
we expect that a complete thermodynamic
description of three-phase-contact lines requires the introduction of further material parameters
in addition to the line tension.
Indeed,
a number of theoretical analyses have appeared in the literature in which additional material parameters were
introduced in order to characterize the three-phase-contact line \cite{Marmur-n1,Marmur-n2,Babak,SoloWhite,BorNeu}.
Recent studies have been carried out by Rusanov et al. \cite{Rusanov-1} for a
drop on a solid substrate, and by Widom and coworkers in connection with the line analogue of the
Gibbs adsorption equation \cite{Widom-n1,Widom-n2,Widom-n3,Widom-n5,Widom-n6,Widom-n7}
for a straight contact line at a genuine
three-phase contact.
In Sect. 5 we systematically study these issues guided by the concept of form-invariance of
the basic equations under so-called notional shifts of dividing interfaces (for a definition see Sects. 2 and 5).
\par
As a prerequisite to a thorough study of these issues we also have to answer similar questions
related to the concept of surface tensions. However, before discussing them and in particular the main
questions concerning the line tension, we point at related issues recurring in
the literature which can be judged only after the basic questions raised above have found a satisfactory answer. These
issues are:
\begin{itemize}
\item What is the typical magnitude of the line tension? \\
Of course, the line tension depends on the thermodynamic state of
the system and the number of relevant thermodynamic degrees of freedom varies from system to system. For example, for
certain systems the temperature dependence of the line tension becomes especially pronounced close to wetting
transitions \cite{Wid2t,WidCl1,WidWid1,Szl1,Ind1,Ind2t,Law2e,Wid3,BD,Dob2,Dob3,Sch1t,IndBacLan1t,IndRob1t,VarRob1,VarRob2,Abr1t,DobInd1t,Perk1,IndDob1t,RobInd0,Blos1,Buk1}.
On the other hand, for many systems away from such special
thermodynamic states there is now widespread agreement that the value of the line tension is of the order of $10^{-11}$ N
\cite{Pom2e,Pom3,GD1,BD,Daillant}. With the experimental accuracy available at present the corresponding prefactor distinguishes
between different systems.
\item What is the sign of the line tension?
\\Various experimental and theoretical investigations lead to values of the line
tension which include both signs \cite{Clar1,GD1,Tosh1,Herm1e,Rosso,Chen4,Chen5,BD,Wang1}.
Contrary to the interfacial tension,
there exists no thermodynamic argument for a specific sign of
the line tension \cite{Wid1,Guzzardi}, so that experimental findings for the line tension with a certain sign cannot be discarded
from the outset.
(Even in this respect a contrary statement may be found in the literature \cite{Li2}.)
On the other hand, if one is interested in the temperature dependence of the line tension for a
specific system which undergoes a first-order wetting transition, both theoretical and experimental results show
that upon increasing the temperature towards the wetting transition temperature the line tension changes sign from
negative to positive values \cite{Ind2t,Law2e,BD,Wang1,Wang2}.
\item Is a negative line tension compatible with the structural stability of drops? \\
The mesoscopic analysis of line tensions including
small wavelength fluctuations confirms that negative values of the line tension do not lead to instabilities
\cite{Dob2,Rosso,Guzzardi}.
\item What does the line tension depend on? \\
Similarly to the interfacial tension it is a function of the
thermodynamic state of the system. For straight contact lines, defined by intersecting planar interfaces,
the phases involved must be at stable thermodynamic coexistence. For example,
one-component fluids in contact with an inert
substrate must be at liquid--vapor coexistence $\mu = \mu_0 (T)$, where $\mu$ denotes the chemical potential.
Thus in this case the line tension is a function of temperature only. If the fluid in contact with a
substrate consists of a binary mixture
of A and B particles, the fluid must be at fluid--fluid coexistence, i.e., $\mu^{\mathrm{A}} = \mu^{\mathrm{A}} (\mu^{\mathrm{B}},T)$
which leaves two thermodynamic variables free. For three-phase contact among three fluid phases the system must be at the
triple line ($\mu^{\mathrm{A}} = \mu^{\mathrm{A}}_0(T), \mu^{\mathrm{B}} = \mu^{\mathrm{B}}_0(T)$) of A-rich liquid,
B-rich liquid, and vapor coexistence so that in this case the line tension is again a function of temperature only.
This dependence can be reparametrized in terms of the temperature dependent contact angle $\theta (T)$.
The situation is somewhat different for curved contact lines defined by intersecting curved interfaces.
Droplets of finite size residing on a substrate or liquid lenses formed at planar fluid--fluid interfaces
are examples in
which the contact lines are curved.
Due to their curvature droplets or lenses remain in a (constrained)
equilibrium with their surrounding phases,
which takes place at chemical potentials slightly off their values $\mu_0^i$
at stable thermodynamic coexistence with planar interfaces. Under these conditions
the pressure is different inside and outside the droplet or lens. This pressure difference and at the same time
the size of the droplet or lens are prescribed by the chosen chemical potentials or alternatively by the
temperature and the ambient pressure which deviates from that for stable coexistence at the same temperature.
Thus in principle the line tension may now depend on a further thermodynamic variable in addition to $T$.
This dependence can be reparametrized in terms of the size of the droplet or lens.
\\
In addition, from the theoretical point of view the line tension $\tau$ is a functional of both the interaction
potentials between the fluid particles and the substrate potential. If the microscopic forces are too
long-ranged $\tau$ becomes ill defined while the corresponding interfacial and surface tensions retain their validity
\cite{Ind2t,DN1}, i.e., in this case the size dependence of $\Omega$ cannot be described in terms of a bulk,
surface, and line contribution with a line tension which is size- and shape-independent in the thermodynamic limit.
\\
It appears that in extracting line tensions from experimental data so far the curvature dependence of the
interfacial tension, characterized by the
Tolman length \cite{T0,ChenTrein,T1,T1b,T101,T2,T3,T4,T5,T6,T7,T8,T9,T10,T11,T12,T14,Anisimov},
has been completely neglected. Therefore
the question arises to which extent the experimental determination of the line tension is affected by the
Tolman length.
\item Can one measure the line tension? \\
In the last section we shall point out possible difficulties in determining line tensions uniquely.
\end{itemize}
\vspace*{0.5cm}
\section{The substrate--fluid surface tension}
\renewcommand{\theequation}{2.\arabic{equation}}
\setcounter{equation}{0}
\vspace*{0.5cm}
In the following we discuss a possible source of ambiguity in the definition of the line tension
(see Eq. \eqref{eta}) in the case that one of the
phases considered in Sec. 1 is an inert substrate, i.e., we consider two-phase coexistence in the
presence of an inert wall rather than coexistence of three genuine thermodynamic phases. We focus on
the issue of non-uniqueness of the liquid--substrate or gas--substrate interfacial tensions.
To this end we first briefly mention
the related question of the thermodynamic determination of the interfacial tension $\sigma_{\kappa\kappa'}$ in a system in which
two coexisting {\it fluid phases}, say $\kappa$ and $\kappa'$, meet along a planar interface \cite{RowWid,Wid1}.
Using the formula
\begin{equation}
\label{sigma}
\sigma_{\kappa\kappa'} = \lim_{V_{\kappa,\kappa'},A_{\kappa \kappa'} \rightarrow \infty} \,
\frac{\Omega -V_{\kappa} \,\omega_{\kappa} -
V_{\kappa'} \,\omega_{\kappa'} }{A_{\kappa\kappa'} } \quad,
\end{equation}
the question arises whether the corresponding value of the
interfacial tension depends on the arbitrary choice of the position of the fluid--fluid interface
and the corresponding volumes $V_{\kappa}$ and $V_{\kappa'}$.
In the case of a planar interface the value of the interfacial tension does
not depend on the position of the interface, i.e., Eq. (\ref{sigma}) leads to a unique result. The reason
is that the total volume $V_{\kappa} + V_{\kappa '}$ as well as the surface area $A_{\kappa \kappa '}$ are independent
of the location of the dividing surface; in addition due to thermal equilibrium one has $\omega_{\kappa} =
\omega_{\kappa '}$. (For a complete discussion of this issue see, e.g., Refs. \cite{RowWid,Wid1}.) \\
However, in situations in which one of the phases, say phase $\gamma$, is an inert substrate with a planar
surface, a difference to the case of fluid--fluid interfaces arises because the inert substrate is not one of
the thermodynamically
coexisting phases. Although it is not a priori obvious whether one should treat the substrate as a part of the system or not, it
is usually considered as an external object which just provides a steep external potential defining the boundaries of the
system. \\
It might appear that the freedom in positioning the dividing interface is less obvious for the solid--fluid interface than for the
fluid--fluid interface because a solid surface is defined rather sharply, say by the positions of the nuclei
of the atoms forming the
topmost layer. However, even then the position at which an actual experimental technique
(e.g., atomic force microscopy, optical methods, etc.)
will locate the surface of the solid will certainly deviate from the definition given above, because of the smooth
decay of the substrate potential and of the finite extension of the electron cloud of the substrate, and because
the fluid
phase in contact with the solid only gradually attains its bulk properties. Depending on the kind of experiment and on the
way the data are analyzed, relative shifts in the location of the dividing surface which are of the order of one or even several
atomic radii are conceivable. This shift multiplied by the surface tension under consideration yields a force
which is comparable with the magnitude of the line tension.
Moreover, a solid substrate in contact with a vapor phase might be covered with a
thin liquid-like wetting film which is in thermal equilibrium with the bulk vapor phase. In a thermodynamic description in terms of interfacial and line tensions
as considered here, this thin wetting film is not treated as a separate entity but as part of the solid--vapor
interface and as such it contributes to the actual
solid--vapor ({\em g}as) surface tension $\sigma_{\mathrm{sg}}$. Accordingly the question arises,
where to place the solid--vapor
interface. It could be somewhere in the transition region from the liquid-like film to the vapor or at the transition
region from the solid to the liquid-like film. It is very likely
that different experimental techniques imply different conventions.
(If nonequilibrium situations are considered $\sigma_{\mathrm{sg}}$ may be different from its equilibrium value
if the aforementioned thin wetting film -- present at thermal equilibrium -- has not yet formed.)
\par
In order to explore the consequences of the freedom in choosing the dividing surface, we consider a system in which a planar
inert substrate is exposed to the fluid $f$, i.e., the liquid or gas phase (see Fig. 1).
\begin{figure}[h]
\includegraphics*[scale=.30]{Fig1.eps}
\caption{\label{fig1} A fluid phase (f) in the presence of a planar substrate (s).
Two choices for the position of the planar substrate--fluid interface are marked as (1) and (2); they are shifted
with respect to one another by a distance $\delta h$. In each case the area $A_{\mathrm{sf}}$ of the
substrate--fluid interface is the same.}
\end{figure}
If we consider the solid as a part
of our
system the value of the substrate--fluid surface tension $\sigma_{sf}$ follows from the equation
\begin{equation}
\label{sf}
\sigma_{sf} = \lim_{V_{\mathrm{f},\mathrm{s}},A_{\mathrm{s} \mathrm{f}} \rightarrow \infty} \,
\frac{\Omega - V_{f} \,\omega_{f}\, - V_{s} \,\omega_{s}}{A_{sf}} \quad,
\end{equation}
where $\Omega$ denotes the grand canonical potential of the fluid plus that of the substrate, $V_{f}$ is the value of the fluid
volume compatible with the chosen location of the planar substrate--fluid interface, $V_{s}$ is the corresponding volume of
the solid, and $\omega_{f}$ and $\omega_{s}$ are the grand canonical free energy
densities of the fluid and solid, respectively. $A_{sf}$ is the area of the planar solid--fluid
interface. If the substrate is in a constrained equilibrium (e.g., no interdiffusion of solid and fluid particles),
one has $\omega_{f} \neq \omega_{s}$. \\
Since the position of the substrate--fluid interface is not unique we can consider another position of this
interface which is parallel to the first one and located a distance $\delta h$ above it (see Fig. 1). In this case $V_{f}^{(2)} =
V_{f}^{(1)} - \delta h \,A_{sf}$, $V_{s}^{(2)} = V_{s}^{(1)} + \delta h \,A_{sf}$, whereas the surface area
$A_{sf}$ does not change upon the vertical shift of the interface. It follows from Eq. (\ref{sf}) that
the values of the substrate--fluid surface tensions corresponding to these two choices differ by
\begin{equation}
\label{deltasigmasf}
\sigma_{sf}^{(2)} - \sigma_{sf}^{(1)} = \left ( \omega_{f}\, - \omega_{s}\, \right ) \delta h \quad.
\end{equation}
Thus a different choice of the location of the substrate--fluid interface corresponds to a redistribution of the free energy
between the bulk and the surface terms, and to different values of the substrate--fluid surface tension. Note that in the case
of {\it thermodynamically coexisting \rm} liquid and gas phases the grand canonical free energy densities of the two
phases (i.e., the negative pressures) are equal ($\omega_{l} = \omega_{g} $) and thus
the well known conclusion $\sigma_{lg}^{(2)} = \sigma_{lg}^{(1)}$ follows
(see Refs. \cite{RowWid,Wid1}).
At the same time we emphasize that the difference $\sigma_{sg} - \sigma_{sl}$ (which, e.g., enters into
Young's law for the contact angle (see Eq. (\ref{Young}))) does not depend on this choice of
the dividing surface provided the dividing interfaces between solid and gas
on one hand and between solid and liquid on the
other hand are chosen to be at the same height above the solid, and provided the solid is in the same state in
both cases.
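The bookkeeping behind Eqs.~(\ref{sf}) and (\ref{deltasigmasf}), and the invariance of the difference $\sigma_{sg}-\sigma_{sl}$, can be verified with elementary arithmetic. The following sketch (ours; all numerical values are arbitrary illustrative inputs, not material parameters) implements the surface-excess definition directly:

```python
def sigma_sf(Omega, V_f, V_s, A, omega_f, omega_s):
    # Eq. (sf): substrate--fluid tension as the surface excess of the grand potential
    return (Omega - V_f * omega_f - V_s * omega_s) / A

Omega = -150.0                 # grand potential of fluid plus substrate (arbitrary units)
omega_f, omega_s = -1.0, -2.5  # bulk free-energy densities; omega_f != omega_s
A = 10.0                       # interface area, unchanged by a vertical shift
V_f1, V_s1 = 40.0, 60.0        # volumes for dividing-surface choice (1)
delta_h = 0.3                  # upward shift defining choice (2)

V_f2, V_s2 = V_f1 - delta_h * A, V_s1 + delta_h * A

s1 = sigma_sf(Omega, V_f1, V_s1, A, omega_f, omega_s)
s2 = sigma_sf(Omega, V_f2, V_s2, A, omega_f, omega_s)

# Eq. (deltasigmasf): the two conventions differ by (omega_f - omega_s) * delta_h
assert abs((s2 - s1) - (omega_f - omega_s) * delta_h) < 1e-12

# For gas and liquid interfaces shifted by the SAME delta_h the shift drops out
# of the difference sigma_sg - sigma_sl, since omega_g == omega_l at coexistence:
omega_g = omega_l = -1.0
assert abs((omega_g - omega_s) * delta_h - (omega_l - omega_s) * delta_h) < 1e-12
```

At liquid--gas coexistence $\omega_l=\omega_g$, so the analogous shift also leaves $\sigma_{lg}$ itself unchanged, recovering the classical uniqueness of fluid--fluid interfacial tensions.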
\section{Three-phase-contact line at intersecting planar interfaces: uniqueness of the line tension}
\renewcommand{\theequation}{3.\arabic{equation}}
\setcounter{equation}{0}\vspace*{0.5cm}
In the case of three genuine thermodynamically coexisting phases the issue analogous to the one raised in Sect. 2 is
whether the value of the line
tension $\tau$ determined from Eq. (\ref{eta}) depends on the location of the dividing interfaces which in turn determine the
location of the contact line. One can show \cite{RowWid} that in the case that these phases meet along a {\it straight
line}, parallel shifts of this line do not influence the value of the line tension. \\
In order to study whether the location of the substrate--fluid interface affects the value of $\tau$, we investigate the following
configuration of two-phase coexistence in the presence of a planar solid substrate.
Due to thermal equilibrium of the gas and liquid phase one can impose lateral boundary conditions such that far
to the left the substrate is exposed to the gas phase whereas far to the right the substrate is exposed
to the liquid. This enforces the formation of the liquid--gas interface which meets the substrate with
a contact angle $\theta$ (see Fig. 2). On macroscopic scales the liquid--gas interface is also planar.
\begin{figure}[tb]
\includegraphics*[scale=.40]{Fig2.eps}
\caption{\label{fig2} Coexisting gas (g) and liquid (l) phases in
the presence of a planar substrate (s). The planar liquid--gas interface meets the substrate with
a contact angle $\theta$. Far to the left (right) there is the substrate--gas (--liquid) interface.
Two choices of the position of the substrate--fluid interfaces are
marked by (1) and (2); their distance is denoted by $\delta h$.
This results in two parallel contact lines the cross sections of which are indicated by the dots.
$\Delta A_{\mathrm{lg}}$ and $\Delta A_{\mathrm{sl}}$
denote the corresponding changes in the liquid--gas and substrate--liquid interfacial areas, respectively.
For both choices the contact angle $\theta$ is the same.}
\end{figure}
\\
We again consider two parallel positions of the substrate--fluid interface at a distance $\delta h$ from each other.
This results in two
corresponding, parallel contact lines. From simple geometrical considerations it follows that the values of the
line tensions corresponding to these choices differ by
\begin{equation}
\label{deltaeta1}
\tau^{(2)} - \tau^{(1)} \,=\, \frac{\sigma_{lg} + (\sigma_{sl}^{(1)} -\sigma_{sg}^{(1)})\cos\theta}
{\sin\theta} \,\delta h \quad.
\end{equation}
In the calculation leading to Eq. (\ref{deltaeta1}) we have used the aforementioned result
$\sigma_{sg}^{(2)} - \sigma_{sl}^{(2)} = \sigma_{sg}^{(1)} -\sigma_{sl}^{(1)}$ and the fact that the freedom
in the choice of the substrate--fluid
interface position does not influence the value of the liquid--gas interfacial tension
$\sigma_{lg}$, i.e., $\sigma_{lg}^{(2)}= \sigma_{lg}^{(1)}=\sigma_{lg}$.
After using Young's equation \cite{RowWid,Gen01,D01}
\begin{equation}
\label{Young}
\sigma_{sg} = \sigma_{sl} + \sigma_{lg} \cos\theta_{0}
\end{equation}
and identifying $\theta = \theta_{0}$ one obtains
\begin{equation}
\label{deltaeta2}
\tau^{(2)} - \tau^{(1)} = \sigma_{lg} \,\delta h \, \sin\theta_{0} \quad.
\end{equation}
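As a numerical cross-check (with purely illustrative parameter values, not taken from any experiment), inserting Young's equation (Eq. (\ref{Young})) into Eq. (\ref{deltaeta1}) indeed reproduces Eq. (\ref{deltaeta2}):

```python
import math

# illustrative assumptions: surface tensions in N/m, interface shift in m
sigma_lg = 0.05
sigma_sl = 0.02
theta0 = 0.8                                        # Young contact angle [rad]
sigma_sg = sigma_sl + sigma_lg * math.cos(theta0)   # Young's equation, Eq. (3.2)
dh = 1e-9                                           # shift delta h of the dividing interface

# Eq. (3.1), with theta identified with theta_0:
dtau_geom = (sigma_lg + (sigma_sl - sigma_sg) * math.cos(theta0)) / math.sin(theta0) * dh
# Eq. (3.3):
dtau_young = sigma_lg * dh * math.sin(theta0)

print(dtau_geom, dtau_young)
```

Since $\sigma_{sl} - \sigma_{sg} = -\sigma_{lg}\cos\theta_0$, the numerator reduces to $\sigma_{lg}\sin^2\theta_0$, and the two expressions agree identically.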
This result reflects the fact that there is no explicit force balance perpendicular to the solid surface. Taken at face value the
above result would show that the freedom in positioning the substrate--fluid interfaces with the consequential shift
of the contact
line is reflected in the change of the value of the line tension. This change is proportional to the distance $\delta h$ between
the two arbitrarily selected positions of the substrate--fluid interfaces. Numerically, the rhs of Eq. (\ref{deltaeta2})
is comparable with $\tau$. However, Eq. (\ref{deltaeta2}) is in conflict with a result which will be obtained in
Sect. 5 of the present work,
namely the invariance of the line tension with respect to notional changes of the system. This
puzzle will be resolved in Sect. 6.
Here we only note that it turns out that the two conflicting statements are based on two different
definitions of a line tension both of which seem to be absolutely compelling from the point of view of how
they are introduced. A relation between the two definitions of $\tau$ will be given. But we also point
out that the way Eq. (\ref{deltaeta2}) has been obtained above has to be scrutinized carefully,
because, while deriving Eq. (\ref{deltaeta2}), all contributions to $\Omega$ which are generated
by separating a subsystem from its surrounding
and which are proportional to the linear extension of the system,
have been disregarded. We shall show that in general this is not permissible. \\
\section{ Line tension and contact angles: problems
associated with the standard modified Neumann and Young equations }
\renewcommand{\theequation}{4.\arabic{equation}}
\setcounter{equation}{0}\vspace*{0.5cm}
We start this section by considering genuine three-phase coexistence in which a lens consisting of a fluid phase $\beta$ is located
at the interface between two fluid phases $\alpha$ and $\gamma$ (see Fig. 3(a)).
\begin{figure}
\includegraphics*[scale=.30]{Fig3.eps}
\caption{\label{fig3} (a) A liquid lens (phase $\beta$) at the $\alpha$--$\gamma$
interface between two fluid phases $\alpha$ and $\gamma$; $r$ is the radius of the circular intersection
between the two spherical caps forming the lens. The various contact angles are denoted also as $\alpha$,
$\beta$, and $\gamma$.
(b) A sessile liquid drop (phase $\beta$) in contact with its vapor (phase $\alpha$) and a planar
substrate ($\gamma$) forming a contact angle $\theta$. The drop is a spherical cap. }
\end{figure}
The lens is taken to be formed by two
spherical caps of different radii intersecting along a circle of radius $r$. The three-phase-contact line of circular shape is
accompanied by the line tension $\tau$. To simplify the notation, the corresponding contact angles are also denoted by the
same symbols as the phases, i.e., $\alpha$, $\beta$, and $\gamma$, where $\alpha + \beta + \gamma = 2\pi$. In the absence of the
line tension the contact angles $\alpha_0$, $\beta_0$, and $\gamma_0$ fulfill the equation
(see, e.g., Ref. \cite{RowWid})
\begin{equation}
\sigma_{\alpha\gamma} + \sigma_{\alpha\beta} \, \cos\alpha_0 \, + \, \sigma_{\beta\gamma} \, \cos\gamma_0 \, = \, 0 \quad.
\end{equation}
If a line-tension contribution to the constrained
grand canonical free energy $\tilde{\Omega}$
is included one obtains, from the minimization of
$\tilde{\Omega}$ at a constant volume of the liquid phase $\beta$, the modified Neumann equation
(see, e.g., Refs. \cite{Buff1,Dussaud})
\begin{equation}
\label{mye1}
\sigma_{\alpha\beta} \,\left(\cos\alpha\,-\, \cos\alpha_0 \,\right)\, + \, \, \sigma_{\beta\gamma} \, \left(\cos\gamma\,-
\,\cos\gamma_0\right)\,=\, \frac{ \tau}{r}\, ,
\end{equation}
provided neither bending rigidities of the interfaces nor further properties attributed to the contact line
other than the line tension (such as rigidities against changes of contact angles) are taken into account.
Equation (\ref{mye1}) can be equivalently rewritten as
\begin{equation}
\label{mye10}
\cos\beta \,=\, \cos\beta_0 \,-\,\frac{\sin\beta_0}{\sin\alpha_0}\,\,\frac{\tau}{\sigma_{\alpha\beta}\,r}
\quad.
\end{equation}
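The equivalence of Eqs. (\ref{mye1}) and (\ref{mye10}) can be made explicit by linearizing in the deviations of the contact angles from their zero-line-tension values and using the standard Neumann-triangle (sine-rule) relation for the unperturbed angles; the following sketch is our reconstruction and keeps only terms of first order in $\tau/r$:

```latex
% Write \alpha = \alpha_0 + \delta\alpha and \gamma = \gamma_0 + \delta\gamma,
% so that \beta = \beta_0 - \delta\alpha - \delta\gamma (from \alpha+\beta+\gamma = 2\pi).
% To first order the modified Neumann equation becomes
-\sigma_{\alpha\beta}\sin\alpha_0\,\delta\alpha
  \;-\; \sigma_{\beta\gamma}\sin\gamma_0\,\delta\gamma \;=\; \frac{\tau}{r}\,.
% Neumann's triangle for tau = 0 implies the sine rule
% \sigma_{\beta\gamma}\sin\gamma_0 = \sigma_{\alpha\beta}\sin\alpha_0, hence
\delta\alpha + \delta\gamma
  \;=\; -\,\frac{\tau}{\sigma_{\alpha\beta}\,r\,\sin\alpha_0}\,,
\qquad
\cos\beta \;=\; \cos\beta_0 + \sin\beta_0\,(\delta\alpha + \delta\gamma)
  \;=\; \cos\beta_0 - \frac{\sin\beta_0}{\sin\alpha_0}\,
        \frac{\tau}{\sigma_{\alpha\beta}\,r}\,.
```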
In particular, if one of the phases, say phase $\gamma$, is taken to represent an inert substrate with a planar surface
($\beta_{0}=\pi - \alpha_{0}$) (see Fig. 3(b)), the above equation turns into the modified Young equation
(see, e.g., Refs. \cite{Tosh1,Gretz,BorNeu})
\begin{equation}
\label{mye2}
\cos\theta = \cos\theta_0 - \frac{\tau}{\sigma_{\alpha\beta}\,r} \quad,
\end{equation}
where $r$ denotes the radius of the circular substrate--phase-$\alpha$--phase-$\beta$ contact line. Equations
(\ref{mye10}) and (\ref{mye2}) represent asymptotic formulae valid in the limiting case of large lenses or drops,
i.e., $\tau$ governs the {\it leading} behavior for $r \rightarrow \infty$.
\\
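To illustrate the orders of magnitude involved in Eq. (\ref{mye2}), the following sketch (with an assumed line tension of $10^{-10}\,$N and $\sigma_{\alpha\beta} = 0.05\,$N/m, both merely illustrative) evaluates the contact angle for several contact-line radii:

```python
import math

sigma = 0.05            # liquid--vapor tension [N/m] (illustrative assumption)
tau = 1e-10             # line tension [N] (a commonly quoted order of magnitude)
theta0 = math.radians(60.0)

angles = {}
for r in (1e-3, 1e-6, 1e-8):                           # contact-line radii [m]
    cos_theta = math.cos(theta0) - tau / (sigma * r)   # modified Young equation
    angles[r] = math.degrees(math.acos(cos_theta))

for r, th in angles.items():
    print(f"r = {r:.0e} m  ->  theta = {th:.2f} deg")
```

For a millimetre-sized drop the correction is of order $10^{-4}$ degrees, while at $r \sim 10\,$nm (with this assumed $\tau$) it reaches several degrees; this is why line-tension effects are experimentally accessible only for very small drops or lenses.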
We note that in many experiments involving sessile liquid drops a so-called line tension $\tau$ is
deduced from measurements
of the contact angle $\theta$ as a function of the radius $r$ via fitting the modified Young equation (Eq. (\ref{mye2}))
to the data
\cite{RowWid,Pom2e,Pom3,Law2e,Wang1}. Similar experiments have been carried out with lens-like objects
\cite{Dussaud,Chen4,Chen5,Aveyard,Li1,Takata}. \\
However, a closer look at the procedure of determining $\tau$ described above reveals problems
in using the modified Young
(Eq. (\ref{mye2})) or Neumann equations (Eqs. (\ref{mye1}, \ref{mye10})) which are related to the freedom
in positioning the
dividing interfaces. A shift of the solid--liquid dividing interface (in the case of a sessile drop) by
$\delta h$ or a change of the radius
of a spherical interface (for the lens or the drop) by $\delta R$ leads to changes of the contact angles as well. From
simple geometrical considerations one finds that the corresponding changes in $\cos \theta $ or $\cos \beta $
are of the order
$ \delta h / r$ or $ \delta R/ r$, i.e., they are of the same order as the corrections stemming from the
presence of the line tension. Therefore, upon
applying the modified Neumann or Young equation to the same physical object but with different choices for the dividing
interfaces one would have to introduce two different and suitably chosen values of $\tau$ for different dividing interfaces in order to obtain
the correct relations between the two corresponding contact angles. (In the line-tension related correction terms
in Eqs. (\ref{mye10}, \ref{mye2})
the quantities other than $\tau$ are either independent of the choice of dividing interfaces or their
changes with the dividing interfaces give rise to higher order corrections.) On the other hand, by decomposing
$\Omega$ in two different ways we found that for a straight contact line at a genuine three-phase contact the line tension
is independent of the choice of dividing interfaces. This contradicts the result of the previous argument. Finally, we can also
look at what happens if we shift the substrate--fluid interface for the sessile drop. If we compute the
difference $\tau^{(2)} - \tau^{(1)}$ enforced by the geometrical relations between $\theta^{(2)}$ and $\theta^{(1)}$ together with
the requirement that both $\theta^{(2)}$ and $\theta^{(1)}$ fulfil the modified Young equation we
obtain $\tau^{(2)} - \tau^{(1)} = - \sigma_{lg}\,\delta h \,\sin \theta_0$, which has the same structure
as the
value given in Eq. (\ref{deltaeta2}) (obtained from a comparison of two different ways of decomposing $\Omega$), but
it has the opposite sign.
\\
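The geometric statement above can be checked explicitly for a spherical-cap drop (all numbers below are illustrative assumptions). Shifting the substrate plane by $\delta h$ changes $\cos\theta$ by $(\delta h/r)\sin\theta$, and requiring both descriptions to satisfy the modified Young equation (Eq. (\ref{mye2})) with the same $\theta_0$ forces a compensating change of $\tau$ of magnitude $\sigma\,\delta h\,\sin\theta_0$, with a sign that depends on the direction of the shift:

```python
import math

R = 1e-4          # radius of curvature of the liquid--vapor interface [m]
a = 2e-5          # height of the sphere's center above substrate choice (1) [m]
sigma = 0.05      # liquid--vapor tension [N/m]
dh = 1e-8         # shift of the substrate--fluid dividing interface [m]

# For a cap cut off by a plane a signed distance a below the sphere's center,
# the contact angle obeys cos(theta) = -a/R (a = 0: hemisphere, theta = 90 deg).
cos1 = -a / R
r1 = math.sqrt(R**2 - a**2)
# choice (2): plane shifted towards the center by dh
cos2 = -(a - dh) / R
r2 = math.sqrt(R**2 - (a - dh)**2)

# geometric change of cos(theta) is dh/R = (dh/r) sin(theta), of order dh/r:
dcos = cos2 - cos1

# choose theta_0 so that choice (1) needs no line tension; the line tensions
# required by the modified Young equation, tau = sigma*r*(cos(theta0)-cos(theta)):
theta0 = math.acos(cos1)
tau1 = sigma * r1 * (math.cos(theta0) - cos1)   # = 0 by construction
tau2 = sigma * r2 * (math.cos(theta0) - cos2)
dtau = tau2 - tau1
predicted = -sigma * dh * math.sin(theta0)
print(dcos, dtau, predicted)
```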
Summarizing these two findings it turns out that the relations between two line
tension values obtained from two different decompositions of $\Omega$ for two different sets of dividing interfaces are at
variance with the currently used modified Young and Neumann equations combined with elementary geometrical
considerations.\\
In order to find equations free from the above inconsistencies we shall investigate in detail two representative
systems: a liquid lens at the interface between two fluid phases and a sessile drop on a substrate. We shall include from the
outset the effects of curvature on the interfacial tensions and we shall explicitly state all conventions used in defining different
sets of dividing interfaces, and list all the properties which are assigned to the interfaces and to the reference bulk phases.
\section{ A closer look at lenses and drops}
\renewcommand{\theequation}{5.\arabic{equation}}
\setcounter{equation}{0}\vspace*{0.5cm}
In this section we first study a lens-shaped fluid phase $\beta$ located at the planar interface between two other
fluid phases $\alpha$ and
$\gamma$, and secondly a sessile liquid drop of $\beta$ phase in contact with its vapor $\alpha$ on top of an undeformable
inert solid substrate $\gamma$.
\subsection{ General considerations}
In this context two main questions are addressed. First, how does the line tension depend on `parallel' displacements
of the positions of the dividing interfaces between two phases? Secondly, what can be learned from the fact that
thermodynamic potentials of the total system must be independent of arbitrary conventions regarding the choice of dividing
interfaces? In particular, we are interested in the resulting requirements with respect to the structure of the
equations relating the contact angle(s)
to the system size, e.g., to the radius $r$ of the three-phase-contact line. These
questions are motivated by the inconsistencies encountered for the modified Young and Neumann equations
in the previous section.
\\
First, we specify the systems in more detail and give the rules that we have chosen to define the dividing
interfaces. These rules still admit parallel shifts of the dividing interfaces. We also introduce the quantities that are
used in order to describe the shifted dividing interfaces. It is further necessary to fix the properties attributed to the (reference) bulk and the
interfaces present in the lens or drop, and we do this in accordance with what is known about the bulk fluid and
the interfacial properties of
spherical drops. Only after this step is completed may the properties attributed to the line be extracted. \\
The lens and the drop are completely characterized once the density distributions are known for all constituent species at
given thermodynamic conditions. Only for small drops are the line tension and curvature effects expected to become
relevant, and in this case gravity can be neglected.
(The validity of Young's law in the presence of gravity is discussed in Ref. \cite{Blok3t}.)
We also assume that no other external bulk forces act on the systems.
Under these conditions, provided the volume of the $\beta$ phase is sufficiently large, one can expect that there exist regions
of the lens (at some distance away from the three-phase contact lines) where the density distributions exhibit
-- to a good approximation -- radial symmetry relative to one of the two centers of curvature
$M_1$ and $M_2$; an additional `center' at infinity characterizing the
planar interface is present in the case of a lens (see Fig. 4).
\begin{figure}[bt]
\includegraphics*[scale=.30]{Fig4.eps}
\renewcommand{\figurename}{Fig.}
\caption{\label{fig4} A liquid lens at the planar $\alpha$--$\gamma$ interface (horizontal lines).
Two choices for the interfaces are marked by (1) and (2).
The relative concentric shifts of the spherical interfaces are characterized by $[\mathrm{d}R_1]$ and $[\mathrm{d}R_2]$.
The planar $\alpha$--$\gamma$ interface is shifted
together with the spherical ones, as indicated by the dashed and solid horizontal lines and
the change in the amount of $\gamma$ phase shown in lighter gray.
The corresponding contact angles are denoted by ${\alpha}^{(1)}$, ${\gamma}^{(1)}$,
${\beta}^{(1)} = 2\pi - {\alpha}^{(1)} - {\gamma}^{(1)}$
and ${\alpha}^{(2)}$, ${\gamma}^{(2)}$, ${\beta}^{(2)} = 2\pi - {\alpha}^{(2)} - {\gamma}^{(2)}$;
the contact-line radii
are $r^{(1)}$ and $r^{(2)}$. M$_1$ and M$_2$ are the centers for the radii of curvature
$R_1^{(1)}$, $R_1^{(2)} = R_1^{(1)} + [\mathrm{d}R_1]$ and
$R_2^{(1)}$, $R_2^{(2)} = R_2^{(1)} + [\mathrm{d}R_2]$, respectively.}
\end{figure}
In the case of the drop we correspondingly expect `radial'
symmetries around
one center $M$ plus an additional `center' at infinity characterizing the substrate--fluid interface (see Fig. 5).
For the lens, the center $M_1$ characterizes the density distributions at the
$\alpha$--$\beta$ interface, the center $M_2$ characterizes those at the $\beta$--$\gamma$ interface.
\begin{figure}[t]
\includegraphics*[scale=.30]{Fig5.eps}
\renewcommand{\figurename}{Fig.}
\caption{\label{fig5} A sessile liquid drop on a planar
substrate. Two choices for the liquid--vapor ($\beta$--$\alpha$), the substrate--liquid
($\gamma$--$\beta$), and substrate--vapor ($\gamma$--$\alpha$) interfaces are denoted as (1) and (2);
the corresponding concentric shifts are characterized by $[\mathrm{d}R]$ and
$[\mathrm{d} h]$. The corresponding contact angles are denoted by ${\theta}^{(1)}$ and ${\theta}^{(2)}$,
and $r^{(1)}$ and $r^{(2)}$ are the corresponding contact-line radii. M is the center of the radii of
curvature.
}
\end{figure}
In view of these radial symmetries it only makes sense to consider concentric shifts of the interfaces
with respect to the fixed centers. The two phases
$\alpha$ and $\gamma$ are assumed to be separated by a planar interface. If the $\alpha$--$\beta$ and $\beta$--$\gamma$
interfacial structures do not overlap except near the three-phase region, the interior of the lens is occupied by
an almost homogeneous $\beta$ phase.
This homogeneous $\beta$ phase interpolates smoothly between the radially symmetric density
distributions around the two centers associated with the two interfaces. Similar considerations apply to the drop except
that the $\beta$--$\gamma$ interface is planar, which can be regarded as a limiting case of a spherical interface with
infinite radius of curvature.
\\
From the previous remarks it follows that to a large extent the isodensity contours
of the $\alpha$--$\beta$ and $\beta$--$\gamma$ interfaces
are segments of spherical surfaces. In order to define Gibbs dividing interfaces separating
two adjoining phases we use the spherical parts of the isodensity contours in a two-phase region and extrapolate
them into the three-phase-contact region where surfaces of constant densities actually are no longer spherical. Which of the
infinite number of surfaces of constant densities is chosen in order to construct a Gibbs dividing interface
is of course a matter of convention. \\
For the lens, once the centers $M_1$, $M_2$ are given and the radii $R_1$, $R_2$ are chosen according
to a certain convention we can define a three-phase-contact line of circular shape by the intersection of the two spheres
($M_1$,$R_1$) and ($M_2$,$R_2$) as indicated in Fig. 4. The third, planar Gibbs dividing interface between the phases $\alpha$
and
$\gamma$ is placed then in such a way that it coincides with the plane determined by the previously defined circular
three-phase-contact line.
Again this choice is a mere convention but deviating from it would create three different lines of intersection
between three pairs of interfaces ($\alpha$--$\beta$ intersecting with $\beta$--$\gamma$, $\alpha$--$\beta$ with $\alpha$--$\gamma$, and
$\alpha$--$\gamma$ with $\beta$--$\gamma$). Furthermore, for such a deviating choice a ring-shaped volume,
with triangular cross section defined by these three lines and the
connecting interfaces, could not be assigned to any of the three phases and would have to be treated separately as
a new ``line phase''.
This would create an unrewarding complication because the central idea of introducing the mathematical
Gibbs dividing surface is to allow for a natural extrapolation of
the properties of the adjacent bulk-like phases from both sides right to the dividing surface where they are
assumed to change discontinuously. Within this approach there is no room for additional phases. A consistent description of the
actual system follows by defining appropriate surface excess quantities and their densities, like the surface tension. We
stick to Gibbs' idea and we use the convention described above within which all three lines of intersection
coincide, and no volume filled with a phase of an unassigned character is left over. \\
The procedure for the drop is analogous. The sphere ($M,R$) defines the $\alpha$--$\beta$ Gibbs dividing surface. Again,
the spherical part of the isodensity contour is extrapolated into the three-phase-contact region.
In the same spirit as above the $\alpha$--$\gamma$
and the $\beta$--$\gamma$ Gibbs dividing interfaces are chosen to lie in the same plane. The three-phase-contact
line is defined by the intersection of the $\alpha$--$\beta$ Gibbs dividing surface with the common $\alpha$--$\gamma$ and
$\beta$--$\gamma$ plane (see Fig. 5). Using this convention has the advantage of being in agreement with the one chosen for the lens in
the limit of one of the curvature radii becoming infinite. Furthermore, this construction creates
only one line of intersection instead of two.
\\
Once the dividing interfaces are defined the total volume $V$ is subdivided into domains assigned properly to the
phases $\alpha$, $\beta$, and $\gamma$. The grand canonical potential of the system is decomposed into the
corresponding bulk, interfacial, and line contributions. For the lens one has
\begin{eqnarray}
\Omega & = & - \, \sum_{\kappa=\left\{ \alpha,\beta,\gamma \right\} } p_{\kappa} V_{\kappa} + A_{\alpha
\beta}\sigma_{\alpha \beta} + A_{\beta \gamma}\sigma_{\beta \gamma}
\nonumber\\
& & + \left(A - \pi r^2\right)\sigma_{\alpha \gamma} \,+\, 2\pi r \tau \quad,
\label{lensdrop1}
\end{eqnarray}
where $p_{\kappa}$ is the pressure in the bulk-like phase $\kappa$ (= $\alpha$, $\beta$, or $\gamma$) and $V_{\kappa}$
is the corresponding volume. $A$ is the area of the planar $\alpha$--$\gamma$ interface in the absence
of the lens.
The values of $V_{\kappa}$ depend on the way the dividing interfaces are chosen. The total volume $V = \sum_{\kappa } V_{\kappa}$ of
the system is independent of this choice, and independent of the physical size of the lens or the drop. \\
For the drop one has
\begin{eqnarray}
\Omega & = & - \sum_{\kappa=\left\{ \alpha,\beta \right\} } p_{\kappa} V_{\kappa} + \omega_{\gamma}
V_{\gamma} + A_{\alpha \beta}\sigma_{\alpha \beta} + \pi r^2 \sigma_{\beta \gamma}
\nonumber\\
& & + \left(A - \pi
r^2\right)\sigma_{\alpha \gamma} + 2\pi r \tau \quad,
\label{lensdrop1b}
\end{eqnarray}
where $A_{\alpha \beta}$, $A_{\beta \gamma}$ (= $\pi r^2$ for the drop) are the areas of the $\alpha$--${\beta}$
and the $\beta$--$\gamma$
interfaces, respectively. $A$ is the area of the planar $\alpha$--$\gamma$ interface (in the absence of the
drop) and it is independent of the choice of the dividing interfaces. The area $\pi r^2$
denotes the $\alpha$--$\gamma$ interfacial area replaced
by the drop. The radius of the circular three-phase-contact
line is denoted by $r$ and $\sigma_{\alpha \beta}$, $\sigma_{\beta \gamma}$, and $\sigma_{\alpha \gamma}$
denote the surface tensions of the $\alpha$--${\beta}$, $\beta$--$\gamma$, and $\alpha$--$\gamma$ interfaces,
respectively. Finally, $\tau$
is the line tension, i.e., the excess free energy per unit length of the three-phase-contact line
we are interested in.
Strictly speaking the line tension defined via Eqs. (\ref{lensdrop1}) and (\ref{lensdrop1b}) is not identical
to its definition via Eq. (\ref{eta}), because in Eqs. (\ref{lensdrop1}) and (\ref{lensdrop1b}) we first keep
the subleading terms and treat them as subleading contributions to $\tau$, whereas in Eq. (\ref{eta}) we isolate
and drop these terms from the outset. However, if in the following we speak about the line tension itself,
as deduced from the decomposition of $\Omega$ via Eqs. (\ref{lensdrop1}) and (\ref{lensdrop1b}), we shall always
drop the subleading terms. In this sense the definitions in Eqs. (\ref{lensdrop1}) and (\ref{lensdrop1b})
and that via Eq. (\ref{eta}) do agree and we therefore do not introduce a different notation to
distinguish between these two expressions of the line tension.
\par
For the lens the pressures in the $\alpha$ and $\gamma$ phases are equal,
\begin{equation}
\label{lensdrop1c}
p_{\alpha} = p_{\gamma} = p \quad,
\end{equation}
because the coexisting phases $\alpha$ and $\gamma$ are separated by a planar interface.
For the drop the role of $p_{\gamma}$ is played
by $ - \omega_{\gamma}$, i.e., the grand canonical free energy density of the solid, while for the phase $\alpha$ we set
$p_{\alpha} = p $.
The pressure of the $\beta$ phase deviates from $p$ and is written as
(cf. the discussion following Eq. (\ref{lensdrop4b}))
\begin{equation}
\label{lensdrop1d}
p_{\beta} = p + \Delta p .
\end{equation}
\\
The expression for the grand potential $\Omega$ can be regrouped as
\begin{equation}
\label{lensdrop2}
\Omega = \Omega_0 + \Delta \Omega \quad,
\end{equation}
where
\begin{equation}
\label{lensdrop3}
\Omega_0 = -p \,V + A \,\sigma_{\alpha \gamma}
\end{equation}
for the lens, and
\begin{equation}
\label{lensdrop3b}
\Omega_0 = -p \left ( V_{\mathrm{\alpha}} + V_{\mathrm{\beta}} \right ) + \omega_{\gamma} V_{\gamma} + A \sigma_{\alpha
\gamma}
\end{equation}
for the drop. Consequently one obtains
\begin{equation}
\label{lensdrop4}
\Delta \Omega = - \Delta p \,V_{\beta} + A_{\alpha \beta } \sigma_{\alpha \beta } +
A_{ \beta \gamma} \sigma_{ \beta \gamma} -\pi r^2 \sigma_{\alpha \gamma} + 2 \pi r \tau
\end{equation}
for the lens, and
\begin{equation}
\label{lensdrop4b}
\Delta \Omega = - \Delta p \,V_{\beta} + A_{\alpha \beta } \sigma_{\alpha \beta } + \left ( \sigma_{ \beta \gamma} -
\sigma_{\alpha \gamma} \right ) \pi r^2 + 2 \pi r \tau
\end{equation}
for the drop. \\
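As a consistency check of this decomposition for the drop (with purely illustrative parameter values, constant tensions, and no curvature corrections), one can minimize the surface and line contributions to $\Delta \Omega$ over spherical-cap shapes at fixed volume $V_{\beta}$; the minimizing contact angle then satisfies the modified Young equation (Eq. (\ref{mye2})):

```python
import math

# illustrative assumptions: constant tensions, no curvature corrections
sigma_ab = 0.05                            # sigma_{alpha beta} [N/m]
theta0 = 1.0                               # Young angle [rad]
dsigma = -sigma_ab * math.cos(theta0)      # sigma_{beta gamma} - sigma_{alpha gamma}
tau = 1e-8                                 # line tension [N]
V = 1e-12                                  # fixed volume of the beta phase [m^3]

def cap(theta):
    """Spherical cap of contact angle theta and volume V: returns the radius of
    curvature R, the contact-line radius r, and the alpha--beta area A."""
    f = (1 - math.cos(theta))**2 * (2 + math.cos(theta))
    R = (3.0 * V / (math.pi * f))**(1.0 / 3.0)
    return R, R * math.sin(theta), 2.0 * math.pi * R**2 * (1 - math.cos(theta))

def surface_line_energy(theta):
    # surface + line terms of Delta Omega for the drop; at fixed V_beta the
    # -Delta p V_beta term is an additive constant and can be dropped
    R, r, A = cap(theta)
    return sigma_ab * A + dsigma * math.pi * r**2 + 2.0 * math.pi * r * tau

# golden-section minimization over theta
lo, hi = 0.5, 1.5
g = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(200):
    x1, x2 = hi - g * (hi - lo), lo + g * (hi - lo)
    if surface_line_energy(x1) < surface_line_energy(x2):
        hi = x2
    else:
        lo = x1
theta_min = 0.5 * (lo + hi)

_, r_min, _ = cap(theta_min)
residual = math.cos(theta_min) - (math.cos(theta0) - tau / (sigma_ab * r_min))
print(theta_min, residual)
```

Within numerical accuracy the residual vanishes, confirming that for constant $\sigma_{\alpha \beta}$ and $\tau$ the fixed-volume variational procedure yields Eq. (\ref{mye2}); curvature-dependent tensions and notional shifts modify this picture, as discussed in the remainder of this section.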
The equations given above hold for the following two scenarios. In the first scenario, the lens or drop
can exchange matter with the surrounding phase so that the chemical potentials in all phases are
equal and the pressure in the $\beta$ phase is determined by the chemical potentials.
In the second scenario, nonvolatile lenses or drops are investigated and the volume of the lens or
drop is prescribed. In this case
the pressure in the $\beta$ phase is not an independent thermodynamic variable
but is determined by the amount of $\beta$ phase.
The term $\Omega_0$ (Eq. (\ref{lensdrop3})) is independent of the lens size
and of the choice of dividing interfaces. This holds also for the drop (Eq. (\ref{lensdrop3b}))
due to the transformation law given in Eq. (\ref{deltasigmasf}) which is valid if the $\alpha$--$\gamma$ and
$\beta$--$\gamma$ interfaces form a common plane, as is the case for the convention we have chosen. Since
$\Omega$ as a physical quantity and $\Omega_0$, as just argued, are independent of choices of dividing interfaces,
also $\Delta \Omega$ is independent of such choices. Thus in the following we can focus on $\Delta \Omega$.
In order to extract the value of $\tau$ from
a known, e.g., calculated $\Delta \Omega$, or to infer information about possible changes of $\tau$ upon shifts of the
dividing interfaces, at first it is necessary to specify all other quantities appearing in
Eqs. (\ref{lensdrop4}) and (\ref{lensdrop4b}).
The geometrical quantities like volumes, areas, and lengths are defined once the dividing interfaces are chosen.
The surface tension $\sigma_{\alpha \gamma}$ is that of a planar interface between phases $\alpha$ and
$\gamma$. If both phases $\alpha$ and $\gamma$ are fluid it is known (see, e.g., Ref. \cite{RowWid}) that $\sigma_{\alpha \gamma}$ is independent of the
position of the dividing surface between these phases. If one of the phases, e.g., $\gamma$, is an inert solid phase,
$\sigma_{\alpha \gamma}$ must be analyzed more closely; but we postpone this discussion to a later subsection
(cf. Subsec. 5.3.1) in which sessile drops are
discussed separately. The quantities $\sigma_{\alpha \beta }$ and $\sigma_{\beta \gamma}$ are
surface tensions of curved surfaces -- again the case of a drop requires a separate discussion --
and thus they depend on
the corresponding radii of curvature.
We indicate this dependence, writing $\sigma_{\alpha \beta }(R_1)$ and $\sigma_{\beta \gamma}(R_2)$, explicitly whenever it is
necessary in order to avoid confusion. As already discussed above we postulate that the interfacial
tensions appearing in Eqs. (\ref{lensdrop4}) and (\ref{lensdrop4b}) have the same properties as their spherically
closed counterparts, such as
\begin{equation}
\label{lensdrop5}
\sigma_{\alpha \beta }(R_1) = \sigma_{\alpha \beta }(\infty) \left( 1 - \frac{2\delta_{\alpha \beta }^{\mathrm T}}{R_1} + s.l.t.
\right) \quad,
\end{equation}
where $\delta_{\alpha \beta }^{\mathrm T}$ is the Tolman length for the $\alpha$--$\beta$ interface;
subleading terms (s.l.t.) in $1/R_1$ are not considered here. An analogous equation holds for
$\sigma_{\beta \gamma}(R_2)$. Although in the following these expressions will not be used explicitly, it will always be
understood that the surface tensions depend on the radius of curvature in this way
(concerning the physical radius, see the corresponding remarks below).
A related property of the pressure difference $\Delta p$ is described by the generalized Laplace equation \cite{RowWid,Kondo,Hend,Kalikman}
\begin{eqnarray}
\Delta p & = &\frac{2\sigma_{\alpha \beta }(R_1)}{R_1} +
\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_{1}} \right] \quad, \nonumber \\
& = & \frac{2\sigma_{\beta \gamma}(R_2)}{R_2} +
\left[\frac{\mathrm{d}\sigma_{\beta \gamma}}{\mathrm{d} R_{2}} \right] \quad,
\label{lensdrop6}
\end{eqnarray}
where the terms in square brackets are quantities termed notional derivatives by Rowlinson and Widom \cite{RowWid}.
Such a term multiplied by a small [d$R_{i}$] approximates the change in
surface tension upon increasing the radius of the dividing surface by that value [d$R_i$]
without changing the physical system, i.e., the density distributions and all
thermodynamic variables remain fixed whereas the
description of the system has been changed by shifting the dividing surface. In the following, square brackets will
always be used in order to characterize notional changes, e.g., [d$R_i$] denotes the
notional change of the radius whereas we would write d$R_i$ if we speak about a physical change of the
radius at a fixed convention of choosing the dividing surface (interface). The quantity
$\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_i} \right]$
depends on the choice of the dividing surface. The so-called surface of tension is defined as that dividing surface
for which $\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_{i}} \right]$ is zero.
For the equimolar dividing surface the notional derivative coincides with the derivative of the surface tension
with respect to the physical drop size, i.e., the derivative of Eq. (\ref{lensdrop5}). The second term on the rhs of Eq.
(\ref{lensdrop6}) renders the rhs invariant with respect
to changes of the dividing
surface. This must be the case because $\Delta p$ defined as the pressure difference between two bulk phases at given
thermodynamic conditions is a measurable and therefore invariant quantity.
\par
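The invariance of $\Delta p$ expressed by the generalized Laplace equation can be illustrated numerically (all values are illustrative assumptions; dividing surface (1) is taken to be the surface of tension, so that its notional derivative vanishes):

```python
sigma = 0.05      # surface tension at the surface of tension [N/m] (assumption)
R_st = 1e-6       # radius of the surface of tension [m]
dR = 1e-9         # notional shift [dR] of the dividing surface [m]

# at the surface of tension the notional derivative vanishes, so
dp_1 = 2.0 * sigma / R_st

# shifted dividing surface, R = R_st + dR: near the surface of tension the
# tension itself changes only at O(dR^2), while by Eq. (lensdrop7) the
# notional derivative becomes 2*sigma*dR/R^2 (+ s.l.t.)
R2 = R_st + dR
dp_naive = 2.0 * sigma / R2                       # forgetting the notional term
dp_2 = 2.0 * sigma / R2 + 2.0 * sigma * dR / R2**2

print(dp_1, dp_naive, dp_2)
```

Dropping the notional term produces a spurious relative change of $\Delta p$ of order $[\mathrm{d}R]/R \sim 10^{-3}$, whereas including it restores the invariance up to terms of order $([\mathrm{d}R]/R)^2$.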
A property that will also be used extensively in the following is the fact,
that $\sigma_{\xi \nu}(R_i)$ ($i \in \{1,2\}, \, \xi, \nu \in \{\alpha, \beta, \gamma\}$) is independent
of the particular choice of the dividing surface up to and including the order $1/R_i$. On some occasions we
shall use the relation
\begin{equation}
\label{lensdrop7}
\left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \right]^{(2)} - \left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i}
\right]^{(1)} = \frac{2\sigma_{\xi \nu }\left[\mathrm{d}R_i\right]}{R_i^2} + s.l.t.
\end{equation}
between notional derivatives of surface tensions for two differently chosen dividing interfaces.
The two dividing interfaces are
denoted by the superscripts (1) and (2) and their radii are related via $\left[R_i\right]^{(2)} = \left[R_i\right] ^{(1)} +
\left[\mathrm{d}R_i\right]$; $\left[\mathrm{d}R_i\right]$ could also be understood as a differential. It
could also be finite, but then we assume that it is small compared with
$R_i$. In Eq. (\ref{lensdrop7}) we write $R_i$
and do not introduce $\left[R_i\right]^{(1)}$ or $\left[R_i\right]^{(2)}$ because using one or the other convention
would lead to expressions which only differ in subleading terms which are neglected anyway. Equation (\ref{lensdrop7})
is valid if the values of both $\left[R_i\right]^{(1)}$ and $\left[R_i\right]^{(2)}$
are close to that of the radius corresponding
to the surface of tension and it is a direct consequence of well-known results obtained for spherical interfaces
(see, e.g., Refs. \cite{RowWid,Hend}). The neglected s.l.t. contain terms of the order $(1/R_i)^3$ or higher and
in addition they contain $\left[\mathrm{d}R_i\right]$ or differences between $\left[R_i\right]^{(i)}$ and
the radius corresponding to the surface of tension raised to the second or higher power.
Most of the following conclusions, however, do not make explicit use of Eq.
(\ref{lensdrop7}), but only of the fact that $\left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \right]^{(2)} -
\left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \right]^{(1)}$ is of higher order than $1/R_i$. We also use
(in the second part of our reasoning) the following relation -- known for spherical drops \cite{RowWid,Hend} --
between the stiffness against changes of the radius of
curvature at given thermodynamic conditions (i.e., fixed temperature and chemical potentials),
denoted here as $\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \vert$,
and the notional derivative of the surface tension:
\begin{equation}
\label{lensdrop8}
\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \Big\vert =
\left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \right] \quad.
\end{equation}
In the following we discuss lenses and drops. Although the arguments proceed similarly in both cases,
we discuss them separately
because we want to address certain special problems connected exclusively with the solid--fluid interfaces.
\subsection{The lens}
First, we investigate the consequences of the requirement that the grand canonical
potential of the total system is independent of particularly chosen dividing interfaces. Secondly, we derive --
from a variational procedure at fixed lens volume --
the equations yielding the contact angles as a function of the radius $r$ characterizing the lens size. The variational
problem is set up in a way which guarantees that the contact angles transform correctly upon notional shifts of the Gibbs dividing
interfaces. The results following from these two procedures are then compared. \\
\subsubsection{Notional variation of the grand canonical potential}
Notional variations (indicated by square brackets) leave the grand canonical potential
unchanged, which in ``differential'' form is written as
\begin{equation}
\label{lensdrop9}
\left[ \mathrm{d} \Delta \Omega \right] = 0
\end{equation}
with $\Delta \Omega$ defined in Eq. (\ref{lensdrop4}).
The left-hand side of Eq. (\ref{lensdrop9}) gives the notional change of $\Delta \Omega$ upon notional
shifts of the dividing interfaces away from some given -- but essentially arbitrary -- set of dividing
interfaces, in the limit of very small notional shifts, characterized by, e.g., $[\mathrm{d}R_i]$ ($i = 1,2$)
(see below).
In the above equation all notional
variations of geometrical quantities are then expressed in terms of the notional variations $[\mathrm{d}R_i]$ ($i = 1,2$) of
the two radii of curvature $R_1$ and $R_2$ such as, for example,
\begin{equation}
\label{lensdrop10}
\left[ \mathrm{d}V_{\beta} \right] = \left[ \frac{
\mathrm{d}V_{\beta} }{ \mathrm{d} R_1 } \right] \left[ \mathrm{d} R_1 \right] +
\left[ \frac{ \mathrm{d}V_{\beta} }{ \mathrm{d} R_2 }
\right] \left[ \mathrm{d} R_2 \right] \quad.
\end{equation}
Similar equations hold for the notional variations of interfacial
areas $\left[\mathrm{d} A_{\alpha \beta} \right]$,
$\left[\mathrm{d} A_{\beta \gamma} \right]$, $\left[\mathrm{d} ( \pi r^2 ) \right]$,
and for the length of the three-phase-contact line $\left[\mathrm{d} ( 2\pi r ) \right]$ as well as for the surface
and line tensions.
The notional derivatives $\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_1} \right]$ and
$\left[\frac{\mathrm{d}\sigma_{\beta \gamma }}{\mathrm{d} R_2} \right]$ of the surface tensions are -- as already stated -- taken
to be identical to those defined for completely spherical drops (see Eqs. (\ref{lensdrop6} - \ref{lensdrop8})).
In addition we demand that $\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_2} \right] = 0$ and
$\left[\frac{\mathrm{d}\sigma_{\beta \gamma }}{\mathrm{d} R_1} \right] = 0$ because a notional change of the radius $R_1$ of the
$\alpha$--$\beta$ interface should not lead to notional changes of
the $\beta$--$\gamma$ interfacial tension, i.e., of the interface on the opposite side of the lens, and vice versa (see Fig. 4). We further
introduce two notional derivatives of the line tension: $\left[\frac{\mathrm{d}\tau}{\mathrm{d} R_1} \right]$ and
$\left[\frac{\mathrm{d}\tau}{\mathrm{d} R_2} \right]$. These are actually defined by Eq. (\ref{lensdrop9})
because all other quantities in this equation are either given by geometry or fixed via the set of definitions given
above. Certain properties of these new quantities will follow from the analysis given below.
\\
The pressure difference $\Delta p$
is related to the surface tensions and their notional derivatives via Eq. (\ref{lensdrop6}).
Since $\Delta p$ is unique the right hand sides of these two equations in Eq. (\ref{lensdrop6})
must be equal.
(The validity of the Laplace equation for even small lenses has been checked by simulations, e.g., in Ref. \cite{Bres3}.)
After using the geometrical relations $R_1 = r/\sin \alpha$ and $R_2 = r/\sin \gamma$ we obtain a first
equation relating $\alpha$, $\gamma$ and $r$:
\begin{equation}
\label{lens1}
\sigma_{\alpha \beta} \sin \alpha + \frac{r}{2}
\left[ \frac{\mathrm{d} \sigma_{\alpha \beta }}{\mathrm{d} R_1} \right] = \sigma_{\beta \gamma} \sin \gamma + \frac{r}{2} \left[
\frac{\mathrm{d} \sigma_{\beta \gamma }}{\mathrm{d} R_2} \right] \quad.
\end{equation}
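To make this step explicit, assume that the generalized Laplace equation (\ref{lensdrop6}) has, for each interface, the form $\Delta p = \frac{2 \sigma_{\xi \nu}}{R_i} + \left[ \frac{\mathrm{d} \sigma_{\xi \nu}}{\mathrm{d} R_i} \right]$ (compare Eqs. (\ref{lens20}) and (\ref{lens21})). Equating the two expressions for $\Delta p$ and inserting $R_1 = r/\sin \alpha$ and $R_2 = r/\sin \gamma$ gives
\begin{equation*}
\frac{2 \sigma_{\alpha \beta} \sin \alpha}{r} + \left[ \frac{\mathrm{d} \sigma_{\alpha \beta }}{\mathrm{d} R_1} \right] =
\frac{2 \sigma_{\beta \gamma} \sin \gamma}{r} + \left[ \frac{\mathrm{d} \sigma_{\beta \gamma }}{\mathrm{d} R_2} \right] \quad ,
\end{equation*}
which upon multiplication by $r/2$ reduces to Eq. (\ref{lens1}).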
Because the notional changes $[\mathrm{d}
R_1]$ and $[\mathrm{d} R_2]$ are independent, the prefactors of $[\mathrm{d} R_1]$ and $[\mathrm{d} R_2]$, obtained after
expressing Eq. (\ref{lensdrop9}) in terms of these variables, must both vanish.
In this way two further equations are obtained leading to three
equations in total. On the other hand only two equations are required in order to determine the contact
angles for a given physical
lens size, because the geometric parameters $\alpha$, $\gamma$, and $r$ are already related through a prescribed volume of the $\beta$ phase.
Therefore the two aforementioned equations following from Eq. (\ref{lensdrop9}) have to be identical.
This leads to the following consistency condition:
\begin{equation}
\label{lens2}
\begin{split}
\cos \gamma \left[
\frac{\mathrm{d} \tau }{\mathrm{d} R_2} \right] - \cos \alpha
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_1} \right] = \frac{r}{2}
\left\{ \left[ \frac{\mathrm{d} \sigma_{\beta \gamma}}{\mathrm{d} R_2} \right] -
\left[ \frac{\mathrm{d} \sigma_{\alpha \beta
}}{\mathrm{d} R_1} \right] \right\} \, .
\end{split}
\end{equation}
The remaining third equation can be written in a symmetrized form,
\begin{multline}
\label{lens3}
\sigma_{\alpha \beta } \cos \alpha + \sigma_{\beta \gamma} \cos \gamma + \sigma_{\alpha \gamma }
= \frac{\tau}{r} \\
+ \frac{ r \left\{ \sin \gamma \cos \alpha - \sin \alpha \cos \gamma \right\} }{4 \cos
\alpha \cos \gamma } \left\{ \left[ \frac{\mathrm{d} \sigma_{\beta \gamma }}{\mathrm{d} R_2} \right] - \left[ \frac{\mathrm{d}
\sigma_{\alpha \beta }}{\mathrm{d} R_1} \right] \right\} \\
\hspace*{-0.10cm}
+ \frac{ \left\{ \sin \gamma \cos \alpha + \sin
\alpha \cos \gamma \right\} } { 2 \cos \alpha \cos \gamma}
\left\{ \cos \gamma \left[ \frac{\mathrm{d} \tau }{\mathrm{d}
R_2} \right] + \cos \alpha \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_1} \right] \right\} \hspace*{-0.10cm} ,
\\
\end{multline}
or, using the consistency condition given in Eq. (\ref{lens2}), it can be put into the following form:
\begin{multline}
\label{lens4}
\hspace*{-0.40cm}
\sigma_{\alpha \beta } \cos
\alpha + \sigma_{\beta \gamma} \cos \gamma + \sigma_{\alpha \gamma } = \frac{\tau}{r} + \sin \gamma
\hspace*{-0.10cm} \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_2} \right] +
\sin \alpha \hspace*{-0.10cm} \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_1} \right] \hspace*{-0.10cm}.
\\
\end{multline}
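The equivalence of Eqs. (\ref{lens3}) and (\ref{lens4}) under the consistency condition (\ref{lens2}) can be verified numerically. The following sketch uses arbitrary illustrative values for the tensions and notional derivatives (placeholders, not taken from any model); since both equations share the same left hand side, it suffices to compare the right hand sides.

```python
import math

# arbitrary illustrative values (placeholders, not from any model)
r, alpha, gamma = 1.5, 0.6, 0.9
tau = 0.02
D1, D2 = 0.004, 0.007   # [d sigma_{alpha beta}/d R_1], [d sigma_{beta gamma}/d R_2]
T1 = 0.003              # [d tau/d R_1]
ca, cg = math.cos(alpha), math.cos(gamma)
sa, sg = math.sin(alpha), math.sin(gamma)
# choose [d tau/d R_2] such that the consistency condition Eq. (lens2) holds
T2 = (ca * T1 + (r / 2) * (D2 - D1)) / cg

# right hand side of Eq. (lens3)
rhs_lens3 = (tau / r
             + r * (sg * ca - sa * cg) / (4 * ca * cg) * (D2 - D1)
             + (sg * ca + sa * cg) / (2 * ca * cg) * (cg * T2 + ca * T1))
# right hand side of Eq. (lens4)
rhs_lens4 = tau / r + sg * T2 + sa * T1

assert abs(rhs_lens3 - rhs_lens4) < 1e-12
```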
Since the contact angles $\alpha$ and $ \gamma$ are natural variables characterizing the
three-phase-contact region we express notional
derivatives of $\tau$ with respect to the radii in terms of notional derivatives with respect to the contact angles:
$\left[\frac{\mathrm{d} \tau }{\mathrm{d} R_i} \right] =
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} \alpha} \right]
\left[ \frac{\mathrm{d}
\alpha }{\mathrm{d} R_i } \right] + \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \gamma} \right]
\left[ \frac{\mathrm{d} \gamma
}{\mathrm{d} R_i } \right]$,
where
$\left[ \frac{\mathrm{d} \alpha }{\mathrm{d} R_i } \right]$ and $\left[ \frac{\mathrm{d} \gamma
}{\mathrm{d} R_i } \right]$
describe notional changes of $\alpha$ and $\gamma$, respectively, upon notional changes of $R_i$.
For given centers $M_1$ and $M_2$, the descriptions in terms of the pairs ($\alpha$, $\gamma$) or ($R_1$,$R_2$) are
equivalent. However, we do not transform the notional derivatives of the surface tensions,
because the radii of curvature $R_i$ are the natural variables
of the curved interfaces. In the new variables Eqs. (\ref{lens2}) and (\ref{lens4}) take the form
\begin{equation}
\label{lens5}
\begin{split}
\left[ \frac{\mathrm{d} \tau
}{\mathrm{d} \alpha} \right] \sin ^2 \alpha - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \gamma} \right] \sin ^2 \gamma
= \frac{r^2}{2}
\left\{ \left[ \frac{\mathrm{d} \sigma_{\beta \gamma}}{\mathrm{d} R_2} \right] - \left[ \frac{\mathrm{d} \sigma_{\alpha \beta
}}{\mathrm{d} R_1} \right] \right\}
\end{split}
\end{equation}
and
\begin{eqnarray}
\sigma_{\alpha \beta } \cos \alpha + \sigma_{\beta
\gamma} \cos \gamma + \sigma_{\alpha \gamma } & = & \frac{\tau}{r} + \frac{\sin \alpha \cos \alpha}{r} \left[ \frac{\mathrm{d} \tau
}{\mathrm{d} \alpha } \right]
\nonumber \\
+ \frac{\sin \gamma \cos \gamma}{r} \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \gamma}
\right] \quad. & &
\label{lens6}
\end{eqnarray}
\par
The next question is whether the line tension $\tau$ depends on the choice of the dividing interfaces. In
order to answer it we compare $\Delta \Omega$ (Eq. (\ref{lensdrop4})) evaluated for two different dividing interfaces
and find {\it $\tau$ to be independent of the choices of the dividing interfaces within the leading order.}
More precisely this statement says that notional shifts of the dividing interfaces change $\tau$ only by contributions which
decrease as $1/r$ with increasing $r$ or faster.
(We recall our remarks after Eqs. (\ref{lensdrop1}) and (\ref{lensdrop1b})
saying that $\tau$ as defined by Eqs. (\ref{lensdrop1}) and (\ref{lensdrop1b}) may contain subleading terms.)
The {\em leading} contribution to $\tau$, which is independent of $r$, is independent of
the choices of the dividing interfaces. Although the subleading contributions to $\tau$ itself
may be neglected in the term $\tau/r$ in Eq. (\ref{lens4}) if we want to keep only terms up to the order $1/r$,
it is not permissible to neglect the terms containing the notional
derivatives $\left[\frac{\mathrm{d} \tau }{\mathrm{d} R_i} \right]$. If these derivatives are of the order $1/r$
-- in fact we shall see below that notional shifts of the dividing interfaces lead to changes in
$\left[\frac{\mathrm{d} \tau }{\mathrm{d} R_i} \right]$ which are of that order -- the pertinent
terms in Eq. (\ref{lens4}) are of the order of $1/r$, i.e., they are of the same order as
$\tau/r$ with only the leading
term in $\tau$ kept.
In proving the aforementioned results concerning the behavior of $\tau$ under notional shifts of the dividing interfaces,
we have used the fact that surface tensions are independent of the
chosen dividing interfaces up to order $1/R$. Furthermore we have used the generalized Laplace equation
(\ref{lensdrop6}) and also the fact that the difference
$\left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i} \right]^{(2)} - \left[\frac{\mathrm{d}\sigma_{\xi \nu }}{\mathrm{d} R_i}
\right]^{(1)}$ is a correction of higher order than we are interested in. Moreover, the contact angles and other
geometrical quantities for a choice (2) for the dividing interfaces have to be expressed in terms of the
corresponding quantities characterizing choice (1).
This has been carried out up to the order giving rise to contributions of the order of the line-tension contribution.
\\
Next we
investigate the transformation behavior of the quantities
$\left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_i} \right]$ or $\left[
\frac{\mathrm{d} \tau }{\mathrm{d} \alpha } \right]$ and $ \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \gamma} \right]$ due to
notional shifts of the dividing interfaces. For this purpose we consider the equations for the contact angles
(Eqs. (\ref{lens1} - \ref{lens6}))
for two different sets (1) and (2) of dividing interfaces. The geometrical quantities,
say $\alpha ^{(2)}$, $\gamma ^{(2)}$,
and $r ^{(2)}$, in equations valid for the set (2) are then expressed in terms of $\alpha ^{(1)}$, $\gamma ^{(1)}$, and $r
^{(1)}$ up to the required order. The same is done for the surface tensions and their notional derivatives. We also use
the above result that $\tau$ is independent of the choice of dividing interfaces to leading order. From comparison of the two
systems of equations (Eqs. (\ref{lens1}), (\ref{lens2}), and (\ref{lens4}))
and the fact that both systems must yield the same relations between
$\alpha ^{(1)}$, $\gamma ^{(1)}$, and $r ^{(1)}$ for a fixed physical system, by using Eq. (\ref{lensdrop7})
we obtain the following transformation rules:
\begin{widetext}
\begin{eqnarray}
\left[ \frac{\mathrm{d} \tau
}{\mathrm{d} R_1} \right]^{(2)} - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_1} \right]^{(1)} &
= & - \frac{\sigma_{\alpha \beta} \sin
\alpha \cos (\alpha + \gamma)}{r \sin (\alpha + \gamma)} \left[\mathrm{d} R_1 \right]
- \frac{\sigma_{\beta \gamma} \sin \gamma
}{r \sin (\alpha + \gamma)} \left[\mathrm{d} R_2 \right] \/ , \nonumber \\
\left[
\frac{\mathrm{d} \tau }{\mathrm{d} R_2} \right]^{(2)} - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R_2} \right]^{(1)} & = & -
\frac{\sigma_{\alpha \beta} \sin \alpha }{r \sin (\alpha + \gamma)} \left[\mathrm{d} R_1 \right]
- \frac{\sigma_{\beta \gamma} \sin
\gamma \cos (\alpha + \gamma)}{r \sin (\alpha + \gamma)} \left[\mathrm{d} R_2 \right] \/ . \nonumber \\
& &
\label{lens7}
\end{eqnarray}
\end{widetext}
Alternatively, this may be translated into the notional derivatives of $\tau$ with respect to the contact
angles:
\begin{eqnarray}
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} \alpha} \right]^{(2)} - \left[ \frac{\mathrm{d} \tau
}{\mathrm{d} \alpha} \right]^{(1)} & = & - \sigma_{\alpha \beta} \left[\mathrm{d} R_1 \right] \/ , \nonumber \\
\left[ \frac{\mathrm{d} \tau
}{\mathrm{d} \gamma} \right]^{(2)} - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \gamma} \right]^{(1)} & =
& - \sigma_{\beta \gamma}
\left[\mathrm{d} R_2 \right] \quad.
\label{lens8}
\end{eqnarray}
Equations (\ref{lens7}) and (\ref{lens8}) show that the notional derivatives of $\tau$ depend on where the dividing
interfaces are located. This is completely analogous to what is known for the notional derivative of the
surface tension for closed spherical interfaces.
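As a consistency check, the transformation rules of Eq. (\ref{lens8}) are compatible with Eq. (\ref{lens5}) once the notional derivatives of the surface tensions are transformed according to Eq. (\ref{lensdrop7}), expressed in the lens variables in Eqs. (\ref{lens14}) and (\ref{lens15}) below. A minimal numerical sketch (arbitrary illustrative values, not taken from any model):

```python
import math

# arbitrary illustrative values (placeholders, not from any model)
r, alpha, gamma = 1.5, 0.6, 0.9
sigma_ab, sigma_bg = 0.05, 0.04        # sigma_{alpha beta}, sigma_{beta gamma}
dR1, dR2 = 1.0e-3, -2.0e-3             # notional shifts [dR_1], [dR_2]
sa, sg = math.sin(alpha), math.sin(gamma)

# transformation of [d sigma/d R_i] between two conventions, Eqs. (lens14), (lens15)
dD1 = 2.0 * sigma_ab * sa**2 * dR1 / r**2
dD2 = 2.0 * sigma_bg * sg**2 * dR2 / r**2
# transformation of [d tau/d alpha] and [d tau/d gamma], Eq. (lens8)
dTa = -sigma_ab * dR1
dTg = -sigma_bg * dR2

# Eq. (lens5) written for both conventions and subtracted must balance:
assert abs(dTa * sa**2 - dTg * sg**2 - (r**2 / 2.0) * (dD2 - dD1)) < 1e-15
```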
\subsubsection{Variational treatment with constraint of fixed volume}
A common procedure aimed at obtaining equations for the contact angles consists of minimization of the grand
canonical potential $\Delta \tilde{\Omega}$ with the constraint of fixed volume $V_{\beta}$ (compare Eq. (\ref{lensdrop4})):
\begin{eqnarray}
\Delta \tilde{\Omega} & = & A_{\alpha \beta } \sigma_{\alpha \beta }(R_1) + A_{ \beta \gamma} \sigma_{ \beta
\gamma}(R_2) -\pi r^2 \sigma_{\alpha \gamma}
\nonumber\\
& & + 2 \pi r \tau (\alpha,\gamma ,r) + \lambda V_{\beta} \quad.
\label{lens9}
\end{eqnarray}
In Eq. (\ref{lens9}) no bulk contributions appear because the volume $V_{\beta}$, the bulk pressures $p_{\alpha}$
and $p_{\beta}$, and thus the pressure difference $\Delta p = p_{\beta} - p_{\alpha} $ are kept fixed due to the fixed
thermodynamic conditions specified by the temperature and the chemical potentials. The fixed volume condition is
implemented via the last term in
Eq. (\ref{lens9}) containing the Lagrange multiplier $\lambda$. The volume $V_{\beta}$ is the one enclosed
by the dividing interfaces. The independent variables of the variational problem are $\alpha$,
$\gamma$, and $r$ ($\beta$ is determined via $\alpha + \gamma + \beta = 2\pi$). In order to calculate
the variation $\mathrm{d}\Delta \tilde{\Omega}$
resulting from the variations of $\alpha$, $\gamma$, and $r$ we have to introduce -- in addition to
the interfacial and the line tensions -- new material parameters. These are the stiffness constants
$\frac{\mathrm{d}\sigma_{\alpha
\beta}}{\mathrm{d} R_1} \vert \hspace{0.2cm} \mathrm{and} \hspace{0.2cm} \frac{\mathrm{d}\sigma_{\beta \gamma}}{\mathrm{d} R_2}
\vert $
of the interfaces
describing the cost in interfacial free energy resulting from changes in the radii of curvature as well as
the stiffnesses
$ \frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \vert \hspace{0.1cm},
\hspace{0.1cm} \frac{\mathrm{d}\tau}{\mathrm{d} \gamma} \vert , \hspace{0.1cm} \mathrm{and} \hspace{0.1cm}
\frac{\mathrm{d}\tau}{\mathrm{d} r} \vert $
of the three-phase-contact line against changes of the contact angles
$\alpha$ and $\gamma$, and the radius of curvature $r$, respectively.
To be explicit: in the variation we also introduce contributions such as
$A_{\alpha \beta } \frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d} R_1} \vert
\frac{\partial R_1}{\partial r} \mathrm{d}r$ describing the cost in free energy
associated with a variation of the
curvature radius $R_1$, caused by a change of $r$ by $\mathrm{d}r$, without changing the interfacial area, or
$2 \pi r \frac{\mathrm{d}\tau}{\mathrm{d} r} \vert \, \mathrm{d}r$, etc.
The symbol $\vert$ indicates that the stiffnesses are given by the respective
derivatives evaluated {\em at fixed thermodynamic conditions.}
\\
The stiffness constants of the interfaces, e.g., $\frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d}
R_1} \vert$ should not be confused with the derivatives of surface
tensions with respect to the physical radius. For example, $\frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d} R_1}\vert$
does not coincide with the derivative of $\sigma_{\alpha\beta}$ given in Eq. (\ref{lensdrop5}) with
respect to $R_1$ because drops of different physical sizes in an unstable or constrained equilibrium with their environment
do not correspond to the same thermodynamic conditions given by temperature and chemical potentials.
The stiffness constants of the interfaces may be expressed via Eq. (\ref{lensdrop8}) in terms of the
notional derivatives. This relation shows that the stiffness constant of an
interface vanishes if the so-called surface of tension is chosen as the Gibbs dividing surface.
In the same way the stiffness constants attributed to the contact line should not be confused
with similarly looking derivatives with respect to an implicit dependence of the line tension
on contact angles. These implicit dependences reflect changes in the thermodynamic conditions.
The meaning of the stiffness constants is completely different.
In setting up the variational principle we explore constrained equilibrium
configurations in the neighbourhood of {\it the} equilibrium configuration in order to find the
equilibrium by minimizing the free energy at fixed thermodynamic conditions. Surface tensions,
the line tension, and the various stiffness constants describe the costs in free energy due
to virtual displacements of interfaces away from their equilibrium shape.
(Operational procedures for determining stiffness constants theoretically will be discussed
in the summary.)
\\
The equation
\begin{equation}
\label{lens10}
\mathrm{d}
\Delta \tilde{\Omega} = \frac{\mathrm{d}\Delta \tilde{\Omega}}{\mathrm{d} r} \mathrm{d}r +
\frac{\mathrm{d}\Delta \tilde{\Omega}}{\mathrm{d} \alpha} \mathrm{d}\alpha +
\frac{\mathrm{d}\Delta \tilde{\Omega}}{\mathrm{d}
\gamma} \mathrm{d} \gamma = 0
\end{equation}
leads to three equations because $\alpha$, $\gamma$, and $r$ can be varied
independently. They have the following form:
\begin{widetext}
\begin{eqnarray}
\hspace*{-0.60cm}
\lambda & = & - \frac{ 2 \sigma_{\alpha \beta} \sin \alpha }{r} - \frac{\mathrm{d}\sigma_{\alpha
\beta}}{\mathrm{d} R_1} \Big\vert + \frac{ ( 1 - \cos \alpha )}{ ( 1 + \cos \alpha )}\left\{
\frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d} R_1} \Big\vert + \frac{ 2 \sin ^2 \alpha }{ r^2}
\frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \Big\vert \right\} \, , \,
\label{lens11}
\end{eqnarray}
\begin{eqnarray}
\hspace*{-0.60cm}
\lambda & = & - \frac{
2 \sigma_{\beta \gamma} \sin \gamma }{r} - \frac{\mathrm{d}\sigma_{\beta \gamma}}{\mathrm{d} R_2} \Big\vert +
\frac{ ( 1 - \cos \gamma )}{ ( 1 + \cos \gamma )}\left\{ \frac{\mathrm{d}\sigma_{\beta \gamma}}{\mathrm{d} R_2} \Big\vert
+ \frac{ 2 \sin ^2 \gamma }{ r^2} \frac{\mathrm{d}\tau}{\mathrm{d} \gamma} \Big\vert \right\} \, , \,
\label{lens12}
\end{eqnarray}
and
\begin{multline}
\label{lens13}
\sigma_{\alpha \beta} \cos \alpha + \sigma_{\beta \gamma} \cos \gamma +
\sigma_{\alpha \gamma} = \frac{\tau}{r} + \frac{\mathrm{d} \tau}{\mathrm{d} r} \Big\vert + \frac{\cos \alpha \sin \alpha}{r}
\frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \Big\vert + \frac{\cos \gamma \sin \gamma}{r} \frac{\mathrm{d} \tau}{\mathrm{d}
\gamma} \Big\vert \\
+ \frac{r(1 - \cos \alpha ) }{\sin \alpha }\left\{ \frac{\mathrm{d}\sigma_{\alpha
\beta}}{\mathrm{d} R_1} \Big\vert + \frac{ 2 \sin ^2 \alpha }{ r^2} \frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \Big\vert
\right\} \\
+ \frac{r(1 - \cos \gamma ) }{\sin \gamma }\left\{ \frac{\mathrm{d}\sigma_{\beta \gamma}}{\mathrm{d}
R_2} \Big\vert + \frac{ 2 \sin ^2 \gamma }{ r^2} \frac{\mathrm{d}\tau}{\mathrm{d} \gamma} \Big\vert \right\} \quad.
\end{multline}
\end{widetext}
The ensuing equality of the right hand sides of Eqs. (\ref{lens11}) and (\ref{lens12}) leads to one of two equations
relating the variables
$\alpha$, $\gamma$, and $r$. The second relation is provided by Eq. (\ref{lens13}). Finally, a third equation expresses
the fixed volume $V_{\beta}$ of the lens,
\begin{eqnarray}
V_{\beta} & = & \frac{1}{3}\pi r^3 \biggl \{ \frac{ (1 + \cos \alpha)(2- \cos \alpha) }{ (1 - \cos \alpha) \sin \alpha }
\nonumber\\
& & + \frac{ (1 + \cos \gamma)(2- \cos \gamma) }{ (1 - \cos \gamma) \sin \gamma } \biggr \}
\label{lens13-Z}
\end{eqnarray}
and thus finally determines $\alpha$, $\gamma$, $r$, and the Lagrange parameter $\lambda$ via
Eqs. (\ref{lens11}) or (\ref{lens12}).
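Equation (\ref{lens13-Z}) can be cross-checked against elementary spherical-cap geometry: for the angle convention used here, the closed form corresponds to two caps of height $R_1 (1 + \cos \alpha)$ and $R_2 (1 + \cos \gamma)$ on the spheres with $R_1 = r/\sin \alpha$ and $R_2 = r/\sin \gamma$. A numerical sketch (illustrative values only):

```python
import math

def cap_volume(R, h):
    # volume of a spherical cap of height h cut from a sphere of radius R
    return math.pi * h * h * (3.0 * R - h) / 3.0

def lens_volume_closed_form(r, alpha, gamma):
    # Eq. (lens13-Z)
    def term(t):
        return (1.0 + math.cos(t)) * (2.0 - math.cos(t)) / ((1.0 - math.cos(t)) * math.sin(t))
    return math.pi * r**3 * (term(alpha) + term(gamma)) / 3.0

def lens_volume_from_caps(r, alpha, gamma):
    # two caps of height R_i (1 + cos(angle)) on spheres R_1 = r/sin(alpha), R_2 = r/sin(gamma)
    R1, R2 = r / math.sin(alpha), r / math.sin(gamma)
    return (cap_volume(R1, R1 * (1.0 + math.cos(alpha)))
            + cap_volume(R2, R2 * (1.0 + math.cos(gamma))))

assert abs(lens_volume_closed_form(1.3, 0.7, 1.1) - lens_volume_from_caps(1.3, 0.7, 1.1)) < 1e-9
```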
\par
We now compare two such sets of equations, each for a different choice of the dividing interfaces,
in order to relate the stiffnesses attributed to the line for the two choices.
Since the physical object is kept
fixed, the relations between one set of variables ($\alpha^{(2)}$, $\gamma^{(2)}$, $r^{(2)}$) and another set
($\alpha^{(1)}$, $\gamma^{(1)}$, $r^{(1)}$) follow from the definition of the lens by two intersecting
spheres $(M_1, R_1^{(i)})$ and $(M_2, R_2^{(i)})$ as described in Sec. 5 and simple geometrical
considerations.
The volume $V_{\beta}$ attributed to the lens
depends on the choice of the dividing interfaces, i.e., two different volumes $V_{\beta}^{(2)} \ne V_{\beta}^{(1)}$
are assigned to the same physical object.
The relation between $V_{\beta}^{(2)}$ and $V_{\beta}^{(1)}$ is known once the notional changes
of the radii $R_1$ and $R_2$, fixing the relative positions of the dividing interfaces, are given.
Therefore Eq. (\ref{lens13-Z}) does not contain any new information that could be used in order
to relate the stiffnesses which have to be attributed to the line for two different choices for the
dividing interfaces. Thus in the following, we have to consider only two pairs of equations:
first, the equation resulting from the equality of the right hand sides of Eqs. (\ref{lens11}) and (\ref{lens12}),
and secondly Eq. (\ref{lens13}), one pair for each choice of the dividing interfaces.
The parameters $\alpha^{(2)}$, $\gamma^{(2)}$, and $r^{(2)}$ can be expressed in terms of
$\alpha^{(1)}$, $\gamma^{(1)}$, and
$r^{(1)}$, and of the notional changes of the radii $\left[ \mathrm{d} R_i \right] = R_i^{(2)} - R_i^{(1)}$,
$i = 1,2$.
After expressing the pair of equations for convention
(2) in terms of $\alpha^{(1)}$, $\gamma^{(1)}$, and $r^{(1)}$ up to the order of the line tension term,
the resulting equations may be
compared directly with the pair of equations obtained if convention (1) was chosen from the beginning.
It is clear that the two pairs of
equations must be equivalent since they describe the same physical system in terms of the same variables.
In that comparison we further use
the relation (\ref{lensdrop8}) between the stiffnesses of the interfaces
against changes of the radii of curvature and
the related notional derivatives. In addition we use that the mentioned notional
derivatives written for two different dividing
interfaces are related via Eq. (\ref{lensdrop7}) which in the chosen variables read
(up to the relevant order it is not necessary to
distinguish between $\alpha^{(1)}$ and $\alpha^{(2)}$ etc. and thus on the right hand side
we omit the superscripts distinguishing between these
conventions):
\begin{equation}
\label{lens14}
\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_1} \right]^{(2)} -
\left[\frac{\mathrm{d}\sigma_{\alpha \beta }}{\mathrm{d} R_1} \right]^{(1)} = \frac{2\sigma_{\alpha \beta
}\sin^2\alpha\left[\mathrm{d}R_1\right]}{r^2} + s.l.t.
\end{equation}
and
\begin{equation}
\label{lens15}
\left[\frac{\mathrm{d}\sigma_{\beta \gamma }}{\mathrm{d} R_2} \right]^{(2)} - \left[\frac{\mathrm{d}\sigma_{\beta \gamma
}}{\mathrm{d} R_2} \right]^{(1)} = \frac{2\sigma_{\beta \gamma }\sin^2\gamma\left[\mathrm{d}R_2\right]}{r^2} +
s.l.t. \quad.
\end{equation}
As a result of the comparison and the requirement of equivalence of the two pairs of
equations discussed above
(rhs of Eq. (\ref{lens11}) equals rhs of Eq. (\ref{lens12}), and Eq. (\ref{lens13}) )
we obtain two coupled equations for the following three quantities:
$\left( \frac{\mathrm{d}\tau}{\mathrm{d}
\alpha} \vert^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \vert^{(1)} \right)$, $\left( \frac{\mathrm{d}\tau}{\mathrm{d}
\gamma} \vert^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} \gamma} \vert^{(1)} \right)$, and $\left(
\frac{\mathrm{d}\tau}{\mathrm{d} r} \vert^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} r} \vert^{(1)} \right)$.
These equations have a manifold of solutions. If we pick a particular solution with
\begin{equation}
\label{lens16}
\frac{\mathrm{d}\tau}{\mathrm{d} r} \big\vert^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} r} \big\vert^{(1)} =
0
\end{equation}
the two remaining quantities are given by
\begin{equation}
\label{lens17}
\begin{array}{l c l r }
\frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \vert^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \vert^{(1)} & = &
- \sigma _{\alpha \beta} \left[\mathrm{d}R_1 \right] & \quad \\
& = & \frac{ - r^2}{2\sin^2 \alpha}\left( \left[ \frac{\mathrm{d}
\sigma _{\alpha \beta}}{ \mathrm{d}R_1} \right]^{(2)} - \left[ \frac{\mathrm{d} \sigma _{\alpha \beta}} { \mathrm{d}R_1}
\right]^{(1)} \right) & \\
\frac{\mathrm{d}\tau}{\mathrm{d}
\gamma} \vert^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} \gamma} \vert^{(1)} & = & - \sigma _{\beta \gamma}
\left[\mathrm{d}R_2 \right] &
\\
& = & \frac{ - r^2}{2\sin^2 \gamma}\left( \left[ \frac{\mathrm{d}
\sigma _{\beta \gamma}}{ \mathrm{d}R_2 } \right]^{(2)} - \left[ \frac{\mathrm{d} \sigma _{\beta \gamma}}{ \mathrm{d}R_2
}\right]^{(1)} \right) & \quad.
\end{array}
\end{equation}
For the special
choice in Eq. (\ref{lens16}) leading to Eq. (\ref{lens17}) the Lagrange multiplier $\lambda$
(see Eqs. (\ref{lens11}) and (\ref{lens12})) becomes
independent of the chosen dividing interfaces and therefore it allows for a physical interpretation.
The conditions (\ref{lens17})
are obviously fulfilled if the following relations between stiffness constants of the line and those of
the interfaces hold:
\begin{equation}
\label{lens18}
\frac{\mathrm{d}\tau}{\mathrm{d} \alpha} \Big\vert = \frac{ -
r^2}{2\sin^2 \alpha} \frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R_1} \Big\vert =
\frac{ - r^2}{2\sin^2 \alpha}
\left[ \frac{\mathrm{d} \sigma _{\alpha \beta}} { \mathrm{d}R_1} \right]
\end{equation}
and
\begin{equation}
\label{lens19}
\frac{\mathrm{d}\tau}{\mathrm{d} \gamma} \Big\vert = \frac{ - r^2}{2\sin^2 \gamma} \frac{\mathrm{d} \sigma _{\beta
\gamma}}{ \mathrm{d}R_2} \Big\vert = \frac{ - r^2}{2\sin^2 \gamma} \left[ \frac{\mathrm{d} \sigma _{\beta \gamma}} {
\mathrm{d}R_2} \right] \quad .
\end{equation}
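With the choice of Eq. (\ref{lens18}) the curly bracket in Eq. (\ref{lens11}) vanishes identically, so that $\lambda$ reduces to $-\frac{2 \sigma_{\alpha \beta} \sin \alpha}{r} - \frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d} R_1} \vert$. A numerical sketch of this cancellation (arbitrary illustrative values, not taken from any model):

```python
import math

# arbitrary illustrative values (placeholders, not from any model)
r, alpha = 1.2, 0.8
sigma_ab = 0.05       # sigma_{alpha beta}
dsig_dR1 = 0.013      # stiffness d sigma_{alpha beta}/d R_1 at fixed thermodynamic conditions
s, c = math.sin(alpha), math.cos(alpha)

# the choice of Eq. (lens18)
dtau_dalpha = -r**2 / (2.0 * s**2) * dsig_dR1

# right hand side of Eq. (lens11)
lam = (-2.0 * sigma_ab * s / r - dsig_dR1
       + (1.0 - c) / (1.0 + c) * (dsig_dR1 + 2.0 * s**2 / r**2 * dtau_dalpha))

# the curly bracket cancels, leaving the generalized Laplace form with R_1 = r/sin(alpha)
R1 = r / s
assert abs(lam - (-2.0 * sigma_ab / R1 - dsig_dR1)) < 1e-12
```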
Additional terms which are independent of the choice of dividing interfaces could be added on the
right hand sides of Eqs. (\ref{lens18}) and (\ref{lens19}) without violating the conditions (\ref{lens17}).
However, keeping such terms would not lead to more general results, but to equations which are more complicated
than those presented below. As we shall see, the special choices of the right hand sides of
Eqs. (\ref{lens18}) and (\ref{lens19}) lead to equilibrium conditions for the contact angles which
agree with Eqs. (\ref{lens1}) and (\ref{lens6}) derived above from the
invariance of $\Delta \Omega$ under notional shifts. This agreement is achieved for the particular
choice
$\frac{\mathrm{d}\tau}{\mathrm{d} r} \vert ^{(2)} - \frac{\mathrm{d}\tau}{\mathrm{d} r} \vert ^{(1)} = 0$,
but $\frac{\mathrm{d}\tau}{\mathrm{d} r} \vert$ is still left undetermined by consistency requirements alone.
Insertion of Eqs. (\ref{lens18}) and (\ref{lens19}) into Eqs. (\ref{lens11}) and (\ref{lens12}) leads to
\begin{equation}
\label{lens20}
\lambda
= - \frac{ 2 \sigma _{\alpha \beta} \sin \alpha }{r} -
\frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R_1} \Big\vert =
- \frac{ 2 \sigma _{\alpha \beta}}{R_1} - \left[ \frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R_1} \right ]
\end{equation}
and
\begin{equation}
\label{lens21} \lambda = - \frac{ 2 \sigma _{\beta \gamma}}{R_2} - \left[
\frac{\mathrm{d} \sigma _{\beta \gamma}}{ \mathrm{d}R_2} \right ] \quad.
\end{equation}
In other words, $\lambda$ equals minus
the Laplace pressure expressed via the generalized Laplace equation (compare Eq. (\ref{lensdrop6}))
which supports the choice made in Eqs. (\ref{lens18}) and (\ref{lens19})
above. Accordingly, the equations relating the contact angles to the radius $r$ read
(recall that this holds only for choices which are in accordance with Eq. (\ref{lens16}) as well as
Eqs. (\ref{lens18}) and (\ref{lens19})):
\begin{equation}
\label{lens22}
\sigma _{\alpha
\beta} \sin \alpha + \frac{r}{2} \left[ \frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R_1} \right ]
= \sigma _{\beta
\gamma} \sin \gamma
+ \frac{r}{2} \left[ \frac{\mathrm{d} \sigma _{\beta \gamma}}{ \mathrm{d}R_2} \right ]
\end{equation}
and
\begin{eqnarray}
& & \sigma _{\alpha \beta} \cos \alpha + \sigma _{\beta \gamma} \cos \gamma + \sigma _{\alpha
\gamma} = \frac{\tau}{r} + \frac{\mathrm{d} \tau}{\mathrm{d} r} \Big\vert
\nonumber\\
& & + \frac{\sin \alpha \cos \alpha }{r}
\frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \Big\vert
+ \frac{\sin \gamma \cos \gamma }{r} \frac{\mathrm{d} \tau}{\mathrm{d}
\gamma} \Big\vert .
\label{lens23}
\end{eqnarray}
Comparison of Eq. (\ref{lens23}) with Eq. (\ref{lens6}) (Eqs. (\ref{lens22}) and (\ref{lens1})
are identical anyway) yields the relation
\begin{multline}
\label{lens24}
\frac{\mathrm{d} \tau}{\mathrm{d} r}
\Big\vert + \frac{\sin \alpha \cos \alpha }{r} \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \Big\vert + \frac{\sin \gamma \cos
\gamma }{r} \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \Big\vert =
\\
\frac{\sin \alpha \cos \alpha }{r} \left [ \frac{\mathrm{d}
\tau}{\mathrm{d} \alpha} \right ] + \frac{\sin \gamma \cos \gamma }{r}
\left [ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \right ] \quad.
\end{multline}
Adopting this relation for the surface of tension ${(\mathrm{s})}$ and using the relations
in Eqs. (\ref{lens18}), (\ref{lens19}), and
(\ref{lens16}) and noting further that
$\left [ \frac{\mathrm{d} \sigma _{\alpha \beta }}{\mathrm{d} R_1} \right ]^{(\mathrm{s})} = 0 =
\left [ \frac{\mathrm{d} \sigma _{\beta \gamma }}{\mathrm{d} R_2} \right ]^{(\mathrm{s})} $
and thus $\frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \vert ^{(\mathrm{s})} = 0 =
\frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \vert ^{(\mathrm{s})}$ we find
(note that because of Eq. (\ref{lens16}) $\frac{\mathrm{d} \tau}{\mathrm{d} r} \vert$
is independent of the choice of the dividing interfaces)
\begin{equation}
\label{lens25}
\begin{split}
\frac{\mathrm{d} \tau}{\mathrm{d} r} \Big\vert = \frac{\mathrm{d} \tau}{\mathrm{d} r} \Big\vert
^{(\mathrm{s})} = \frac{\sin \alpha \cos \alpha }{r} \left [ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \right ] ^{(\mathrm{s})} +
\frac{\sin \gamma \cos \gamma }{r} \left [ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \right ]
^{(\mathrm{s})} \hspace*{-0.20cm}.
\end{split}
\end{equation}
On the rhs of Eq. (\ref{lens25}) superscripts $^{(\mathrm{s})}$ are omitted for $\alpha$, $\gamma$, and $r$
because differences
between the values of $\alpha$, $\gamma$, and $r$ for different dividing interfaces
give rise only to higher order corrections.
Reinserting Eq. (\ref{lens25}) into Eq. (\ref{lens24}) we further find
\begin{equation}
\label{lens26}
\frac{\mathrm{d} \tau}{\mathrm{d} \alpha } \Big\vert = \left[ \frac{\mathrm{d}
\tau}{\mathrm{d} \alpha } \right] - \left[ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha } \right] ^{(\mathrm{s})}
\end{equation}
and
\begin{equation}
\label{lens27}
\frac{\mathrm{d} \tau}{\mathrm{d} \gamma } \Big\vert = \left[
\frac{\mathrm{d} \tau}{\mathrm{d} \gamma } \right] - \left[ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma } \right]
^{(\mathrm{s})} \quad.
\end{equation}
Of course, due to these relations the transformation behavior for the stiffness constants of the line tension
(Eqs. (\ref{lens16}) and (\ref{lens17})) is consistent with that for the notional derivatives (Eq. (\ref{lens8})).
It should also be noted that
$\frac{\mathrm{d} \tau}{\mathrm{d} \alpha } \vert ^{(\mathrm{s})} = 0$ and
$\frac{\mathrm{d} \tau}{\mathrm{d} \gamma } \vert ^{(\mathrm{s})} = 0$.
\subsection{The drop}
\subsubsection{Notional variation of the grand canonical potential}
In the case of a drop our analysis also starts from the requirement that the grand potential must be
invariant with respect to notional changes. These consist of a change
of the radius of the
$\alpha$--$\beta$ interface by $\left[ \mathrm{d} R \right]$ and a common shift of the
$\alpha$--$\gamma$ and $\beta$--$\gamma$ interfaces
by $\left[ \mathrm{d} h \right]$. From the requirement $\left[ \mathrm{d} \Delta \Omega \right] = 0$
we obtain two equations ($\left[ \mathrm{d} R \right]$ and $\left[ \mathrm{d} h \right]$ can be chosen
independently) relating the contact angle $\theta$ to the radius $r$. However, these two equations must be identical,
because a single equation already suffices to determine the relation between $\theta$ and $r$.
This leads to the consistency relation
\begin{equation}
\label{drop1} \left[ \frac{\mathrm{d} \tau }{\mathrm{d} h} \right] +\cos \theta \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R} \right] = -
\frac{r}{2} \left[ \frac{\mathrm{d} \sigma_{\alpha \beta }}{\mathrm{d} R} \right]
\end{equation}
and the following equation for the contact angle:
\begin{equation}
\label{drop2}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma }) = - \frac{\tau}{r} - \sin \theta \left[
\frac{\mathrm{d} \tau }{\mathrm{d} R} \right] \quad.
\end{equation}
As in the case of the lens we have introduced notional derivatives
$\left[ \frac{\mathrm{d} \tau }{\mathrm{d} h} \right]$ and $\left[ \frac{\mathrm{d} \tau }{\mathrm{d} R} \right]$.
We also make use of a notional derivative
$\left[ \frac{\mathrm{d} \sigma_{\alpha \beta} }{\mathrm{d} R} \right]$ of the spherically shaped
$\alpha$--$\beta$ interface but --- at the same time --- we do not take into account a notional derivative
$\left[ \frac{\mathrm{d} (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma}) }{\mathrm{d} h} \right]$ of the difference in the
interfacial tensions of the two planar substrate--liquid ($\gamma$--$\beta$) and substrate--vapor ($\gamma$--$\alpha$)
interfaces. This calls for a comment. Here the quantity $\sigma_{\beta \gamma} - \sigma_{\alpha \gamma}$ denotes the
difference of two interfacial tensions corresponding to a situation in which the same substrate phase $\gamma$
remains in contact with either the $\alpha$ or the $\beta$ phase. Both phases $\alpha$ and $\beta$ are at the
same pressure $p$. This quantity together with the $\alpha$--$\beta$ surface tension at infinite radius defines
--- via the Young equation --- the contact angle $\theta_0$ of a macroscopically large
liquid drop (phase $\beta$) in contact with
its vapor (phase $\alpha$) on top of the substrate $\gamma$. The angle $\theta_0$ is an observable quantity and
does not depend on the choice of the dividing surface. Since $\sigma_{\alpha\beta}$ does not depend on the choice of
the dividing interfaces either, it follows that $ \sigma_{\beta \gamma} - \sigma_{\alpha \gamma} $ cannot depend on the
chosen position $h$ of the common $\alpha$--$\gamma$ and $\beta$--$\gamma$ dividing interfaces.
\\
One arrives at the same conclusion by calculating the changes of $ \sigma_{\beta \gamma}$ and $\sigma_{\alpha \gamma}$
with respect to changes $\mathrm{d} h$ in the interface positions. In the difference
$ \sigma_{\beta \gamma} - \sigma_{\alpha \gamma} $ terms depending on $\mathrm{d} h$
drop out provided the pressure in the phases $\alpha$ and $\beta$ is the same, and a common height
``above'' the substrate is chosen for the $\alpha$--$\gamma$ and for the $\beta$--$\gamma$ dividing interfaces (see, e.g., the
discussion at the end of Sect. 2). (Both these assumptions are implicitly contained in the Young equation.)
Actually, further complications would arise if, in an attempt to mimic the situation in a drop more closely,
we carried out the calculations at different pressures, e.g., for $p_{\beta} = p_{\alpha} +$
\Delta p$ with the Laplace pressure $\Delta p$. A quantity $ (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma})' $
defined in that way would depend on the choice of the dividing interfaces and additional terms of the same order as those
introduced by the line tension would appear in the final equations. Furthermore the difference
$ (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma})' $ would not be related to $\theta_0$ via the standard Young equation
and thus could not be determined via an independent experiment of measuring the shape of a macroscopic drop.
Therefore it does not appear to be useful
to introduce quantities like $ (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma})' $ (see, however, the
discussion in Subsect. VI C).
\\
Equations (\ref{drop1}) and (\ref{drop2}) may be rewritten in terms of $\theta$ and $r$.
In these variables the consistency equation
and the modified Young equation take the following form:
\begin{equation}
\label{drop3}
\sin ^2 \theta \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \theta} \right] = \frac{r^2}{2} \left[ \frac{\mathrm{d} \sigma_{\alpha \beta}
}{\mathrm{d} R} \right] \quad
\end{equation}
and
\begin{equation}
\label{drop4}
\begin{split}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma }) =
- \frac{\tau}{r} - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} r } \right]
- \frac{\sin \theta \cos \theta}{r} \hspace{-0.10cm} \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \theta } \right]
\hspace{-0.10cm}.
\end{split}
\end{equation}
Since the $\gamma$--$\alpha$ and $\gamma$--$\beta$ interfaces are planar, a notional derivative of $\tau$ with respect to the
angle $\gamma$ as introduced for the lens cannot be defined in the present case; instead
$\left[ \frac{\mathrm{d} \tau }{\mathrm{d} r } \right]$ is used. As in the case of the lens the relations between
($[\mathrm{d} R]$, $[\mathrm{d} h]$)
and ($[\mathrm{d} \theta]$, $[\mathrm{d} r]$) are unique and the same is true for the
corresponding notional derivatives.
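The passage from Eqs. (\ref{drop1}) and (\ref{drop2}) to Eqs. (\ref{drop3}) and (\ref{drop4}) can be made explicit with a short symbolic check. The sketch below (using sympy) assumes the standard spherical-cap parametrization with a fixed sphere center $z_c$, i.e., $\cos \theta = (h - z_c)/R$ and $r = \sqrt{R^2 - (h - z_c)^2}$, which is one parametrization consistent with the notional shifts considered here; it confirms that the consistency relation Eq. (\ref{drop1}) transforms into Eq. (\ref{drop3}).

```python
import sympy as sp

R, u, dtau_dR, dtau_dh, dsig_dR = sp.symbols('R u dtau_dR dtau_dh dsig_dR')

# Spherical-cap parametrization with fixed sphere center z_c: u = h - z_c,
# so that cos(theta) = u/R and the contact-line radius is r = sqrt(R^2 - u^2)
theta = sp.acos(u / R)
r = sp.sqrt(R**2 - u**2)

# Jacobian of the map (R, h) -> (theta, r); since z_c is fixed, d/dh = d/du
M = sp.Matrix([[theta.diff(R), theta.diff(u)],
               [r.diff(R), r.diff(u)]])
Minv = M.inv()  # partial derivatives of (R, h) with respect to (theta, r)

# Chain rule: [d tau/d theta] at fixed r in terms of [d tau/d R] and [d tau/d h]
dtau_dtheta = Minv[0, 0] * dtau_dR + Minv[1, 0] * dtau_dh

# Impose the consistency relation, Eq. (drop1):
# [d tau/d h] + cos(theta) [d tau/d R] = -(r/2) [d sigma_ab/d R]
constraint = {dtau_dh: -sp.cos(theta) * dtau_dR - (r / 2) * dsig_dR}

# Eq. (drop3): sin^2(theta) [d tau/d theta] = (r^2/2) [d sigma_ab/d R];
# exact rational spot check (u = 3, R = 5 gives sin(theta) = 4/5)
lhs = (sp.sin(theta)**2 * dtau_dtheta).subs(constraint)
vals = {R: 5, u: 3, dtau_dR: 2, dsig_dR: 7}
assert sp.simplify((lhs - (r**2 / 2) * dsig_dR).subs(vals)) == 0
```

The analogous substitution into Eq. (\ref{drop2}) reproduces Eq. (\ref{drop4}).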
\\
As in the case of the lens we find that {\em $\tau$ is independent of the choice of dividing
interfaces.} This statement holds in the same sense as the corresponding one discussed for the
lens below Eq. (\ref{lens6}). Again, higher order corrections to $\tau$ which decrease
with increasing $r$ (e.g., like $1/r$; such corrections are possible if $\tau$ is defined
via Eq. (\ref{lensdrop1b}) instead of Eq. (\ref{eta})) depend on the choice of the
dividing interfaces. Correspondingly, the notional derivatives of $\tau$ appearing in Eqs.
(\ref{drop3}) and (\ref{drop4}) also depend on the choice of dividing interfaces and the corresponding
terms are furthermore of the same order as the term $\tau / r$ with only the leading term in $\tau$
being kept.
\\
We now analyze the transformation of the notional derivatives upon notional shifts of the dividing interfaces.
The same kind of arguments as for the lens leads to the following relations:
\begin{eqnarray}
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} R} \right]^{(2)} - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} R} \right]^{(1)} & = &
\frac{\sigma_{\alpha \beta} }{r } \left\{ \cos \theta \left[\mathrm{d} R \right] -
\left[\mathrm{d} h \right] \right \}
\nonumber \\ \left[ \frac{\mathrm{d} \tau }{\mathrm{d} h} \right]^{(2)} - \left[ \frac{\mathrm{d} \tau }{\mathrm{d} h}
\right]^{(1)} & = & - \frac{\sigma_{\alpha \beta} }{r } \left \{ \left[\mathrm{d} R \right] -
\cos \theta \left[\mathrm{d} h \right] \right \} \, .
\nonumber \\
& &
\label{drop5}
\end{eqnarray}
Changing the set of independent variables from ($h$, $R$) to ($r$, $\theta$) leads to the following form of the above
transformation laws:
\begin{eqnarray}
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} \theta} \right]^{(2)} - \left[ \frac{\mathrm{d}
\tau }{\mathrm{d} \theta} \right]^{(1)} & = &
\sigma_{\alpha \beta} \left[\mathrm{d} R \right] \nonumber \\
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} r} \right]^{(2)} -
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} r} \right]^{(1)} & = & - \frac{\sigma_{\alpha \beta}
\sin \theta }{r } \left[\mathrm{d} h \right]
\, .
\label{drop6}
\end{eqnarray}
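The change of variables from ($h$, $R$) to ($r$, $\theta$) behind Eq. (\ref{drop6}) can be checked symbolically. The sketch below assumes the spherical-cap parametrization with fixed sphere center ($\cos \theta = (h - z_c)/R$, $r = \sqrt{R^2 - (h - z_c)^2}$, an assumption consistent with the shifts considered here); applying the chain rule to the differences in Eq. (\ref{drop5}) reproduces the right-hand sides of Eq. (\ref{drop6}).

```python
import sympy as sp

R, u, sig, dR, dh = sp.symbols('R u sigma dR dh')

# Spherical-cap geometry with fixed sphere center: u = h - z_c, so that
# cos(theta) = u/R and the contact-line radius is r = sqrt(R^2 - u^2)
theta = sp.acos(u / R)
r = sp.sqrt(R**2 - u**2)

# Jacobian of the map (R, h) -> (theta, r); since z_c is fixed, d/dh = d/du
M = sp.Matrix([[theta.diff(R), theta.diff(u)],
               [r.diff(R), r.diff(u)]])
Minv = M.inv()  # partial derivatives of (R, h) with respect to (theta, r)

# Differences of the notional derivatives with respect to R and h, Eq. (drop5)
diff_R = (sig / r) * (sp.cos(theta) * dR - dh)
diff_h = -(sig / r) * (dR - sp.cos(theta) * dh)

# Chain rule to the variables (theta, r)
diff_theta = Minv[0, 0] * diff_R + Minv[1, 0] * diff_h
diff_r = Minv[0, 1] * diff_R + Minv[1, 1] * diff_h

# Compare with the right-hand sides of Eq. (drop6);
# exact rational spot check (u = 3, R = 5 gives sin(theta) = 4/5)
vals = {R: 5, u: 3, sig: 2, dR: 1, dh: 1}
assert sp.simplify((diff_theta - sig * dR).subs(vals)) == 0
assert sp.simplify((diff_r + sig * sp.sin(theta) * dh / r).subs(vals)) == 0
```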
\subsubsection{Variational treatment with constraint of fixed volume }
Here the objective is to minimize the grand canonical potential $\Delta \tilde{\Omega}$ with the
constraint of fixed volume $V_{\beta}$:
\begin{equation}
\label{drop7}
\Delta \tilde{\Omega} = A_{\alpha \beta } \sigma_{\alpha \beta }(R)
-\pi r^2 (\sigma_{\alpha \gamma} - \sigma_{\beta \gamma} ) +
2 \pi r \tau (\theta ,r) + \lambda V_{\beta}
\end{equation}
in which no bulk terms are included since the Laplace pressure and the volume of the $\beta$ phase are considered to be
fixed; the Lagrange multiplier is denoted by $\lambda$. We use $\theta$ and $r$ as the variables describing the system,
and we introduce
the stiffness constants $\frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d} r} \vert$ and $\frac{\mathrm{d}\sigma_{\alpha
\beta}}{\mathrm{d} \theta} \vert$ which are then expressed in terms of the ``natural'' stiffness constant
$\frac{\mathrm{d}\sigma_{\alpha \beta}}{\mathrm{d} R} \vert$ with known properties. In addition, similarly as for the
lens, we introduce the stiffness constants $\frac{\mathrm{d}\tau}{\mathrm{d} r} \vert$ and
$\frac{\mathrm{d}\tau}{\mathrm{d} \theta} \vert$
describing the cost in free energy attributed to the three-phase-contact line resulting from
variational changes of $r$ and $\theta$ at fixed thermodynamic conditions.
In the following we shall determine how these stiffness constants depend on the choice of dividing
interfaces and how they are
related to notional derivatives.
\\
If we impose the relation (for a justification see the discussion following Eq. (\ref{drop11}))
\begin{equation}
\label{drop8}
\sin ^2 \theta \frac{\mathrm{d} \tau }{\mathrm{d} \theta} \Big\vert = \frac{r^2}{2} \frac{\mathrm{d} \sigma_{\alpha \beta}
}{\mathrm{d} R} \Big\vert
\end{equation}
as in the case of the lens, the Lagrange multiplier $\lambda$
acquires the meaning of the negative Laplace pressure:
\begin{equation}
\label{drop9}
\begin{split}
\lambda = - \Delta p = - \frac{ 2 \sigma _{\alpha \beta} \sin \theta }{r} -
\frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R}
\Big\vert = - \frac{ 2 \sigma _{\alpha \beta}}{R} -
\left[ \frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R} \right ] .
\end{split}
\end{equation}
Equation (\ref{drop8}) corresponds to the relations in Eqs. (\ref{lens18}) and (\ref{lens19}) which
were introduced in the case of the lens
for similar reasons. We also remark that Eq. (\ref{drop8}) is formally identical to the consistency relation
in Eq. (\ref{drop3}).
The second equation, obtained from minimizing $\Delta \tilde{\Omega} $ and due to the independence of the
variables $\theta$ and $r$, has the following form (compare Eq. (\ref{lens13})):
\begin{eqnarray}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma }) = - \frac{\tau}{r} -
\frac{\mathrm{d} \tau }{\mathrm{d} r } \Big\vert & &
\nonumber \\
-
\frac{r \sin \theta }{\left( 1 - \cos \theta \right )} \frac{\mathrm{d}
\sigma_{\alpha \beta } }{\mathrm{d} R } \Big\vert +
\frac{ \left( 2 + \cos \theta \right ) \sin \theta}{r} \frac{\mathrm{d} \tau
}{\mathrm{d} \theta } \Big\vert \quad. & &
\label{drop10}
\end{eqnarray}
(The equation which expresses
the fixed drop volume
$V_{\beta} = (1/3)\pi r^3 [ (1 - \cos \theta)(2 + \cos \theta) ]/[ (1 + \cos \theta) \sin \theta ]$
in terms of $r$ and $\theta$, and which together with the other equations fixes $r$, $\theta$, and the
Lagrange multiplier $\lambda$, does not contain information that could be used in the following.
The same has been found for the lens.)
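As a cross-check of the volume expression quoted above, one can verify symbolically that it coincides with the standard spherical-cap volume $V = (\pi/3) H^2 (3R - H)$ with cap height $H = R(1 - \cos \theta)$ and the geometric relation $R = r/\sin \theta$ (a minimal sympy sketch under these standard assumptions):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Radius of the spherical alpha-beta dividing interface for contact angle theta
R = r / sp.sin(th)

# Drop volume as quoted in the text, in terms of r and theta
V_text = sp.pi * r**3 * (1 - sp.cos(th)) * (2 + sp.cos(th)) / \
    (3 * (1 + sp.cos(th)) * sp.sin(th))

# Standard spherical-cap volume, V = (pi/3) H^2 (3R - H), with cap height
# H = R (1 - cos theta)
H = R * (1 - sp.cos(th))
V_cap = sp.pi * H**2 * (3 * R - H) / 3

assert sp.simplify(V_text - V_cap) == 0
```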
\\
As in the case of the lens, we deduce from a comparison of the two equations, which follow from
Eq. (\ref{drop10}) for two differently chosen dividing interfaces,
the transformation laws between the stiffness constants of the three-phase-contact line:
\begin{eqnarray}
\frac{\mathrm{d} \tau }{\mathrm{d} \theta } \Big\vert^{(2)} - \frac{\mathrm{d} \tau }{\mathrm{d} \theta}
\Big\vert^{(1)} & = &
\sigma_{\alpha \beta} \left[\mathrm{d} R \right] \nonumber \\
\frac{\mathrm{d} \tau
}{\mathrm{d} r} \Big\vert^{(2)} - \frac{\mathrm{d} \tau }{\mathrm{d} r}
\Big\vert^{(1)} & = & - \frac{\sigma_{\alpha \beta} \sin
\theta }{r } \left[\mathrm{d} h \right] \quad .
\label{drop11}
\end{eqnarray}
Actually, the transformation behavior of $\frac{\mathrm{d} \tau }{\mathrm{d} \theta } \vert$
is already fixed independently via Eq. (\ref{drop8}),
$\frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R} \vert =
\left[ \frac{\mathrm{d} \sigma _{\alpha \beta}}{ \mathrm{d}R} \right ] $,
and Eq. (\ref{lensdrop7}). There is no contradiction between these two independent requirements
because of the formal equivalence
of the transformation laws in Eqs. (\ref{drop6}) for the notional derivatives and those for the stiffness
constants in Eqs. (\ref{drop11}), and because Eqs. (\ref{drop3}) and (\ref{drop8}) are formally identical.
In other words, one is indeed
free to fix $\frac{\mathrm{d} \tau }{\mathrm{d} \theta} \Big\vert$ by Eq. (\ref{drop8})
and thus to bestow
the meaning of the negative Laplace pressure $- \Delta p$ on the Lagrange multiplier $\lambda$.
\\
Equation (\ref{drop10}) can be simplified by using Eq. (\ref{drop8}):
\begin{equation}
\label{drop12}
\begin{split}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma} - \sigma_{\alpha \gamma }) = - \frac{\tau}{r} -
\frac{\mathrm{d} \tau }{\mathrm{d} r } \Big\vert - \frac{ \sin \theta \cos \theta }{r}
\frac{\mathrm{d} \tau }{\mathrm{d} \theta }
\Big\vert .
\end{split}
\end{equation}
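This simplification can be verified symbolically: substituting Eq. (\ref{drop8}) into the two stiffness terms on the right-hand side of Eq. (\ref{drop10}) collapses them to the single $\theta$-dependent term of Eq. (\ref{drop12}). A minimal sympy sketch:

```python
import sympy as sp

r, th, dtau_dth = sp.symbols('r theta dtau_dtheta')

# Eq. (drop8): the stiffness constant d(sigma_ab)/dR| expressed through
# the stiffness constant d(tau)/d(theta)|
dsig_dR = 2 * sp.sin(th)**2 * dtau_dth / r**2

# The two stiffness terms on the right-hand side of Eq. (drop10)
terms_drop10 = (-r * sp.sin(th) / (1 - sp.cos(th)) * dsig_dR
                + (2 + sp.cos(th)) * sp.sin(th) / r * dtau_dth)

# The corresponding single term in Eq. (drop12)
term_drop12 = -sp.sin(th) * sp.cos(th) / r * dtau_dth

assert sp.simplify(terms_drop10 - term_drop12) == 0
```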
From a comparison of Eqs. (\ref{drop12}) and (\ref{drop4}) we obtain
\begin{equation}
\label{drop13}
\frac{\mathrm{d} \tau }{\mathrm{d} \theta } \Big\vert = \left[ \frac{\mathrm{d} \tau }{\mathrm{d} \theta } \right]
\hspace{0.2cm} ,
\hspace{0.2cm} \frac{\mathrm{d} \tau }{\mathrm{d} r} \Big\vert =\left[ \frac{\mathrm{d} \tau }{\mathrm{d} r} \right] \quad.
\end{equation}
(Note that applying Eq. (\ref{drop8}) to Eq. (\ref{drop13}) in combination with Eq. (\ref{lensdrop8}) yields
$ \frac{\mathrm{d} \tau }{\mathrm{d} \theta } \vert ^{(\mathrm{s})} = 0 $.) Again, these relations are compatible with the
transformation laws for both the notional derivatives and the stiffness constants.
\section{Possibility of choosing different definitions for line tensions}
\renewcommand{\theequation}{6.\arabic{equation}}
\setcounter{equation}{0}\vspace*{0.5cm}
At this point it is appropriate to comment on the apparent contradiction between our statement
that $\tau$ is independent of the
choice of dividing interfaces and Eq. (\ref{deltaeta2}), which follows from a discussion of
a straight three-phase-contact line at a wedge-shaped fluid volume,
states the opposite, and tells how $\tau$ should change upon shifting the fluid--solid dividing interface.
A question related to that issue is whether one can use the line tensions of a gas--liquid--solid contact
calculated from a certain
microscopic theory in combination with a decomposition scheme in a model system like that considered
in Sect. III (i.e., for a fluid wedge geometry, see, e.g., Refs. \cite{GD1,BD}),
and use them in the formulae derived above for the drop.
\\
In order to answer these questions we reanalyze the particular
wedge geometry investigated in Sect. III in order to find out whether implicit assumptions
in the analysis given there are at variance with prescriptions used in Sect. V
for the drop. We also resume our discussion of the drop system
in order to see whether sensible alternative definitions of the line tension
in that system do exist and whether the contradiction mentioned above can be resolved by using an alternative
definition of $\tau$. If such an alternative exists the question about the relation between
the two different line tensions has to be answered.
\subsection{Decomposition and reassembly of a system: \\ difficulties with edge terms}
In Sect. III we have calculated the changes of the areas of the liquid--gas
and of the solid--fluid interfaces due to a notional shift of the position of the solid--fluid
interface within the box with rectangular cross section, as
shown in Fig. 2, implicitly assuming that these area changes are representative
for an entire system from which Fig. 2 actually shows only
a small part. In this procedure a possible source of errors might be hidden. We therefore
discuss now the underlying general procedure implicitly used in Sect. III. First, the total system
is decomposed into two subsystems I and II such that subsystem I contains
the three-phase-contact line as the object of interest. On the other hand subsystem I is
surrounded by a subsystem II
which itself does not contain any actual line contribution. (In the case of straight contact lines
this requires the use of periodic boundary conditions in one direction;
this is of no relevance for the following discussion.)
The contribution of subsystem I to the grand potential is then analyzed in terms of volume,
interfacial, and line
contributions {\em ignoring unphysical edge terms}. The same is done for subsystem II
except that no actual line contribution appears in the decomposition.
At the end all contributions from both subsystems
are put together in order to calculate the grand potential of the total system.
It is expected that the grand potential (and the decomposition into volume, interfacial,
and line contributions) calculated in that way is equal to what one obtains
if one decomposes the grand potential of the whole system into volume, interfacial, and line
contributions and then adds up these contributions without first partitioning the system.
In what follows we discuss why it may sometimes be misleading to ignore the unphysical edge
terms.
\par
The decomposition introduced above is not
unique. Two examples of such decompositions
are indicated in Fig. \ref{fig9} by two boxes
of different shapes enclosing and defining the respective subsystem I. The structure is assumed to
be translationally invariant in the direction perpendicular to the plane shown in Fig. \ref{fig9}.
The cross section of the box has the shape of a rectangle for one of the chosen decompositions and
in the other case it has the shape of a parallelogram such that two of the faces of the box
are perpendicular to the liquid--gas interface. The embedding system II is not specified in
any detail except that it is understood that the structures found within the box, e.g., the liquid--gas
interface, extend beyond the box boundaries into the embedding system II.
For that reason interfacial energies at the boundaries of the boxes do not arise. (This statement
also comprises the solid--liquid and the solid--gas interfaces since the respective boundary of the
box can be placed anywhere inside the solid.) There are also no actual line contributions to the
free energy from the edges of the box or from the line at which the liquid--gas interface intersects a
face of the box. The environments of these lines are not different from bulk
or from a corresponding region in an infinitely extended interface.
In spite of the absence of actual line contributions,
artificial non-physical line contributions at the lines just mentioned have to be
included in the expressions for the grand canonical potentials characterizing the subsystems.
That there is a need to introduce these artificial line terms becomes very plausible if
one looks at the two possible decompositions of a system into two subsystems shown in Fig. \ref{fig9}.
Evidently interfaces are cut in different ways by the faces defining subsystem I for the
two considered options. Therefore, contributions to the grand canonical potential stemming from
certain parts of the interfaces are either attributed to subsystem I or subsystem II depending on how the
decomposition is done. Only if one includes the artificial line terms in the analysis can one
keep track of such traces of the decomposition; otherwise one runs into inconsistencies.
\par
Before we proceed with this analysis, which ultimately aims at finding out whether and how
the true line tension may be determined from microscopic calculations for the most simple geometries,
we continue with the additive decomposition of
the grand canonical potential $\Omega$ of a system
\begin{equation}
\label{decomp-gen-1}
\Omega = \Omega_{\mathrm{I}} + \Omega_{\mathrm{II}} \quad ,
\end{equation}
where $\Omega_{\mathrm{I}}$ and $\Omega_{\mathrm{II}}$ are the contributions
to the grand canonical potential attributed to
subsystem I and II, respectively, which, e.g., in the spirit of density functional
theory can be thought of as being calculated from the density distributions characterizing the
total system.
Already this decomposition is
problematic and not unique although obvious choices do exist.
For instance, if we base our theoretical description on a type of model in which
the grand canonical potential is expressed in terms of a local functional of the number
density (of, e.g., the
Ginzburg--Landau type), $\Omega_{\mathrm{I}}$
is obtained by integrating the local density of the grand canonical potential over the
volume of the box defining the subsystem I.
The local density of the grand canonical potential is obtained from particle-density distributions
which may be obtained from calculations restricted to the interior of the box. The boundary
conditions have to correspond to a seamless continuation of the structures in the interior
of subsystem I
into the surrounding subsystem II. The decomposition procedure becomes less obvious if a non-local
density-functional description is used. In that case interactions across the boundaries of the
subsystems occur. In order to calculate the contribution to $\Omega_{\mathrm{I}}$
coming from two-center integrals one might, e.g., restrict one of the integrations
to the interior of the box, whereas the second integration runs over the total volume of
the whole system. The value obtained for $\Omega_{\mathrm{I}}$ is independent of size and
shape of the surrounding subsystem II provided it is sufficiently big and the interactions
between particles decay sufficiently rapidly as a function of their distance. Artificial
interfacial contributions from the surfaces of the box are eliminated automatically by
choosing the described procedure. By carrying out the integrations in the way outlined above
we basically mimic a local density of $\Omega$ allowing, however, for the embedding
of the subsystem into a global system.
\par
In the sense of the previous paragraph $\Omega$ can be
decomposed into a sum of two contributions originating from two subsystems I and II.
Since for this decomposition no use has been made of the concept of dividing interfaces between phases, in particular
between solid and liquid or solid and gas,
it is also clear that the two contributions $\Omega_{\mathrm{I}}$ and
$\Omega_{\mathrm{II}}$ must be independent of choices for the dividing interfaces between
solid and fluid.
\\
In the next step $\Omega_{\mathrm{I}}$ and $\Omega_{\mathrm{II}}$ are further decomposed
into volume, interface, and line contributions, i.e.,
\begin{equation}
\label{decomp-gen-2}
\Omega_{\mathrm{I}} = V_{\mathrm{I}} \omega + A^{\mathrm{I}}_{\mathrm{lg}} \sigma_{\mathrm{lg}}
+ A^{\mathrm{I}}_{\mathrm{sl}} \sigma_{\mathrm{sl}} +
A^{\mathrm{I}}_{\mathrm{sg}} \sigma_{\mathrm{sg}}
+ L \tau + \sum _i L _i \tau _i^{\mathrm{art,I}} \quad ,
\end{equation}
and
\begin{equation}
\label{decomp-gen-3}
\Omega_{\mathrm{II}} = V_{\mathrm{II}} \omega + A^{\mathrm{II}}_{\mathrm{lg}} \sigma_{\mathrm{lg}}
+ A^{\mathrm{II}}_{\mathrm{sl}} \sigma_{\mathrm{sl}} +
A^{\mathrm{II}}_{\mathrm{sg}} \sigma_{\mathrm{sg}}
+ \sum _i L _i \tau _i^{\mathrm{art,II}} \quad ,
\end{equation}
where $A^{\mathrm{I}}_{\mathrm{lg}}$ and $A^{\mathrm{II}}_{\mathrm{lg}}$ are the areas of
the l--g interfaces within subsystem I and II, respectively,
$A^{\mathrm{I}}_{\mathrm{sl}}$ and $A^{\mathrm{II}}_{\mathrm{sl}}$ are those of the s--l interfaces, and
$A^{\mathrm{I}}_{\mathrm{sg}}$ and $A^{\mathrm{II}}_{\mathrm{sg}}$ are the areas of the s--g interfaces
(l: liquid, g: gas, s: solid or wall).
$\tau$ is the line tension of the actual three-phase-contact line and thus it appears only in subsystem I.
The terms $\tau _i^{\mathrm{art,I}}$ or $\tau _i^{\mathrm{art,II}}$ characterize the artificial
line contributions resulting from the decomposition into subsystems. The quantities $L _i$ are the lengths of
the artificial lines, which are equal in both subsystems. Since
$\Omega = \Omega_{\mathrm{I}} + \Omega_{\mathrm{II}}$ and because the only actual line inhomogeneity,
which is present in the
total system, is accounted for by the line tension $\tau$, one has
\begin{equation}
\label{decomp-gen-4}
\sum _i L _i \tau _i^{\mathrm{art,I}} = - \sum _i L _i \tau _i^{\mathrm{art,II}} \quad ,
\end{equation}
i.e., the artificial line contributions of the two subsystems cancel each other.
\\
\begin{figure}
\includegraphics*[scale=.32]{Fig9.eps}
\caption{\label{fig9} Two possibilities for cutting a subsystem I out of a total system
containing a l--g interface and a three-phase-contact line with the solid s.
These two possibilities are indicated by two boxes (full and dotted lines)
enclosing the l--g--s contact line. The structure is assumed to be translationally invariant
perpendicular to the plane of the figure. Two possible choices for the solid--fluid dividing interface
are indicated by solid (1) and dashed (2) lines. The two dividing interfaces are separated by a
distance $\delta h$. The areas of the solid--liquid (solid--gas) interfaces within subsystem I
change when the solid--fluid dividing interface is shifted. The values of these area changes
depend on the shape of the box defining subsystem I. This dependence is indicated by the two horizontal
double arrows. The short double arrow indicates the changes, due to the shift $\delta h$,
of the areas of the solid--gas
and the solid--liquid interfaces within the box of rectangular cross section, the long
double arrow the changes of these areas within the box with parallelogram-shaped cross section.
}
\end{figure}
Since $\Omega_{\mathrm{I}}$ is independent of the choice of the dividing interfaces between phases,
in particular of the solid--fluid dividing interface,
the same arguments as used in Sec. 3 can now be applied to the subsystem I and to
$\Omega_{\mathrm{I}}$
in order to study the influence of a shift by an amount $\delta h$ of the s--g and s--l
dividing interfaces. (We restrict the discussion to a structure that is translationally invariant
in the direction parallel to the line, i.e., $L _i = L$.)
If the rectangular box indicated in Fig. \ref{fig9} is chosen one obtains
\begin{equation}
\label{decomp-gen-5}
\left [ \tau + \sum _i L _i \tau _i^{\mathrm{art,I}} \right ] ^{(2)} -
\left [ \tau + \sum _i L _i \tau _i^{\mathrm{art,I}} \right ] ^{(1)}
= \sigma_{\mathrm{lg}} \sin \theta \delta h
\end{equation}
for the difference of the expression
in square brackets for the two choices (2) and (1) of the dividing interfaces (compare Fig. 2).
The only, but essential, difference from Eq. (\ref{deltaeta2}) is that Eq. (\ref{decomp-gen-5}) gives the difference
not only for $\tau$ but for the sum of the actual and the artificial line energies. There seems to be a chance
that the apparent contradiction between the result that $\tau$
of the actual three-phase-contact line is independent of the choice of dividing
interfaces and Eq. (\ref{deltaeta2}) can be resolved by including the artificial line contributions
and replacing Eq. (\ref{deltaeta2}) by Eq. (\ref{decomp-gen-5}).
Of course Eq. (\ref{decomp-gen-5}) is only valid for the decomposition employing the rectangular box.
If instead of a rectangular box a box with a cross section of the
shape of a parallelogram (as shown in Fig. \ref{fig9}) is chosen Eq. (\ref{decomp-gen-5}) is replaced by
\begin{equation}
\label{decomp-gen-6}
\left [ \tau + \sum _i L _i \tau _i^{\mathrm{art,I}} \right ] ^{(2)} -
\left [ \tau + \sum _i L _i \tau _i^{\mathrm{art,I}} \right ] ^{(1)}
= 0 \quad ,
\end{equation}
because the changes of the solid--liquid (solid--gas) interfaces within subsystem I, resulting
from the shift of the solid--fluid dividing interface, are computed in the way indicated
by the long dotted double arrow in Fig. \ref{fig9} and not
in the way indicated by the short solid double arrow in Fig. \ref{fig9}
which is the correct way to compute those changes within the rectangular box.
From the comparison of Eq. (\ref{decomp-gen-5}) and Eq. (\ref{decomp-gen-6}) it becomes obvious that
in order to deduce $\tau$ from a calculated expression for $\Omega_{\mathrm{I}}$ one has to subtract
not only volume and interface contributions but also the artificial line contributions.
Such artificial line contributions are generated if a box boundary cuts
through the inhomogeneous structure of an interface in a non-adapted way.
Within an interfacial region, which has a macroscopic extension in the lateral directions, the density
profiles of the constituents of the fluid only depend on the coordinate perpendicular to
the interface. If the box boundary defining subsystem I cuts perpendicularly through
the interface, the cut is adapted to the density profiles and no artificial line contribution
is generated. If, by contrast, a box boundary cuts through the interface at an angle
deviating from $90^{\circ}$, a spatial region of columnar shape, filled
with an inhomogeneous fluid, is generated above or below the interface, which
no longer belongs to subsystem I although according to the interfacial
area attributed to subsystem I it should contribute to the interfacial energy within
that subsystem. Vice versa, a volume of columnar shape is attributed to subsystem I
although the fluid within that volume should contribute to the interfacial energy
attributed to subsystem II.
Since in general the free energy contributions from these two columnar spatial
regions do not compensate each other, one is left with an artificial line contribution.
\par
For that reason we discuss still another possibility (Fig. \ref{Fig-Wedge-b})
for cutting a subsystem I out of a total system
containing a l--g interface and a three-phase-contact line with the solid s.
For this choice the box boundaries cut perpendicularly, i.e., in an adapted manner
through both interfaces. This choice avoids the appearance of artificial line terms
at the cuts of the box boundaries through the liquid--gas or the solid--fluid interfaces.
An additional line generated by this particular choice of the box boundaries is placed
such that the system is completely homogeneous in a large volume around that line.
Therefore, no net artificial line contribution is associated with that line either.
Carrying out the same analysis as for the two other box shapes we again
obtain Eq. (\ref{decomp-gen-5}) but the
artificial line energies should now vanish leading back to Eq. (\ref{deltaeta2}).
\par
After our previous analyses for different box shapes defining subsystem I one might
arrive at the following suggestion for coping with artificial line contributions.
For the rectangular box (which leads to Eq. (\ref{decomp-gen-5})) the only artificial
line contribution stems from the intersection of the liquid--gas interface
with the box boundary. No artificial line contributions arise at the cuts of the
box boundaries with the solid--fluid interface, because the box boundaries cut perpendicularly
through that interface. There are also no artificial line contributions from
the edges of the box located in homogeneous regions of the fluid.
If we now argue that the artificial line contribution does not change upon a notional
shift of the solid--fluid interface because the corresponding line is far away from
that interface, the terms related to the artificial line contribution would cancel in
Eq. (\ref{decomp-gen-5}) and we would again be led back to Eq. (\ref{deltaeta2}).
In the case of the parallelogram shaped box and Eq. (\ref{decomp-gen-6}) we might argue
using the same type of arguments, that only the two line contributions due to the
intersection of the box boundaries with the solid--fluid interface do arise. But now
one would be inclined to admit that these artificial line contributions might change
upon a notional shift of the solid--fluid interface because the lines are spatially
associated with the shifted interface. Therefore, one would not draw conclusions about
notional changes of $\tau$ from Eq. (\ref{decomp-gen-6}). However, one should not
rely too heavily on arguments of this type. After all, notional shifts of dividing interfaces
lead just to changes in interfacial areas and thus to changes in interfacial contributions
to the free energy which are then subtracted from the total free energy in order to
define the line tension(s). From the viewpoint of a macroscopic theory it
is not clear how these changes of interfacial free energies should be split up
among the real and the artificial line contributions.
\\
In the present subsection we have discussed how a line tension $\tau$ and its
dependence on the choice of dividing interfaces can be determined unambiguously
from calculations for a simple model system, i.e., a liquid wedge in contact with a gas phase
on a solid substrate.
In order to be able to proceed, one has to carry out the calculations
within a finite subsystem (box) cut out from the unbounded model system. The
seamless continuation into the embedding system has to be incorporated into the calculation
by choosing proper boundary conditions and by taking into account certain interactions
of the finite subsystem with the embedding system. In addition, the box boundaries
have to be chosen such that they cut through interfaces perpendicularly and
artificial box edges
have to be placed into homogeneous regions of the total system (a possible choice of the
box is shown in Fig. \ref{Fig-Wedge-b}).
With all these precautions we find that the dependence of $\tau$ on the choice of the solid--fluid
dividing interface is definitely given by Eq. (\ref{deltaeta2}).
\begin{figure}[h]
\includegraphics*[scale=.32]{Fig-Wedge-b.eps}
\caption{\label{Fig-Wedge-b} Another possibility for cutting a subsystem I out of a total system
containing a l--g interface and a three-phase-contact line with the solid s.
The structure is assumed to be translationally invariant
perpendicular to the plane of the figure. Two possible choices for the solid--fluid dividing interface
are indicated by solid (1) and dashed (2) lines. The two dividing interfaces are separated by a
distance $\delta h$. The areas of the solid--liquid (solid--gas) interfaces within subsystem I
change (horizontal double arrow) if the solid--fluid dividing interface is shifted. These changes in areas are
identical to those obtained if the box of rectangular cross section is chosen in Fig. \ref{fig9}.
}
\end{figure}
\subsection{Finite containers filled with fluids }
After the discussions in the previous subsection we are still left with the contradiction
between Eq. (\ref{deltaeta2}), which tells us how $\tau$ should depend on the choice of
dividing interfaces, and our statement in Sect. V which says that $\tau$ should be
independent of that choice. We would like to stress that this latter statement
is based on a well defined decomposition scheme of the grand potential of a drop
on a substrate (or a liquid lens at a fluid interface).
\\
In order to gain further insight into this problem
we now discuss a finite fluid system enclosed in a container
with solid walls. This way we avoid from the outset the necessity of cutting out a finite
subsystem and we avoid the danger of picking up unphysical edge or line contributions.
Our treatment of closed finite containers gives us also the opportunity to discuss
a further aspect of solid--fluid systems, i.e., the curvature of the solid--fluid
interface.
We further choose the boundary conditions such that the liquid--vapor interface
is planar and thus no complications occur related to the Laplace pressure.
In addition there is no curvature correction to the liquid--vapor surface tension.
On the other hand if a container with only planar walls is partially filled with
a liquid in contact with its vapor phase, in addition to the liquid--vapor--solid
contact line further edge (line) contributions do appear which
scale with the same linear dimension of the container as the three-phase-contact line
we are interested in. Therefore, the different line (edge) contributions cannot be
separated unless these edge contributions have been determined independently from investigations
of reference configurations not containing a liquid--vapor--solid contact line. As a reference
configuration the same container but completely filled with either liquid
or gas (vapor) is chosen. But then one encounters the problem that there is no unique prescription
for how to extract all additional edge contributions individually, and in particular it is
impossible to find the transformations upon notional shifts of dividing
interfaces for each of these edge contributions individually.
\\
In order to be able to vary the length of the liquid--vapor--solid contact line
independently from the lengths of the container edges we consider a biconical container
composed of two identical but oppositely oriented right cones with circular base
which are glued together base to base along the circumference of the base
as shown in Fig. \ref{fig7}.
\begin{figure}[h]
\includegraphics*[scale=.37]{Fig7.eps}
\caption{\label{fig7} Cut through a biconical container (circular base) with an opening angle
180$^{\mathrm{o}} - 2\theta$.
The apexes of the container are filled with the liquid phase (l) of a fluid up to
the planar liquid--gas interfaces.
The central part of the container is filled with the coexisting gas phase.
The filling height is determined by a prescribed amount of liquid.
Two possible choices for the
dividing interface between solid (wall) and fluid are indicated by solid and dashed lines.
The two possible dividing interfaces are separated by a distance $\delta$. The two
corresponding positions of the liquid--gas--solid contact lines denoted as L are indicated
by small and large dots. At the joint between the two cones there is another circular line
L$_{\mathrm{cont}}$ formed.
}
\end{figure}
All container walls are composed of the same solid material s. The cones are
filled with liquid l up to a certain height, which is determined by the amount of liquid provided,
and the liquid is in contact with its
vapor phase g. The opening angle of the cone is chosen to be $180^{\mathrm{o}} - 2 \theta$
such that the liquid--vapor interface is planar. The system
contains two structural units characterized by lines. The first line is at the joint of the two
cones and is termed L$_{\mathrm{cont}}$ (Fig. \ref{fig7}). It has the shape of a circle with a radius
denoted as $r$. In the following the related line (edge) tension will be called $\tau_{\mathrm{cont}}$.
The second line, the one we are
interested in, is the three-phase-contact line between solid, vapor, and liquid. It also has a circular
shape with a radius denoted as $r_{\mathrm{lg}}$. The related line tension will be called $\tau$.
Again a parallel displacement of the dividing interfaces between solid and fluid
by an amount $\delta$ is considered. This parallel displacement is indicated in Fig. \ref{fig7}.
This set-up allows one to vary the length of the three-phase-contact line
corresponding to $\tau$ independently
from the container dimensions. It is tempting to conclude that thus the line tension $\tau$
and its dependence on the choice of dividing interfaces can be
separated unambiguously from the line contributions related to the geometry of the container.
However, since the container walls are curved, we also have to take into account curvature corrections to
the solid--fluid interface tensions. As we shall see, these curvature corrections bring about contributions
to the grand potential which scale with the length of the liquid--vapor--solid contact line.
Therefore, for this geometry one also has to study at first reference configurations not containing a
liquid--vapor--solid contact line, i.e., the bicone being homogeneously filled with either liquid or vapor,
in order to determine how the curvature contributions depend
on the choice of dividing interfaces. This is actually possible if an additional albeit
plausible assumption is made (see below). In the next step the bicone filled partially with liquid
and partially with vapor
is analyzed. What is found for the curvature terms in the previous step for the container
filled with a homogeneous fluid is then
transferred to the new situation of a container filled with an inhomogeneous fluid without
any modification.
\subsubsection{Homogeneously filled bicone}
We decompose the grand canonical potential into volume, interfacial, and line contributions,
and we include curvature contributions due to the curvature of the solid--fluid wall
(see, e.g., \cite{Roth1,Roth2,Roth3,Roth4}). The curvature
contributions could be also expressed as a curvature correction to the solid--fluid
interfacial tension. Terms which do not scale at least with a linear container dimension
are disregarded:
\begin{multline}
\label{bicone-hom-1}
\frac{1}{2}\Omega = -p V^{(i)}_{\mathrm{cone}} + A^{(i)}_{\mathrm{cone-ssf}} \,
\sigma^{(i)}_{\mathrm{sg(sl)}}
+ \kappa^{(i)}_{\mathrm{sg(sl)}} C^{(i)}_{\mathrm{cone-ssf}} \\
+ \frac{1}{2} L^{(i)}_{\mathrm{cont}} \tau^{(i)}_{\mathrm{cont}}
\quad ,
\end{multline}
which is the grand canonical potential for one half of the system; $p$ is the pressure
of the fluid, $V^{(i)}_{\mathrm{cone}}$ the volume of one cone, $A^{(i)}_{\mathrm{cone-ssf}}$
the area of the side surface of one cone, $\sigma^{(i)}_{\mathrm{sg(sl)}}$ is the interface
tension of a planar solid--gas or solid--liquid interface, respectively, $C^{(i)}_{\mathrm{cone-ssf}}$
is the mean curvature of the side surface integrated over the whole area of the side surface
of one cone with $\kappa^{(i)}_{\mathrm{sg(sl)}}$ as the corresponding thermodynamic coefficient,
and $\tau^{(i)}_{\mathrm{cont}}$
is the line (edge) tension associated with the joint between the two cones
(see Fig. \ref{fig7}) of length $L^{(i)}_{\mathrm{cont}}$. The superscript
$^{(i)}$ indicates that geometric quantities as well as the interfacial and line tensions and
the coefficient of the curvature term depend on the choice ($i$) of the dividing interface
between solid and fluid. The integrated mean curvature of the side surface of a cone with
base radius $r^{(i)}$ is
\begin{equation}
\label{curvature-1}
C^{(i)}_{\mathrm{cone-ssf}} = \frac{\pi r^{(i)} \sin \theta}{\cos \theta} .
\end{equation}
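Equation (\ref{curvature-1}) can be cross-checked numerically. The following is a minimal sketch, assuming that the slant surface of a cone whose wall is inclined by $\theta$ against the base has vanishing meridional curvature and azimuthal principal curvature $\sin\theta/\rho$ at distance $\rho$ from the axis, so that the local mean curvature is $H = \sin\theta/(2\rho)$:

```python
import math

def integrated_mean_curvature_cone(r, theta, n=100000):
    """Integrate the local mean curvature H = sin(theta)/(2*rho) over the
    slant surface of a cone of base radius r whose wall is inclined by
    theta against the base; area element dA = 2*pi*rho*drho/cos(theta)."""
    total = 0.0
    drho = r / n
    for i in range(n):
        rho = (i + 0.5) * drho                        # midpoint rule
        H = math.sin(theta) / (2.0 * rho)             # local mean curvature
        dA = 2.0 * math.pi * rho * drho / math.cos(theta)
        total += H * dA
    return total

r, theta = 1.7, math.radians(35.0)
numeric = integrated_mean_curvature_cone(r, theta)
closed_form = math.pi * r * math.sin(theta) / math.cos(theta)  # Eq. (curvature-1)
print(numeric, closed_form)
```

Note that the integrand $H\,\mathrm{d}A = \pi \tan\theta \,\mathrm{d}\rho$ is constant along the slant, which is why the integrated mean curvature grows linearly with $r$.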
If we again compare the decompositions for the two choices (1) and (2) of the dividing interface
shown in Fig. \ref{fig7} and if we use $\sigma^{(2)}_{\mathrm{sg(sl)}} - \sigma^{(1)}_{\mathrm{sg(sl)}}
= -p \delta $ (a special variant of Eq. (\ref{deltasigmasf}) not restricting the generality of
the following arguments) and if we neglect terms which do not scale at least with $r^{(i)}$,
we obtain the relation
\begin{multline}
\label{bicone-hom-2}
\kappa^{(1)}_{\mathrm{sg(sl)}} C^{(1)}_{\mathrm{cone-ssf}} +
\frac{1}{2} \tau^{(1)}_{\mathrm{cont}} L^{(1)}_{\mathrm{cont}}
\\
=
\kappa^{(2)}_{\mathrm{sg(sl)}} C^{(1)}_{\mathrm{cone-ssf}} +
\frac{1}{2} \tau^{(2)}_{\mathrm{cont}} L^{(1)}_{\mathrm{cont}}
\\
+ p \frac{\pi r^{(1)} }{\sin\theta \cos \theta } \delta ^2 -
\sigma^{(1)}_{\mathrm{sg(sl)}} \frac{\delta}{\sin\theta \cos \theta } 2 \pi r^{(1)} .
\end{multline}
Without a further assumption it is of course not possible to obtain the transformation behavior
upon shifting the dividing interface for $\kappa_{\mathrm{sg(sl)}}$ and
$\tau_{\mathrm{cont}}$ independently. In order to proceed we consider a rounding of the edge
at L$_{\mathrm{cont}}$ around the joint of the two cones and attribute to this
edge a total (integrated) mean curvature based on the following arguments.
For a container with smooth walls without any edge, differential geometry provides the
general relations (see, e.g., Refs. \cite{Smirnow, Hadwig})
\begin{equation}
\label{curvature-2}
V^{(2)} = V^{(1)} - A \delta + C \delta ^2 - \frac{1}{3} X \delta ^3
\end{equation}
and
\begin{equation}
\label{curvature-3}
A^{(2)} = A^{(1)} - 2 C \delta + X \delta ^2 \quad ,
\end{equation}
expressing the changes in volume and surface area of a container, associated with
an infinitesimal parallel shift $\delta$ of its surface,
in terms of the area $A$, the total mean curvature $C$, and the total Gaussian
curvature $X$. From the calculated volume and area changes and from Eqs. (\ref{curvature-2}) and (\ref{curvature-3}),
the total mean curvature for the bicone (with rounded edge) turns out to be
\begin{equation}
\label{curvature-4}
C = \frac{2\pi r }{\sin \theta \cos \theta} \quad ,
\end{equation}
whereas adding up the integrated mean curvatures of the side surfaces of the two cones (Eq. (\ref{curvature-1}))
gives
\begin{equation}
\label{curvature-5}
C_{\mathrm{bicone}} = \frac{2\pi r \sin ^2 \theta}{\sin \theta \cos \theta} \quad .
\end{equation}
This means that a contribution
\begin{equation}
\label{curvature-6}
C_{\mathrm{seam}} = \frac{2\pi r \cos ^2 \theta}{\sin \theta \cos \theta}
\end{equation}
to the total mean curvature of the container is missing. Obviously this contribution
can be attributed to
the line L$_{\mathrm{cont}}$ at which the two cones are glued together
and the curvature attributed to that line can be realized by deforming the surface in
its vicinity into one that is differentiable.
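The curvature bookkeeping in Eqs. (\ref{curvature-2})--(\ref{curvature-6}) can be verified with a short numerical sketch (arbitrary test values). It uses the elementary cross-section fact that a parallel inward shift of the walls by $d$ produces the similar bicone of base radius $r - d/\sin\theta$:

```python
import math

theta = math.radians(40.0)
r, delta = 2.0, 1e-4
s, c = math.sin(theta), math.cos(theta)

# Parallel inward shift of the walls by d -> similar bicone, base radius r - d/sin(theta).
def vol(d):
    rb = r - d / s
    return 2.0 * (math.pi / 3.0) * rb**3 * math.tan(theta)   # two cones, height rb*tan(theta)

def area(d):
    rb = r - d / s
    return 2.0 * math.pi * rb**2 / c                         # slant area, slant = rb/cos(theta)

A0 = area(0.0)
dV = (vol(delta) - vol(-delta)) / (2.0 * delta)              # -> -A   (Eq. (curvature-2))
d2V = (vol(delta) - 2.0 * vol(0.0) + vol(-delta)) / delta**2 # -> 2C   (Eq. (curvature-2))
dA = (area(delta) - area(-delta)) / (2.0 * delta)            # -> -2C  (Eq. (curvature-3))

C = 0.5 * d2V
C_pred = 2.0 * math.pi * r / (s * c)                         # Eq. (curvature-4)
C_bicone = 2.0 * math.pi * r * s**2 / (s * c)                # Eq. (curvature-5)
C_seam = 2.0 * math.pi * r * c**2 / (s * c)                  # Eq. (curvature-6)
print(C, C_pred, C_bicone + C_seam)
```

The finite differences of volume and area recover Eq. (\ref{curvature-4}), and the seam term of Eq. (\ref{curvature-6}) is exactly the difference between Eq. (\ref{curvature-4}) and Eq. (\ref{curvature-5}).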
We use now this observation in order to replace the line (edge) term by an equivalent
curvature term:
\begin{multline}
\label{bicone-hom-3}
\frac{1}{2} \tau_{\mathrm{cont}} L_{\mathrm{cont}} \longrightarrow
\frac{1}{2} \kappa_{\mathrm{sg(sl)}} C_{\mathrm{seam}}
\\
= \kappa_{\mathrm{sg(sl)}} \frac{\pi r \cos ^2 \theta}{\sin \theta \cos \theta}
\quad .
\end{multline}
Combining Eq. (\ref{bicone-hom-2}) and Eq. (\ref{bicone-hom-3}) one finds
\begin{equation}
\label{curvature-7}
\kappa^{(1)}_{\mathrm{sg(sl)}} = \kappa^{(2)}_{\mathrm{sg(sl)}} + p \delta ^2 -
2 \sigma ^{(1)}_{\mathrm{sg(sl)}} \delta .
\end{equation}
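The algebra leading from Eqs. (\ref{bicone-hom-2}) and (\ref{bicone-hom-3}) to Eq. (\ref{curvature-7}) can be traced numerically; the sketch below uses arbitrary values for $p$, $\sigma^{(1)}$, and $\kappa^{(2)}$ and exploits that the cone-side curvature plus half the seam curvature add up to $\pi r/(\sin\theta\cos\theta)$:

```python
import math

theta, r, delta = math.radians(55.0), 1.3, 0.02
p, sigma1, kappa2 = 0.7, 0.05, 0.011    # arbitrary test values
s, c = math.sin(theta), math.cos(theta)

C_cone = math.pi * r * s / c                   # Eq. (curvature-1)
C_seam_half = math.pi * r * c**2 / (s * c)     # half of Eq. (curvature-6)

# Eq. (bicone-hom-2) with the edge terms replaced via Eq. (bicone-hom-3);
# solve the resulting balance for kappa^(1):
rhs = (kappa2 * (C_cone + C_seam_half)
       + p * math.pi * r * delta**2 / (s * c)
       - sigma1 * delta * 2.0 * math.pi * r / (s * c))
kappa1 = rhs / (C_cone + C_seam_half)

kappa1_pred = kappa2 + p * delta**2 - 2.0 * sigma1 * delta   # Eq. (curvature-7)
print(kappa1, kappa1_pred)
```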
\subsubsection{Bicone filled partially with liquid and vapor}
The grand canonical potential of the bicone filled partially with liquid and partially
with vapor (see Fig. \ref{fig7}) decomposes as
\begin{multline}
\label{bicone-inhom-1}
\frac{1}{2}\Omega = -p V^{(i)}_{\mathrm{cone}} + A^{(i)}_{\mathrm{cone-sg}} \,
\sigma^{(i)}_{\mathrm{sg}}
+ A^{(i)}_{\mathrm{cone-sl}} \, \sigma^{(i)}_{\mathrm{sl}}
\\
+ \kappa^{(i)}_{\mathrm{sg}} C^{(i)}_{\mathrm{cone-sg}}
+ \kappa^{(i)}_{\mathrm{sl}} C^{(i)}_{\mathrm{cone-sl}}
+ A^{(i)}_{\mathrm{lg}} \, \sigma_{\mathrm{lg}}
\\
+ \frac{1}{2} L^{(i)}_{\mathrm{cont}} \tau^{(i)}_{\mathrm{cont}}
+ L^{(i)} \tau^{(i)}
\quad ,
\end{multline}
where $A^{(i)}_{\mathrm{cone-sg}}$ and $A^{(i)}_{\mathrm{cone-sl}}$ are those areas of
the cone side surface which are in contact with the gas and the liquid phase,
respectively. $\sigma^{(i)}_{\mathrm{sg}}$ and $\sigma^{(i)}_{\mathrm{sl}}$ are the
interface tensions of the planar solid--gas and solid--liquid interfaces.
$C^{(i)}_{\mathrm{cone-sg}}$ and $C^{(i)}_{\mathrm{cone-sl}}$ are the mean curvatures
of the cone side surface integrated over that part of the surface which is in contact
with the gas phase and the liquid phase, respectively. $\kappa^{(i)}_{\mathrm{sg}}$ and
$\kappa^{(i)}_{\mathrm{sl}}$ are the corresponding curvature coefficients.
$A^{(i)}_{\mathrm{lg}}$ is the area of the planar liquid--gas interface and
$\sigma_{\mathrm{lg}}$ the surface tension of a planar liquid--gas interface.
The last but one term in Eq. (\ref{bicone-inhom-1}), already familiar from the
homogeneously filled bicone, gives the line contribution from
the edge along which the two cones are glued together. The last term is the contribution
from the solid--liquid--gas three-phase-contact line. The superscript $^{(i)}$ indicates
which of the quantities depends on the choice of the solid--fluid dividing interface
($\sigma^{(i)}_{\mathrm{sg}} - \sigma^{(i)}_{\mathrm{sl}}$ is independent of such
choices).
As in the previous subsection we compare the decompositions for the two different choices of dividing interfaces
indicated in Fig. \ref{fig7}, we neglect all higher order terms which do not scale
with a characteristic system size, we further use Eq. (\ref{bicone-hom-2}) which
has been obtained for the homogeneously filled bicone, and finally end up with the relation
\begin{multline}
\label{bicone-inhom-2}
\left( \kappa^{(1)}_{\mathrm{sl}} - \kappa^{(1)}_{\mathrm{sg}} \right)
\pi r^{(1)}_{\mathrm{lg}} \cot \theta
+ 2 \pi r^{(1)}_{\mathrm{lg}} \tau ^{(1)}
\\
=
\left( \kappa^{(2)}_{\mathrm{sl}} - \kappa^{(2)}_{\mathrm{sg}} \right)
\pi r^{(1)}_{\mathrm{lg}} \cot \theta
+ 2 \pi r^{(1)}_{\mathrm{lg}} \tau ^{(2)} .
\end{multline}
Equation (\ref{bicone-inhom-2}) states that a certain combination of curvature and line
contributions is invariant under a notional shift of the solid--fluid dividing interface.
In order to fix the transformation behavior of the individual contributions under a
notional shift, an additional convention is required. Insisting
on using Eq. (\ref{curvature-7}), which appears to be a most plausible choice for the
kind of systems considered, fixes the transformation behavior
of the difference of curvature coefficients which appear in Eq. (\ref{bicone-inhom-2}).
This leads to Eq. (\ref{deltaeta2}) which tells us that $\tau$ does depend on the
choice of the solid--fluid dividing interface.
\subsection{The drop revisited}
The surface tensions at planar solid--fluid interfaces depend on the position of the dividing
interfaces as given in Eq. (\ref{deltasigmasf}). Therefore, the difference
$\sigma_{\alpha \gamma} - \sigma_{\beta \gamma}$ of the gas--solid and the liquid--solid
interface tensions at coexistence of the gas and the liquid phase does not depend on the
choice of the solid--fluid dividing interface. In our analysis of the drop in Sect. V
we could make use of that property because in defining $\tau$ we have used
solid--fluid interface tensions at the very same pressure irrespective of whether
the fluid phase is the gaseous phase outside the drop or the liquid phase in the interior of the drop.
We have also discussed the possibility of introducing instead
a quantity $(\sigma_{\alpha \gamma} - \sigma_{\beta \gamma})'$ in which the
two interface tensions are measured at different pressures. We have also given
reasons why we have not pursued this possibility further.
\begin{figure}[h]
\includegraphics*[scale=.30]{Fig-drop.eps}
\caption{\label{Fig-drop.eps} A sessile liquid drop on a planar substrate ($\gamma$). Two choices for the
substrate--fluid dividing interface separated by a distance $\delta h$ are shown.
The contact angle $\theta$ is only shown for that choice of the dividing interface, which is indicated
by the solid horizontal line. The pressure in the liquid phase ($\beta$) differs by $\Delta p$ from that
in the gas phase ($\alpha$). The drop is a spherical cap with radius $R = r/\sin \theta$,
area $A = 2\pi R^2 ( 1-\cos \theta)$, and volume $V = (\pi/3)R^3( 2 - 3\cos \theta + \cos ^3 \theta)$.
}
\end{figure}
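As a side check, the spherical-cap formulas quoted in the caption of Fig. \ref{Fig-drop.eps} agree with the textbook expressions in terms of the cap height $h = R(1-\cos\theta)$:

```python
import math

theta = math.radians(70.0)
r = 1.0                        # contact-line radius (arbitrary units)
R = r / math.sin(theta)        # cap radius as given in the caption
cost = math.cos(theta)

# caption formulas
A = 2.0 * math.pi * R**2 * (1.0 - cost)
V = (math.pi / 3.0) * R**3 * (2.0 - 3.0 * cost + cost**3)

# standard spherical-cap formulas in terms of the cap height h
h = R * (1.0 - cost)
A_ref = 2.0 * math.pi * R * h
V_ref = (math.pi / 3.0) * h**2 * (3.0 * R - h)
print(A, A_ref, V, V_ref)
```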
However, in view of the
conflicting results for the transformation behavior under notional shifts of dividing
interfaces between different definitions of line tensions we now seriously consider
this option.
\\
In order to make our reasoning as transparent as possible we discuss now
only a notional shift of the substrate--fluid dividing interface as shown in Fig. \ref{Fig-drop.eps},
but leave the liquid--gas dividing interface fixed.
We again use the decomposition in Eq. (\ref{lensdrop4b}) for two positions of the
substrate--fluid dividing interface and use at first the relation
\begin{equation}
\label{interface-tension-contrast}
(\sigma^{(2)}_{\alpha \gamma} - \sigma^{(2)}_{\beta \gamma}) -
(\sigma^{(1)}_{\alpha \gamma} - \sigma^{(1)}_{\beta \gamma}) = 0
\end{equation}
which is valid if both interface tensions are measured at the same pressure (and if for both
the $\alpha$--$\gamma$ and the $\beta$--$\gamma$ interface
the dividing interface is positioned at the same height). After neglecting terms which do not scale with
at least a linear extension of the system one arrives at
\begin{multline}
\label{drop-again-1}
2 \pi r \left( \tau^{(1)} - \tau^{(2)} \right) = - \Delta p \left( V^{(2)} - V^{(1)} \right)
\\
+ \sigma_{\alpha \beta }(R) \hspace{-0.10cm} \left( A^{(2)} - A^{(1)} \right)
+ \pi \left( \sigma_{\beta \gamma} - \sigma_{\alpha \gamma} \right)
\hspace{-0.10cm} \left( (r^{(2)})^2 - (r^{(1)})^2 \right) \hspace{-0.10cm}.
\\
\end{multline}
Since in Eq. (\ref{drop-again-1}) neither the liquid--gas surface tension $\sigma_{\alpha \beta }(R)$
(in the relevant order) nor the difference $\left( \sigma_{\beta \gamma} - \sigma_{\alpha \gamma} \right)$ depend on the
choice of the substrate--fluid dividing interface, the line tension $\tau$ is the only physical
quantity that can pick up all the notional changes of volume and interfacial contributions.
In particular, the notional change of the volume contribution
(in leading order $ V^{(2)} - V^{(1)} = -\pi r^2 \delta h $ and $ \Delta p \propto 1/r$)
does also contribute to the notional change of $\tau$. Evaluating Eq. (\ref{drop-again-1})
up to leading order, i.e., neglecting corrections to $\tau$ of the order of $1/r$,
one inevitably comes to the conclusion that $\tau$ is independent of the choice of the
substrate--fluid dividing interface.
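This cancellation can be illustrated numerically. In the sketch below (arbitrary values) the spherical liquid--gas dividing surface is held fixed while the substrate plane is shifted by $\delta h$; with Young's law and $\Delta p = 2\sigma_{\alpha\beta}/R$ the right-hand side of Eq. (\ref{drop-again-1}) turns out to be of order $(\delta h)^2$, i.e., it carries no contribution to the line tension in its strict sense:

```python
import math

sigma, R, theta = 1.0, 1.0, 1.0          # arbitrary test values (theta in radians)
dh = 1e-3                                # notional shift of the substrate plane
dp = 2.0 * sigma / R                     # Laplace pressure
young = -sigma * math.cos(theta)         # sigma_bg - sigma_ag via Young's law

def cap(h):
    """Volume, liquid-gas area and squared contact radius of a spherical cap
    of height h cut from the fixed sphere of radius R."""
    V = (math.pi / 3.0) * h**2 * (3.0 * R - h)
    A = 2.0 * math.pi * R * h
    r2 = h * (2.0 * R - h)
    return V, A, r2

h1 = R * (1.0 - math.cos(theta))
V1, A1, r2_1 = cap(h1)
V2, A2, r2_2 = cap(h1 + dh)              # substrate shifted downward by dh

# right-hand side of Eq. (drop-again-1)
rhs = -dp * (V2 - V1) + sigma * (A2 - A1) + math.pi * young * (r2_2 - r2_1)
print(rhs)                               # O(dh^2): the leading-order terms cancel
```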
\\
The only sensible way to introduce an alternative definition of
the line tension $\tau$ is to use in the decomposition scheme of
Eq. (\ref{lensdrop4b}) a solid--liquid interfacial tension $\sigma_{\beta \gamma}(p + \Delta p)$
taken at the pressure $p + \Delta p$ of a bulk liquid in contact with the solid,
whereas the solid--gas interfacial tension $\sigma_{\alpha \gamma}(p )$ is taken at the
pressure $p$. One may argue that this way one mimics the actual conditions at the interfaces
of the liquid-drop--vapor--solid system. On the other hand the interfacial tensions
$\sigma_{\alpha \gamma}$ and $\sigma_{\beta \gamma}$ are not
measurable individually, whereas the difference $\sigma_{\beta \gamma}(p) - \sigma_{\alpha \gamma}(p)$
is a measurable quantity. Nevertheless, we now seriously consider that definition of $\tau$.
In order to study the transformation of this newly defined $\tau$ under notional changes we
proceed as at the beginning of this subsection with the only difference that instead of
Eq. (\ref{interface-tension-contrast}) we have to use now
\begin{equation}
\label{interface-tension-contrast-b}
(\sigma^{(2)}_{\alpha \gamma}(p) - \sigma^{(2)}_{\beta \gamma}(p + \Delta p)) -
(\sigma^{(1)}_{\alpha \gamma}(p) - \sigma^{(1)}_{\beta \gamma}(p + \Delta p)) = \Delta p \delta h
.
\end{equation}
If we carry out the analysis neglecting all terms which obviously can only give rise to
corrections to $\tau$ of the order $1/r$, i.e., terms which cannot contribute to
a line tension in its strict sense, we arrive at
\begin{multline}
\label{drop-again-2}
\tau^{(2)} - \tau^{(1)} =
\\
\frac{\delta h}{\sin \theta} \left[ \sigma_{\alpha \beta}(R)
+ ( \sigma^{(1)}_{\beta \gamma}(p + \Delta p) - \sigma^{(1)}_{\alpha \gamma}(p)) \cos \theta
\right]
\quad .
\end{multline}
We now replace $\sigma^{(1)}_{\beta \gamma}(p + \Delta p)$ in Eq. (\ref{drop-again-2}) by
$\sigma^{(1)}_{\beta \gamma}(p)$ because the difference of the two quantities is of the order of $1/r$
(we then can skip the superscript $^{(1)}$ in the difference between the solid--liquid and the
solid--gas surface tensions) and with the same argument we replace $\sigma_{\alpha \beta}(R)$
by its planar limit $\sigma_{\alpha \beta}^\infty$. We then can use Young's law and finally
arrive at
\begin{equation}
\label{drop-again-3}
\tau^{(2)} - \tau^{(1)} = \sigma_{\alpha \beta}^\infty \delta h \sin \theta
\quad ,
\end{equation}
i.e., we again find the transformation in Eq. (\ref{deltaeta2}), as we did
repeatedly in discussing systems with planar liquid--gas interfaces.
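For transparency, the step from Eq. (\ref{drop-again-2}) to Eq. (\ref{drop-again-3}) is the elementary identity obtained by inserting Young's law, $\sigma_{\beta \gamma} - \sigma_{\alpha \gamma} = -\sigma_{\alpha \beta}^\infty \cos\theta$, in the planar limit:

```latex
\tau^{(2)} - \tau^{(1)}
  = \frac{\delta h}{\sin \theta}\, \sigma_{\alpha \beta}^\infty
    \left( 1 - \cos ^2 \theta \right)
  = \sigma_{\alpha \beta}^\infty \, \delta h \, \sin \theta .
```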
The reason why this second definition of the line tension leads us again to the transformation in
Eq. (\ref{deltaeta2}) can be traced back to the fact that in the second definition a contribution
to the notional change of the volume contribution
$\Delta p \left( V^{(2)} - V^{(1)} \right)$ is absorbed by a corresponding notional change
of $\sigma_{\alpha \gamma}(p) - \sigma_{\beta \gamma}(p + \Delta p)$, whereas in the first
definition this possibility does not exist.
\subsubsection{Relation between the two line tensions}
We now derive a relation between the two values of $\tau$ determined for the same system
under identical conditions, but using the two different definitions introduced above.
In order to do so
we compare the two decompositions of the grand canonical potential, on which
the two definitions are based, for a given, but otherwise arbitrary, choice of dividing
interfaces. In order to distinguish the two definitions we denote with $\tau$ the value
of the line tension determined according to the decomposition scheme introduced in Sect. V
(see Eq. (\ref{lensdrop4}) with $\sigma_{\alpha \gamma}(p)$ and $\sigma_{\beta \gamma}(p)$),
which we sketched again at the beginning of this subsection. With $\tau_{\mathrm{w}}$
we denote the line tension determined via the second decomposition scheme
(see Eq. (\ref{lensdrop4}) but with $\sigma_{\alpha \gamma}(p)$ and $\sigma_{\beta \gamma}(p+\Delta p)$).
On the basis of its transformation behavior and because effects due to the Laplace pressure,
which do not occur for planar liquid--gas interfaces, are excluded from its definition, we identify
$\tau_{\mathrm{w}}$ with
the line tension as defined for the previously discussed systems with planar liquid--gas
interfaces (it shall be understood that the identification applies to the line tension in its
strict sense, i.e., disregarding subleading terms). The comparison
leads to the relation
\begin{equation}
\label{drop-again-4}
\tau = \tau_{\mathrm{w}} + \frac{r}{2} \left[ \sigma_{\beta \gamma}(p + \Delta p) -
\sigma_{\beta \gamma}(p) \right]
\quad .
\end{equation}
For a one-component fluid this relation can be re-expressed
by using
\begin{equation}
\label{drop-again-5}
\sigma(p + \Delta p) = \sigma(p) + \frac{\partial \sigma }{\partial \mu}
\frac{\partial \mu}{\partial p} \Delta p
\quad ,
\end{equation}
where $\mu$ is the chemical potential, and
\begin{equation}
\label{drop-again-6}
\frac{\partial \mu}{\partial p} = \frac{1}{\rho_\mathrm{b}}
\quad ,
\end{equation}
where $\rho_\mathrm{b}$ is the bulk density of the fluid and
\begin{equation}
\label{drop-again-7}
\frac{\partial \sigma }{\partial \mu} = -\Gamma := - \frac{1}{A}\left[ \int _{V} \rho({\bf r})\mathrm{d}^3r -
\rho_\mathrm{b}V \right]
\quad ,
\end{equation}
with $\Gamma$ as the excess adsorption.
Equation (\ref{drop-again-7}) can be easily derived from the definition of the interface tension
\begin{equation}
\label{drop-again-8}
\sigma = \frac{1}{A} \left( \Omega \left [ \rho _{\mathrm{eq}} ({\bf r}) \right ] + pV \right) ,
\end{equation}
where $\rho _{\mathrm{eq}} ({\bf r})$ is the equilibrium number density minimizing $\Omega$,
and the general functional form of the grand canonical potential:
\begin{equation}
\label{drop-again-9}
\Omega \left[ \rho ({\bf r}) \right] = {\cal{F}} \left( \rho ({\bf r}) \right) +
\int \rho ({\bf r}) \left( V_{\mathrm{ext}} - \mu \right)\mathrm{d}^3r
\quad .
\end{equation}
Combining Eqs. (\ref{drop-again-4}) -- (\ref{drop-again-9}) and expressing the Laplace pressure
$\Delta p$ in terms of the liquid--gas surface tension one obtains ($\rho_{\mathrm{b, \beta}}$
is the bulk density in the $\beta$ phase):
\begin{equation}
\label{drop-again-10}
\tau = \tau_{\mathrm{w}} - \frac{\Gamma_{\beta \gamma}}{\rho_{\mathrm{b, \beta}}}
\sigma_{\alpha \beta} \sin \theta
\quad .
\end{equation}
$\Gamma_{\beta \gamma}$ is the excess adsorption at the planar $\beta$--$\gamma$ interface.
In Eq. (\ref{drop-again-10}) both terms on the right hand side depend on the choice of
the liquid--solid dividing ($\beta$--$\gamma$) interface but these dependences cancel and thus $\tau$
is independent of that choice, as it should be.
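The chain of Eqs. (\ref{drop-again-4})--(\ref{drop-again-10}) can be traced with a short numerical sketch (arbitrary input values for the thermodynamic quantities; the Laplace pressure is $\Delta p = 2\sigma_{\alpha\beta}\sin\theta/r$ since $R = r/\sin\theta$):

```python
import math

# arbitrary test values for the thermodynamic inputs
sigma_ab, theta, r = 0.07, math.radians(50.0), 3.0
Gamma, rho_b = 0.4, 33.0                     # excess adsorption, bulk liquid density

dp = 2.0 * sigma_ab * math.sin(theta) / r    # Laplace pressure, R = r/sin(theta)
# Eqs. (drop-again-5)-(drop-again-7): sigma(p+dp) - sigma(p) = -(Gamma/rho_b)*dp
dsigma = -(Gamma / rho_b) * dp

tau_diff = (r / 2.0) * dsigma                # Eq. (drop-again-4): tau - tau_w
tau_diff_pred = -(Gamma / rho_b) * sigma_ab * math.sin(theta)  # Eq. (drop-again-10)
print(tau_diff, tau_diff_pred)
```

Note that the factor $r/2$ in Eq. (\ref{drop-again-4}) cancels against the $1/r$ of the Laplace pressure, which is why the difference $\tau - \tau_{\mathrm{w}}$ is independent of the drop size.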
\\
The discussion in the present section has shown
that the contradiction between our statement in Sect. V,
that $\tau$ is independent of the choice of dividing interfaces, and Eq. (\ref{deltaeta2}) is resolved
as follows. We have found that the definitions of the line tension at a
substrate--fluid--fluid interface, which have been chosen on one hand in Sect. V
and on the other hand in Eq. (\ref{deltaeta2}), are different. The line tension,
which appears in Eq. (\ref{deltaeta2}), is $\tau_{\mathrm{w}}$ and differs from $\tau$ as defined in Sect. V.
The possibility of defining two different line tensions arises because it is not obvious how
to subtract the contribution to the grand canonical potential stemming from the planar
$\beta$--$\gamma$ (liquid--substrate) interface of a drop of $\beta$-phase in contact with
a substrate.
The liquid--substrate interface tension $\sigma_{\beta \gamma}$ is influenced by the Laplace pressure, but
$\sigma_{\beta \gamma}$ is not accessible. However, the difference
$\sigma_{\beta \gamma}(p)-\sigma_{\alpha \gamma}(p)$ of the fluid--substrate interface tensions
is measurable
via the contact angle for large drops. If in the decomposition scheme defining the
line tension we use an interface tension at a pressure deviating from the true pressure in
the interior of the drop, i.e., $\sigma_{\beta \gamma}(p)$,
in order to avoid the somewhat artificial combination
$\sigma_{\beta \gamma}(p+\Delta p)-\sigma_{\alpha \gamma}(p)$, one is led to the definition
$\tau$ for the line tension. If instead one uses $\sigma_{\beta \gamma}(p+\Delta p)$
in the decomposition one arrives at the definition $\tau_{\mathrm{w}}$.
Corresponding alternative choices do not exist in the case of the contact of three fluid phases,
since all fluid--fluid interface tensions are measurable individually and
since the dependence of interface tensions on curvature radii parametrizes their dependence on
the Laplace pressure in a natural way. By contrast, the geometry of the solid--liquid interface
is not influenced by the Laplace pressure and it stays planar for all values of the Laplace pressure.
\par
The definition $\tau_{\mathrm{w}}$ for the line tension has the merit that interface contributions
due to the modification of the $\beta$--$\gamma$ interface tension caused by the Laplace pressure,
giving rise to a contribution proportional to the linear extension of the system,
are not implicitly included in $\tau_{\mathrm{w}}$. However, in order to use $\tau_{\mathrm{w}}$
in equations determining the contact angle, it is necessary to re-express a number of equations
given for the drop in Sect. V. Equation (\ref{drop1}) expressed in terms of $\tau_{\mathrm{w}}$
and its notional derivatives turns into
\begin{equation}
\label{drop1-neu}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}}}{\mathrm{d} h} \right] +\cos \theta
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} R} \right] =
\sin \theta \sigma_{\alpha \beta }
\end{equation}
and instead of Eq. (\ref{drop2}) one obtains
\begin{equation}
\label{drop2-neu}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\alpha \gamma }(p))
= - \frac{\tau_{\mathrm{w}}}{r} - \sin \theta \left[
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} R} \right] ,
\end{equation}
or in an alternative notation
\begin{multline}
\label{drop2-neu-b}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma}(p) - \sigma_{\alpha \gamma }(p))
= -(\sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\beta \gamma }(p)) \\
- \frac{\tau_{\mathrm{w}}}{r} - \sin \theta \left[
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} R} \right] \quad.
\end{multline}
The relations between the notional derivatives of $\tau$ and $\tau_{\mathrm{w}}$ are
\begin{equation}
\label{drop2-neu-c}
\left[\frac{\mathrm{d} \tau }{\mathrm{d} R} \right] = \left[\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} R} \right]
+ \frac{1}{2 \sin \theta} \left ( \sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\beta \gamma }(p) \right )
\end{equation}
and
\begin{equation}
\label{drop2-neu-d}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} h} \right] = \left[ \frac{\mathrm{d} \tau_{\mathrm{w}}}{\mathrm{d} h} \right]
- \frac{\cos \theta}{2 \sin \theta} \left ( \sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\beta \gamma }(p) \right )
- \frac{r \Delta p}{2} .
\end{equation}
In terms of the variables $\theta$ and $r$ one obtains
\begin{equation}
\label{drop3-neu}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta} \right] = - \frac{r}{\sin \theta} \sigma_{\alpha \beta}
\end{equation}
and
\begin{equation}
\label{drop4-neu}
\begin{split}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\alpha \gamma }(p)) = \\
- \frac{\tau_{\mathrm{w}}}{r} - \left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right]
- \frac{\sin \theta \cos \theta}{r} \left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta } \right] ,
\end{split}
\end{equation}
or
\begin{equation}
\label{drop4-neu-b}
(\sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\alpha \gamma }(p)) =
- \frac{\tau_{\mathrm{w}}}{r} - \left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right] ,
\end{equation}
which replace Eqs. (\ref{drop3}) and (\ref{drop4}).
It is interesting to see that in Eq. (\ref{drop4-neu-b}) the term $\sigma_{\alpha \beta } \cos \theta$
drops out due to the identity Eq. (\ref{drop3-neu}), i.e., this term must now be included in the term
$\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right]$. This is displayed in the
relation
\begin{equation}
\label{drop4-neu-c}
\begin{split}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right] = \sigma_{\alpha \beta } \cos \theta
- \frac{1}{2} (\sigma_{\beta \gamma}(p+\Delta p) - \sigma_{\beta \gamma }(p)) \\ +
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} r } \right] + \frac{\sin \theta \cos \theta}{r}
\left[ \frac{\mathrm{d} \tau }{\mathrm{d} \theta } \right] .
\end{split}
\end{equation}
Equations (\ref{drop3-neu}) - (\ref{drop4-neu-c}) show that using $\tau_{\mathrm{w}}$ instead of $\tau$
spoils the clear hierarchy in the various terms describing notional changes.
\\
For completeness we provide also the transformation laws for the notional derivatives of $\tau_{\mathrm{w}}$:
\begin{equation}
\label{drop4-neu-d}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} R } \right]^{(2)} -
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} R } \right]^{(1)} = \frac{\sigma_{\alpha \beta }}{r} \cos \theta
[\mathrm{d} R] ,
\end{equation}
\begin{equation}
\label{drop4-neu-e}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} h } \right]^{(2)} -
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} h } \right]^{(1)} = - \frac{\sigma_{\alpha \beta }}{r} \cos \theta
[\mathrm{d} h] ,
\end{equation}
\begin{equation}
\label{drop4-neu-f}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta } \right]^{(2)} -
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta } \right]^{(1)} = - \sigma_{\alpha \beta }
[\mathrm{d} R] ,
\end{equation}
and
\begin{equation}
\label{drop4-neu-g}
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right]^{(2)} -
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right]^{(1)} = \frac{\sigma_{\alpha \beta } \sin \theta}{r}
[\mathrm{d} h] .
\end{equation}
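For orientation, the transformation laws (\ref{drop4-neu-d})--(\ref{drop4-neu-g}) are simple linear relations in the notional shifts $[\mathrm{d} R]$ and $[\mathrm{d} h]$. The following minimal Python sketch evaluates them; all numerical values are purely illustrative assumptions, not taken from the text.

```python
import math

# All numbers below are illustrative assumptions, not values from the text.
sigma_ab = 1.0e-2               # alpha-beta interface tension in J/m^2
r = 1.0e-6                      # contact-line radius in m
theta = math.radians(60.0)      # contact angle
dR, dh = 1.0e-9, 1.0e-9         # notional shifts [dR], [dh] in m

# Differences (choice (2) minus choice (1)) of the notional derivatives of
# tau_w, following Eqs. (drop4-neu-d)-(drop4-neu-g):
d_dtauw_dR     = (sigma_ab / r) * math.cos(theta) * dR    # derivative w.r.t. R
d_dtauw_dh     = -(sigma_ab / r) * math.cos(theta) * dh   # derivative w.r.t. h
d_dtauw_dtheta = -sigma_ab * dR                           # derivative w.r.t. theta
d_dtauw_dr     = (sigma_ab * math.sin(theta) / r) * dh    # derivative w.r.t. r

print(d_dtauw_dR, d_dtauw_dh, d_dtauw_dtheta, d_dtauw_dr)
```

Note that the change of the $\theta$-derivative is of the same order as typical line tensions themselves, which anticipates the numerical estimate given in the Summary.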
\par
We now make contact again with a variational approach under the constraint of fixed volume.
Of course one could stick to the variational approach as discussed in Sect. V without any modifications.
In that case one just might want to rewrite Eqs. (\ref{drop12}) and (\ref{drop13}) in terms of $\tau_{\mathrm{w}}$
and its notional derivatives. Alternatively, one may slightly modify the variational approach by
introducing a $\beta$--$\gamma$ interface tension measured at a pressure $p + \Delta p$ in the same
way as in the definition of $\tau_{\mathrm{w}}$. As in Sect. V we further introduce stiffnesses
$\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta} \Big\vert $ and
$\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r} \Big\vert $ but we do not introduce any
further stiffness. (One might contemplate endowing the $\beta$--$\gamma$ interface with a stiffness
but there is no geometric measure characterizing that interface which could be related to such a stiffness;
the area is already used and the radius of circumference is better attributed to the contact line.)
In that way one arrives at
\begin{equation}
\label{drop8-neu}
\sin ^2 \theta \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta} \Big\vert
= \frac{r^2}{2} \frac{\mathrm{d} \sigma_{\alpha \beta}
}{\mathrm{d} R} \Big\vert
\end{equation}
and
\begin{equation}
\label{drop12-neu}
\begin{split}
\sigma_{\alpha \beta } \cos \theta + (\sigma_{\beta \gamma}(p + \Delta p) - \sigma_{\alpha \gamma }(p))
= - \frac{\tau_{\mathrm{w}}}{r} -
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \Big\vert \\
- \frac{ \sin \theta \cos \theta }{r}
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta }
\Big\vert .
\end{split}
\end{equation}
One also obtains the relations
\begin{equation}
\label{drop12-neu-b}
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \Big\vert =
\left[ \frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \right]
- \sigma_{\alpha \beta } \cos \theta - \frac{r \cos \theta}{2 \sin \theta}
\left[ \frac{\mathrm{d} \sigma_{\alpha \beta }}{\mathrm{d} R} \right] ,
\end{equation}
\begin{equation}
\label{drop12-neu-c}
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} r } \Big\vert = \frac{\mathrm{d} \tau}{\mathrm{d} r } \Big\vert
- \frac{1}{2}\left( \sigma_{\beta \gamma}(p + \Delta p) -\sigma_{\beta \gamma}(p )\right) ,
\end{equation}
\begin{equation}
\label{drop12-neu-d}
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta } \Big\vert = \frac{\mathrm{d} \tau}{\mathrm{d} \theta } \Big\vert
,
\end{equation}
and the transformation law
\begin{equation}
\label{drop12-neu-e}
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta } \Big\vert ^{(2)} -
\frac{\mathrm{d} \tau_{\mathrm{w}} }{\mathrm{d} \theta } \Big\vert ^{(1)} = 0 .
\end{equation}
\section{Summary}
\renewcommand{\theequation}{7.\arabic{equation}}
\setcounter{equation}{0}\vspace*{0.5cm}
We have discussed some conceptual issues that arise in a macroscopic thermodynamic
description of the three-phase contact of either three fluid phases or
of two fluid phases meeting an inert solid substrate. We have pointed
out that the concept of a line tension accompanying the
contact line has to be used with great care. The conceptual difficulties
arise because the interfaces between two phases are always diffuse and never
sharp. Therefore a certain degree of freedom exists in positioning, somewhere
in the transition region, an idealized mathematical interface (the so called
Gibbs dividing interface) separating the two adjacent phases.
As a consequence, a similar degree of freedom exists in the position of the contact line
defined as the common intersection of three dividing interfaces.
\par
We have analyzed implications for a consistent description of the contact line
following from the existence of these degrees of freedom.
For that purpose we have investigated two representative systems
of three-phase contact: a liquid lens at a fluid--fluid interface and a
liquid drop on top of a smooth substrate. Both systems are
used in experimental attempts to determine a line tension $\tau$ from the dependence
of contact angles on the system size.
We have defined a prescription for decomposing the grand canonical free energy
of a liquid lens or drop into volume, interface, and line contributions which
renders a line tension $\tau$ independent of a particular choice of
the dividing interfaces. The prescription rests on geometrical definitions and on the notion that those
interfaces of a lens or drop system, which are spherical segments, should be described in
the same way as the interfaces of completely spherical drops.
In particular this means that the pressure drop across the curved interface is
related to the surface tension (and surface stiffness against changes of the radius of curvature $R$)
via the generalized Laplace equation (\ref{lensdrop6}) and that the surface tension
of a spherical interface is independent of a notional shift $[\mathrm{d}R]$
of the dividing interface up to and including contributions proportional to $\sim 1/R$ for large $R$.
We have also used the fact that interface tensions of planar interfaces between two fluids
in thermal equilibrium are independent of the position of the dividing interfaces.
The same is implied
for the difference between the
substrate--gas (substrate--vapor) and the substrate--liquid interface tensions
although the substrate on one hand and the fluid phases on the other are not
in equilibrium. In fact this is true if the substrate--liquid and the substrate--gas interface
tensions used in the decomposition scheme correspond to the same pressure, a prescription
which has the advantage of relating the difference of these two quantities directly to the
contact angle of macroscopic drops which is a measurable quantity.
Our result, that $\tau$ is independent of the choice of
dividing surfaces, is first of all a generalization to {\em curved} interfaces and {\em curved}
contact lines of what is known for the line tension of a {\em straight}
contact line at the intersection of three {\em planar} interfaces in a genuine three-phase contact \cite{RowWid,Wid1}.
A further generalization is the one from a genuine three-phase contact to a contact between
two fluid phases and an inert solid phase.
It should be noted, however, that in the latter case an alternative definition of the
line tension, denoted as $\tau_{\mathrm{w}}$, is possible which seems to be more natural
for systems containing planar liquid--gas interfaces and planar or curved solid walls and more useful
for the purpose of computing line tensions by exploiting the simplest possible geometries.
It should be emphasized that $\tau_{\mathrm{w}}$ does depend on the choice of dividing interfaces.
The difference between this alternative line tension and the previous one
rests on choosing, in the decomposition scheme of the grand canonical potential, a different
substrate--liquid interface tension: it is not taken at the pressure of the gas phase,
as is implied for the substrate--gas interface tension,
but at a pressure which is enhanced by
the Laplace pressure.
We have provided a simple relation between the values of the line tensions corresponding
to the two alternative definitions (Eqs. (\ref{drop-again-4}) and (\ref{drop-again-10})).
\par
We have further pointed out that the generalized Neumann or Young equations obtained from a minimization
principle by simply adding a line-tension contribution to the free energy
suffer from internal inconsistencies.
The purely geometric relations between two sets of contact angles obtained for two different
choices of the dividing interfaces are at variance with those equations and with our result from the
decomposition of the free energy which states that the line tension $\tau$ is
independent of such descriptive ambiguities. In the case of a drop on a solid substrate
using the alternative definition $\tau_{\mathrm{w}}$ of a line tension cannot
resolve those inconsistencies.
\par
In order to find equations for the contact angles which are internally consistent we have followed
two different routes. The first one is determined by the observation that the
grand canonical free energy must be independent of notional (descriptive) changes of the system, i.e.,
of `parallel' shifts of the dividing interfaces at fixed physical configurations. In the mathematical formulation
we introduced notional changes of the line tension and corresponding
notional derivatives of $\tau$ with respect to contact angle(s) and to the contact-line radius
in addition to the well established notional derivatives of surface tensions with respect to radii of curvature.
The second route follows a minimization of the grand canonical free energy under the constraint of fixed
drop or lens volume. In contrast to the previous formulations we have included into the free energy
contributions from the stiffness of the interfaces with respect to a change of the radius of curvature
and from stiffnesses of the contact line with respect to changes in contact angle(s) and to the contact-line radius,
all at fixed thermodynamic variables.
The equations following from these two routes have been compared and combined into one
(set) of equations. For the lens these are Eqs. (\ref{lens22}) and (\ref{lens23}) together with
the relations in Eqs. (\ref{lens25}) -- (\ref{lens27}) and also Eqs. (\ref{lens18}) and (\ref{lens19}) relating the
stiffness constants and notional derivatives. For the drop the corresponding equations are given
by Eq. (\ref{drop12}) and the relations in Eqs. (\ref{drop13}) and (\ref{drop8}).
Furthermore we have found that the actual values of the stiffness constants depend on the choice
of the dividing interfaces. The relations between two sets of stiffness constants for two different
sets of dividing interfaces are given by Eqs. (\ref{lens16}) and (\ref{lens17}) for the lens and by
Eq. (\ref{drop11}) for the drop. (In case of the drop one could also use the alternative Eqs.
(\ref{drop8-neu}) to (\ref{drop12-neu-e}).)
\par
At this point it is interesting to see that for reasons similar to those which compelled us to introduce
notional derivatives of $\tau$ with respect to contact angles or stiffnesses of $\tau$ with respect
to changes of contact angles, Djikaev and Widom \cite{Widom-n2} (see also Ref. \cite{Widom-n3})
introduced a kind of `derivative'
of $\tau$ with respect to contact angle in order to restore invariance against notional shifts of
dividing interfaces of their linear adsorption equation for a straight contact
line at a genuine three-phase contact.
Even closer to our discussion is the work by Rusanov et al. \cite{Rusanov-1}, which differs,
however, from ours in a number of important points. First, Rusanov et al. discuss only
drops on a substrate, whereas we discuss both lenses and drops. Second, in contrast to
Rusanov et al. we have
included notional shifts of the substrate--fluid interfaces. Third, Rusanov et al. have not
used a standard variational principle at constant volume
(typical examples for the use of this principle are given, e.g., in Refs. \cite{Wid3,Nav1,Blok3t})
and they did not give transformation laws between values of stiffness constants for different
choices of dividing interfaces.
Finally we have tried to make clear at which points in our line of arguments there
is still room for choosing different conventions and what follows as a necessity for any sensible convention.
\par
Next we discuss the consequences of our investigations for interpreting experimental data.
Before doing so, we provide explicit expressions for the change in
contact angles $\beta= 2\pi - (\alpha+\gamma)$ or $\theta$ with respect to
their limiting values $\alpha _0$, $\beta _0$, $\gamma_0$ or $\theta_0$ for macroscopically large lenses or drops,
respectively (see Fig. 3).
From Eqs. (\ref{lens23}) and (\ref{lens22}) together with Eq. (\ref{lensdrop5}) and
the corresponding equation for $\sigma_{\beta \gamma}$ we obtain
\begin{equation}
\begin{split}
\label{lens28}
\cos \beta - \cos \beta _0 & =
- \frac{\sin \beta _0}{r} \biggl \{
2 \delta _{\alpha \beta}^{\mathrm{T}} \cos \alpha _0
\\
& + 2 \delta _{\beta \gamma}^{\mathrm{T}} \cos \gamma _0
+ \frac{ 2 \left ( \tau + r \frac{\mathrm{d} \tau}{\mathrm{d} r} \vert \right ) }
{\sigma_{\alpha \beta}^\infty \sin \alpha _0 + \sigma_{\beta \gamma}^\infty \sin \gamma _0 }
\\
& + \frac{ \cos \alpha _0}{\sigma_{\alpha \beta}^\infty } \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \Big\vert
+ \frac{ \cos \gamma _0}{\sigma_{\beta \gamma}^\infty } \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \Big\vert
\biggr \}
+ \mathrm{O}\left ( \frac{\ln r}{r^2} \right ).
\end{split}
\end{equation}
Similarly, Eq. (\ref{drop12}) together with Eq. (\ref{lensdrop5}) yields
\begin{equation}
\begin{split}
\label{drop14}
\cos \theta - \cos \theta _0 & =
\frac{1}{r \sigma_{\alpha \beta}^\infty } \biggl \{
\left ( 2 \delta _{\alpha \beta}^{\mathrm{T}} \sigma_{\alpha \beta}^\infty
- \frac{\mathrm{d} \tau}{\mathrm{d} \theta} \Big\vert \right )
\sin \theta _0 \cos \theta _0
\\
& -\tau -r \frac{\mathrm{d} \tau}{\mathrm{d} r} \Big\vert
\biggr \}
+ \mathrm{O}\left ( \frac{\ln r}{r^2} \right ).
\end{split}
\end{equation}
The leading correction terms, of order $\sim \frac{\ln r}{r^2}$, arise due to algebraically decaying dispersion
forces among the particles (see Ref. \cite{T0}).
In Eqs. (\ref{lens28}) and (\ref{drop14}) $r$ is the radius of the circular contact line,
$\sigma_{\xi \nu}^\infty$ the interface tension of the planar $\xi$--$\nu$ interface,
$\delta _{\xi \nu}^{\mathrm{T}}$ the Tolman length of the $\xi$--$\nu$ interface,
$\frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \vert$ etc. are stiffnesses against changes of contact
angles or of the radius of the line, which are attributed to the line, and $\tau$ is the line tension.
In the traditional analyses of size-dependent contact angles only the term proportional to $\tau$ is
included. However, the omitted terms give contributions to the right hand sides of Eqs. (\ref{lens28}) and
(\ref{drop14}) which are comparable in magnitude to that term. Although $\tau$ is independent of the
chosen dividing interfaces, the stiffnesses $\frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \vert$ and
$\frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \vert$ in the case of the lens and
$\frac{\mathrm{d} \tau}{\mathrm{d} r} \vert$ and $\frac{\mathrm{d} \tau}{\mathrm{d} \theta} \vert$
in the case of the drop are not. Therefore, the additional terms in Eqs. (\ref{lens28}) and (\ref{drop14})
containing these stiffnesses even depend on
the positions of dividing interfaces which may be chosen arbitrarily within a certain range.
This is in accordance with the fact that this dependence is also present for the contact angles on the lhs
of Eqs. (\ref{lens28}) and (\ref{drop14}) (see step 1 in the protocol given below).
Moreover, the changes in the values of these particular terms with the position of dividing interfaces
are as big as the term proportional to $\tau$ itself.
In order to demonstrate this we compare $\tau$ with
$ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \vert$, $ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \vert$,
$\frac{\mathrm{d} \tau}{\mathrm{d} \theta} \vert$, or $\frac{\mathrm{d} \tau}{\mathrm{d} r} \vert$
as well as with the terms of the form $ 2 \delta ^{\mathrm{T}} \sigma$ involving the Tolman length
$\delta ^{\mathrm{T}}$.
The change, e.g., of $ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \vert$ with a shift of the
$\alpha$--$\beta$ dividing surface by $[ \mathrm{d} R_1 ]$ is equal to $ - \sigma_{\alpha \beta} [ \mathrm{d} R_1 ]$.
For typical values of
$\sigma_{\alpha \beta}$ of the order of $10^{-2}\, \mathrm{J/m^{2}}$, and for
$[ \mathrm{d} R_1 ]$ of the order of $1\, \mathrm{nm}$ one obtains
a change in the value of $ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \vert$
of the order of $10^{-11}\, \mathrm{J/m}$, which is just the typical value of calculated or `measured' line tensions.
Similar estimates hold for the other terms in particular also for those of the form
$ 2 \delta ^{\mathrm{T}} \sigma$ if the reasonable assumption is made that
$ 2 \delta ^{\mathrm{T}}$ is not much smaller than $1\, \mathrm{nm}$
(typical values of $ 2 \delta ^{\mathrm{T}}$ obtained theoretically are of the order of
half a molecular diameter, see, e.g., Refs. \cite{ThomGu,T2,T7}).
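The order-of-magnitude estimate above is elementary arithmetic; a one-line check, using the representative values quoted in the text:

```python
# Check of the order-of-magnitude estimate in the text: shifting the
# alpha-beta dividing surface by [dR1] changes d(tau)/d(alpha)| by
# -sigma_ab * [dR1] in magnitude.
sigma_ab = 1.0e-2   # typical interface tension, J/m^2 (value quoted in the text)
dR1 = 1.0e-9        # shift of the dividing surface, 1 nm (value quoted in the text)

change = sigma_ab * dR1
print(change)       # 1e-11 J/m, the typical size of calculated or measured line tensions
```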
\par
In order to analyze experimental data with the help of Eqs. (\ref{lens28}) and (\ref{drop14})
a certain protocol has to be followed as discussed below:
\begin{enumerate}
\item Giving experimental values of contact angles $\beta$ or $\theta$ requires defining
the dividing interfaces relative to which the contact angles are measured.
(So far this information is missing for basically all published experimental
efforts to measure the line tension.)
After the dividing interfaces have been chosen (iso-density surfaces), the
spherical parts of the interface profiles of the drop or lens have to be determined
by fitting spheres to their central parts (e.g., to the top of the drop).
In case of the drop, one such sphere intersects with the planar solid--fluid
interface (chosen by convention) at the three-phase-contact line. The contact angle
at this line is obtained by a tangential-plane construction to the sphere, i.e., to
the spherical extrapolation of the liquid--gas interface down to the solid--fluid interface.
In the case of the lens,
two spheres are defined by the central part of the lens. These spheres intersect at the
three-phase-contact line, at which tangential
planes to the spheres define the contact angles.
\item The dependence of these angles on the lens or drop sizes, as characterized by $r$,
has to be studied in the {\em leading} asymptotic behavior for large $r$. This dependence
is not only determined by the line tension $\tau$ calculated for a straight
contact line but it depends on additional material parameters. Therefore, the
common practice of inferring $\tau$ from $\beta(r)$ or $\theta(r)$ according to
Eqs. (\ref{mye10}) or (\ref{mye2}) is not valid.
\item If one were to compare the rhs of Eqs. (\ref{lens28}) and (\ref{drop14}) with theoretical
quantities, these would have to be evaluated for that choice of the dividing interfaces
for which the experimental data on the lhs are given.
\item The values of certain stiffness constants appearing on the rhs of
Eqs. (\ref{lens28}) and (\ref{drop14}) depend sensitively on
the choice of dividing interfaces. However, the transformation behavior of
these material parameters between different such choices
is known and given by Eqs. (\ref{lens16}) and (\ref{lens17}) for the lens and
Eq. (\ref{drop11}) for the drop.
\item All quantities on the rhs of Eqs. (\ref{lens28}) and (\ref{drop14}) are accessible to
theoretical computations, e.g., based on density functional theory. Methods that can
be used to calculate the Tolman lengths are described in the literature (see, e.g., Refs.
\cite{ThomGu,T1,T1b,T101,T2,T3,T4,T5,T6,T7,T8,T11,T14}).
\\
In calculations of the line tension much care has to be taken not to pick up
additional artificial line or edge contributions. In the likely case that
within a theoretical approach one has calculated
$\tau_{\mathrm{w}}$, its relation to $\tau$ is given by
Eqs. (\ref{drop-again-4})
and (\ref{drop-again-10}), provided both quantities apply to the same thermo\-dynamic
conditions.
\\
Once the density profiles for a lens or drop
have been computed,
the stiffness constants $\frac{\mathrm{d} \tau}{\mathrm{d} \theta} \vert$,
$\frac{\mathrm{d} \tau}{\mathrm{d} r} \vert$ etc. may be determined from their relations to
the notional derivatives of $\tau$ (see, e.g., Eq. (\ref{drop13})) or in the case of
$\frac{\mathrm{d} \tau}{\mathrm{d} \theta} \vert$ also to the stiffness constant
of the interface (see, e.g., Eq. (\ref{drop8})) which again is given by a notional derivative
(Eq. (\ref{lensdrop8})).
Similar equations which apply to the case of the lens are Eqs. (\ref{lens18}), (\ref{lens19}),
(\ref{lens25}), (\ref{lens26}), and (\ref{lens27}). $\tau$ itself as well as the
notional derivatives of $\tau$ follow from decomposing the grand canonical potential for a series of
different choices for the dividing interfaces. For instance one might decompose the grand canonical
potential $\Omega$ of a given lens for a series of different values of $[R_1]$ and $[R_2]$.
Carrying out this decomposition up to the order $1/r$ and making use of previous knowledge about
the notional derivatives of the surface tensions (see Eq. (\ref{lensdrop7})), the notional derivatives
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} R_1} \right] $ and
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} R_2} \right] $
can be determined from the dependence of $\tau$ on $[R_1]$ and $[R_2]$.
These notional derivatives may be then converted
into the notional derivatives $ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \right]$
and $ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \right]$ via the relations
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} R_1} \right] =
\frac{ \sin \alpha \cos (\alpha + \gamma ) }{r \sin (\alpha + \gamma )}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \right]
+ \frac{ \sin \gamma }{r \sin (\alpha + \gamma )}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma } \right] $
and
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} R_2} \right] =
\frac{ \sin \alpha }{r \sin (\alpha + \gamma )}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \right]
+ \frac{ \sin \gamma \cos (\alpha + \gamma ) }{r \sin (\alpha + \gamma )}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma } \right] $.
Similarly the notional derivatives $ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} R} \right] $ and
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} h} \right] $ can be determined in case of the drop
and converted into $ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} \theta} \right] $ and
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} r} \right] $ via the relations
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} R} \right] = \frac{ \cos \theta }{r}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \theta} \right] +
\frac{ 1 }{ \sin \theta } \left[ \frac{\mathrm{d} \tau}{\mathrm{d} r} \right] $
and
$ \left[ \frac{\mathrm{d} \tau}{\mathrm{d} h} \right] = - \frac{ 1 }{r}
\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \theta} \right] -
\frac{ \cos \theta }{ \sin \theta } \left[ \frac{\mathrm{d} \tau}{\mathrm{d} r} \right] $ .
\end{enumerate}
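As a purely numerical illustration of the conversion described in the last step of the protocol: the two relations expressing $\left[ \frac{\mathrm{d} \tau}{\mathrm{d} R_1} \right]$ and $\left[ \frac{\mathrm{d} \tau}{\mathrm{d} R_2} \right]$ in terms of $\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \alpha} \right]$ and $\left[ \frac{\mathrm{d} \tau}{\mathrm{d} \gamma} \right]$ form a $2\times 2$ linear system that can be inverted. The sketch below uses made-up input values (all numbers are assumptions) and performs a round-trip check.

```python
import math

def lens_angle_derivatives(dtau_dR1, dtau_dR2, alpha, gamma, r):
    """Recover the notional derivatives [d tau/d alpha], [d tau/d gamma]
    from [d tau/d R1], [d tau/d R2] by inverting the 2x2 linear relations
    for the lens (pure-Python 2x2 solve)."""
    s = math.sin(alpha + gamma)
    c = math.cos(alpha + gamma)
    # coefficient matrix of the system  (dR1, dR2)^T = M (dalpha, dgamma)^T
    a11 = math.sin(alpha) * c / (r * s); a12 = math.sin(gamma) / (r * s)
    a21 = math.sin(alpha) / (r * s);     a22 = math.sin(gamma) * c / (r * s)
    det = a11 * a22 - a12 * a21
    dtau_dalpha = ( a22 * dtau_dR1 - a12 * dtau_dR2) / det
    dtau_dgamma = (-a21 * dtau_dR1 + a11 * dtau_dR2) / det
    return dtau_dalpha, dtau_dgamma

# round-trip check with illustrative (made-up) values
alpha, gamma, r = math.radians(50), math.radians(70), 1.0e-6
da, dg = 2.0e-11, -1.0e-11          # assumed angle derivatives, J/m
s, c = math.sin(alpha + gamma), math.cos(alpha + gamma)
dR1 = (math.sin(alpha) * c * da + math.sin(gamma) * dg) / (r * s)
dR2 = (math.sin(alpha) * da + math.sin(gamma) * c * dg) / (r * s)
rec = lens_angle_derivatives(dR1, dR2, alpha, gamma, r)
print(rec)
```

The system is invertible as long as $\alpha$, $\gamma$, and $\alpha+\gamma$ stay away from $0$ and $\pi$, since the determinant is $-\sin\alpha\sin\gamma/r^2$.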
Although each individual quantity entering the rhs of Eqs. (\ref{lens28}) and (\ref{drop14}) can be
calculated separately in the way indicated above, measurements of the size dependent contact angles
of drops or lenses can provide only certain combinations of material parameters.
Their values depend sensitively but in a known way on
the choice of dividing interfaces.
\par
There still remain a number of open questions.
\begin{enumerate}
\item How can the various stiffness constants attributed to the contact line be measured?
The answer to this question requires the extension of the principles developed
here for simple geometries to more general geometries.
How do the stiffness constants of the contact line influence the equilibrium
shapes of more complex structures, e.g., of liquid bridges between solid substrates
or of lenses and drops distorted from their ideal shape due to external forces?
\item How can the stiffness constants of the contact line introduced here
be related to the `derivatives' of $\tau$ with respect to contact angles
which were introduced by Djikaev and Widom \cite{Widom-n2} in order to restore
invariance of their linear adsorption equation against notional shifts of dividing interfaces?
\end{enumerate}
\vspace{0.40cm}
{\large \bf Acknowledgements:}
The authors thank R. Roth and B. Widom for helpful comments.
One of the authors M.N. expresses his thanks for the hospitality at the Max-Planck-Institut
f\"ur Metallforschung where most of this work has been done; he also acknowledges the support
via the Polish Ministry of Science and Higher Education grant N 202 076 31/0108.
\section{Introduction}
Trees are natural structures used in many fields of computer science, such as XML~\cite{XML1}, indexing, natural language processing, code generation for compilers, term rewriting~\cite{tata2007}, cryptography~\cite{DBLP:conf/stacs/2004}, etc. The widespread use of this structure leads one to consider its theoretical foundations.
In fact, in many cases, the blow-up of trees causes difficulties in storing and representing this large amount of data. To overcome this problem, several solutions exist. Among them is the use of tree automata and rational tree expressions as compact and finite structures that recognize and represent infinite sets of trees.
As a part of formal language theory, trees can be considered as a generalization of strings. Indeed, in the late 1960s~\cite{Brain97,magidor1969finite}, many researchers generalized notions from strings to trees, leading to tree languages, tree automata, rational tree expressions, tree grammars, etc.
Since tree automata are useful from an acceptance point of view and rational expressions from a descriptive one, the equivalence between the two representations must be established. Fortunately, the Kleene-like result of~\cite{TH68} states this equivalence between the languages accepted by tree automata and the languages denoted by rational expressions.
Kleene's theorem states that the set of languages denoted by rational expressions over a ranked alphabet $\Sigma$, denoted $Rat(\Sigma)$, and the set of recognizable languages over $\Sigma$, denoted $Rec(\Sigma)$, coincide. This can also be checked by verifying the two inclusions $Rat(\Sigma)\subseteq Rec(\Sigma)$ and $Rec(\Sigma)\subseteq Rat(\Sigma')$ where $\Sigma\subseteq\Sigma'$.
In other words, any tree language is recognized by some automaton if and only if it is denoted by some rational expression.
Thus two constructions arise.
From a rational expression to a tree automaton, several techniques exist. First, Kuske and Meinecke~\cite{DBLP:journals/ita/KuskeM11} generalized the notion of partial derivatives of languages~\cite{DBLP:journals/tcs/Antimirov96} from strings to trees and proposed the tree equation automaton, which is constructed by derivation of a linearized version of the rational expression. They use the ZPC structure~\cite{DBLP:journals/ijac/ChamparnaudZ01} to reach the best known complexity. Later, Mignot et al.~\cite{DBLP:journals/corr/MignotSZ14} proposed an efficient algorithm to compute this generalized tree equation automaton. Next, Laugerotte et al.~\cite{DBLP:conf/lata/LaugerotteSZ13} generalized the position automaton to trees.
Finally, the morphic links between these constructions have been defined in~\cite{AFL2014}.
In this paper, we propose a construction for the second direction of Kleene's theorem, the passage from a tree automaton to a rational tree expression. To this end, we generalize Arden's lemma from strings to trees. The complexity of such a construction is exponential.
Section \ref{pre} recalls some preliminaries and basic properties.
We generalize the notion of equation system in Section~\ref{sec eq syst}.
Next the generalization of Arden's lemma to trees and its proof is given in Section \ref{ar}, leading to the computation of some solutions for particular recursive systems.
Finally, we show how to compute a rational expression denoting the language recognized by a tree automaton in Section~\ref{co}.
\section{Preliminaries and Basic Properties}\label{pre}
Let $\Sigma=\bigcup_{n\geq 0} \Sigma_n$ be a graded alphabet.
A \emph{tree} $t$ over $\Sigma$ is inductively defined by $t=f(t_1,\ldots,t_n)$ with $f\in\Sigma_n$ and $t_1,\ldots,t_n$ any $n$ trees over $\Sigma$; the set of all trees over $\Sigma$ is denoted by $T(\Sigma)$.
A \emph{tree language} is a subset of $T(\Sigma)$.
The \emph{subtrees set} $\mathrm{St}(t)$ of a tree $t=f(t_1,\ldots,t_n)$ is defined by $\mathrm{St}(t)=\{t\}\cup \bigcup ^n_{k=1} \mathrm{St}(t_k)$.
This set is extended to tree languages, and the \emph{subtrees set} $\mathrm{St}(L)$ of a tree language $L\subset T(\Sigma)$ is $\mathrm{St}(L)=\bigcup_{t\in L} \mathrm{St}(t)$.
The \emph{height} of a tree $t$ in $T(\Sigma)$ is defined inductively by $\mathrm{Height}(f(t_1,\ldots,t_n))=1+\max\{\mathrm{Height}(t_i)\mid 1\leq i\leq n\}$ where $f$ is a symbol in $\Sigma_n$ and $t_1,\ldots,t_n$ are any $n$ trees over $\Sigma$.
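These inductive definitions translate directly into code. The following sketch (an illustration, not part of the paper) encodes a tree $f(t_1,\ldots,t_n)$ as a nested tuple $(f, t_1, \ldots, t_n)$ and computes $\mathrm{St}(t)$ and $\mathrm{Height}(t)$; under this encoding a constant symbol is a tree of height $1$.

```python
# Trees are encoded as nested tuples (symbol, subtree_1, ..., subtree_n);
# names and encoding are illustrative choices, not from the paper.
def subtrees(t):
    """St(t) = {t} union the subtrees of the children of t."""
    result = {t}
    for child in t[1:]:
        result |= subtrees(child)
    return result

def height(t):
    """Height(f(t_1,...,t_n)) = 1 + max Height(t_i); a constant has height 1."""
    return 1 + max((height(child) for child in t[1:]), default=0)

t = ('f', ('g', ('a',)), ('a',))   # the tree f(g(a), a)
print(height(t))                   # 3
print(len(subtrees(t)))            # 3 distinct subtrees: f(g(a),a), g(a), a
```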
A \emph{finite tree automaton} (FTA) over $\Sigma$ is a $4$-tuple $\mathcal{A}=(\Sigma,Q,Q_f,\Delta)$ where $Q$ is a finite set of \emph{states}, $Q_f\subset Q$ is the set of \emph{final states} and $\Delta \subset \bigcup_{n\geq 0}\Sigma_n \times Q^{n+1}$ is a finite set of \emph{transitions}.
The \emph{output} of $\mathcal{A}$, noted $\delta$, is a function from $T(\Sigma)$ to $2^Q$ inductively defined for any tree $t=f(t_1,\ldots,t_n)$ by $\delta(t)=\{q\in Q \mid \exists (f,q_1,\ldots,q_n,q)\in \Delta, (\forall 1\leq i\leq n,q_i\in\delta(t_i))\}$.
The \emph{accepted language} of $\mathcal{A}$ is $L(\mathcal{A})=\{t\in T(\Sigma)| \delta(t)\cap Q_f\neq\emptyset\}$.
The \emph{state language} $L(q)$ (also known as down language~\cite{conf/stringology/CleophasKSW09}) of a state $q\in Q$ is defined by $L(q)=\{t \in T(\Sigma) | q\in \delta(t)\}$.
Obviously,
\begin{align}
L(\mathcal{A})=\bigcup_{q\in Q_f}L(q) \label{eq lang et lang bas}
\end{align}
In the remainder of this paper, we consider \emph{accessible} FTAs, that is, FTAs in which every state $q$ satisfies $L(q)\neq\emptyset$.
Any FTA admits an equivalent accessible FTA, obtained by removing the states whose down language is empty.
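The bottom-up computation of the output function $\delta$ can be sketched as follows; the transition format mirrors the tuples $(f,q_1,\ldots,q_n,q)$ of $\Delta$, and the tree encoding is the same illustrative nested-tuple convention as above:

```python
# Bottom-up evaluation of the output delta of an FTA.
# A transition is a tuple (f, q_1, ..., q_n, q): reading symbol f with the
# children evaluated to q_1..q_n may yield state q.

def delta(t, transitions):
    """Return the set of states delta(t), computed bottom-up."""
    child_states = [delta(child, transitions) for child in t[1:]]
    symbol = t[0]
    result = set()
    for rule in transitions:
        f, qs, q = rule[0], rule[1:-1], rule[-1]
        if f == symbol and len(qs) == len(child_states) \
           and all(qi in si for qi, si in zip(qs, child_states)):
            result.add(q)
    return result

def accepts(t, transitions, final_states):
    """t is accepted iff delta(t) meets the final states."""
    return bool(delta(t, transitions) & final_states)

# A small automaton accepting f(a, a), f(a, f(a, a)), ...
rules = [("a", 1), ("f", 1, 1, 2), ("f", 1, 2, 2)]
print(accepts(("f", ("a",), ("a",)), rules, {2}))  # True
print(accepts(("a",), rules, {2}))                 # False
```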
Given a symbol $c$ in $\Sigma_0$, the $c$-\emph{product} is the operation $\cdot_c$ defined for any tree $t$ in $T(\Sigma)$ and for any tree language $L$ by
\begin{equation}
t\cdot_c L=
\left\{
\begin{array}{l@{\ }l}
L & \text{ if }t=c,\\
\{d\} & \text{ if }t=d\in\Sigma_0\setminus\{c\},\\
f(t_1\cdot_c L,\ldots,t_n\cdot_c L ) & \text{ otherwise if } t=f(t_1,\ldots,t_n)\\
\end{array}
\right.\label{eq def lang sub}
\end{equation}
This $c$-product is extended for any two tree languages $L$ and $L'$ by $L\cdot_c L'=\bigcup_{t\in L} t\cdot_c L'$.
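For finite languages, Equation~(3) can be computed directly: every occurrence of the leaf $c$ in $t$ is replaced, independently of the others, by some tree of $L$. A sketch under the same illustrative encoding (the helper names are ours):

```python
# The c-product t ._c L: each leaf c of t is replaced, independently,
# by a tree of the (finite) language L.

def c_product_tree(t, c, language):
    """Return the set of trees t ._c L."""
    if t == (c,):
        return set(language)
    if len(t) == 1:                      # another leaf d != c
        return {t}
    # inner node: combine the c-products of the children in all ways
    results = {(t[0],)}
    for child in t[1:]:
        results = {partial + (s,)
                   for partial in results
                   for s in c_product_tree(child, c, language)}
    return results

def c_product(language1, c, language2):
    """L ._c L' = union over t in L of t ._c L'."""
    return set().union(*(c_product_tree(t, c, language2) for t in language1))

trees = c_product_tree(("f", ("c",), ("c",)), "c", {("a",), ("b",)})
print(len(trees))  # 4: f(a,a), f(a,b), f(b,a), f(b,b)
```

Note that the two occurrences of $c$ are substituted independently, which is why $f(c,c)\cdot_c\{a,b\}$ contains the mixed tree $f(a,b)$.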
In the remainder of this paper, we use equivalences over expressions that rely on some properties of the $c$-product, which we now state.
As for the catenation product in the string case, the $c$-product distributes over the union:
\begin{lemma}\label{lem distrib prod sum}
Let $L_1$, $L_2$ and $L_3$ be three tree languages over $\Sigma$.
Let $c$ be a symbol in $\Sigma_0$.
Then:
\begin{align*}
(L_1\cup L_2)\cdot_c L_3 &= (L_1 \cdot_c L_3) \cup (L_2\cdot_c L_3)
\end{align*}
\end{lemma}
\begin{proof}
Let $t$ be a tree in $T(\Sigma)$.
Then:
\begin{align*}
t\in (L_1\cup L_2)\cdot_c L_3 & \Leftrightarrow \exists u\in L_1\cup L_2, \exists v\in L_3, t=u\cdot_c v \\
& \Leftrightarrow (\exists u\in L_1, \exists v\in L_3, t=u\cdot_c v )\vee (\exists u\in L_2, \exists v\in L_3, t=u\cdot_c v) \\
& \Leftrightarrow t\in (L_1 \cdot_c L_3) \cup (L_2\cdot_c L_3)
\end{align*}
\qed
\end{proof}
Another property shared with the catenation product is that each operator $\cdot_c$ is associative:
\begin{lemma}
Let $t$ and $t'$ be any two trees in $T(\Sigma)$, let $L$ be a tree language over $\Sigma$ and let $c$ be a symbol in $\Sigma_0$.
Then:
\begin{align*}
t\cdot_c(t'\cdot_c L)& = (t\cdot_c t')\cdot_c L
\end{align*}
\end{lemma}
\begin{proof}
By induction over the structure of $t$.
\begin{enumerate}
\item Consider that $t=c$.
Then $t\cdot_c(t'\cdot_c L)=t'\cdot_c L=(t\cdot_c t')\cdot_c L$.
\item Consider that $t\in\Sigma_0\setminus\{c\}$.
Then $t\cdot_c(t'\cdot_c L)=t=(t\cdot_c t')\cdot_c L$.
\item Let us suppose that $t=f(t_1,\ldots,t_n)$ with $n>0$. Then, following Equation~\eqref{eq def lang sub}:
\begin{align*}
f(t_1,\ldots,t_n) \cdot_c(t'\cdot_c L)& = f(t_1\cdot_c(t'\cdot_c L),\ldots,t_n\cdot_c(t'\cdot_c L)) \\
& = f((t_1\cdot_c t')\cdot_c L,\ldots,(t_n\cdot_c t')\cdot_c L) & \text{(Induction hypothesis)}\\
& =f(t_1\cdot_c t',\ldots,t_n\cdot_c t')\cdot_c L \\
& =(f(t_1,\ldots,t_n)\cdot_c t')\cdot_c L
\end{align*}
\end{enumerate}
\qed
\end{proof}
\begin{corollary}\label{cor cdotc assoc}
Let $L$, $L'$ and $L''$ be any three tree languages over a graded alphabet $\Sigma$ and let $c$ be a symbol in $\Sigma_0$.
Then:
\begin{align*}
L\cdot_c(L'\cdot_c L'') &= (L\cdot_c L')\cdot_c L''
\end{align*}
\end{corollary}
However, the associativity is not necessarily satisfied if the substitution symbols are different; as an example, $(f(a,b)\cdot_a b)\cdot_b c\neq f(a,b) \cdot_a (b\cdot_b c)$.
The last common property is that the operation $\cdot_c$ is compatible with the inclusion:
\begin{lemma}
Let $t$ be a tree over $\Sigma$, and let $L\subset L'$ be two tree languages over $\Sigma$.
Then:
\begin{align*}
t\cdot_c L & \subset t\cdot_c L'
\end{align*}
\end{lemma}
\begin{proof}
By induction over the structure of $t$.
\begin{enumerate}
\item Consider that $t=c$.
Then $c\cdot_c L =L \subset L'=c\cdot_c L'$.
\item Consider that $t\in\Sigma_0\setminus\{c\}$.
Then $t \cdot_c L =\{t\}=t\cdot_c L'$.
\item Let us suppose that $t=f(t_1,\ldots,t_n)$.
\begin{align*}
\intertext{Then}
f(t_1,\ldots,t_n) \cdot_c L & = f(t_1\cdot_c L ,\ldots,t_n\cdot_c L)
\intertext{By induction hypothesis,}
\forall 1\leq j\leq n, t_j\cdot_c L & \subset t_j \cdot_c L'
\intertext{Therefore,}
f(t_1\cdot_c L ,\ldots,t_n\cdot_c L)& \subset f(t_1\cdot_c L' ,\ldots,t_n\cdot_c L')=t\cdot_c L'
\end{align*}
\end{enumerate}
\qed
\end{proof}
\begin{corollary}\label{cor cdotc comp incl}
Let $L$, $L'\subset L''$ be any three tree languages over $\Sigma$ and let $c$ be a symbol in $\Sigma_0$.
Then:
\begin{align*}
L\cdot_c L' & \subset L\cdot_c L''
\end{align*}
\end{corollary}
The first property not shared with the classical catenation product is that the $c$-product may distribute over other products:
\begin{lemma}\label{lem distribut sub}
Let $t_1$, $t_2$ and $t_3$ be any three trees in $T(\Sigma)$.
Let $a$ and $b$ be two distinct symbols in $\Sigma_0$ such that $a$ does not appear in $t_3$.
Then:
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 & =(t_1 \cdot_b t_3)\cdot_a (t_2\cdot_b t_3)
\end{align*}
\end{lemma}
\begin{proof}
By induction over $t_1$.
\begin{enumerate}
\item If $t_1=a$, then
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 = t_2 \cdot_b t_3 =(t_1 \cdot_b t_3)\cdot_a (t_2\cdot_b t_3)
\end{align*}
\item If $t_1=b$, then
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 = t_3 =(t_1 \cdot_b t_3)\cdot_a (t_2\cdot_b t_3)
\end{align*}
\item If $t_1=c\in\Sigma_0\setminus\{a,b\}$, then
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 = t_1 =(t_1 \cdot_b t_3)\cdot_a (t_2\cdot_b t_3)
\end{align*}
\item If $t_1=f(u_1,\ldots,u_n)$ with $n>0$, then, following Equation~\eqref{eq def lang sub}:
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 & = (f(u_1\cdot_a t_2,\ldots,u_n\cdot_a t_2))\cdot_b t_3\\
& = f((u_1\cdot_a t_2)\cdot_b t_3,\ldots,(u_n\cdot_a t_2)\cdot_b t_3) \\
& = f((u_1\cdot_b t_3)\cdot_a (t_2\cdot_b t_3),\ldots,(u_n\cdot_b t_3)\cdot_a (t_2\cdot_b t_3)) & \text{(Induction Hypothesis)}\\
& = f(u_1\cdot_b t_3,\ldots,u_n\cdot_b t_3)\cdot_a (t_2\cdot_b t_3) \\
& = (f(u_1,\ldots,u_n)\cdot_b t_3)\cdot_a (t_2\cdot_b t_3)
\end{align*}
\end{enumerate}
\qed
\end{proof}
\begin{corollary}\label{cor distribut sub}
Let $L_1$, $L_2$ and $L_3$ be any three tree languages over $\Sigma$.
Let $a$ and $b$ be two distinct symbols in $\Sigma_0$ such that $L_3\subset T(\Sigma\setminus\{a\})$.
Then:
\begin{align*}
(L_1\cdot_a L_2)\cdot_b L_3 & =(L_1 \cdot_b L_3)\cdot_a (L_2\cdot_b L_3)
\end{align*}
\end{corollary}
In some particular cases, two products commute:
\begin{lemma}\label{lem commut sub}
Let $t_1$, $t_2$ and $t_3$ be any three trees in $T(\Sigma)$.
Let $a$ and $b$ be two distinct symbols in $\Sigma_0$ such that $a$ does not appear in $t_3$ and such that $b$ does not appear in $t_2$.
Then:
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 & =(t_1 \cdot_b t_3)\cdot_a t_2
\end{align*}
\end{lemma}
\begin{proof}
By induction over $t_1$.
\begin{enumerate}
\item If $t_1=a$, then
\begin{alignat*}{2}
(t_1\cdot_a t_2)\cdot_b t_3 & = (a\cdot_a t_2) \cdot_b t_3 & & = t_2 \cdot_b t_3\\
& = t_2 & & = a \cdot_a t_2\\
& = (a \cdot_b t_3) \cdot_a t_2 & & =(t_1 \cdot_b t_3)\cdot_a t_2
\end{alignat*}
\item If $t_1=b$, then
\begin{alignat*}{2}
(t_1\cdot_a t_2)\cdot_b t_3 & = (b\cdot_a t_2)\cdot_b t_3&& = b \cdot_b t_3\\
& = t_3 && = t_3\cdot_a t_2 \\
& = (b \cdot_b t_3)\cdot_a t_2 && = (t_1 \cdot_b t_3)\cdot_a t_2
\end{alignat*}
\item If $t_1=c\in\Sigma_0\setminus\{a,b\}$, then
\begin{alignat*}{2}
(t_1\cdot_a t_2)\cdot_b t_3 & = (c \cdot_a t_2)\cdot_b t_3 && = c \cdot_b t_3\\
& = c && = c \cdot_a t_2\\
& = (c \cdot_b t_3) \cdot_a t_2 && = (t_1 \cdot_b t_3)\cdot_a t_2
\end{alignat*}
\item If $t_1=f(u_1,\ldots,u_n)$ then, following Equation~\eqref{eq def lang sub}:
\begin{align*}
(t_1\cdot_a t_2)\cdot_b t_3 & = (f(u_1\cdot_a t_2,\ldots,u_n\cdot_a t_2))\cdot_b t_3\\
& = f((u_1\cdot_a t_2)\cdot_b t_3,\ldots,(u_n\cdot_a t_2)\cdot_b t_3)\\
& = f((u_1\cdot_b t_3)\cdot_a t_2,\ldots,(u_n\cdot_b t_3)\cdot_a t_2)& \text{(Induction Hypothesis)}\\
& = f(u_1\cdot_b t_3,\ldots,u_n\cdot_b t_3)\cdot_a t_2\\
& = (f(u_1,\ldots,u_n)\cdot_b t_3)\cdot_a t_2
\end{align*}
\end{enumerate}
\qed
\end{proof}
The \emph{iterated} $c$-\emph{product} is the operation $^{n,c}$ recursively defined for any integer $n$ by:
\begin{align}
L^{0,c} & = \{c\}\label{eq def iter prod 1}\\
L^{n+1,c} & = L^{n,c} \cup L\cdot_c L^{n,c}\label{eq def iter prod 2}
\end{align}
The $c$-\emph{closure} is the operation $^{*_{c}}$ defined by $L^{*_c} = \bigcup_{n\geq 0} L^{n,c}$.
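For a finite language $L$, the approximants $L^{n,c}$ can be enumerated directly from the two defining equations. A self-contained sketch (the $c$-product helper is restated; encoding and names are our own illustration):

```python
# Iterated c-product L^{n,c}, following
# L^{0,c} = {c} and L^{n+1,c} = L^{n,c} u (L ._c L^{n,c}).

def c_prod(t, c, lang):
    """t ._c lang for a single tree t and a finite language lang."""
    if t == (c,):
        return set(lang)
    if len(t) == 1:
        return {t}
    out = {(t[0],)}
    for child in t[1:]:
        out = {p + (s,) for p in out for s in c_prod(child, c, lang)}
    return out

def iterated(lang, n, c):
    """L^{n,c}: n steps of substitution at the leaf c."""
    result = {(c,)}                       # L^{0,c} = {c}
    for _ in range(n):
        step = set().union(*(c_prod(t, c, result) for t in lang))
        result = result | step            # L^{k+1,c} = L^{k,c} u L ._c L^{k,c}
    return result

L = {("f", ("a",), ("c",))}
print(len(iterated(L, 0, "c")))   # 1: just c
print(len(iterated(L, 2, "c")))   # 3: c, f(a,c), f(a,f(a,c))
```

The closure $L^{*_c}$ is the union over all $n$; the sketch only enumerates a finite prefix of this increasing chain.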
Notice that, unlike the string case, the products may commute with the closure in some cases:
\begin{lemma}\label{lem permut sub star}
Let $L_1$ and $L_2$ be any two tree languages over $\Sigma$.
Let $a$ and $b$ be two distinct symbols in $\Sigma_0$ such that $L_2\subset T(\Sigma\setminus\{a\})$.
Then:
\begin{align*}
L_1^{*_a}\cdot_b L_2 & =(L_1\cdot_b L_2)^{*_a}
\end{align*}
\end{lemma}
\begin{proof}
Let us show by recurrence over the integer $n$ that $L_1^{n,a}\cdot_b L_2 =(L_1\cdot_b L_2)^{n,a}$.
\begin{enumerate}
\item If $n=0$, then, according to Equation~\eqref{eq def iter prod 1}:
\begin{align*}
L_1^{0,a}\cdot_b L_2 = \{a\} = (L_1\cdot_b L_2)^{0,a}
\end{align*}
\item Suppose that the property holds for some integer $n\geq 0$. Then, following Equation~\eqref{eq def iter prod 2}:
\begin{align*}
L_1^{n+1,a}\cdot_b L_2 & = (L_1^{n,a} \cup L_1\cdot_a L_1^{n,a})\cdot_b L_2 \\
& = (L_1^{n,a})\cdot_b L_2 \cup (L_1\cdot_a L_1^{n,a})\cdot_b L_2 &\text{(Lemma~\ref{lem distrib prod sum})}\\
& = (L_1^{n,a})\cdot_b L_2 \cup ((L_1\cdot_b L_2)\cdot_a (L_1^{n,a}\cdot_b L_2)) & \text{(Corollary~\ref{cor distribut sub})}\\
& = (L_1\cdot_b L_2)^{n,a} \cup ((L_1\cdot_b L_2)\cdot_a (L_1\cdot_b L_2)^{n,a}) & \text{(Induction Hypothesis)}\\
& = (L_1\cdot_b L_2)^{n+1,a}
\end{align*}
\end{enumerate}
As a direct consequence, $L_1^{*_a}\cdot_b L_2 =(L_1\cdot_b L_2)^{*_a}$.
\qed
\end{proof}
A \emph{rational expression} $E$ over $\Sigma$ is inductively defined by:
\begin{align*}
\begin{gathered}
\begin{aligned}
E &=0, & E&=f(E_1,\ldots,E_n),
\end{aligned}\\
\begin{aligned}
E&=E_1+E_2, & E&=E_1\cdot_c E_2,& E&=E_1^{*_c}
\end{aligned}
\end{gathered}
\end{align*}
where $f$ is any symbol in $\Sigma_n$, $c$ is any symbol in $\Sigma_0$ and $E_1,\ldots,E_n$ are any $n$ rational expressions.
The \emph{language denoted by} $E$ is the tree language $L(E)$ inductively defined by:
\begin{align*}
\begin{gathered}
\begin{aligned}
L(0)&=\emptyset, & L(f(E_1,\ldots,E_n))&=f(L(E_1),\ldots,L(E_n)),
\end{aligned}\\
\begin{aligned}
L(E_1+E_2)&=L(E_1)\cup L(E_2), & L(E_1\cdot_c E_2)&=L(E_1)\cdot_c L(E_2),& L(E_1^{*_c})&=(L(E_1))^{*_c}
\end{aligned}
\end{gathered}
\end{align*}
where $f$ is any symbol in $\Sigma_n$, $c$ is any symbol in $\Sigma_0$ and $E_1,\ldots,E_n$ are any $n$ rational expressions.
In the remainder of this paper, we consider rational expressions that may include \emph{variables}.
Let $X=\{x_1,\ldots,x_k\}$ be a set of $k$ variables.
A rational expression $E$ over $(\Sigma,X)$ is inductively defined by:
\begin{align*}
\begin{gathered}
\begin{aligned}
E &=0, & E&=x_j,& E&=f(E_1,\ldots,E_n),
\end{aligned}\\
\begin{aligned}
E&=E_1+E_2, & E&=E_1\cdot_c E_2,& E&=E_1^{*_c}
\end{aligned}
\end{gathered}
\end{align*}
where $f$ is any symbol in $\Sigma_n$, $c$ is any symbol in $\Sigma_0$, $1\leq j\leq k$ is any integer and $E_1,\ldots,E_n$ are any $n$ rational expressions over $(\Sigma,X)$.
The language denoted by an expression with variables needs a context to be computed: indeed, any variable has to be evaluated according to a tree language.
Let $\mathcal{L}=(L_1,\ldots,L_k)$ be a $k$-tuple of tree languages over $\Sigma$.
The $\mathcal{L}$-language denoted by $E$ is the tree language $L_\mathcal{L}(E)$ inductively defined by:
\begin{align*}
\begin{gathered}
\begin{aligned}
L_\mathcal{L}(0)&=\emptyset, & L_\mathcal{L}(x_j)&=L_j,
\end{aligned}\\
\begin{aligned}
L_\mathcal{L}(f(E_1,\ldots,E_n))&=f(L_\mathcal{L}(E_1),\ldots,L_\mathcal{L}(E_n)),
\end{aligned}\\
\begin{aligned}
L_\mathcal{L}(E_1+E_2)&=L_\mathcal{L}(E_1)\cup L_\mathcal{L}(E_2)
\end{aligned}\\
\begin{aligned}
L_\mathcal{L}(E_1\cdot_c E_2)&=L_\mathcal{L}(E_1)\cdot_c L_\mathcal{L}(E_2),& L_\mathcal{L}(E_1^{*_c})&=(L_\mathcal{L}(E_1))^{*_c}
\end{aligned}
\end{gathered}
\end{align*}
where $f$ is any symbol in $\Sigma_n$, $c$ is any symbol in $\Sigma_0$, $1\leq j\leq k$ is any integer and $E_1,\ldots,E_n$ are any $n$ rational expressions over $(\Sigma,X)$.
Two rational expressions $E$ and $F$ with variables are \emph{equivalent}, denoted by $E\sim F$, if for any tuple $\mathcal{L}$ of languages over $\Sigma$, $L_\mathcal{L}(E)= L_\mathcal{L}(F)$.
Let $\Gamma\subset\Sigma$.
Two rational expressions $E$ and $F$ with variables are $\Gamma$-\emph{equivalent}, denoted by $E\sim_\Gamma F$, if for any tuple $\mathcal{L}$ of languages over $\Gamma$, $L_\mathcal{L}(E)= L_\mathcal{L}(F)$.
By definition,
\begin{align}
E\sim F &\Rightarrow E\sim_\Gamma F\label{eq impl sim alpha}
\end{align}
Notice that any expression over $(\Sigma,X)$ is also an expression over $\Sigma\cup X$.
However, two equivalent rational expressions over $(\Sigma,X)$ are not necessarily equivalent as rational expressions over $\Sigma\cup X$.
As an example, $x\cdot_a b$ is equivalent to $x$ as expressions over $\{a,b,x\}$, but not as expressions over $(\{a,b\},\{x\})$:
\begin{alignat*}{4}
L(x\cdot_a b) &=\{x\} &&= L(x)\\
L_{\{a\}}(x\cdot_a b)& =\{b\} &&\neq L_{\{a\}}(x) &&=\{a\}
\end{alignat*}
In the following, we denote by $E_{x\leftarrow E'}$ the expression obtained by substituting any symbol $x$ by the expression $E'$ in the expression $E$.
Obviously, this transformation is inductively defined as follows:
\begin{align*}
\begin{gathered}
\begin{aligned}
a_{x\leftarrow E'} & =a & 0_{x\leftarrow E'} & =0\\
y_{x\leftarrow E'} & =y & x_{x\leftarrow E'} & =E'
\end{aligned}\\
\begin{aligned}
(f(E_1,\ldots,E_n))_{x\leftarrow E'} =f((E_1)_{x\leftarrow E'},\ldots,(E_n)_{x\leftarrow E'})
\end{aligned}\\
\begin{aligned}
(E_1+E_2)_{x\leftarrow E'} & =(E_1)_{x\leftarrow E'}+(E_2)_{x\leftarrow E'} & (E_1\cdot_c E_2)_{x\leftarrow E'} & =(E_1)_{x\leftarrow E'}\cdot_c(E_2)_{x\leftarrow E'}
\end{aligned}\\
\begin{aligned}
(E_1^{*_c})_{x\leftarrow E'} &=((E_1)_{x\leftarrow E'})^{*_c}
\end{aligned}
\end{gathered}
\end{align*}
where $a$ is any symbol in $\Sigma_0$, $x\neq y$ are two variables in $X$, $f$ is any symbol in $\Sigma_n$, $c$ is any symbol in $\Sigma_0$ and $E_1,\ldots,E_n$ are any $n$ rational expressions over $(\Sigma,X)$.
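The inductive definition of the substitution $E_{x\leftarrow E'}$ is a straightforward structural recursion. A sketch over an illustrative tuple encoding of expressions (tags such as \texttt{"sym"}, \texttt{"prod"}, \texttt{"star"} are our own convention):

```python
# Structural substitution E_{x <- E'} on rational expressions, encoded as
# nested tuples: ("0",), ("var", x), ("sym", f, E1, ..., En),
# ("+", E1, E2), ("prod", c, E1, E2), ("star", c, E1).

def substitute(e, x, e_prime):
    """Replace every occurrence of the variable x in e by e_prime."""
    tag = e[0]
    if tag == "var":
        return e_prime if e[1] == x else e
    if tag == "0":
        return e
    if tag == "sym":
        return ("sym", e[1]) + tuple(substitute(s, x, e_prime) for s in e[2:])
    if tag == "+":
        return ("+", substitute(e[1], x, e_prime), substitute(e[2], x, e_prime))
    if tag == "prod":
        return ("prod", e[1], substitute(e[2], x, e_prime),
                substitute(e[3], x, e_prime))
    if tag == "star":
        return ("star", e[1], substitute(e[2], x, e_prime))
    raise ValueError(tag)

e = ("+", ("var", "x"), ("sym", "f", ("var", "x"), ("sym", "a")))
print(substitute(e, "x", ("sym", "b")))
# ('+', ('sym', 'b'), ('sym', 'f', ('sym', 'b'), ('sym', 'a')))
```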
This transformation preserves the language in the following case:
\begin{lemma}\label{lem sub cons lang}
Let $E$ be an expression over an alphabet $\Sigma$ and over a set $X=\{x_1,\ldots,x_n\}$ of variables.
Let $F$ be a rational expression over $(\Sigma,X)$.
Let $x_j$ be a variable in $X$.
Let $\mathcal{L}=(L_1,\ldots,L_n)$ be an $n$-tuple of tree languages such that $L_j=L_\mathcal{L}(F)$.
Then:
\begin{align*}
L_\mathcal{L}((E)_{x_j\leftarrow F}) & = L_\mathcal{L}(E)
\end{align*}
\end{lemma}
\begin{proof}
By induction over the structure of $E$.
\begin{enumerate}
\item If $E\in\{a,y,0\}$ with $a\in\Sigma_0$ and $y\neq x_j$, $(E)_{x_j\leftarrow F}=E$.
\item If $E=x_j$, then $(E)_{x_j\leftarrow F}=F$.
Therefore
\begin{alignat*}{2}
L_\mathcal{L}((E)_{x_j\leftarrow F}) &=L_{\mathcal{L}}(F) &&=L_j\\
&=L_{\mathcal{L}}(x_j) && =L_{\mathcal{L}}(E)
\end{alignat*}
\item If $E=f(E_1,\ldots,E_n)$, with $f\in\Sigma_n$, $n>0$, then:
\begin{align*}
L_\mathcal{L}((E)_{x_j\leftarrow F}) &=L_{\mathcal{L}}(f((E_1)_{x_j\leftarrow F},\ldots,(E_n)_{x_j\leftarrow F}))\\
&=f(L_{\mathcal{L}}((E_1)_{x_j\leftarrow F}),\ldots,L_{\mathcal{L}}((E_n)_{x_j\leftarrow F}))\\
&=f(L_{\mathcal{L}}(E_1),\ldots,L_{\mathcal{L}}(E_n)) & \text{(Induction Hypothesis)}\\
&=L_{\mathcal{L}}(f(E_1,\ldots,E_n))
\end{align*}
\item If $E=E_1+E_2$, then
\begin{align*}
L_\mathcal{L}((E_1+E_2)_{x_j\leftarrow F}) &=L_\mathcal{L}((E_1)_{x_j\leftarrow F}+(E_2)_{x_j\leftarrow F})\\
&=L_\mathcal{L}((E_1)_{x_j\leftarrow F})\cup L_\mathcal{L}((E_2)_{x_j\leftarrow F})\\
&=L_\mathcal{L}(E_1) \cup L_\mathcal{L}(E_2) & \text{(Induction Hypothesis)}\\
&=L_\mathcal{L}(E_1+E_2)
\end{align*}
\item If $E=E_1\cdot_c E_2$, then
\begin{align*}
L_\mathcal{L}((E_1\cdot_c E_2)_{x_j\leftarrow F}) &=L_\mathcal{L}((E_1)_{x_j\leftarrow F}\cdot_c(E_2)_{x_j\leftarrow F})\\
&=L_\mathcal{L}((E_1)_{x_j\leftarrow F})\cdot_c L_\mathcal{L}((E_2)_{x_j\leftarrow F})\\
&=L_\mathcal{L}(E_1) \cdot_c L_\mathcal{L}(E_2) & \text{(Induction Hypothesis)}\\
&=L_\mathcal{L}(E_1 \cdot_c E_2)
\end{align*}
\item If $E=E_1^{*_c}$, then
\begin{align*}
L_\mathcal{L}((E_1^{*_c})_{x_j\leftarrow F}) &=L_\mathcal{L}(((E_1)_{x_j\leftarrow F})^{*_c})\\
&=(L_\mathcal{L}((E_1)_{x_j\leftarrow F}))^{*_c}\\
&=(L_\mathcal{L}(E_1))^{*_c} & \text{(Induction Hypothesis)}\\
&=L_\mathcal{L}(E_1^{*_c})
\end{align*}
\end{enumerate}
\qed
\end{proof}
In the following, we denote by $\mathrm{op}(E)$ the set of the operators that appear in a rational expression $E$.
The previous substitution can be used in order to factorize an expression w.r.t. a variable.
However, this operation does not preserve the equivalence; \emph{e.g.}
\begin{align*}
L_{\{b\}}(x\cdot_b c) = \{c\} \neq L_{\{b\}}((a\cdot_b c) \cdot_a x)=\{b\}
\end{align*}
Nevertheless, this operation preserves the language if it is based on a restricted alphabet:
\begin{proposition}\label{prop cas sub var par lettre ok}
Let $E$ be a rational expression over a graded alphabet $\Sigma$ and over a set $X$ of variables.
Let $x$ be a variable in $X$.
Let $\Gamma\subset\Sigma$ be the subset defined by $\Gamma=\{b\in\Sigma_0\mid \{\cdot_b,^{*_b}\}\cap\mathrm{op}(E)\neq\emptyset\}$.
Let $a$ be a symbol not in $\Sigma$.
Then:
\begin{align*}
E\sim_{\Sigma\setminus\Gamma} (E)_{x\leftarrow a}\cdot_a x
\end{align*}
\end{proposition}
\begin{proof}
By induction over the structure of $E$.
\begin{enumerate}
\item If $E=x$, then since $x\sim_{\Sigma\cup\{a\}} a\cdot_a x$, it holds from Equation~\eqref{eq impl sim alpha} that $E\sim_{\Sigma\setminus\Gamma} (E)_{x\leftarrow a}\cdot_a x$.
\item If $E\in\{0\} \cup \Sigma\cup X\setminus\{x\}$, then, since $x$ does not appear in $E$, it holds that $(E)_{x\leftarrow a}=E$; moreover, $E\cdot_a x\sim_{\Sigma\setminus\Gamma} E$, since $a\notin\Sigma$ does not occur in any language over $\Sigma\setminus\Gamma$.
\item If $E=f(E_1,\ldots,E_n)$, then
\begin{align*}
(f(E_1,\ldots,E_n))_{x\leftarrow a}\cdot_a x &= f((E_1)_{x\leftarrow a},\ldots,(E_n)_{x\leftarrow a})\cdot_a x\\
&\sim f((E_1)_{x\leftarrow a}\cdot_a x,\ldots,(E_n)_{x\leftarrow a}\cdot_a x) & \text{(Equation~\eqref{eq def lang sub})}\\
&\sim_{\Sigma\setminus\Gamma} f(E_1,\ldots,E_n) & \text{(Induction hypothesis)}
\end{align*}
\item If $E=E_1+E_2$, then
\begin{align*}
(E_1+E_2)_{x\leftarrow a}\cdot_a x &= ((E_1)_{x\leftarrow a}+(E_2)_{x\leftarrow a})\cdot_a x\\
&\sim ((E_1)_{x\leftarrow a})\cdot_a x+((E_2)_{x\leftarrow a})\cdot_a x & \text{(Lemma~\ref{lem distrib prod sum})}\\
& \sim_{\Sigma\setminus\Gamma} E_1+E_2 & \text{(Induction hypothesis)}
\end{align*}
\item If $E=E_1\cdot_c E_2$, then
\begin{align*}
(E_1\cdot_c E_2)_{x\leftarrow a}\cdot_a x &= ((E_1)_{x\leftarrow a}\cdot_c (E_2)_{x\leftarrow a})\cdot_a x\\
&\sim_{\Sigma} (((E_1)_{x\leftarrow a})\cdot_a x)\cdot_c(((E_2)_{x\leftarrow a})\cdot_a x) & \text{(Corollary~\ref{cor distribut sub})}\\
& \sim_{\Sigma\setminus\Gamma} E_1 \cdot_c E_2 & \text{(Induction hypothesis)}
\end{align*}
\item If $E=E_1^{*_c}$, then
\begin{align*}
(E_1^{*_c})_{x\leftarrow a}\cdot_a x &= ((E_1)_{x\leftarrow a})^{*_c}\cdot_a x\\
&\sim_{\Sigma} (((E_1)_{x\leftarrow a})\cdot_a x)^{*_c} & \text{(Lemma~\ref{lem permut sub star})}\\
& \sim_{\Sigma\setminus\Gamma} E_1^{*_c} & \text{(Induction hypothesis)}
\end{align*}
\end{enumerate}
\qed
\end{proof}
\section{Equations Systems for Tree Languages}\label{sec eq syst}
Let $\Sigma$ be an alphabet and $\mathbb{E}=\{\mathbb{E}_1,\ldots,\mathbb{E}_n\}$ be a set of $n$ variables.
An \emph{equation} over $(\Sigma,\mathbb{E})$ is an expression $\mathbb{E}_j=F_j$, where $1\leq j\leq n$ is any integer and $F_j$ is a rational expression over $(\Sigma,\mathbb{E})$.
An \emph{equation system} over $(\Sigma,\mathbb{E})$ is a set $\mathcal{X}=\{\mathbb{E}_j=F_j \mid 1\leq j\leq n\}$ of $n$ equations.
Let $\mathcal{L}=(L_1,\ldots,L_n)$ be a $n$-tuple of tree languages.
The tuple $\mathcal{L}$ is a \emph{solution} for an equation $(\mathbb{E}_j=F_j)$ if $L_j=L_{\mathcal{L}}(F_j)$.
The tuple $\mathcal{L}$ is a \emph{solution} for $\mathcal{X}$ if for any equation $(\mathbb{E}_j=F_j)$ in $\mathcal{X}$, $\mathcal{L}$ is a solution of $(\mathbb{E}_j=F_j)$.
\begin{example}\label{ex syst eq}
Let us define the equation system $\mathcal{X}$ as follows:
\begin{align*}
\mathcal{X}&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,\mathbb{E}_4)\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,\mathbb{E}_4)\\
\mathbb{E}_3 & =a+h(\mathbb{E}_4)\\
\mathbb{E}_4 & =a+h(\mathbb{E}_3)
\end{cases}
\end{align*}
The tuple $(\emptyset,\emptyset,\emptyset,\emptyset)$ is a solution for the equation $\mathbb{E}_1=F_1$, but not for the system $\mathcal{X}$.
\end{example}
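The claim of the example can be checked mechanically. Below is a minimal evaluator restricted to the operators actually used in $\mathcal{X}$ (sums, symbols, variables; products and closures are omitted), under our illustrative tuple encoding:

```python
# Evaluating an expression under a context mapping each variable to a
# finite language; f(L_1,...,L_n) is the set of trees f(t_1,...,t_n).
# Only the operators used in the example are handled.

def evaluate(e, ctx):
    tag = e[0]
    if tag == "0":
        return set()
    if tag == "var":
        return ctx[e[1]]
    if tag == "sym":
        results = {(e[1],)}
        for arg in e[2:]:
            results = {p + (t,) for p in results for t in evaluate(arg, ctx)}
        return results
    if tag == "+":
        return evaluate(e[1], ctx) | evaluate(e[2], ctx)
    raise ValueError(tag)

ctx = {k: set() for k in "1234"}          # the all-empty tuple
f1 = ("+", ("sym", "f", ("var", "1"), ("var", "1")),
           ("sym", "f", ("var", "2"), ("var", "4")))
f3 = ("+", ("sym", "a"), ("sym", "h", ("var", "4")))
print(evaluate(f1, ctx) == ctx["1"])   # True: the empty tuple solves E1 = F1
print(evaluate(f3, ctx) == ctx["3"])   # False: F3 evaluates to {a}
```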
Two systems over the same variables are \emph{equivalent} if they admit the same solutions.
Notice that a system does not necessarily admit a unique solution.
As an example, any language is a solution of the system $\mathbb{E}_1=\mathbb{E}_1$.
Obviously,
\begin{proposition}\label{prop pas de var sol un}
If $\mathcal{X}$ only contains equations $\mathbb{E}_k=F_k$ with $F_k$ a rational expression without variables, then $(L(F_1),\ldots,L(F_n))$ is the unique solution of $\mathcal{X}$.
\end{proposition}
Let us now define the operation of substitution, computing an equivalent system.
\begin{definition}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system.
The \emph{substitution of} $(\mathbb{E}_k=F_k)$ in $\mathcal{X}$ is the system $\mathcal{X}^k=\{\mathbb{E}_k=F_k\}\cup\{\mathbb{E}_j=(F_j)_{\mathbb{E}_k\leftarrow F_k} \mid j\neq k\wedge 1\leq j\leq n\}$.
\end{definition}
As a direct consequence of Lemma~\ref{lem sub cons lang},
\begin{proposition}\label{prop sub ok dans syst}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system.
Let $\mathbb{E}_k=F_k$ be an equation in $\mathcal{X}$.
Let $\mathcal{L}$ be a solution of $\mathcal{X}$.
Then for any integer $1\leq j\leq n$ with $j\neq k$,
\begin{equation*}
\mathcal{L}\text{ is a solution of }\mathbb{E}_j=(F_j)_{\mathbb{E}_k\leftarrow F_k}.
\end{equation*}
\end{proposition}
And following Proposition~\ref{prop sub ok dans syst},
\begin{proposition}\label{prop sub cons sol}
Let $\mathcal{X}$ be an equation system over $n$ variables.
Let $k\leq n$ be an integer.
Then:
\begin{equation*}
\mathcal{X}\text{ and } \mathcal{X}^k\text{ are equivalent.}
\end{equation*}
\end{proposition}
\begin{example}\label{ex syst substi}
Let us consider the system $\mathcal{X}$ of Example~\ref{ex syst eq}. Then:
\begin{align*}
\mathcal{X}^4&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_3 & =a+h(a+h(\mathbb{E}_3))\\
\mathbb{E}_4 & =a+h(\mathbb{E}_3)
\end{cases}
\end{align*}
\end{example}
Let us now identify a particular class of systems that can be solved by successive substitutions.
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system.
The relation $<_\mathcal{X}$ is defined for any two variables $\mathbb{E}_j$ and $\mathbb{E}_k$ by
\begin{align*}
\mathbb{E}_j <_\mathcal{X} \mathbb{E}_k \Leftrightarrow &\ \mathbb{E}_j\text{ appears in }F_k
\end{align*}
The relation $\preceq_\mathcal{X}$ is defined as the transitive closure of $<_\mathcal{X}$.
In the case where $\mathbb{E}_k<_\mathcal{X} \mathbb{E}_k$, the equation $\mathbb{E}_k=F_k$ is said to be \emph{recursive}.
A system is said to be \emph{recursive} if there exist two variables $\mathbb{E}_j$ and $\mathbb{E}_k$ such that $\mathbb{E}_j\preceq_\mathcal{X} \mathbb{E}_k$ and $\mathbb{E}_k\preceq_\mathcal{X} \mathbb{E}_j$.
If a system is not recursive, it can be solved by successive substitutions.
If $\mathbb{E}_k$ is a variable that does not appear in any right-hand side of an equation of $\mathcal{X}$, we denote by $\mathcal{X}\setminus (\mathbb{E}_k=F_k)$ the system obtained by removing $\mathbb{E}_k=F_k$ from $\mathcal{X}$ and by reindexing any symbol $\mathbb{E}_j$ with $j>k$ into $\mathbb{E}_{j-1}$.
\begin{lemma}\label{lem sub cas eq pas rec}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system over a graded alphabet $\Sigma$ and over $n$ variables $\{\mathbb{E}_1,\ldots,\mathbb{E}_n\}$.
Let $\mathbb{E}_k=F_k$ be a non-recursive equation in $\mathcal{X}$.
Then for any $(n-1)$-tuple $Z=(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$, the two following conditions are equivalent:
\begin{enumerate}
\item $(L_1,\ldots,L_{k-1},L_{Z}(F_k),L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}$
\item $(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}^k\setminus\{\mathbb{E}_k=F_k\}$
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\mathcal{L}=(L_1,\ldots,L_{k-1},L_{Z}(F_k),L_{k+1},\ldots,L_n)$ and $\mathcal{L}'=(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$.
Obviously, $\mathcal{L}$ is a solution for the (non-recursive) equation $\mathbb{E}_k=F_k$.
From Proposition~\ref{prop sub cons sol},
\begin{align*}
\mathcal{L}\text{ is a solution of }\mathcal{X} & \Leftrightarrow \mathcal{L}\text{ is a solution of }\mathcal{X}^k
\intertext{Consequently, for any integer $j\neq k$,}
\mathcal{L}\text{ is a solution of }\mathbb{E}_j=F_j & \Leftrightarrow \mathcal{L}\text{ is a solution of }\mathbb{E}_j=(F_j)_{\mathbb{E}_k\leftarrow F_k}
\intertext{Moreover, by definition of $\mathcal{L}'$, for any integer $j\neq k$,}
\mathcal{L}\text{ is a solution of }\mathbb{E}_j=(F_j)_{\mathbb{E}_k\leftarrow F_k} & \Leftrightarrow \mathcal{L}'\text{ is a solution of }\mathbb{E}_j=(F_j)_{\mathbb{E}_k\leftarrow F_k}\\
& \Leftrightarrow \mathcal{L}'\text{ is a solution of }\mathcal{X}^k\setminus\{\mathbb{E}_k=F_k\}
\end{align*}
\qed
\end{proof}
As a direct consequence of the previous lemma, a non-recursive system can be solved by solving a smaller system, obtained by substitution:
\begin{corollary}\label{cor sub sol}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system over a graded alphabet $\Sigma$ and over $n$ variables $\{\mathbb{E}_1,\ldots,\mathbb{E}_n\}$.
Let $\mathbb{E}_k=F_k$ be an equation in $\mathcal{X}$ such that $F_k$ is a rational expression over $\Sigma$ (without variables).
Then for any $(n-1)$-tuple $(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$, the two following conditions are equivalent:
\begin{enumerate}
\item $(L_1,\ldots,L_{k-1},L(F_k),L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}$
\item $(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}^k\setminus\{\mathbb{E}_k=F_k\}$
\end{enumerate}
\end{corollary}
Moreover, such a system admits a unique solution:
\begin{proposition}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be a non-recursive equation system over a graded alphabet $\Sigma$ and over variables $\{\mathbb{E}_1,\ldots,\mathbb{E}_n\}$.
Then
\begin{equation*}
\mathcal{X}\text{ admits a unique solution}.
\end{equation*}
\end{proposition}
\begin{proof}
By recurrence over the cardinality of $\mathcal{X}$.
\begin{enumerate}
\item If $\mathcal{X}=\{\mathbb{E}_1=F_1\}$, then $F_1$ is a rational expression over $\Sigma$ (with no variable) and therefore $(L(F_1))$ is the unique solution of $\mathcal{X}$.
\item Since $\mathcal{X}$ is not recursive, there exists an equation $\mathbb{E}_k=F_k$ with $F_k$ a rational expression over $\Sigma$ (with no variable).
Therefore, according to Corollary~\ref{cor sub sol}, a tuple $(L_1,\ldots,L_{k-1},L(F_k),L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}$ if and only if $(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}^k\setminus\{\mathbb{E}_k=F_k\}$.
By recurrence hypothesis, since $\mathcal{X}^k\setminus\{\mathbb{E}_k=F_k\}$ is not recursive, it admits a unique solution $(L_1,\ldots,L_{k-1},L_{k+1},\ldots,L_n)$.
Thus $(L_1,\ldots,L_{k-1},L(F_k),L_{k+1},\ldots,L_n)$ is a solution of $\mathcal{X}$.
Finally, since for any $L_k\neq L(F_k)$, the tuple $(L_1,\ldots,L_{k-1},L_k,L_{k+1},\ldots,L_n)$ is not a solution for $\mathbb{E}_k=F_k$, $(L_1,\ldots,L_{k-1},L(F_k),L_{k+1},\ldots,L_n)$ is the unique solution of $\mathcal{X}$.
\end{enumerate}
\qed
\end{proof}
\begin{example}
Let us define the equation system $\mathcal{Y}$ as follows:
\begin{align*}
\mathcal{Y}&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_2,\mathbb{E}_3)+f(\mathbb{E}_2,\mathbb{E}_3)\\
\mathbb{E}_2 & =b+f(\mathbb{E}_4,\mathbb{E}_4)\\
\mathbb{E}_3 & =a+h(\mathbb{E}_4)\\
\mathbb{E}_4 & =a+(f(a,b))^{*_b}\cdot_b a
\end{cases}
\end{align*}
Then
\begin{align*}
\mathcal{Y}^4&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_2,\mathbb{E}_3)+f(\mathbb{E}_2,\mathbb{E}_3)\\
\mathbb{E}_2 & =b+f(a+(f(a,b))^{*_b}\cdot_b a,a+(f(a,b))^{*_b}\cdot_b a)\\
\mathbb{E}_3 & =a+h(a+(f(a,b))^{*_b}\cdot_b a)\\
\mathbb{E}_4 & =a+(f(a,b))^{*_b}\cdot_b a
\end{cases}\\
(\mathcal{Y}^4)^3&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_2,a+h(a+(f(a,b))^{*_b}\cdot_b a))+f(\mathbb{E}_2,a+h(a+(f(a,b))^{*_b}\cdot_b a))\\
\mathbb{E}_2 & =b+f(a+(f(a,b))^{*_b}\cdot_b a,a+(f(a,b))^{*_b}\cdot_b a)\\
\mathbb{E}_3 & =a+h(a+(f(a,b))^{*_b}\cdot_b a)\\
\mathbb{E}_4 & =a+(f(a,b))^{*_b}\cdot_b a
\end{cases}\\
((\mathcal{Y}^4)^3)^2&=
\begin{cases}
\mathbb{E}_1 & =f(b+f(a+(f(a,b))^{*_b}\cdot_b a,a+(f(a,b))^{*_b}\cdot_b a),a+h(a+(f(a,b))^{*_b}\cdot_b a))\\
& \quad +f(b+f(a+(f(a,b))^{*_b}\cdot_b a,a+(f(a,b))^{*_b}\cdot_b a),a+h(a+(f(a,b))^{*_b}\cdot_b a))\\
\mathbb{E}_2 & =b+f(a+(f(a,b))^{*_b}\cdot_b a,a+(f(a,b))^{*_b}\cdot_b a)\\
\mathbb{E}_3 & =a+h(a+(f(a,b))^{*_b}\cdot_b a)\\
\mathbb{E}_4 & =a+(f(a,b))^{*_b}\cdot_b a
\end{cases}
\end{align*}
\end{example}
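The successive substitutions above suggest a simple elimination procedure for non-recursive systems: repeatedly pick an equation whose right-hand side is variable-free and substitute it into all the others. A self-contained sketch (same illustrative tuple encoding; the helper names are ours, and termination assumes the system is indeed non-recursive):

```python
# Solving a non-recursive system of equations by successive substitutions.
# An expression is a nested tuple: ("0",), ("var", x), ("sym", f, E1, ..., En),
# ("+", E1, E2), etc.; a system is a dict variable -> expression.

def variables(e):
    """The set of variables occurring in e."""
    if e[0] == "var":
        return {e[1]}
    out = set()
    for s in e[1:]:
        if isinstance(s, tuple):
            out |= variables(s)
    return out

def substitute(e, x, e_prime):
    """Replace every occurrence of the variable x in e by e_prime."""
    if e[0] == "var":
        return e_prime if e[1] == x else e
    return e[:1] + tuple(substitute(s, x, e_prime) if isinstance(s, tuple) else s
                         for s in e[1:])

def solve(system):
    """Return a dict of variable-free right-hand sides (non-recursive input)."""
    solved = {}
    while len(solved) < len(system):
        # pick an unsolved equation with a variable-free right-hand side
        k = next(k for k, f in system.items()
                 if k not in solved and not variables(f))
        solved[k] = system[k]
        system = {j: substitute(f, k, system[k]) for j, f in system.items()}
    return solved

system = {"3": ("sym", "a"),
          "2": ("sym", "h", ("var", "3"))}
print(solve(system)["2"])   # ('sym', 'h', ('sym', 'a'))
```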
\section{Arden's Lemma for Trees and Recursive Systems}\label{ar}
Arden's Lemma~\cite{Ard61} is a fundamental result in automaton theory.
It gives a solution of the recursive language equation $X=A\cdot X \cup B$ where $X$ is an unknown language.
It can be applied to compute a rational expression from an automaton, and therefore to prove the second direction of the Kleene theorem for strings.
Following the same steps as in the string case, we generalize this lemma to trees.
\begin{proposition}\label{prop arden}
Let $A$ and $B$ be two tree languages over a graded alphabet $\Sigma$.
Then $A^{*_c}\cdot_c B$ is the smallest language in the family $\mathcal{F}$ of languages $L$ over $\Sigma$ satisfying $L=A\cdot_c L \cup B$.
Furthermore, if $c \notin A$, then $\mathcal{F}=\{A^{*_c}\cdot_c B\}$.
\end{proposition}
\begin{proof}
Let us set $Z=A^{*_c}\cdot_c B$.
\begin{enumerate}
\item Obviously, $Z$ belongs to $\mathcal{F}$:
\begin{align*}
A \cdot_c (A^{*_c}\cdot_c B)\cup B
& = (A \cdot_c A^{*_c})\cdot_c B\cup B & \text{(Corollary~\ref{cor cdotc assoc})}\\
& = (A \cdot_c A^{*_c})\cdot_c B \cup \{c\} \cdot_c B\\
& = ((A \cdot_c A^{*_c}) \cup \{c\} ) \cdot_c B\\
& = A^{*_c}\cdot_c B
\end{align*}
\item Let us now show that if $C$ belongs to $\mathcal{F}$, then $Z\subset C$.
To do so, let us show that for any integer $n\geq 0$, $A^{n,c}\cdot_c B\subset C$.
Since $C$ belongs to $\mathcal{F}$, then $C=A\cdot_c C \cup B$.
Therefore $A^{0,c}\cdot_c B=B\subset C$ and $A\cdot_c C \subset C$.
Suppose that $A^{n,c}\cdot_c B \subset C$ for some integer $n\geq 0$.
Therefore, from Corollary~\ref{cor cdotc comp incl}, $A\cdot_c (A^{n,c}\cdot_c B) \subset A\cdot_c (C)$ and from Corollary~\ref{cor cdotc assoc}, $A^{n+1,c}\cdot_c B \subset A\cdot_c C \subset C$.
Consequently, since for any integer $n$, $A^{n,c}\cdot_c B\subset C$, it holds that $Z=A^{*_c}\cdot_c B \subset C$.
\item Finally, let us show that if $c\notin A$, then any language $Y$ in $\mathcal{F}$ satisfies $Y\subset Z$, implying that $\mathcal{F}=\{Z\}$.
Let $Y \neq Z $ satisfying $Y=A\cdot_c Y\cup B$.
Suppose that $Y\not\subset Z$.
Let $t$ be a tree in $Y\setminus Z$ such that $\mathrm{Height}(t)$ is minimal.
Obviously, since $B\subset Z$, $t$ is not in $B$.
Consequently, $t$ belongs to $A\cdot_c Y$ and therefore $t=t_1\cdot_c t_2$ with $t_1\in A$ and $t_2\in Y$.
Since $c\notin A$, $t_1\neq c$.
Furthermore, if $c$ does not appear in $t_1$, then $t=t_1\in A$ and consequently, $t\in A^{*_c }\cdot_c B=Z$, contradicting the fact that $t\notin Z$.
Therefore $c$ appears in $t_1$ and then $\mathrm{Height}(t_2)<\mathrm{Height}(t)$, contradicting the minimality of the height of $t$.
As a direct consequence, any language $Y$ in $\mathcal{F}$ satisfies $Y\subset Z$.
Following the previous point, since $Z\subset Y$, it holds that $Y=Z$.
\end{enumerate}
\qed
\end{proof}
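To make the lemma concrete, here is a small Python sketch (ours, not part of the formal development): trees over a graded alphabet are encoded as nested tuples with string leaves, `c_product` implements the $\cdot_c$ operation on languages, and the smallest solution of $L=A\cdot_c L\cup B$ is obtained by iterating the map $Z\mapsto (A\cdot_c Z)\cup B$ up to a height bound.

```python
import itertools

def height(t):
    # Height of a tree encoded as a nested tuple (string leaves have height 0).
    return 1 + max(map(height, t[1:])) if isinstance(t, tuple) else 0

def subst_lang(t, c, L):
    # t ._c L: replace each occurrence of the leaf c in t, independently,
    # by a tree of L; returns the set of resulting trees.
    if t == c:
        return set(L)
    if isinstance(t, tuple):
        parts = [subst_lang(s, c, L) for s in t[1:]]
        return {(t[0], *combo) for combo in itertools.product(*parts)}
    return {t}

def c_product(A, c, L):
    # A ._c L, the c-product of two tree languages.
    return set().union(*(subst_lang(t, c, L) for t in A))

def smallest_solution(A, c, B, bound):
    # Iterate Z -> (A ._c Z) | B, keeping only trees of height <= bound;
    # by Arden's Lemma the limit is A^{*_c} ._c B restricted to that height.
    Z = set()
    while True:
        step = {t for t in c_product(A, c, Z) | set(B) if height(t) <= bound}
        if step == Z:
            return Z
        Z = step

# Example: A = {h(c)}, B = {a}; A^{*_c} ._c B is {h^n(a) : n >= 0}.
A, B = [("h", "c")], ["a"]
Z = smallest_solution(A, "c", B, bound=3)
```

With $A=\{h(c)\}$ and $B=\{a\}$, the bounded fixpoint is $\{a,\,h(a),\,h(h(a)),\,h(h(h(a)))\}$, the height-$3$ truncation of $\{h^n(a)\mid n\geq 0\}$.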
By successive substitutions, any recursive system can be transformed into another equivalent system such that there exists a symbol $\mathbb{E}_j$ satisfying $\mathbb{E}_j<_\mathcal{X} \mathbb{E}_j$.
Let us highlight a specific case in which recursive equations can be solved.
For an integer $k$, the $k$-\emph{split} of an expression $F$ over $(\Sigma,\{\mathbb{E}_1,\ldots,\mathbb{E}_n\})$ is the pair $k\mathrm{-split}(F)$ inductively defined by:
\begin{align*}
k\mathrm{-split}(F) & =
\begin{cases}
(E_1'+E'_2,E''_1+E''_2) &\text{ if }F=E_1+E_2 \\
& \quad \wedge k\mathrm{-split}(E_1)=(E'_1,E''_1) \wedge k\mathrm{-split}(E_2)=(E'_2,E''_2),\\
(F,0) & \text{ otherwise if }\mathbb{E}_k\text{ appears in }F,\\
(0,F) & \text{ otherwise.}
\end{cases}
\end{align*}
Obviously, if $k\mathrm{-split}(F)=(F',F'')$, $F\sim F'+F''$.
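As an illustration (our own sketch, not part of the paper), the inductive definition above translates directly into Python on a nested-tuple encoding of expressions, where a variable $\mathbb{E}_k$ is written `("E", k)`, a sum `("+", e1, e2)`, and `"0"` stands for the expression $0$; trivial simplifications of $0$ summands are omitted.

```python
def appears(F, k):
    # Does the variable E_k occur in the expression F?
    if F == ("E", k):
        return True
    return isinstance(F, tuple) and any(appears(s, k) for s in F[1:])

def k_split(F, k):
    # Returns (F', F'') with F ~ F' + F'' and E_k occurring only in F'.
    if isinstance(F, tuple) and F[0] == "+":
        a1, b1 = k_split(F[1], k)
        a2, b2 = k_split(F[2], k)
        return ("+", a1, a2), ("+", b1, b2)
    if appears(F, k):
        return F, "0"
    return "0", F

# 2-split of f(E_2, a) + b: E_2 appears only in the left summand.
Fp, Fpp = k_split(("+", ("f", ("E", 2), "a"), "b"), 2)
```

Here `Fp` keeps the summand containing $\mathbb{E}_2$ and `Fpp` the rest, so that the original expression is equivalent to `Fp + Fpp`.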
This pair can be used to factorize a recursive equation in order to apply Arden's Lemma.
Indeed, as a direct consequence of Proposition~\ref{prop cas sub var par lettre ok},
\begin{proposition}\label{prop sub par split ok}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system.
Let $a$ be a symbol not in $\Sigma$.
Let $1\leq k\leq n$ be an integer.
Let $\Gamma\subset\Sigma$ be the subset defined by $\Gamma=\{c\in\Sigma_0\mid \{\cdot_c,^{*_c}\}\cap\mathrm{op}(F_k)\neq\emptyset\}$.
Let $\mathcal{L}$ be an $n$-tuple of tree languages over the alphabet $\Sigma\setminus\Gamma$.
Let $k\mathrm{-split}(F_k)=(F'_k,F''_k)$.
Then the two following conditions are equivalent:
\begin{enumerate}
\item $\mathcal{L}$ is a solution for $\mathbb{E}_k=F_k$,
\item $\mathcal{L}$ is a solution for $\mathbb{E}_k=(F'_k)_{\mathbb{E}_k\leftarrow a}\cdot_a \mathbb{E}_k+F''_k$.
\end{enumerate}
\end{proposition}
Once an equation is factorized, Arden's Lemma can be applied by \emph{contraction}:
\begin{definition}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system.
Let $1\leq k\leq n$ be an integer such that $F_k=F'_k\cdot_c \mathbb{E}_k+F''_k$.
The \emph{contraction of} $(\mathbb{E}_k=F_k)$ in $\mathcal{X}$ is the system $\mathcal{X}_{k}=\{\mathbb{E}_k=(F'_k)^{*_c}\cdot_c F''_k\}\cup\{\mathbb{E}_j=F_j\mid j\neq k\wedge 1\leq j\leq n\}$.
\end{definition}
Following Proposition~\ref{prop arden}, such a contraction preserves the language:
\begin{proposition}\label{prop sol pour eq rec}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be an equation system.
Let $1\leq k\leq n$ be an integer such that $F_k=F'_k\cdot_c \mathbb{E}_k+F''_k$.
Let $\mathcal{L}=(L_1,\ldots,L_n)$ be an $n$-tuple of tree languages.
Then the two following conditions are equivalent:
\begin{enumerate}
\item $\mathcal{L}$ is a solution of $\mathcal{X}$,
\item $\mathcal{L}$ is a solution of $\mathcal{X}_k$.
\end{enumerate}
Furthermore, if $c$ is not in $L_\mathcal{L}(F'_k)$ then for any language $L'_k\neq L_k$,
\begin{equation*}
(L_1,\ldots,L_{k-1},L'_k,L_{k+1},\ldots,L_n)\text{ is not a solution of }\mathcal{X}.
\end{equation*}
\end{proposition}
\begin{example}
Let us consider the system $\mathcal{X}^4$ of Example~\ref{ex syst substi}:
\begin{align*}
\mathcal{X}^4&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_3 & =a+h(a+h(\mathbb{E}_3))\\
\mathbb{E}_4 & =a+h(\mathbb{E}_3)
\end{cases}
\end{align*}
The $2\mathrm{-split}$ of $b+f(\mathbb{E}_2,a+h(\mathbb{E}_3))$ leads to the factorization $f(x_2,a+h(\mathbb{E}_3))\cdot_{x_2}\mathbb{E}_2+b$, contracted into $(f(x_2,a+h(\mathbb{E}_3)))^{*_{x_2}}\cdot_{x_2} b$.
\end{example}
However, as recalled in Proposition~\ref{prop sub par split ok}, the factorization that precedes a contraction does not necessarily produce an equivalent expression.
Let us now define a sufficient property in order to detect solvable systems.
Obviously, it is related to the symbols that appear in a product or a closure.
The \emph{scope} of an operator is the set of its operands.
An occurrence of a symbol $c$ in $\Sigma_0$ is said to be bounded if it appears in the scope of an operator $\cdot_c$ or $^{*_c}$, or if it is the symbol of such an operator.
An expression (resp. a system $\mathcal{X}$) is said to be \emph{closed} if all of the occurrences of a bounded symbol are bounded.
In this case, the set $\mathrm{free}(\mathcal{X})$ contains the symbols of $\Sigma_0$ that are not bounded.
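As an illustration only (our encoding, not the paper's), the closedness test can be sketched in Python, writing $E_1\cdot_c E_2$ as `("prod", c, e1, e2)` and $E^{*_c}$ as `("star", c, e)`, so that everything under such a node lies in the scope of the operator.

```python
BINDING = ("prod", "star")  # ("prod", c, e1, e2) for ._c; ("star", c, e) for *_c

def operands(F):
    # Subexpressions of F, skipping the bound symbol of a binding operator.
    return F[2:] if F[0] in BINDING else F[1:]

def binders(F):
    # Symbols c such that ._c or *_c occurs in F.
    if not isinstance(F, tuple):
        return set()
    own = {F[1]} if F[0] in BINDING else set()
    return own.union(*(binders(s) for s in operands(F)))

def has_free(F, c):
    # True iff some occurrence of c in F is not bounded.
    if F == c:
        return True
    if not isinstance(F, tuple):
        return False
    if F[0] in BINDING and F[1] == c:
        return False  # all occurrences below lie in the scope of ._c / *_c
    return any(has_free(s, c) for s in operands(F))

def closed(F):
    return not any(has_free(F, c) for c in binders(F))

# (h(a + h(x3)))^{*_{x3}} ._{x3} a  -- closed, as in the running example.
E = ("prod", "x3", ("star", "x3", ("h", ("+", "a", ("h", "x3")))), "a")
```

For this expression the test succeeds, while adding a free occurrence of $x_3$ outside the product breaks closedness.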
Let us first show that closedness is preserved by substitution, factorization and contraction.
\begin{lemma}\label{lem sub pres closed}
Let $F$ and $F'$ be two closed expressions over $\Sigma,\mathbb{E}$ such that the bounded symbols of $F$ are bounded in $F'$.
Let $\mathbb{E}_k$ be a variable in $\mathbb{E}$.
Then:
\begin{align*}
F_{\mathbb{E}_k\leftarrow F'}\text{ is closed.}
\end{align*}
\end{lemma}
\begin{proof}
By induction over the structure of $F$.
Let us define for any expression $H$, the expression $G(H)=H_{\mathbb{E}_k\leftarrow F'}$.
Let us set $G=G(F)$.
\begin{enumerate}
\item If $F\in\Sigma_0\cup\{0\}\cup\mathbb{E}\setminus\{\mathbb{E}_k\}$, then $G=F$.
Therefore $G$ is closed.
\item If $F=\mathbb{E}_k$, then $G=F'$.
Therefore $G$ is closed.
\item If $F=f(E_1,\ldots,E_n)$, then $G=f(G(E_1),\ldots,G(E_n))$.
By induction hypothesis, $G(E_1)$,$\ldots$, and $G(E_n)$ are closed, and as a consequence so is $G$.
\item If $F=E_1+E_2$, then $G=G(E_1)+G(E_2)$.
By induction hypothesis, $G(E_1)$ and $G(E_2)$ are closed, and therefore so is $G$.
\item If $F=E_1\cdot_c E_2$, then $G=G(E_1)\cdot_c G(E_2)$.
By induction hypothesis, $G(E_1)$ and $G(E_2)$ are closed.
Since the bounded symbols of $F$ are bounded in $F'$, $c$ is bounded in $G(E_1)$.
Consequently, $G$ is closed.
\item If $F=E_1^{*_c}$, then $G=(G(E_1))^{*_c}$.
By induction hypothesis, $G(E_1)$ is closed.
Since the bounded symbols of $F$ are bounded in $F'$, $c$ is bounded in $G(E_1)$.
Consequently, $G$ is closed.
\end{enumerate}
\qed
\end{proof}
As two direct consequences of Lemma~\ref{lem sub pres closed}:
\begin{corollary}\label{cor sub pres closed}
Let $\mathcal{X}$ be an equation system over $n$ variables.
Let $1\leq k\leq n$ be an integer.
Then:
\begin{equation*}
\mathcal{X}^k\text{ is closed.}
\end{equation*}
\end{corollary}
\begin{corollary}\label{cor forme fact pre clos}
Let $F$ be a closed expression over $\Sigma,\mathbb{E}$.
Let $\mathbb{E}_k$ be a variable in $\mathbb{E}$.
Let $k\mathrm{-split}(F)=(F',F'')$.
Let $a$ be a symbol not in $\Sigma$.
Then:
\begin{align*}
(F')_{\mathbb{E}_k\leftarrow a}\cdot_a \mathbb{E}_k + F'' \text{ is closed.}
\end{align*}
\end{corollary}
The stability of closedness under contraction is even easier to prove, since contraction is not an inductive transformation:
\begin{lemma}\label{lem contrac pres close}
Let $E=F\cdot_c F'+F''$ be a closed expression.
Then:
\begin{align*}
F^{*_c}\cdot_c F'' \text{ is a closed expression.}
\end{align*}
\end{lemma}
\begin{proof}
Let $E'=F^{*_c}\cdot_c F''$.
Suppose that $E'$ is not closed.
Either there exists an occurrence of $c$ that is not bounded in $F''$, or there exists an operator in $\{\cdot_a,^{*_a}\}$ appearing in $F$ (resp. $F''$) such that an occurrence of $a$ is not bounded in $F''$ (resp. in $F$).
Contradiction with the closedness of $E$.
\qed
\end{proof}
\begin{corollary}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be a closed equation system.
Let $1\leq k\leq n$ be an integer such that $\mathbb{E}_k=F'_k\cdot_c \mathbb{E}_k+F''_k$.
Then:
\begin{equation*}
\mathcal{X}_k\text{ is closed.}
\end{equation*}
\end{corollary}
Finally, let us show that a closed system can be effectively solved: we show that it admits rational solutions, \emph{i.e.} solutions formed by rational languages, and we give a way to compute expressions denoting them.
In the following, we say that an $n$-tuple of rational expressions $(E_1,\ldots,E_n)$ \emph{denotes} a rational solution $(L_1,\ldots, L_n)$ if $L_i=L(E_i)$ for any $1\leq i\leq n$.
The following example illustrates how to compute some rational expressions denoting a solution.
\begin{example}
Let us consider the closed system $\mathcal{X}$ of Example~\ref{ex syst eq}.
By substitution of $\mathbb{E}_3$, we obtain
\begin{align*}
\mathcal{X}^3&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,\mathbb{E}_4)\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_4))\\
\mathbb{E}_3 & =a+h(\mathbb{E}_4)\\
\mathbb{E}_4 & =a+h(a+h(\mathbb{E}_4))
\end{cases}
\end{align*}
The $4\mathrm{-split}$ of $a+h(a+h(\mathbb{E}_4))$ leads to the factorization $(h(a+h(x_4)))\cdot_{x_4}\mathbb{E}_4+a$, contracted into $(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a$.
Then, we obtain
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,\mathbb{E}_4)\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_4))\\
\mathbb{E}_3 & =a+h(\mathbb{E}_4)\\
\mathbb{E}_4 & =(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a
\end{cases}
\end{align*}
By substitution,
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a))\\
\mathbb{E}_3 & =a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)\\
\mathbb{E}_4 & =(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a
\end{cases}
\end{align*}
The $2\mathrm{-split}$ of $b+f(\mathbb{E}_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a))$ leads to the factorization $(f(x_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)))\cdot_{x_2}\mathbb{E}_2+b$, contracted into $(f(x_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)))^{*_{x_2}}\cdot_{x_2} b$. Thus, we obtain the new system
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f((f(x_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)))^{*_{x_2}}\cdot_{x_2} b,(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)\\
\mathbb{E}_2 & =(f(x_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)))^{*_{x_2}}\cdot_{x_2} b\\
\mathbb{E}_3 & =a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)\\
\mathbb{E}_4 & =(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a
\end{cases}
\end{align*}
Finally, factorizing/contracting the first equation, we obtain the solution
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =(f(x_1,x_1))^{*_{x_1}}\cdot_{x_1}(f((f(x_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)))^{*_{x_2}}\cdot_{x_2} b,(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a))\\
\mathbb{E}_2 & =(f(x_2,a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)))^{*_{x_2}}\cdot_{x_2} b\\
\mathbb{E}_3 & =a+h((h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a)\\
\mathbb{E}_4 & =(h(a+h(x_4)))^{*_{x_4}}\cdot_{x_4}a
\end{cases}
\end{align*}
\end{example}
Any closed system admits a \emph{canonical} resolution, defined in the proof of the following theorem.
\begin{theorem}\label{thm resol syst clos}
Let $\mathcal{X}=\{(\mathbb{E}_j=F_j)\mid 1\leq j\leq n\}$ be a closed equation system over a graded alphabet $\Sigma$ and over variables $\{\mathbb{E}_1,\ldots,\mathbb{E}_n\}$.
Then
\begin{equation*}
\mathcal{X}\text{ admits a regular solution over }\mathrm{free}(\mathcal{X}).
\end{equation*}
Furthermore, an $n$-tuple of rational expressions denoting this solution can be computed.
\end{theorem}
\begin{proof}
By induction on the cardinality of $\mathcal{X}$.
\begin{enumerate}
\item\label{item preuve} Suppose that the equation $\mathbb{E}_n=F_n$ is not recursive.
\begin{enumerate}
\item If $n=1$, then $F_1$ is a rational expression and therefore $L(F_1)$ is the unique solution for $\mathcal{X}$.
Since $\mathcal{X}$ is closed, $L(F_1)\subset T(\mathrm{free}(\mathcal{X}))$.
\item Otherwise, consider the system $\mathcal{X}'=\mathcal{X}^k\setminus\{\mathbb{E}_n=F_n\}$.
From Corollary~\ref{cor sub pres closed}, the system $\mathcal{X}'$ is closed.
By induction hypothesis, $\mathcal{X}'$ admits a regular solution $Z=(L_1,\ldots,L_{n-1})$ over $\mathrm{free}(\mathcal{X})$ denoted by $(E_1,\ldots,E_{n-1})$.
From Lemma~\ref{lem sub cas eq pas rec}, this implies that $(L_1,\ldots,L_{n-1},L_Z(F_n))$ is a solution for $\mathcal{X}$ that is, by construction of $Z$, a solution over $\mathrm{free}(\mathcal{X})$.
From Lemma~\ref{lem sub cons lang}, $L_Z(F_n)$ is denoted by $E_n=(\ldots(F_n)_{\mathbb{E}_1\leftarrow E_1}\ldots)_{_{\mathbb{E}_{n-1}\leftarrow E_{n-1}}}$, that is a rational expression with no variables.
Therefore $\mathcal{X}$ admits a regular solution $(L_1,\ldots,L_{n-1},L_Z(F_n))$ over $\mathrm{free}(\mathcal{X})$ denoted by $(E_1,\ldots,E_{n})$.
\end{enumerate}
\item Consider that the equation $\mathbb{E}_n=F_n$ is recursive.
Let $n\mathrm{-split}(F_n)=(F',F'')$.
Let $a$ be a symbol not in $\Sigma$.
Let $F'_n=(F')_{\mathbb{E}_n\leftarrow a}\cdot_a \mathbb{E}_n + F''$.
Since $\mathcal{X}$ is closed, it holds from Proposition~\ref{prop sub par split ok} that $\mathcal{X}$ admits a solution over $\mathrm{free}(\mathcal{X})$ if and only if $\mathcal{X}'=(\mathcal{X}\setminus\{\mathbb{E}_n=F_n\})\cup\{\mathbb{E}_n=F'_n\}$ does.
From Corollary~\ref{cor forme fact pre clos}, $\mathcal{X}'$ is closed.
From Proposition~\ref{prop sol pour eq rec}, $\mathcal{X}'$ admits a solution over $\mathrm{free}(\mathcal{X})$ if and only if $\mathcal{X}'_n$ does.
From Lemma~\ref{lem contrac pres close}, $\mathcal{X}'_n$ is closed, and contains the equation $\mathbb{E}_n=((F')_{\mathbb{E}_n\leftarrow a})^{*_a}\cdot_a F''$, which is not recursive.
The existence of the solution then follows from point~(\ref{item preuve}).
\end{enumerate}
\qed
\end{proof}
In other words,
\begin{theorem}\label{thm closed syst solv}
Any closed equation system is effectively solvable.
\end{theorem}
\section{Construction of a Rational Tree Expression from an Automaton}\label{co}
In this section, we show how to extract a system of tree-language equations from a given FTA $\mathcal{A}=(\Sigma,Q,Q_f,\Delta)$.
Then, using Arden's Lemma and the transformations (contraction and substitution) defined in the previous sections, we show how to solve it by associating with each state $q$ in $Q$ an equation defining $L(q)$, and thus how to compute an equivalent rational expression.
Let us first recall a basic property of the down language of a state:
\begin{lemma}\label{lem decompo down lang}
Let $\mathcal{A}=(\Sigma,Q,Q_f,\Delta)$ be a FTA.
Let $q\in Q$ be a state.
Then:
\begin{equation*}
L(q)= \bigcup_{(f,q_1,\ldots,q_n,q)\in \Delta}f(L(q_1),\ldots,L(q_n))
\end{equation*}
\end{lemma}
\begin{proof}
Let us set $L'(q)=\bigcup_{(f,q_1,\ldots,q_n,q)\in \Delta}f(L(q_1),\ldots,L(q_n))$.
Let $t=f(t_1,\ldots,t_n)$ be a tree in $T(\Sigma)$.
Let us show that $t\in L(q)$ $\Leftrightarrow$ $t\in L'(q)$.
By definition, $t\in L(q) \Leftrightarrow q\in \delta(t)$.
Then:
\begin{align*}
q\in \delta(t) & \Leftrightarrow \exists (f,q_1,\ldots,q_n,q)\in \Delta, (\forall 1\leq i\leq n,q_i\in\delta(t_i))\\
& \Leftrightarrow \exists (f,q_1,\ldots,q_n,q)\in \Delta, (\forall 1\leq i\leq n,t_i\in L(q_i))\\
& \Leftrightarrow \exists (f,q_1,\ldots,q_n,q)\in \Delta, t\in f(L(q_1),\ldots,L(q_n))\\
& \Leftrightarrow t\in L'(q)
\end{align*}
\qed
\end{proof}
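The lemma suggests a direct fixpoint computation of the down languages, truncated at a height bound. The following Python sketch (our own encoding assumptions: trees as nested tuples, each transition $(f,q_1,\ldots,q_n,q)$ as a tuple) uses the transition table of the automaton of Figure~\ref{fig ex aut}.

```python
from itertools import product

def height(t):
    # Height of a tree encoded as a nested tuple (string leaves have height 0).
    return 1 + max(map(height, t[1:])) if isinstance(t, tuple) else 0

def down_languages(delta, bound):
    # Fixpoint of L(q) = U_{(f,q1,...,qn,q) in delta} f(L(q1),...,L(qn)),
    # restricted to trees of height <= bound.
    L = {q: set() for *_, q in delta}
    changed = True
    while changed:
        changed = False
        for f, *qs, q in delta:
            if not qs:
                new = {f}
            else:
                new = {(f, *kids)
                       for kids in product(*(L[p] for p in qs))
                       if 1 + max(map(height, kids)) <= bound}
            if not new <= L[q]:
                L[q] |= new
                changed = True
    return L

# Transitions of the FTA of Figure [fig ex aut]; final states are 1 and 3.
delta = [("b", 2), ("f", 2, 4, 2), ("f", 1, 1, 1), ("f", 2, 4, 1),
         ("a", 3), ("h", 4, 3), ("a", 4), ("h", 3, 4)]
L = down_languages(delta, bound=3)
recognized = L[1] | L[3]  # L(A), truncated at height 3
```

Restricted to height at most $3$, one finds for instance $a$, $h(a)$ and $h(h(a))$ in $L(3)$, and $f(b,a)$ in $L(1)$, and the union of the lemma can be checked on each state.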
The previous lemma can be used to define an equation system that can describe the relations between the down languages of the states of a given FTA.
Let $\mathcal{A}=(\Sigma,Q,Q_f,\Delta)$ be a FTA with $Q=\{1,\ldots,n\}$.
The \emph{equation system} associated with $\mathcal{A}$ is the set of equations $\mathcal{X}_{\mathcal{A}}$ over the variables $\mathbb{E}_1,\ldots,\mathbb{E}_n$ defined by $\mathcal{X}_{\mathcal{A}}=\{\mathcal{E}_q \mid q\in Q\}$ where for any state $q$ in $Q$, $\mathcal{E}_q$ is the equation $\mathbb{E}_q=F_q$ with $F_q= \sum_{(f,q_1,\ldots,q_n,q)\in \Delta} f(\mathbb{E}_{q_1},\ldots,\mathbb{E}_{q_n})$.
Let us show that any solution of $\mathcal{X}_{\mathcal{A}}$ denotes the down languages of the states of $\mathcal{A}$.
\begin{proposition}\label{prop lang bas ds syst}
Let $\mathcal{A}=(\Sigma,Q,Q_f,\Delta)$ be a FTA with $Q=\{1,\ldots,n\}$.
Let $\mathbb{E}=(E_1,\ldots,E_n)$ be a solution of $\mathcal{X}_{\mathcal{A}}$.
Then:
\begin{equation*}
\forall 1\leq j\leq n, L(E_j)=L(j).
\end{equation*}
\end{proposition}
\begin{proof}
Let $t$ be a tree over $\Sigma$.
Let us show by induction over $t$ that $t\in L(E_j)$ $\Leftrightarrow$ $t\in L(j)$.
\begin{enumerate}
\item Consider that $t\in\Sigma_0$.
Then
\begin{align*}
t\in L(E_j) & \Leftrightarrow t\in L(\sum_{(f,q_1,\ldots,q_n,j)\in\Delta} f(E_{q_1},\ldots,E_{q_n}))\\
& \Leftrightarrow (t,j)\in\Delta\\
& \Leftrightarrow t\in L(j)
\end{align*}
\item Otherwise, $t=g(t_1,\ldots,t_k)$ and
\begin{align*}
t\in L(E_j) & \Leftrightarrow t\in L(\sum_{(f,q_1,\ldots,q_n,j)\in\Delta} f(E_{q_1},\ldots,E_{q_n}))\\
& \Leftrightarrow \exists (g,q_1,\ldots,q_k,j)\in\Delta\wedge \forall 1\leq l\leq k, t_l\in L(E_{q_l})\\
& \Leftrightarrow \exists (g,q_1,\ldots,q_k,j)\in\Delta\wedge \forall 1\leq l\leq k, t_l\in L(q_l) & \text{ (induction hypothesis)}\\
& \Leftrightarrow t\in \bigcup_{(f,q_1,\ldots,q_n,j)\in\Delta} f(L(q_1),\ldots,L(q_n))\\
& \Leftrightarrow t\in L(j)& \text{(Lemma~\ref{lem decompo down lang})}
\end{align*}
\end{enumerate}
\qed
\end{proof}
Since $\mathcal{X}_{\mathcal{A}}$ is by definition closed, it holds from Theorem~\ref{thm closed syst solv} that
\begin{theorem}\label{thm syst aut solv}
Let $\mathcal{A}=(\Sigma,Q,Q_f,\Delta)$ be a FTA.
Then:
\begin{equation*}
\mathcal{X}_{\mathcal{A}}\text{ can be effectively solved.}
\end{equation*}
\end{theorem}
As a direct consequence of Theorem~\ref{thm syst aut solv} and of Proposition~\ref{prop lang bas ds syst}, following Equation~\eqref{eq lang et lang bas},
\begin{theorem}
Let $\mathcal{A}=(\Sigma,\{1,\ldots,n\},Q_f,\Delta)$ be a FTA.
Let $(E_1,\ldots,E_n)$ be an $n$-tuple of rational expressions denoting a solution of $\mathcal{X}_{\mathcal{A}}$.
Then:
\begin{align*}
L(\mathcal{A}) \text{ is denoted by the rational expression }\sum_{j\in Q_f} E_j.
\end{align*}
\end{theorem}
\begin{example}
Let us consider the FTA $A$ in Figure~\ref{fig ex aut}.
The system associated with $A$ is the system $\mathcal{X}$ in Example~\ref{ex syst eq}:
\begin{align*}
\mathcal{X}&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,\mathbb{E}_4)\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,\mathbb{E}_4)\\
\mathbb{E}_3 & =a+h(\mathbb{E}_4)\\
\mathbb{E}_4 & =a+h(\mathbb{E}_3)
\end{cases}
\end{align*}
Let us apply the resolution defined in the proof of Theorem~\ref{thm resol syst clos}.
We first compute $\mathcal{X}^4$:
\begin{align*}
\mathcal{X}^4&=
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_3 & =a+h(a+h(\mathbb{E}_3))\\
\mathbb{E}_4 & =a+h(\mathbb{E}_3)
\end{cases}
\end{align*}
Then we have to solve the closed subsystem
\begin{align}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_3 & =a+h(a+h(\mathbb{E}_3))
\end{cases}\label{eq syst}
\end{align}
The $3\mathrm{-split}$ of $a+h(a+h(\mathbb{E}_3))$ leads to the factorization $h(a+h(x_3))\cdot_{x_3} \mathbb{E}_3+a$, contracted into $(h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a$.
Thus, the system~\eqref{eq syst} is equivalent to
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h(\mathbb{E}_3))\\
\mathbb{E}_3 & =(h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a
\end{cases}
\end{align*}
and by substitution of $\mathbb{E}_3$ to
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))\\
\mathbb{E}_3 & =(h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a
\end{cases}
\end{align*}
Now, let us solve the new subsystem
\begin{align}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))\\
\mathbb{E}_2 & =b+f(\mathbb{E}_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))
\end{cases}\label{eq syst 2}
\end{align}
The $2\mathrm{-split}$ of $b+f(\mathbb{E}_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))$ leads to the factorization $(f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a)))\cdot_{x_2}\mathbb{E}_2+b$, contracted into $((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b$.
Consequently, the system~\eqref{eq syst 2} is equivalent to
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(\mathbb{E}_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))\\
\mathbb{E}_2 & =((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b
\end{cases}
\end{align*}
and by substitution to
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =f(\mathbb{E}_1,\mathbb{E}_1)+f(((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))\\
\mathbb{E}_2 & =((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b
\end{cases}
\end{align*}
Then, by factorization/contraction,
\begin{align*}
\mathbb{E}_1 & =(f(x_1,x_1))^{*_{x_1}}\cdot_{x_1} (f(((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b, a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a)))
\end{align*}
Finally, we obtain the solution
\begin{align*}
\begin{cases}
\mathbb{E}_1 & =(f(x_1,x_1))^{*_{x_1}}\cdot_{x_1} (f(((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a)))\\
\mathbb{E}_2 & =((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b\\
\mathbb{E}_3 & =(h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a\\
\mathbb{E}_4 & =a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a)
\end{cases}
\end{align*}
Since the final states are $1$ and $3$, it holds that $L(\mathcal{A})$ is denoted by:
\begin{align*}
\begin{split}
&(f(x_1,x_1))^{*_{x_1}}\cdot_{x_1} (f(((f(x_2,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a))))^{*_{x_2}}\cdot_{x_2} b,a+h((h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a)))\\
&\quad +(h(a+h(x_3)))^{*_{x_3}}\cdot_{x_3}a
\end{split}
\end{align*}
\end{example}
\begin{figure}[H]
\centerline{
\begin{tikzpicture}[node distance=2.5cm,bend angle=30,transform shape,scale=1]
\node[state,accepting] (q1) {$1$};
\node[state, below left of=q1] (q2) {$2$};
\node[state, below right of=q1] (q4) {$4$};
\node[state, right of=q4,accepting] (q3) {$3$};
\draw (q2) ++(-1cm,0cm) node {$b$} edge[->] (q2);
\draw (q3) ++(1cm,0cm) node {$a$} edge[->] (q3);
\draw (q4) ++(0cm,-1cm) node {$a$} edge[->] (q4);
\path[->]
(q3) edge[->,below,bend left] node {$h$} (q4)
(q4) edge[->,above,bend left] node {$h$} (q3)
;
\draw (q2) ++(0.5cm,1cm) edge[->,in=135,out=135,looseness=2] node[above right,pos=0] {$f$} (q2) edge[shorten >=0pt] (q2) edge[shorten >=0pt] (q4);
\draw (q4) ++(-0.5cm,1cm) edge[->] node[above right,pos=0] {$f$} (q1) edge[shorten >=0pt] (q2) edge[shorten >=0pt] (q4);
\draw (q1) ++(0.5cm,0.5cm) edge[->,in=90,out=45,looseness=3] node[right,pos=0] {$f$} (q1) edge[shorten >=0pt,bend left] (q1) edge[shorten >=0pt,bend right] (q1);
\end{tikzpicture}
}
\caption{The FTA $A$.}
\label{fig ex aut}
\end{figure}
\section{Conclusion}\label{con}
We present a new construction of a rational expression from a tree automaton.
This construction, based on a generalization of Arden's Lemma, gives another way to prove Kleene's theorem for trees.
In order to produce the expression, we studied the notion of tree-language equation systems and determined a sufficient condition for solving them.
The next step is to study the links that may exist between the different methods for computing an expression from an automaton, as was done in~\cite{Sak05}.
\bibliographystyle{plain}
\section*{Acknowledgements}
EC acknowledges support from CIA Postdoctoral Fellowship 2012-12062800003. GA acknowledges support from NSF DMS 1209017, 1264058, and 1317602.
RB acknowledges support from ARO MURI W911NF-09-1-0383 and AFOSR grant FA9550-14-1-0088.
\section{Introduction}
\label{sec:introduction}
In the biclustering problem, we seek to simultaneously group observations (columns) and features (rows) in a data matrix. Such data is sometimes described as two-way, or transposable, to put the rows and columns on equal footing and to emphasize the desire to uncover structure in both the row and column variables. Biclustering is used for visualization and exploratory analysis in a wide array of domains. For example, in text mining, biclustering can identify subgroups of documents with similar properties with respect to a subgroup of words \citep{Dhi2001}. In collaborative filtering, it can be used to identify subgroups of customers with similar preferences for a subset of products \citep{HofPuz1999}. Comprehensive reviews of biclustering methods can be found in \citep{MadOli2004,TanSha2005,BusPro2008}.
In this work, we focus on biclustering to identify patterns in high dimensional cancer genomic data. While a cancer, such as breast cancer, may present clinically as a homogeneous disease, it typically consists of several distinct subtypes at the molecular level. A fundamental goal of cancer research is the identification of subtypes of cancerous tumors that have similar molecular profiles and the genes that characterize each of the subtypes. Identifying these patterns is the first step towards developing personalized treatment strategies targeted to a patient's particular cancer subtype.
Subtype discovery can be posed as a biclustering problem in which gene expression data is partitioned into a checkerboard-like pattern that highlights the associations between groups of patients and groups of genes that distinguish those patients. Biclustering has had some notable successes in subtype discovery. For instance,
biclustering breast cancer data has identified sets of genes whose expression levels segregated patients into five subtypes with distinct survival outcomes \citep{SPerTib2001}. These subtypes have been reproduced in numerous studies \citep{STibPar2003}. Encouraged by these results, scientists have searched for molecular subtypes in other cancers, such as ovarian cancer \citep{TotTinGeo2008}. Unfortunately, the findings in many of these additional studies have not been as reproducible as those identified by \cite{SPerTib2001}. The failure to reproduce these other results may reflect a genuine absence of biologically meaningful groupings. But another possibility may be related to issues inherent in the computational methods currently used to identify biclusters.
While numerous methods for biclustering genomic data have been proposed \citep{MadOli2004,BusPro2008}, the most popular approach to biclustering cancer genomics data is the clustered dendrogram. This method performs hierarchical clustering \citep{HasTib2009} on the patients (columns) as well as on the genes (rows). The matrix is then re-ordered according to these separate row and column dendrograms and visualized as a heatmap with dendrograms plotted alongside the row and column axes. \Fig{lung500_hclust} illustrates an example of a clustered dendrogram of expression data from a lung cancer study. The data consists of the expression levels of 500 genes across 56 individuals, a subset of the data studied in \cite{LeeSheHua2010}. Subjects belong to one of four subgroups: Normal, Carcinoid, Colon, or Small Cell.
\begin{figure}
\centering
\begin{tabular}{cc}
\subfloat[Raw Data]{\includegraphics[scale=0.4125]{lung_hclust}
\label{fig:lung500_hclust}}
& \subfloat[COBRA Smoothed Estimate]{\includegraphics[scale=0.4125]{referee2_3_th}
\label{fig:lung500_cba}}\\ \vspace{2mm}
\end{tabular}
\caption{Heatmaps of the expression of 500 genes (rows) across 56 subjects (columns). \Fig{lung500_hclust} depicts the clustered dendrogram applied to the raw data; \Fig{lung500_cba} depicts COBRA smoothed data after reordering the columns and rows via the seriation package \citep{Hahsler2008}. Subjects belong to one of four subgroups: Normal (o), Carcinoid (x), Colon (*), and Small Cell (+).}
\end{figure}
This simple strategy seems to reasonably recover the clinical diagnoses and identify sets of genes whose dysregulation characterizes the subtypes.
As an algorithmic procedure, however, the clustered dendrogram has two characteristics that make it less than ideal for generating reproducible results. Dendrograms are constructed by greedily fusing observations (features) to decrease some criterion. Consequently the algorithm may return a biclustering that is only locally optimal with respect to the criterion. Since solutions may vary depending on how the algorithm is initialized, such procedures tend to be run from multiple initializations, but even then there is no guarantee that a global optimum will be reached. The algorithm is also not stable in the sense that small perturbations in the data can lead to large changes in the clustering assignments.
More sophisticated methods have been proposed for biclustering, some based on the singular value decomposition (SVD) of the data matrix \citep{LazOwe2002,BerIhm2003,TurBai2005,WitTib2009,LeeSheHua2010,SilKaiKop2011}, and others based on graph cuts \citep{Dhi2001,KluBasGer2003}. Some approaches are similar in spirit to the clustered dendrogram and directly cluster the rows and columns \citep{CoiGav2011,TanWit2013}. While these methods may provide worthwhile improvements in empirical performance, none of them address the two fundamental issues that dog the clustered dendrogram. Moreover, scientists may shy away from using many of these methods, since their outputs are typically not as easy to visualize as those of the simple clustered dendrogram. From a reproducible research perspective, a biclustering method should (i) give the same, ideally unique, answer regardless of how the algorithm is initialized, and (ii) be stable with respect to perturbations in the data.
In this paper, we pose biclustering as a convex optimization problem and introduce a novel Convex BiclusteRing Algorithm (COBRA) for iteratively solving it. COBRA outputs results that retain the simple interpretability and visualization of the clustered dendrogram and also possesses several key advantages over existing techniques:
\begin{inparaenum}[(i)]
\item {\bf Stability and Uniqueness:} COBRA produces the unique global minimizer to a convex program and this minimizer is continuous in the data. This means that COBRA always maps the data to a single biclustering assignment, and this solution is stable.
\item {\bf Simplicity:} COBRA employs a single tuning parameter that controls the number of biclusters.
\item {\bf Data Adaptivity:} COBRA admits a simple and principled data adaptive procedure for choosing the tuning parameter that involves solving a convex matrix completion problem.
\end{inparaenum}
Returning to our motivating lung cancer example, \Fig{lung500_cba} illustrates the results of COBRA with the tuning parameter selected according to our data adaptive procedure. After reordering the columns and rows via the seriation package \citep{Hahsler2008}, with the `TSP' option, a clearer biclustering pattern emerges.
\section{A Convex Formulation of Biclustering}
\label{sec:formulation}
Our goal is to identify the groups of rows and groups of columns in a data matrix that are associated with each other. As seen in \Fig{lung500_cba}, when rows and columns are reordered according to their groupings, a checkerboard pattern emerges, namely the elements of the matrix partitions defined by row and column groups tend to display a uniform intensity.
We now describe a probabilistic model that can generate the observed checkerboard pattern. Our data, $\M{X} \in \mathbb{R}^{p \times n}$, consists of $n$ samples drawn from a $p$-dimensional feature space. Suppose that the latent checkerboard structure is defined by $R$ feature groups and $C$ observation groups. If the $ij$th entry in $\M{X}$ belongs to the cluster defined by the $r$th feature group and $c$th observation group, then we assume that the observed value $\ME{x}{ij}$ is given by
$\ME{x}{ij} = \ME{\mu}{0} + \ME{\mu}{rc} + \VE{\varepsilon}{ij}$,
where $\ME{\mu}{0}$ is a baseline or grand mean shared by all entries of the data matrix, $\ME{\mu}{rc}$ is the mean of the cluster defined by the $r$th row partition and $c$th column partition, and $\VE{\varepsilon}{ij}$ are iid\@ $N(0,\sigma^2)$ for some $\sigma^2 > 0$. To ensure identifiability of the mean parameters, we assume that $\ME{\mu}{0} = 0$, which can be achieved by removing the grand sample mean from the data matrix $\M{X}$.
This biclustering model corresponds to a checkerboard mean model \citep{MadOli2004}. This model is most similar to that assumed by \cite{TanWit2013} who propose methods to estimate a checkerboard-like structure with some of the bicluster mean entries being sparse. The checkerboard model is exhaustive in that each matrix element is assigned to one bicluster. This is in contrast to other biclustering models that identify potentially overlapping row and column subsets that are not exhaustive; these are typically estimated using SVD-like methods \citep{LazOwe2002,BerIhm2003,TurBai2005,WitTib2009,LeeSheHua2010,SilKaiKop2011} or methods to find hot-spots \citep{ShaWeiNob2009}.
Estimating the checkerboard model parameters consists of finding the partitions and the mean values of each partition. Estimating $\ME{\mu}{rc}$, given feature and observation clusterings, is trivial. Let $\mathcal{R}$ and $\mathcal{C}$ denote the indices of the $r$th row partition and $c$th column partition. The maximum likelihood estimate of $\ME{\mu}{rc}$ is simply the sample mean of the entries of $\M{X}$ over the indices defined by $\mathcal{R}$ and $\mathcal{C}$, namely
$\ME{\mu}{rc} = \frac{1}{\lvert \mathcal{R} \rvert\lvert \mathcal{C} \rvert}\sum_{i \in \mathcal{R}, j \in \mathcal{C}} \ME{X}{ij}.$
In contrast, estimating the row and column partitions is a combinatorially hard problem and characterizes the main objective of biclustering. This task is akin to best subset selection in regression problems \citep{HasTib2009}. However, just as the best subset selection problem has been successfully attacked by solving a convex surrogate problem, namely the Lasso \citep{Tib1996}, we will develop a convex relaxation of the combinatorially hard problem of selecting the row and column partitions.
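Given candidate row and column partitions, the within-bicluster mean estimate above is a one-line computation. A minimal numpy sketch (the partition indices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))          # p = 6 features, n = 8 observations

# Hypothetical partitions: rows {0, 1, 2} form one feature group,
# columns {0, 1, 3} one observation group.
R = [0, 1, 2]
C = [0, 1, 3]

# MLE of mu_rc: the sample mean of X over the bicluster R x C.
mu_rc = X[np.ix_(R, C)].mean()

assert np.isclose(mu_rc,
                  sum(X[i, j] for i in R for j in C) / (len(R) * len(C)))
```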
We propose to identify the partitions by minimizing the following convex criterion
\begin{eqnarray}
F_{\gamma}(\M{U}) & = & \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + \gamma \underbrace{\left [\Omega_{\M{W}}(\M{U}) + \Omega_{\Mtilde{W}}(\M{U}\Tra) \right ]}_{J(\M{U})},
\label{eq:biclust_objective_function}
\end{eqnarray}
where $\Omega_{\M{W}}(\M{U}) = \sum_{i<j}w_{ij} \lVert \M{U}_{\cdot i}-\M{U}_{\cdot j} \rVert_2$,
and $\M{U}_{\cdot i}$ ($\M{U}_{i \cdot}$) denotes the $i$th column (row) of the matrix $\M{U}$.
We have posed the biclustering problem as a penalized regression problem, where
the matrix $\M{U} \in \mathbb{R}^{p \times n}$ is our estimate of the means matrix $\M{\mu}$. The quadratic term quantifies how well $\M{U}$ approximates $\M{X}$. The regularization term $J(\M{U})$ penalizes deviations away from a checkerboard pattern. The parameter $\gamma \geq 0$ tunes the tradeoff between the two terms. The parameters $w_{ij} = w_{ji}$ and $\tilde{w}_{ij} = \tilde{w}_{ji}$ are non-negative weights that will be explained shortly.
The penalty term $J(\M{U})$ is closely related to other sparsity inducing penalties. When only the rows or columns are being clustered, minimizing the objective function in \Eqn{biclust_objective_function} corresponds to solving a convex clustering problem \citep{PelDeSuy2005,HocVerBac2011,LinOhlLju2011} under an $\ell_2$-norm fusion penalty.
The convex clustering problem in turn can be seen as a generalization of the Fused Lasso \citep{TibSauRos2005}.
When the $\ell_1$-norm is used in place of the $\ell_2$-norm in $\Omega_{\M{W}}(\M{U})$, we recover a special case of the general Fused Lasso \citep{TibTay2011}.
Other norms can also be employed in our framework; see for example \cite{ChiLan2015}. In this paper, we restrict ourselves to the $\ell_2$-norm since it is rotationally invariant. In general, we do not want a procedure whose biclustering output may non-trivially change when the coordinate representation of the data is trivially changed.
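The objective in \Eqn{biclust_objective_function} is straightforward to evaluate directly. The numpy sketch below does so with hypothetical dense, uniform weight matrices purely for illustration; in practice the weights are chosen as described later in this section:

```python
import numpy as np

def fusion_penalty(U, w):
    """Omega_w(U) = sum over pairs i < j of w[i, j] * ||U[:, i] - U[:, j]||_2."""
    n = U.shape[1]
    return sum(w[i, j] * np.linalg.norm(U[:, i] - U[:, j])
               for i in range(n) for j in range(i + 1, n))

def cobra_objective(X, U, gamma, w_col, w_row):
    """F_gamma(U) = 0.5 * ||X - U||_F^2 + gamma * [Omega_w(U) + Omega_wtilde(U^T)]."""
    fit = 0.5 * np.linalg.norm(X - U) ** 2
    J = fusion_penalty(U, w_col) + fusion_penalty(U.T, w_row)
    return fit + gamma * J

rng = np.random.default_rng(1)
p, n = 4, 5
X = rng.normal(size=(p, n))
w_col = np.ones((n, n)) / (n * (n - 1) / 2)   # toy uniform column weights
w_row = np.ones((p, p)) / (p * (p - 1) / 2)   # toy uniform row weights

# At gamma = 0 the objective is minimized by U = X, with value 0.
assert cobra_objective(X, X, 0.0, w_col, w_row) == 0.0
```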
To understand how the regularization term $J(\M{U})$ steers solutions toward a checkerboard pattern, consider the effects of $\Omega_{\M{W}}(\M{U})$ and $\Omega_{\Mtilde{W}}(\M{U}\Tra)$ separately. Suppose $J(\M{U}) = \Omega_{\M{W}}(\M{U})$. The $i$th column $\M{U}_{\cdot i}$ of the matrix $\M{U}$ can be viewed as a cluster center or centroid attached to the $i$th column $\M{x}_{\cdot i}$ of the data matrix $\M{X}$. When $\gamma=0$, the minimum is attained when $\M{U}_{\cdot i}=\M{x}_{\cdot i}$, and each column occupies a unique column cluster.
As $\gamma$ increases, the cluster centroids are shrunk together and in fact begin to coalesce. Two columns $\M{x}_{\cdot i}$ and $\M{x}_{\cdot j}$ are assigned to the same column partition if $\M{U}_{\cdot i} = \M{U}_{\cdot j}$. We will prove later that for sufficiently large $\gamma$, all columns coalesce into a single cluster. Similarly, suppose $J(\M{U}) = \Omega_{\Mtilde{W}}(\M{U}\Tra)$ and view the rows of $\M{U}$ as the cluster centroids attached to the rows of $\M{X}$. As $\gamma$ increases, the row centroids will begin to coalesce, and we likewise say that the $i$th and $j$th rows of $\M{X}$ belong to the same row partition if their centroid estimates $\M{U}_{i \cdot}$ and $\M{U}_{j \cdot}$ coincide.
When $J(\M{U})$ includes both $\Omega_{\M{W}}(\M{U})$ and $\Omega_{\Mtilde{W}}(\M{U}\Tra)$, rows and columns of $\M{U}$ are {\em simultaneously} shrunk towards each other as the parameter $\gamma$ increases. The penalized estimates exhibit the desired checkerboard structure as seen in \Fig{lung500_cba}. Note this shrinkage procedure is fundamentally different from methods like the clustered dendrogram, which independently cluster the rows and columns. By coupling row and column clustering, our formulation explicitly seeks out a solution with a checkerboard mean structure.
We now address choosing the weights $w_{ij}$ and $\tilde{w}_{ij}$. A judicious choice of the weights enables us to (i) employ a single regularization parameter $\gamma$, (ii) obtain more parsimonious clusterings, and (iii) speed up the convergence of key subroutines employed by COBRA. With these goals in mind, we recommend weights with the following properties:
\begin{inparaenum}[(i)]
\item The column weights should sum to $1/\sqrt{p}$ and the row weights should sum to $1/\sqrt{n}$.
\item The column weight $w_{ij}$ should be inversely proportional to the distance between the $i$th and $j$th columns $\lVert \M{X}_{\cdot i} - \M{X}_{\cdot j} \rVert_2$. The row weights should be assigned analogously.
\item The weights should be sparse, namely consist mostly of zeros.
\end{inparaenum}
We now discuss the rationale behind our weight recommendations. The key to ensuring that a single tuning parameter suffices for identifying the checkerboard pattern is keeping the two penalty terms
$\Omega_{\M{W}}(\M{U})$ and $\Omega_{\Mtilde{W}}(\M{U}\Tra)$ on the same scale. If this does not hold, then either the column or the row clustering will dominate. Consequently, since the columns are in $\mathbb{R}^p$ and the rows are in $\mathbb{R}^n$, we choose the column weights to sum to $1/\sqrt{p}$ and the row weights to sum to $1/\sqrt{n}$. Using weights that are inversely proportional to the distances between points more aggressively shrinks rows and columns that are more similar to each other, and less aggressively shrinks rows and columns that are less similar. Finally, sparse weights expedite computation. COBRA solves a sequence of convex clustering problems. The algorithm we employ for solving the convex clustering subproblem scales in storage and computational operations as $\mathcal{O}(npq)$, where $q$ is the number of non-zero weights. Shrinking all pairs of columns (rows) by taking $q = n^2$ ($q = p^2$) not only increases computational costs but also typically produces inferior clusterings compared to sparser weightings, as seen in \cite{ChiLan2015}. We employ the sparse Gaussian kernel weights described in \cite{ChiLan2015}, which satisfy the properties outlined above. Additional discussion on this choice of weights is given in Web Appendix A.
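As a concrete illustration, the sketch below builds sparse Gaussian kernel weights in the flavor of \cite{ChiLan2015}; the kernel constant, neighborhood size, and normalization are illustrative choices, not the exact recipe of Web Appendix A:

```python
import numpy as np

def gaussian_knn_weights(Z, k=5, phi=0.5, scale=1.0):
    """Sparse Gaussian kernel weights between the columns of Z.

    w[i, j] = exp(-phi * ||z_i - z_j||^2) if j is among the k nearest
    neighbors of i (or vice versa), and 0 otherwise.  The symmetric
    matrix stores each pair twice, so its total sum is rescaled to
    `scale`; pass scale = 2/sqrt(p) to make the distinct pairs sum
    to 1/sqrt(p), as recommended for the column weights.
    """
    n = Z.shape[1]
    d2 = np.array([[np.sum((Z[:, i] - Z[:, j]) ** 2) for j in range(n)]
                   for i in range(n)])
    w = np.exp(-phi * d2)
    np.fill_diagonal(w, 0.0)
    # Keep only (symmetrized) k-nearest-neighbor pairs; zero out the rest.
    keep = np.zeros_like(w, dtype=bool)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self at distance 0
        keep[i, nbrs] = True
    keep = keep | keep.T
    w = np.where(keep, w, 0.0)
    return scale * w / w.sum()

p, n = 10, 20
Z = np.random.default_rng(2).normal(size=(p, n))
w = gaussian_knn_weights(Z, k=5, scale=2.0 / np.sqrt(p))
assert np.allclose(w, w.T) and np.isclose(w.sum(), 2.0 / np.sqrt(p))
```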
\section{Properties of the Convex Formulation and COBRA's Solution}
\label{sec:solution}
The solution to minimizing \Eqn{biclust_objective_function} has several attractive properties as a function of the data $\M{X}$, the regularization parameter $\gamma$, and its weights $\M{W} = \{w_{ij}\}$ and $\Mtilde{W} = \{\tilde{w}_{ij}\}$, some of which can be exploited to expedite its numerical computation. We emphasize that these results are inherent to the minimization of the objective function \Eqn{biclust_objective_function}. They hold regardless of the algorithm used to find the minimum point, since they are a consequence of casting the biclustering problem as a convex program. Proofs of all propositions can be found in Web Appendix B. First, minimizing \Eqn{biclust_objective_function} is a well-posed optimization problem.
\begin{proposition}
\label{prop:existence_uniqueness} The function $F_\gamma(\M{U})$ defined in \Eqn{biclust_objective_function} has a unique global minimizer.
\end{proposition}
Furthermore, since $F_\gamma(\M{U})$ is convex, the only local minimum is the global minimum, and any algorithm that converges to a stationary point of $F_\gamma(\M{U})$ will converge to the global minimizer.
The next result will have consequences for numerical computation and is also the foundation underpinning COBRA's stability.
\begin{proposition}
\label{prop:solution_path_continuity} The minimizer $\M{U}^\star$ of \Eqn{biclust_objective_function} is jointly continuous in $(\M{X},\gamma, \M{W},\Mtilde{W})$.
\end{proposition}
Continuity of $\M{U}^\star$ in the regularization parameter $\gamma$ suggests employing warm-starts, or using the solution at one $\gamma$ as the starting point for a problem with a slightly larger $\gamma$, because small changes in $\gamma$ will result in small changes in $\M{U}^\star$. Continuity of $\M{U}^\star$ in $\M{X}$ tells us that the solution varies smoothly with perturbations in the data, our main stability result. Recall that the $i$th and $j$th columns (rows) are assigned to the same cluster if $\M{U}^\star_{\cdot i} = \M{U}^\star_{\cdot j}$ ($\M{U}^\star_{i\cdot} = \M{U}^\star_{j\cdot}$). Since $\M{U}^\star$ varies smoothly in the data, so do the differences $\M{U}^\star_{\cdot i} - \M{U}^\star_{\cdot j}$. While assigning entries to biclusters is an inherently discrete operation, we will see in \Sec{stability} that this continuity result manifests itself in assignments that are robust to perturbations in the data. Continuity of $\M{U}^\star$ in the weights also gives us stability guarantees and assurances that COBRA's biclustering results should be locally insensitive to changes in the weights.
The parameter $\gamma$ tunes the complexity of COBRA's solution. We next show that COBRA's most complicated (small $\gamma$) and simplest (large $\gamma$) solutions coincide with the clustered dendrogram's most complicated and simplest solutions.
Clearly when $\gamma = 0$, the solution is just the data, namely $\M{U}^\star = \M{X}$. To get some intuition on how the solution behaves as $\gamma$ increases, observe that the penalty $J(\M{U})$ is a semi-norm. Moreover, under suitable conditions on the weights, spelled out in \As{connectedness} below, $J(\M{U})$ is zero if and only if $\M{U}$ is a constant matrix.
\begin{assumption}
\label{as:connectedness}
For any pair of columns (rows), indexed by $i$ and $j$ with $i < j$, there exists a sequence of indices $i \rightarrow k \rightarrow \cdots \rightarrow l \rightarrow j$ along which the weights $w_{ik}, \ldots, w_{lj}$ ($\tilde{w}_{ik}, \ldots, \tilde{w}_{lj}$) are positive.
\end{assumption}
\begin{proposition}
\label{prop:zero}
Under \As{connectedness}, $J(\M{U}) = 0$ if and only if $\M{U} = c\V{1}\V{1}\Tra$ for some $c \in \mathbb{R}$.
\end{proposition}
This result suggests that as $\gamma$ increases the solution to the biclustering problem converges to the solution of the following constrained optimization problem:
\begin{eqnarray*}
\underset{\M{U}}{\min}\; \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 \quad \text{subject to $\M{U} = c\V{1}\V{1}\Tra$ for some $c \in \mathbb{R}$},
\end{eqnarray*}
the solution to which is just the global mean $\Mbar{X}$, whose entries are all identically the average value of $\M{X}$ over all its entries. The next result formalizes our intuition that the centroids eventually coalesce to $\Mbar{X}$ as $\gamma$ becomes sufficiently large.
\begin{proposition}
\label{prop:coalesce}
Under \As{connectedness}, $F_{\gamma}(\M{U})$ is minimized by the grand mean $\Mbar{X}$ for $\gamma$ sufficiently large.
\end{proposition}
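The grand-mean claim can be verified directly: substituting the constraint $\M{U} = c\V{1}\V{1}\Tra$ reduces the constrained problem above to a one-dimensional least squares fit in $c$,

```latex
\begin{eqnarray*}
0 \; = \; \frac{d}{dc}\, \frac{1}{2}\sum_{i,j}\bigl(\ME{x}{ij} - c\bigr)^2
\; = \; -\sum_{i,j}\bigl(\ME{x}{ij} - c\bigr)
\quad \Longrightarrow \quad
c^\star \; = \; \frac{1}{np}\sum_{i,j} \ME{x}{ij},
\end{eqnarray*}
```

so the constrained minimizer is $c^\star \V{1}\V{1}\Tra = \Mbar{X}$.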
Thus, as $\gamma$ increases from 0, the centroid matrix $\M{U}^\star$ traces a continuous solution path that starts from $np$ biclusters, with $\ME{U}{ij} = \ME{x}{ij}$, and ends at a single bicluster, with $\ME{U}{ij} = (1/np)\sum_{i'j'} \ME{x}{i'j'}$ for all $i,j$.
\section{Estimation of Biclusters with COBRA}
\label{sec:algorithm}
Having characterized our estimator of the checkerboard means as the minimizer to \Eqn{biclust_objective_function}, we now turn to the task of computing it. From here on, we fix the data $\M{X}$ and the weights $\M{W}$ and $\Mtilde{W}$ and consider the biclustering solution as
a function of the parameter $\gamma$, denoting the solution as $\M{U}_\gamma$.
The penalty term $J(\M{U})$ in \Eqn{biclust_objective_function} makes minimization challenging since it is non-smooth and not separable over any block partitioning of $\M{U}$. Coordinate descent \citep{FriHasH2007,WuLan2008} is an effective solution when the non-smooth penalty term is separable over some block partitioning of the variables, which is unfortunately not the case for \Eqn{biclust_objective_function}. Another popular iterative method for minimizing non-smooth convex functions is the alternating direction method of multipliers (ADMM) \citep{BoyParChu2011}. While an ADMM algorithm is feasible, we take a more direct approach with the Dykstra-like proximal algorithm (DLPA) proposed by \cite{BauCom2008} because it yields a simple meta-algorithm that can take advantage of fast solvers for the convex clustering problem.
DLPA generalizes a classic algorithm for fitting restricted least squares regression problems \citep{Dyk1983} and solves minimization problems of the form
\begin{eqnarray}
\label{eq:dlpa}
\underset{\M{U}}{\min}\; \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + f(\M{U}) + g(\M{U}),
\end{eqnarray}
where $f$ and $g$ are lower-semicontinuous, convex functions. The biclustering problem is clearly an instance of \Eqn{dlpa}. Setting $f = \gamma \Omega_{\M{W}}$ and $g = \gamma\Omega_{\Mtilde{W}}$ in \Eqn{dlpa} gives us the pseudocode for COBRA shown in \Alg{COBRA}. The operation $\mathop{\rm prox}\nolimits_{\gamma \Omega_{\M{W}}}(\M{Z})$ is the proximal mapping of the function $\gamma \Omega_{\M{W}}$ and is defined to be
\begin{eqnarray*}
\mathop{\rm prox}\nolimits_{\gamma \Omega_{\M{W}}}(\M{Z}) & = & \underset{\M{V}}{\arg\min}\;\left[\frac{1}{2} \lVert \M{Z} - \M{V} \rVert_{\text{F}}^2 + \gamma \Omega_{\M{W}}(\M{V})\right].
\end{eqnarray*}
Each proximal mapping in \Alg{COBRA} corresponds to solving a convex clustering problem.
COBRA is quite intuitive. The matrices $\M{Y}_m\Tra$ and $\M{U}_m$ are estimates of the means matrix at the $m$th iteration. The matrices $\M{P}_m$ and $\M{Q}_m$ encode discrepancies between these two estimates. We alternate between clustering the rows of the matrix $\M{U}_m + \M{P}_m$ and the columns of the matrix $\M{Y}_m + \M{Q}_m$. The following result guarantees that $\M{Y}_m\Tra$ and $\M{U}_m$ converge to the desired solution.
\begin{proposition}
\label{prop:COBRA} The COBRA iterates $\M{U}_m$ and $\M{Y}_m\Tra$ in \Alg{COBRA} converge to the unique global minimizer of the convex biclustering objective \Eqn{biclust_objective_function}.
\end{proposition}
\Prop{COBRA} not only ensures the algorithmic convergence of \Alg{COBRA}, but it also provides a natural stopping rule. We stop iterating once
$\lVert \M{U}_m - \M{Y}_m\Tra \rVert_{\text{F}}$ falls below some tolerance $\tau > 0$.
A proof of \Prop{COBRA}, as well as additional technical details and discussion on DLPA and COBRA, can be found in Web Appendix C.
\begin{algorithm}[t]
Set $\M{U}_0 = \M{X}$, $\M{P}_0 = \M{0}$, and $\M{Q}_{0} = \M{0}$.
\begin{algorithmic}[0]
\caption{Convex BiclusteRing Algorithm (COBRA)}
\label{alg:COBRA}
\Repeat
\State $\M{Y}_{m} = \mathop{\rm prox}\nolimits_{\gamma\Omega_{\Mtilde{W}}}(\M{U}_m\Tra + \M{P}_m\Tra)$
\Comment Convex Clustering of Rows
\State $\M{P}_{m+1} = \M{U}_m + \M{P}_m - \M{Y}_m\Tra$
\State $\M{U}_{m+1} = \mathop{\rm prox}\nolimits_{\gamma\Omega_{\M{W}}}(\M{Y}_m\Tra + \M{Q}_m\Tra)$
\Comment Convex Clustering of Columns
\State $\M{Q}_{m+1} = \M{Y}_m + \M{Q}_m - \M{U}_{m+1}\Tra$
\Until{convergence}
\end{algorithmic}
\end{algorithm}
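The Dykstra-like splitting in \Alg{COBRA} is easy to prototype. The sketch below mirrors the pseudocode exactly but, to keep the example self-contained, replaces the convex-clustering proximal maps (which have no closed form) with the closed-form prox of the Frobenius norm, $\mathop{\rm prox}_{\gamma\lVert\cdot\rVert_{\text{F}}}(\M{Z}) = (1 - \gamma/\lVert\M{Z}\rVert_{\text{F}})_+ \M{Z}$. With this stand-in, $f + g = 2\gamma\lVert\cdot\rVert_{\text{F}}$, so the iterates have an exact target to check against:

```python
import numpy as np

def prox_frob(Z, gamma):
    """Prox of gamma * ||.||_F: scale Z toward zero.  A closed-form
    stand-in for the convex-clustering prox used by the real COBRA."""
    nrm = np.linalg.norm(Z)
    return max(0.0, 1.0 - gamma / nrm) * Z if nrm > 0 else Z

def dlpa(X, gamma, prox_f, prox_g, max_iter=100, tol=1e-10):
    """Dykstra-like proximal algorithm for
    min_U 0.5 * ||X - U||_F^2 + f(U) + g(U), following the pseudocode."""
    U = X.copy()
    P = np.zeros_like(X)     # discrepancy variable for the row step
    Q = np.zeros_like(X.T)   # discrepancy variable for the column step
    for _ in range(max_iter):
        Y = prox_f((U + P).T, gamma)        # "row" step on the transpose
        P = U + P - Y.T
        U_new = prox_g((Y + Q).T, gamma)    # "column" step
        Q = Y + Q - U_new.T
        if np.linalg.norm(U_new - Y.T) < tol:   # natural stopping rule
            return U_new
        U = U_new
    return U

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 4))
gamma = 0.5
U = dlpa(X, gamma, prox_frob, prox_frob)

# With f = g = gamma * ||.||_F, the minimizer is prox_{2 gamma ||.||_F}(X).
U_exact = prox_frob(X, 2.0 * gamma)
assert np.allclose(U, U_exact, atol=1e-8)
```

Swapping `prox_frob` for a genuine convex-clustering prox solver recovers the full COBRA iteration.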
An advantage of using DLPA is that COBRA is agnostic to the actual algorithm used to compute the proximal mapping. This flexibility matters because $\mathop{\rm prox}\nolimits_{\gamma\Omega_{\M{W}}}(\M{Z})$ cannot be computed analytically. In this paper we use the alternating minimization algorithm (AMA) introduced in \cite{ChiLan2015} to solve the convex clustering problem. The algorithm performs projected gradient ascent on the Lagrangian dual problem. Its main advantage is that it requires computational work and storage that is linear in the size of the data matrix $\M{X}$, when we use the sparse Gaussian kernel weights described in Web Appendix A. A second advantage is that hard clustering assignments are trivially obtained from variables employed in the splitting method. Nonetheless, the DLPA framework makes it trivial to swap in more efficient solvers that may become available in the future.
\section{Model Selection}
\label{sec:tuning_parameter}
Estimating the number of clusters or biclusters in a data set is a major challenge. With many existing biclustering methods, this is further exacerbated by the many tuning parameters that must be selected and the fact that biclustering assignments do not always change smoothly with the number of biclusters. For example, the sparse SVD method \citep{LeeSheHua2010} requires three tuning parameters: two parameters controlling the sparsity of the left and right singular vectors of the sparse SVD and one controlling its rank. Furthermore, selecting the number of biclusters and other tuning parameters can be a major computational burden for large data sets. For example, the sparse biclustering method \citep{TanWit2013} uses cross-validation to select the number of row and column partitions. This can be time consuming if a large range of possible numbers of row and column partitions is explored. In contrast, COBRA has a single parameter $\gamma$ that controls both the number of biclusters and the bicluster assignments; moreover, both vary smoothly with $\gamma$.
\subsection{Hold-Out Validation}
We present a simple but principled approach to selecting $\gamma$ in a data-driven manner by posing the model selection problem as another convex program.
We randomly select a hold-out set of elements in the data matrix and assess the quality of a model $\M{U}_\gamma$ on how well it predicts the hold-out set. This idea was first proposed by \cite{Wol1978} for model selection in principal component analysis and has been used more recently to select tuning parameters for matrix completion problems \citep{MazHasTib2010}. Denote these index pairs $\Theta \subset \{1, \ldots, p\} \times \{1, \ldots, n\}$, and let $\lvert \Theta \rvert$ denote the cardinality of the set $\Theta$. We may select a relatively small fraction of the elements, say 10\%, for validation, namely $\lvert \Theta \rvert \approx 0.1 \times np$. Denote the projection operator onto the set of indices $\Theta$ by $\mathcal{P}_{\Theta}(\M{X})$. The $ij$th entry of $\mathcal{P}_{\Theta}(\M{X})$ is $\ME{x}{ij}$ if $(i,j) \in \Theta$ and is zero otherwise.
We then solve the following convex optimization problem
\begin{eqnarray}
\label{eq:validation}
\underset{\M{U}}{\min}\; \tilde{F}_\gamma(\M{U}) & := &
\frac{1}{2} \lVert \mathcal{P}_{\Theta^c}(\M{X}) - \mathcal{P}_{\Theta^c}(\M{U}) \rVert_{\text{F}}^2 + \gamma J(\M{U})
\end{eqnarray}
for a sequence of $\gamma \in \mathcal{G} = \{\gamma_1=0, \ldots, \gamma_{\max}\}$. We denote the minimizer of $\tilde{F}_\gamma(\M{U})$ by $\M{U}_{\gamma}$.
We choose the $\gamma$ that minimizes the prediction error over the hold-out set $\Theta$, namely
$\gamma^\star = \underset{\gamma \in \mathcal{G}}{\arg\min}\; \lVert \mathcal{P}_\Theta(\M{X}) - \mathcal{P}_\Theta(\M{U}_\gamma) \rVert_\text{F}.$
\subsection{Solving the Hold-Out Problem}
\label{sec:MM_algorithm}
The problem defined in (\ref{eq:validation}) can be seen as a convex matrix completion problem. \Alg{MM} summarizes a simple procedure for reducing the minimization of $\tilde{F}_\gamma(\M{U})$ to solving a sequence of complete-data biclustering problems of the form \Eqn{biclust_objective_function}. The solution from the previous iteration is used to fill in the missing entries in the current iteration; COBRA is then applied to the completed data matrix. This approach is identical to the soft-impute approach of \cite{MazHasTib2010} for solving the matrix completion problem, with our fusion penalty $J(\M{U})$ in place of the nuclear norm penalty. The similarity is not a coincidence, as both procedures are instances of a majorization-minimization (MM) algorithm \citep{LanHunYan2000} that applies the same majorization to the smooth quadratic term. We defer details on this connection to Web Appendix D. The imputation procedure in \Alg{MM} has the following convergence guarantee.
\begin{proposition}
\label{prop:mm_algorithm}
The limit points of the sequence of iterates $\Mn{U}{m}$ of \Alg{MM} are solutions to \Eqn{validation}.
\end{proposition}
Thus, we have turned the model selection problem of selecting both the number of biclusters and bicluster assignments into a principled convex program with strong convergence guarantees.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\caption{COBRA with missing data}
\label{alg:MM}
\State Initialize $\Mn{U}{0}$.
\Repeat
\State $\M{M} \gets \mathcal{P}_{\Theta^c}(\M{X}) + \mathcal{P}_{\Theta}(\Mn{U}{m})$
\State $\Mn{U}{m+1} \gets \text{COBRA}(\M{M})$
\Until{convergence}
\end{algorithmic}
\end{algorithm}
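The hold-out workflow, imputation loop included, fits in a few lines. In the sketch below, COBRA is replaced by a hypothetical stand-in denoiser (shrinkage toward the grand mean) so the example is self-contained and runnable; the structure of the imputation loop and the hold-out selection of $\gamma$ follow the procedure above:

```python
import numpy as np

def solver_stub(M, gamma):
    """Stand-in for COBRA: shrink the complete matrix toward its grand
    mean as gamma grows.  Illustrative only, not the real estimator."""
    t = gamma / (1.0 + gamma)
    return (1 - t) * M + t * M.mean()

def impute_fit(X, mask, gamma, solver, n_iter=50):
    """Alternate imputing held-out entries with the current fit and
    re-solving on the completed matrix.  `mask` is True on Theta."""
    U = np.where(mask, X.mean(), X)     # initialize held-out entries
    for _ in range(n_iter):
        M = np.where(mask, U, X)        # P_{Theta^c}(X) + P_Theta(U)
        U = solver(M, gamma)
    return U

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 30))
mask = rng.random(X.shape) < 0.1        # hold out ~10% of the entries

# Select gamma by minimizing prediction error on the hold-out set Theta.
gammas = [0.0, 0.1, 1.0, 10.0]
errs = [np.linalg.norm((X - impute_fit(X, mask, g, solver_stub)) * mask)
        for g in gammas]
gamma_star = gammas[int(np.argmin(errs))]
```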
\section{Simulation Studies}
\label{sec:comparison}
We compare COBRA and two other biclustering methods that also assume an underlying checkerboard mean structure. The first is the clustered dendrogram; hard biclustering assignments are made for the clustered dendrogram using the widely used dynamic tree cutting algorithm \citep{LanZha2008} implemented in the package {\tt dynamicTreeCut}. The second is the sparse biclustering method \citep{TanWit2013} implemented in the package {\tt sparseBC}. All tuning parameters were selected according to the methods provided in the respective R packages.
Assessing the quality of a clustering is almost as hard as the clustering problem itself, as evidenced by a plethora of quantitative measures for comparing how similar two clusterings are. In this paper, we use the following three measures: the Rand index (RI), the adjusted Rand index (ARI), and the variation of information (VI). We included the RI \citep{Ran1971}, since it is one of the most widely used criteria for comparing partitions; it maps a pair of partitions to a number between 0 and 1, where 1 indicates perfect agreement between two partitions. Despite its popularity, the RI has some limitations (See Web Appendix E). Consequently, we also use the ARI \citep{HubAra1985}, which was engineered to address deficiencies in the RI. Like the RI, the ARI takes a maximum value of 1 if the clustering solution is identical to the true structure and takes a value close to 0 if the clustering result is obtained from random partitioning. Finally, we compared clustering results using the VI \citep{Meila2007}. Unlike the RI and ARI, the VI is a metric. Consequently, under the VI we can speak rigorously about a neighborhood around a given clustering. In fact, the nearest neighbors of a clustering $\mathcal{C}$ under the VI metric are clusterings obtained by splitting or merging small clusters in $\mathcal{C}$. While not as popular as the RI and ARI, the VI is perhaps the most appropriate for assessing two hierarchical clusterings. As a metric, the VI takes a minimum value of 0 when there is perfect agreement between two partitions. Definitions of these criteria are given in Web Appendix E.
To simulate data with a checkerboard partition mean pattern, we consider the partition of the data matrix as the set of biclusters induced by taking the cross-product of the row and column groups. For all experiments, we used a single fixed weight assignment rule (see Web Appendix A); consequently, COBRA selects a single tuning parameter $\gamma$ by validation. We perform computations in serial on a multi-core computer with 24 3.3 GHz Intel Xeon processors and 189 GB of RAM. Note that run times may vary depending on the input parameters chosen. For example, COBRA estimates can be computed to higher accuracy at greater computational cost, and sparse biclustering can explore a wider range of candidate row and column clusters at greater computational cost. To be fair, we did not cherry-pick these parameters but picked reasonable values and used them throughout our numerical experiments. For example, in sparse biclustering, when there are 8 true row clusters and 8 true column clusters, we set the range of row and column clusters to be 1 to 12.
Finally, we also considered two variants of COBRA that employ standard refinements on the Lasso: the adaptive Lasso \citep{Zou2006} and the thresholded Lasso \citep{Meinshausen2009}. These refinements address the well known issue that the Lasso tends to select too many variables. Thus, we anticipate that COBRA estimates may identify too many biclusters. Consequently, we also compared the performance of an adaptive COBRA and thresholded COBRA in our study. Details on these refinements are in Web Appendix G.
We compare COBRA, the clustered dendrogram, and the sparse biclustering (spBC) algorithm of \cite{TanWit2013} on their abilities to recover a checkerboard pattern.
Again, for the clustered dendrogram, row and column assignments are made with the dynamic tree cutting (DCT) method \citep{LanZha2008}. We simulate a $200\times200$ data matrix with a checkerboard bicluster structure, where $\ME{X}{ij} \sim$ iid~$N(\mu_{rc},\sigma^2)$ and $\mu_{rc}$ took on one of 25 equally spaced values between $-6$ and 6, namely $\mu_{rc} \sim$ Uniform$\{-6,-5.5,\ldots,5.5,6\}$. We consider a high signal-to-noise ratio (SNR) situation, where the minimum difference among bicluster means ($0.5$) is comparable to the noise ($\sigma = 1.5$), and a low SNR one, where the minimum difference is dominated by the noise ($\sigma=3.0$).
To assess the performance as the number of column and row clusters are varied, we generated data using 16, 32, and 64 biclusters, corresponding to 2, 4, and 8 row groups and 8 column groups, respectively.
Since typical clusters will not be equal in size, rows and columns are assigned to each of the groups randomly according to a non-uniform distribution. The probability that a row is assigned to the $i$th group is inversely proportional to $i$. Columns are assigned analogously to groups.
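The simulation design above can be sketched compactly; the generator below follows the stated recipe (bicluster means uniform on a 25-point grid in $[-6, 6]$, group assignment probabilities proportional to $1/i$), with dimensions reduced for a quick check:

```python
import numpy as np

def checkerboard_data(p=200, n=200, R=8, C=8, sigma=1.5, seed=0):
    """Simulate the checkerboard model: X[i, j] = mu_rc + N(0, sigma^2)
    noise, with rows (columns) assigned to group i with probability
    proportional to 1/i."""
    rng = np.random.default_rng(seed)
    pr = 1.0 / np.arange(1, R + 1); pr /= pr.sum()
    pc = 1.0 / np.arange(1, C + 1); pc /= pc.sum()
    row_grp = rng.choice(R, size=p, p=pr)
    col_grp = rng.choice(C, size=n, p=pc)
    mu = rng.choice(np.linspace(-6, 6, 25), size=(R, C))
    X = mu[row_grp][:, col_grp] + sigma * rng.standard_normal((p, n))
    return X, row_grp, col_grp

X, rg, cg = checkerboard_data(p=50, n=40, R=4, C=8, sigma=1.5)
assert X.shape == (50, 40)
```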
We computed the RI, ARI, and VI between the true biclusters and those obtained by the COBRA variants, DCT, and spBC. \Tab{gaussian_checkerboard} reports the average RI, ARI, and VI over 50 replicates as well as the average number of recovered biclusters $\hat{N}_b$ and run times; $N_b$ denotes the true number of biclusters. In the high SNR scenario, all methods do well across all measures. In the low SNR scenario, the COBRA variants often perform nearly as well or better than the other two methods. While DCT is significantly faster than all other methods, it also performs the worst at recovering the true biclusters in the low SNR scenario. The run times for COBRA indicate that it is computationally competitive compared to alternative biclustering solutions.
\begin{table}[th]
\begin{tabular}{ l c c c c c c c }
& $N_b$ & $\sigma$ & COBRA & COBRA (A) & COBRA (T) & DCT & spBC \\ \hline
RI & 16 & 1.5 & 0.993 & {\bf 0.994} & 0.993 & 0.952 & 0.969 \\
& 32 & 1.5 & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} & 0.993 & 0.997 \\
& 64 & 1.5 & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} \\
& 16 & 3.0 & 0.971 & {\bf 0.982} & 0.959 & 0.944 & 0.966 \\
& 32 & 3.0 & {\bf 0.997} & {\bf 0.997} & 0.996 & 0.990 & {\bf 0.997} \\
& 64 & 3.0 & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} & {\bf 0.999} \\ \hline
ARI & 16 & 1.5 & {\bf 0.952} & 0.924 & 0.950 & 0.713 & 0.804 \\
& 32 & 1.5 & 0.995 & 0.978 & {\bf 0.996} & 0.916 & 0.965 \\
& 64 & 1.5 & {\bf 0.999} & 0.996 & {\bf 0.999} & 0.981 & 0.995 \\
& 16 & 3.0 & 0.798 & {\bf 0.909} & 0.741 & 0.449 & 0.784 \\
& 32 & 3.0 & 0.958 & {\bf 0.982} & 0.945 & 0.844 & 0.958 \\
& 64 & 3.0 & 0.992 & 0.992 & 0.985 & 0.979 & {\bf 0.993} \\ \hline
VI & 16 & 1.5 & {\bf 0.117} & 0.132 & {\bf 0.117} & 0.713 & 0.180 \\
& 32 & 1.5 & 0.013 & 0.032 & {\bf 0.009} & 0.195 & 0.150 \\
& 64 & 1.5 & {\bf 0.001} & 0.022 & {\bf 0.001} & 0.043 & 0.271 \\
& 16 & 3.0 & 0.627 & 0.446 & 0.730 & 2.245 & {\bf 0.260} \\
& 32 & 3.0 & {\bf 0.097} & 0.125 & 0.127 & 0.516 & 0.166 \\
& 64 & 3.0 & {\bf 0.020} & 0.046 & 0.034 & 0.061 & 0.181 \\ \hline
$\hat{N}_{b}$ & 16 & 1.5 & 18.8 & 25.3 & {\bf 16.8} & 10.7 & 14.7 \\
& 32 & 1.5 & 32.9 & 38.1 & {\bf 32.1} & 28.8 & 31.2 \\
& 64 & 1.5 & {\bf 64.3} & 69.7 & {\bf 64.3} & 62.5 & 60.3 \\
& 16 & 3.0 & 43.7 & 46.3 & 30.3 & 54.1 & {\bf 13.9} \\
& 32 & 3.0 & {\bf 30.4} & 44.2 & 29.9 & 44.7 & 30.1 \\
& 64 & 3.0 & 65.1 & 77.1 & {\bf 63.2} & 65.9 & 61.0 \\ \hline
time (sec) & 16 & 1.5 & 26.83 & 50.36 & 27.30 & {\bf 0.21} & 558.27 \\
& 32 & 1.5 & 30.07 & 57.49 & 30.49 & {\bf 0.21} & 401.96 \\
& 64 & 1.5 & 29.93 & 58.54 & 30.35 & {\bf 0.21} & 288.91 \\
& 16 & 3.0 & 35.55 & 67.56 & 36.02 & {\bf 0.22} & 564.85 \\
& 32 & 3.0 & 33.99 & 66.06 & 34.44 & {\bf 0.21} & 432.71 \\
& 64 & 3.0 & 33.09 & 65.10 & 33.50 & {\bf 0.20} & 284.24 \\ \hline \\
\end{tabular}
\caption{\label{tab:gaussian_checkerboard}
Checkerboard mean structure with iid $N(0,\sigma^2)$ noise: low-noise ($\sigma = 1.5$) and high-noise ($\sigma = 3.0$). COBRA (A) is the adaptive COBRA and COBRA (T) is the thresholded COBRA. Details on these two variants are in Web Appendix G.}
\end{table}
In closing our discussion on simulations, we reiterate that COBRA is designed to recover checkerboard patterns. While checkerboard patterns feature prominently in a range of applications, we also acknowledge that they are not universal. Nonetheless, by examining both the estimated biclusters and bicluster means, COBRA can potentially identify the correct biclusters even when the checkerboard assumption is violated. We discuss how COBRA can accomplish this in more detail with a case study in Web Appendix F.
\section{Application to Genomics}
\label{sec:stability}
To illustrate COBRA in action on a real example, we revisit the lung cancer data studied by \cite{LeeSheHua2010}. We have selected the 500 genes with the greatest variance from the original collection of 12,625 genes.
Subjects belong to one of four subgroups; they are either normal subjects (Normal) or have been diagnosed with one of three types of cancers: pulmonary carcinoid tumors (Carcinoid), colon metastases (Colon), and small cell carcinoma (Small Cell).
We first illustrate how the solution $\M{U}_\gamma$ evolves as $\gamma$ varies. \Fig{lung_cba_path} shows snapshots of the COBRA solution path of this data set as the parameter $\gamma$ increases. The path captures the whole range of behavior between under-smoothed estimates of the mean structure (small $\gamma$), where each cell is assigned its own bicluster, and over-smoothed estimates (large $\gamma$), where all cells belong to a single bicluster. In between these extremes, we see rows and columns ``fusing'' together as $\gamma$ increases. Thus we have visual confirmation that minimizing \Eqn{biclust_objective_function} over a range of $\gamma$ yields a convex formulation of the clustered dendrogram.
While generating the entire solution path enables us to visualize the hierarchical relationships between biclusterings for different $\gamma$, we may ultimately require a hard biclustering assignment. By applying the validation procedure described in \Sec{tuning_parameter}, we arrive at the smoothed mean estimate shown previously in \Fig{lung500_cba}.
We next conduct a simulation experiment based on the lung cancer data to test the stability and reproducibility of biclustering methods, critical qualities for real scientific analysis. To expedite computation in these experiments, we restrict our attention to the 150 genes with the highest variance. We first apply the biclustering methods on the original data to obtain baseline biclusterings. We then add iid $N(0,\sigma^2)$ noise, where $\sigma = 0.5, 1.0, 1.5$, to create a perturbed data set on which to apply the same set of methods. We compute the RI, ARI, and VI between the baseline clustering and the one obtained on the perturbed data. \Tab{stability} shows the average RI, ARI, and VI of 50 replicates as well as run times. For all values of $\sigma$, we see that the two COBRA variants tend to produce the most stable and reproducible results. The fact that the ARI scores are poor for plain COBRA while the RI and VI scores are good indicates that COBRA tends to shrink the same sets of rows and columns together even if it fails to fuse them together consistently. Again the run time results indicate that COBRA is computationally competitive. For completeness, results from an identical stability study for methods that do not assume a checkerboard pattern can be found in Web Appendix H.
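As a concrete illustration of the agreement measures used above, the Rand index between two hard clusterings can be computed directly from pairwise co-membership. The sketch below is an illustrative stand-alone implementation, not the code used in our experiments: over all pairs of items, it counts how often the two partitions agree on whether the pair shares a cluster.

```python
from itertools import combinations

def rand_index(a, b):
    """Rand index: fraction of item pairs on which two clusterings agree,
    i.e., both place the pair in the same cluster or both separate it."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition up to relabeling
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # only 2 of the 6 pairs agree
```

The adjusted Rand index corrects this raw agreement for chance, which is why the two measures can rank methods differently, as seen in the tables.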
\begin{figure}
\centering
\subfloat[$\gamma = 0$]{\label{fig:lung0}%
\includegraphics[width=1.7in]{lung_gamma=0_th}} \hspace{-1cm}
\subfloat[$\gamma = 10^{1.45}$]{\label{fig:lung5}%
\includegraphics[width=1.7in]{lung_log_gamma=145_th}} \hspace{-1cm}
\subfloat[$\gamma = 10^{1.79}$]{\label{fig:lung7}%
\includegraphics[width=1.7in]{lung_log_gamma=179_th}} \hspace{-1cm}
\subfloat[$\gamma = 10^{2.01}$]{\label{fig:lung9}%
\includegraphics[width=1.7in]{lung_log_gamma=201_th}} \\ \vspace{-0.5cm}
\subfloat[$\gamma = 10^{2.24}$]{\label{fig:lung12}%
\includegraphics[width=1.7in]{lung_log_gamma=224_th}} \hspace{-1cm}
\subfloat[$\gamma = 10^{2.35}$]{\label{fig:lung17}%
\includegraphics[width=1.7in]{lung_log_gamma=235_th}} \hspace{-1cm}
\subfloat[$\gamma = 10^{3.03}$]{\label{fig:lung18}%
\includegraphics[width=1.7in]{lung_log_gamma=303_th}} \hspace{-1cm}
\subfloat[$\gamma = 10^{3.14}$]{\label{fig:lung20}%
\includegraphics[width=1.7in]{lung_log_gamma=314_th}} \\
\caption{Snapshots of the COBRA solution path of the lung cancer data set as the parameter $\gamma$ increases. The path captures the whole range of behavior between under-smoothed estimates of the mean structure (small $\gamma$), where each cell is assigned its own bicluster, and over-smoothed estimates (large $\gamma$), where all cells belong to a single bicluster.}
\label{fig:lung_cba_path}
\end{figure}
\begin{table}[th]
\begin{tabular}{ l c c c c c c }
& $\sigma$ & COBRA & COBRA (A) & COBRA (T) & DCT & spBC \\ \hline
RI & 0.5 & 0.984 & {\bf 0.992} & 0.959 & 0.979 & 0.974 \\
& 1.0 & 0.981 & {\bf 0.990} & 0.944 & 0.974 & 0.965 \\
& 1.5 & 0.973 & {\bf 0.989} & 0.896 & 0.973 & 0.936 \\ \hline
ARI & 0.5 & 0.350 & 0.788 & {\bf 0.813} & 0.530 & 0.642 \\
& 1.0 & 0.233 & 0.686 & {\bf 0.766} & 0.439 & 0.544 \\
& 1.5 & 0.201 & {\bf 0.667} & 0.644 & 0.340 & 0.397 \\ \hline
VI & 0.5 & 1.924 & 0.882 & {\bf 0.776} & 2.120 & 1.568 \\
& 1.0 & 2.380 & 1.276 & {\bf 0.962} & 2.769 & 2.174 \\
& 1.5 & 2.721 & {\bf 1.312} & 1.320 & 3.505 & 2.915 \\ \hline
time (sec) & 0.5 & 15.44 & 23.46 & 15.65 & {\bf 0.07} & 151.59 \\
& 1.0 & 25.21 & 34.51 & 25.53 & {\bf 0.11} & 197.00 \\
& 1.5 & 18.18 & 26.43 & 18.50 & {\bf 0.12} & 207.88 \\ \hline \\
\end{tabular}
\caption{\label{tab:stability}
Stability and reproducibility of biclusterings in lung cancer microarray data. COBRA variants, the clustered dendrogram with dynamic tree cutting, and sparse Biclustering are applied to the lung cancer data to obtain baseline biclusterings. We then perturb the data by adding iid\@ $N(0,\sigma^2)$ noise where $\sigma = 0.5$ (Small Pert.), 1.0 (Medium Pert.), 1.5 (Large Pert.).}
\end{table}
\section{Discussion}
\label{sec:discussion}
Our proposed method for biclustering, COBRA, can be considered a principled reformulation of the clustered dendrogram. Unlike the clustered dendrogram, COBRA returns a unique global minimizer of a goodness-of-fit criterion, but like the clustered dendrogram, COBRA is simple to interpret. COBRA also sports two key improvements over existing biclustering methods. First, it is more stable: COBRA biclustering assignments on perturbations of the data agree noticeably more frequently than those of existing biclustering algorithms. Second, it admits an effective and efficient model selection procedure for selecting the number of biclusters that reduces the problem to solving a sequence of convex biclustering problems. The upshot of these two qualities is that COBRA produces results that are both simple to interpret and reproducible.
The simplicity of our means model is also its greatest weakness, since we consider only checkerboard patterns, namely we
assign each observation to exactly one bicluster and do not consider overlapping biclusters \citep{CheChu2000,LazOwe2002,ShaWeiNob2009}. Nonetheless, while models that allow for overlapping biclusters might be more flexible, they are also harder to interpret.
While our simulation studies demonstrated the effectiveness of COBRA, there is room for improvement. We highlight an intriguing suggestion made during the review of this article. In many real-world applications there is no ``true'' fixed number of biclusters. Instead, the underlying latent structure may be a continuum of biclusters at different scales of row and column aggregation. Indeed, COBRA has the potential to estimate a multiscale model of the data. When the weights are uniform, all columns (rows) are averaged together. When the weights are positive only among nearest neighbors, only nearest neighboring columns (rows) are averaged together. Thus, by tuning the weights, we can obtain smoothed estimates of the data at different scales of resolution.
The ability to smooth estimates at different scales suggests a connection to computational harmonic analysis. Indeed, \cite{CoiGav2011} explore the biclustering problem through a wavelet representation of the data matrix. They also seek a representation that is smooth with respect to partitions of the row and column graphs that specify the similarity among the observations and features. A checkerboard mean structure at different scales can be obtained via operations in the wavelet domain, namely by thresholding wavelet coefficients corresponding to different scales. We are currently exploring how to adapt \cite{CoiGav2011}'s strategy to solve a sequence of COBRA problems at different scales in order to recover a continuum of biclusters.
An R package, called cvxbiclustr, implementing COBRA is available on CRAN.
\section*{Web Appendix A. Column and Row Weights}
\label{sec:weights}
\setcounter{equation}{3}
Recall that our goal is to minimize the following convex criterion
\begin{eqnarray}
\label{eq:biclust_objective_function}
F_{\gamma}(\M{U}) & = & \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + \gamma \underbrace{\left [\Omega_{\M{W}}(\M{U}) + \Omega_{\Mtilde{W}}(\M{U}\Tra) \right ]}_{J(\M{U})},
\end{eqnarray}
where $\Omega_{\M{W}}(\M{U}) = \sum_{i<j}w_{ij} \lVert \M{U}_{\cdot i}-\M{U}_{\cdot j} \rVert_2$, and $\M{U}_{\cdot i}$ ($\M{U}_{i \cdot}$) denotes the $i$th column (row) of the matrix $\M{U}$. In this work, we use the sparse Gaussian kernel weights proposed in \cite{ChiLan2015} for the weights $\M{W}$ and $\Mtilde{W}$ that define
the terms $\Omega_{\M{W}}(\M{U})$ and $\Omega_{\Mtilde{W}}(\M{U}\Tra)$.
We construct the weights in two steps. We describe these steps for computing the column weights; the row weights are computed analogously. We start by computing pre-weights between the $i$th and $j$th columns as $\hat{w}_{ij} = \iota^k_{\{i,j\}} \exp(-\phi \lVert \M{x}_{\cdot i} - \M{x}_{\cdot j} \rVert_2^2)$, the product of two factors. The first factor $\iota^k_{\{i,j\}}$ is 1 if $j$ is among $i$'s $k$-nearest-neighbors or vice versa and 0 otherwise; it controls the sparsity of the weights. The second factor is a Gaussian kernel that puts greater pressure on similar columns to fuse and less pressure on dissimilar columns to fuse. The nonnegative constant $\phi$ controls the rate at which the pressure to fuse is applied as a function of the distance between columns; the value $\phi = 0$ corresponds to uniform weights. The pre-weights $\hat{w}_{ij}$ are then normalized to sum to $1/\sqrt{p}$.
For all experiments in the paper, we set $\phi = 0.5$ and set $k = 10$ for both row and column weights.
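The two-step construction above can be sketched as follows. This is a minimal illustration rather than the cvxbiclustr implementation, and it assumes the normalization is applied to the sum over distinct column pairs.

```python
import numpy as np

def column_weights(X, k=10, phi=0.5):
    """Sparse Gaussian kernel weights between the p columns of the n-by-p matrix X."""
    p = X.shape[1]
    # Squared Euclidean distances between all pairs of columns.
    D2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
    # k-nearest-neighbor indicator: 1 if j is among i's k nearest, or vice versa.
    order = np.argsort(D2, axis=1)
    knn = np.zeros((p, p), dtype=bool)
    for i in range(p):
        knn[i, order[i, 1:k + 1]] = True  # skip self at position 0
    knn = knn | knn.T
    # Pre-weights: Gaussian kernel masked by the k-NN indicator.
    W = np.where(knn, np.exp(-phi * D2), 0.0)
    # Normalize so the distinct-pair weights sum to 1/sqrt(p).
    total = W[np.triu_indices(p, 1)].sum()
    return W / (total * np.sqrt(p))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
W = column_weights(X, k=3)
print(np.isclose(W[np.triu_indices(8, 1)].sum(), 1 / np.sqrt(8)))  # True
```

Row weights follow by applying the same function to the transposed data matrix with the normalization constant $1/\sqrt{n}$.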
We briefly discuss the rationale behind our weight choice here and refer readers to \cite{ChiLan2015} for a more detailed exposition.
\cite{ChiLan2015} give several examples showing that restricting positive weights to nearest neighbors enhances both computational efficiency and clustering quality. In their examples they showed that if dense Gaussian kernel weights were used, cluster centroids shrank towards each other
as the tuning parameter $\gamma$ increased, but no fusions occurred along the path save a single simultaneous fusion of all cluster centroids for a sufficiently large $\gamma$. Thus, while the two factors defining the weights act similarly, sensible fusions along the solution path could be achieved only by using them together. This is best illustrated in the half-moons example in \cite{ChiLan2015}.
\section*{Web Appendix B. Proofs of Solution Properties}
In this appendix, we give proofs of propositions in Section 3 of our paper.
\setcounter{section}{3}
\begin{proposition}[Existence and Uniqueness]
\label{prop:existence_uniqueness} The function $F_\gamma(\M{U})$ defined in (1) has a unique global minimizer.
\end{proposition}
We first recall a few definitions and concepts useful in optimization \citep{Lan2013}. A function is {\em coercive} if all its sublevel sets are compact. A function $f$ is {\em convex} if $f(\alpha\V{x} + (1-\alpha)\V{y}) \leq \alpha f(\V{x}) + (1-\alpha)f(\V{y})$
for all $\alpha \in (0,1)$ and $\V{x}, \V{y}$ in its domain. A function $f$ is {\em strictly convex} if the inequality is strict whenever $\V{x} \neq \V{y}$.
\begin{proof}
The existence and uniqueness of a global minimizer $\M{U}^\star$ are immediate consequences of the coerciveness and strict convexity of $F_{\gamma}(\M{U})$.
$\square$
\end{proof}
\begin{proposition}[Continuity]
The solution $\M{U}^\star$ of (1) is jointly continuous in $(\M{X},\gamma, \M{W},\Mtilde{W})$.
\end{proposition}
\begin{proof}
Without loss of generality, we can absorb the regularization parameter $\gamma$ into the weights $\V{w} = (\mathop{\rm vec}\nolimits(\M{W})\Tra,\mathop{\rm vec}\nolimits(\Mtilde{W})\Tra)\Tra \in \mathbb{R}^{\frac{p(p-1)}{2} + \frac{n(n-1)}{2}}$. Thus, we can check to see if the solution $\M{U}^\star$ is continuous in the variable $\V{\zeta} = (\mathop{\rm vec}\nolimits(\M{X})\Tra,\V{w}\Tra)\Tra$. It is easy to verify that the following function is jointly continuous in $\M{U}$ and $\V{\zeta}$
\begin{eqnarray}
f(\M{U},\V{\zeta}) & = & \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + J_{\V{W}}(\M{U}),
\end{eqnarray}
where
\begin{eqnarray}
J_{\V{W}}(\M{U}) & = & \frac{1}{\sqrt{p}} \Omega_{\M{w}}(\M{U}) + \frac{1}{\sqrt{n}}\Omega_{\Mtilde{w}}(\M{U}\Tra)
\end{eqnarray}
is a convex function of $\M{U}$ that is continuous in $\V{W}$. Let
\begin{eqnarray}
\M{U}^\star(\V{\zeta}) & = & \underset{\M{U}}{\arg\min}\; f(\M{U},\V{\zeta}).
\end{eqnarray}
We proceed with a proof by contradiction. Suppose $\M{U}^\star(\V{\zeta})$ is not continuous at a point $\V{\zeta}$. Then there exists an $\epsilon > 0$ and a sequence $\{\Vn{\zeta}{m}\}$ converging to $\V{\zeta}$ such that $\lVert \Mn{U}{m} - \M{U}^\star(\V{\zeta}) \rVert_{\text{F}} \geq \epsilon$ for all $m$ where
\begin{eqnarray}
\Mn{U}{m} & = & \underset{\M{U}}{\arg\min}\; f(\M{U},\Vn{\zeta}{m}).
\end{eqnarray}
Note that since $f(\M{U},\V{\zeta})$ is strongly convex in $\M{U}$, the minimizers $\Mn{U}{m}$ and $\M{U}^\star(\V{\zeta})$ exist and are unique. Without loss of generality we can assume
$\lVert \Vn{\zeta}{m} - \V{\zeta} \rVert_{\text{F}} \leq 1$.
This fact will be used later in proving the boundedness of the sequence $\Mn{U}{m}$.
Fix an arbitrary point $\Mtilde{U}$. If $\Mn{U}{m}$ is a bounded sequence then we can pass to a convergent subsequence with limit $\Mbar{U}$. Note that $f(\Mn{U}{m}, \Vn{\zeta}{m}) \leq f(\Mtilde{U}, \Vn{\zeta}{m})$ for all $m$. Since $f$ is continuous in $(\M{U},\V{\zeta})$, taking limits gives us the inequality
\begin{eqnarray}
f(\Mbar{U}, \V{\zeta}) & \leq & f(\Mtilde{U}, \V{\zeta}).
\end{eqnarray}
Since $\Mtilde{U}$ was selected arbitrarily, it follows that $\Mbar{U} = \M{U}^\star(\V{\zeta})$, which is a contradiction. It only remains for us to show that the sequence $\Mn{U}{m}$ is bounded.
Consider the function
\begin{eqnarray}
g(\M{U}) & = & \underset{\Vtilde{\zeta} : \lVert \Vtilde{\zeta}- \V{\zeta} \rVert_{\text{F}} \leq 1}{\sup}\;
\frac{1}{2} \lVert \Mtilde{X} - \M{U} \rVert_{\text{F}}^2 + J_\Vtilde{W}(\M{U}).
\end{eqnarray}
Note that $g$ is convex, since it is the point-wise supremum of a collection of convex functions. Since $f(\M{U}, \Vn{\zeta}{m}) \leq g(\M{U})$ and $f$ is strongly convex in $\M{U}$, it follows that $g(\M{U})$ is also strongly convex and therefore has a unique global minimizer $\M{U}^*$ such that $g(\M{U}^*) < \infty$. It also follows that
\begin{eqnarray}
\label{eq:ineqA}
f(\Mn{U}{m},\Vn{\zeta}{m}) & \leq & f(\M{U}^*,\Vn{\zeta}{m}) \mathop{\:\:\,}\nolimits \leq \mathop{\:\:\,}\nolimits g(\M{U}^*)
\end{eqnarray}
for all $m$. By the reverse triangle inequality it follows that
\begin{eqnarray}
\label{eq:ineqB}
\frac{1}{2} \left (
\lVert \Mn{U}{m} \rVert_{\text{F}} - \lVert \Mn{X}{m} \rVert_{\text{F}}
\right )^2 \mathop{\:\:\,}\nolimits \leq \mathop{\:\:\,}\nolimits \frac{1}{2} \lVert \Mn{U}{m} - \Mn{X}{m} \rVert_{\text{F}}^2 \mathop{\:\:\,}\nolimits \leq \mathop{\:\:\,}\nolimits
f(\Mn{U}{m},\Vn{\zeta}{m}).
\end{eqnarray}
Combining the inequalities in \Eqn{ineqA} and \Eqn{ineqB}, we arrive at the conclusion that
\begin{eqnarray}
\frac{1}{2} \left (
\lVert \Mn{U}{m} \rVert_{\text{F}} - \lVert \Mn{X}{m} \rVert_{\text{F}}
\right )^2 \mathop{\:\:\,}\nolimits \leq \mathop{\:\:\,}\nolimits g(\M{U}^*),
\end{eqnarray}
for all $m$. Suppose the sequence $\Mn{U}{m}$ is unbounded, namely $\lVert \Mn{U}{m} \rVert_{\text{F}} \rightarrow \infty$.
Since $\Mn{X}{m}$ converges to $\M{X}$, the left hand side would then diverge, contradicting the bound above. Thus, the sequence $\Mn{U}{m}$ must be bounded.
$\square$
\end{proof}
\begin{proposition}[Zeroes of the fusion penalty]
Under Assumption 1, \\ $J(\M{U}) = 0$ if and only if $\M{U} = c\V{1}\V{1}\Tra$ for some $c \in \mathbb{R}$.
\end{proposition}
\begin{proof}
We first show that
\begin{eqnarray}
\Omega_{\M{W}}(\M{U}) & = & \sum_{i < j} \VE{w}{ij} \lVert \M{U}_{\cdot i} - \M{U}_{\cdot j} \rVert_{\text{F}}
\end{eqnarray}
is zero if and only if $\M{U}_{\cdot i} = \M{U}_{\cdot j}$ for all $i < j$, namely all the columns of $\M{U}$ are the same. Clearly if the columns of $\M{U}$ are the same, then $\Omega_{\M{W}}(\M{U})$ is zero. Conversely, suppose that $\Omega_{\M{W}}(\M{U})$ is zero. Then it must be that $\M{U}_{\cdot i} = \M{U}_{\cdot j}$ whenever $\VE{w}{ij} > 0$. Consider a pair $(i,j)$ such that $\VE{w}{ij} = 0$. By Assumption 1, there exists a path $i \rightarrow k \rightarrow \cdots \rightarrow l \rightarrow j$ along which the weights are positive. Let $w$ denote the smallest weight along this path, namely $w = \min \{\VE{w}{ik}, \ldots, \VE{w}{lj}\}$. By the triangle inequality
\begin{eqnarray}
\lVert \M{U}_{\cdot i} - \M{U}_{\cdot j} \rVert_{\text{F}} & \leq & \lVert \M{U}_{\cdot i} - \M{U}_{\cdot k} \rVert_{\text{F}}
+ \cdots + \lVert \M{U}_{\cdot l} - \M{U}_{\cdot j} \rVert_{\text{F}}.
\end{eqnarray}
We can then conclude that
\begin{eqnarray}
w\lVert \M{U}_{\cdot i} - \M{U}_{\cdot j} \rVert_{\text{F}} & \leq & \Omega_{\M{W}}(\M{U}) \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits 0.
\end{eqnarray}
It follows that $\M{U}_{\cdot i} = \M{U}_{\cdot j}$, since $w$ is positive. By a similar argument it follows that
\begin{eqnarray}
\Omega_{\Mtilde{W}}(\M{U}\Tra) & = & \sum_{i < j} \tilde{w}_{ij} \lVert \M{U}_{i \cdot} - \M{U}_{j\cdot} \rVert_{\text{F}}
\end{eqnarray}
is zero if and only if $\M{U}_{i \cdot} = \M{U}_{j \cdot}$ for all $i < j$, or in other words if the rows of $\M{U}$ are all the same. Thus, $J(\M{U}) = 0$ if and only if $\M{U}$ is a constant matrix. $\square$
\end{proof}
\label{sec:coalesce}
\begin{proposition}[Coalescence]
\label{prop:coalesce}
Under Assumption 1, $F_{\gamma}(\M{U})$ is minimized by the grand mean matrix $\Mbar{X}$ for $\gamma$ sufficiently large.
\end{proposition}
\begin{proof}
We will show that there is a $\gamma_{\max}$ such that for all $\gamma \geq \gamma_{\max}$, the grand mean matrix $\Mbar{X}$ is the unique global minimizer to the primal objective \Eqn{biclust_objective_function}. We will certify that $\Mbar{X}$ is the solution to the primal problem by showing that the optimal value of a dual problem, which lower bounds the primal, equals $F_\gamma(\Mbar{X})$.
Throughout the proof, we will work with the vectorization of matrices, namely the vector obtained by stacking the columns of a matrix on top of each other. We denote the vectorization of a matrix $\M{X}$ by its corresponding bold lower case, namely $\V{x} = \mathop{\rm vec}\nolimits(\M{X})$. Thus, we will construct a dual to the following representation of the primal problem,
\begin{eqnarray}
\label{eq:primal}
\underset{\V{u}}{\text{minimize}}\; F_\gamma(\V{u}) = \frac{1}{2} \lVert \V{x} - \V{u} \rVert_2^2 + \gamma J(\V{u}).
\end{eqnarray}
In order to rewrite the penalty $J$ in terms of the vector $\V{u}$, we use the identity $\mathop{\rm vec}\nolimits(\M{M}\M{N}\M{P}) = (\M{P}\Tra \Kron \M{M})\mathop{\rm vec}\nolimits(\M{N})$ where $\Kron$ denotes the Kronecker product between two matrices. Thus,
\begin{eqnarray}
J(\V{u}) & = & \sum_{i < j} \VE{w}{ij} \lVert \M{A}_{ij} \V{u} \rVert_2 + \sum_{i < j} \tilde{w}_{ij} \lVert \Mtilde{A}_{ij} \V{u} \rVert_2,
\end{eqnarray}
where
\begin{eqnarray}
\M{A}_{ij} & = & (\V{e}_i - \V{e}_j)\Tra \Kron \M{I}, \\
\Mtilde{A}_{ij} & = & \M{I} \Kron (\V{e}_i - \V{e}_j)\Tra,
\end{eqnarray}
and $\V{e}_i$ is the $i$th standard basis vector. To keep things notationally simpler, we have absorbed the normalizations by $\sqrt{p}$ and $\sqrt{n}$ into the weights $\VE{w}{ij}$ and $\tilde{w}_{ij}$.
We first introduce some notation in order to write the relevant dual problem to the primal problem (\ref{eq:primal}). Note that the column weights $\VE{w}{ij}$ can be identified with a column graph of $n$ nodes, where there is an edge between the $i$th and $j$th node if and only if $\VE{w}{ij} > 0$. The row weights $\tilde{w}_{ij}$ can also be identified with an analogous row graph of $p$ nodes.
Let $\mathcal{E}_c$ and $\mathcal{E}_r$ denote the sets of edges in the column and row graphs, and let $\lvert \mathcal{E}_c \rvert$ and $\lvert \mathcal{E}_r \rvert$ denote their respective cardinalities. The edge-incidence matrix of the column graph $\M{\Phi}_c \in \mathbb{R}^{\lvert \mathcal{E}_c \rvert \times n}$ encodes its connectivity and is defined as
\begin{eqnarray}
\ME{\phi}{c,li} = \begin{cases}
1 & \text{if node $i$ is the head of edge $l$,} \\
-1 & \text{if node $i$ is the tail of edge $l$,} \\
0 & \text{otherwise.}
\end{cases}
\end{eqnarray}
The row edge-incidence matrix $\M{\Phi}_r \in \mathbb{R}^{\lvert \mathcal{E}_r \rvert \times p}$ is defined similarly.
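The edge-incidence construction is mechanical, and two facts used later in the proof, that a connected graph's incidence matrix has rank $n-1$ and annihilates the all-ones vector, are easy to check numerically. The sketch below is illustrative, with an arbitrary edge orientation.

```python
import numpy as np

def incidence_matrix(edges, n):
    """Edge-incidence matrix: row l has +1 at the head and -1 at the tail of edge l."""
    Phi = np.zeros((len(edges), n))
    for l, (i, j) in enumerate(edges):
        Phi[l, i] = 1.0
        Phi[l, j] = -1.0
    return Phi

# Path graph on 4 nodes: connected, so the rank is n - 1 = 3,
# and the all-ones vector lies in the kernel.
Phi = incidence_matrix([(0, 1), (1, 2), (2, 3)], 4)
print(np.linalg.matrix_rank(Phi))        # 3
print(np.allclose(Phi @ np.ones(4), 0))  # True
```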
We begin deriving the dual problem by recalling that norms possess a variational representation in terms of their dual norms, namely
\begin{eqnarray}
\lVert \V{Y} \rVert & = & \underset{ \lVert \V{Z} \rVert_\dagger \leq 1 }{\max} \langle \V{Z}, \V{Y} \rangle,
\end{eqnarray}
where $\lVert \cdot \rVert_\dagger$ is the dual norm of $\lVert \cdot \rVert$. Using this fact and working through some tedious algebra, we can rewrite the penalty term in \Eqn{primal} compactly as
\begin{eqnarray}
\gamma J(\V{u}) & = & \underset{\V{v} \in C_\gamma}{\max}\; \left \langle \M{A}\Tra \V{v}, \V{u} \right \rangle.
\end{eqnarray}
The vector $\V{v}$ is the concatenation of several vectors, namely
\begin{eqnarray}
\V{v} = (\V{v}_1\Tra,\ldots, \V{v}_{\mathcal{E}_c}\Tra, \Vtilde{v}_1\Tra, \ldots, \Vtilde{v}_{\mathcal{E}_r}\Tra)\Tra,
\end{eqnarray}
where $\V{v}_l \in \mathbb{R}^{n}, \Vtilde{v}_l \in \mathbb{R}^p$. The matrix $\M{A}$ can be expressed in terms of the row and column edge-incidence matrices, namely
\begin{eqnarray}
\M{A} & = & \begin{pmatrix}
\M{\Phi}_c \Kron \M{I} \\
\M{I} \Kron \M{\Phi}_r \\
\end{pmatrix}.
\end{eqnarray}
Finally, the constraints on the vector $\V{v}$ are encoded in the set
$C_\gamma = \{ \V{v} : \lVert \V{v}_{l} \rVert_{2} \leq \VE{w}{l}\gamma$ and $\lVert \Vtilde{v}_{l} \rVert_2 \leq \tilde{w}_{l} \gamma\}$.
Thus, the primal problem \Eqn{primal} can be expressed as the following saddle point problem
\begin{eqnarray}
\underset{\V{v} \in C_\gamma}{\max}\;
\underset{\V{u}}{\min}\; \frac{1}{2} \lVert \V{x} - \V{u} \rVert_2^2 + \left \langle \M{A}\Tra \V{v}, \V{u} \right \rangle.
\end{eqnarray}
By performing the minimization with respect to $\V{u}$, we obtain a dual maximization problem that provides a lower bound on the primal objective
\begin{eqnarray}
\underset{\V{v} \in C_\gamma}{\max}\;
-\frac{1}{2} \lVert \M{A}\Tra\V{v} \rVert_2^2 + \langle \V{v}, \M{A}\V{x} \rangle.
\end{eqnarray}
For sufficiently large $\gamma$, the solution to the dual maximization problem coincides with the solution to the unconstrained maximization problem
\begin{eqnarray}
\underset{\V{v}}{\max}\;
-\frac{1}{2} \lVert \M{A}\Tra\V{v} \rVert_2^2 + \langle \V{v}, \M{A}\V{x} \rangle,
\end{eqnarray}
whose solution is $\V{v}^\star = \left (\M{A}\M{A}\Tra \right )^\dagger\M{A}\V{x}$. Plugging $\V{v}^\star$ into the dual objective gives an optimal value of
\begin{eqnarray}
\frac{1}{2} \lVert \M{A}\Tra\left (\M{A}\M{A}\Tra \right )^\dagger \M{A} \V{x} \rVert_2^2 ,
\end{eqnarray}
which we rewrite as
\begin{eqnarray}
\frac{1}{2} \lVert \V{x} - \left [\M{I} - \M{A}\Tra\left (\M{A}\M{A}\Tra \right )^\dagger \M{A} \right ]\V{x} \rVert_2^2.
\end{eqnarray}
Note that $\left [\M{I} - \M{A}\Tra\left (\M{A}\M{A}\Tra \right )^\dagger \M{A} \right ]$
is the projection onto the orthogonal complement of the column space of $\M{A}\Tra$, which is equivalent to the null space or kernel of $\M{A}$, denoted Ker$(\M{A})$. We will show shortly that Ker($\M{A}$) is the span of the all ones vector. Therefore, $\left [\M{I} - \M{A}\Tra\left (\M{A}\M{A}\Tra \right )^\dagger \M{A} \right ]\V{x} = \frac{1}{np}\langle \V{x}, \V{1} \rangle \V{1}$.
Before showing that Ker($\M{A}$) is the span of $\V{1}$, we note that the smallest $\gamma$ such that $\V{v}^\star \in C_\gamma$ is an upper bound on $\gamma_{\max}$.
We now argue that Ker($\M{A}$) is the span of $\V{1} \in \mathbb{R}^{np}$.
We rely on the following fact: If $\M{\Phi}$ is an incidence matrix of a connected graph with $n$ vertices, then the rank of $\M{\Phi}$ is $n-1$
(See Theorem 7.2 in Chapter 7 of \cite{Deo1974}). According to Assumption 1 in the paper, the column and row graphs are connected; it follows that $\M{\Phi}_c \in \{-1,0,1\}^{\lvert \mathcal{E}_c \rvert \times n}$ has rank $n-1$ and $\M{\Phi}_r \in \{-1,0,1\}^{\lvert \mathcal{E}_r \rvert \times p}$ has rank $p-1$. It follows then that Ker($\M{\Phi}_c$) and Ker($\M{\Phi}_r$) have dimension one. Furthermore, since each row of $\M{\Phi}_c$ and $\M{\Phi}_r$ has one $1$ and one $-1$, it follows that $\V{1} \in$ Ker($\M{\Phi}_c$) $\subset \mathbb{R}^n$, and likewise $\V{1} \in$ Ker($\M{\Phi}_r$) $\subset \mathbb{R}^p$. A vector $\V{z} \in $Ker($\M{A}$) if and only if $\V{z} \in$ Ker($\M{\Phi}_c \Kron \M{I}$) $\cap$ Ker($\M{I} \Kron \M{\Phi}_r$).
Recall that if the singular values of a matrix $\M{A}$ are $\sigma_{\M{A},i}$ and the singular values of a matrix $\M{B}$ are $\sigma_{\M{B},j}$, then the singular values of their Kronecker product $\M{A} \Kron \M{B}$ are $\sigma_{\M{A},i}\sigma_{\M{B},j}$. It follows then that the rank of $\M{A} \Kron \M{B}$ is the product of the ranks of $\M{A}$ and $\M{B}$.
The above rank property of Kronecker products of matrices implies that the dimension of Ker($\M{\Phi}_c \Kron \M{I})$ equals $p$ and the dimension of Ker($\M{I} \Kron \M{\Phi}_r)$ equals $n$. It is easy to see then that the linearly independent set of vectors $\{\V{1} \Kron \V{e}_1, \ldots,
\V{1} \Kron \V{e}_p\}$, where $\V{1} \in \mathbb{R}^n$ and $\V{e}_i \in \mathbb{R}^p$, forms a basis for Ker($\M{\Phi}_c \Kron \M{I}$). Likewise, the linearly independent set of vectors $\{ \V{e}_1 \Kron \V{1} , \ldots,
\V{e}_n \Kron \V{1}\}$, where $\V{1} \in \mathbb{R}^p$ and $\V{e}_i \in \mathbb{R}^n$, forms a basis for Ker($\M{I} \Kron \M{\Phi}_r$).
Take an element from Ker($\M{\Phi}_c \Kron \M{I}$), namely $\V{1} \Kron \V{a}$, where $\V{1} \in \mathbb{R}^n$ and $\V{a} \in \mathbb{R}^p$. We will show that in order for $\V{1} \Kron \V{a} \in$ Ker($\M{I} \Kron \M{\Phi}_r$), $\V{a}$ must be a multiple of $\V{1}$. Consider the relevant matrix-vector product
\begin{eqnarray}
(\M{I} \Kron \M{\Phi}_r) (\V{1} \Kron \V{a}) \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits (\M{I} \Kron \M{\Phi}_r) \mathop{\rm vec}\nolimits(\V{a}\V{1}\Tra) \mathop{\:\:\,}\nolimits = \mathop{\:\:\,}\nolimits \mathop{\rm vec}\nolimits(\M{\Phi}_r \V{a}\V{1}\Tra),
\end{eqnarray}
where we again used the fact that $\mathop{\rm vec}\nolimits(\M{M}\M{N}\M{P}) = (\M{P}\Tra \Kron \M{M})\mathop{\rm vec}\nolimits(\M{N})$ and the fact that $\mathop{\rm vec}\nolimits(\V{b}\V{c}\Tra) = \V{c}\Kron\V{b}$.
Note that $(\M{I} \Kron \M{\Phi}_r) (\V{1} \Kron \V{a}) = \V{0}$ if and only if $\M{\Phi}_r\V{a} = \V{0}$. But
the only way for $\M{\Phi}_r\V{a}$ to be zero is for $\V{a} = c\V{1}$ for some $c \in \mathbb{R}$. A similar argument shows that the only non-trivial vector in Ker($\M{I} \Kron \M{\Phi}_r$) that also belongs to Ker($\M{\Phi}_c \Kron \M{I}$) is $\tilde{c}\V{1}$ for $\tilde{c} \in \mathbb{R}$. Thus, we have shown that Ker($\M{A}$) is the span of $\V{1}$. $\square$
\end{proof}
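The conclusion of this proof, that $\M{I} - \M{A}\Tra(\M{A}\M{A}\Tra)^\dagger\M{A}$ projects onto the span of $\V{1}$ and hence maps $\V{x}$ to its grand mean, can be verified numerically. The sketch below uses small complete row and column graphs (an illustrative choice, made so that Assumption 1 holds) and builds $\M{A}$ from the two Kronecker blocks exactly as above.

```python
import numpy as np

def complete_incidence(n):
    """Edge-incidence matrix of the complete graph on n nodes (connected)."""
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    Phi = np.zeros((len(edges), n))
    for l, (i, j) in enumerate(edges):
        Phi[l, i], Phi[l, j] = 1.0, -1.0
    return Phi

n, p = 3, 4  # column graph on n nodes, row graph on p nodes
A = np.vstack([np.kron(complete_incidence(n), np.eye(p)),
               np.kron(np.eye(n), complete_incidence(p))])
# Projection onto Ker(A), which should be the span of the all-ones vector.
P = np.eye(n * p) - A.T @ np.linalg.pinv(A @ A.T) @ A
x = np.random.default_rng(1).normal(size=n * p)
print(np.allclose(P @ x, x.mean() * np.ones(n * p)))  # True
```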
\section*{Web Appendix C. DLPA, COBRA, and a Proof of Proposition 4.1}
\label{sec:DLPA}
We give expanded technical treatment of DLPA and COBRA as well as results described in Section 4. We begin by reviewing some basic concepts of convex analysis \citep{BauCom2008,ComPes2011}. Recall that the domain of a convex function $f$ is the set of $\V{x}$ such that $f(\V{x}) < \infty$. For $\sigma > 0$ the mapping
\begin{eqnarray}
\mathop{\rm prox}\nolimits_{\sigma f}(\V{u}) & = & \underset{\V{v}}{\arg\min}\;\left[\sigma f(\V{v})+ \frac{1}{2} \lVert \V{u} - \V{v} \rVert_2^2 \right]
\end{eqnarray}
is called the proximal map of the function $f(\V{v})$. The proximal map exists and is unique whenever the function $f(\V{v})$ is convex and lower-semicontinuous.
Norms and semi-norms satisfy these conditions,
and for many norms of interest the proximal map can be evaluated by either an explicit formula or an efficient algorithm. For example, the proximal map for the $\ell_1$-norm is the ubiquitous element-wise soft-thresholding operator, namely the $l$th element of the proximal mapping is given by
\begin{eqnarray}
\left [\mathop{\rm prox}\nolimits_{\sigma \lVert \cdot \rVert_1}(\V{u}) \right]_l & = & \left [ 1 - \frac{\sigma}{| \VE{u}{l} |} \right ]_+ \VE{u}{l}.
\end{eqnarray}
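In code, soft-thresholding is a one-liner. The sketch below uses the equivalent form $\mathop{\rm sign}(u_l)\max(|u_l| - \sigma, 0)$, which avoids dividing by zero when an entry of $\V{u}$ is zero.

```python
import numpy as np

def prox_l1(u, sigma):
    """Proximal map of sigma*||.||_1: element-wise soft-thresholding.
    Equivalent to [1 - sigma/|u_l|]_+ * u_l, but safe at u_l = 0."""
    return np.sign(u) * np.maximum(np.abs(u) - sigma, 0)

print(prox_l1(np.array([3.0, -0.5, 1.0]), 1.0))  # each entry moves toward 0 by sigma
```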
Closer inspection of \Eqn{biclust_objective_function} shows that we seek the proximal mapping of the sum of two lower-semicontinuous, convex functions, namely
\begin{eqnarray}
\mathop{\rm prox}\nolimits_{f + g}(\M{X}) & = & \underset{\M{U}}{\arg\min}\; \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + f(\M{U}) + g(\M{U}),
\end{eqnarray}
where $f(\M{U}) = \gamma\Omega_{\M{W}}(\M{U})$ and $g(\M{U}) = \gamma\Omega_{\Mtilde{W}}(\M{U}\Tra)$.
This problem is reminiscent of the classic problem of finding the projection of a point onto the intersection of two nonempty and closed convex sets. Indeed, it reduces to exactly that problem when the functions $f$ and $g$ are respectively the indicator functions of two nonempty and closed convex sets $A$ and $B$. Then we can pose the problem of finding the projection of $\M{X}$ onto the set $A \cap B$ as the optimization problem
\begin{eqnarray}
\underset{\M{U}}{\min}\; \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + \delta_A(\M{U}) + \delta_B(\M{U}),
\end{eqnarray}
where $\delta_A$ is the set indicator function, which is 0 for all $\M{U} \in A$ and $\infty$ for all $\M{U} \not\in A$.
von Neumann's alternating projection method provides an iterative solution when the two sets $A$ and $B$ are vector subspaces \citep{Deu1992}. His strategy was subsequently generalized by Dykstra to closed convex cones in Euclidean spaces \citep{Dyk1983} and generalized further by Boyle and Dykstra to the intersection of convex sets in Hilbert spaces \citep{BoyDyk1986}. Finally, \cite{BauCom2008} derived a Dykstra-like proximal algorithm that iteratively solves for the desired proximal mapping of the sum of two convex functions \citep{BauCom2008,ComPes2011}, which we describe next.
Let $f$ and $g$ be lower-semicontinuous convex functions on $\mathbb{R}^n$, with ${\bf dom}\, f \cap {\bf dom}\, g \not = \emptyset$, and let $\V{x} \in \mathbb{R}^n$. Bauschke and Combettes' algorithm, shown in \Alg{DLPA},
iteratively solves the following problem
\begin{eqnarray}
\label{eq:dlpa}
\underset{\V{u} \in \mathbb{R}^n}{\min}\; \frac{1}{2} \lVert \V{u} - \V{x} \rVert_2^2 + f(\V{u}) + g(\V{u})
\end{eqnarray}
and is guaranteed to converge.
\begin{theorem}[Proposition 5.3 in \cite{ComPes2011}]
\label{thm:dlpa}
\Alg{DLPA} converges to the solution of \Eqn{dlpa}.
\end{theorem}
\setcounter{algorithm}{2}
\begin{algorithm}[t]
Set $\V{u}_0 = \V{x}, \V{p}_0 = \V{0}, \V{q}_0 = \V{0}$ and iterate for $m=0, 1, \ldots$
\begin{algorithmic}[0]
\caption{Dykstra-Like Proximal Algorithm (DLPA)}
\label{alg:DLPA}
\Repeat
\State $\V{y}_{m} = \mathop{\rm prox}\nolimits_g(\V{u}_m + \V{p}_m)$
\State $\V{p}_{m+1} = \V{u}_m + \V{p}_m - \V{y}_m$
\State $\V{u}_{m+1} = \mathop{\rm prox}\nolimits_f(\V{y}_m + \V{q}_m)$
\State $\V{q}_{m+1} = \V{y}_m + \V{q}_m - \V{u}_{m+1}$
\Until{convergence}
\end{algorithmic}
\end{algorithm}
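A direct transcription of \Alg{DLPA} is short. The demo below uses an illustrative choice of $f$ and $g$, not the COBRA penalties: $f = \sigma\lVert\cdot\rVert_1$ and $g$ the indicator of the nonnegative orthant, for which the minimizer of \Eqn{dlpa} has the coordinate-wise closed form $\max(x_l - \sigma, 0)$, so convergence is easy to confirm.

```python
import numpy as np

def dlpa(x, prox_f, prox_g, iters=200):
    """Dykstra-like proximal algorithm for min_u 0.5*||u - x||^2 + f(u) + g(u)."""
    u, p, q = x.copy(), np.zeros_like(x), np.zeros_like(x)
    for _ in range(iters):
        y = prox_g(u + p)   # prox step for g with correction p
        p = u + p - y
        u = prox_f(y + q)   # prox step for f with correction q
        q = y + q - u
    return u

sigma = 1.0
prox_f = lambda v: np.sign(v) * np.maximum(np.abs(v) - sigma, 0)  # prox of sigma*||.||_1
prox_g = lambda v: np.maximum(v, 0)                               # projection onto v >= 0
x = np.array([3.0, -2.0, 0.5])
u = dlpa(x, prox_f, prox_g)
print(np.allclose(u, np.maximum(x - sigma, 0)))  # True: matches the closed form
```

Swapping in the proximal maps of $\gamma\Omega_{\M{W}}$ and $\gamma\Omega_{\Mtilde{W}}$ gives the COBRA iteration.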
Setting $f(\M{U}) = \gamma\Omega_{\M{W}}(\M{U})$ and $g(\M{U}) = \gamma\Omega_{\Mtilde{W}}(\M{U}\Tra)$ in \Alg{DLPA} yields COBRA, outlined in Algorithm 1 in the paper. Consequently, the convergence of COBRA (Proposition 4.1) follows immediately from \Thm{dlpa}, since $\Omega_{\M{W}}(\M{U})$ and $\Omega_{\Mtilde{W}}(\M{U}\Tra)$ are both continuous convex functions over all of $\mathbb{R}^{np}$.
\section*{Web Appendix D. Majorization-Minimization (MM) algorithms and a Proof of Proposition 5.1}
\label{sec:mm_proof}
Recall that in the model selection problem we seek the minimizer of the following validation objective
\begin{eqnarray}
\label{eq:validation}
\tilde{F}_\gamma(\M{U}) & = &
\frac{1}{2} \lVert \mathcal{P}_{\Theta^c}(\M{X}) - \mathcal{P}_{\Theta^c}(\M{U}) \rVert_{\text{F}}^2 + \gamma J(\M{U}).
\end{eqnarray}
In this appendix, we elaborate on how to extend COBRA via a Majorization-Minimization (MM) algorithm to handle missing data in order to solve \Eqn{validation}.
We begin with a brief review of MM algorithms. The basic strategy behind an MM algorithm is to convert a hard optimization problem into a sequence of simpler ones. The MM principle requires majorizing the objective function $f(\V{u})$ by a surrogate function $g(\V{u} \mid \Vtilde{u})$ anchored at the current point $\Vtilde{u}$. Majorization is a combination of the tangency condition $g(\Vtilde{u} \mid \Vtilde{u}) = f(\Vtilde{u})$ and the domination condition $g(\V{u} \mid \Vtilde{u}) \geq f(\V{u})$ for all $\V{u} \in \mathbb{R}^n$. The associated MM algorithm is defined by the iterates $\Vn{u}{m+1} := \underset{\V{u}}{\arg \min}\; g(\V{u} \mid \Vn{u}{m})$. It is straightforward to verify that the MM iterates generate a descent algorithm driving the objective function downhill, namely that $f(\Vn{u}{m+1}) \leq f(\Vn{u}{m})$ for all $m$.
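As a toy illustration of the MM principle (our example, not drawn from the paper), consider minimizing $\frac{1}{2}(u - x)^2 + \gamma \lvert u \rvert$ using the standard quadratic majorization $\lvert u \rvert \leq u^2 / (2\lvert \tilde{u} \rvert) + \lvert \tilde{u} \rvert / 2$, which satisfies both the tangency and domination conditions:

```python
# Toy MM iteration: minimize h(u) = 0.5*(u - x)**2 + gamma*abs(u)
# via the surrogate g(u | u_m) = 0.5*(u - x)**2
#                                + gamma*(u**2/(2*abs(u_m)) + abs(u_m)/2).
# Setting the derivative of the surrogate to zero gives the update below.
x, gamma = 2.0, 0.5
u = x  # nonzero starting point, so the majorization is well defined
for _ in range(100):
    u = x / (1.0 + gamma / abs(u))  # minimizer of g(u | u_m)
# The true minimizer is the soft-threshold sign(x)*max(abs(x)-gamma, 0) = 1.5.
```

Each update drives the objective downhill, and the iterates converge linearly to the soft-thresholded value.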
Returning to our original problem, we observe that the following quadratic function of $\M{U}$ is always nonnegative
\begin{eqnarray}
\label{eq:quad}
\frac{1}{2} \sum_{(i,j) \in \Theta} (\ME{u}{ij} - \tilde{u}_{ij})^2 & \geq & 0,
\end{eqnarray}
and that the inequality becomes equality when $\M{U} = \Mtilde{U}$. Adding the quadratic function in \Eqn{quad}
to $\tilde{F}_\gamma(\M{U})$ gives us the following function
\begin{eqnarray}
g(\M{U} \mid \tilde{\M{U}}) & = & \tilde{F}_\gamma(\M{U}) + \frac{1}{2} \sum_{(i,j) \in \Theta} (\ME{u}{ij} - \tilde{u}_{ij})^2 \\
& = & \frac{1}{2} \left [ \sum_{(i,j) \in \Theta^c} (\ME{x}{ij} - \ME{u}{ij})^2 + \sum_{(i,j) \in \Theta} (\ME{u}{ij} - \tilde{u}_{ij})^2 \right ] + \gamma J(\M{U})\\
& = & \frac{1}{2} \lVert \M{M} - \M{U} \rVert_{\text{F}}^2 + \gamma J(\M{U}),
\end{eqnarray}
where $\M{M} = \mathcal{P}_{\Theta^c}(\M{X}) + \mathcal{P}_{\Theta}(\tilde{\M{U}})$.
The function $g(\M{U} \mid \Mtilde{U})$ majorizes $\tilde{F}_\gamma(\M{U})$ at the point $\Mtilde{U}$, since
$g(\M{U} \mid \Mtilde{U}) \geq \tilde{F}_\gamma(\M{U})$ for all $\M{U}$ and $g(\Mtilde{U} \mid \Mtilde{U}) = \tilde{F}_\gamma(\Mtilde{U})$. Minimizing the majorization $g(\M{U} \mid \Mtilde{U})$ can be accomplished by invoking COBRA on the complete matrix $\M{M}$. Alternating between updating the majorization and applying COBRA to minimize the new majorization yields Algorithm~2 in the paper.
Having derived the majorization being minimized in Algorithm~2, we are almost ready to prove Proposition 5.1.
We need one more ingredient. The convergence theory of monotonically decreasing algorithms, like the MM algorithm, hinges on the properties of the map $\psi(\M{U})$ which returns the next iterate given the last iterate. For easy reference, we state a simple version of Meyer's monotone convergence theorem \citep{Meyer1976} which is the key ingredient in proving convergence in our setting.
\begin{theorem}\label{thm:MM_limit_points}
Let $f(\M{U})$ be a continuous function on a compact domain $S$ and
$\psi(\M{U})$ be a continuous map from $S$ into $S$ satisfying
$f(\psi(\M{U})) < f(\M{U})$ for all $\M{U} \in S$ with $\psi(\M{U}) \neq \M{U}$.
Then all limit points of the sequence of iterates $\Mn{U}{m+1} = \psi(\Mn{U}{m})$ are fixed points of $\psi(\M{U})$.
\end{theorem}
We now prove Proposition 5.1.
\begin{proof}
We first use the above theorem to establish that the iterates of the MM algorithm tend towards the fixed points of the corresponding map, $\psi(\Mtilde{U}) = \arg\min_{\M{U}} g(\M{U} \mid \Mtilde{U})$. Fix an arbitrary starting guess $\Mn{U}{0}$. Set $S = \{ \M{U} : g(\M{U} \mid \Mn{U}{0} ) \leq \tilde{F}_\gamma(\Mn{U}{0}) \}$. Since $g(\M{U} \mid \Mn{U}{0})$ is continuous and coercive in $\M{U}$, it follows that $S$ is compact. Since $g(\M{U} \mid \Mtilde{U})$ is strongly convex in $\M{U}$, it follows that if $\Mtilde{U}$ is not a fixed point then it is not the unique global minimizer of $g(\M{U} \mid \Mtilde{U})$, and therefore $\tilde{F}_\gamma(\psi(\Mtilde{U})) < \tilde{F}_\gamma(\Mtilde{U})$. By \Thm{MM_limit_points}, the limit points of the sequence $\Mn{U}{m}$ are fixed points of $\psi(\M{U})$.
We argue that the fixed points of $\psi(\M{U})$ are global minimizers of the validation objective \Eqn{validation}. Note that the mapping $\Mtilde{U} \mapsto \psi(\Mtilde{U})$ is characterized by the condition
\begin{eqnarray}
\psi(\Mtilde{U}) - \mathcal{P}_{\Theta^c}(\M{X}) - \mathcal{P}_{\Theta}(\Mtilde{U}) & \in & \gamma\partial J(\psi(\Mtilde{U})).
\end{eqnarray}
If $\Mtilde{U}$ is a fixed point of $\psi$ then $\Mtilde{U} = \psi(\Mtilde{U})$, and the above optimality condition becomes
\begin{eqnarray}
\mathcal{P}_{\Theta^c}(\Mtilde{U}) - \mathcal{P}_{\Theta^c}(\M{X}) & \in & \gamma\partial J(\Mtilde{U}).
\end{eqnarray}
But this implies that $\Mtilde{U}$ is a global minimizer of the validation objective \Eqn{validation}.
Putting everything together, we have that the limit points of the MM sequence $\Mn{U}{m}$ are global minimizers of the validation objective. $\square$
\end{proof}
Note that there might be infinitely many limit points. The set of limit points, however, must be contained in $S$ and is therefore bounded. It must also be convex and closed.
As a final remark, we point out that we obtain essentially the same MM algorithm, if we substitute the objective $\tilde{F}_\gamma(\M{U})$ with the objective
\begin{eqnarray}
\frac{1}{2} \lVert \mathcal{P}_{\Theta^c}(\M{X}) - \mathcal{P}_{\Theta^c}(\M{U}) \rVert_{\text{F}}^2 + \gamma \lVert \M{U} \rVert_*,
\end{eqnarray}
where $\lVert \M{U} \rVert_*$ denotes the nuclear norm of $\M{U}$. When we substitute $\lVert \M{U} \rVert_*$ for $J(\M{U})$, the resulting MM algorithm is the soft-impute algorithm of \cite{MazHasTib2010}.
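A minimal NumPy sketch of this soft-impute-style MM iteration (our own illustration; the tolerance and initialization are arbitrary choices, not the authors'): fill in the missing entries with the current fit, then soft-threshold the singular values of the completed matrix.

```python
import numpy as np

def svt(M, gamma):
    """Prox of gamma * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - gamma, 0.0)) @ Vt

def soft_impute(X, observed, gamma, max_iter=500, tol=1e-9):
    """MM iteration: `observed` is a boolean mask for Theta^c."""
    U_hat = np.zeros_like(X)
    for _ in range(max_iter):
        M = np.where(observed, X, U_hat)  # the majorization's complete matrix
        U_new = svt(M, gamma)
        if np.linalg.norm(U_new - U_hat) < tol:
            return U_new
        U_hat = U_new
    return U_hat

# Small demo on a noiseless rank-1 matrix with two entries held out.
X = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])
observed = np.ones_like(X, dtype=bool)
observed[0, 2] = observed[2, 0] = False
U_fit = soft_impute(X, observed, gamma=0.1)
```

At convergence the fit satisfies the fixed-point condition: applying one more fill-in-then-threshold step leaves it essentially unchanged.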
\section*{Web Appendix E. Measures of Clustering Similarity}
\subsection*{Rand Index}
\label{sec:rand_index}
Consider the problem of clustering $q$ objects. Let $\mathcal{A} = \{A_1, \ldots, A_m\}$ and $\mathcal{B} = \{B_1, \ldots, B_n\}$ denote two partitions of the index set $\{1, \ldots, q\}$. The Rand index \citep{Ran1971} quantifies
how similar the partitions $\mathcal{A}$ and $\mathcal{B}$ are to each other by tallying up how often pairs of objects are similarly assigned and dividing this quantity by the total number of pairs of objects. To compute the Rand Index, we define four events:
\begin{equation}
\begin{split}
C_{ij} & = \text{$\{ i,j \in A_k$ for some $k \}$} \\
D_{ij} & = \text{$\{i,j \in B_l$ for some $l \}$} \\
E_{ij} & = \text{$\{i \in A_k, j \in A_{k'}$ for $k \not = k'\}$} \\
F_{ij} & = \text{$\{i \in B_l, j \in B_{l'}$ for $l \not = l'\}$}.
\end{split}
\end{equation}
The intersection $C_{ij} \cap D_{ij}$ denotes the event that $\mathcal{A}$ and $\mathcal{B}$ have assigned the pair of objects $i$ and $j$ similarly, namely under both partitions, $i$ and $j$ belong to the same cluster.
The intersection $E_{ij} \cap F_{ij}$ denotes the event that $\mathcal{A}$ and $\mathcal{B}$ have assigned the pair of objects $i$ and $j$ similarly in an alternative sense, namely under both partitions, $i$ and $j$ are assigned to different clusters. The Rand index is given by the following ratio
\begin{eqnarray}
\frac{\sum_{i < j} I(C_{ij} \cap D_{ij}) + \sum_{i < j} I(E_{ij} \cap F_{ij})}{ {q \choose 2} },
\end{eqnarray}
where $I(Z)$ is 1 if event $Z$ occurs and 0 otherwise. By dividing by the total number of pairs, the Rand index takes on values between 0 (no agreement between $\mathcal{A}$ and $\mathcal{B}$) and 1 (perfect agreement between $\mathcal{A}$ and $\mathcal{B}$).
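The definition translates directly into code. A short Python sketch (our illustration), with each partition encoded as a vector of cluster labels:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand index between two partitions given as flat label vectors.
    A pair (i, j) counts as an agreement when both partitions place
    i and j together, or both place them apart."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = 0
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a == same_b:
            agree += 1
    return agree / len(pairs)
```

For example, comparing `[0, 0, 1, 1]` with `[0, 0, 0, 1]` yields 3 agreeing pairs out of 6, i.e.\@ a Rand index of 0.5.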
\subsection*{Adjusted Rand Index}
\label{sec:adjusted_rand_index}
\begin{table}[th]
\begin{tabular}{ c || c c c c || c }
& $\mathcal{B}_1$ & $\mathcal{B}_2$ & $\cdots$ & $\mathcal{B}_n$ & Row Marginal \\ \hline
$\mathcal{A}_1$ & $q_{11}$ & $q_{12}$ & $\cdots$ & $q_{1n}$ & $q_{1\cdot}$ \\
$\mathcal{A}_2$ & $q_{21}$ & $q_{22}$ & $\cdots$ & $q_{2n}$ & $q_{2\cdot}$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ & $\vdots$ \\
$\mathcal{A}_m$ & $q_{m1}$ & $q_{m2}$ & $\cdots$ & $q_{mn}$ & $q_{m\cdot}$ \\
Column Marginal & $q_{\cdot 1}$ & $q_{\cdot 2}$ & $\cdots$ & $q_{\cdot n}$ & $q$ \\ \hline \\
\end{tabular}
\caption{\label{tab:adjusted_rand}
Contingency table comparing two partitions $\mathcal{A} = \{A_1, \ldots, A_m\}$ and $\mathcal{B} = \{B_1, \ldots, B_n\}$ of the index set $\{1, \ldots, q\}$.}
\end{table}
While the Rand index is an intuitive and simple quantitative measure of the similarity of two partitions, it has some well-known defects. For example, since the Rand index makes no probabilistic assumptions on the data, there are no guarantees on its behavior when comparing two random partitions.
To address these issues, \cite{HubAra1985} proposed the adjusted Rand index, which assumes a hypergeometric distribution in the null case when two random partitions are being compared. Note that this is a very strong assumption. Nonetheless, the benefit of making this assumption is that under the null case, the expected value of the adjusted Rand index is zero. When there is perfect agreement between the two partitions being compared the adjusted Rand index is 1. Unlike the Rand index, it is possible for the adjusted Rand index to take on negative values. We next review details on calculating the adjusted Rand index.
Again consider the problem of clustering $q$ objects. Let $\mathcal{A} = \{A_1, \ldots, A_m\}$ and $\mathcal{B} = \{B_1, \ldots, B_n\}$ denote two partitions of the index set $\{1, \ldots, q\}$.
We illustrate how the score is computed using the contingency table shown in \Tab{adjusted_rand}. The $ij$th entry in the table $q_{ij}$ denotes the number of elements common to $\mathcal{A}_i$ and $\mathcal{B}_j$.
We denote the $i$th row marginal sum $q_{i\cdot} = \sum_{j} q_{ij}$ and the $j$th column marginal sum $q_{\cdot j} = \sum_{i} q_{ij}$. The adjusted Rand index is given by the following ratio.
\begin{eqnarray}
\frac{\sum_{i=1}^m \sum_{j=1}^n {q_{ij} \choose 2} - \left [\sum_{i=1}^m {q_{i \cdot} \choose 2} \sum_{j=1}^n {q_{\cdot j} \choose 2}\right ]/{q \choose 2}}
{\frac{1}{2} \left [ \sum_{i=1}^m {q_{i \cdot} \choose 2} + \sum_{j=1}^n {q_{\cdot j} \choose 2} \right ] - \left [ \sum_{i=1}^m {q_{i \cdot} \choose 2} \sum_{j=1}^n {q_{\cdot j} \choose 2} \right ]/{q \choose 2}}.
\end{eqnarray}
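A Python sketch of this computation from the contingency table (our illustration; we use the convention that $\binom{n}{2} = 0$ for $n < 2$):

```python
import numpy as np
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index computed from the contingency table."""
    a_vals = sorted(set(labels_a))
    b_vals = sorted(set(labels_b))
    q = len(labels_a)
    table = np.zeros((len(a_vals), len(b_vals)), dtype=int)
    for la, lb in zip(labels_a, labels_b):
        table[a_vals.index(la), b_vals.index(lb)] += 1
    sum_ij = sum(comb(int(n), 2) for n in table.ravel())
    sum_i = sum(comb(int(n), 2) for n in table.sum(axis=1))
    sum_j = sum(comb(int(n), 2) for n in table.sum(axis=0))
    expected = sum_i * sum_j / comb(q, 2)   # null-case expectation
    max_index = 0.5 * (sum_i + sum_j)
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions score 1, and, unlike the Rand index, maximally disagreeing partitions such as `[0, 1, 0, 1]` versus `[0, 0, 1, 1]` can score below zero.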
\subsection*{Variation of Information}
\label{sec:variation_information}
We first need to define the entropy of a partition and the mutual information between two partitions in order to define the variation of information. As before suppose we have $q$ objects to cluster and two partitions $\mathcal{A}$ and $\mathcal{B}$. Let $\lvert \mathcal{A}_i \rvert$ denote the number of elements in the $i$th partition $\mathcal{A}_i$.
The entropy of a partition $\mathcal{A}$ is given by
\begin{eqnarray}
\mathcal{H}(\mathcal{A}) & = & -\sum_{i=1}^m \frac{\lvert \mathcal{A}_i \rvert}{q}\log_2 \left(\frac{\lvert \mathcal{A}_i \rvert}{q} \right).
\end{eqnarray}
The mutual information between two partitions $\mathcal{A}$ and $\mathcal{B}$ is given by
\begin{eqnarray}
\mathcal{I}(\mathcal{A}, \mathcal{B}) & = & \sum_{i=1}^m \sum_{j=1}^n \frac{\lvert \mathcal{A}_i \cap \mathcal{B}_j \rvert}{q} \log_2 \left ( \frac{q \, \lvert \mathcal{A}_i \cap \mathcal{B}_j \rvert}{\lvert \mathcal{A}_i\rvert \, \lvert \mathcal{B}_j\rvert} \right ).
\end{eqnarray}
The variation of information between two clusterings is given by
\begin{eqnarray}
\mathcal{VI}(\mathcal{A},\mathcal{B}) & = & \mathcal{H}(\mathcal{A}) + \mathcal{H}(\mathcal{B}) - 2\mathcal{I}(\mathcal{A},\mathcal{B}).
\end{eqnarray}
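The three quantities translate directly into code. A Python sketch (our illustration), again with partitions encoded as label vectors:

```python
import numpy as np

def entropy(labels):
    """Entropy (in bits) of a partition given as a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(a, b):
    """Mutual information (in bits) between two partitions."""
    q = len(a)
    mi = 0.0
    for ca in set(a):
        for cb in set(b):
            n_ab = sum(1 for x, y in zip(a, b) if x == ca and y == cb)
            if n_ab == 0:
                continue  # empty intersections contribute nothing
            n_a = sum(1 for x in a if x == ca)
            n_b = sum(1 for y in b if y == cb)
            mi += (n_ab / q) * np.log2(q * n_ab / (n_a * n_b))
    return mi

def variation_of_information(a, b):
    return entropy(a) + entropy(b) - 2.0 * mutual_information(a, b)
```

Identical partitions have variation of information zero, while independent partitions such as `[0, 0, 1, 1]` and `[0, 1, 0, 1]` have zero mutual information and therefore the maximal value $\mathcal{H}(\mathcal{A}) + \mathcal{H}(\mathcal{B})$.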
\section*{Web Appendix F. Non-Checkerboard Mean Structure}
In the checkerboard model, we have assumed that the mean structure is succinctly described by the cross-product of row and column partitions. This next example explores how COBRA performs when this assumption is violated. Consider the case where the observed data matrix $\M{X}$ is a noisy realization of 5 underlying biclusters $(a, b, c, d, e)$ that can be arranged as follows:
\begin{eqnarray}
\label{eq:ref2_1}
\M{X} & = & \begin{bmatrix}
\mu_a\M{1}_a & \mu_a\M{1}_a & \mu_d\M{1}_d \\
\mu_b\M{1}_b & \mu_c\M{1}_c & \mu_e\M{1}_e \\
\end{bmatrix} + \M{\mathcal{E}},
\end{eqnarray}
where $\M{1}_a$ is a matrix of ones, $\mu_a$ is the mean of bicluster $a$, and $\M{\mathcal{E}}$ is a matrix whose entries are independent draws from a Gaussian distribution. As noted by a referee, this is a scenario that is likely to occur in practice and consists of biclusters that violate the cross-product structure assumed by the checkerboard model. We simulated data $\ME{x}{ij} = \ME{\mu}{k} + \ME{\varepsilon}{ij}$ where $\mu := (\mu_a, \mu_b, \mu_c, \mu_d, \mu_e) = (1,0,0.25,-1,1.25)$ and $\ME{\varepsilon}{ij}$ are i.i.d.\@ draws from $N(0,0.1)$. Using the validation procedure to select the regularization parameter $\gamma$, COBRA identified 2 row clusters and 3 column clusters with bicluster means $\hat{\mu} = (1.000, 1.001, -0.002, 0.252, -0.999, 1.248)$. \Fig{non_checkerboard} shows the smoothed COBRA estimate. While COBRA cannot exactly identify the true bicluster structure, the true structure is readily identifiable from the COBRA output, given that the two biclusters in the first row and first two columns have nearly identical estimated means. In short, by examining the estimated biclusters in conjunction with their estimated means, COBRA can potentially identify the correct biclusters even when the checkerboard assumption is violated.
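Data of this form can be generated along the following lines. The block sizes below are hypothetical, since the matrix dimensions are not stated here, and we interpret $N(0, 0.1)$ as specifying the variance:

```python
import numpy as np

rng = np.random.default_rng(0)
r, c = 20, 20  # hypothetical block size; not stated in the text
mu = {"a": 1.0, "b": 0.0, "c": 0.25, "d": -1.0, "e": 1.25}

# Arrange the five biclusters as [[a, a, d], [b, c, e]].
top = np.hstack([np.full((r, c), mu["a"]),
                 np.full((r, c), mu["a"]),
                 np.full((r, c), mu["d"])])
bottom = np.hstack([np.full((r, c), mu["b"]),
                    np.full((r, c), mu["c"]),
                    np.full((r, c), mu["e"])])
# N(0, 0.1) noise, reading 0.1 as the variance (an assumption).
X = np.vstack([top, bottom]) + rng.normal(0.0, np.sqrt(0.1), size=(2 * r, 3 * c))
```

The top-left block then has empirical mean close to $\mu_a = 1$, and the two bicluster-$a$ blocks are indistinguishable up to noise, which is exactly the situation described above.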
\begin{figure}
\centering
\includegraphics[scale=0.5]{referee2_1}
\caption{Non-Checkerboard mean structure. While COBRA cannot exactly identify the true bicluster structure, the true biclustering structure is readily identifiable from the COBRA estimate.}
\label{fig:non_checkerboard}
\end{figure}
\section*{Web Appendix G. COBRA Refinements}
\subsection*{Adaptive COBRA}
We describe a natural extension of the adaptive Lasso \citep{Zou2006} to the convex biclustering problem. The adaptive COBRA applies the COBRA method twice. Let $\M{U}^\star$ denote the first COBRA solution. Note that this first application of COBRA includes the model selection step detailed in Section 5 of the main paper, so $\M{U}^\star$ corresponds to the smoothed estimate at the $\gamma$ chosen to minimize the hold-out error. We then recompute the weights, treating $\M{U}^\star$ as the data and using the same sparse Gaussian kernel weights procedure detailed in Web Appendix A, and perform a second round of the COBRA method with these new weights on the original data $\M{X}$. This two-step procedure shrinks similar columns (rows) together more strongly than the original COBRA method, mimicking the effect of reweighting in the adaptive Lasso.
\subsection*{Thresholded COBRA}
Before we can describe how the thresholded COBRA works, we need to review how clustering assignments are made in convex clustering. Recall that COBRA alternates between applying convex clustering on the rows and columns of the data matrix $\M{X}$. To streamline the discussion, we focus on how convex clustering obtains column clusters; row clusters are obtained analogously. As noted at the end of Section 4, hard clustering assignments are trivially obtained from variables employed in the splitting method introduced in \cite{ChiLan2015}. We elaborate on this comment here. To cluster the columns of $\M{X}$, we solve the following minimization problem.
\begin{eqnarray}
\underset{\M{U}}{\min}\; \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + \gamma \sum_{i < j} w_{ij} \lVert \M{U}_{\cdot i} - \M{U}_{\cdot j} \rVert_2,
\end{eqnarray}
where $w_{ij}$ are weights that differentially penalize pairwise differences between columns based on their degree of similarity and the tuning parameter $\gamma$ trades off the emphasis between the data fit and the smoothness of the solution. The AMA method proposed by \cite{ChiLan2015} solves the following equivalent minimization problem.
\begin{eqnarray}
\underset{\M{U}, \V{v}_{1,2}, \ldots, \V{v}_{n-1,n}}{\min}\; \frac{1}{2} \lVert \M{X} - \M{U} \rVert_{\text{F}}^2 + \gamma \sum_{i < j} w_{ij} \lVert \V{v}_{i,j} \rVert_2,
\end{eqnarray}
subject to $\V{v}_{i,j} = \M{U}_{\cdot i} - \M{U}_{\cdot j}$ for all $i < j$. We have introduced a dummy variable $\V{v}_{i,j}$ that is the difference between the $i$th and $j$th columns of $\M{U}$. The AMA method iteratively applies group-wise soft-thresholding to send $\V{v}_{i,j}$ vectors with small magnitude to zero. Thus, cluster assignments are made as follows: if $\V{v}_{i,j} = \V{0}$, then the $i$th and $j$th columns are put in the same group. An analogous set of dummy vectors is obtained after applying convex clustering to the rows of $\M{X}$.
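Recovering hard cluster assignments from the zeroed difference vectors amounts to computing connected components. A small union-find sketch (our illustration, not the authors' code):

```python
def assign_clusters(n, zero_pairs):
    """Union-find over the pairs (i, j) whose difference vector v_{i,j}
    was sent to zero; returns a cluster label for each of the n columns."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in zero_pairs:
        parent[find(i)] = find(j)  # merge the two components
    roots = [find(i) for i in range(n)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]
```

For example, with four columns and zeroed differences $(0,1)$ and $(2,3)$, the function returns the two-cluster assignment `[0, 0, 1, 1]`, and merges propagate transitively through chains of zeroed pairs.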
We are now ready to describe a natural extension of the thresholded Lasso \citep{Meinshausen2009} to the convex biclustering problem. As with the adaptive COBRA, the thresholded COBRA performs a postprocessing step that groups together column centroids (row centroids) that are almost, but not exactly, identical. In solving the COBRA optimization problem we obtain a set of vectors associated with the column differences $\V{v}_{i,j}$. We compute a second set of column difference vectors $\Vtilde{v}_{i,j}$ from the vectors $\V{v}_{i,j}$ as follows.
\begin{eqnarray}
\Vtilde{v}_{i,j} & = & \begin{cases}
\V{v}_{i,j} & \text{if $\lVert \V{v}_{i,j} \rVert_2 \geq \tau$} \\
\V{0} & \text{otherwise}
\end{cases}.
\end{eqnarray}
Under the thresholded COBRA, if $\Vtilde{v}_{i,j} = \V{0}$, then the $i$th and $j$th columns are assigned to the same column group.
In short, the postprocessing consists of hard thresholding of the column difference vectors $\V{v}_{i,j}$. The parameter $\tau$ controls how aggressive the hard thresholding is. A natural question is how to set $\tau$. In the case of sparse linear regression, $\tau$ should be on the order of the noise level \citep{Meinshausen2009}, which in practice is typically estimated using the standard deviation of the residuals. In this work we choose $\tau$ to be a fraction of the standard deviation of the 2-norms of the vectors $\V{v}_{i,j}$. To be conservative, we chose $\tau$ to be 1/4 of this standard deviation.
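A Python sketch of this postprocessing rule (our illustration; the dictionary-of-differences representation is an assumption of the sketch, not the authors' data structure):

```python
import numpy as np

def threshold_differences(V, frac=0.25):
    """Hard-threshold difference vectors: zero out any v_{i,j} whose
    2-norm falls below tau = frac * std(norms)."""
    norms = np.array([np.linalg.norm(v) for v in V.values()])
    tau = frac * norms.std()
    return {ij: (v if np.linalg.norm(v) >= tau else np.zeros_like(v))
            for ij, v in V.items()}

# Toy example: columns 0 and 1 are nearly identical, column 2 is far away.
V = {(0, 1): np.array([0.01, 0.0]),
     (0, 2): np.array([1.0, 0.0]),
     (1, 2): np.array([1.0, 0.1])}
V_thr = threshold_differences(V)
```

Here the small difference between columns 0 and 1 is zeroed, so those two columns are merged into one group, while the large differences to column 2 survive the threshold.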
\section*{Web Appendix H. Additional Stability Experiments}
We repeat the stability experiments on the lung cancer data (Section 7 of the main paper) using three other biclustering methods with software available in R.
These methods are (i) the iterative signature algorithm \citep{BerIhm2003} implemented in the package {\tt isa2}, (ii) the sparse SVD method \citep{LeeSheHua2010,SilKaiKop2011} implemented in the package {\tt s4vd}, and (iii) Plaid models \citep{LazOwe2002} implemented in the package {\tt biclust}.
These three methods are popular approaches that seek overlapping biclusters.
This is in contrast to the methods compared in the main paper that assume an underlying checkerboard mean structure. The reader should keep this difference in mind since the biclustering output of these three methods are not directly comparable to the biclustering output of the methods considered in the main paper. Nonetheless, despite these differences, it is possible to assess the stability of these methods in the same manner that we evaluated the stability for the methods that assume a checkerboard pattern. We include these results for completeness here. All parameters were selected according to the default methods in the R packages.
As before, we restrict our attention to the 150 genes with the highest variance. We first apply the biclustering methods to the original data to obtain baseline biclusterings. We then add iid $N(0,\sigma^2)$ noise, where $\sigma = 0.5, 1.0, 1.5$, to create a perturbed data set on which to apply the same set of methods. We compute the RI, ARI, and VI between the baseline clustering and the one obtained on the perturbed data. \Tab{stability} shows the average RI, ARI, and VI over 50 replicates as well as run times. All three methods are relatively fast, with run times between those of COBRA and the clustered dendrogram with dynamic tree cutting. The Plaid model, however, clearly exhibits the best stability among these approaches that seek overlapping biclusters. For convenience, we have included \Tab{stability_checkerboard}, which shows the stability results of the COBRA variants, DCT, and spBC that are presented in Table~2 of the main paper.
\begin{table}[th]
\begin{tabular}{ l c c c c }
& $\sigma$ & ISA & sparse SVD & Plaid \\ \hline
RI & 0.5 & 0.731 & 0.957 & 0.983 \\
& 1.0 & 0.595 & 0.955 & 0.963 \\
& 1.5 & 0.528 & 0.936 & 0.944 \\ \hline
ARI & 0.5 & 0.462 & 0.567 & 0.880 \\
& 1.0 & 0.191 & 0.515 & 0.655 \\
& 1.5 & 0.054 & 0.372 & 0.353 \\ \hline
VI & 0.5 & 1.243 & 0.213 & 0.097 \\
& 1.0 & 1.624 & 0.222 & 0.189 \\
& 1.5 & 1.746 & 0.273 & 0.231 \\ \hline
time (sec) & 0.5 & 2.60 & 9.07 & 2.65 \\
& 1.0 & 2.94 & 10.63 & 3.02 \\
& 1.5 & 2.89 & 10.22 & 2.95 \\ \hline \\
\end{tabular}
\caption{\label{tab:stability}
Stability and reproducibility of biclusterings in lung cancer microarray data. The ISA, sparse SVD, and Plaid biclustering methods are applied to the lung cancer data to obtain baseline biclusterings. We then perturb the data by adding iid\@ $N(0,\sigma^2)$ noise where $\sigma = 0.5$ (Small Pert.), 1.0 (Medium Pert.), 1.5 (Large Pert.).}
\end{table}
\begin{table}[th]
\begin{tabular}{ l c c c c c c }
& $\sigma$ & COBRA & COBRA (A) & COBRA (T) & DCT & spBC \\ \hline
RI & 0.5 & 0.984 & 0.992 & 0.959 & 0.979 & 0.974 \\
& 1.0 & 0.981 & 0.990 & 0.944 & 0.974 & 0.965 \\
& 1.5 & 0.973 & 0.989 & 0.896 & 0.973 & 0.936 \\ \hline
ARI & 0.5 & 0.350 & 0.788 & 0.813 & 0.530 & 0.642 \\
& 1.0 & 0.233 & 0.686 & 0.766 & 0.439 & 0.544 \\
& 1.5 & 0.201 & 0.667 & 0.644 & 0.340 & 0.397 \\ \hline
VI & 0.5 & 1.924 & 0.882 & 0.776 & 2.120 & 1.568 \\
& 1.0 & 2.380 & 1.276 & 0.962 & 2.769 & 2.174 \\
& 1.5 & 2.721 & 1.312 & 1.320 & 3.505 & 2.915 \\ \hline
time (sec) & 0.5 & 15.44 & 23.46 & 15.65 & 0.07 & 151.59 \\
& 1.0 & 25.21 & 34.51 & 25.53 & 0.11 & 197.00 \\
& 1.5 & 18.18 & 26.43 & 18.50 & 0.12 & 207.88 \\ \hline \\
\end{tabular}
\caption{\label{tab:stability_checkerboard}
Stability and reproducibility of biclusterings in lung cancer microarray data. COBRA variants, the clustered dendrogram with dynamic tree cutting, and sparse Biclustering are applied to the lung cancer data to obtain baseline biclusterings. We then perturb the data by adding iid\@ $N(0,\sigma^2)$ noise where $\sigma = 0.5$ (Small Pert.), 1.0 (Medium Pert.), 1.5 (Large Pert.).}
\end{table}
\section{Conclusion and Future Work}
We propose {{\textsc{RetroPrompt}}}, which decouples knowledge from memorization by introducing retrieval augmentation on the input side and throughout model training and prediction, further improving the generalization ability of prompt learning.
{{\textsc{RetroPrompt}}} is a straightforward yet effective retrieval method that combines neural demonstrations with a $k$NN{} guider for both training and prediction.
Our extensive results show that it outperforms other demonstration-enhanced and knowledge-enhanced prompt methods in few-shot, zero-shot, and fully-supervised settings.
Our analysis of the essence of memorization validates the effectiveness of decoupling knowledge from memorization.
Interesting future directions include:
1) applying the method to other tasks, such as QA and NLG;
2) exploring noisy-data mining for unsupervised learning;
3) further improving retrieval efficiency on large datasets.
\section{Experiments}
\begin{table*}[!htp]
\centering
\small
\caption{Results across 9 NLU datasets in the few-shot and zero-shot setting.
We report mean (and standard deviation) results over five different few-shot splits.
``D-demo'' refers to discrete demonstration, and ``KnPr'' is the abbreviation of KnowPrompt.
LOTClass~\cite{meng2020text} is the SOTA model in unsupervised text classification with self-training.
{\dag} denotes that the model uses \textbf{extra knowledge} and {$^\clubsuit$} means that the PLM is \textbf{trained} on the whole unlabeled trainset, while we and the other baselines only leverage the vanilla PLM at test time without training.
The average scores with {$^*$} denote that we reuse the results of the ``non-demo'' version of the related model to fill in the missing values.
Note that the 16-shot results of LM-BFF on GLUE~\cite{DBLP:conf/iclr/WangSMHLB19} tasks are taken from the original paper of LM-BFF~\cite{gao2020making}.
}
\scalebox{0.65}{
\begin{tabular}{l|l|lll|lll|l|lll|l}
\toprule
{\multirow{3}{*}{\textbf{St.}}}
& {\multirow{3}{*}{\textbf{Model}}}
& \multicolumn{3}{c|}{\textbf{Single Sentence}}
& \multicolumn{3}{c|}{\textbf{Sentence Pair }}
& {\multirow{3}{*}{\textbf{Model}}}
& \multicolumn{3}{c|}{\textbf{Information Extraction }}
& {\multirow{3}{*}{\textbf{Avg.}}} \\
\cmidrule{3-8}
\cmidrule{10-12}
& &SST-2 & MR &CR &MNLI &QNLI &QQP & &FewN &SemEval &TACRED \\
& & (acc) & (acc) & (acc) & (acc) & (acc) & (F1) & & (acc) & (acc) & (F1) & \\
\midrule
\multirow{5}{*}{16}
& \textsc{FT}~\
& 81.4 \tiny{(3.8)}
& 76.9 \tiny{(5.9)}
& 75.8 \tiny{(3.2)}
& 45.8 \tiny{(6.4)}
& 60.2 \tiny{(6.5)}
& 60.7 \tiny{(4.3)}
& \textsc{FT}
& 52.7 \tiny{(2.2)}
& 66.1 \tiny{(1.2)}
& 25.8 \tiny{(2.8)}
& 60.6 \\
& \textsc{LM-BFF} (man)~\
& 92.7 \tiny{(0.9 )}
& 87.0 \tiny{(1.2)}
& 90.3 \tiny{(1.0)}
& 68.3 \tiny{(2.3)}
& 64.5 \tiny{(4.2 )}
& 65.5 \tiny{(5.3)}
& {KnPr}~\
& 65.3 \tiny{(1.1)}
& 80.9 \tiny{(2.5)}
& 33.2 \tiny{(2.0)}
& 72.0 \\
& \textsc{LM-BFF} (D-demo)
& 92.6 \tiny{(0.5 )}
& 86.6 \tiny{(2.2)}
& 90.2 \tiny{(1.2)}
& 70.7 \tiny{(1.3)}
& 69.2 \tiny{(1.9)}
& 69.8 \tiny{(1.8)}
& {KnPr} (D-demo)~
& ~\ ~\ ---
& ~\ ~\ ---
& ~\ ~\ ---
& 73.2{$^*$} \\
& \textsc{KPT} \dag
& 90.3 \tiny{(1.6)}
& 86.8 \tiny{(1.8)}
& 88.8 \tiny{(3.7)}
& 61.4 \tiny{(2.1)}
& 61.5 \tiny{(2.8)}
& 71.6 \tiny{(2.7)}
& \textsc{KPT} \dag
& 65.9 \tiny{(1.5)}
& 78.8 \tiny{(2.1)}
& 32.8 \tiny{(1.7)}
& 70.9 \\
\cmidrule{2-13}
& \textbf{Ours}
& \textbf{93.9} \tiny{(0.4)}
& \textbf{88.0} \tiny{(0.8)}
& \textbf{91.9} \tiny{(0.7)}
& \textbf{71.1} \tiny{(1.8)}
& \textbf{71.6} \tiny{(1.8)}
& \textbf{74.0} \tiny{(2.0)}
& \textbf{Ours}
& \textbf{67.3} \tiny{(0.9)}
& \textbf{81.5} \tiny{(1.3)}
& \textbf{40.7} \tiny{(0.7)}
& \textbf{75.6} \\
\midrule
\multirow{5}{*}{4}
& \textsc{FT}~\
& 60.2 \tiny{(2.8)}
& 57.6 \tiny{(1.4)}
& 66.4 \tiny{(5.5)}
& 35.0 \tiny{(0.3)}
& 54.2 \tiny{(3.9)}
& 52.8 \tiny{(4.7)}
& \textsc{FT}
& 32.7 \tiny{(2.9)}
& 38.8 \tiny{(2.0)}
& 14.7 \tiny{(2.8)}
& 45.8 \\
& \textsc{LM-BFF} (man)~\
& 90.7 \tiny{(0.8)}
& 85.2 \tiny{(2.8)}
& 89.9 \tiny{(1.8)}
& 51.0 \tiny{(2.5)}
& 61.1 \tiny{(6.1)}
& 48.0 \tiny{(4.9)}
& {KnPr}
& 52.5 \tiny{(1.5)}
& 58.4 \tiny{(3.7)}
& 28.8 \tiny{(2.5)}
& 62.8\\
& \textsc{LM-BFF} (D-demo)~\
& 90.2 \tiny{(1.5)}
& 85.5 \tiny{(2.1)}
& 89.7 \tiny{(0.6)}
& 56.1 \tiny{(1.0)}
& 61.7 \tiny{(7.6)}
& 63.2 \tiny{(5.6)}
& {KnPr} (D-demo)
& ~\ ---
& ~\ ---
& ~\ ---
& {65.1}{$^*$} \\
& \textsc{KPT} \dag
& 88.2 \tiny{(5.7)}
& 83.4 \tiny{(1.5)}
& 87.2 \tiny{(2.5)}
& 53.7 \tiny{(2.7)}
& 59.2 \tiny{(2.8)}
& 54.9 \tiny{(7.9)}
& \textsc{KPT} \dag
& 58.8 \tiny{(2.2)}
& 57.2 \tiny{(3.2)}
& 27.5 \tiny{(2.2)}
& 63.3 \\
\cmidrule{2-13}
& \textbf{Ours}
& \textbf{91.5} \tiny{(0.4)}
& \textbf{87.4} \tiny{(0.5)}
& \textbf{91.4} \tiny{(0.6)}
& \textbf{57.6} \tiny{(5.5)}
& \textbf{62.8} \tiny{(4.5)}
& \textbf{66.1} \tiny{(4.1)}
& \textbf{Ours}
& \textbf{60.9} \tiny{(1.9)}
& \textbf{59.9} \tiny{(1.9)}
& \textbf{32.1} \tiny{(2.0)}
& \textbf{67.7} \\
\midrule
\multirow{7}{*}{0}
& {LOTClass}$^\clubsuit$ ~\
& 71.8
& 81.7
& 50.1
& 50.4
& 36.5
& 55.9
& {LOTClass}$^\clubsuit$ ~\
& 11.5
& 9.8
& 2.5
& 41.1 \\
& \textsc{FT}~\
& 49.1
& 50.0
& 49.8
& 34.4
& 49.5
& 31.6
& {FT}
& 10.0
& 6.2
& 0.5
& 31.2 \\
& \textsc{LM-BFF} (man)~\
& 83.5
& 80.3
& 78.4
& 49.7
& 50.5
& 49.7
& {KnPr}
& 15.9
& 10.3
& 2.3
& 46.7 \\
& \textsc{LM-BFF} (D-demo)~\
& 82.9
& 80.7
& \textbf{81.4}
& 52.2
& 53.5
& 44.0
& {KnPr} (D-demo)
& ~\ ---
& ~\ ---
& ~\ ---
& 47.0{$^*$} \\
& \textsc{KPT} \dag
& 78.4
& 81.9
& 71.4
& 37.1
& 58.4
& 47.5
& \textsc{KPT} \dag
& 24.6
& 11.6
& 0.8
& 45.7 \\
\cmidrule{2-13}
& \textbf{Ours}
& \textbf{89.1}
& \textbf{86.1}
& {79.7}
& \textbf{53.7}
& \textbf{60.1}
& \textbf{65.1}
& \textbf{Ours}
& \textbf{41.3}
& \textbf{12.2}
& \textbf{3.6}
& \textbf{54.5} \\
\bottomrule
\end{tabular}
}
\label{tab:experiment-few-shot}
\end{table*}
\subsection{Datasets and Baselines }
\textbf{Datasets.}
We evaluate {{\textsc{RetroPrompt}}} on several types of natural language understanding tasks, including single sentence classification tasks (SST-2~\cite{sst2}, MR~\cite{mr}, and CR~\cite{cr}) and sentence pair classification tasks (MNLI~\cite{mnli}, QNLI~\cite{qnli}, and QQP\footnote{\url{https://www.quora.com/q/quoradata/}.}).
To further evaluate the effectiveness of the proposed approach with multi-class classification, we also conduct experiments on the information extraction tasks,
including FewNERD~\cite{fewnerd}, SemEval 2010 Task 8 (SemEval)~\cite{hendrickx2010semeval}, and TACRED~\cite{DBLP:conf/emnlp/ZhangZCAM17}.
\textbf{Baselines.}
We compare with LM-BFF~\cite{gao2020making} for single sentence and sentence pair classification tasks and adopt SOTA prompt learning model KnowPrompt~\cite{chen21knowprompt} as the baseline for information extraction tasks.
Note that the discrete demonstration method cannot be applied to multi-class classification tasks due to input length limitations; thus, the corresponding KnPr (D-demo) entries are left blank in the experimental table.
We also compare our {{\textsc{RetroPrompt}}} with the knowledge-enhanced prompt learning method KPT~\cite{KPT} since KPT leverages the external knowledge base for enhancing prompt learning while we focus on utilizing internal trainsets as a knowledge-store.
\subsection{Evaluation protocols and details}
\label{subsec:details}
The experiments are implemented on a single NVIDIA V100 GPU using PyTorch \cite{DBLP:conf/nips/PaszkeGMLBCKLGA19} as the base library.
We adopt $\text{RoBERTa}_\text{large}$~\cite{liu2019roberta} as the PLM and employ AdamW as the optimizer for all experiments.
To mitigate the influence of diverse templates, we conduct baselines and {{\textsc{RetroPrompt}}} with the same templates for each dataset.
The specific templates we use for each dataset are provided in the Appendix.
For the few-shot and zero-shot experiments, we adopt different settings, described below.
\textbf{Few-shot Setting.}
We follow the few-shot setting of LM-BFF \cite{gao2020making} to conduct 4-shot and 16-shot experiments and
evaluate the average performance with a fixed set of seeds, $\mathcal{S}_{\text{seed}}$, across different sampled $\mathcal{D}_{\text{train}}$ for each task.
Note that our knowledge-store is constructed with the \textbf{few-shot training set} in this setting.
\textbf{Zero-shot Setting.}
We leverage the vanilla $\text{RoBERTa}_\text{large}$ for all baselines (except LOTClass~\cite{meng2020text}) to perform inference directly on the test set.
To take advantage of the retrieval mechanism, {{\textsc{RetroPrompt}}} follows LOTClass~\cite{meng2020text} in utilizing \textbf{unlabeled} trainsets for retrieval.
Specifically, we take the vanilla $\text{RoBERTa}_\text{large}$ to assign pseudo labels to the unlabeled trainset and create the open-book knowledge-store from the unlabeled trainsets and pseudo labels.
Lastly, {{\textsc{RetroPrompt}}} makes predictions on the test set based on the constructed datastore \textbf{without tuning any of the model parameters}.
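To make the retrieval step concrete, the following is a generic $k$NN-guider sketch in the style of $k$NN-LM. It is our own illustration, not the paper's exact formulation; the softmax-over-negative-distance weighting and all names are assumptions.

```python
import numpy as np

def knn_predict(query, keys, values, k=4, temperature=1.0):
    """Retrieve the k nearest datastore entries and form a label
    distribution by softmax over negative distances."""
    d = np.linalg.norm(keys - query, axis=1)  # distance to each key
    nn = np.argsort(d)[:k]                    # indices of the k nearest
    w = np.exp(-d[nn] / temperature)
    w /= w.sum()                              # normalize to a distribution
    probs = np.zeros(int(values.max()) + 1)
    for idx, weight in zip(nn, w):
        probs[values[idx]] += weight          # accumulate weight per label
    return probs

# Toy datastore: two well-separated clusters with pseudo labels 0 and 1.
keys = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
values = np.array([0, 0, 1, 1])
p = knn_predict(np.array([0.05, 0.0]), keys, values, k=2)
```

A query near the first cluster receives nearly all its probability mass on label 0; in the full method the keys would be PLM representations and the values the (pseudo) labels of the knowledge-store.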
\subsection{Experimental Results}
\begin{wrapfigure}{L}{0.28\textwidth}
\centering
\includegraphics[width=0.28\textwidth]{figs/column.pdf}
\caption{Performance on fully-supervised datasets.}
\label{fig:fully_supervised}
\vspace{-0.5cm}
\end{wrapfigure}
\textbf{Few-shot Results.}\quad
As shown in Table~\ref{tab:experiment-few-shot}, we find that {{\textsc{RetroPrompt}}} consistently outperforms the baseline methods LM-BFF and KnowPrompt in both the 4-shot and 16-shot experiments.
Especially for information extraction tasks with many classes, discrete demonstrations cannot be appended to the input due to the limited input sequence length, while our neural demonstration remains applicable and achieves improvements on these multi-class datasets.
Moreover, {{\textsc{RetroPrompt}}} obtains better performance than KPT, even though KPT relies on external knowledge while we only reference the internal few-shot training set without visiting any external knowledge base.
Besides, we observe that {{\textsc{RetroPrompt}}} has a relatively lower standard deviation than the baselines.
The reason may be that the retrieval mechanism can compensate for instabilities in parametric predictions.
\textbf{Zero-shot Results.}\quad
From Table~\ref{tab:experiment-few-shot}, we also observe that {{\textsc{RetroPrompt}}} achieves improvements in the zero-shot setting.
Another notable point is that
{{\textsc{RetroPrompt}}} performs even better than KPT in the zero-shot setting, revealing that exploiting a model's own data to decouple knowledge from memorization has more potential than leveraging external knowledge.
Moreover, we achieve superior performance to LOTClass even though we utilize the vanilla $\text{RoBERTa}_\text{large}$ without any training.
\begin{wraptable}{r}{0.4\textwidth}
\centering
\small
\vspace{-0.2cm}
\caption{Results of model generalization to new domains.}
\scalebox{0.74}{
\begin{tabular}{l|c|cc}
\toprule
{\multirow{1}{*}{\textbf{Model}}}
& \multicolumn{1}{c|}{\textbf{Source}}
& \multicolumn{2}{c}{\textbf{Target Domain}}
\\
\cmidrule{1-4}
&16-shot MR &SST-2 & CR \\
\midrule
\textsc{FT}~\
& 76.9
& 71.4
& 64.7
\\
\textsc{LM-BFF} (man)~\
& 87.0
& 88.9
& 86.9
\\
\textsc{LM-BFF} (D-demo)~\
& 86.6 & 89.3 & 87.5
\\
\textsc{KPT}
& 86.8 & 89.1 & 86.7
\\
\cmidrule{1-4}
\textbf{{\textsc{RetroPrompt}}}
& \textbf{88.0} & \textbf{91.4} & \textbf{88.8}
\\
\cmidrule{1-4}
&16-shot QQP &MRPC & RTE \\
\midrule
\textsc{FT}~\
& 60.7
& 43.7
& 48.0
\\
\textsc{LM-BFF} (man)~\
& 65.4
& 20.9
& 65.5
\\
\textsc{LM-BFF} (D-demo)~\
& 68.2
& 38.8
& 66.2
\\
\textsc{KPT}
& 71.6
& 42.3
& 65.8
\\
\cmidrule{1-4}
\textbf{{\textsc{RetroPrompt}}}
& \textbf{74.0}
& \textbf{49.4} & \textbf{67.3}
\\
\bottomrule
\end{tabular}
}
\label{tab:experiment-cross-domain}
\vspace{-0.7cm}
\end{wraptable}
\textbf{Fully-supervised Results.}\quad
As shown in Figure~\ref{fig:fully_supervised},
the experiments in fully-supervised settings with long-tail distribution illustrate that {{\textsc{RetroPrompt}}} achieves improvement compared with baselines.
This indicates that our retrieval mechanism extends the LM's ability to learn hard examples in the fully-supervised datasets.
\subsection{Model Generalization to New Domains}
Scarce data may cause overfitting of the large number of parameters in PLMs, even with prompt learning.
Thus, we conduct cross-domain experiments to validate the generalization of our {{\textsc{RetroPrompt}}}.
Specifically, we utilize the model trained on the source datasets and directly test on the other target datasets.
From Table~\ref{tab:experiment-cross-domain}, we find that our method consistently outperforms the baselines. This finding illustrates that {{\textsc{RetroPrompt}}} generalizes well to new domains.
\subsection{Analysis of Memorization}
\label{subsec:memorization}
It is necessary and interesting to further explore the memorization mechanism to help us better understand the utility of retrieval for memorization in NLP.
\textbf{Definition of Memorization Measurement.}\quad
Inspired by the idea of \cite{feldman2020does} in the computer vision area, we define the {\it memorization measure} as how the classification prediction varies when a training instance $\bm{z}$ is deleted from the training set.
We follow \cite{koh2017understanding,memory} to define and derive the memorization score for a training instance $\bm{z}$ as follows:
\begin{equation}
\small
\label{equ:remove}
\begin{aligned}
{\mathcal{S}}_{\text{delete}}(\bm{z})
&\overset{\text{def}}{=} -\frac{d P(y|\bm{x}; \Hat{\theta}_{\xi, -\bm{z}})}{d \xi} \bigg|_{\xi=0}
&= -\nabla_{\theta}P(y|\bm{x}; \hat{\theta})^{\top}\frac{d \hat{\theta}_{\xi, -\bm{z}}}{d \xi} \bigg|_{\xi=0}
&= -\nabla_{\theta}P(y|\bm{x}; \hat{\theta})^{\top}H^{-1}_{\hat{\theta}}\nabla_{\theta}{\mathcal{L}(\bm{z}, \hat{\theta})},
\end{aligned}
\end{equation}
where $\hat{\theta}_{\xi, -\bm{z}}$ denotes the parameters of the model trained with the instance $\bm{z}$ down-weighted by $\xi$, $\hat{\theta}$ is the parameters of the model trained with all instances and $H_{\hat{\theta}} = \frac{1}{n}\sum^{n}_{i=1}{\nabla^{2}_{\theta}{\mathcal{L}(z_i, \hat{\theta})}}$.
Thus $\mathcal{S}_{\text{delete}}(\bm{z})$ is the amount of change in $P(y|\bm{x}; \hat{\theta})$ when the instance $\bm{z}$ is down-weighted by a small amount $\xi$.
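For intuition, the self-influence score above can be computed in closed form for a toy one-parameter logistic model, where the Hessian reduces to a scalar. This sketch is illustrative only: the trained weight and data points are invented, and it is not the procedure applied to PLMs in the paper.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy 1-parameter logistic model P(y=1|x) = sigmoid(theta * x); theta is an
# assumed already-trained weight. The last data point is atypical (x>0 but y=0).
theta = 2.0
data = [(2.0, 1), (1.5, 1), (-2.0, 0), (0.5, 0)]

def grad_loss(x, y):
    # d/dtheta of -log P(y|x;theta) for the logistic model
    return (sigmoid(theta * x) - y) * x

def grad_prob(x, y):
    # d/dtheta of P(y|x;theta)
    s = sigmoid(theta * x)
    return s * (1 - s) * x * (1 if y == 1 else -1)

# Hessian of the average loss (a scalar for this 1-parameter model)
H = sum(sigmoid(theta * x) * (1 - sigmoid(theta * x)) * x * x for x, _ in data) / len(data)

def self_influence(x, y):
    # S_delete(z) = -grad(P)^T H^{-1} grad(L); all quantities are scalars here
    return -grad_prob(x, y) * (1.0 / H) * grad_loss(x, y)

scores = [self_influence(x, y) for x, y in data]
# The atypical instance (0.5, 0) receives the largest memorization score.
print(max(range(len(data)), key=lambda i: scores[i]))  # → 3
```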
\begin{wraptable}{r}{0.45\textwidth}
\centering
\small
\caption{The upper part shows the average percentage of \emph{positive phrases} over different memory groups of positive/negative instances. The lower part denotes the mean values of memorization score on the SST-2 dataset.
}
\label{table.atypical}
\begin{small}
\scalebox{0.61}{
\begin{tabular}{l|ccc|ccc}
\toprule
\multirow{2}{*}{\textbf{Mem Group}} & \multicolumn{3}{c|}{\textbf{Negative}} & \multicolumn{3}{c}{\textbf{Positive}} \\
\cmidrule{2-7}
&FT & LM-BFF & OURS
&FT & LM-BFF & OURS \\
\midrule
Top-10\% &34.29 & 32.78 & 30.23
& 68.75 & 69.71 &75.67 \\
ALL & \multicolumn{3}{c}{23.40} & \multicolumn{3}{c}{86.39} \\
Bottom-10\% & 17.63 & 16.25 & 14.42
& 95.92 & 95.08 & 94.53
\\
\bottomrule
\midrule
& \multicolumn{2}{c}{FT} & \multicolumn{2}{c}{LM-BFF} & \multicolumn{2}{c}{OURS} \\
\midrule
\textsc{Mem Score} & \multicolumn{2}{c}{4.597} & \multicolumn{2}{c}{0.121} & \multicolumn{2}{c}{0.032} \\
\bottomrule
\end{tabular}
}
\end{small}
\vspace{-0.5cm}
\end{wraptable}
\textbf{Top-memorized Instances: Typical or Atypical?}\quad
Since the SST-2 dataset provides annotations of phrase-level sentiment polarity labels, we adopt SST-2 to analyze memorization, judging the atypicality of an instance by the percentage of positive phrases it contains.
We collect such statistics from SST-2 and find that a typical positive instance has a relatively high percentage of positive phrases, and a typical negative instance should have a relatively low percentage of positive phrases.
Based on the above observation, we apply the memorization score defined in Eq.~\ref{equ:remove} to select the Top-10\% and Bottom-10\% memorized instances from the training set and collect the average percentage of positive phrases in these instances.
As shown in Table~\ref{table.atypical}, we can draw the following conclusions:
(1) \textbf{The PLM tends to give atypical samples deeper memory attention.}
Specifically, for both LM-BFF and our method, the top-10\% memorized negative instances have a higher percentage of positive phrases than the average over all negative instances.
(2) LM-BFF has lower memorization scores on hard samples than fine-tuning. We attribute this to the fact that \textbf{prompt learning can help PLMs recall what they learned during pre-training without strengthening memorization of downstream data.} (3) {{\textsc{RetroPrompt}}} further has lower average memorization scores than fine-tuning and LM-BFF, which illustrates that our method is less dependent on memorization. This result may be attributed to \textbf{decoupling knowledge from memorization through retrieval to alleviate the rote memorization of PLMs.}
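The grouping procedure behind the upper part of Table~\ref{table.atypical} can be sketched as follows; the memorization scores and phrase percentages below are invented for illustration, not taken from the paper.

```python
# Rank training instances by memorization score and compare the average
# positive-phrase percentage of the most- vs. least-memorized groups.
instances = [
    # (memorization_score, positive_phrase_pct) for negative-label instances
    (4.1, 35.0), (3.8, 30.0), (0.9, 22.0), (0.5, 18.0), (0.1, 15.0), (0.0, 12.0),
]

def group_mean(instances, frac, top=True):
    ranked = sorted(instances, key=lambda z: z[0], reverse=top)
    k = max(1, int(len(ranked) * frac))
    return sum(pct for _, pct in ranked[:k]) / k

top = group_mean(instances, 0.33, top=True)      # most-memorized group
bottom = group_mean(instances, 0.33, top=False)  # least-memorized group
print(top > bottom)  # → True: memorized negatives contain more positive phrases
```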
\begin{wraptable}{r}{0.45\textwidth}
\centering
\caption{Detailed ablation experiments in few-shot settings.
``N-demo'' denotes the neural demonstration,
and ``refresh'' refers to the asynchronous refresh of the knowledge-store.
}
\scalebox{0.69}{
\begin{tabular}{l|ccccc}
\toprule
\multirow{2}{*}{\textbf{Model}}
& \multicolumn{5}{c}{\textbf{16-shot}}
\\
\cmidrule{2-6}
& SST-2 & CR & MNLI & QQP & TACRED
\\ \midrule
\textbf{OURS}
& \textbf{93.9} & \textbf{91.9}
& \textbf{71.1}
& \textbf{74.0} & \textbf{40.7}
\\
\midrule
w/o \text{$k$NN{}}-test
& 93.2 & 91.2
& 70.4
& 73.0 & 38.2 \\
w/o \text{$k$NN{}}-train
& 92.0
& 91.2
& 68.8
& 71.3 & 36.5 \\
w/o N-demo
& 92.4
& 90.8
& 69.1
& 72.0 & 37.6 \\
w/o refresh
& 93.5
& 91.5
& 70.7
& 73.6 & 39.9 \\
\bottomrule
\end{tabular}}
\label{tab:ablation}
\vspace{-0.4cm}
\end{wraptable}
\textbf{Case Analysis.}\quad
As shown in Table~\ref{table.vis.sst},
we manually list the top-ranked and bottom-ranked training instances of SST-2 according to our model.
It reveals that the top-ranked memorized instances tend to express universal opinions only indirectly; we therefore regard them as atypical/hard for sentiment classification.
In contrast, the instances with memorization scores of 0 express their opinions straightforwardly, representing typical instances. Note that {$F(p_{kNN})$} is defined to represent the difficulty of a sample as discriminated by the $k$NN{} distribution, and Table~\ref{table.vis.sst} also shows that {$F(p_{kNN})$} indeed reflects the atypicality of examples, which validates the effectiveness of the $k$NN{}-guided training.
\subsection{Ablation Study}
\label{ablation}
\paragraph{Component Ablation.}\quad
As shown in Table~\ref{tab:ablation}, all four ablated variants show a clear performance drop, which proves the effectiveness of each retrieval component.
We also find that the neural demonstration and $k$NN{}-train bring more improvement in the few-shot setting than $k$NN{}-test.
Note that $k$NN{}-test is similar to $k$NN{}-LM~\cite{DBLP:conf/iclr/KhandelwalLJZL20,he2021efficient}, and the results reveal that
simply incorporating $k$NN{} into the test process of prompt learning has little effect in the few-shot setting.
\begin{wraptable}{r}{0.38\textwidth}
\caption{Performance on 16-shot CR and TACRED with different representations of key and calculate function of $k$NN{} distribution.}
\label{result-ktype}
\centering
\scalebox{0.75}{
\begin{tabular}{llcc}
\toprule
Key Repres. & $k$NN{} Acq. & CR & TAC. \\
\midrule
Prompt & Rep-similar & 91.9 & 40.7\\
\texttt{[CLS]} & Rep-similar & 89.0 & 37.2 \\
Prompt & BM25 & 89.5 & 38.8 \\
\texttt{[CLS]} & BM25 & 88.7 & 36.1 \\
\bottomrule
\end{tabular}
}
\vspace{-0.2cm}
\end{wraptable}
\paragraph{Key Representation and $k$NN{} Acquisition.}\quad We study the effect of using different representations of the key in the knowledge-store. We experiment with two types of representations: (1) the prompt-based representation, which is the default setting, and (2) the \texttt{[CLS]}-based representation of the current LM. We also experiment with two ways of calculating the $k$NN{} distribution: (1) a representation-based similarity score (referred to as rep-similar), which is the default setting, and (2) a BM25-based score, which calculates the correlation between the query and each key example with the BM25~\cite{bm25} algorithm.
Results in Table~\ref{result-ktype} show that using prompt-based representations for the key and representation-based similarity scores for $k$NN{} leads to the best performance. This suggests that prompts learn better representations for context similarity, and that the representation-similarity-based $k$NN{} distribution is better than BM25-based scores.
\input{case_table}
\section{Introduction}
Large parametric language models ~\cite{radfordimproving, Devl2019bert,joshi2020spanbert,bart} have achieved dramatic empirical success in natural language processing (NLP).
Notably, pre-trained language models (PLMs) have learned a substantial amount of in-depth knowledge from data and have shown tremendous promise in few-shot/zero-shot learning with natural language prompts \cite{gao2020making,DBLP:journals/corr/abs-2110-08207,DBLP:journals/corr/abs-2109-01652}.
However, recent studies \cite{liu2021gpt,DBLP:journals/corr/abs-2104-08786,DBLP:journals/corr/abs-2203-00902} observe that prompt learning with PLMs usually generalizes unstably in extremely low-resource settings or emerging domains.
One potential reason is that it is non-trivial for parametric models to \emph{learn rare or hard patterns well through rote memorization}, resulting in poor generalization performance.
Intuitively, if we regard the whole training data as a {\it book} and the test phase as the {\it examination}, the current training-test procedure of prompt learning (based on batch data training) can be viewed as {\it page-by-page memorization} and {\it closed-book examination} \cite{meng2021gnnlm}.
During training, vanilla prompt learning may struggle to memorize atypical instances in a fully-supervised setting or overfit shallow patterns with low-shot data \cite{memory,elangovan-etal-2021-memorization}.
Specifically, recent studies~\cite{feldman2020does,feldman2020neural} have proposed a long-tail theory, which states that if the training data form a long-tail distribution with small ``sub-populations'' of atypical instances, then PLMs predict on test data by rote memorizing these atypical instances rather than learning common patterns \cite{memory,tanzer2022memorisation}.
The limitations of rote memorization remind us of the human learning process of {\emph{``learn by analogy''}} and the proverb that {\emph{``the palest ink is better than the best memory''}}.
Note that humans can perform associative learning to recall relevant skills in deep memories for reinforcing each other, thus, owning the extraordinary abilities to solve few-shot and zero-shot tasks.
Motivated by these, we endeavor to improve the generalization ability of prompt learning with retrieval and association.
Our intuition is that the difficulty of resolving the above limitations can be substantially alleviated if we can decouple the knowledge from memorization by constructing {\it an open-book knowledge-store} from the training data; thus, referring to related knowledge could provide a strong enhancement signal to help the model strike a balance between generalization and memorization.
\begin{wrapfigure}{R}{0.55\textwidth}
\centering
\vspace{-0.3cm}
\includegraphics[width=0.55\textwidth]{figs/intro.pdf}
\caption{
Decoupling knowledge from memorization.
}
\label{fig:motivation}
\vspace{-0.4cm}
\end{wrapfigure}
Specifically, we introduce a novel retrieval-augmented framework based on prompt learning (\textbf{{\textsc{RetroPrompt}}}) as shown in Figure~\ref{fig:motivation}.
The open-book knowledge-store $\left(\mathcal{K},\mathcal{V}\right)$, defined as the set of \emph{keys: prompt-based example embeddings} and \emph{values: corresponding label words} constructed from the training data, serves as an additional reference for the model to decouple knowledge from pure memorization to some extent.
Specifically, to integrate retrieved knowledge into the input,
\textbf{Firstly}, we incorporate neural demonstrations into the input sequences as in-context augmentation, where the demonstrations are retrieved from the knowledge-store.
\textbf{Then}, we apply a non-parametric algorithm $k$NN{} over the input query and knowledge store, and regard $k$NN{} results as an indication of easy vs.~hard examples in the training set.
More specifically, we automatically force the model to focus on the hard examples identified by $k$NN{} by assigning a scaling during training.
\textbf{Lastly}, the $k$NN{} results are further employed at the output of the PLM head to participate in masked prediction during inference.
The model retrieves Top-$k$ nearest reference instances as cues from $\left(\mathcal{K},\mathcal{V}\right)$ and makes inference by linearly interpolating the output of prompt learning with a non-parametric nearest neighbor distribution.
The considerable performance gains on nine tasks in few-shot and zero-shot settings demonstrate that our systemic retrieval mechanism helps the model generalize better with scarce data.
Experiments in the fully-supervised setting with long-tail distribution illustrate that our {{\textsc{RetroPrompt}}} can deal with atypical instances more robustly.
We further adopt self-influence \cite{koh2017understanding} as our memorization scoring function to analyze the memorization process between fine-tuning, prompt learning and our {{\textsc{RetroPrompt}}}.
The final analysis results show that
1) the training instances with the highest memorization scores tend to be atypical,
2) {{\textsc{RetroPrompt}}} generalizes better than fine-tuning and conventional prompt-tuning by decoupling knowledge from memorization to alleviate the rote memorization of PLMs.
In a nutshell, our work may open up new avenues to improve the generalization of prompting PLMs by decoupling knowledge from memorization via retrieval.
\section{Preliminaries of Prompt Learning}
\label{background}
Assume that $\mathcal{M}$ and $\mathcal{T}$ denote the PLM and the template function for prompt learning, respectively.
Formally, the text classification task takes a query sentence $\bm{x} = (x_0,x_1,...,x_n)$ as input and classifies it into a class label ${y} \in \mathcal{Y}$.
Prompt learning converts the classification task into a masked language modeling problem with \textit{cloze-style} objectives.
Specifically, the template function $\mathcal{T}$ inserts pieces of texts into $\bm{x}$ as $\hat{\bm{x}} = \mathcal{T}(\bm{x})$, where $\hat{\bm{x}}$ is the corresponding input of $\mathcal{M}$ with a {\tt[MASK]} token in it.
For example, assuming we need to classify the sentence $\bm{x}$ =``The movie makes absolutely no sense.'' into label \textsc{Negative} (labeled as 0) or \textsc{Positive} (labeled as 1), we wrap it into
\begin{equation}
\hat{\bm{x}}=
\texttt{[CLS]} \bm{x} \ \text{It was \texttt{[MASK]}} \texttt{[SEP]}
\end{equation}
The verbalizer $f\colon \mathcal{Y} \mapsto \mathcal{V}$ is defined as a mapping from the label space $\mathcal{Y}$ to a few words in the vocabulary, which form the \emph{label word} set $\mathcal{V}$.
The base component of $\mathcal{M}$ produces the
sequence representation over $\hat{\bm{x}}$, and we choose the hidden vector at the \texttt{[MASK]} position as the contextual representation $\bm{h}_{\hat{\bm{x}}} \in \mathbb{R}^d$, where $d$ is the dimension of hidden states.
Then the MLM head of $\mathcal{M}$ can operate on $\bm{h}_{\hat{\bm{x}}}$ to calculate the probability of each word $v$ in the vocabulary being filled in \texttt{[MASK]} token $P_\mathcal{M}(\texttt{[MASK]}=v|\hat{\bm{x}})$.
We let $\mathcal{V}_y$ represent the subset of $\mathcal{V}$ connected with a specific label $y$, where $\cup_{y\in\mathcal{Y}} \mathcal{V}_y = \mathcal{V}$.
Then the probability distribution over the label $y$ is calculated as:
\begin{equation}
\begin{aligned}
P(y|\bm{x}) \!\!=\!\! g\left(P_\mathcal{M}(\texttt{[MASK]}\!\!\!=\!v|\mathcal{T}(\bm{x}))|v\in\mathcal{V}_y\right),
\end{aligned}
\label{eq:pmscore}
\end{equation}
where $g$ is a function transforming the probability of label words into the probability of the classes.
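As a concrete illustration of Eq.~\ref{eq:pmscore}, the following sketch instantiates $g$ as a sum over each class's label words followed by normalization across classes. The label words and \texttt{[MASK]} probabilities are hypothetical, not from the paper.

```python
# Map MLM probabilities over label words (the verbalizer sets V_y) to a
# distribution over classes. g here sums each class's label-word mass.
mask_probs = {"great": 0.55, "good": 0.15, "terrible": 0.20, "bad": 0.10}
label_words = {1: ["great", "good"], 0: ["terrible", "bad"]}  # V_y per class y

def class_distribution(mask_probs, label_words):
    scores = {y: sum(mask_probs[v] for v in words) for y, words in label_words.items()}
    total = sum(scores.values())
    return {y: s / total for y, s in scores.items()}  # normalize over classes

dist = class_distribution(mask_probs, label_words)
print(max(dist, key=dist.get))  # → 1 (Positive)
```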
\section{{{\textsc{RetroPrompt}}}: Retrieval-augmented Prompt Learning}
We introduce a simple and general retrieval-augmented framework for prompt learning, named {{\textsc{RetroPrompt}}}, whose basis is the dense retriever (\S \ref{sec:store}) with an open-book knowledge-store to decouple knowledge from memorization.
As shown in Figure \ref{fig:arc},
{{\textsc{RetroPrompt}}} consists of three components: retrieval of neural demonstration for enhancing input (\S \ref{sec:demo}), the $k$NN{} guided training (\S \ref{sec:knn-train}) and the $k$NN{}-based probability for \textit{cloze-style} prediction (\S \ref{sec:knn-test}).
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.62]{figs/model.pdf}
\caption{Overview of {{\textsc{RetroPrompt}}}. Note that $e(\cdot)$ denotes the word embedding function in the PLM $\mathcal{M}$, while ``M'', ``t'' and ``g'' in $e(\cdot)$ specifically refer to ``[MASK]'', ``terrible'' and ``great''.}
\label{fig:arc}
\end{figure*}
\subsection{Dense Retriever}
\label{sec:store}
\paragraph{Open-book Knowledge-store}
The first step of our proposed framework is to build a knowledge-store for retrieval that decouples knowledge from memorization and captures the semantics of instances in the training set $\mathcal{C}$.
Specifically,
we utilize the encoder to embed prompt-based instance representations over $\mathcal{C}$ to construct the knowledge-store.
Given the $i$-th example $\left(\bm{c}_i,{y}_i\right)$ in the training data $\mathcal{C}$,
we compute the key-value pair $(\bm{h}_{\hat{\bm{c}}_i},v_i)$, in which $\hat{\bm{c}}_i=\mathcal{T}({\bm{c}_i})$, $\bm{h}_{\hat{\bm{c}}_i} \in \mathbb{R}^d$ is the embedding of the {\tt[MASK]} token in the last layer of the underlying PLM, and $v_{i}=f(y_{i})$ denotes the label word of the $i$-th example.
We store all pairs $(\bm{h}_{\hat{\bm{c}}},v)$ in a key-value datastore $\left(\mathcal{K},\mathcal{V}\right) $ where $\bm{h}_{\hat{\bm{c}}}$ serves as \emph{key} and $v$ as \emph{value} as follows:
\begin{equation}
\begin{aligned}
\left(\mathcal{K},\mathcal{V}\right) =
\{
\left(\bm{h}_{\hat{\bm{c}}{_i}},v_i\right) \mid \left({\bm{c}_i},y_i\right)\in \mathcal{C}
\}
\end{aligned}
\end{equation}
The knowledge-store is flexible to add, edit or delete any instances and can be asynchronously updated during the training procedure. Note that our knowledge-store is constructed from few-shot trainsets in the corresponding few-shot settings rather than the whole available training data.
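A minimal sketch of this construction, where \texttt{template}, \texttt{encode\_mask}, and \texttt{verbalizer} are hypothetical stand-ins for the real template function $\mathcal{T}$, the PLM's \texttt{[MASK]} encoder, and the verbalizer $f$:

```python
# Each training example is wrapped by the template, encoded, and its [MASK]
# embedding is stored as the key with the verbalized label word as the value.

def template(text):
    return f"[CLS] {text} It was [MASK] [SEP]"

def encode_mask(prompted_text):
    # Stand-in for the PLM's last-layer [MASK] embedding (a real model
    # returns a d-dimensional dense vector).
    return [float(len(prompted_text)), float(prompted_text.count("good"))]

verbalizer = {1: "great", 0: "terrible"}

def build_knowledge_store(train_set):
    return [(encode_mask(template(c)), verbalizer[y]) for c, y in train_set]

store = build_knowledge_store([("a good film", 1), ("a dull plot", 0)])
print(len(store), store[0][1])  # → 2 great
```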
\paragraph{Efficient Searching}
Considering that the size of the training data $\mathcal{C}$ can be enormous, we must ensure an efficient retrieval process.
As shown in the above creation of open-book knowledge-store, we can build the matrix $\mathbf{D}\in \mathbb{R}^{|\mathcal{C}|\times d}$ as the index of training examples.
Given a query set $Q$, we first encode each query example with template mapping function $\mathcal{T}(\cdot)$ to get a set of prompt-based query vectors $\bm{h}_{\hat{q}}$ for retrieval augmentation on the fly.
Then, we utilize query vectors to search for the closest examples over the index $\mathbf{D}$ via maximum inner product search (MIPS).
For the retrieval process, we choose FAISS~\cite{DBLP:journals/tbd/JohnsonDJ21} to query the open-book knowledge-store efficiently. FAISS is an excellent open-source library for fast nearest neighbor retrieval in high-dimensional spaces.
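For clarity, the retrieval step reduces to maximum inner product search over the index $\mathbf{D}$; a brute-force pure-Python version is shown below. FAISS replaces this with an optimized index at scale, so this sketch is only for exposition.

```python
# Exact maximum inner product search (MIPS): score every stored key against
# the query and return the indices of the k highest-scoring keys.

def mips(query, keys, k):
    scores = [(sum(q * d for q, d in zip(query, key)), i) for i, key in enumerate(keys)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]  # indices of the k closest keys

keys = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(mips([1.0, 0.0], keys, 2))  # → [0, 1]
```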
\paragraph{Asynchronous Refresh of the Knowledge-store}
\label{refresh}
Since the neural demonstration may lead to varying contextual representations of instances as the parameters of the PLM are continually updated,
we propose to ``refresh'' the index of retrieval by asynchronously re-embedding and re-indexing all embeddings in the open-book knowledge-store every $j$ training epochs
\footnote{Specifically, we refresh the knowledge-store for each epoch in our experiments.}.
In \S~\ref{ablation}, we empirically demonstrate that this procedure results in performance improvement.
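The refresh schedule can be sketched as follows; \texttt{encode} is a hypothetical stand-in whose output drifts with the epoch simply to make the re-embedding visible, and the training step itself is elided.

```python
# Every j epochs the knowledge-store is re-embedded with the current model
# parameters so the index tracks the evolving representations.

def encode(text, epoch):
    return [float(len(text)) + 0.01 * epoch]  # representation drifts during training

def train_with_refresh(train_texts, epochs, j=1):
    store = [encode(t, 0) for t in train_texts]
    for epoch in range(1, epochs + 1):
        # ... one epoch of prompt-learning updates would happen here ...
        if epoch % j == 0:  # asynchronous re-embedding + re-indexing
            store = [encode(t, epoch) for t in train_texts]
    return store

store = train_with_refresh(["a good film"], epochs=3, j=1)
print(round(store[0][0], 2))  # → 11.03
```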
\subsection{Retrieval of Neural Demonstration}
\label{sec:demo}
To enhance the PLMs with the ability to learn by analogy through the knowledge-store, we further combine {{\textsc{RetroPrompt}}} with neural demonstrations, an orthogonal technique enhancing language models, to improve the generalization ability of our model.
For the $t$-th query instance $\bm{q}_t$,
we first utilize prompt-based representation $\bm{h}_{\hat{q}_t}$ to query the cached representations of open-book knowledge-store.
Then we retrieve $m$ nearest neighbors $\{ \{\bm{c}^{(1)}_{1}, ..., \bm{c}^{(1)}_{m}\}, ..., \{\bm{c}^{(L)}_{1}, ..., \bm{c}^{(L)}_{m}\}\}$ of $\bm{q}_t$ for each class, where the superscript $L$ denotes the total number of the classes and the $\bm{c}_{i}^{(l)}$ is retrieved as the $i$-th nearest neighbor in the $l$-th class.
After the model retrieves the Top-$m$ candidates for each class, their corresponding representations $\bm{h}_{\bm{\hat{c}}_{i}}^{(l)}$ and label words $v^{(l)}$ from the knowledge-store
will be incorporated into the encoder to act as demonstrations.
Since $\bm{h}_{\bm{\hat{c}}_{i}}^{(l)}$ is already a vector, we intuitively aggregate the $m$ neighbor vectors for each class according to their similarity and
incorporate the demonstration into the input representation of $\hat{\bm{x}}$ after the word embedding layer of the $\mathcal{M}$ as follows:
\begin{equation}
\small
\mathcal{I} = {e}(\hat{\bm{x}}) \oplus
[\sum_{i \in [1:m]}\alpha_{i}^{(1)} \bm{h}_{\hat{\bm{c}}_i}^{(1)},
{e}(v^{(1)})]
\oplus ... \oplus
[\sum_{i \in [1:m]}\alpha_{i}^{(L)} \bm{h}_{\hat{\bm{c}}_i}^{(L)},
{e}(v^{(L)})] ;
\alpha_i^{(l)} = \frac{e^{
\bm{h}_{\hat{\bm{q}}}
\cdot \bm{h}_{\hat{\bm{c}}_i}^{(l)}}}
{\sum_{i \in [1:m]} e^{\bm{h}_{\hat{\bm{q}}}
\cdot \bm{h}_{\hat{\bm{c}}_i}^{(l)}}}
\end{equation}
where ${e}(\cdot)$ represents the word embedding layer of $\mathcal{M}$, $\oplus$ denotes the concatenation of input sequences, ${\alpha}_{i}^{(l)}$ is the softmax score for the $i$-th retrieval belonging to $l$-th class label to denote their relevance with $\hat{\bm{q}}$, and $\mathcal{I}$ is the sequence features for inputting the next layer of PLM.
As shown in the above equation, we encode the demonstration representation as the weighted sum of the retrieved representations. Thus, retrieval scores are directly used in the final representation, making the framework differentiable.
To this end, we denote this style of demonstration as \emph{neural demonstration}, significantly different from prior work of \emph{discrete demonstration}~\cite{gao2020making}.
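The similarity-weighted aggregation above can be sketched as follows; the query and neighbor vectors are tiny illustrative lists, whereas the real $\bm{h}$ vectors come from the PLM.

```python
import math

# For each class, the m retrieved neighbor vectors are softmax-weighted by
# their inner-product similarity to the query, producing one demonstration
# vector per class to concatenate after the word-embedding layer.

def softmax_weights(query, neighbors):
    sims = [sum(q * h for q, h in zip(query, n)) for n in neighbors]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(query, neighbors):
    w = softmax_weights(query, neighbors)
    dim = len(neighbors[0])
    return [sum(w[i] * neighbors[i][d] for i in range(len(neighbors))) for d in range(dim)]

query = [1.0, 0.0]
neighbors_pos = [[0.9, 0.1], [0.2, 0.8]]   # m = 2 neighbors for one class
demo_vec = aggregate(query, neighbors_pos)  # one weighted demonstration vector
print(len(demo_vec))  # → 2
```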
\textbf{Neural vs. Discrete Demonstration}
Compared with prior discrete demonstrations described in ~\cite{gao2020making,DBLP:journals/corr/abs-2101-06804,DBLP:journals/corr/abs-2112-08633,kumar-talukdar-2021-reordering}, retrieving weighted neural demonstrations from the knowledge-store to augment prompt learning has advantages in the following three major aspects:
(1) neural demonstrations consume far less of the model's maximum input length than discrete demonstrations; discrete demonstrations are usually unsuitable for multi-class classification tasks such as relation extraction due to the input-length limitation.
(2) with discrete demonstrations, the model must attend over many retrieved tokens, making cross-attention time-consuming and computationally intensive due to its quadratic complexity. In contrast, dealing with much shorter instance representations as neural demonstrations unleashes the potential of cross-attention and accelerates inference.
(3) when sampling examples based on the similarity between instances, our \textit{cloze-style} contextual representation is more informative and consistent than the \texttt{[CLS]} contextual representation from Sentence-BERT~\cite{reimers2019sentence} (adopted in LM-BFF).
\subsection{Retrieve $k$NN{} for Guiding Training}
\label{sec:knn-train}
Eager learners, such as PLMs, are trained to provide a global approximating function that maps from input to output space.
Lazy learners such as $k$-nearest neighbor classifiers, on the contrary, focus on approximating the neighborhoods around test examples~\cite{bontempi2001local}.
Since $k$NN{} can easily make predictions for each encountered query instance based on pre-trained representations without an extra classifier, it is intuitive to leverage the $k$NN{} classification results as \textbf{prior external knowledge} to guide the PLM's parameters toward hard examples (hard samples usually refer to atypical samples) during training (abbreviated as $k$NN{}-train).
Particularly, our intuition is to differentiate between easy and hard examples according to the prediction of $k$NN{}.
Given the $t$-th query instance $\bm{q}_t$,
we leverage $\bm{h}_{\hat{\bm{q}}_t}$ to query the open-book knowledge-store $\left(\mathcal{K},\mathcal{V}\right)$ and retrieve the $k$-nearest neighbors $\mathcal{N}$ of $\bm{q}_t$ according to a similarity function $d(\cdot, \cdot)$, where
$d(\cdot, \cdot)$ is typically the inner product.
Then, we compute a distribution over neighbors based on a softmax of their similarities and aggregate probability mass for each label word across all its occurrences in the retrieved targets:
\begin{equation}
\small
\begin{aligned}
P_{\text{$k$NN{}}}\left(y \mid \bm{q}_t \right) & \propto
\sum_{\left(\bm{c}_i,y_i\right)\in \mathcal{N}} \mathbbm{1}_{y=y_i} \exp\left(d\left( \bm{h}_{\hat{\bm{q}}_t}, \bm{h}_{\hat{\bm{c}}_i}\right)\right).
\label{eq:knnscore}
\end{aligned}
\end{equation}
Given the probability $p_{k\text{NN}}$ of the query instance $\bm{q}_t$ being predicted as the \textbf{gold class}, we propose to retrieve the $k$NN{} for guiding the training process of prompt learning.
The $k$NN{} guider reweights the cross-entropy loss $\mathcal{L}_{CE}$ by adjusting the relative loss for the correctly-classified or misclassified instances identified by $k$NN{}, respectively.
Specifically, we apply the negative log-likelihood as the modulating factor $F(p_{k\text{NN}})$.
The final loss $\mathcal{L}$ is defined as:
\begin{equation}
\label{eq:joint}
\small
F(p_{k\text{NN}}) = - \log{}(p_{k\text{NN}}), \quad
\mathcal{L} = \left(1 + \beta F(p_{k\text{NN}}) \right)\mathcal{L}_{CE},
\end{equation}
where $\beta$ denotes a scalar to determine the proportion of each loss term.
Note that $p_{k\text{NN}}$ is computed using the \emph{leave-one-out} distribution on the training set due to the fact that each example in the training set cannot retrieve itself.
The motivation of the modulating factor is similar to Focal loss~\cite{focal_loss}, while we focus on exploiting $k$NN{} in tuning PLMs.
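Eq.~\ref{eq:joint} amounts to a simple reweighting of the cross-entropy loss; in the sketch below, the loss value, $p_{k\text{NN}}$, and $\beta$ are illustrative numbers.

```python
import math

# The kNN probability of the gold class defines the modulating factor
# F(p) = -log p, which scales the cross-entropy loss so that instances the
# kNN finds hard contribute more to the gradient.

def knn_guided_loss(ce_loss, p_knn_gold, beta=0.5):
    factor = -math.log(p_knn_gold)          # F(p_kNN)
    return (1.0 + beta * factor) * ce_loss  # hard examples get up-weighted

easy = knn_guided_loss(0.3, p_knn_gold=0.9)  # kNN confident → small factor
hard = knn_guided_loss(0.3, p_knn_gold=0.1)  # kNN struggles → large factor
print(hard > easy)  # → True
```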
\subsection{$k$NN{} based probability for \textit{Cloze-style} Prediction}
\label{sec:knn-test}
Apart from the neural demonstration on the input side and the $k$NN{}-guided training process, we further present a $k$NN{}-based probability for \textit{cloze-style} prediction during inference (abbreviated as $k$NN{}-test), providing the PLM with the ability to retrieve nearest neighbors for decisions rather than making predictions based only on memorized parameters.
Given the non-parametric $k$ nearest neighbor distribution $P_{k\text{NN}}$ of the query instance $\bm{q}_t$ being predicted as $y$, the $P(y\mid \bm{q}_t)$ is reformulated by interpolating the $P_{k\text{NN}}$ with the already-trained base PLM's MLM prediction $P_\mathcal{M}$ using parameter $\lambda$ to produce the final probability of the label:
\begin{equation}
\begin{aligned}
\label{eq:lambda}
P(y \mid \bm{q}_t)=\lambda P_{k\mathrm{NN}}(y \mid \bm{q}_t)
+(1-\lambda) g\left( P_\mathcal{M} ({\text{\tt [MASK]}} = v|\mathcal{T}(\bm{q}_t)) \right) .
\end{aligned}
\end{equation}
Different from $k$NN-LM~\cite{he2021efficient} that uses tokens to augment the language modeling directly, we explicitly take advantage of prompt-based instance representation for classification tasks, which is more deeply rooted in prompt learning.
In this way, we can unlock the model prediction process as an {\it open-book} examination.
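Eq.~\ref{eq:lambda} is a linear interpolation of two distributions over label words; a minimal sketch with illustrative distributions follows.

```python
# Interpolate the non-parametric kNN distribution with the (verbalized)
# MLM prediction; lambda controls how much the model "consults the book".

def interpolate(p_knn, p_model, lam=0.3):
    labels = set(p_knn) | set(p_model)
    return {y: lam * p_knn.get(y, 0.0) + (1 - lam) * p_model.get(y, 0.0) for y in labels}

p_knn = {"great": 0.8, "terrible": 0.2}     # from the retrieved neighbors
p_model = {"great": 0.4, "terrible": 0.6}   # from the prompt-based MLM head
p = interpolate(p_knn, p_model, lam=0.3)
print(max(p, key=p.get))  # → great
```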
\section{Related Work}
\vspace{-0.2cm}
\textbf{Retrieval-enhanced PLMs.}\quad
Our pipeline is partly inspired by discrete demonstration methods such as~\cite{gao2020making,DBLP:journals/corr/abs-2101-06804,DBLP:journals/corr/abs-2112-08633,kumar-talukdar-2021-reordering,le2021demoner} that retrieve a few training examples into a natural language prompt, while we propose neural demonstrations for enhancing the input to alleviate the limitation of input length.
Another line of research on retrieval augmentation~\cite{DBLP:journals/corr/abs-2002-08909,DBLP:conf/emnlp/KarpukhinOMLWEC20,DBLP:conf/nips/LewisPPPKGKLYR020} retrieves useful information from an external knowledge corpus (e.g., Wikipedia) for a particular task (e.g., open-domain question answering). Unlike these works, we focus on retrieving examples from the internal training data.
Besides, semi-parametric methods~\cite{DBLP:conf/iclr/KhandelwalLJZL20,he2021efficient,DBLP:conf/iclr/KhandelwalFJZL21,DBLP:conf/emnlp/KassnerS20,alon2022neurosymbolic,meng2021gnnlm} have risen to leverage $k$-nearest neighbor classifier that makes the prediction based on representation similarities, to enhance pre-trained language models.
However, unlike these models using nearest neighbors only for augmenting the process of prediction, we aim to develop a comprehensive retrieval mechanism for input, training and test process.
\textbf{Prompt learning for PLMs.}\quad
With the birth of GPT-3~\cite{DBLP:conf/nips/BrownMRSKDNSSAA20},
prompt learning~\cite{liu2021pre} has recently arisen to fill the gap between the masked-LM objective of PLMs and downstream fine-tuning objectives.
Prompt learning has achieved very impressive performance on various tasks~\cite{schick2020automatically,shin2020eliciting,lightner,ma21template,ptr,chen21knowprompt}, especially under the few-shot learning setting.
Moreover, continuous prompts have also been proposed~\cite{li2021prefix,lester2021power,liu2021gpt} to reduce prompt engineering, which directly appends a series of learnable continuous embeddings as prompts into the input sequence.
Our work is orthogonal to previous prompt-learning approaches, which aim to optimize prompts; we focus instead on a systematic study of retrieving related examples from the training data to enhance prompt learning.
\section{Introduction}
\label{sec:intro}
Understanding the mechanisms of star formation at high redshifts is
central to our knowledge of how galaxies formed and subsequently
evolved chemically. This is specially true at $z\sim 2$ when the
cosmic star-formation activity was highest \citep{2014ARA&A..52..415M}.
Stars form out of cold gas, metals and dust in molecular clouds
\citep[e.g.][]{2006ARA&A..44..367S} in the interstellar medium (ISM)
of galaxies. In turn, the
radiative and mechanical feedbacks from stars have a strong impact on the
physical state of the ISM. Studying the ISM at high redshifts, and in particular
deriving the physical properties of the diffuse molecular phase in galaxies, is
therefore crucial for understanding how stars formed in the early Universe.
The best way to derive physical properties accurately is to detect the
tracers of the cold gas in absorption \citep[see][]{2014A&A...566A.112M}.
The neutral, shielded and possibly cold gas clouds at high redshifts can
be searched for in the radio domain by targeting the
neutral atomic-hydrogen (\ion{H}{i}) 21-cm absorption
line \citep[e.g.,][]{2009MNRAS.398..201G}. However, systematic - blind -
surveys have to await the increased sensitivity of new facilities such as
MeerKAT/SKA and ASKAP \citep{2009arXiv0910.2935B,2012MNRAS.426.3385D}.
On the other hand, such a gas can be efficiently traced in the optical wave-bands by
detecting the redshifted \ion{H}{i} damped Lyman-$\alpha$
\citep{2005ApJ...635..123P,2009A&A...505.1087N,2012A&A...547L...1N}
and/or strong \ion{Mg}{ii}
\citep[e.g.,][]{2011AJ....141..137Q,2011MNRAS.416.1871B} lines
imprinted in the spectra of bright enough background sources such as QSOs
or the rapidly fading $\gamma$-ray burst (GRB) afterglows \citep[for
the latter, see][and references therein]{2009ApJS..185..526F}.
Damped Lyman-$\alpha$ systems (hereafter DLAs) observed in QSO spectra
have column densities of $N(\ion{H}{i})\ge 2\times
10^{20}$ atoms cm$^{-2}$ and are known to contain most of the neutral gas
in the Universe in the redshift range $0<z<5$ \citep[see][for a
review]{2005ARA&A..43..861W}. It has been shown however that DLAs
typically probe warm ($T\ga 3000$~K) and diffuse
($n_\mathrm{H}<1$~cm$^{-3}$) neutral gas
\citep[e.g.,][]{2000A&A...364L..26P,2012MNRAS.421..651S}. The
metallicity of DLAs is generically low, i.e., on an average about
1/30$^{\rm th}$ of Solar
\citep{2006fdg..conf..319P,2012ApJ...755...89R} and their dust-to-gas
ratio is typically less than one-tenth of what is observed in the Galactic ISM
\citep[e.g.,][]{2008A&A...478..701V}. This probably explains the low
detection rates of molecular hydrogen
(H$_2$) in DLAs where only about 10\% of the QSO lines-of-sight
intercept H$_2$-bearing gas down to a limit of $N($H$_2)\sim 10^{14}$ molecules
cm$^{-2}$ (e.g., \citealt{2008A&A...481..327N,2014arXiv1402.2672B};
for searches for H$_2$ in DLAs originating from the host galaxies of
GRBs, see \citealt{2009A&A...506..661L,2013A&A...557A..18K}).
Based on the observed correlation between metallicity and dust depletion
in DLAs \citep{2003MNRAS.346..209L}, DLAs with high metallicity are
expected to contain more dust and therefore to exhibit larger H$_2$
fractions \citep[see][]{2006A&A...456L...9P}. However, even in
DLAs with the highest metallicities typical dust signatures like
reddening of the background QSOs, the 2175~\AA\ extinction feature
(hereafter also called ultra-violet [UV] bump), or diffuse interstellar bands, are
not apparent. Even in the rare cases with H$_2$ detections, the inferred
molecular fractions are low and typical of what is seen in Galactic diffuse atomic gas
with $f($H$_2)\equiv 2N($H$_2)/[2N($H$_2)+N(\ion{H}{i})]\la 0.01$ and often
much lower than this \citep[see][]{2008A&A...481..327N}. The primary
reason for this is that the cold and dusty phases are missed probably
because of their reduced cross-sections relative to that of the more
pervasive warm neutral ISM \citep{2006ApJ...643..675Z}. Direct
evidence for the relatively small physical sizes ($\la$~0.15~pc) of
H$_2$-detected clouds in DLAs recently came from the observation of partial
coverage of the QSO broad-line emitting region
(\citealt{2011MNRAS.418..357B,2015MNRAS.448..280K}).
H$_2$-detected clouds in DLAs are found to have kinetic
temperatures in the range $T\sim 70-200$~K and particle densities
$n_\mathrm{H}\sim 1-100$~cm$^{-3}$
\citep[e.g.,][]{2005MNRAS.362..549S}. When detected, H$_2$ is usually
coincident with neutral atomic carbon \citep[\ion{C}{i}; see
also][]{1999ASPC..156..121G}. This is due to the fact that the
ionization potential of neutral carbon ($11.26$~eV) is similar
to the average energy of the Lyman-Werner photons that dissociate H$_2$.
Therefore, shielding of UV photons is essential for these species to remain
at detectable levels. Carbon
monoxide (CO) has long escaped detection even in DLAs with detected
H$_2$, down to $N($CO$)\sim 10^{12}$ molecules cm$^{-2}$ \citep[see,
e.g.,][]{2002MNRAS.332..383P}. This is not surprising since CO, with a
dissociation energy of $11.09$~eV, needs to be even more shielded than
H$_2$ and \ion{C}{i} to be detected. After CO UV absorption bands were detected
for the first time at high redshift, in a sub-DLA towards the QSO
SDSS~J\,143912.05$+$111740.6 \citep{2008A&A...482L..39S}, it became
clear that the best place to detect CO in absorption at high redshift
are the systems with strong \ion{C}{i} absorption. Following this strategy
allowed us to detect carbon monoxide subsequently in five additional systems
\citep{2009A&A...503..765N,2010A&A...523A..80N,2011A&A...526L...7N}.
In Galactic translucent interstellar clouds, CO starts to be produced
in significant amounts when neutral atomic carbon becomes the dominant
carbon species and a large fraction of hydrogen turns molecular
\citep{2006ARA&A..44..367S}. The strength of the \ion{C}{i} absorption
is expected to be such that it could be detected even in a low resolution
spectrum. We therefore embarked on a systematic search for
\ion{C}{i} absorption in QSO spectra from the SDSS-II - Data Release seven
(hereafter DR\,7) - database. In this paper, we present the results of this
search and the basic properties of the detected \ion{C}{i} absorbers.
Note that in the following we will refer interchangeably to QSO absorption-line systems detected
through \ion{C}{i} absorption as ``\ion{C}{i} systems'' or ``\ion{C}{i} absorbers''.
In Sect.~\ref{sec:identification}, we describe our selection and identification of
\ion{C}{i} absorbers. We discuss the properties of the sample in terms of intervening
\ion{C}{i}-absorber number per unit redshift, proximate systems, and \ion{C}{i}
rest-frame equivalent widths, in Sects.~\ref{sec:nz}, \ref{sec:prox} and \ref{sec:W},
respectively. We then assess the impact of the \ion{C}{i} absorbers on their respective
background QSOs both from the observed QSO optical
colours (Sect.~\ref{sec:colours}) and the reddening these systems induce on the QSO
spectral energy distributions (Sect.~\ref{sec:ebv}). In Sect.~\ref{sec:nhi}, we present
the \ion{H}{i} column-density distribution of the \ion{C}{i} systems
from VLT/UVES spectroscopy. In Sect.~\ref{sec:relations}, we investigate
empirical relations between neutral atomic-carbon and neutral atomic-hydrogen
contents, QSO reddening and the strength of possible 2175~\AA\ extinction features
(whose measurements are described in Sect.~\ref{sec:abump}). We summarise our
findings and conclude in Sect.~\ref{sec:conclusions}.
Throughout this paper, we assume a standard $\Lambda$ cold dark-matter cosmology
with $H_0=70$~km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_\Lambda=0.7$
and $\Omega_\mathrm{M}=0.3$.
\section{\ion{C}{i} absorption-line selection and identification}
\label{sec:identification}
We systematically searched for \ion{C}{i} absorption lines in high-redshift QSO
spectra from the Sloan Digital Sky Survey \citep{2000AJ....120.1579Y} -- DR\,7 \citep{2009ApJS..182..543A} -- quasar catalogue
\citep{2010AJ....139.2360S}. This survey imposed an i-band magnitude cut
of 19.1 for QSO candidates whose colours indicate a probable redshift smaller
than $\sim 3$. The spectra cover the wavelength range 3800--9200~\AA\ at
a resolving power $R\sim 2000$.
We implemented a dedicated IDL procedure to detect and identify absorption-line
features in SDSS QSO spectra automatically. Since SDSS spectra are log-lambda
binned, the pixels have constant velocity size ($\approx 69$~km\,s$^{-1}$).
This makes it straightforward to cross-correlate the spectra with an emission- or
absorption-line template. We
used the method introduced by \citet{2010MNRAS.403..906N} to search for
[\ion{O}{iii}]\,$\lambda\lambda$4959,5007 emission and \ion{Mg}{ii}
absorption lines. We first normalized the spectra iteratively using Savitzky-Golay filtering.
This consists of smoothing the spectra by convolving them with
a Savitsky-Golay kernel that preserves the sharp QSO emission-line peaks but ignores
narrow features such as metal absorption lines, bad CCD pixels and sky emission-line
residuals. Deviant pixels and their neighbours are then masked out and the
resulting data is convolved again in the same way, and so on and so forth. This procedure has the
major advantage as no {\sl a priori} assumption is required about the functional
form of the QSO continuum (i.e., power law or other) and in addition it is computationally
extremely fast.
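A minimal Python sketch of such an iterative normalization follows (SciPy's `savgol_filter` stands in for the original IDL implementation; the window length, polynomial order, clipping threshold and iteration count are illustrative assumptions, not the survey's actual parameters):

```python
import numpy as np
from scipy.signal import savgol_filter

def normalize_continuum(flux, window=101, order=3, kappa=3.0, n_iter=5):
    """Iteratively estimate a smooth continuum: smooth the spectrum,
    flag pixels deviating by more than kappa*sigma (absorption lines,
    bad pixels, sky residuals) together with their neighbours,
    replace them by the current continuum estimate, and repeat."""
    work = np.asarray(flux, dtype=float).copy()
    for _ in range(n_iter):
        cont = savgol_filter(work, window_length=window, polyorder=order)
        resid = work - cont
        bad = np.abs(resid) > kappa * np.std(resid)
        # mask deviant pixels and their immediate neighbours
        bad = bad | np.roll(bad, 1) | np.roll(bad, -1)
        work[bad] = cont[bad]
    return flux / cont
```

Because narrow features are clipped before each re-smoothing pass, the fitted continuum tracks the broad QSO emission-line peaks while ignoring metal absorption lines, exactly the behaviour described above.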
We then cross-correlated the normalised spectra with a synthetic profile
of \ion{C}{i}\,$\lambda\lambda$1560,1656 absorption lines. We looked for the positive
correlation signal together with peak absorptions detected at more than $2\sigma$
and $2.5\sigma$, respectively, and differing by less than a factor of three.
The simultaneous detections of the \ion{Si}{ii}\,$\lambda$1526
and \ion{Al}{ii}\,$\lambda$1670 absorption lines were required to support the
identifications of the two features as \ion{C}{i} and hence minimize the probability of
chance coincidence. Spurious detections ($\sim 50$\%), most
of them close to the detection limit, were identified visually and removed from
the sample. In total, we find 66 systems, one of which is shown in
Fig.~\ref{fig:discovery}.
The search for \ion{C}{i} lines was limited to the regions of the spectra
redwards of the QSO Lyman-$\alpha$ emission line to avoid the spurious coincidences
that are frequent in the
Lyman-$\alpha$ forest. The wavelength range above 7200~\AA\ was also not considered
in the search to avoid regions of the spectra heavily affected by residuals resulting
from the sky emission-line subtraction.
We required that the search window encompass the wavelengths of
the \ion{Si}{ii}\,$\lambda$1526 and \ion{Al}{ii}\,$\lambda$1670 absorption lines
of the putative \ion{C}{i} systems so that the validity of a system does not rely
solely on the detection of two transitions (see above). For a given line-of-sight, the
redshift lower bound ($z_\mathrm{min}$) of the \ion{C}{i} search is therefore
the largest value between $z=3820/1526-1\simeq 1.50$ and
$(1+z_\mathrm{em})\times 1215/1526-1\simeq 0.8\times z_\mathrm{em}-0.2$,
where $z_\mathrm{em}$ is the QSO emission redshift. The redshift upper bound
($z_\mathrm{max}$) of the search is the smallest value between
$z_\mathrm{em}+0.1$ (to not exclude a priori proximate systems with infalling velocities
of up to $+5000$~km\,s$^{-1}$) and $z=7200/1656-1\simeq 3.35$. In order to avoid
too many false positives at low signal-to-noise (S/N) ratio, we required the median
S/N ratio per pixel to be larger than four for a given spectrum to be scanned.
This resulted in a sample of 41\,696 QSOs with $1.5<z_\mathrm{em}<4.46$ whose spectra
were searched for intervening or proximate \ion{C}{i} absorbers. Note that we did not
initially reject Broad Absorption-Line (BAL) quasars because our procedure follows the QSO continuum locally
and can detect narrow absorption lines embedded in broad and not-fully saturated troughs.
Regions of deep absorption are de facto avoided when we study the number of
intervening \ion{C}{i} absorbers per unit redshift in Sect.~\ref{sec:nz} as they have
low S/N ratio per pixel.
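The redshift bounds defined above translate directly into code; the following is a one-to-one transcription of those bounds (the function name is ours):

```python
def ci_search_window(z_em):
    """[z_min, z_max] search window for C I along one line-of-sight:
    Si II 1526 must lie above the blue spectral limit, the C I lines
    redwards of the Ly-alpha forest, and C I 1656 below 7200 A."""
    z_min = max(3820.0 / 1526.0 - 1.0,                 # ~1.50: blue limit
                (1.0 + z_em) * 1215.0 / 1526.0 - 1.0)  # redwards of Ly-a forest
    z_max = min(z_em + 0.1,                # keep proximate/infalling systems
                7200.0 / 1656.0 - 1.0)     # ~3.35: sky-residual limit
    return z_min, z_max
```

For low-redshift QSOs the blue spectral limit sets $z_\mathrm{min}$, while for $z_\mathrm{em}\ga 2.1$ the Lyman-$\alpha$ forest bound takes over, as the max/min structure makes explicit.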
Table~\ref{tab:ci} summarises our \ion{C}{i} sample which we refer to
in the following as the overall sample. QSO names with J\,2000
coordinates are given for each absorber. No line-of-sight is found to
feature more than one system\footnote{Note however that there is a second \ion{C}{i}
system towards SDSS~J\,234023.67$-$005327.1, which, with
$z_\mathrm{abs}=1.36$, falls below the redshift cut-off of our survey. This
system happens to be detected in 21-cm absorption hence is also related to cold gas
\citep[see][]{2009MNRAS.398..201G,2010ApJ...712L.148K}.}.
The SDSS plate and fibre numbers as well as the MJD are also provided in
the table as useful
cross-references. The QSO emission redshifts derived by the SDSS team
are indicated together with the absorption redshifts and rest-frame equivalent
widths of the \ion{C}{i} lines. The latter were carefully determined by us
for each individual system. In the next column of the table, we specify the
average S/N ratios per pixel in the regions the two \ion{C}{i} lines are
located. At $z_\mathrm{abs}>2.2$, the Lyman-$\alpha$ line of the
systems is also covered by the SDSS spectra. We therefore provide in
the table a determination of the total neutral atomic-hydrogen column
density of these systems following the method developed by
\citet{2009A&A...505.1087N}. The reliability of the latter is
confirmed in a number of cases with follow-up high-resolution VLT/UVES spectroscopy
(see the last column of Table~\ref{tab:ci}, and Sect.~\ref{sec:nhi}).
This column also has additional $N($\ion{H}{i}$)$ measurements
for several low-redshift systems for which Lyman-$\alpha$ absorption
is not covered in the SDSS spectrum.
\section{Sample properties}
It must be noted that in this survey \ion{C}{i} systems are found without
any presumption about the presence of neutral atomic hydrogen, i.e., the \ion{C}{i} systems
found in this work do not necessarily have to be DLAs. Moreover, DLA absorbers
can be observed in SDSS spectra only when their redshifts are larger than 2.2 while
\ion{C}{i} lines can be identified down to $z_\mathrm{abs}\simeq 1.50$.
As stated before, the reality of the identified \ion{C}{i} systems in our sample was
checked by visual inspection. Therefore, we believe the \ion{C}{i} detections are
secure. We discuss the completeness of the survey in Sect.~\ref{sec:comp}.
\subsection{Line equivalent widths}
\label{sec:W}
Because we will rely on them in the analysis, we here seek to verify the robustness and accuracy of the equivalent-width measurements of \ion{C}{i}-absorption lines performed in SDSS spectra. For this purpose, we plot in Fig.~\ref{fig:ew} the measured rest-frame equivalent widths of the \ion{C}{i}\,$\lambda\lambda 1560,1656$ lines versus each other. It appears that except in one case the strengths of the two lines are in the expected
range. This gives confidence in the derived values and their associated uncertainties. In the case of the outlier seen in the lower part of the plot (i.e., at
$z_\mathrm{abs} =1.526$ towards SDSS~J\,125552.60$+$223424.4), an unrelated blend to
the $\lambda$1656 line is a probable reason for the observed deviation.
All the systems are located within about $2\sigma$ of the boundaries defined by the optically-thin regime on one hand and the relation expected for heavily saturated profiles on the other hand. We note that because of their large equivalent widths ($W_\mathrm{r}\ga 0.4$~\AA) most of the absorbers, especially those with equivalent-width ratios consistent with the optically-thin regime, are probably made of numerous velocity components.
Here, we assumed that the \ion{C}{i} ground-state is solely responsible for the absorption lines while in reality the absorption from the two fine-structure energy levels of the neutral-carbon ground state ($^3$P$_1$ and $^3$P$_2$) could in principle contribute mildly to the measured equivalent widths. However, this will affect the equivalent widths of both of the $\lambda 1560$ and $\lambda 1656$ lines in the same way so that any departure from the assumed relations due to this blending will be small.
\subsection{Completeness}
\label{sec:comp}
Before discussing the number of \ion{C}{i} absorbers per unit redshift ($n_\mathrm{\ion{C}{i}}$;
see the following section), we first need to estimate the completeness of the sample. Given the resolving
power $R$ of the SDSS spectra, the \ion{C}{i}\,$\lambda 1560$ line rest-frame equivalent width limit is
given by:
\begin{equation}\label{eq:ewl}
W_\mathrm{r,lim}(\lambda 1560)\simeq n\times\frac{1560}{R}\times\mathrm{S/N}^{-1}
\end{equation}
where $n=2$ is the number of standard deviations above which the peak absorption must be
detected and $\mathrm{S/N}>4$ is the limit on the signal-to-noise ratio per pixel at the
corresponding line position. Note that the FWHM of the lines is sampled by two velocity pixels of
constant value. Our survey should therefore be complete down to
$W_\mathrm{r,lim}(\lambda 1560)\simeq 0.4$~\AA.
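As a quick numerical check of the equivalent-width limit with the survey's values:

```python
n = 2        # detection threshold in standard deviations
R = 2000     # SDSS resolving power
snr = 4      # minimum S/N ratio per pixel

# rest-frame equivalent-width limit in Angstrom:
w_lim = n * (1560.0 / R) / snr
# 2 * 0.78 / 4 = 0.39 A, i.e. the ~0.4 A completeness limit quoted above
```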
We checked the exact level of completeness of our survey at this equivalent-width limit by
implementing the following procedure. For this purpose, we used the same data set that
we used for the calculation of
$n_\mathrm{\ion{C}{i}}$ in Sect.~\ref{sec:nz}, i.e., the same quasar sample,
the same [$z_\mathrm{min}$,$z_\mathrm{max}$] values and the same mean $\mathrm{S/N}>4$
limit. We then randomly selected 1000 QSO spectra and introduced an artificial \ion{C}{i} system
of rest-frame equivalent width $W_\mathrm{r}(\lambda 1560)$ at different positions in
the spectra where the local S/N ratio at both \ion{C}{i} lines is larger than four. The
distribution of the equivalent width ratio of the $\lambda$1560 and $\lambda$1656 lines is assumed to
be a normal distribution with a dispersion corresponding to what is seen in Fig.~\ref{fig:ew}.
Note however that neither the equivalent width ratio nor the exact number of artificial systems
used in the simulation has any significant impact on the completeness we infer.
We implemented about 40\,000 \ion{C}{i} systems that we sought to recover by using
the same automatic procedure described in Sect.~\ref{sec:identification}. We
varied $W_\mathrm{r}$ over the range 0.1-1.0~\AA\ and defined the completeness as the ratio
of the number of recovered systems to the total number of systems introduced in the spectra.
The results are displayed in Fig.~\ref{fig:CIcomp}. It can be seen that the completeness is
larger than 80\%
for $W_\mathrm{r}(\lambda 1560)\ge 0.4$~\AA.
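The injection-recovery estimate described above can be sketched schematically as follows. Here `inject` and `detect` are placeholders for the synthetic-doublet insertion and the cross-correlation pipeline of Sect.~2; the loop structure, not the physics, is what this sketch conveys:

```python
import numpy as np

rng = np.random.default_rng(0)

def completeness(w_r, spectra, inject, detect, n_trials=1000):
    """Fraction of artificial C I systems of rest-frame equivalent
    width w_r that the detection pipeline recovers."""
    recovered = 0
    for _ in range(n_trials):
        spec = spectra[rng.integers(len(spectra))]  # random sightline
        z = rng.uniform(1.5, 3.35)                  # random injection redshift
        trial = inject(spec, z, w_r)                # add the synthetic doublet
        recovered += bool(detect(trial, z))         # run the search pipeline
    return recovered / n_trials
```

Scanning `w_r` over 0.1--1.0~\AA\ and plotting the recovered fraction reproduces a completeness curve of the kind shown in the figure referenced above.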
\subsection{Number of absorbers per unit redshift}
\label{sec:nz}
We calculated the sensitivity function, $g(z)$, of our survey, i.e., the number of
lines-of-sight probing a given redshift $z$ and having $\mathrm{S/N}>4$ at the expected positions of
both \ion{C}{i} lines. This function is shown in Fig.~\ref{fig:newgz}. It
combines together the [$z_\mathrm{min}$,$z_\mathrm{max}$] pairs, previously defined
in Sect.~\ref{sec:identification}, for all the lines-of-sight. We further excluded
the regions with velocities relative to the QSO emission redshifts smaller than
$5000$~km\,s$^{-1}$, which could, in principle, be influenced by the quasar (see
Sect.~\ref{sec:prox}). Note that the uncertainties on $z_\mathrm{em}$ are of the order of
$500$~km\,s$^{-1}$ \citep[see, e.g.,][]{2012A&A...548A..66P} and therefore are small
enough not to affect the statistics. The total statistical absorption path length probed by the
QSO sample over $z=1.50-3.35$ is $\Delta z\approx 13\,000$ with an average redshift
$\avg{z}=1.9$. From Fig.~\ref{fig:newgz}, it is apparent that the sensitivity of the
survey is an increasing function towards lower redshifts. One therefore expects a
larger number of absorbers to be found at $z<2$. This is what is observed in practice as
indicated by the redshift histogram of the detected intervening \ion{C}{i} absorbers
over-plotted on the same figure. On the other hand, the small number of \ion{C}{i}
systems found at $z_\mathrm{abs}>2.2$ (i.e., eight systems out of
a total of 66 systems or, equivalently, 12\% of the sample) is striking.
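Given the per-sightline $[z_\mathrm{min}, z_\mathrm{max}]$ pairs, the sensitivity function and the total absorption path length can be computed as follows (a minimal sketch; variable names are ours):

```python
import numpy as np

def sensitivity(z_windows, z_grid):
    """g(z): number of lines-of-sight whose [z_min, z_max] search
    window covers each redshift in z_grid; the total absorption
    path length Delta z is the integral of g(z) over redshift."""
    g = np.zeros_like(z_grid)
    for z_lo, z_hi in z_windows:
        g += (z_grid >= z_lo) & (z_grid <= z_hi)
    # trapezoidal integration of g over the redshift grid
    delta_z = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z_grid))
    return g, delta_z
```

Summing the indicator functions of all $\sim$41\,696 windows in this way yields the $g(z)$ curve and the $\Delta z\approx 13\,000$ quoted above.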
To investigate this further, we calculated the number of intervening \ion{C}{i} absorbers
per unit redshift, $n_\mathrm{\ion{C}{i}}$, in two redshift bins of roughly equal total
absorption path length (with boundary redshift $z = 1.9$). Here, we only considered the systems with
rest-frame equivalent widths above the completeness limit of the
survey, i.e., $W_\mathrm{r}(\lambda 1560)\ge 0.4$~\AA. The results are
summarised in Table~\ref{tab:dNdz} and shown
in Fig.~\ref{fig:dndz}. In the table, we also separated
the strongest from the weaker absorbers (around a median rest-frame equivalent width of
0.64~\AA) but there is no obvious difference between the redshift evolution of these two groups.
The $n_\mathrm{\ion{C}{i}}\sim 1.4\times 10^{-3}$ we measure in the higher redshift
bin ($1.9<z_\mathrm{abs}<3.35$),
taking into account the effect of incompleteness estimated in Sect.~\ref{sec:comp},
implies that \ion{C}{i} systems with $W_\mathrm{r}(\lambda 1560)\ge 0.4$~\AA\
are more than one hundred-times
rarer than DLAs at $z_\mathrm{abs}=2.5$ \citep[see][]{2012A&A...547L...1N}.
An evolution of $n_\mathrm{\ion{C}{i}}$, with nearly three times as many systems
below $z=1.9$ as above, is also observed. Compared to the redshift behaviour of
a non-evolving population, this is significant at the
$4.3\sigma$ level. Such an evolution is interesting and should be studied further as
it depends on the balance between dust shielding and the UV radiation field. This may
imply a strong evolution of the shielding of 10~eV photons by dust between $z=2.5$
and $z=1.5$.
\subsection{Proximate systems}
\label{sec:prox}
There are 14 \ion{C}{i} systems with velocities relative to the QSO
emission redshifts smaller than $5000$~km\,s$^{-1}$. These could
be associated with the QSO host galaxy or nearby
environment. Six systems even have absorption redshifts larger than
the corresponding QSO emission redshifts (by up to $\sim
4000$~km\,s$^{-1}$) which is difficult to explain by large peculiar
velocities in intervening systems. Imposing the same data-quality cuts
and minimum equivalent widths as in the previous section, we find that
the incidence of \ion{C}{i} absorbers at small velocity differences
from the quasars is consistent with that of intervening systems.
However, associated errors are large due to small number statistics.
Because of the clustering of galaxies around the massive QSO host
galaxies, an excess of proximate \ion{C}{i} systems could be
expected. However, \ion{C}{i} is a fragile species which can easily be
photo-ionised by the intense UV radiation emitted by the
QSO engine. Interestingly, the lack of a significant excess of proximate
systems was also observed by \citet{2008ApJ...675.1002P} considering DLAs. In
this case, the abundance of proximate DLAs is only a factor of two
larger than that of the overall DLA population. A similarly low
over-abundance factor was observed by \citet{2013A&A...558A.111F} for
strong DLAs with $\log N(\ion{H}{i})>21.3$ (atoms~cm$^{-2}$). This is
much less than what is
expected based on clustering arguments alone.
In this work, we do not observe that the properties of proximate
\ion{C}{i} systems are different from those of intervening \ion{C}{i} systems. This
is true for redshift, \ion{C}{i} equivalent-width, \ion{H}{i} content,
reddening and UV bump-strength distributions. Nevertheless, because of
the possibly different origin of these absorbers and the possible requirement of
strong dust shielding from the nearby QSOs, we discriminate in the following
proximate \ion{C}{i} systems from the rest of the
population and comment, whenever possible, on proximate \ion{C}{i} systems of interest.
\section{Evidence for dust}
\subsection{QSO optical colours}
\label{sec:colours}
In order to assess the impact of the \ion{C}{i} absorbers on their
background QSOs and check for the existence of dust in these systems,
we first consider the observed colours of these QSOs and compare them
with the colours of the overall QSO population used as a control
sample.
In Fig.~\ref{fig:colours}, we show the distributions of $(g-r)$,
$(r-i)$ and $(r-z)$ colours for the 41\,696 QSOs whose spectra were
searched for \ion{C}{i} absorption. In the upper panels of this
figure, it is apparent that the lines-of-sight with detected
\ion{C}{i} absorption do not distribute in the same way as the other
lines-of-sight. They are displaced as a whole towards redder
optical colours compared to the average loci of the QSO redshift
sequences. The effect is most easily seen in the lower panels of
Fig.~\ref{fig:colours}, which compare the colour histograms of the
two QSO populations (i.e., the \ion{C}{i}-detected QSO sample and the
overall QSO sample). The two-sided Kolmogorov-Smirnov test probability
that the two samples are drawn from the same parent distribution is as
small as $\la 10^{-10}$. The typical colour excess is $\sim 0.15$~mag,
i.e., about five times larger than the mean $(r-z)$ colour excess of
0.03~mag derived by \citet{2008A&A...478..701V} in
$z_\mathrm{abs}\approx 2.8$ DLAs from SDSS DR\,5. A similar result for
DLAs was found by \citet{2012MNRAS.419.1028K} based on $(g-i)$ colours
of SDSS-DR\,7 QSOs. This is clear evidence for the presence of dust
among \ion{C}{i} absorbers.
\subsection{Reddening}
\label{sec:ebv}
Motivated by the unequivocal signature of dust in the form of a colour
excess of the background QSOs with detected \ion{C}{i} absorption, we
now aim at constraining the properties and the nature of dust in these
systems.
For each of the 66 QSOs with foreground \ion{C}{i} absorbers, we
derived the QSO reddening, $E($B-V$)$, following the same approach
as used in, e.g., \citet{2008MNRAS.391L..69S} and
\citet{2009A&A...503..765N,2010A&A...523A..80N}. First, we corrected
the QSO spectra for Galactic reddening using the extinction maps from
\citet{1998ApJ...500..525S}. We then fitted the spectra with the SDSS
QSO composite spectrum from \citet{2001AJ....122..549V} shifted to
that QSO emission redshift and reddened with either a Small Magellanic
Cloud (SMC), Large Magellanic Cloud (LMC), LMC2 super-shell or Milky
Way (MW) extinction law \citep{2003ApJ...594..279G} at the
\ion{C}{i}-absorber redshift. Our procedure is illustrated in the left
panel of Fig.~\ref{fig:sed}. The fit with the smallest $\chi^2$ value
indicates the most representative extinction law for a given
absorber. The latter is specified in Col. 'Best fit' of
Table~\ref{tab:sed} and the corresponding $E($B-V$)$ value is given
in the preceding column.
For each QSO line-of-sight exhibiting \ion{C}{i} absorption, we defined
a control sample (hereafter denoted as ``C.S.'') made of SDSS-DR\,7 QSOs
from the searched sample having an emission redshift within $\pm 0.05$
and a $z$-band magnitude within $\pm 0.1$~mag from those of the QSO under
consideration. In some instances, this resulted in a sample of less than 30 QSOs
in which case we increased the above maximum magnitude difference by steps
of 0.01 until the number of QSOs in the control sample reached (or exceeded)
30. We then applied to each QSO spectrum in the control sample the exact
same fitting procedure as described in the previous
paragraph. Table~\ref{tab:sed} lists the number of QSOs and the median reddening
and standard deviation of the distribution of $E($B-V$)$ values in each
control sample (see the upper right panel of Fig.~\ref{fig:sed} for an
illustration). The values given in Table~\ref{tab:sed} correspond to the
most representative extinction law previously determined for that
particular \ion{C}{i}-detected QSO spectrum.
In the left panel of Fig.~\ref{fig:histo}, we show the histogram of
reddening for the sample of 66 QSO lines-of-sight with detected
\ion{C}{i} absorbers compared to the cumulative control sample
(calculated as the sum of the normalized distributions of individual
control samples). As in Sect.~\ref{sec:colours}, an offset between the
two samples is apparent. The mean reddening induced by \ion{C}{i}
systems is 0.065~mag. A tail in the histogram of the
\ion{C}{i}-detected lines-of-sight is observed, with $E($B-V$)$
values up to $\sim 0.3$~mag.
\subsection{The 2175~\AA\ extinction feature}
\label{sec:abump}
A number of \ion{C}{i}-detected QSO spectra are best-matched by an
extinction law exhibiting the absorption feature at rest-frame wavelength 2175~\AA. In order
to measure the strength of this UV bump (denoted $A_\mathrm{bump}$),
we followed a prescription similar to the one used by
\citet{2010ApJ...720..328J} where the observed QSO spectrum was fitted
with the SDSS QSO composite spectrum reddened via a parametrized
pseudo-extinction law made of a smooth component and a Drude
component. However, here we fixed the wavelength and width of the bump
to the Galactic values determined by \citet{2007ApJ...663..320F}. Both
of these quantities indeed show little variation from line-of-sight to
line-of-sight through the Galaxy and the Magellanic Clouds. This then
limits the number of free parameters and prevents the fit from
diverging towards very wide and shallow solutions which could be
non-physical. Indeed, imperfect matching of the observed QSO continuum
by the smooth component is expected due to intrinsic QSO-shape
variations \citep[e.g.,][]{2000PASP..112..537P}.
The fitting process is illustrated in the left panel of
Fig.~\ref{fig:sed}. The shaded area represents the measure of the bump
strength. This is the difference between the above best-fit function
and the same function but considering only its smooth component (i.e.,
with the Drude component set to zero). $A_\mathrm{bump}$ values are
listed for each absorber in Table~\ref{tab:sed}. As previously done
for the determination of reddening (see Sect.~\ref{sec:ebv}), we also
defined a QSO control sample whose measured $A_\mathrm{bump}$
distribution is shown in the lower right panel of Fig.~\ref{fig:sed}
(i.e., for the given QSO emission redshift). Table~\ref{tab:sed} gives
for each control sample the median and standard deviation of this
distribution.
The histogram of bump strengths in the \ion{C}{i}-absorber sample is
displayed in the right panel of Fig.~\ref{fig:histo}. One can see from
this figure that more than a quarter of the \ion{C}{i} systems feature
absorption at 2175~\AA. This strengthens the result from the previous
section that significant reddening of the background QSOs by dust is induced
by some of the \ion{C}{i} absorbers. We will come back to this and quantify the
effect in Sect.~\ref{sec:relations}.
In the following, we shall use the median $E($B-V$)$ and $A_\mathrm{bump}$
values of the control samples, i.e., $\langle E($B-V$)\rangle_\mathrm{C.S.}$
and $\langle A_{\rm bump}\rangle_\mathrm{C.S.}$, to define the net colour excess and
bump strength towards a given \ion{C}{i}-detected QSO line-of-sight:
$E($B-V$)=E($B-V$)_\mathrm{measured}-\langle E($B-V$)\rangle_\mathrm{C.S.}$,
and likewise for $A_\mathrm{bump}$. These zero-point corrections are usually almost
negligible (see Table~\ref{tab:sed}). In addition, the standard deviations of
$E($B-V$)$ and $A_\mathrm{bump}$ values in each control sample provide
an estimate of the uncertainty due to intrinsic QSO-shape
variations \citep[see][]{2000PASP..112..537P} and hence the significance of the
reddening induced by each \ion{C}{i} absorber and the significance of associated
2175~\AA\ absorption, respectively.
\section{\ion{H}{i} content}
\label{sec:nhi}
As part of a spectroscopic campaign which we will describe in a
companion paper, we followed up the \ion{C}{i} absorbers from the
overall sample which are observable from the southern hemisphere using
VLT/UVES. We present in Fig.~\ref{fig:histHI}
the \ion{H}{i} column-density distribution of this
\ion{C}{i}-absorber sub-sample (referred to in the following as the
\ion{H}{i} sub-sample) and compare it with the distribution of $N(\ion{H}{i})$ from
systematic DLA and/or sub-DLA surveys.
We secured \ion{H}{i} column-density measurements for most of the
systems in the overall sample which have a declination of
$\delta <+28\deg$, i.e., 14 out of 16 systems at redshifts
$z_\mathrm{abs}>1.8$ (the two exceptions being the lines-of-sight
towards SDSS~J\,091721.37$+$015448.1 and J\,233633.81$-$105841.5)
and four out of eight systems at $z_\mathrm{abs}\approx 1.75$ (see
Table~\ref{tab:ci}). While \ion{H}{i} column densities derived from UVES
spectroscopy are usually more accurate, the last two columns of
Table~\ref{tab:ci} show that they confirm those derived
directly from SDSS spectra, as demonstrated by the five systems at
$z_\mathrm{abs}>2.2$ where this measurement could be performed with both
datasets. For this reason, we here complement our UVES measurements
with the values we derived using SDSS spectra for the three systems at
$z_\mathrm{abs}>2.2$ for which high-resolution spectroscopic data are
not available because the background QSOs are too far North for the
VLT to observe them. The \ion{H}{i} sub-sample thus comprises a total of 21
systems.
In Fig.~\ref{fig:histHI}, we compare the observed \ion{H}{i} column-density
distribution of \ion{C}{i}-selected absorbers with that of
\ion{H}{i}-selected DLAs (from SDSS DR\,7 as well;
\citealt{2009A&A...505.1087N}). In this figure, we also show the
expected number of sub-DLAs using the fitted distribution function
from \citet{2014MNRAS.438..476P}. We find that a large fraction of the
\ion{C}{i} absorbers have neutral atomic-hydrogen column densities
slightly below the conventional DLA limit ($N(\ion{H}{i})= 2\times 10^{20}$
atoms cm$^{-2}$) and therefore classify as strong sub-DLAs. However,
the fraction of \ion{C}{i} absorbers among sub-DLAs is much smaller than
among DLAs, indicating that efficient shielding is much more difficult
to obtain below the DLA limit. Though rare, the existence of
\ion{C}{i} absorbers with low neutral atomic-hydrogen column densities
supports the presence of dust in these systems. The dust-to-gas ratio
in these systems has to be high enough so that the absorption of UV
photons by dust allows \ion{C}{i} to be present in large amounts.
No \ion{C}{i} system is found with $\log N(\ion{H}{i})<19$ (atoms~cm$^{-2}$).
This is a regime where shielding of UV photons becomes extremely difficult even in the
presence of dust. However, it is possible that such systems are missed
in our search. Indeed, as seen in Fig.~\ref{fig:nhi}, there is a trend
for neutral atomic-hydrogen column density to increase with \ion{C}{i}
equivalent width. The gradually decreasing completeness fraction of
the survey below $W_\mathrm{r}(\lambda 1560)\approx 0.4$~\AA\ (see
Sect.~\ref{sec:identification}) would therefore preclude low
$N(\ion{H}{i})$ systems from appearing in our sample.
Above the DLA limit, where the incompleteness fraction of our survey
is less of an issue, it appears that \ion{C}{i}-selected absorbers
do not follow the statistics of \ion{H}{i}-selected DLAs.
Although the number of \ion{C}{i} systems with $\log N(\ion{H}{i})>20.3$
(atoms~cm$^{-2}$) is small (i.e., only 9 systems), it is apparent that
the overall $N(\ion{H}{i})$ distribution of \ion{C}{i} systems is relatively
flat. A two-sided Kolmogorov-Smirnov test applied to all absorbers
with $\log N(\ion{H}{i})>20.3$ (atoms~cm$^{-2}$) gives a probability of
only 17\% that the
two distributions come from the same parent population (see inset of
Fig.~\ref{fig:histHI}). This could be explained by a larger number of
velocity components in higher \ion{H}{i} column-density gas, thereby increasing the
probability of detecting \ion{C}{i}. Moreover, large amounts of
shielded gas are probably the consequence of the line-of-sight passing
through the absorbing galaxy at small impact parameter, in which case
we can expect the $N(\ion{H}{i})$ distribution to be flatter than that of
the overall DLA population.
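The distribution comparison described above can be sketched with a standard two-sample Kolmogorov-Smirnov test. The arrays below are synthetic stand-ins for the two $\log N(\ion{H}{i})$ samples (the real values are those in the paper's tables), so this is illustrative only.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two log N(HI) samples above the DLA limit;
# the real samples are in the paper's tables, not reproduced here.
log_nhi_ci = rng.uniform(20.3, 22.0, size=9)           # CI-selected, ~flat
log_nhi_dla = 20.3 + rng.exponential(0.35, size=500)   # HI-selected, steep

stat, pval = ks_2samp(log_nhi_ci, log_nhi_dla)
# A low p-value argues against a common parent population;
# the paper quotes a probability of only 17% for the real samples.
```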
The strongest DLA found among the \ion{C}{i} absorbers in the \ion{H}{i}
sub-sample has $N(\ion{H}{i})=10^{21.8}$ atoms cm$^{-2}$. It is
however located at $z_{\rm abs}\approx z_{\rm em}$. Even ignoring
proximate systems, it is yet
surprising that three intervening DLAs with $N(\ion{H}{i})\ge 10^{21}$
atoms cm$^{-2}$ are present in such a small absorber sample. From DLA
statistics alone, the probability of randomly selecting three DLAs
that strong out of a sample of six DLAs is only 6\%.
There is therefore a probable excess of strong DLAs among \ion{C}{i}
absorbers. While the dust content of these systems is significant (see
Sects.~\ref{sec:colours} and \ref{sec:ebv}),
their dust-to-gas ratio must be limited.
Indeed, dust reddening and extinction of the background QSOs will
inevitably reduce the incidence of strong and dusty DLAs in
magnitude-limited QSO samples. This implies that the actual proportion of
strong DLAs among \ion{C}{i} systems in general is likely to be even
higher than found here.
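The quoted 6\% probability can be reproduced with a simple binomial tail calculation. The per-DLA fraction of systems with $N(\ion{H}{i})\ge 10^{21}$ atoms cm$^{-2}$ used below (0.16) is an illustrative assumption chosen to match the order of magnitude, not a value taken from the paper.

```python
from math import comb

def tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

# Per-DLA fraction with log N(HI) >= 21: the 0.16 below is an
# illustrative assumption, not a number from the paper.
p_strong = 0.16
prob = tail_prob(6, 3, p_strong)  # chance of >= 3 strong DLAs out of 6, ~6%
```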
\section{Empirical relations in the sample}
\label{sec:relations}
Based on the results presented in the previous sections, we now study the
existence of empirical relations between the different quantities
measured in this work: neutral atomic-carbon and neutral
atomic-hydrogen contents, the reddening \ion{C}{i}-selected absorbers
induce on their background QSOs and the strength of possible 2175~\AA\
extinction features. Because the \ion{C}{i}\,$\lambda$1560 transition
line is weaker and hence exhibits less saturation than
\ion{C}{i}\,$\lambda$1656, we adopt the equivalent width of the former
as a proxy for the amount of neutral atomic carbon in the systems.
In Fig.~\ref{fig:nhi}, we plot the \ion{C}{i}\,$\lambda$1560
rest-frame equivalent width versus $\log N(\ion{H}{i})$ for the
\ion{C}{i} absorbers from the \ion{H}{i} sub-sample. Both quantities appear
to be weakly correlated. A Kendall rank-correlation test indicates
the significance of the correlation to be $1.8\sigma$ only. There is
therefore a tendency for strong DLAs to have larger \ion{C}{i}
equivalent widths but at the same time, for a given \ion{H}{i} column
density, the \ion{C}{i} content can vary substantially from one system
to another. Large values of $W_\mathrm{r}(\lambda 1560)$ are
observed in DLAs but also in sub-DLAs. The fraction of shielded and
probably cold gas could actually be large in some of these sub-DLAs.
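A rank-correlation test of the kind quoted above can be sketched as follows. The data are synthetic stand-ins for $W_\mathrm{r}(\lambda 1560)$ and $\log N(\ion{H}{i})$ (the trend strength is invented for illustration); the conversion of the two-sided $p$-value to a Gaussian-equivalent significance is the usual one.

```python
import numpy as np
from scipy.stats import kendalltau, norm

rng = np.random.default_rng(1)

# Synthetic stand-ins for log N(HI) and W_r(1560) with a weak trend
# (slope and scatter are illustrative assumptions).
log_nhi = rng.uniform(19.5, 22.0, size=21)
wr_1560 = 0.15 * log_nhi + rng.normal(0.0, 0.35, size=21)

tau, pval = kendalltau(wr_1560, log_nhi)
sigma = norm.isf(pval / 2.0)  # Gaussian-equivalent significance
```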
From the optically-thin approximation applied to the
\ion{C}{i}\,$\lambda$1560 absorption line and assuming the ionization
equilibrium relation, $N($\ion{C}{i}$)/N($\ion{C}{ii}$)\sim 0.01$,
valid for the cold neutral medium \citep[CNM; see,
e.g.,][]{2011ApJ...734...65J}, a lower limit on the gas metallicity can be
derived:
\begin{equation}\label{eq:met}
[\mathrm{X}/\mathrm{H}]\ga 18.35+\log\left(\frac{W_\mathrm{r}(\lambda 1560)}{0.01 \times N(\ion{H}{i})}\right)
\end{equation}
For $W_\mathrm{r}(\lambda 1560)=0.4$~\AA\ and $\log N(\ion{H}{i})=20$
(atoms~cm$^{-2}$), the metallicity should be of the order of Solar. More generally,
the dashed and dashed-dotted curves in Fig.~\ref{fig:nhi} were calculated using
the above equation assuming metallicities of one-tenth of Solar and Solar respectively.
Within measurement uncertainties, most of the \ion{C}{i} systems
lie in between these two curves. If the medium
probed by the line-of-sight is a mixture of cold and warm gas, the
metallicity of the systems will be even higher. However, if part of the hydrogen is
in molecular form, the metallicity will be lower. For
the whole \ion{C}{i}-absorber sample, Eq.~\ref{eq:met} implies a
metallicity distribution ranging between [X/H$]=-1.4$ and
metallicities in excess of Solar, with a median value of [X/H$]\approx
-0.5$. This means that the metallicities of \ion{C}{i} absorbers would
on average be at least ten times larger than those of typical
DLAs (for the latter, see, e.g., \citealt{2012ApJ...755...89R}). This should be confirmed by accurate measurements
of metal column densities.
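Eq.~\ref{eq:met} can be evaluated directly. The sketch below reproduces the worked example from the text, $W_\mathrm{r}(\lambda 1560)=0.4$~\AA\ and $\log N(\ion{H}{i})=20$, which indeed gives a roughly Solar floor.

```python
from math import log10

def metallicity_floor(wr_1560_ang, log_nhi):
    """Lower limit on [X/H] from Eq. (met): optically thin CI 1560
    and N(CI)/N(CII) ~ 0.01, as assumed for the CNM in the text."""
    return 18.35 + log10(wr_1560_ang / (0.01 * 10.0**log_nhi))

# Worked example from the text: W_r = 0.4 A at log N(HI) = 20
# gives a floor of roughly Solar metallicity ([X/H] ~ 0).
xh = metallicity_floor(0.4, 20.0)
```

Each dex increase in $N(\ion{H}{i})$ at fixed equivalent width lowers the floor by one dex, which is how the one-tenth Solar and Solar curves in Fig.~\ref{fig:nhi} scale.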
In Fig.~\ref{fig:nhi2}, we display the relation between $E($B-V$)$
and $\log N(\ion{H}{i})$ among the \ion{C}{i} absorbers from the \ion{H}{i}
sub-sample. Here again, the data points are highly scattered. Most of
the systems are associated with low albeit consistently non-zero QSO
reddening. Since most of the $N(\ion{H}{i})$ values are relatively low,
the measured amounts of reddening, with median $E($B-V$)\sim 0.045$, are actually
remarkable. This departs from what is observed in the overall DLA population
where the reddening is usually negligible \citep[see,
e.g.,][]{2008A&A...478..701V,2012MNRAS.419.1028K}. The latter authors
have shown that DLAs at $z_\mathrm{abs}\approx 2.8$ typically induce a
reddening $E($B-V$)\sim 5\times 10^{-3}$~mag. Apart from a few
outliers, most of the \ion{C}{i} systems have reddening properties
consistent with those of the Galactic ISM. This is represented in
Fig.~\ref{fig:nhi2} by the solid line. In the Galaxy, the
reddening induced along a line-of-sight is indeed directly proportional to
the neutral atomic-hydrogen column density, with
$E($B-V$)/N($H$)=1.63\times 10^{-22}$ mag atoms$^{-1}$ cm$^2$
\citep{2012ApJS..199....8G}. Only two \ion{C}{i} systems with large
$N(\ion{H}{i})$ are more consistent with what is seen in typical DLAs and/or
along SMC lines-of-sight, where the above ratio is smaller than in the Galaxy.
Two other \ion{C}{i} systems with low
$N(\ion{H}{i})$ might also deviate from the Galactic relation being
consistent with a ten-times larger ratio. We caution however that the
uncertainties on the reddening measurements are fairly large. If real, this
would imply in the latter systems the existence of a grain chemistry more evolved
than in the Galaxy with a larger fraction of big grains over very small grains
\citep[e.g.,][]{1992ApJ...395..130P}. This is opposite to the trend
observed in the Magellanic Clouds. In such systems, strong
2175~\AA\ absorption is expected. Interestingly, this is what is observed in practice as
these two \ion{C}{i} systems exhibit two of the three strongest $A_{\rm bump}$ values of
the \ion{H}{i} sub-sample.
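The Galactic relation quoted above is simple enough to check numerically. The sketch below recovers the sample's median reddening from the corresponding column density; the value $\log N(\mathrm{H})\approx 20.44$ is simply the inversion of the relation, not a measurement.

```python
# Galactic ratio between reddening and total hydrogen column
# quoted in the text (Gudennavar et al. 2012).
EBV_PER_NH = 1.63e-22  # mag atoms^-1 cm^2

def galactic_ebv(log_nh):
    """E(B-V) expected on the Galactic relation for a column 10**log_nh."""
    return EBV_PER_NH * 10.0**log_nh

# Inverting the relation, the sample's median E(B-V) ~ 0.045 mag
# corresponds to log N(H) ~ 20.44 (illustrative, not a measurement).
ebv_median = galactic_ebv(20.44)
```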
To investigate the characteristics of the \ion{C}{i} absorbers further, we look
in Fig.~\ref{fig:ebv} in more detail at the properties of dust in these systems. In the
left panel of this figure, $E($B-V$)$ and $W_\mathrm{r}(\lambda 1560)$ are
found to be correlated with each other at the $4.4\sigma$ significance level. This
is noteworthy as different degrees of saturation of the \ion{C}{i}\,$\lambda$1560 line
are expected to produce scatter in this relation. This implies that the neutral-carbon
content of the \ion{C}{i} systems is intimately related to the reddening induced along
the line-of-sight or, equivalently, that the amounts of shielded gas and dust are tightly
inter-connected. We also note that two of the three largest $E($B-V$)$ values in this
plot correspond to systems located at $z_{\rm abs}\approx z_{\rm em}$. This may
however be the result of small number statistics as the reddening induced by the other
proximate systems in our sample varies substantially from one system to another.
The relation between $E($B-V$)$ and the UV bump strength, $A_\mathrm{bump}$,
which we previously determined independently from $E($B-V$)$ (see Sect.~\ref{sec:abump}),
is shown in the right panel of Fig.~\ref{fig:ebv}. It can be seen that both quantities
are tightly correlated ($6.0\sigma$). A linear least-squares fit (linear correlation
coefficient of $r=0.77$), taking into account errors in both parameters, gives:
$E($B-V$)\simeq 0.43\times A_\mathrm{bump}$. The 2175~\AA\ extinction feature
is detected at more than $2\sigma$ (95\% confidence level) in about 30\% of
the \ion{C}{i} systems. In such cases, we find $A_\mathrm{bump}\sim 0.4$
and $E($B-V$)\sim 0.2$~mag or, equivalently, $A_{\rm V}\sim 0.6$~mag. These
values are comparable to what \citet{2011MNRAS.416.1871B} have found when
targeting the strongest \ion{Mg}{ii} systems from SDSS DR\,6
[$1<W_\mathrm{r}(\lambda 2796)<5$~\AA] where the 2175~\AA\ absorption is
detected on a statistical basis only \citep[see also][for candidate 2175~\AA\ absorption
in similar systems]{2011ApJ...732..110J}. Interestingly, our measured UV bump
strengths are also comparable to what has been observed along GRB lines-of-sight
at similarly low levels of extinction, e.g., towards GRB\,080605 \citep[see][]{2012ApJ...753...82Z}.
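A fit of the kind described above, linear through the origin with errors on both axes, can be sketched with orthogonal distance regression. The data and error bars below are synthetic stand-ins, not the measured $E($B-V$)$ and $A_\mathrm{bump}$ values.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(2)

# Synthetic stand-ins: E(B-V) = 0.43 * A_bump plus scatter
# (sample size, ranges and error bars are illustrative assumptions).
a_bump = rng.uniform(0.0, 0.6, size=27)
ebv = 0.43 * a_bump + rng.normal(0.0, 0.02, size=27)

# Linear model through the origin, fitted with errors on both axes.
model = odr.Model(lambda beta, x: beta[0] * x)
data = odr.RealData(a_bump, ebv, sx=0.05, sy=0.02)
fit = odr.ODR(data, model, beta0=[0.4]).run()
slope = fit.beta[0]  # near 0.43 for these synthetic data
```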
In the local Universe, 2175~\AA\ absorption can be observed together
with reddening values as low as $\sim 0.2$~mag \citep[see,
e.g.,][]{2007ApJ...663..320F}. Even at such low levels of
reddening, the UV bump is significantly stronger along Galactic
lines-of-sight, i.e., by up to a factor of ten, than in the present
\ion{C}{i} absorber sample and/or through GRB host galaxies. This
discrepancy may be explained if these high-redshift systems probe regions
of the ISM affected by star formation more vigorous than in the Galaxy.
A similar argument was proposed by \citet{2003ApJ...594..279G}
with the aim of explaining the variety of LMC and SMC extinction curves.
In fact, most of the \ion{C}{i} absorbers at the high end of the reddening tail
in our sample are best-fit using the extinction law of the LMC2 super-shell
near the 30~Dor star-forming region\footnote{Super-giant shells with
sizes approaching 1~kpc form the largest structures seen in the ISM
of galaxies where large amounts of kinetic energy are contributed by
multiple supernovae explosions and energetic stellar winds.} (see
left panel of Fig.~\ref{fig:histo}). This means that the far-UV rise
of the extinction curve is enhanced and the carriers of the 2175~\AA\
absorption are depleted compared to Galactic lines-of-sight. This is
probably the consequence of a high UV flux and/or the mechanical
feedback from stars \citep[e.g.,][]{2011A&A...533A.117F} in the
vicinity of the \ion{C}{i} systems.
In contrast, the lack of a UV bump in typical DLAs
\citep[e.g.,][]{2012MNRAS.419.1028K} is probably
intrinsic to their low dust and metal contents as the lines-of-sight are
likely to pass at large impact parameters from the absorbing galaxy.
The main outlier in the right panel of Fig.~\ref{fig:ebv}, which exhibits
high reddening but no UV bump, is a proximate system. This
is consistent with the above picture where the enhanced UV radiation
field from the QSO and/or star-forming regions within the QSO host
galaxy are expected to deplete the carriers of the 2175~\AA\ absorption.
\section{Conclusions}
\label{sec:conclusions}
In this work, we presented a new population of QSO absorbers selected
directly from the properties of the shielded gas, namely the strongest
\ion{C}{i} absorbers, detected in low-resolution QSO spectra from the
SDSS-II DR\,7 database. These \ion{C}{i} absorbers, with
$W_\mathrm{r}(\lambda 1560)\ge 0.4$~\AA, are more than one hundred times rarer
than DLAs at $z_\mathrm{abs}=2.5$. Their number per unit redshift is
increasing significantly below $z_\mathrm{abs}=2$, probably coupled to an increase
in the star-formation efficiency at these
redshifts. \citet{2012A&A...544A..21G} reported a similarly
high detection rate of 21-cm absorbers towards even lower redshifts
among strong \ion{Mg}{ii} systems, which they argued must be related
to the evolution of the CNM filling factor in the latter absorbers.
The \ion{H}{i} column-density distribution of \ion{C}{i}-selected absorbers
is flatter than that of \ion{H}{i}-selected absorbers. While sub-DLAs
have a much larger cross-section than DLAs, this can be understood since
shielding of the gas is more difficult at low \ion{H}{i} column densities and
the number of clouds along the line-of-sight is probably
smaller. Cold and dusty gas as traced by \ion{C}{i} absorbers is also
more likely to be found at small impact parameters from the absorbing
galaxies where a flatter $N(\ion{H}{i})$ distribution is expected. Indeed,
despite a likely bias against strong DLAs with large amounts of dust,
we find there is among \ion{C}{i} systems a probable excess of strong DLAs
with $\log N(\ion{H}{i})>21$ (atoms~cm$^{-2}$) compared to systematic
DLA searches. This is
reminiscent of the $N(\ion{H}{i})$ distribution of DLAs within GRB host
galaxies which is skewed towards extremely strong DLAs
\citep[see fig.~10 in][]{2009ApJS..185..526F}.
The reddening and therefore the presence of dust along the QSO
lines-of-sight with detected \ion{C}{i} absorption is directly related
to the amount of shielded gas but depends weakly on the total \ion{H}{i}
column density. The latter can indeed vary by more than a factor of ten for
the same \ion{C}{i} rest-frame equivalent width. This is probably the
consequence of the shielded gas being clumpy while \ion{H}{i}
absorption samples simultaneously warm diffuse neutral clouds and
cold, high-metallicity dusty pockets of gas. Dust inducing
significant reddening of the background QSOs and/or 2175~\AA\
extinction features is present in about 30\% of the \ion{C}{i}
absorbers. Several such systems have been found before
\citep[see, e.g.,][]{2008MNRAS.391L..69S,2012ApJ...760...42W}. Here, we
find that the UV bump is weak compared to Galactic lines-of-sight
exhibiting the same amount of reddening. We interpret this as being
the consequence of star formation in the vicinity of the systems.
It is likely that the metal and molecular contents of \ion{C}{i}
absorbers are high and actually higher than those of most DLAs studied
until now. High-resolution spectroscopic follow-up observations of the
present sample therefore open the door to systematic searches for
carbon monoxide \citep[CO; see][]{2011A&A...526L...7N} and molecules
like CN and CH as well as diffuse interstellar bands at high
redshift. Such a spectroscopic campaign will be presented in a
companion paper. The typical reddening induced by \ion{C}{i} absorbers
along with the relation between reddening and shielded-gas column
density imply that the extinction could be high in some DLAs with
\ion{C}{i} absorption. If
strong dusty DLAs exist, they probably have been missed in the current
magnitude-limited QSO samples
\citep[see also][]{1998A&A...333..841B,2008A&A...478..701V}.
Some of the QSO lines-of-sight
identified here, as well as those which may be found by extending the
present survey to even larger databases\footnote{Note that from the
Baryon Oscillation Spectroscopic Survey, which is part of SDSS-III,
relatively few additional \ion{C}{i} systems are expected since the bulk
of the new QSOs is at $z_\mathrm{em}\sim 3$, which provides
shorter \ion{C}{i}-absorption path length.
}, will require exceedingly long integration times on
high-resolution spectrographs installed on 8-10\,m class
telescopes. These will however be targets of choice for the coming
generation of Extremely Large Telescopes.
\begin{acknowledgements}
PN acknowledges support from the ESO Chile visiting scientist programme.
RS and PPJ gratefully acknowledge support from the
Indo-French Centre for the Promotion of Advanced Research (Centre
Franco-Indien pour la Promotion de la Recherche Avanc\'ee) under
contract No.~4304-2.
The authors of this paper also acknowledge the tremendous effort put
forth by the Sloan Digital Sky Survey team to produce and release
the SDSS survey. Funding for SDSS and SDSS-II has been provided
by the Alfred P. Sloan Foundation, the Participating Institutions,
the National Science Foundation, the U.S. Department of Energy, the
National Aeronautics and Space Administration, the Japanese
Monbukagakusho, the Max Planck Society, and the Higher Education
Funding Council for England. The SDSS Web Site is
http://www.sdss.org/. The SDSS is managed by the Astrophysical
Research Consortium for the Participating Institutions. The
Participating Institutions are the American Museum of Natural
History, Astrophysical Institute Potsdam, University of Basel,
University of Cambridge, Case Western Reserve University, University
of Chicago, Drexel University, Fermilab, the Institute for Advanced
Study, the Japan Participation Group, Johns Hopkins University, the
Joint Institute for Nuclear Astrophysics, the Kavli Institute for
Particle Astrophysics and Cosmology, the Korean Scientist Group, the
Chinese Academy of Sciences (LAMOST), Los Alamos National
Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the
Max-Planck-Institute for Astrophysics (MPA), New Mexico State
University, Ohio State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United States
Naval Observatory, the University of Washington.
\end{acknowledgements}
\bibliographystyle{aa}
\chapter*{Acknowledgements}
The work behind this thesis lasted three years, one of which I spent abroad: so let me first take the opportunity to thank the Università degli Studi di Roma Tre, which gave me the possibility of pursuing a variety of educational experiences. Besides, I wish to express my gratitude to those who hosted me: the {\it Université de Lyon 1}, where I spent three months (October - December 2015), supported by the A*MIDEX project Hypathie (n. ANR-11-IDEX-0001-02) funded by the {\it "Investissements d'Avenir"} French Government program, managed by the French National Research Agency (ANR), where I had the opportunity to enjoy the very stimulating and exciting atmosphere of the Probability Group of Professor Toninelli; and the {\it Université de Genève} and the {\it SwissMap project}, which funded me to attend, as a visiting student, the wonderful Masterclass in Statistical Physics 2015-2016, from January to June 2016.\\
My deepest and most sincere thanks go to Prof. A. Giuliani, for the trust he placed in me by proposing this problem, and for the interest, availability, and infinite patience with which he supervised me: without his {\it scientific} and {\it human} support, this work would not have been possible.\\
I thank Prof. V. Mastropietro, for the stimulating discussions and for the {\it crucial suggestion} to introduce a {\it non-local boundary counterterm}.\\
I thank Prof. M. Porta, for all the enthusiasm he conveyed to me during my visits to Zurich, however brief.\\
I thank Dr. I. Jauslin, who made my entry into the world of Mathematical Physics and of the RG faster and less traumatic, thanks to long and engaging discussions always approached with interest and passion.\\
I thank Prof. Gallavotti: it is thanks to him, to his advice and to his example, that I took up the path of Mathematical Physics and, in particular, of the Renormalization Group.\\
Finally, I thank Professors M. Correggi, G. Dell'Antonio, G. Panati, and A. Teta for always giving me the opportunity to take part in the many conferences they organized, and for always welcoming with great interest any question or simple curiosity of mine.
\chapter*{Abstract}
Recent years have witnessed an extensive development of the theory of the critical point in two-dimensional statistical systems, which made it possible to prove the {\it existence} and {\it conformal invariance} of the {\it scaling limit} for the two-dimensional Ising model and for dimers on planar graphs. Unfortunately, we are still far from a full understanding of the subject: so far, exact solutions at the lattice level, in particular determinant structure and exact discrete holomorphicity, play a crucial role in the rigorous control of the scaling limit. The few results about non-integrable (interacting) systems at criticality are still unable to deal with {\it finite domains} and {\it boundary corrections}, which are of course crucial for obtaining information about conformal covariance.
In this thesis, we address the question of adapting constructive Renormalization Group methods to non-integrable critical systems in $d=1+1$ dimensions. We study a system of interacting spinless fermions on a one-dimensional semi-infinite lattice, which can be considered a prototype of the Luttinger universality class with Dirichlet boundary conditions. We develop a convergent renormalized expansion for the thermodynamic observables in the presence of a quadratic {\it boundary defect} counterterm, polynomially localized at the boundary. In particular, we obtain explicit bounds on the boundary corrections to the specific ground-state energy.
\tableofcontents
\chapter{Introduction}
\section{Motivations}
\paragraph{Critical phenomena and symmetries}
It is now well understood that the common background to the {\it critical phenomena} displayed by very different systems (both classical and quantum), such as the liquid-vapor transition, the paramagnetic-ferromagnetic transition, superfluids, superconductors, {\it etc.}, is the strong fluctuation of infinitely many coupled variables. Once this mechanism has been identified, it is natural to introduce models that are both as realistic as possible and mathematically tractable.\\
In this framework, two-dimensional $(2D)$ statistical systems play the special role of being the {\it simplest non-trivial examples} of systems undergoing a phase transition: in this regard, one must mention the Ising model, introduced by Ising \cite{Ising1925} and exactly solved first by Onsager \cite{Onsager:1944aa} and later by many others (with different techniques) \cite{kaufman1949crystal, kac1952combinatorial, lieb1961two,hurst1966new,samuel1980use}.\\
The importance of the Ising model is due to the fact that it was the first model giving quantitative indications that a {\it microscopic short-range interaction} can produce phase transitions. A remarkable fact is that the notion of integrability for the Ising model in zero magnetic field is really strong, meaning that the model can be {\it exactly mapped} onto a system of free fermions \cite{lieb1961two,hurst1966new,samuel1980use}, so that it is not only possible to explicitly compute the {\it free energy} and the {\it magnetization}, but one can even obtain exact formulae (allowing an exact control of the large-distance asymptotic behaviour of some of them) for several {\it spin correlation functions}: energy-energy correlation functions, {\it spin-spin} correlation functions \cite{montroll1963correlations,wu1976spin,tracy1973neutron,barouch1973zero,mccoy2014two}, and some multispin correlation functions (with constraints on the relative positions of the spins) \cite{kadanoff1969correlations}. Remarkably, thanks to these results, it is possible to calculate the {\it critical exponents} of the model and check that they differ from those predicted by the Curie-Weiss theory of ferromagnetism: one therefore says that {\it the Ising model belongs to a different universality class}.
The concept of {\it universality class}, by which we group into the same {\it family} models that, even though they describe very different physical systems, show the same {\it critical behaviour} (meaning that the {\it critical exponents are the same}, provided one has suitably identified the corresponding thermodynamic functions for the systems under comparison), has been largely studied and understood using Renormalization Group (RG) tools \cite{kadanoff1966scaling,di1969microscopic,callan1970broken, symanzik1970small, wilson1971renormalization, wilson1971renormalization2,wilson1972critical}: in the language of RG, one says that the correlation functions of two systems respecting the {\it same symmetries} and with {\it interactions differing only by irrelevant terms} are characterized by {\it the same} long-distance behaviour at the critical point ({\it i.e. they have the same critical exponents}).
\paragraph{Conformal invariance} If, on the one hand, the idea of RG arises conceptually from the {\it scale invariance} of the scaling limit, which roughly tells us that under a {\it uniform change of length scale the correlation functions transform covariantly in a simple way}, on the other hand it is precisely thanks to the RG analysis that we can rigorously conclude that the infrared fixed point, for many statistical systems, {\it is in fact scale invariant}, as well as invariant with respect to the {\it usual} Euclidean symmetries.\\
This has been the starting point for naturally conjecturing that the {\it scaling limit} should be, under {\it plausible assumptions}, invariant under the {\it larger group} of {\it conformal transformations}, which, roughly speaking, generalize scale transformations by letting the length-rescaling factor depend continuously on position. The idea of the {\it conformal invariance of the scaling limit} first appeared in the literature in a paper by Polyakov \cite{polyakov1970conformal}, in which he showed that the correlation functions are invariant under conformal transformations, and used this to compute explicitly the three-point correlation functions. Nevertheless, it seems that for a while the {\it deep consequences} of conformal invariance were not properly appreciated by the community (for example, only a very short section is dedicated to this topic in the review on phase transitions and critical phenomena by Wegner \cite{wegner1976phase}). \\
The breakthrough in the field came with the seminal paper by Belavin, Polyakov and Zamolodchikov in 1984 \cite{belavin241infinite}, based on the fact that in $d=2$ the {\it conformal group} is much larger than in higher dimensions, and in particular is isomorphic to the group of {\it analytic transformations}, whose corresponding group algebra, known as the Virasoro algebra, had already been studied for different purposes in the context of particle theory \cite{kac1979lecture, jacob1974dual,mansouri1972gauge,ferrara1972conformal}. Roughly speaking, they showed that, assuming the {\it conformal invariance} of the scaling limit, it is possible to obtain not only the {\it critical exponents} of the model, but also {\it all the multi-point correlation functions at the critical point} (the analysis is based on the correspondence of each of the {\it primary scaling operators} of a two-dimensional system with a representation of the Virasoro algebra, which allows, in some particular cases, explicit computations); notably, they recognized that the theory is characterized by the {\it central charge} (also known as the {\it conformal anomaly}, since it is associated with an anomaly term in the commutation relations of the stress-energy tensor).\\
The work of \cite{belavin241infinite} paved the way for an impressive number of papers that, in the following years, deepened and refined the understanding of the topic \cite{dotsenko1984conformal,dotsenko1985four,dotsenko1985operator,
dotsenko1984critical,friedan1984conformal}. A special comment is deserved by the famous paper by Cardy \cite{cardy1984conformal} in which, for the first time, he realized that, via suitable {\it conformal mappings}, conformal invariance allows the explicit calculation of some {\it finite-size effects at the critical point}, offering the possibility of getting properties of the infinite system from finite samples of the same system. In particular, these {\it finite-size effects} are linked to the concept of {\it central charge}: as already pointed out in \cite{belavin241infinite}, and then studied by Affleck \cite{affleck1986universal}, Bl\"ote-Cardy-Nightingale \cite{blote1986conformal} and Friedan-Qiu-Shenker \cite{friedan1984conformal}, the {\it central charge can be defined in terms of the finite-size corrections to the free energy at criticality}; moreover, it has been recognized that, for some special cases in which the critical theory is fully characterized by the value of the central charge, the critical exponents are all explicitly known in terms of the Kac formula.
Of course, the importance of this result has to be read taking into account that the increasing {\it computational power} of computers offered, at that time, plenty of convincing validations of the principle of conformal invariance at the critical point. \\
It is worth stressing again that this huge amount of impressive results has been achieved {\it regarding the conformal invariance of the scaling limit as a principle}, since there was no rigorous proof of this fact, and one big conceptual problem was that it was not even straightforward to give a {\it mathematical definition} of the scaling limit ({\it i.e. to define a precise mathematical object to study in order to check whether the scaling limit of the model is conformally invariant or not}). A milestone in this direction was set by Schramm \cite{schramm2000scaling} who, in the context of percolation models (in which, in some sense, one can reduce the study of the critical point physics to the study of interfaces), inspired by the {\it numerical results} presented in \cite{langlands1994conformal} and by the explicit formula that Cardy proposed for the limit of percolation crossing probabilities \cite{cardy1992critical}, introduced the idea that interfaces of percolation models should belong to a family of {\it conformally invariant continuous non-self-crossing curves}: the {\it Schramm-Loewner Evolutions} (SLE). The strength of this proposal lies precisely in the {\it mathematical formalization of the goal}: to prove that the interfaces of percolation models converge, in the scaling limit, to SLE processes. The revolution in the rigorous understanding of this topic came with the rigorous proof of Cardy's formula for critical site percolation on the triangular lattice by Smirnov \cite{smirnov2001critical}, whose great importance lies in an impressive consequence: {\it Cardy's formula} is equivalent to the convergence of interfaces to SLE, meaning that proving the conformal invariance of a {\it well chosen observable} is enough to prove the conformal invariance of interfaces.
This idea has been afterwards extended to the Ising model, introducing the famous {\it fermionic observable} (a discrete holomorphic quantity) \cite{riva2006holomorphic}, which can be proved to converge to a holomorphic function in the scaling limit: this is the basic tool of the very rich literature \cite{smirnov2001critical, chelkak2012universality,chelkak2012conformal, duminil2012conformal, chelkak2014convergence, benoist2014conformal, lawler2011conformal} {\it etc.}, based on techniques of combinatorics, probability and discrete analysis (in particular discrete holomorphicity, {\it a.k.a. pre-holomorphicity}), already introduced by Kenyon in the study of {\it close packed dimers} \cite{kenyon2001dominos}.\\
Fifteen years after the first step in the rigorous study of the {\it conformal invariance of the scaling limit of two-dimensional statistical systems}, the level of understanding of these phenomena is quite advanced, even though {\it mostly limited} to {\it integrable models}, since so far {\it integrability} seems to be a {\it fundamental ingredient} in gaining full control of the existence and conformal invariance of the scaling limit. As a matter of fact, the two models for which the results are most complete are models at the {\it free Fermi point}: Ising and dimers. Nevertheless, the existence and the conformal invariance of the scaling limit are believed to be independent of a {\it free fermion} description, meaning that they are believed to hold also for {\it non-integrable 2D systems} corresponding, in terms of fermions, to {\it interacting fermions} in $d=1+1$. \\
An important open problem, which motivates the study of this thesis, is the proof of {\it conformal invariance of the scaling limit of interacting non-solvable models} close to, but not exactly at, the {\it free fermion point}.
\paragraph{Luttinger Liquid and its Universality Class} In order to understand 2D critical systems outside the {\it free fermion point}, we need techniques for dealing with interacting fermionic systems in $d=1+1$.\\
The starting point is to recognize that there are a few interacting models, presenting a non-trivial critical behaviour, that can be exactly solved by using special methods ({\it for instance} bosonization in the case of the Luttinger model, or the Bethe Ansatz in the case of the one-dimensional antiferromagnetic Heisenberg model).\\
The reference model in the framework of $1+1$ dimensional fermionic systems is the {\it exactly solvable} Luttinger model, which is the simplest possible model describing many-body systems consisting of two different kinds of fermions, left-movers and right-movers on a (continuous) segment, {\it interacting} via a weak, short range density-density potential. The model was introduced by Luttinger \cite{luttinger1963exactly} and rigorously solved by Mattis and Lieb \cite{mattis1965exact}, using a very famous technique now known as {\it bosonization} (see below). The interesting feature of the Luttinger model is that the presence of the interaction really changes the physical behaviour: first of all, the ground state of the system is characterized by a density of states which does not have a discontinuity at the Fermi momentum (unlike the Free Fermi Gas), but whose graph has an infinite slope with tangency exponent $a(\lambda)=\mathcal O(\lambda^2)$, called the anomaly of the Fermi surface; moreover, the $n$-point functions, which can be computed exactly, show a large distance behaviour with {\it anomalous exponents continuously depending on the interaction size $\lambda$}.\\
Of course, one wonders whether the Luttinger physics is in some sense robust under weak modifications of the model; the Luttinger model is in fact believed to give a robust description of models described in terms of spinless $1D$ fermions. By combining bosonization techniques with (formal) perturbative renormalization arguments, the existence of a {\it universality class} has been conjectured \cite{kadanoff1977connections, haldane1981luttinger, luther1975calculation, nienhuis1984critical,den1981derivation}, called the {\it $8$-vertex universality class} or {\it Luttinger liquids}, describing a variety of two-dimensional classical systems, such as the $6$ and $8$-vertex models, the Ashkin-Teller model, and the interacting dimer models at close-packing, as well as one-dimensional quantum systems, such as the Heisenberg spin chains, the Luttinger model itself, the spinless Hubbard model and perturbations thereof.\\
The inspiring idea is that all the systems in the $8$-vertex universality class can be described in terms of {\it lattice fermions}, {\it i.e.} a family of Grassmann variables $\psi^{\epsilon}_{\omega,\bm x}$ indexed by lattice vertices $\bm x=(x,x_0)$ and by indices $\epsilon,\omega=\pm$. In particular, for a special choice of the model parameters (the free-fermion point), these fermions are non-interacting, so the system is analytically diagonalizable. As soon as we change the values of these parameters, the fermions become interacting, meaning that, in the {\it action} of the system, at least a quartic term in the Grassmann variables appears, so the partition and correlation functions are given by {\it non-Gaussian Grassmann integrals}. Performing a {\it formal continuum limit}, these fermions become interacting Dirac fermions in $d=1+1$ dimensions, which are massless at criticality.\\
Let us start by considering non-interacting massless Dirac fermions $\psi^\sigma_{\bm x,\omega}$ with propagator antidiagonal in $\sigma=\pm$, diagonal in $\omega=\pm$, and translation-invariant in $\bm x\in\mathbb R^2$:
$$\left<\psi^-_{\bm x,\omega}\psi^+_{\bm 0,\omega}\right>=\frac{1}{2\pi}\frac{1}{x_0+i\omega x}.$$
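As a consistency check (a standard computation, reported here only for orientation, and not specific to this thesis), this propagator is the Green's function of the massless Dirac operator $D_\omega:=\partial_{x_0}+i\omega\partial_{x}$: setting $z:=x_0+i\omega x$, one has $D_\omega z=1-\omega^2=0$ and $D_\omega\bar z=1+\omega^2=2$, so that $D_\omega=2\partial_{\bar z}$; then, using $\partial_{\bar z}\frac{1}{z}=\pi\delta^{(2)}(\bm x)$,
$$\left(\partial_{x_0}+i\omega\partial_{x}\right)\left<\psi^-_{\bm x,\omega}\psi^+_{\bm 0,\omega}\right>=\frac{2}{2\pi}\,\pi\,\delta^{(2)}(\bm x)=\delta^{(2)}(\bm x).$$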
In this case, the bosonization consists of two identities \cite{itzykson1991statistical}:
\begin{itemize}
\item the multi-point correlations of the {\it fermionic density} $\psi^+_{\bm x,\omega}\psi^-_{\bm x,\omega}$ are the same as those of the derivative of a boson field $\phi$ (massless gaussian field):
$$\psi^+_{\bm x,\omega}\psi^-_{\bm x,\omega}\leftrightarrow -\omega \partial_\omega\phi(\bm x), \hspace{5mm} \partial_\omega:=\frac{1}{2}(\partial_{x_0}-i\omega\partial_{x}),$$
so that in particular correlations of {\it odd order} and truncated correlations of order larger than 2 vanish,
\item the {\it fermionic mass} $\psi^+_{\bm x,\omega}\psi^-_{\bm x,-\omega}$ has the same correlations as a normal ordered exponential of the boson field:
$$\psi^+_{\bm x,\omega}\psi^-_{\bm x,-\omega}\leftrightarrow \frac{1}{2\pi} :e^{2\pi i\omega\phi(\bm x)}:$$
\end{itemize}
The remarkable fact is that these relations, up to {\it renormalization constants}, remain valid also in the presence of a suitable density-density interaction, in particular for the Thirring model \cite{thirring1958soluble}, which can be thought of as a limit of the Luttinger model as the interaction tends to a local delta potential. In the case of the Luttinger model, even though the correspondence between the fermionic and bosonic representations is more complicated (so the formulae are more cumbersome), the consequences remain asymptotically the same.\\
There are some other models, such as the {\it antiferromagnetic 1D Heisenberg model} and the {\it Hubbard model}, that are exactly solvable by {\it Bethe ansatz}, thanks to which it is possible to compute the thermodynamic functions, the {\it critical exponents} and some of the {\it amplitudes}, unfortunately without a full control of the correlation functions.\\
Summarizing, there are some {\it very special, solvable models}, such as the Luttinger model, the Thirring model, the antiferromagnetic 1D Heisenberg model and the Hubbard model, for which it is possible to explicitly check the conjectured properties we just mentioned. Formal perturbation/renormalization arguments suggest that the same long distance behavior should be displayed by several other models, provided that the interaction strength is suitably tuned, so that the critical exponent of (say) the Green function coincides; once this tuning is performed, all the other exponents should coincide. Even more remarkably, the resulting critical exponents and amplitudes should satisfy the same universal relations valid in the Luttinger model (Kadanoff and Haldane relations). These predictions, which are expected to hold for a very general class of models, have been first of all checked for solvable models, but checking them in the absence of exact solutions or of bosonization identities is of course a hard mathematical task. Constructive quantum field theory and Renormalization Group methods are powerful tools to study these problems, and in fact they allowed one to rigorously prove these predictions for several different models, as we briefly discuss in the next paragraph.
\paragraph{Renormalization Group in the context of many-body theories} RG methods {\it à la Wilson} \cite{wilson1971renormalization,wilson1971renormalization2} have been, from the very beginning, the basic tools for studying several problems in Constructive Quantum Field Theory, such as the renormalization of $\phi^4_d$ theories \cite{gallavotti1985renormalization,polchinski1984renormalization,glimm2012quantum, glimm1973particle,guerra1976boundary} and the existence of the continuum limit of Quantum Field Theory models in $d=1+1$, such as the Gross-Neveu model with $N>1$ colors \cite{gawedzki1985massless,feldman1986renormalizable}, or the massive Yukawa model \cite{lesniewski1987effective}.\\
In applying these methods to one-dimensional fermionic systems, one has to deal with the further difficulty that the {\it theory is not asymptotically free}, but {\it belongs to a class of models characterized by a vanishing beta function} (implying that a second order computation is not enough to recognize the nature of the flow of the effective coupling, but one has to exploit {\it non-trivial cancellations at all orders in the renormalized expansion}).\\
Chronologically, Dzyaloshinskii and Larkin \cite{dzyaloshinskii1974correlation} first attacked the Tomonaga model \cite{tomonaga1950remarks} (not exactly solvable) by performing a {\it non-rigorous resummation} of the perturbative expansion after several uncontrolled estimates. Then, Metzner and Di Castro \cite{metzner1993conservation} correctly pointed out that the vanishing of the beta function, in multiplicative RG, follows from the Ward identities which, however, are {\it exactly true} only in the Luttinger model, not in {\it non-solvable ones}.\\
Of course, the natural next step is to push the understanding of these topics to a {\it rigorous level}. The Roman school gave an impressive contribution to the {\it construction of models with vanishing beta function}: the starting point of a huge literature was the one-dimensional system of interacting non-relativistic fermions in the continuum, studied in a seminal paper by Benfatto, Gallavotti, Procacci and Scoppola \cite{benfatto1993beta}, where the crucial property of the vanishing beta function was proved by comparing this model with the exact solution of the Luttinger model (rigorous RG methods had already been used in attacking fermionic many-body theories in \cite{benfatto1990perturbation,feldman1992infinite}). Later, Benfatto and Mastropietro adapted the already mentioned ideas of Dzyaloshinskii, Larkin, Metzner and Di Castro to a {\it constructive RG approach}, and in doing so they had to overcome several technical problems. As a matter of fact, it is worth mentioning a series of papers in which, without any comparison with the exact solution of the Luttinger model, they proved the vanishing of the beta function \cite{benfattodensity,benfatto2001renormalization,benfatto2004ward, benfatto2005ward}, overcoming well-known problems due to the conflict between the Wilsonian RG and Ward Identities (basically, the Wilsonian RG breaks the local gauge invariance necessary to get Ward Identities). These techniques have then been used to study a variety of models belonging to the Luttinger universality class and, for some of these, to check the Kadanoff-Haldane predictions \cite{giuliani2005anomalous, benfatto2009extended, benfatto2010universality,benfatto2010universal, benfatto2011drude, benfatto2014universality, benfatto2014universalityii, giuliani2015height, giuliani2016haldane}, {\it etc.}.\\
In light of these important achievements of the Renormalization Group methods, one naturally asks: what is missing to prove the {\it conformal invariance of the scaling limit of interacting non-solvable models?}
\paragraph{ Motivations of this thesis} Due to its {\it robustness} with respect to perturbations of {\it solvable models}, one is naturally tempted to use RG to {\it extend} the conformal invariance information we have about {\it exactly-solvable systems} to {\it interacting, non-solvable systems}. In this direction, a first step has been taken by Giuliani and Mastropietro \cite{giuliani2013universal}, who rigorously checked, for an {\it interacting Ising model on a torus} (so the system is {\it translation invariant}), the CFT prediction according to which, at the critical temperature, the finite-size corrections to the free energy are universal (meaning that they are exactly independent of the interaction). Moreover, they showed that, as proposed by Affleck \cite{affleck1986universal} and Bl\"ote-Cardy-Nightingale \cite{blote1986conformal},
the central charge, defined in terms of the coefficient of the first subleading term of the free energy, is constant and equal to $1/2$ for all $0<\lambda\leq \lambda_0$, where $\lambda_0$ is a small but finite convergence radius. Besides, it is worth mentioning \cite{giuliani2012scaling}, where multipoint correlation functions are explicitly computed in the scaling limit in which the lattice spacing is sent to zero and the temperature to the critical one, in the case of a ferromagnetic Ising model weakly perturbed by a finite range perturbation. However, while on the one hand these results confirm that the {\it energy-energy correlations} are in fact those predicted by {\it conformal field theories} and {\it bosonization}, on the other hand they are not enough to prove the {\it conformal invariance} of the scaling limit, since a control of the {\it boundary terms} is still missing.\\
Indeed, even though these papers must be considered as the starting point for a wider understanding of the conformal invariance of the interacting critical point, the rigorous constructive RG methods built up so far, which are the main tools used in those papers, still rely too heavily on the {\it translation invariance of the system}, which implies a lot of technical and conceptual simplifications. These considerations seem to identify the goal: adapting the {\it RG formalism} to the case of systems defined in non-trivial domains (hopefully a formalism independent of boundary conditions, as also Brydges suggests in \cite{brydges2007lectures}). In the context of one-dimensional fermionic systems, the simplest non-trivial domain is the half-line.\\
In the last 20 years, encouraged by the possibility of realizing and performing measurements on the so-called {\it quantum wires}, the theoretical physics community has been interested in describing finite one-dimensional fermionic systems with open boundary conditions \cite{fabrizio1995interacting, meden2000luttinger,mattsson1997properties,grap2009renormalization}, predicting in fact that the boundary induces some {\it anomalous boundary critical exponents}. Besides, it is worth mentioning that a conceptually similar question is linked to two important problems: the first, on which we will briefly comment in the concluding chapter, is the Kondo effect, as pointed out in \cite{affleck1995conformal}.
The other one is the {\it Casimir effect} that, starting from the $1980$s, when a seminal paper by Symanzik appeared \cite{symanzik1981schrodinger}, motivated a series of papers about $\phi^4_{4-\epsilon}$ theories in non-trivial domains (precisely in a {\it half-space}, meaning that the simplest possible non-trivial boundary is introduced in the theory) \cite{diehl1981field1, diehl1981field2, diehl1983universality,diehl1986field,diehl1994surface,diehl1997theory,diehl1998massive,dietrich1981critical,mattsson1997properties,cordery1981surface}, in which the basic strategy is to show that the boundary corrections are localized at the boundary and absorbed into a {\it boundary potential}. \\
We stress that when we say that the half-line is the {\it simplest non-trivial domain}, we mean that, even though it is {\it simple to define}, it already shows {\it non-trivial complications}: indeed, due to the presence of a boundary, the relevant and marginal terms that are {\it naturally} generated in the construction of the effective theory, respectively related to the {\it density} of the system and the {\it dressed density-density interaction}, are no longer {\it running coupling constants} but, more generally, {\it running coupling functions}.\\
Driven by the fact that, {\it well inside the bulk}, one expects to recover the predictions of the {\it translation invariant theory} (meaning that one expects to lose, at some point, memory of the boundary), an intuitive way to look at the contributions we are interested in, {\it i.e.} the quadratic and quartic terms of the effective theories we define in the RG procedure (being respectively {\it relevant} and {\it marginal} in the RG sense), is to split them into {\it bulk} and {\it boundary} contributions. One expects that the former are related to the {\it usual running coupling constants} appearing in the {\it analogous translation invariant theory}, while the {\it boundary contributions}, by construction, {\it keep memory} of the boundary. The main technical result is that, in fact, the boundary terms have a {\it dimensional gain}, in the sense of $L_1$ norms, with respect to the {\it bulk} contributions. This dimensional gain is enough to conclude that the {\it boundary corrections to the quartic terms} are in fact {\it irrelevant}; unfortunately, it is not enough to {\it renormalize the} quadratic contributions, which deeply modify the {\it effective theory}. \\
Of course, all these intuitions have to be {\it quantified in a mathematically meaningful way}, so the question we ask is: are we able to make {\it quantitative} the {\it intuitive notions} of {\it nearby the boundary} and {\it well inside the bulk}? In order to do that, it is necessary to control the {\it quadratic terms} that, as just commented, give rise to {\it running coupling functions} instead of running coupling constants. In this thesis we show that it is possible to find a convergent expansion for the thermodynamic functions, provided we choose a suitable {\it quadratic counterterm algebraically localized at the boundary}, whose decay law seems to be compatible with a {\it space-dependent correction to the critical exponents}.
\section{The model and the main result}
We are interested in constructing the ground state of interacting spinless fermions living in a discrete one-dimensional box of mesh size $a=1$ and volume $L\gg 1$ with {\it open boundary conditions}, meaning that the system is defined on a segment instead of on a torus. \\
Let $\mathcal F=\oplus_{n=0}^\infty \mathfrak h^{\wedge n}$ be the standard antisymmetric (fermionic) Fock space over the one-particle space $\mathfrak h=\ell^2(\Lambda)$, where $\wedge$ denotes the antisymmetric tensor product, $\Lambda:=\left\{x\in\mathbb Z: 1\leq x\leq L\right\}$, $L\in \mathbb N$, and let $\psi^\pm_x$, $x\in\Lambda$, be the {\it fermionic creation and annihilation} operators defined on $\mathcal F$. We introduce the Hamiltonian
\begin{equation}
H=H_0+\lambda V+ \varpi \mathcal N,
\end{equation}
where
\begin{equation}
\begin{split}
H_0&=T_0-\mu_0 N_0,\\
T_0&=\sum_{x\in\Lambda}\psi^+_x\left(-\Delta^d \psi^-_x\right)=\sum_{x\in\Lambda}\frac{1}{2}\left(-\psi^+_{x+1}\psi^-_x-\psi^+_{x-1}\psi^-_x+2 \psi^+_x\psi^-_x\right),\\
N_0&=\sum_{x\in \Lambda}\psi^+_x\psi^-_x,
\end{split}
\end{equation}
where, in the formula for $T_0$, we have to interpret $\psi^{\pm}_0=\psi^{\pm}_{L+1}=0$ (Dirichlet boundary conditions), and $\mu_0$ is the chemical potential, chosen in such a way that, if we call $\sigma(T_0):=[e_-, e_+]$ the spectral band of the kinetic operator, $\mu_0\in [e_-+\kappa, e_+-\kappa]$ for some $\kappa>0$ fixed once and for all.\\
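For the non-interacting part, the sine basis diagonalizes $T_0$ exactly: the one-particle matrix $h_{xy}=\delta_{x,y}-\frac12\delta_{|x-y|,1}$ with Dirichlet conditions has eigenvalues $\varepsilon(k)=1-\cos k$, $k\in\mathcal D^d_\Lambda$, with eigenvectors $\propto\sin(kx)$. A quick numerical sketch (purely illustrative; the size $L=50$ is our choice, not from the thesis):

```python
import numpy as np

L = 50  # illustrative system size (not from the thesis)
# one-particle matrix of T_0 with Dirichlet b.c.: (h psi)_x = psi_x - (psi_{x+1} + psi_{x-1})/2
h = np.eye(L) - 0.5 * (np.eye(L, k=1) + np.eye(L, k=-1))

# allowed quasi-momenta k = n*pi/(L+1), n = 1..L, and the dispersion eps(k) = 1 - cos(k)
k = np.pi * np.arange(1, L + 1) / (L + 1)
eps = 1.0 - np.cos(k)

# the eigenvalues of h coincide with eps(k)
assert np.allclose(np.sort(np.linalg.eigvalsh(h)), np.sort(eps))

# the eigenvectors are the normalized sine waves sin(k x), x = 1..L
v = np.sqrt(2.0 / (L + 1)) * np.sin(np.outer(k, np.arange(1, L + 1)))
assert np.allclose(h @ v[0], eps[0] * v[0])
```

Note that the matrix truncation at $x=1$ and $x=L$ implements the condition $\psi^\pm_0=\psi^\pm_{L+1}=0$ automatically.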
Moreover, the interaction of {\it strength} $\lambda$ is
\begin{equation}
V=\sum_{x,y \in\Lambda}\psi^+_x\psi^-_xv(x,y)\psi^+_y\psi^-_y
\end{equation}
where $v(x,y)=v(y,x)$ is a real, compactly supported function satisfying what we call the {\it Dirichlet property}, {\it i.e.} it can be written as
\begin{equation}
v(x,y)=\frac{2}{L+1}\sum_{k \in \mathcal{D}^d_{\Lambda}}\sin(kx)\sin(ky)\hat v(k),
\end{equation}
where $\mathcal D_\Lambda^d:=\left\{k=\frac{n\pi}{L+1}, n=1,\dots,L\right\}$. We stress that the {\it Dirichlet property} of $v( x, y)$ is not crucial at all, but it simplifies some technical aspects of the proof.\\
Finally, $\mathcal N$ is a {\it boundary counterterm} of size $\varpi=\mathcal O(\lambda)$ of the form
\begin{equation}
\mathcal N =\sum_{x,y\in \Lambda}\psi^+_x\psi^-_y \pi(x,y),
\end{equation}
where $\pi(x,y)$ is a Hermitian matrix such that $\sup_{x\in \Lambda}\sum_{y\in\Lambda} |\pi(x,y)|=1$.\\
We present here the main result: let $\beta> 0$ be the {\it inverse temperature}, defining the {\it finite volume specific free energy}
\begin{equation}
f_{\Lambda,\beta}=-\frac{1}{|\Lambda|\beta}\log \left(Tr \left(e^{-\beta H}\right)\right),
\end{equation}
and respectively
\begin{equation}
f_{\Lambda}=-\frac{1}{|\Lambda|}\lim_{\beta\nearrow \infty}\frac{1}{\beta}\log \left(Tr \left(e^{-\beta H}\right)\right),\hspace{3mm} f_{\infty}=-\lim_{|\Lambda|\nearrow \infty}\frac{1}{|\Lambda|}\lim_{\beta\nearrow \infty} \frac{1}{\beta}\log \left(Tr \left( e^{-\beta H}\right)\right),
\label{definition_free_energies_finite_infinite_volume}
\end{equation}
we can state the main result.
\begin{thm}
\label{theorem_main_introduction}
In this framework, there exists a radius $\lambda_0>0$ such that, for any $|\lambda|\leq \lambda_0$, it is possible to fix the {\it boundary defect} $\pi(x,y)$ and its strength $\varpi=\varpi(\lambda)$ in such a way that, for any $\theta\in (0,1)$, there exists a constant $C_\theta$ such that
\begin{equation}
\sum_{y\in\Lambda} \left|\pi(x,y)\right| \leq C_\theta \left(\frac{1}{\left(1+|x|\right)^\theta}+\frac{1}{\left(1+|L-x|\right)^\theta}\right),
\end{equation}
and in such a way that the finite volume specific ground state free energy $f_\Lambda$ admits a convergent expansion in $\lambda$ and $\varpi$, uniformly in $\Lambda$.\\
Moreover
\begin{equation}
\left| f_\Lambda-f_\infty \right|\leq |\lambda|\frac{C_\theta}{L^\theta}.
\end{equation}
\end{thm}
Even though it is not explicitly investigated in this thesis, we stress that a straightforward extension of the proof of this theorem would allow one to control the boundary corrections at finite volume also for the correlation functions.
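As a point of orientation (not part of the thesis's argument), at the free Fermi point $\lambda=\varpi=0$ the finite-size behaviour of $f_\Lambda$ can be computed by hand: $f_\Lambda$ reduces to the sum of the negative eigenvalues of the one-particle operator $h-\mu_0$ per unit length, and the boundary produces a correction of order $1/L$. A numerical sketch (the choice $\mu_0=1$, i.e. half filling, is ours):

```python
import numpy as np

def f_L(L, mu0=1.0):
    # ground state energy per site of H_0 = T_0 - mu0*N_0 with Dirichlet b.c.:
    # fill all one-particle levels eps(k) = 1 - cos(k) lying below mu0
    k = np.pi * np.arange(1, L + 1) / (L + 1)
    e = (1.0 - np.cos(k)) - mu0
    return e[e < 0].sum() / L

# infinite volume limit: (1/pi) * integral over 0 < k < pi/2 of (-cos k) dk = -1/pi
f_inf = -1.0 / np.pi

err100 = abs(f_L(100) - f_inf)
err800 = abs(f_L(800) - f_inf)
assert err100 < 1e-2 and err800 < err100  # the boundary correction decays with L
```

This free-fermion computation is only a cartoon of the finite-size corrections controlled by the theorem; the content of the theorem is that an analogous control survives the interaction.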
\section{The outline of the proof}
\paragraph{Multiscale decomposition} The proof relies on a multiscale analysis of the model, in which the free energy and the Schwinger functions are expressed as successive integrations over individual scales. To define a multiscale decomposition, we refer to momentum space, in which each scale $h\in\mathbb Z$ is defined as the set of momenta $\bm k$ contained inside an annulus at distance of order $2^h$ from the singularities located at the Fermi points. The positive scales correspond to the ultraviolet regime, which we do not study in detail, referring to \cite{benfatto1993beta}. The negative scales contain the essential difficulties of the problem, whose nature is intrinsically infrared.
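A minimal sketch of such a decomposition (one standard choice of compactly supported cutoffs; the specific interpolating function is illustrative, not the one used in the thesis): with a cutoff $\chi$ equal to $1$ for $|t|\le 1$ and $0$ for $|t|\ge 2$, the single-scale functions $f_h(t)=\chi(2^{-h}t)-\chi(2^{-h+1}t)$ are supported where $t\sim 2^h$ and sum to $1$ by telescoping.

```python
import numpy as np

def chi(t):
    # smooth cutoff: 1 on |t| <= 1, 0 on |t| >= 2 (illustrative choice)
    t = np.abs(np.asarray(t, dtype=float))
    out = np.ones_like(t)
    mid = (t > 1) & (t < 2)
    s = t[mid] - 1.0  # s in (0, 1) on the interpolation region
    out[mid] = np.exp(-1.0 / (1.0 - s)) / (np.exp(-1.0 / (1.0 - s)) + np.exp(-1.0 / s))
    out[t >= 2] = 0.0
    return out

def f(h, d):
    # single-scale cutoff: supported where the distance d from the Fermi point is of order 2^h
    return chi(2.0 ** (-h) * d) - chi(2.0 ** (-h + 1) * d)

# telescoping: summing the scales h = -10, ..., 0 reproduces the full infrared cutoff
d = np.linspace(2.0 ** -9, 1.0, 1000)
total = sum(f(h, d) for h in range(-10, 1))
assert np.allclose(total, 1.0)  # chi(d) = 1 here and the smallest-scale tail vanishes
```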
\paragraph{Presence of a non trivial boundary} Physically, the presence of a non trivial boundary induces, obviously, the breaking of translation invariance (so of the momentum conservation): one expects that, very far from the boundary, the bulk {\it i.e. correlation functions} tend to the translation invariant one while, going closer and closer to the boundary, one expects some non trivial boundary effect. Despite the conceptual immediacy of this difference between the physics in presence (or in absence) of a boundary, it is an hard problem to deal with from a technical point of view. Indeed, an important symmetry which most RG methods are based on is the {\it invariance of boundary conditions under RG iterative step}: starting with periodic boundary conditions, the integration of {\it a single scale degrees of freedom} gives back an effective theory having exactly the same boundary conditions as the original one, so it is {\it immediately true} that we are dealing with a {\it selfsimilar theory}; as it will be clear later, in the Dirichlet boundary condition case (and it would be the same for any {\it non translation invariant boundary conditions}) the very first integration is enough to give us an effective theory whose quadratic term is no longer diagonal in the {\it Dirichlet basis}, so it is not sufficient to iterate the {\it rescaling and dressing} process, as one {\it usualy would do} to renormalize a theory whose boundary conditions are {\it invariant under Renormalization Group procedure}.
\paragraph{The main idea} As just discussed, we cannot renormalize the theory without the counterterm $\mathcal{N}$ we introduced in the definition of the model. Indeed, the idea will be to keep as a reference a theory with Dirichlet boundary conditions (DBC). Technically, the first step is to recognize that the propagator of the model defined on a box with Dirichlet boundary conditions can be written as a linear combination of propagators of a model on a suitably defined box with periodic boundary conditions, computed respectively in the difference of the arguments (translation invariant part) and in the sum of them (remainder). So, in evaluating the Feynman diagrams coming from the fermionic Wick rule, we will proceed through the following steps:
\begin{itemize}
\item {\bf Dimensional analysis} Since the bulk contribution is the dominant one, a naive dimensional analysis gives the same result as in the translation invariant case, so the only {\it problematic terms} are the quartic (marginal) and quadratic (relevant) operators. After a deeper analysis, one can recognize that the presence of a {\it remainder propagator} improves by {\it one scaling dimension} (this terminology will be clear later) the $L_1$ norm of the values of the graphs; so, first of all, the flow of the quartic terms is reduced to the flow of the {\it translation invariant quartic terms} ({\it i.e.} it is the same flow as in the bulk theory). On the other hand, this dimensional gain is not enough to renormalize the quadratic term, so we must do something more.
\item {\bf Dirichlet part extraction and dressing of the propagator} The idea is to redefine the {\it localization operator} so as to, first of all, extract a bulk quadratic term, diagonal in the Dirichlet basis, with which to dress the propagator, bringing the theory back to the well-known formalism of the {\it translation invariant case}, and then to extract the relevant and marginal parts.
\item {\bf Tuning the counterterm} In addition to the bulk relevant and marginal terms, our procedure identifies a marginal, boundary quadratic term, whose divergent part is controlled by the counterterm $\varpi \mathcal N$ that we introduced in the Hamiltonian. The counterterm $\varpi\mathcal N$, which physically reflects the breaking of translation invariance of the theory, will be fixed {\it via a fixed point argument} by studying the flow of {\it coupling functions} (no longer constants), whose presence is due to the boundary.
\end{itemize}
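Two ingredients of the scheme above can be made more explicit (standard formulas, reported here only for orientation). The decomposition of the propagator rests on the elementary product-to-sum identity in the Dirichlet basis; schematically, suppressing the imaginary-time structure and taking the static one-particle resolvent as a cartoon of the propagator (assuming $\varepsilon(k)\neq\mu_0$ for all $k\in\mathcal D^d_\Lambda$, as is generically the case at finite $L$):
$$\sin(kx)\sin(ky)=\tfrac{1}{2}\left[\cos\left(k(x-y)\right)-\cos\left(k(x+y)\right)\right],$$
so that any kernel diagonal in the Dirichlet basis splits into a translation invariant part plus a remainder,
$$\frac{2}{L+1}\sum_{k\in\mathcal D^d_\Lambda}\frac{\sin(kx)\sin(ky)}{\varepsilon(k)-\mu_0}=\frac{1}{L+1}\sum_{k\in\mathcal D^d_\Lambda}\frac{\cos(k(x-y))}{\varepsilon(k)-\mu_0}-\frac{1}{L+1}\sum_{k\in\mathcal D^d_\Lambda}\frac{\cos(k(x+y))}{\varepsilon(k)-\mu_0},$$
where $\varepsilon(k)=1-\cos k$: the first term depends only on $x-y$, while the second, depending on $x+y$, is the remainder keeping memory of the boundary. As for the dimensional analysis, the standard power counting of $d=1+1$ fermionic theories assigns to a kernel with $n$ external fermionic legs the scaling dimension
$$D(n)=2-\frac{n}{2},\qquad D(2)=1\ \text{(relevant)},\quad D(4)=0\ \text{(marginal)},\quad D(n)\leq -1\ \text{for } n\geq 6\ \text{(irrelevant)};$$
the dimensional gain of the remainder (boundary) terms amounts, in $L_1$ norm, to the replacement $D\to D-1$: boundary quartic terms become irrelevant, while boundary quadratic terms remain marginal, which is the reason why the counterterm $\varpi\mathcal N$ is needed.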
The thesis is organized as follows: since conceptually we will refer to the {\it usual way} to perform RG on translation invariant models, first of all we will give a review about how to deal with a one dimensional system of interacting spinless fermions on a periodic lattice; then, we will be able to explain the new ideas arising in the presence of the boundary.\\
In particular,
\begin{itemize}
\item in Chapter (\ref{chapter_fermions_PBC}) we review the RG approach to translationally invariant spinless 1D systems. More precisely:
\begin{itemize}
\item in Section (\ref{section_PBC_the_model}) we define the model, the observables we are interested in and we state the main result of Chapter (\ref{chapter_fermions_PBC}),
\item in Section (\ref{section_1_pert_theory}) we first show the failure of the {\it naive perturbation theory} in computing the {\it specific free energy}, due to two different problems:
\begin{itemize}
\item the sum over all the perturbative orders diverges because of the {\it too large number} of Feynman diagrams involved in the expansion,
\item the infinite volume limit does not exist, since the rough bounds we obtain by naive perturbative estimates are not uniform in the cut-offs.
\end{itemize}
In Subsection (\ref{subsection_determinant_expansion}) we solve one of the two problems, the combinatorial one, by introducing the so-called {\it determinant expansion}. Solving the other problem requires a multiscale analysis;
\item in Section (\ref{section_multiscale_analysis}) we present the {\it multiscale analysis} of the theory, stressing in particular its hierarchical structure, which allows us to represent the observables we are interested in in terms of the so-called {\it Gallavotti-Nicolò} trees. Moreover, we use the multiscale expansion to identify, in RG language, the {\it sources of the divergences}.
\item in Section (\ref{subsection_renormalization_group_PBC}) we explain how to prove, using RG methods, that the specific free energy can be expressed as a convergent series in the interaction strength, provided $\lambda$ is small enough.
\end{itemize}
\item Chapter (\ref{chapter_Interacting_fermions_on_the_half_line}) contains the new results of this thesis; in particular, we prove the main Theorem (\ref{theorem_main_introduction}):
\begin{itemize}
\item in Section (\ref{the_model_DBB}) we present the model and we recall the main result,
\item in Section (\ref{section_the_interacting_case_DBC}) we perform a multiscale expansion of the thermodynamic observables of the system,
\item in Section (\ref{section_Non-renormalized expansion and properties of kernels}) we identify the source of the divergences by a non-renormalized analysis, and we extract the bulk contributions from the quadratic and the quartic terms of the effective potential,
\item in Section (\ref{section_renormalization_group_DBC}), in order to prove the main theorem, we show in a series of technical Lemmata how the presence of non-translation invariant elements improves the dimensional bound of the kernels; finally, we prove the main theorem.
\end{itemize}
\item in Chapter (\ref{chapter_conlcusion}) we draw the conclusions of this thesis:
\begin{itemize}
\item in Section (\ref{section_summary}) we summarize the results of this work, and we comment on some simple improvements of the bounds that can easily be achieved,
\item in Section (\ref{section_outlook}) we present some very general ideas we would like to explore in more detail in order to approach the main goal of studying the theory without boundary counterterms.
\end{itemize}
\end{itemize}
\chapter{Interacting fermions on the line}
\label{chapter_fermions_PBC}
In this chapter, the main goal is to introduce the reader to the study, via rigorous constructive Renormalization Group techniques, of one dimensional interacting Fermi systems. It is important to stress that nothing new will be shown (we will present in detail the new result in the following chapter) but, especially for a reader not familiar with RG, it will be explained how to {\it construct the ground state} of a model describing spinless fermions living on a one dimensional lattice, where the only perturbation to the free {\it hopping} Hamiltonian is a {\it weak} density-density interaction.
For a more detailed review of RG applied to 1D fermionic systems, we refer to \cite{gentile2001renormalization}. In this chapter we will give a self-consistent presentation of the main ideas of the construction of 1D fermions in the translationally invariant case, since this will serve as reference theory for the construction of the theory on the half-space, discussed in Chapter 3.\\
Before starting, it is worth making two comments on how the technical assumptions we make reflect on the physics we are interested in:
\begin{itemize}
\item {\bf Fermions on a lattice} Thanks to this assumption we have a natural ultraviolet cut-off (the mesh size of the lattice), by which we get rid of the ultraviolet divergences. Physically, assuming that the electrons can move only on a lattice corresponds to thinking of the electrons as localized on the atomic sites of a crystal, and the hopping Hamiltonian lets them move to the nearest-neighbor atoms.
\item {\bf Periodic boundary conditions} As we already mentioned in the introductory chapter, after some decades of impressive work the theory of RG is nowadays well developed, and a lot of important and fundamental results have been proven under the assumption of {\it translation invariance}. On the one hand, it is true that a lot of technical simplifications come from this assumption (as will be clear by comparing this chapter with the next one); on the other hand, it is important to underline that this assumption is quite satisfactory as long as one is interested in the bulk properties of the model. In the case of condensed matter, this translates into asking what happens very far from the boundaries of the crystal we have in the lab: since the {\it size} of the particles is much smaller than their distance from the boundary, a model without boundaries is a good model for the bulk behavior of the system.
\end{itemize}
In the following we introduce all the necessary technical and theoretical tools, whose definitions will later be extended to the case of Dirichlet boundary conditions.
\section{The model}
\label{section_PBC_the_model}
\subsection{Definition and main result}
\paragraph{The Hamiltonian}
We are interested in constructing the ground state of interacting spinless fermions living in a discrete one-dimensional box of step $a=1$ and size $L\gg 1$. In particular, we perturb by a {\it weak} density-density interaction an integrable Hamiltonian describing non-interacting fermions hopping to the nearest neighbouring sites in a box $\Lambda$ with periodic boundary conditions (PBC), imposed by identifying the two extremal sites.\\
Let $\mathcal{F}=\oplus_{n=0}^{\infty}H^{\wedge n}$ be the standard antisymmetric fermion Fock space, where $\wedge$ denotes the antisymmetric tensor product and let $\psi_{x}^{\pm}$ be the {\it fermionic creation or annihilation} operators defined on $\mathcal{F}$, where $x$ is the spatial coordinate. Let us consider the discrete box $\Lambda:=\{x\in\mathbb{Z}: -\lfloor L/2\rfloor \leq x\leq \lfloor (L-1)/2 \rfloor\}$, and the grand-canonical Hamiltonian
\begin{equation}
H=H_0+\lambda V,
\label{hamiltonian_PBC}
\end{equation}
where
\begin{equation}
\begin{split}
H_0&=T_0-\mu_0N_0,\\
T_0&=\sum_{x\in\Lambda}\frac{1}{2}\left(-\psi_x^+\psi_{x+1}^- -\psi_x^+\psi_{x-1}^-+2\psi_x^+\psi_x^-\right),\\
N_0&=\sum_{x\in\Lambda}\psi_x^+\psi_x^-,
\end{split}
\label{free_hamiltonian_PBC}
\end{equation}
where $\mu_0$ is the chemical potential, chosen in such a way that, if we call $\sigma(T_0):=[e_-, e_+]$ the spectral band of the kinetic operator, $\mu_0\in [e_-+\kappa, e_+-\kappa]$ for some $\kappa>0$ fixed once and for all;
\begin{equation}
V=\sum_{x,y\in\Lambda}\psi_x^+\psi_x^- v(x-y) \psi^+_y\psi_y^-,
\label{interaction_PBC}
\end{equation}
$v(x-y)$ is a {\it compactly supported} function: $V$ is a so-called {\it density-density interaction}, since $n_x:=\psi^+_x\psi^-_x$ is the density operator at $x$.
\paragraph{Specific free energy, Schwinger functions and the main theorem}
The main goal of this chapter is to compute the {\it specific free energy}, defined as
\begin{equation}
f_{\Lambda,\beta}:=-\frac{1}{\beta |\Lambda|}\log\left(Tr \left(e^{-\beta H}\right)\right)
\label{free_energy_specific_PBC}
\end{equation}
where $\beta$ is the inverse temperature (so in order to construct the ground state energy we are interested in the {\it zero temperature limit} $\beta\to \infty$). We are also interested in the {\it finite temperature imaginary time correlation functions}, or {\it Schwinger functions}, at temperature $T=\beta^{-1}$, defined as
\begin{equation}
S_{\Lambda,\beta}(\bm x_1, \epsilon_1;\dots;\bm x_m, \epsilon_m):=\left< \psi^{\epsilon_1}(\bm x_1)\dots \psi^{\epsilon_m}(\bm x_m)\right>_{\Lambda, \beta}:=\frac{Tr\left( e^{-\beta H}\bm T \left( \psi^{\epsilon_1}(\bm x_1)\dots \psi^{\epsilon_m}(\bm x_m)\right)\right)}{Tr\left( e^{-\beta H}\right)}
\label{schwinger_function_n_points_PBC}
\end{equation}
where $\epsilon_i\in \{\pm\}$ for $i=1,\dots, m$ and $\bm T$ is the {\it Fermionic time ordering operator}, and where we have introduced a collection $\left\{t_1,\dots,t_m\right\}$ of {\it time variables} such that $t_i\in \left[0,\beta\right)$ $\forall i=1,\dots,m$.\\
The main strategy to compute these quantities will be to derive {\it convergent expansions} for both $f_{\Lambda,\beta}$ and $S$, uniformly in the volume $|\Lambda|$ and in the inverse temperature $\beta$, and then to take the {\it infinite volume} and the {\it zero temperature} limits: $|\Lambda|\to \infty$ first, then $\beta\to \infty$.\\
We will describe in detail how to compute the {\it specific ground state energy}, in particular how to prove the following theorem.
\begin{thm}
\label{theorem_free_energy_analyticity_PBC}
In this framework, there exists a radius $\lambda_0>0$ such that for each $|\lambda|\leq \lambda_0$ the specific ground state energy
\begin{equation}
f:=\lim_{\beta\nearrow \infty}\lim_{|\Lambda|\nearrow \infty}\left[-\frac{1}{|\Lambda| \beta}\log\left( Tr \left({e^{-\beta H}}\right)\right)\right],
\end{equation}
exists uniformly in $|\Lambda|$ and $\beta$, and it is an analytic function of $\lambda$.
\end{thm}
\begin{rem}
A modification of the expansion behind the proof of Theorem (\ref{theorem_free_energy_analyticity_PBC}) allows one to compute the Schwinger functions, see \cite{gentile2001renormalization}, Section 12.
\end{rem}
\subsection{Free Hamiltonian diagonalization and free propagator}
\label{subsection_free_propagator}
It is straightforward to check that the {\it free Hamiltonian} $H_0$ can be diagonalized in Fourier space by defining
\begin{equation}
\hat \psi^{\pm}_k = \sum_{x\in\Lambda}e^{\mp ik x}\psi^{\pm}_x,
\label{fourier_transform_creation_annihilation_PBC}
\end{equation}
where $k\in\mathcal D_\Lambda$,
\begin{equation}
\mathcal{D}_\Lambda=\left\{k=2\pi n/L,\ n\in\mathbb{Z},\ -\lfloor L/2\rfloor \leq n \leq \lfloor (L-1)/2 \rfloor \right\}.
\label{dual_space_PBC}
\end{equation}
The operator $\hat \psi^+_k$ (respectively $\hat \psi^-_k$) creates (respectively annihilates) a spinless electron with momentum $k$, so that
\begin{equation}
H_0=\frac{1}{|\Lambda|}\sum_{k\in\mathcal D_\Lambda}\hat\psi^+_k e(k)\hat \psi^-_k,
\label{H_0_PBC_diagonal}
\end{equation}
where $e(k)=1-\cos k-\mu_0$ is called the {\it dispersion relation} defined in $\mathcal D_\Lambda$.\\
It is worth noting that when $L\to \infty$, $\mathcal{D}_\Lambda\to [-\pi, \pi]$, so in the infinite volume limit there are two points, let us call them $\pm p_F$, such that $e(\pm p_F)=0$, since $\mu_0\in [e_-+\kappa,e_+-\kappa]$.
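For instance, the Fermi points can be computed explicitly from the dispersion relation: in the infinite volume limit,
\begin{equation}
e(\pm p_F)=1-\cos(\pm p_F)-\mu_0=0
\quad\Longleftrightarrow\quad
\cos p_F=1-\mu_0
\quad\Longleftrightarrow\quad
p_F=\arccos\left(1-\mu_0\right).
\end{equation}
Since here $e_-=0$ and $e_+=2$, the condition $\mu_0\in[e_-+\kappa,e_+-\kappa]$ gives $1-\mu_0\in[-1+\kappa,1-\kappa]$, so $p_F$ is well defined and stays uniformly away from $0$ and $\pm\pi$.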
\paragraph{Free propagator}
The non interacting model, {\it i.e.} the model described by the free Hamiltonian $H_0$, is exactly solvable and all the Schwinger functions can be computed, by the anticommutative {\it Wick rule}, starting from the {\it two point Schwinger function}, also known as {\bf propagator}, which can be explicitly computed starting from the definition; we refer to \cite{benfatto1995renormalization}. \\
Let us recall that $\psi_x^{\pm}=\frac{1}{|\Lambda|}\sum_{k\in\mathcal D_{\Lambda}} e^{\pm ikx}\hat \psi_k^{\pm}$ for any $x\in\Lambda$, and if we call $\bm x=(x,x_0)$, $\bm y=(y,y_0)$, $\bm k=(k,k_0)$ the evolution in imaginary time of the operators is $\psi^{\pm}_{\bm x}=e^{H_0 x_0}\psi^{\pm}_x e^{-H_0x_0}$. Recalling that $\left<\cdot\right>_{\Lambda,\beta, 0}=Tr\left(e^{-\beta H_0}\cdot\right)/Tr(e^{-\beta H_0})$, we can compute, for any $-\beta < x_0-y_0 \leq \beta$,
\begin{equation}
\begin{split}
\left<\bm T\left(\psi^-_{\bm x}\psi^+_{\bm y}\right)\right>_{\Lambda,\beta, 0}=\frac{1}{|\Lambda|}\sum_{k\in\mathcal D_{\Lambda}}e^{-ik(x-y)}\cdot\\
\cdot \left[\theta(x_0-y_0)\frac{e^{-(x_0-y_0)e(k)}}{1+e^{-\beta e(k)}}-\theta(y_0-x_0)\frac{e^{-(x_0-y_0+\beta)e(k)}}{1+e^{-\beta e(k)}}\right]
\end{split}
\end{equation}
where $\theta(\cdot )$ is the Heaviside step function. The latter formula is a priori defined only for $-\beta < x_0-y_0\leq \beta$, but we can extend it antiperiodically over the whole real axis: the extension is continuous for $x_0-y_0\notin \beta \mathbb Z$, while it has jump discontinuities at $x_0-y_0\in\beta \mathbb Z$ (the jump height is equal to $(-1)^n\delta_{x,y}$ if $x_0-y_0=\beta n$), so if we define
\begin{equation}
\mathcal{D}_{\beta,M}:=\left\{k_0:=\frac{2(n+1/2)\pi}{\beta}, n\in\mathbb{Z}, -M\leq n \leq M-1\right\},
\label{momenta_space_time}
\end{equation}
we get
\begin{equation}
S^0_{L,\beta}(\bm x,-;\bm y,+):= g(\bm x-\bm y)=\frac{1}{\beta L}\lim_{M\to \infty}\sum_{k \in\mathcal{D}_{\Lambda}}\sum_{k_0\in\mathcal{ D}_{\beta,M}}e^{i\delta_Mk_0}e^{-i\bm k\cdot (\bm x-\bm y)}\hat g(\bm k),
\label{free_propagator_PBC}
\end{equation}
where $\mathcal D_\Lambda$ has been defined in (\ref{dual_space_PBC}), while $M$ is a suitable cut-off to be removed at the very end (of course the scheme will be to get bounds independent of the cut-off $M$, and finally to take the limit $M\to\infty$), and
\begin{equation}
\hat{g}(\bm k):=\frac{1}{-ik_0+e(k)},
\label{free_propagator_momenta}
\end{equation}
where $e(k)$ is the {\it dispersion relation} already defined in (\ref{H_0_PBC_diagonal}). From now on, we will use $\bm k \in \mathcal{D}_{\Lambda,\beta,M}$ to denote $(k,k_0)\in\mathcal{D}_{\Lambda}\times\mathcal{D}_{\beta,M}$.
The constant $\delta_M=\beta/\sqrt{M}$ is introduced in order to correctly take into account the discontinuity of the propagator $g(\bm x-\bm y)$ at $\bm x=\bm y$, where it has to be defined as $\lim_{x_0\to 0^-}g(0,x_0)$: indeed, this prescription guarantees that $\lim_{M\to \infty}g_M(\bm x-\bm y)=g(\bm x-\bm y)$ for $\bm x\neq\bm y$, while $\lim_{M\to \infty}g_M(\bm 0)=g(0,0^-)$ at equal points.
\section{Perturbation theory and Grassmann integral formulation}
\label{section_1_pert_theory}
\subsection{Perturbation theory and Trotter's formula}
Let us now consider the interacting case. Our strategy is first to derive a formal perturbation theory for the specific free energy, that is, to find rules to {\it formally compute the generic perturbative order in} $\lambda$ of $f_{\Lambda,\beta}$. Then we will explain how to give sense to this formal expression, by suitable resummations of the formal power series. It is worth stressing that the interaction could in principle move, in some {\it interaction dependent} way, the Fermi points of the theory. To take this fact into account, we rewrite
\begin{equation}
\mu_0=\mu+\nu,
\end{equation}
where $\nu$ is a {\it counterterm} that will be eventually suitably chosen in order to fix the position of the singularity to some {\it interaction independent} point.\\
So we rewrite
$$H=H_0+U,$$
where
\begin{equation}
U=\lambda V+\nu N_0=\lambda \sum_{x,y\in\Lambda}\psi^+_x\psi^-_xv(x-y)\psi^+_y\psi^-_y+ \nu\sum_{x\in\Lambda}\psi^+_x\psi^-_x,
\end{equation}
and we use the Trotter product formula
\begin{equation}
e^{-\beta H}=\lim_{n\to \infty}\left[e^{-\beta H_0/n}\left(1-\frac{\beta}{n}U\right)\right]^n,
\end{equation}
so that, if we define
\begin{equation}
U(t):=e^{tH_0}Ue^{-tH_0},
\end{equation}
we get
\begin{equation}
\begin{split}
\frac{Tr\left(e^{-\beta H}\right)}{Tr\left(e^{-\beta H_0}\right)}=\\
=1+\sum_{N\geq 1}(-1)^N\int_0^\beta dt_1 \int_0^{t_1} dt_2\dots\int_0^{t_{N-1}}dt_N \frac{Tr\left(e^{-\beta H_0}U(t_1)\dots U(t_N)\right)}{Tr\left(e^{-\beta H_0}\right)},
\end{split}
\end{equation}
which, using again the {\it fermionic time-ordering operator}, can be rewritten as
\begin{equation}
\frac{Tr\left(e^{-\beta H}\right)}{Tr\left(e^{-\beta H_0}\right)}=1+\sum_{N\geq 1}\frac{(-1)^N}{N!}\left<\bm T \left(U(\psi)^N\right)\right>_{\Lambda, \beta, 0}
\label{Tr(cdot)/Tr}
\end{equation}
where $\left<\cdot\right>_{\Lambda, \beta, 0}=Tr\left(e^{-\beta H_0}\cdot\right)/Tr\left(e^{-\beta H_0}\right)$ and we have defined
\begin{equation}
\begin{split}
U(\psi)=\lambda\int_{[0,\beta)}dx_0 \sum_{x\in\Lambda}\int_{[0,\beta)}dy_0 \sum_{y\in\Lambda} \psi^+_{\bm x}\psi^-_{\bm x}v(x-y)\delta_{x_0,y_0}\psi^+_{\bm y} \psi^-_{\bm y}+\nu\int_{[0,\beta)}dx_0\sum_{x\in\Lambda}\psi^+_{\bm x}\psi^-_{\bm x}.
\end{split}
\end{equation}
The $N$-th order of formula (\ref{Tr(cdot)/Tr}) can be computed using the Wick rule
\begin{equation}
\begin{split}
&\left<\bm T\left(\psi^-_{\bm x_1}\dots \psi^+_{\bm x_n}\right)\right>_{0,\Lambda,\beta}=\det G,\\
&G_{ij}=\left<\bm T\left(\psi^-_{\bm x_i}\psi^+_{\bm x_j}\right)\right>_{0,\Lambda,\beta}=S_{L,\beta}^0(\bm x_i,-;\bm x_j,+).
\end{split}
\end{equation}
and the explicit {\it free propagator} (\ref{free_propagator_PBC}), where the subscript $0$ denotes that the expectations are computed with respect to the free measure. In order to use the Wick rule, it is convenient to briefly recall the Feynman rules.
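To make the Wick rule concrete, consider for instance the case of two $-$ and two $+$ fields: with the conventions of the formula above,
\begin{equation}
\begin{split}
\left<\bm T\left(\psi^-_{\bm x_1}\psi^-_{\bm x_2}\psi^+_{\bm x_3}\psi^+_{\bm x_4}\right)\right>_{0,\Lambda,\beta}&=\det\begin{pmatrix} g(\bm x_1-\bm x_3) & g(\bm x_1-\bm x_4)\\ g(\bm x_2-\bm x_3) & g(\bm x_2-\bm x_4)\end{pmatrix}\\
&=g(\bm x_1-\bm x_3)\,g(\bm x_2-\bm x_4)-g(\bm x_1-\bm x_4)\,g(\bm x_2-\bm x_3):
\end{split}
\end{equation}
the two products correspond to the two possible pairings, and the relative minus sign is the parity of the exchange of the two $+$ fields.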
\subparagraph{Feynman rules}
\begin{figure}
\begin{center}
\begin{tikzpicture}
[thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\node at (1,3.3) {{\bf x}};
\node at (3,3.3) {{\bf y}};
\fill (1,3) circle (0.06);
\fill (3,3) circle (0.06);
\draw [postaction={decorate}] (0,2) -- ++(1,1);
\draw [postaction={decorate}](0,2) ++ (1,1)-- ++ (-1,1);
\draw [postaction={decorate}] (0,2) ++ (1,1)++ ( 2,0) ++(1,1) ++ (-1,-1) --++ (1,-1);
\draw [postaction={decorate}] (0,2) ++ (1,1)++ ( 2,0) ++(1,1) ++ (-1,-1) ++ (1,-1) ++ (-1,1) -- ++(1,1);
\draw [-,decorate,decoration=snake] (0,2) ++ (1,1) ++ (-1,1)++ (1,-1) -- ++(2,0);
\node at (-4,3.3) {\bf x};
\fill (-4,3) circle (0.1);
\draw [postaction={decorate}] (-5,3) -- ++ (1,0);
\draw [postaction={decorate}] (-4,3) -- ++ (1,0);
\end{tikzpicture}
\end{center}
\caption{Graph elements associated with $\nu$-type endpoints (left) and $\lambda$-type endpoints (right).}
\label{figure_graph_elements_PBC}
\end{figure}
In order to compute $\left<\bm T\left(U(\psi)^N\right)\right>_{\Lambda, \beta, 0}$, it is easy to check that one can follow these steps:
\begin{itemize}
\item $\forall k,l$ such that $0 \leq k,l\leq N $ and $k+l=N$, draw $k$ graph elements consisting of {\it four legged vertices} and $l$ graph elements consisting of {\it two legged local vertices}, with the vertices associated to the labels $\bm x_i$, $i=1,\dots,N$, in such a way that the {\it four legged vertices} are composed by two entering and two exiting fields, while the {\it two legged vertices} are associated with one exiting and one entering leg (see Figure (\ref{figure_graph_elements_PBC}));
\item pair the fields in all possible ways, in such a way that every pair is obtained by contracting an entering and an exiting leg;
\item associate to every pairing the {\it right sign}, which is the sign of the permutation needed to bring every pair of contracted fields next to each other;
\item associate to every linked pair of fields $\left(\psi^-(\bm x_i),\psi^+(\bm x_j)\right)$ an {\it oriented} line connecting the $i$-th with the $j$-th vertex, oriented from $j$ to $i$ ({\it i.e.} from the $+$ to the $-$ field);
\item associate to every oriented line from $j$ to $i$ the value $g(\bm x_i-\bm x_j)$ given by (\ref{free_propagator_PBC});
\item associate to every configuration of pairings, which is called a {\it Feynman graph}, a value equal to the product of the sign of the pairing, times $\lambda^k\nu^l$, times the product of the values of all the oriented lines;
\item integrate over the $\bm x_i$, then perform the sum over all the possible pairings, over $k, l$ and over $N$.
\end{itemize}
It is convenient, algebraically, to rewrite the quantities (\ref{free_energy_specific_PBC}) and (\ref{schwinger_function_n_points_PBC}) in terms of {\it Grassmann Gaussian integrals}. Even though the theory of {\it Grassmann integrals} is a very well known topic in the literature (see again, for instance, \cite{gentile2001renormalization}), for the sake of self-consistency we will sketch the main definitions and properties.
\subparagraph{Grassmann algebra}
Given some finite set $A$ of indices $\alpha\in A$, we define a {\it finite dimensional Grassmann algebra}, generated by a set of {\it anticommuting Grassmann variables} $\left\{\psi_{\alpha}^{\pm}\right\}_{\alpha\in A}$: to each element $\alpha\in A$ we attach a pair of variables $\psi \equiv \left\{\psi^+_{\alpha}, \psi^-_{\alpha}\right\}$ such that
\begin{equation}
\psi^{\epsilon}_{\alpha}\psi^{\epsilon'}_{\alpha'}+ \psi^{\epsilon'}_{\alpha'}\psi^{\epsilon}_{\alpha}=0 \hspace{3mm} \forall \alpha,\alpha'\in A, \forall \epsilon, \epsilon'\in\{\pm\}.
\label{anticommutation_rules_grassmann}
\end{equation}
\begin{rem}
In particular, $\forall \alpha\in A, \forall \epsilon\in\{\pm\}$ we have $\left(\psi^{\epsilon}_{\alpha}\right)^2=0.$
\label{grassmann_variables_squared_remark}
\end{rem}
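For example, for a single index, $A=\{\alpha\}$, the algebra is the four-dimensional complex vector space spanned by
\begin{equation}
\left\{1,\;\psi^+_{\alpha},\;\psi^-_{\alpha},\;\psi^+_{\alpha}\psi^-_{\alpha}\right\},
\end{equation}
since any higher monomial vanishes by the anticommutation rules. In general, the Grassmann algebra generated by $\left\{\psi^{\pm}_{\alpha}\right\}_{\alpha\in A}$ has dimension $2^{2|A|}$, each generator appearing at most once in every monomial.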
\subparagraph{Grassmann integral operator}
Let us introduce the {\it Grassmann integral operator} $\int d\psi^{\epsilon}_{\alpha}\cdot$ acting as:
\begin{equation}
\int d\psi^{\epsilon}_{\alpha}\psi^{\epsilon}_{\alpha}=1, \hspace{3mm} \int d\psi^{\epsilon}_{\alpha}=0
\label{grassmann_integral}
\end{equation}
A straightforward generalization to the case of many Grassmann variables can be obtained by iterating (\ref{grassmann_integral}):
\begin{equation}
\int \prod_{\alpha\in B}d\psi^+_\alpha d\psi^-_{\alpha}\left(\prod_{\alpha\in B}\psi^-_\alpha \psi^+_{\alpha}\right)=1,\hspace{3mm} \forall B\subset A.
\end{equation}
so that, if $F(\psi)$ is a polynomial in $\psi_\alpha^+, \psi_\alpha^-$, $\alpha\in A$, the operation
\begin{equation}
\int \prod_{\alpha\in A}d\psi^+_\alpha d\psi^-_\alpha F(\psi)
\end{equation}
extracts the coefficient of the linear term in $\left(\prod_{\alpha\in A} \psi^-_\alpha \psi^+_\alpha \right)$.
\\ Using the remark (\ref{grassmann_variables_squared_remark}) and the usual Taylor series for the exponential, $e^{-\psi^+_{\alpha}C\psi^-_{\alpha}}=1-\psi^+_{\alpha}C\psi^-_{\alpha}$, so by the definition (\ref{grassmann_integral})
\begin{equation}
\frac{\int d\psi^+_{\alpha}d\psi^-_{\alpha}e^{-\psi^+_{\alpha}C\psi^-_{\alpha}}\psi^-_{\alpha}\psi^+_{\alpha}}{\int d\psi^+_{\alpha}d\psi^-_{\alpha}e^{-\psi^+_{\alpha}C\psi^-_{\alpha}}}=C^{-1}, \hspace{3mm} \forall \alpha\in A, C \in \mathbb{C}
\end{equation}
To generalize this formula to the case of $2N$ Grassmann variables, we introduce a matrix $M\in GL(N,\mathbb C)$:
\begin{equation}
\begin{split}
\int \prod_{\alpha=1}^N \left(d\psi^+_{\alpha}d\psi^-_{\alpha}\right) e^{-\sum_{\alpha,\alpha'=1}^N\psi^+_{\alpha}M_{\alpha,\alpha'}\psi^-_{\alpha'}}=\det M, \\
\int \prod_{\alpha=1}^N \left(d\psi^+_{\alpha}d\psi^-_{\alpha}\right) e^{-\sum_{\alpha,\alpha'=1}^N\psi^+_{\alpha}M_{\alpha,\alpha'}\psi^-_{\alpha'}}\psi^-_{\bar{\alpha}}\psi^+_{\tilde{\alpha}}= \bar M_{\tilde{\alpha},\bar{\alpha}}
\end{split}
\end{equation}
where $\bar M_{\bar{\alpha},\tilde{\alpha}}$ is the minor complementary to the entry $M_{\bar{\alpha},\tilde{\alpha}}$ and, if $M$ is invertible,
\begin{equation}
\frac{\int \prod_{\alpha=1}^N \left(d\psi^+_{\alpha}d\psi^-_{\alpha}\right) e^{-\sum_{\alpha,\alpha'=1}^N\psi^+_{\alpha}M_{\alpha,\alpha'}\psi^-_{\alpha'}}\psi^-_{\tilde \alpha}\psi^+_{\bar \alpha}}{\int \prod_{\alpha=1}^N \left(d\psi^+_{\alpha}d\psi^-_{\alpha}\right) e^{-\sum_{\alpha,\alpha'=1}^N\psi^+_{\alpha}M_{\alpha,\alpha'}\psi^-_{\alpha'}}}=\left[M^{-1}\right]_{\bar{\alpha},\tilde{\alpha}}
\end{equation}
\begin{rem}
These properties are similar to the ones of the usual {\it Gaussian integrals}, without the constraints that $C$ be real and that $M$ be positive definite: $M$ need only be invertible.
\label{grassmann_integrals_gaussian_remark}
\end{rem}
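As a simple consistency check of the first of the two determinant formulas above, take $N=2$: expanding the exponential, only the monomials containing each of the four generators exactly once survive the integration, so
\begin{equation}
\begin{split}
\int \prod_{\alpha=1}^2\left(d\psi^+_{\alpha}d\psi^-_{\alpha}\right) e^{-\sum_{\alpha,\alpha'=1}^2\psi^+_{\alpha}M_{\alpha,\alpha'}\psi^-_{\alpha'}}&=\int \prod_{\alpha=1}^2\left(d\psi^+_{\alpha}d\psi^-_{\alpha}\right)\frac{1}{2}\Big(\sum_{\alpha,\alpha'=1}^2\psi^+_{\alpha}M_{\alpha,\alpha'}\psi^-_{\alpha'}\Big)^2\\
&=M_{11}M_{22}-M_{12}M_{21}=\det M,
\end{split}
\end{equation}
where the relative minus sign comes from the reordering $\psi^+_1\psi^-_2\psi^+_2\psi^-_1=-\psi^+_1\psi^-_1\psi^+_2\psi^-_2$.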
\paragraph{Grassmann Gaussian integration}
Inspired by the remark (\ref{grassmann_integrals_gaussian_remark}), we can build up a {\it Grassmann Gaussian integration} $P(d\psi)$ associated with the propagator $g(\bm x- \bm y)$ in order to express the specific free energy (\ref{free_energy_specific_PBC}) and the Schwinger functions (\ref{schwinger_function_n_points_PBC}) as Gaussian Grassmann integrals.
First of all, let us introduce a finite set of {\it Grassmann variables} $\{\hat \psi^{\pm}(\bm k)\}_{\bm k\in\mathcal{D}_{\Lambda,\beta,M}}$; hence we define
\begin{equation}
P(d\psi)=\left(\prod_{\bm k\in\mathcal{D}_{\Lambda,\beta,M}}\left(L\beta\hat g(\bm k)\right)\hat \psi^+(\bm k)\hat \psi^-(\bm k)\right)e^{-\sum_{\bm k \in \mathcal{D}_{\Lambda,\beta,M}}\left(L\beta\hat g(\bm k)\right)^{-1}\hat \psi^+(\bm k)\hat \psi^-(\bm k)}.
\label{grassmann_gaussian_measure_k_space_PBC}
\end{equation}
By introducing the Fourier transforms:
\begin{equation}
\psi^{\pm}(\bm x)=\frac{1}{L\beta}\sum_{\bm k\in\mathcal{D}_{\Lambda,\beta,M}}\hat \psi^{\pm}(\bm k)e^{\pm i \bm k\cdot \bm x},
\label{grassman_variables_fourier_transform}
\end{equation}
we can use the measure (\ref{grassmann_gaussian_measure_k_space_PBC}) to get
\begin{equation}
\lim_{M\to\infty}\int P(d\psi)\psi^-(\bm x)\psi^+(\bm y)=\frac{1}{L\beta}\lim_{M\to\infty}\sum_{\bm k\in \mathcal{D}_{\Lambda,\beta,M}}\hat g(\bm k)e^{-i\bm k\cdot (\bm x-\bm y)}=g(\bm x-\bm y),
\label{grassmann_gaussian_measure_x_space_PBC}
\end{equation}
where we denoted with $P$ the {\it Grassmann Gaussian integration} associated to the propagator $g$ in (\ref{free_propagator_PBC}).
\subparagraph{Expectation functional}
By calling $P(d\psi)$ a {\it Gaussian fermionic integration} with covariance $g$ we mean that, for any analytic function $F$ defined on the Grassmann algebra, we can define an {\it expectation functional}
\begin{equation}
\int P(d\psi)F(\psi)=\mathcal{E}(F).
\label{expectation}
\end{equation}
\begin{rem}
$P(d \psi)$ is not a measure in the usual sense, since it does not satisfy the positivity condition; we use the terminology of expectation $\mathcal E$ by analogy.
\end{rem}
\subparagraph{Truncated expectation functions} Given $p$ functions $X_1,\dots, X_p$ defined on the Grassmann algebra and $p$ integer numbers $n_1,\dots,n_p,$ the {\it truncated expectation} is defined as
\begin{equation}
\mathcal{E}^T\left(X_1,\dots,X_p;n_1,\dots,n_p\right)= \frac{\partial^{n_1+\dots+n_p}}{\partial_{\lambda_1}^{n_1}\dots \partial_{\lambda_p}^{n_p}} \left .\log \int P(d\psi)e^{\lambda_1X_1(\psi)+\dots+ \lambda_pX_p(\psi)}\right|_{\lambda=0},
\label{expectation_truncated}
\end{equation}
where $\lambda=\left(\lambda_1,\dots,\lambda_p\right)$. We will use the notation
\begin{equation}
\mathcal E^T(X_1,\dots, X_p):=\mathcal E^T\left(X_1,\dots, X_p;\underbrace{1,\dots,1}_{p \mbox{ times }}\right).
\end{equation}
\\
In particular,
\begin{equation}
\mathcal{E}^T\left(X;n\right)=\frac{\partial^n}{\partial \lambda^n}\log \left . \int P(d\psi)e^{\lambda X(\psi)}\right |_{\lambda=0},
\end{equation}
and
\begin{equation}
\log \int P(d\psi)e^{X(\psi)}=\sum_{n=0}^{\infty} \frac{1}{n!} \frac{\partial^n}{\partial \lambda^n}\log \left .\int P(d\psi)e^{\lambda X(\psi)}\right|_{\lambda=0}=\sum_{n=0}^{\infty}\frac{1}{n!}\mathcal{E}^T\left(X;n\right).
\label{log_grassmann_integral_free_energy}
\end{equation}
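At the lowest orders, definition (\ref{expectation_truncated}) reproduces the familiar cumulants:
\begin{equation}
\mathcal{E}^T\left(X;1\right)=\mathcal{E}(X),\qquad \mathcal{E}^T\left(X;2\right)=\mathcal{E}\left(X^2\right)-\mathcal{E}(X)^2,\qquad \mathcal{E}^T\left(X_1,X_2\right)=\mathcal{E}\left(X_1X_2\right)-\mathcal{E}(X_1)\mathcal{E}(X_2),
\end{equation}
so the truncated expectations are the connected parts of the ordinary expectations; this is why the logarithm in (\ref{log_grassmann_integral_free_energy}) is naturally expanded in terms of them.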
\paragraph{Properties of Grassmann integrals and expectation functions}
\begin{itemize}
\item {\bf Wick rule} Given two sets of labels $\left\{\alpha_1,\dots,\alpha_n\right\},\left\{\beta_1,\dots,\beta_m \right\}\subset A$, it holds
\begin{equation}
\int P(d\psi) \psi_{\alpha_1}^-\dots\psi_{\alpha_n}^- \psi_{\beta_1}^+\dots\psi_{\beta_m}^+=\delta_{n,m} \sum_{\Gamma}\sum_{\pi}(-1)^{p_{\pi}}\prod_{\Gamma \ni \ell=(\bm x_i,\bm x_{\pi(j)})}g(\ell).
\label{wick_rule}
\end{equation}
where the sum over $\Gamma$ is the sum over all the possible pairings (or Feynman graph configurations) and the product over $\ell$ is the product over all the possible contractions compatible with the configuration $\Gamma$.
\item {\bf Addition principle} Given two Grassmann measures $P(d\psi_1)$ with covariance $g_1$ and $P(d\psi_2)$ with covariance $g_2$, for any analytic function $F(\psi)$ defined on the Grassmann algebra with $\psi=\psi_1+\psi_2$, it holds
\begin{equation}
\int P(d\psi_1)\int P(d\psi_2)F\left(\psi_1+\psi_2\right)= \int P(d\psi)F(\psi),
\label{addition_principle}
\end{equation}
with $P(d\psi)$ associated to a covariance $g=g_1+g_2$.
\item {\bf Invariance of exponentials} Using the definition of truncated expectation (\ref{expectation_truncated}) it follows that, if $\phi$ is an external field (meaning that $\phi$ is not involved in the integration process),
\begin{equation}
\int P(d\psi)e^{X(\psi+\phi)}=\exp \left[\sum_{n=0}^{\infty}\frac{1}{n!}\mathcal{E}^T\left(X\left(\cdot+\phi\right);n\right) \right]=:e^{X'(\phi)}.
\label{invariance_of_exponential}
\end{equation}
\item {\bf Change of integration measure} Let $P_g(d\psi)$ be the integration measure with covariance $g$. Then, for any analytic function $F(\psi)$ defined on the Grassmann algebra, it holds
\begin{equation}
\frac{1}{N_{\nu}}\int P_g(d\psi)e^{-\nu\psi^+\psi^-}F(\psi)= \int P_{\tilde g}(d\psi)F(\psi),
\label{change_of_integration_measure_property}
\end{equation}
where $\tilde{g}^{-1}=g^{-1}+\nu$ and $N_{\nu}=\frac{g^{-1}+\nu}{g^{-1}}=1+g\nu=\int P_g(d\psi) e^{-\nu\psi^+\psi^-}$.
\end{itemize}
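In the case of a single pair of Grassmann variables, the change of integration measure can be verified directly: writing $P_g(d\psi)=g\,d\psi^+d\psi^-\,e^{-g^{-1}\psi^+\psi^-}$ and using the single-variable Gaussian integrals computed above, we get
\begin{equation}
\frac{1}{N_{\nu}}\int P_g(d\psi)\,e^{-\nu\psi^+\psi^-}\psi^-\psi^+=\frac{g\int d\psi^+d\psi^-\,e^{-\left(g^{-1}+\nu\right)\psi^+\psi^-}\psi^-\psi^+}{1+g\nu}=\frac{g}{1+g\nu}=\frac{1}{g^{-1}+\nu}=\tilde g,
\end{equation}
in agreement with (\ref{change_of_integration_measure_property}).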
\paragraph{Free energy}
Using these definitions and the Feynman rules described above, we can rewrite equation (\ref{Tr(cdot)/Tr}) as
\begin{equation}
\label{Tr/Tr_as_grassmann_integral}
\frac{Tr\left(e^{-\beta H}\right)}{Tr\left(e^{-\beta H_0}\right)}=\lim_{M\to\infty}\int P_M(d\psi)e^{-\mathcal V(\psi)},
\end{equation}
where
\begin{equation}
\mathcal{V}=\lambda\int_0^{\beta} dx_0\int_0^{\beta}dy_0\sum_{x,y\in\Lambda}\psi_{\bm x}^+\psi_{\bm x}^-v(x-y)\delta_{x_0,y_0}\psi_{\bm y}^+\psi_{\bm y}^-+\nu \int_{[0,\beta)}dx_0 \sum_{x\in\Lambda}\psi^+_{\bm x}\psi^-_{\bm x},
\label{interaction_grassmann}
\end{equation}
and $e^{-\mathcal V(\psi)}$ must be identified with its Taylor series in $\lambda$ and $\nu$, which is finite for every finite $M$, due to the anticommutation rules of the Grassmann variables and the fact that the Grassmann algebra is finite for every finite $M$. A priori, equation (\ref{Tr/Tr_as_grassmann_integral}) has to be read as an equality between formal power series in $\lambda$ and $\nu$; however, it can be given a {\it non-perturbative meaning}, provided we can prove the convergence of the Grassmann integral on the r.h.s., e.g. by establishing analyticity in a complex disc. \\
Using (\ref{Tr/Tr_as_grassmann_integral}), we can compute the specific free energy (\ref{free_energy_specific_PBC}) provided we are able to check that the r.h.s. of (\ref{Tr/Tr_as_grassmann_integral}) is analytic in a domain that is uniform in $M,\beta,\Lambda$, and that it converges to a well defined analytic function uniformly as $M\to\infty$; in fact, this will be the main goal of this chapter. Let us start by rewriting the specific free energy as:
\begin{equation}
f_{\Lambda, \beta}:=-\frac{1}{\beta L}\sum_{N\geq 1}\frac{(-1)^N}{N!}\mathcal{E}^T\left(\mathcal{V};N\right),
\label{free_energy_as_sum_of_trunc_expec}
\end{equation}
where the {\it expectation functionals} have been already defined, and now we will discuss how to compute them. We underline that we slightly abused notation: the function $f_{\Lambda,\beta}$ just defined is actually the {\it difference between the specific free energy of the interacting system and the specific free energy of the free system} $f_{0,\Lambda,\beta}=-\frac{1}{|\Lambda|\beta} \log Tr(e^{-\beta H_0})$.
\subsection{How to compute truncated expectations}
\label{subsection_How_to_compute_truncated_expectations}
\paragraph{Feynman graphs}
We have already described the most immediate way to compute truncated expectation functions when we listed the {\it Feynman rules} to compute the expectations values in (\ref{free_energy_as_sum_of_trunc_expec}), getting the result we recall here.\\
Given $s$ sets of indices $P_1,\dots,P_s$, we define for each of those
\begin{equation}
\tilde{\psi}\left(P_i\right)=\prod_{f\in P_i}\psi^{\sigma(f)}_{\bm x(f)},
\end{equation}
where $\sigma(f)\in\{\pm\}$ and $\bm x(f)\in \Lambda\times\left[0,\beta\right)$. Then,
\begin{equation}
\mathcal{E}\left(\tilde{\psi}\left(P_1\right),\dots,\tilde{\psi}\left(P_s\right)\right)=\sum_{\Gamma\in\mathcal{G}_0}Val(\Gamma),
\label{expectation_truncated_s_sets}
\end{equation}
where $\Gamma$ is a Feynman graph belonging to the family of all possible Feynman graphs $\mathcal{G}_0$, and $Val(\Gamma)$ includes the integration over the space-time labels $\bm x_i$: for instance let $\Gamma\in\mathcal G_{0,N}$, where $\mathcal G_{0,N}$ is the family of all possible Feynman graphs of order $N$,
\begin{equation}
Val(\Gamma)=\sum_{1\leq k+l\leq N}\nu^k\lambda^l \int d\bm x_1\dots d\bm x_N(-1)^{p_\pi}\prod_{\ell\in\Gamma}g_{\ell}
\label{value_of_a_feynman_graph}
\end{equation}
where, as explained in the list of the rules, $p_{\pi}$ is the parity of the permutation, and $\ell$ runs over the set of all the lines belonging to the Feynman graph. As we already commented in the general discussion of expectations,
\begin{equation}
\mathcal E^T(\mathcal V; N)=\sum_{\Gamma\in\mathcal G^T_{0,N}} Val(\Gamma),
\end{equation}
where $\mathcal G^T_{0,N}\subset \mathcal G_{0,N}$ is the set of {\it connected Feynman diagrams.}\\
These considerations, and the fact that we can compute $Val(\Gamma)$ using the Feynman rules, allow us to derive a very rough {\it upper bound} on the $N$-th order contribution to $f_{\Lambda,\beta}$ that, thanks to (\ref{free_energy_as_sum_of_trunc_expec}), is
\begin{equation}
f_{\Lambda,\beta}^{(N)}:=-\frac{1}{|\Lambda|\beta}\frac{(-1)^N}{N!}\mathcal E^T(\mathcal V;N).
\label{free_energy_specific_N_th_order}
\end{equation}
\begin{lem}
\label{lemma_bounds_no_multiscale_no_determinants}
Let $\epsilon:=\max\{|\lambda|,|\nu|\}$ and let $|\mathcal G_{0,N}^T|$ be the number of connected Feynman diagrams of order $N$. Then
\begin{equation}
\begin{split}
|f_{\Lambda,\beta}^{(N)}|\leq \frac{1}{\beta|\Lambda|}\frac{1}{N!}\sum_{\Gamma\in\mathcal G_{0,N}^T}|Val(\Gamma)|\leq \frac{|\mathcal G_{0,N}^T|}{N!}\epsilon^N||g||_{\infty}^{N+1}||g||_1^{N-1}\leq \\
\leq \left(C\epsilon\right)^N N! M^{N+1}\beta^{N-1}.
\end{split}
\end{equation}
\end{lem}
\begin{proof}
Given $\Gamma\in\mathcal G_{0,N}^T$, select an arbitrary {\it spanning tree} in $\Gamma$ (a loopless subset of $\Gamma$ connecting all the $N$ vertices). The integral over the space-time coordinates of the product of the propagators of the spanning tree is bounded by $\beta |\Lambda| ||g||^{N-1}_1$, while the product of the remaining propagators is bounded by $||g||^{N+1}_{\infty}$. Then, we use that for some $c>0$, $|\mathcal G_{0,N}^T|\leq c^N(N!)^2$ (see Appendix A.3.3 of \cite{gentile2001renormalization}), and the estimates $||g||_\infty\leq CM$, $||g||_1\leq C\beta$, which we prove in Appendix (\ref{appendix_propagator_decay_property}).
\end{proof}
Of course this rough Lemma has two main problems:
\begin{enumerate}
\item a combinatorial problem, associated with the $N!$, which prevents us from summing over $N$ even at finite $M$ and $\beta$;
\item a divergence problem, associated with the factor $M^{N+1}\beta^{N-1}$, which diverges exponentially as $M\to\infty$ and $\beta\to\infty$.
\end{enumerate}
Problem 1) can be solved via a smarter reorganization of the perturbation theory in the form of a determinant expansion, together with a systematic use of the Gram-Hadamard bound. Problem 2) can be solved by a systematic resummation of the series, based on a multiscale integration of the theory.
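The combinatorial obstruction in problem 1) can be made concrete with a few lines of code: however small $\epsilon$ is, the terms $(C\epsilon)^N N!$ of the bound eventually grow, so the series over $N$ cannot converge, whereas a purely geometric bound would be summable. A minimal Python sketch (the values of $C$ and $\epsilon$ are illustrative, not taken from the Lemma):

```python
import math

def log_term(N, eps, C=1.0):
    # log of (C*eps)^N * N!, computed via lgamma to avoid overflow
    return N * math.log(C * eps) + math.lgamma(N + 1)

eps = 0.01  # however small, the factorial eventually wins
# ratio term(N+1)/term(N) = C*eps*(N+1) > 1 as soon as N+1 > 1/(C*eps)
terms = [log_term(N, eps) for N in (50, 100, 200, 400)]
assert terms[2] > terms[1] and terms[3] > terms[2]  # terms grow: no convergence

# a geometric bound eps^N, by contrast, decays monotonically
geo = [N * math.log(eps) for N in (50, 100, 200, 400)]
assert geo[3] < geo[2] < geo[1] < geo[0]
```

The turning point sits at $N\approx 1/(C\epsilon)$: below it the geometric decay dominates, above it the factorial growth takes over.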
\subsection{The determinant expansion}
\label{subsection_determinant_expansion}
Let us show how the first problem can be solved.\\
The basic idea is that, besides the already discussed Feynman diagram representation, there is another well known way to represent the truncated expectation. Let us consider the same setting described in the case of (\ref{expectation_truncated_s_sets}), so $s$ sets of indices $P_1,\dots,P_s$. Let $|P_i|$ be the number of elements in the set $P_i$, and let us label each element with a pair of indices $P_j\ni f:=(j,i)$, where the first index is associated with the set the element belongs to, and the second one is $i=1,\dots,|P_j|$. Finally, let us call $2n=|P_1|+\dots+|P_s|$, {\it i.e.} $n$ is the number of {\it lines} in the Feynman graphs $\Gamma\in\mathcal{G}_0$. So
\begin{equation}
\mathcal{E}^T\left(\tilde \psi\left( P_1\right),\dots, \tilde \psi \left( P_s\right)\right)= \sum_T \alpha_T \left(\prod_{\ell\in T}g_{\ell}\right)\int dP_T(\bm t)\det G^T(\bm t),
\label{expectation_truncated_determinants}
\end{equation}
where
\begin{enumerate}
\item $T$ is an {\it anchored tree} between the clusters of points $P_1,\dots, P_s$: $T$ is a set of lines becoming a tree if one identifies all the points in the same cluster;
\item $\alpha_T$ is a sign, irrelevant for the subsequent bounds;
\item $\bm t$ is the set of parameters $\bm t:=\{t_{j,j'}\in[0,1], 1\leq j,j'\leq s\}$;
\item $dP_T(\bm t)$ is a {\it normalized probability measure} with support on a set $\bm t$ which can be obtained as $t_{i,i'}=\bm u_j\cdot \bm u_{j'}$ for some family of unitary-normed vectors $\bm u_j\in\mathbb R^s$;
\item $G^T(\bm t)$ is a $(n-(s-1))\times(n-(s-1))$ matrix, whose elements are
\begin{equation}
\left[G^T(\bm t)\right]_{(j,i),(j',i')}=t_{j,j'}g(\bm x(j,i),\bm x(j',i'))
\end{equation}
where $1\leq j,j'\leq s$ and $1\leq i \leq |P_j|$, $1\leq i' \leq |P_{j'}|$ in such a way that the lines $\ell=(\bm x(j,i),\bm x(j',i'))$ do not belong to the anchored tree $T$. If $s=1$, $\sum_T$ is empty, and we shall interpret (\ref{expectation_truncated_determinants}) as
\begin{equation}
\mathcal{E}^T\left(\tilde\psi\left(P_1\right)\right)= \begin{cases}
1,\mbox{ if $P_1$ is empty},\\
\det G(\bm 1), \mbox{ otherwise },
\end{cases}
\end{equation}
where $\bm 1$ is obtained by setting $t_{j,j'}=1$ for all $j,j'$.
\end{enumerate}
\begin{rem}
If we expressed the left hand side of (\ref{expectation_truncated_determinants}) as a sum over all possible Feynman graphs, we would actually expand the sum into $O((s!)^2)$ terms (where $s$, as in the previous list, is the number of clusters). The latter expression (\ref{expectation_truncated_determinants}) is instead written as a sum over the family of trees connecting the boxes. It is worth noting that, fixing a tree $T$, one can expand the determinant $\det G^T(\bm t)$ in order to obtain, {\it as expected}, all the possible graphs which can be obtained by contracting the $(n-(s-1))$ half-lines not belonging to $T$, {\it i.e.} one can recover the Feynman graph representation leading to (\ref{expectation_truncated_s_sets}). The big {\bf improvement} is in the number of terms we are summing up: in the case of the Feynman graph expansion, the sum runs over $O((s!)^2)$ terms, while in the latter case the sum runs over the anchored trees, whose number is only $O(s!)$, which morally compensates the $\frac{1}{s!}$ coming from the perturbative expansion.
\end{rem}
We do not present in this thesis the proof of the determinant representation (see \cite{gentile2001renormalization}), which is due to a fermionic reinterpretation of the interpolation formulas by Battle, Brydges and Federbush \cite{battle1984note, brydges1978new, brydges1984short}. Using (\ref{expectation_truncated_determinants}), we get that the $N$-th order of the specific free energy is
\begin{equation}
f_{\Lambda,\beta}^{(N)}=-\frac{1}{\beta|\Lambda|}\frac{(-1)^N}{N!}\epsilon^N \sum_{T\in \mathcal T_N}\alpha_T\int d\bm x_1\dots d\bm x_N \prod_{\ell\in T} g_\ell\int dP_T(\bm t)\det G^T(\bm t),
\label{free_energy_determinant_expansion_gram_hadamard}
\end{equation}
which definitely improves the rough bound of the previous Lemma. Indeed, using the fact that the number of anchored trees in $\mathcal T_N$ is bounded by $C^NN!$ for some $C>0$ (see \cite{gentile2001renormalization}, A.3.3), we get
\begin{equation}
|f_{\Lambda,\beta}^{(N)}|\leq c^N\epsilon^N||g||_1^{N-1}||\det G^T(\cdot)||_\infty.
\end{equation}
Then, in order to bound $||\det G^T||_\infty$, we use the {\it Gram-Hadamard inequality},
\begin{lem}[Gram-Hadamard inequality]
\label{lemma_gram_hadamard_inequality}
If $M$ is a square matrix with elements $M_{ij}$ of the form $M_{ij}=\left<A_i,B_j\right>$, where $A_i$ and $B_j$ are vectors in a Hilbert space with scalar product $\left<\cdot,\cdot\right>$, then
\begin{equation}
\left|\det M\right|\leq \prod_{i}||A_i||\, ||B_i||
\end{equation}
where $||\cdot||$ is the norm induced by the scalar product.
\end{lem}
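As a quick numerical illustration (not needed for the proofs), the inequality can be checked on a small matrix of scalar products; the vectors below are chosen arbitrarily:

```python
import math

def det3(m):
    # Laplace expansion of a 3x3 determinant
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def dot(u, v): return sum(x*y for x, y in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

# arbitrary Gram-type matrix M_ij = <A_i, B_j>
A = [(1.0, 2.0, -1.0), (0.5, -1.0, 3.0), (2.0, 0.0, 1.0)]
B = [(-1.0, 1.0, 0.0), (2.0, 2.0, -1.0), (0.0, 1.5, 1.0)]
M = [[dot(a, b) for b in B] for a in A]

lhs = abs(det3(M))
rhs = 1.0
for a, b in zip(A, B):
    rhs *= norm(a) * norm(b)
assert lhs <= rhs  # Gram-Hadamard bound holds
```

The point of the bound in our setting is that it produces $c^N$ instead of a factorial: the determinant of an $n\times n$ Gram matrix is controlled by a product of $n$ norms, with no sum over the $n!$ terms of its Leibniz expansion.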
We do not prove this result (we refer {\it e.g.} to \cite{gentile2001renormalization}, Theorem A.1), but we use it to state the following Lemma.
\begin{lem}
\label{lemma_gram_hadamard_for_G}
Provided we are able to prove that $t_{j,j'}g(\bm x(j,i),\bm x(j',i'))$ can be obtained as a scalar product in a suitable Hilbert space, we can use the Gram-Hadamard inequality to bound:
\begin{equation}
||\det G^T||_\infty\leq c^N ||g||_\infty^{N+1}.
\end{equation}
\end{lem}
Recalling that, as we already mentioned, $||g||_\infty\leq c M$, we obtain
\begin{equation}
|f_{\Lambda,\beta}^{(N)}|\leq c^N\epsilon^N M^{N+1}\beta^{N-1}.
\end{equation}
The proof of the assumption that the entries of $G^T$ can be written as scalar products is a special case of Appendix (\ref{appendix_gram_representation}).
\begin{rem}
Now, the r.h.s. of the latter bound is summable over $N$ for $\epsilon$ small enough, even though not uniformly in $M$ and $\beta$.\\
Proving that the r.h.s. of (\ref{free_energy_determinant_expansion_gram_hadamard}) is well defined uniformly in the cut-offs is a nontrivial problem that requires the {\bf multiscale analysis} we are going to explain in the next section.
\label{remark_necessity_multiscale_analysis}
\end{rem}
\section{Interacting case: the multiscale analysis}
\label{section_multiscale_analysis}
In this section we explain how to set up a multiscale procedure to perform iterative resummations, in order to re-express the specific free energy in terms of a modified expansion whose $N$-th order term is summable in $N$ and converges uniformly when the cut-offs are removed.
\subsection{Ultraviolet and infrared regimes, effective potential}
We wish to compute the specific free energy $f_{\Lambda,\beta, M}=-\left(|\Lambda|\beta\right)^{-1}\log \Xi_{\Lambda,\beta, M}$, where the {\it partition function} is defined as
\begin{equation}
\Xi_{\Lambda,\beta,M}:=\int P_M(d\psi)e^{-\mathcal V(\psi)}.
\label{partition_function}
\end{equation}
First of all, let us fix the chemical potential: let $p_F=2\pi n_F/L$, $n_F\in\mathbb{N}$ and $\mu_0=1-\cos p_F$.\\
Dealing with fermions, we are interested in the excitations near the Fermi surface (which, in dimension one, is the pair of points $\pm p_F$), so it is useful to look at the momenta relative to $\pm p_F$: $k =k' \pm p_F$. We can then rewrite the dispersion $e(k)$ as $$\cos p_F - \cos(k'\pm p_F)= \cos p_F- \left( \cos k'\cos p_F \mp \sin k' \sin p_F\right),$$ so that, near the singularities ({\it i.e.} for $k'\sim 0$), we can consider the linear approximation of the free propagator (\ref{free_propagator_PBC})
\begin{equation}
\hat g(\pm p_F+k',k_0)\sim\frac{1}{-ik_0\pm k' \sin p_F}.
\label{free_propagator_PBC_linear_approx}
\end{equation}
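The quality of this linear approximation can be checked numerically: the error in replacing $\cos p_F-\cos(k'\pm p_F)$ by $\pm k'\sin p_F$ is $O(k'^2)$. A small Python sketch, with an arbitrary illustrative value of $p_F$:

```python
import math

p_F = 0.6           # arbitrary Fermi momentum, for illustration only
v0 = math.sin(p_F)  # Fermi velocity sin(p_F)

def e(k):
    # dispersion measured from the chemical potential mu_0 = 1 - cos p_F
    return math.cos(p_F) - math.cos(k)

for kp in (1e-2, 1e-3, 1e-4):
    for omega in (+1, -1):
        exact = e(kp + omega * p_F)
        linear = omega * kp * v0
        # the residual is dominated by the quadratic term cos(p_F) * kp^2 / 2
        assert abs(exact - linear) < kp**2
```

This is just the Taylor expansion of the denominator of (\ref{free_propagator_PBC_linear_approx}); the quadratic correction is one of the "irrelevant" terms the multiscale analysis will keep under control.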
\begin{rem}
\label{remark_linear_part_is_luttinger_propagator}
This approximation, besides being the dominant part and thus carrying the {\it physical information} of the theory, corresponds to the propagator of an infrared Luttinger liquid model ({\it i.e.} the Luttinger model with an ultraviolet cut-off, on which we will comment further when we study the flow of the running coupling constants). Despite the fact that the infrared Luttinger model, differently from the original Luttinger model (without an ultraviolet cut-off), is not exactly solvable by bosonization, we will use it as a reference model to study the flow of the running coupling constants.
\end{rem}
In order to split the whole momentum space into the union of annuli, first of all we define
\begin{equation}
|\bm k'|=\sqrt{k_0^2+v_0||k'||_{\mathbb T}^2},
\end{equation}
where $||k'||_{\mathbb T}=\min_{n\in\mathbb{Z}}|k'-2\pi n|$ and $v_0=\sin p_F=\left .\frac{d }{dk}e(k)\right|_{k=p_F}$. So, we introduce a $C^{\infty}$ function $\chi:\mathcal{D}_{\Lambda}\times \mathcal{D}_{\beta, M}\to [0,1]$ defined in such a way that
\begin{equation}
\chi(\bm k')=
\begin{cases}
1, \mbox{ if } |\bm k'|\leq \gamma^{-1} p_F/2 ,\\
0, \mbox{ if } |\bm k'|\geq p_F/2,
\end{cases}
\label{cut_off_chi_definition}
\end{equation}
where $\gamma >1$, and $|\bm k|=\sqrt{k_0^2+k^2}$. So, using $$1=1-\chi (k+p_F,k_0)-\chi(k-p_F,k_0)+\chi (k+p_F,k_0)+\chi(k-p_F,k_0)$$ we define the {\it ultraviolet and infrared propagators as follows}:
\begin{equation}
\hat{g}(\bm k)=\underbrace{\frac{1-\chi (k+p_F,k_0)-\chi(k-p_F,k_0)}{-ik_0+\cos p_F-\cos k}}_{\hat g^{(u.v.)}(\bm k)}+\underbrace{\frac{\chi(k+p_F,k_0)+\chi(k-p_F,k_0)}{-ik_0+\cos p_F-\cos k}}_{\hat g^{(i.r.)}(\bm k)}.
\end{equation}
Now, using the {\it addition principle} (\ref{addition_principle}),
we can introduce for any $\bm k\in\mathcal{D}_{\Lambda,\beta,M}$ a pair of Grassmann variables $\left(\psi^{(u.v.)}_{\bm k},\psi^{(i.r.)}_{\bm k}\right)$ with propagators respectively $\hat g^{(u.v.)}(\bm k)$ and $\hat g^{(i.r.)}(\bm k )$; so, given the potential $\mathcal{V}(\psi)$, we can split the integration as
\begin{equation}
\int P(d\psi)e^{-\mathcal{V}\left(\psi\right)}=\int P(d\psi^{(i.r.)})\int P(d\psi^{(u.v.)})e^{-\mathcal{V}\left(\psi^{(u.v.)}+\psi^{(i.r.)}\right)}
\end{equation}
Finally, we can use the {\it invariance of exponentials} (\ref{invariance_of_exponential}) and define the {\it effective potential at scale $0$} through
\begin{equation}
\begin{split}
e^{-\beta |\Lambda| f^{(M)}_{\Lambda,\beta}}=\int P(d\psi^{(i.r.)}) \exp \left( \sum_{n\geq 1}\frac{1}{n!}\mathcal E_{u.v.}^T\left(-\mathcal V\left(\psi^{(i.r.)}+\cdot\right);n\right)\right):=\\ := e^{-\beta |\Lambda| e_{M,0}}\int P(d\psi^{(i.r.)})e^{-\mathcal V^{(0)}(\psi^{(i.r.)})}.
\end{split}
\end{equation}
where $\mathcal{E}_{u.v.}^T\left(\mathcal{V}\left(\cdot+\psi^{(i.r.)}\right);n\right)$ means that we are computing the truncated expectation functions with respect to the Gaussian Grassmann measure $P_{(u.v.)}$ associated with the propagator $\hat g^{(u.v.)}$, keeping the Grassmann variable $\psi^{(i.r.)}$ as an external field, and the effective potential $\mathcal V^{(0)}(\psi)$ can be written as
\begin{equation}
\mathcal V^{(0)}(\psi)=\sum_{n=1}^{\infty}\sum_{\substack{ \bm x_1,\dots,\bm x_{2n}\\ \in\\ \Lambda\times [0,\beta)}} \left(\prod_{j=1}^{n} \psi^{(i.r.)+}_{\bm x_{2j-1}}\psi^{(i.r.)-}_{\bm x_{2j}} \right) W_{M,2n} (\bm x_1,\dots,\bm x_{2n}).
\label{effective_potential_scale_0}
\end{equation}
\begin{lem}[Ultraviolet integration]
\label{lemma_ultraviolet_integration}
The kernels $W_{M,2n}(\bm x_1,\dots, \bm x_{2n})$ in the previous expansion are given by power series in $\lambda$ convergent in the complex disc $|\lambda|\leq \lambda_0$ for $\lambda_0$ small enough and independent of $M,\Lambda, \beta$, and satisfy the following bound
\begin{equation}
\frac{1}{\beta |\Lambda|}\int d\bm x_1 \dots d\bm x_{2n} \left| W_{M,2n}(\bm x_1,\dots,\bm x_{2n}) \right|\leq C^n |\lambda|^{\max\{1,n-1\}}.
\end{equation}
Moreover, the limits $e_0=\lim_{M\to \infty} e_{M,0}$ and $W_{2n}(\bm x_1,\dots,\bm x_{2n})=\lim_{M\to \infty}W_{M,2n}(\bm x_1,\dots,\bm x_{2n})$ exist and are reached uniformly in $M$.
\end{lem}
We do not prove this Lemma because, even if the proof is not trivial, it is simpler than what we will do in studying the infrared regime, and it uses the same techniques: we refer to \cite{benfatto1993beta} or to \cite{giuliani2009rigorous,giuliani2011ground}, in which the ultraviolet regime is studied by a multiscale analysis in order to deal with the (very mild) singularity of the free propagator at equal imaginary times. Anyway, the multiscale analysis for the ultraviolet regime is not strictly necessary, and it may be possible to avoid it following the ideas of \cite{pedra2008determinant}.
\begin{rem}
The fact that the limits are reached {\it uniformly} in $M$ tells us that the infrared problem is essentially independent of $M$. Since in the infrared region $M$ does not play any role, from now on we drop the label $M$.
\end{rem}
What we have just explained is, technically, the first step of Wilson's idea: indeed, we have integrated out the physical information coming from the high-energy degrees of freedom (far away from the singularities of the infinite volume free propagator), and we are left with an effective theory, described by the effective potential $\mathcal{V}^{(0)}$, describing fermions with momenta a bit closer to the singularities. Of course, the information coming from the higher-energy degrees of freedom is averaged into the effective potential. It will become clear later in this section how we keep track of this information in the effective potential by changing the so called coupling constants.
\subsection{Quasi-particles and multiscale expansion}
\paragraph{Quasi-particles in momentum space} As we have already noticed, there are two points in which $\hat g(\bm k)$ is singular and, of course, having integrated a slice of momenta far away from the singularities (ultraviolet integration), the infrared propagator is still singular at the same two points $\pm p_F$. So, driven by the idea of using the {\it addition principle} again, it is worth defining
\begin{equation}
\hat g^{(i.r.)}(\bm k)=\sum_{\omega=\pm 1}\frac{\chi\left(k-\omega p_F,k_0\right)}{-ik_0+\cos p_F-\cos k}=:\sum_{\omega=\pm}\hat g^{(i.r.)}_{\omega}(\bm k)
\end{equation}
allowing us to write
\begin{equation}
\int P(d\psi^{(i.r.)})e^{\mathcal{V}^{(0)}\left(\psi^{(i.r.)}\right)}=\prod_{\omega=\pm 1}\int P(d\psi_{\omega}^{(i.r.)})e^{\mathcal{V}^{(0)}\left(\psi_+^{(i.r.)}+\psi_-^{(i.r.)}\right)}
\label{quasi_partice_PBC_definition}
\end{equation}
which is the definition of the {\it quasi-particle} Grassmann fields; the {\it label} $\omega$ is sometimes called the {\it branch label}: for readers familiar with Luttinger liquid theory, near the singularities we consider the linear approximation of the free propagator (\ref{free_propagator_PBC_linear_approx}), and $\omega=\pm$ labels the {\it right-moving and left-moving fermions}.
\paragraph{Quasi-particles in real space-time} We introduced the cutoff in momentum space because we want to get closer and closer to the singularities at $(\pm p_F,0)$. It is worth keeping in mind that we are going to plug in the strategy we introduced in subsection (\ref{subsection_How_to_compute_truncated_expectations}), in particular formula (\ref{expectation_truncated_determinants}), so we will need to build up the matrix $G^T$ we introduced in (\ref{expectation_truncated_determinants}), and it is well known how to do it in real space. So it is convenient to define the quasi-particles in real space-time starting from the Fourier transform of the propagator $\hat g^{(i.r.)}$:
\begin{equation}
\begin{split}
g^{(i.r.)}(\bm x-\bm y)=\frac{1}{L\beta}\sum_{\omega=\pm 1} \sum_{\bm k\in \mathcal{D}_{\Lambda,\beta}}\frac{e^{-i k_0(x_0-y_0)}e^{-ik(x-y)}}{-ik_0+e(k)}\chi(k-\omega p_F,k_0)=\\
=\frac{1}{L\beta}\sum_{\omega=\pm 1}\sum_{k'\in\mathcal{D}^\omega_{\Lambda,\beta}} \frac{e^{-i k_0(x_0-y_0)}e^{-i\omega p_F(x-y)}e^{-ik'(x-y)}}{-ik_0+e(k'+\omega p_F)}\chi(k',k_0)=\\
=: \sum_{\omega = \pm 1}e^{-i\omega p_F(x-y) }g^{(i.r.)}_{\omega}(\bm x-\bm y)
\end{split}
\label{free_propagator_infrared_quasi_particles}
\end{equation}
where $\mathcal{D}^\omega_{\Lambda,\beta}=\mathcal{D}_{\Lambda,\beta}-(\omega p_F, 0)$ and
\begin{equation}
g^{(i.r.)}_{\omega}(\bm x-\bm y)=\frac{1}{L\beta}\sum_{k'\in\mathcal{D}^\omega_{\Lambda,\beta}} \frac{e^{-i\bm k'\cdot (\bm x-\bm y)}}{-ik_0+e(k'+\omega p_F)}\chi(k',k_0).
\end{equation}
\paragraph{Multiscale expansion}
The idea is to approach the singularities in infinitely many steps. So, we introduce the telescopic identity $$\chi (\bm k)= \sum_{h=-\infty}^0 \left(\chi(\gamma^{-h}\bm k)-\chi(\gamma^{-h+1}\bm k)\right)=:\sum_{h=-\infty}^0f_h(\bm k), $$
which implies the obvious definition of {\it propagators on single scale}
\begin{equation}
\hat g^{(i.r.)}_\omega(\bm k)=\sum_{h=-\infty}^0\frac{f_h(k-\omega p_F,k_0)}{-ik_0+\cos p_F-\cos k}=:\sum_{h=-\infty}^0 \hat g^{(h)}_\omega(\bm k)=:\hat g^{(\leq 0)}_\omega(\bm k).
\label{propagators_splitted_on_all_scales}
\end{equation}
\begin{rem}
In fact, as long as $L$ and $\beta$ are finite, the sum over $h$ is a sum over finitely many terms ({\it i.e.} we have a {\it natural} cut-off): indeed, by the very definition of $\mathcal D_{\beta}$ (\ref{momenta_space_time}), $|k_0|\geq 2\pi/\beta$, so that $f_h(\bm k)=0$ for any $h<h_\beta$ where $$h_\beta =\min \left\{h: \gamma^{h+1}>\pi/\beta \right\},$$ {\it i.e.} $h_\beta=O\left(\log \beta\right)$. So, as already pointed out, we perform our computations keeping $L$ and $\beta$ finite and then, having obtained bounds independent of $L$ and $\beta$, we take the thermodynamic and the zero temperature limits.
\end{rem}
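The single-scale decomposition is purely telescopic, so it can be verified directly: the partial sum $\sum_{h=H}^0 f_h(\bm k)$ equals $\chi(\bm k)-\chi(\gamma^{-H+1}\bm k)$, which reaches $\chi(\bm k)$ exactly as soon as $\gamma^{-H+1}|\bm k|$ leaves the support of $\chi$. A Python sketch with a piecewise-linear stand-in for $\chi$ (the telescoping does not use smoothness):

```python
GAMMA = 2.0  # any gamma > 1

def chi(x):
    # compactly supported cutoff: 1 on [0, 1/2], 0 outside [0, 1), linear between
    x = abs(x)
    if x <= 0.5:
        return 1.0
    if x >= 1.0:
        return 0.0
    return 2.0 * (1.0 - x)

def f(h, k):
    # single-scale function f_h(k) = chi(gamma^{-h} k) - chi(gamma^{-h+1} k)
    return chi(GAMMA**(-h) * k) - chi(GAMMA**(-h + 1) * k)

k = 0.3
H = -30  # deep enough that chi(gamma^{-H+1} k) = 0: the natural i.r. cutoff
partial = sum(f(h, k) for h in range(H, 1))
assert abs(partial - chi(k)) < 1e-12  # telescoping recovers chi exactly
```

This mirrors the remark above: for $\bm k\neq 0$ only finitely many $f_h(\bm k)$ are nonzero, so at finite $\beta$ the sum over scales truncates at $h_\beta$.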
Again, by combining the {\it addition principle} (\ref{addition_principle}) and the {\it invariance of the exponential} (\ref{invariance_of_exponential}), we can split the Grassmann field $\psi_{\omega}^{(\leq 0)}$ into two Grassmann fields $\psi_{\omega}^{(0)}$ and $\psi_{\omega}^{(\leq -1)}$ with propagators respectively $\hat g^{(0)}_\omega$ and
\begin{equation}
\hat g_\omega^{(\leq -1)}(\bm k)=\sum_{h\leq -1}\hat g^{(h)}_{\omega}(\bm k),
\end{equation}
or, more generally, $\psi^{(h+1)}_\omega$ and $\psi^{(\leq h)}_\omega$ with propagators respectively $\hat g^{(h+1)}_\omega$ and
\begin{equation}
\hat g_\omega^{(\leq h)}(\bm k)=\sum_{j\leq h}\hat g^{(j)}_\omega(\bm k),
\end{equation}
by which we can compute the effective potential on scale $-1$ by
\begin{eqnarray}
\begin{aligned}
\int P(d\psi^{(\leq 0)})e^{\mathcal{V}^{(0)}(\psi^{(\leq 0)})}=\int P(d\psi^{(\leq -1)})\int P(d\psi^{(0)})e^{\mathcal{V}^{(0)}(\psi^{(\leq 0)})}=\\
=: e^{|\Lambda|\beta e_{0}}\int P(d\psi^{(\leq -1)})e^{\mathcal{V}^{(-1)}(\psi^{(\leq -1)})},\\
\end{aligned}\\
\begin{aligned}
|\Lambda|\beta e_0+\mathcal{V}^{(-1)}\left(\psi^{(\leq -1)}\right)=\sum_{n=0}^{\infty}\frac{1}{n!} \mathcal{E}_0^T\left(\mathcal{V}^{(0)}\left(\cdot+\psi^{(\leq -1)}\right);n\right)=\\
=\sum_{n=0}^{\infty}\frac{1}{n!} \mathcal{E}_0^T\left(\sum_{m=0}^{\infty}\frac{1}{m!} \mathcal{E}_{(u.v.)}^T\left(\mathcal{V}\left(\cdot+\psi^{(\leq 0)}\right);m\right);n\right).
\end{aligned}
\end{eqnarray}
where $\mathcal V^{(-1)}\left(0\right)=0$ and, iteratively, for any scale $h$ we can define an effective potential by
\begin{eqnarray}
\begin{aligned}
e^{\mathcal{V}^{(h)}\left(\psi^{(\leq h)}\right)}e^{+|\Lambda|\beta e_{h+1}}=\\=\int P(d\psi^{(h+1)})\dots \int P(d\psi^{(0)}) \int P(d\psi ^{(u.v.)})e^{\mathcal{V}\left(\psi^{(\leq h)}+\psi^{(h+1)}+\dots+\psi^{(u.v.)}\right)} ,
\end{aligned}\\
|\Lambda|\beta e_{h+1}+\mathcal{V}^{(h)}\left(\psi^{(\leq h)}\right)=\sum_{n=0}^{\infty} \frac{1}{n!}\mathcal{E}^T_{h+1}\left(\mathcal{V}^{(h+1)}(\cdot+\psi^{(\leq h)});n\right).
\label{effective_potential_scale_h_recursive}
\end{eqnarray}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
[scale=1, transform shape]
\node at (1,3) {$\mathcal V^{(-1)}$ =};
\node at (1,0) {$\mathcal V^{(0)}$ =};
\node at (4.5, 3) {+};
\node at (7.5,3) {+};
\node at (10.5,3) {...};
\node at (4,0) {=};
\node at (7.5, 0) {+};
\node at (10.5,0) {+};
\node at (13.5,0) {...};
\draw [very thick] (2,3) -- ++ (2,0) ++ (1,0) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (1,1) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (-1,1) -- ++ (1,0);
\fill (2,3) circle (0.1);
\fill (2,3) ++ (1,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) circle (0.2);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,1) circle (0.2);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,-1) circle (0.2);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (2,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,1) circle (0.2);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,-1) circle (0.2);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,0) circle (0.2);
\draw [very thick] (2,0) -- ++ (1,0);
\fill (2,0) circle (0.1);
\fill (2,0) ++ (1,0) circle (0.2);
\draw [very thick] (5,0) -- ++ (2,0) ++ (1,0) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (1,1) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (-1,1) -- ++ (1,0);
\fill (5,0) circle (0.1);
\fill (5,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,-1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (2,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,-1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,0) circle (0.1);
\end{tikzpicture}
\caption{Graphical representation of the first step of the iteration: the graphical expression of $\mathcal V^{(-1)}$ in the first line is the same as the graphical expression of $\mathcal V^{(0)}$ in the second line, with the points replaced by big black dots; the meaning of a big black dot attached to the lines is made clear in the second line, {\it i.e.} it is a shorthand for $\mathcal V^{(0)}$.}
\label{figure_effective_potentiale_scale_0}
\end{figure}
where the truncated expectation $\mathcal E^T_{h}$ (we can think that $h$ can also assume the values $h=1\equiv u.v.$ and $(\leq h-1)=(\leq 0) \equiv (i.r.)$, so as to have a general definition), given a polynomial $F\left(\psi^{(h)}\right)$ with coefficients depending on $\psi^{(\leq h-1)}$, is defined as
\begin{equation}
\mathcal E^T_h \left(F(\cdot);n\right)=\frac{\partial^n}{\partial\lambda^n}\log \int P(d\psi^{(h)}) e^{\lambda F(\psi^{(h)})}\Bigl|_{\lambda =0},
\end{equation}
and, in the argument of the sum, we could express $\mathcal{V}^{(h+1)}$ in terms of $\mathcal{V}^{(h+2)}$, and so on, until the only potential involved in the computation is the very first one. The recursive structure of these formulae suggests their {\it diagrammatic representation}, known as Gallavotti-Nicolò trees \cite{gallavotti1985renormalization}. In order to understand how to draw these trees before a systematic explanation, it is worth looking at the effective potential on scale $-1$ (the first non trivial one): in figure (\ref{figure_effective_potentiale_scale_0}) it is expressed in terms of the {\it previous} effective potential $\mathcal{V}^{(0)}$.\\
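Note that the definition of $\mathcal E^T_h$ above is the statement that truncated expectations are {\it cumulants}, i.e. Taylor coefficients of the logarithm of a generating function. For a commuting (scalar) toy analogue this is easy to check numerically: for an exponential random variable of rate $a$ the moments are $E[X^n]=n!/a^n$ and the cumulants are $\kappa_n=(n-1)!/a^n$. A Python sketch using the standard recursion for the power series of a logarithm:

```python
import math

a = 2.0   # rate of the exponential distribution (arbitrary choice)
N = 8     # number of cumulants to check

# Taylor coefficients of the moment generating function M(t) = sum_n m_n t^n,
# with m_0 = 1 and m_n = E[X^n]/n! = 1/a^n for X ~ Exp(a)
m = [1.0] + [a**(-n) for n in range(1, N + 1)]

# coefficients c_n of log M(t), via c_n = m_n - (1/n) sum_{k=1}^{n-1} k c_k m_{n-k}
c = [0.0] * (N + 1)
for n in range(1, N + 1):
    c[n] = m[n] - sum(k * c[k] * m[n - k] for k in range(1, n)) / n

for n in range(1, N + 1):
    kappa = math.factorial(n) * c[n]           # cumulant = n! * c_n
    expected = math.factorial(n - 1) / a**n    # known cumulants of Exp(a)
    assert abs(kappa - expected) < 1e-9
```

In the Grassmann setting the "generating function" is the integral $\int P(d\psi^{(h)})e^{\lambda F}$, and taking the $\log$ before differentiating is exactly what restricts the expansion to {\it connected} contributions.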
In general, the effective potential at scale $h$ can be written as
\begin{equation}
\mathcal V^{(h)}(\psi)=\sum_{n=1}^{\infty}\sum_{\substack{ \bm x_1,\dots,\bm x_{2n}\\ \in\\ \Lambda\times [0,\beta)}} \left(\prod_{j=1}^{n} \psi^{(\leq h)+}_{\bm x_{2j-1}}\psi^{(\leq h)-}_{\bm x_{2j}} \right) W^{(h)}_{2n} (\bm x_1,\dots,\bm x_{2n}).
\label{effective_potential_scale_h}
\end{equation}
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}
\node at (-1.5,2) {=};
\node at (1.5,2) {+};
\node at (4.5,2) {+};
\node at (7.5,2) {+};
\node at (10.7,2) {+...};
\fill (-3,2) circle (0.06);
\node at (-3,2.4) {h-1};
\fill (-2,2) circle (0.17);
\node at (-2,2.4) {h};
\fill (-1,2) circle (0.06);
\node at (-1,2.4) {h-1};
\fill (0,2) circle (0.06);
\node at (0,2.4) {h};
\fill (1,2) circle (0.17);
\node at (1,2.4) {h+1};
\fill (2,2) circle (0.06);
\node at (2,2.4) {h-1};
\fill (3,2) circle (0.06);
\node at (3,2.4) {h};
\fill (4,3) circle (0.17);
\node at (4,3.4) {h+1};
\fill (4,1) circle (0.17);
\node at (4,1.4) {h+1};
\fill (5,2) circle (0.06);
\node at (5,2.4) {h-1};
\fill (6,2) circle (0.06);
\node at (6,2.4) {h};
\fill (7,2) circle (0.17);
\node at (7,2.4) {h+1};
\fill (7,3) circle (0.17);
\node at (7,3.4) {h+1};
\fill (7,1) circle (0.17);
\node at (7,1.4) {h+1};
\fill (8,2) circle (0.06);
\node at (8,2.4) {h-1};
\fill (9,2) circle (0.06);
\node at (9,2.4) {h};
\fill (10,3) circle (0.17);
\node at (10,3.4) {h+1};
\fill (10,1) circle (0.17);
\node at (10,1.4) {h+1};
\fill (10,1.7) circle (0.17);
\node at (10,2.0) {h+1};
\fill (10,2.3) circle (0.17);
\node at (10,2.6) {h+1};
\draw [very thick] (-3,2) -- ++(1,0) ++ (1,0) -- ++ (1,0) -- ++ (1,0) ++ (1,0)
-- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++(1,-1) ++ (0,1) ++ (1,0) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (-1,1) -- ++ (1,0) ++ (1,0) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (-1,1) -- ++ (1,0.3) ++ (-1,-0.3) -- ++ (1, -0.3);
\end{tikzpicture}
\end{center}
\caption{Graphical representation of $\mathcal V^{(h-1)}$, where the big black dot represents $\mathcal V^{(h)}$. It should be thought of as the generalization to a generic scale $h$ of figure (\ref{figure_effective_potentiale_scale_0}).}
\end{figure}
\subsection{Gallavotti-Nicolò trees}
So far we have rewritten the quantities we are interested in, such as the {\it specific free energy} (\ref{free_energy_specific_PBC}) and the {\it Schwinger functions} (\ref{schwinger_function_n_points_PBC}), by combining the {\it Grassmann integral representation} (which implies that we can express these quantities just in terms of {\it truncated expectation functions} of some simple objects, such as the effective interaction (\ref{free_energy_as_sum_of_trunc_expec})) and a {\it multiscale representation} (based on a splitting of the momentum space, due to the fact that the main ingredient of our analysis, {\it i.e.} the free propagator (\ref{free_propagator_PBC}), is singular at two points, and we want to approach these singularities following Wilson's RG idea).\\ This led us to a recursive formula (\ref{effective_potential_scale_h_recursive}) for the effective potentials, which in principle involves iterated sums over infinitely many terms, and we have to deal with its convergence. As we will see, the important tool of Gallavotti-Nicolò trees \cite{gallavotti1985renormalization} allows us to exploit the {\it multiscale structure} of these formulae in order to study in a systematic way the convergence of the series we want to study.
\paragraph{Construction of the tree} Before starting: from now on {\it line} and {\it branch} have the same meaning.\\
Graphically, first of all we consider the plane $(x,y)$ and we draw the vertical lines $x=h, h+1, h+2,\dots,0,1$; then we consider all possible graphs obtained as follows. We pick a point on the vertical line $x=h$, we call it $r$ (the {\it root} of the tree), and we draw a horizontal {\it line} starting from $r$ and leading to a point $v_0$ on the vertical line $x=h_{v_0}>h$, which is the {\it first non trivial vertex}, because it is the first (starting from the left) branching point of $s_{v_0}\geq 2$ lines, forming angles $\vartheta_j\in (-\pi/2,\pi/2)$ with the $x$-axis, where $j=1,\dots,s_{v_0}$, and ending in points, each of which is located on some vertical line $x=h_{v_0+1},h_{v_0+2},\dots$, and which in turn will become branching points. We go on in this way until $n$ points on the vertical line $x=1$ are reached, and we call them the {\it endpoints}. All the {\it branching points} between the root and the endpoints will be called the {\it nontrivial vertices}, while all the intersections of the lines connecting two nontrivial vertices with the vertical lines will be called {\it trivial vertices}. The integer $n$, the number of endpoints, is the {\it order} of the tree; for the sake of clarity, we will label the endpoints with numbers from $1$ to $n$ going from top to bottom.\\
Among all the trees, we give a special name to the tree having only one line connecting the root to a vertex on the line $x=1$: it is the {\it trivial tree} $\tau_0$, and in that case the root has scale $h=1$ (it is important to single this tree out because it will be the starting point of an iterative procedure to rewrite the effective potentials (\ref{effective_potential_scale_h_recursive}) in terms of some numerical values we will associate to these graphical elements). The graph obtained is a {\it tree graph}, because it has no loops and it consists of a set of lines connecting a {\it partially ordered} set of points (that we call {\it vertices}). In particular, having a special point called {\it root}, it is a {\it rooted tree}. We will denote the partial ordering by the symbol $\prec$, meaning that if two vertices $v$ and $w$ are ordered as $v\prec w$, then $h_v<h_w$ (of course, since there is a one-to-one correspondence between branches and vertices, {\it i.e.} we can associate to each branch the vertex it enters, the branches are ordered as well). By construction, to each vertex $v$ is associated an integer $h_v$ that we call its {\it scale label}. We call $\mathcal T_{h,n}$ the family of trees with $n$ endpoints and root at scale $h$, and we denote a generic element of this family by $\tau$ (see Figure (\ref{figure_gallavotti_nicolo_tree})).\\
Of course we will use the {\it Gallavotti-Nicolò tree} formalism to study a very precise problem, so we will need to attach further labels to branches and/or vertices: we call $V(\tau)$ the set of all the vertices of the tree $\tau$ (including the trivial vertices and the endpoints), and we introduce a special notation for the set of endpoints (it will be useful in what follows), $V_f(\tau)\subset V(\tau)$. We remark that, by construction of the tree and by definition of the sets of vertices, $h_v=1$ for any $v\in V_f(\tau)$, while $h<h_w<1$ for any $w\in V(\tau)\setminus V_f(\tau)$.
\begin{figure}
\centering
\begin{tikzpicture}
[scale=1, transform shape]
\foreach \i in {1,2,3,4,5,6,7,8,9,10,11,12,13,14} {%
\draw [thick] (\i,2.9) -- (\i, 11.2); }
\foreach \j in {1,2,3,4,5} {%
\draw [very thick] (\j,7) -- ++ (1,0);
\fill (12,9) circle (0.1);
\fill (13,9.5) circle (0.1);
\fill (\j,7) circle (0.1);
\fill (6,7) circle (0.1);
}
\foreach \j in {0,1,2,3,4,5} {%
\draw [very thick] (6+\j, 7 -\j *0.5) -- +(1,-0.5);
\fill (6+\j,7-\j*0.5) circle (0.1);}
\fill (6+6, 7-3) circle (0.1);
\foreach \j in {0,1,2,3} {%
\draw [very thick] (6+\j, 7 +\j *0.5) -- +(1,+0.5);
\fill (6+\j,7+\j*0.5) circle (0.1);}
\fill (6+4, 7+2) circle (0.1);
\foreach \j in {0,1,2,3} {%
\draw [very thick] (10+\j, 9 +\j *0.5) -- +(1,+0.5);
\fill (10+\j,9+\j*0.5) circle (0.1);}
\fill (14, 11) circle (0.1);
\foreach \j in {0,1,2,3} {%
\draw [very thick] (10+\j, 9 -\j *0.5) -- +(1,-0.5);
\fill (10+\j,9-\j*0.5) circle (0.1);}
\fill (14, 7) circle (0.1);
\draw [very thick] (13,7.5) -- (14,8);
\fill (14,8) circle (0.1);
\foreach \j in {0,1} {%
\draw [very thick] (12+\j, 8 +\j *0.5) -- +(1,+0.5);
\fill (12+\j,8+\j*0.5) circle (0.1);}
\fill(14,9) circle (0.1);
\foreach \j in {0,1,1} {%
\draw [very thick] (12+\j, 4 +\j *0.5) -- +(1,+0.5);
\fill (12+\j,4+\j*0.5) circle (0.1);}
\foreach \j in {0,1,1} {%
\draw [very thick] (12+\j, 4 -\j *0.5) -- +(1,-0.5);
\fill (12+\j,4-\j*0.5) circle (0.1);}
\fill (14,3) circle (0.1);
\fill (14,4) circle (0.1);
\fill (14,5) circle (0.1);
\draw [very thick] (12,4) -- ++ (1,0) -- ++ (1,0);
\fill (13,4) circle (0.1);
\draw [very thick] (11,8.5) -- (14, 10);
\fill (14,10) circle (0.1);
\node at (1,2.7) {$\bm h$};
\node at (2,2.7) {$\bm h+1$};
\node at (3,2.7) {$\bm h+2$};
\foreach \i in {4,5,6,7,8} {%
\node at (\i,2.8) {...};}
\node at (9,2.7) {$\bm h_v$};
\node at (10,2.7) {$\bm h_v+1$};
\foreach \i in {11,12,13} {%
\node at (\i,2.8) {...};}
\node at (14,2.7) {$\bm 1$};
\node at (9,8.8) {$ v$};
\node at (1,7.3) {$ r$};
\node at (2,7.3) {$ v_0$};
\foreach \i in {0,1,2,3} {%
\draw [very thick] (8+\i,8-0.5*\i) -- ++ (1,-0.5);
\fill (8+\i,8-0.5*\i) circle (0.1);}
\fill (12,6) circle (0.1);
\foreach \i in {0,1} {%
\draw [very thick] (12+\i,6-0.25*\i) -- ++ (1,-0.25);
\fill (12+\i,6-0.25*\i) circle (0.1);}
\foreach \i in {0,1} {%
\draw [very thick] (12+\i,6+0.25*\i) -- ++ (1,+0.25);
\fill (12+\i,6+0.25*\i) circle (0.1);}
\fill (14, 6.5) circle (0.1);
\fill (14, 5.5) circle (0.1);
\end{tikzpicture}
\caption{Example of a tree $\tau\in \mathcal T_{h,n}$ where $n=10$.}
\label{figure_gallavotti_nicolo_tree}
\end{figure}
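The recursive structure just described is easy to encode concretely. The following is a purely illustrative sketch (not part of the construction above; all names are ours) representing a tree through its scale labels, and checking the two structural properties stated in the text: the partial ordering $v\prec w \Rightarrow h_v<h_w$, and the fact that every endpoint lives on scale $1$.

```python
# Purely illustrative sketch (all names are ours, not from the text):
# a Gallavotti-Nicolo tree encoded recursively, each vertex carrying its
# scale label h_v.  We check the two structural properties stated above:
# v < w in the ordering implies h_v < h_w, and endpoints live on scale 1.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Vertex:
    scale: int                                  # the scale label h_v
    children: List["Vertex"] = field(default_factory=list)

def endpoints(v: Vertex) -> List[Vertex]:
    """The set of endpoints following v (V_f(tau) when v is the root)."""
    if not v.children:
        return [v]
    return [e for c in v.children for e in endpoints(c)]

def check_ordering(v: Vertex) -> bool:
    """h_v < h_w along every branch, and every endpoint sits on scale 1."""
    ok = all(v.scale < c.scale and check_ordering(c) for c in v.children)
    return ok and (bool(v.children) or v.scale == 1)

# a toy tree of order n = 3 with root scale h = -2:
tau = Vertex(-2, [Vertex(-1, [Vertex(0, [Vertex(1), Vertex(1)]),
                              Vertex(0, [Vertex(1)])])])
assert check_ordering(tau) and len(endpoints(tau)) == 3
```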
\paragraph{The ``importance'' of the endpoints}
Since we have built these trees by an iterative procedure ending at the endpoints, the endpoints themselves play a special role: indeed they are the vertices corresponding to an interaction part of the effective potential $\mathcal V^{(0)}$ (\ref{effective_potential_scale_0}) (while the vertices in $V(\tau)\setminus V_f(\tau)$ correspond to effective potentials).\\
Each endpoint $v\in V_f(\tau)$ on scale $h_v=1$ then carries a further label $i$ which uniquely identifies the contribution $V_i$ to the potential $\mathcal V^{(0)}$, and we will say that the endpoint $v\in V_f(\tau)$ is {\it of type} $r_i$ if $i_v=i$.\\
Besides, we assign to each endpoint $v\in V_f(\tau)$ a set of spacetime points $\{\bm x_v\}$, the integration variables corresponding to the particular interaction contribution $V_i$: a single integration variable if the endpoint is of type $\nu$, {\it i.e.} a counterterm $\nu \sum_{x\in\Lambda}\psi^+(\bm x)\psi^-(\bm x)$, two integration points if the endpoint is of type $\lambda$, {\it i.e.} a two-point interaction, and so on. We extend the assignment of the index $\{\bm x_v\}$ to the non-endpoint vertices $v\in V(\tau)\setminus V_f(\tau)$ as well, saying that $\{\bm x_v\}$ is the family of all spacetime points associated with the endpoints following $v$, {\it i.e.} with all the endpoints $w\in V_f(\tau)$ such that $v\prec w$.\\
Finally, we introduce a {\it field label} $f$ to recognize the different fields appearing in the terms associated to the endpoints, and for each endpoint $v\in V_f(\tau)$ we collect the field labels into a further label $I_v=\{f_1^{(v)},\dots,f_s^{(v)}\}$ in the case that the endpoint $v$ is associated to $s$ fields; so the variables $\bm x(f)$, $\epsilon(f)$ and $\omega(f)$ will indicate respectively the {\it spacetime point}, the {\it creation/annihilation index} and the {\it quasi-particle index} of the field $f$. As a concrete example, let us consider a quartic endpoint $v\in V_f(\tau)$ associated with four different fields and four different integration points: $$\lambda \sum_{\bm x_1\dots \bm x_4} \psi^+_{\omega_1}(\bm x_1)\psi^-_{\omega_2}(\bm x_2)\psi^+_{\omega_3}(\bm x_3)\psi^-_{\omega_4}(\bm x_4)W_{4,\bm \omega}(\bm x_1,\dots,\bm x_4),$$
so $\{\bm x_v\}=\{\bm x_1, \bm x_2,\bm x_3, \bm x_4\}$ and $I_v=\{f_1,f_2, f_3,f_4\}$, with
\begin{equation}
\begin{split}
\bm x(f_1)=\bm x_1,\hspace{3mm} \epsilon(f_1)=+,\hspace{3mm} \omega(f_1)=\omega_1,\\
\bm x(f_2)=\bm x_2,\hspace{3mm} \epsilon(f_2)=-,\hspace{3mm} \omega(f_2)=\omega_2,\\
\bm x(f_3)=\bm x_3,\hspace{3mm} \epsilon(f_3)=+,\hspace{3mm} \omega(f_3)=\omega_3, \\
\bm x(f_4)=\bm x_4,\hspace{3mm} \epsilon(f_4)=-,\hspace{3mm} \omega(f_4)=\omega_4.
\end{split}
\end{equation}
Finally, we call the family of the spacetime points $\bm x(I_v)=\{\bm x(f): f\in I_v\}$.
\paragraph{Clusters and effective potentials} Having introduced the Gallavotti-Nicolò trees, we can exploit this diagrammatic structure to write the effective potentials on scale $h$ as
\begin{equation}
\mathcal V^{(h)}\left(\psi^{(\leq h)}\right)+L\beta e_{h+1}= \sum_{n=1}^{\infty}\sum_{\tau\in\mathcal{T}_{h,n}}\mathcal{V}^{(h)}\left(\tau, \psi^{(\leq h)}\right)
\label{effective_potential_sum_over_trees}
\end{equation}
where $e_{h+1}$ is a normalization factor for any $h\leq 1$. There is of course a slight abuse of notation in the symbol $\mathcal V^{(h)}$, which has two different meanings: on the left-hand side $\mathcal{V}^{(h)}(\cdot)$ depends only on the fields on scale $\leq h$, while on the right-hand side the summand, which depends both on the fields on scale $\leq h$ and on a specific tree $\tau$ chosen in the family $\mathcal{T}_{h,n}$, is defined iteratively as follows:
\begin{itemize}
\item if $\tau$ is trivial, $\mathcal V^{(0)}(\tau_0,\psi^{(\leq 0)})$ is simply one of the contributions to the interaction $\mathcal{V}(\psi)$,
\item if $\tau$ is not trivial, there is a {\it first vertex} $v_0$ from which the $s_{v_0}$ subtrees $\tau_1,\dots,\tau_{s_{v_0}}\subset \tau$ with root $v_0$ arise, and then
\begin{equation}
\mathcal{V}^{(h)}\left(\tau, \psi^{(\leq h)}\right)=\frac{1}{s_{v_0}!} \mathcal{E}_{h+1}^T\left(\mathcal{V}^{(h+1)}\left(\tau_1,\psi^{(\leq h+1)}\right),\dots, \mathcal{V}^{(h+1)}\left(\tau_{s_{v_0}},\psi^{(\leq h+1)}\right)\right)
\label{effective_potentials_tree_wrt_first_vertex}
\end{equation}
\item of course, each of the $s_{v_0}$ subtrees can be handled as the {\it original} tree $\tau$ (note that $s_v=0$ if $v\in V_f(\tau)$ is an endpoint), and we can iterate formula (\ref{effective_potentials_tree_wrt_first_vertex}) for each argument of $\mathcal{E}_{h+1}^T$.
We define, for any vertex $v$ of the tree, a subset $P_v$ of $I_v$, the {\it external fields} of $v$, satisfying the following constraints:
\begin{itemize}
\item if $v$ is an endpoint, $P_v=I_v$,
\item if $v$ is not an endpoint and $v_1,\dots, v_{s_v}$ are the $s_v\geq 1$ vertices immediately following it, then $P_v\subset\cup_{i}P_{v_i}$,
\item if $v$ is not an endpoint, we define $Q_{v_i}=P_v \cap P_{v_i}$, the set of labels of the fields that are external both to $v_i$ and to $v$; this implies $P_{v}=\cup_{i=1}^{s_v} Q_{v_i}$,
\item \begin{equation}
\tilde \psi^{(\leq h_v)}\left(P_{v_j}\right)=\prod_{f\in P_{v_j}}\psi^{(\leq h_v)\epsilon(f)}_{\omega(f)}(\bm x(f)),
\label{tilda_psi_product_of_fields}
\end{equation}
is a product of $|P_{v_j}|$ fields on scale $\leq h_v$ (as almost all these formulae, this one can be proven by induction on the scale $h_v$).
\end{itemize}
Finally, we get the general formula
\begin{equation}
\begin{aligned}
\mathcal{V}^{(h)}\left(\tau, \psi^{(\leq h)}\right)= \left(\prod_{v\in V(\tau)}\frac{1}{s_v!}\right)\\ \mathcal{E}_{h+1}^T\left(\mathcal{E}_{h+2}^T\left(\mathcal{E}_{h+3}^T\dots \mathcal{E}_{-2}^T\left(\mathcal{E}_{-1}^T\left(\mathcal{E}_{0}^T\left(\mathcal V^{(0)}(\tau_0,\psi^{(\leq 1)}),\dots\right),\dots\right),\dots\right),\dots\right),\dots\right)
\end{aligned}
\end{equation}
where, thanks to the first step, we know $\mathcal V^{(0)}(\tau_0,\psi^{(\leq 1)})$.\\
Since the starting points of the latter iterative formula are the {\it trivial trees}, the {\it direction} to follow in order to compute the truncated expectation values is from the endpoints toward the root: once a vertex $v$ is reached, one is left with computing a quantity of the form
\begin{equation}
\frac{1}{s_v!}\mathcal{E}_{h_v}^T\left(\tilde \psi^{(\leq h_v)}\left(P_{v_1}\right),\dots, \tilde \psi^{(\leq h_v)}\left(P_{v_{s_v}}\right)\right).
\label{expectation_truncated_scale_h_v_gallavotti_nicolo}
\end{equation}
\end{itemize}
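To fix ideas about the direction of this iteration, here is a minimal self-contained sketch (our own illustration, with the truncated expectations $\mathcal E^T_{h_v}$ stripped away) that walks a tree from the endpoints toward the root and accumulates only the combinatorial prefactor $\prod_{v}1/s_v!$ appearing in the formulae above.

```python
# Minimal sketch (our own toy encoding, not from the text): a tree is a
# nested list, an endpoint is [], a vertex is the list of its subtrees.
# We accumulate the prefactor prod_v 1/s_v! by a root-ward recursion,
# standing in for the nested truncated expectations E^T_{h_v}.
from math import factorial

def prefactor(tree):
    """Product of 1/s_v! over all non-endpoint vertices of the tree."""
    if not tree:                       # endpoint: s_v = 0, contributes 1
        return 1.0
    p = 1.0 / factorial(len(tree))     # this vertex contributes 1/s_v!
    for sub in tree:
        p *= prefactor(sub)            # recurse on the subtrees
    return p

# root -> v0 -> two branches carrying 2 and 1 endpoints (s_v = 1,2,2,1):
tau = [[[[], []], [[]]]]
assert prefactor(tau) == 1.0 / (1 * 2 * 2 * 1)
```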
\begin{rem}
\label{remark_cluster_structure_gallavotti_nicolo_trees}
At this point, the intrinsic cluster structure of the Gallavotti-Nicolò trees comes out: indeed, using the {\it determinant expansion} (\ref{expectation_truncated_determinants})
$$\mathcal{E}^T\left(\tilde \psi\left( P_1\right),\dots, \tilde \psi \left( P_s\right)\right)= \sum_T \alpha_T \left(\prod_{\ell\in T}g_{\ell}\right)\int dP_T(\bm t)\det G^T(\bm t)$$
to compute (\ref{expectation_truncated_scale_h_v_gallavotti_nicolo}), we obtain a sum over all possible Feynman diagrams obtained by contracting the half-lines coming from the sets $P_{v_1}, \dots, P_{v_{s_v}}$: when, moving along the tree $\tau$ (from the endpoints towards the root), we reach a vertex $v\in V(\tau)$, we construct a {\it diagram} formed by lines $\ell$ on scales $h_\ell\geq h_v$. Moreover, for any vertex $w\succ v$ there is a subdiagram, that we call $\Gamma_w$, such that all the lines on scale $h_w$ form a connected set if all the further subdiagrams $\Gamma_{w_j}$, $j=1,\dots,s_w$, corresponding to the $s_w$ vertices immediately following $w$ ({\it i.e.} the roots of the subtrees arising from $w$) are seen as single elements; this is a consequence of the very definition of truncated expectation.\\
We define a {\it cluster} on scale $h$ as a set of endpoints which are connected by lines on scale $h'\geq h$ such that there is at least one line on scale $h$. The endpoints are trivial clusters at scale $h=1$.
\end{rem}
In this way, we set up a {\it hierarchical structure} of the endpoints into {\it clusters} contained in one another following the order of the scales $h\leq 1$. So
\begin{itemize}
\item We associate with each vertex $v\in V(\tau)$ the cluster $G_v$ containing all the endpoints following $v$. This definition implies the inclusion relation
\begin{equation}
v\prec w \Rightarrow G_v \supset G_w
\label{hierarchy_inclusion_relation_clusters}
\end{equation}
So there is a one-to-one map allowing us to represent a tree as a set of hierarchically organized clusters and {\it vice versa}.
\item Given the cluster structure, we can define the {\it anchored trees}: if all the maximal subclusters $G_{v_1},\dots,G_{v_{s_v}}\subset G_v$ of $G_v$ are thought of as points, then the set of these points is connected. So it is possible to select a set of $s_v-1$ lines connecting all of them; this set is, by definition, an {\it anchored tree} (a minimal connection between the maximal subclusters of $G_v$).
\begin{rem}
\label{independence_of_trunc_exp_functions_of_the_internal_structure_of_clusters}
Each truncated expectation sees the clusters associated with the sets of labels $P_{v_1},\dots,P_{v_{s_v}}$ as points, so the action of the truncated expectations is independent of the internal structure of the subclusters $G_{v_1},\dots,G_{v_{s_v}}$, and depends only on the external lines of these clusters.
\end{rem}
\end{itemize}
\subsection{Non-renormalized expansion, non-perturbative estimates and classification of the divergences}
\label{subsection_non-renormalized_expansion}
\paragraph{Properties of propagators}
Let us study the behaviour of the single-scale propagators.
\begin{lem}
\label{lemma_propagator_faster_any_power}
For any $N\in\mathbb{N}$ there exists a constant $C_N$ such that the {\it quasi-particle} propagator is bounded by
\begin{equation}
\left|g_{\omega}^{(h)}(\bm x-\bm y)\right|\leq \gamma^h\frac{C_N}{1+\left(\gamma^h |\bm x|\right)^N} .
\label{bound_propagator_faster_than_any_power}
\end{equation}
\end{lem}
We prove this Lemma in Appendix (\ref{appendix_propagator_decay_property}).\\
It is worth underlining that the fact that $C_N$ grows with $N$ is not an issue, since we will use the previous bound only for $N\leq 4$. Moreover, the bound comes from the well known result in Fourier analysis stating that the Fourier transform of a $C^\infty$ function decays faster than any power, {\it i.e.} it is simply a consequence of the cut-off $f_h$ we used.
\begin{corollary}
As a trivial consequence of Lemma \ref{lemma_propagator_faster_any_power}, we can bound the norms $||\cdot||_\infty$ and $||\cdot||_1$ of the propagator:
\begin{equation}
||g_{\omega}^{(h)}||_{\infty}:=\sup_{\bm x,\bm y}|g_{\omega}^{(h)}(\bm x-\bm y)|\leq C\gamma^h,
\label{norm_infty_PBC_propagator}
\end{equation}
and
\begin{equation}
||g_{\omega}^{(h)}||_1=\frac{1}{L\beta}\int d\bm x\, d\bm y \left|g_{\omega}^{(h)}(\bm x-\bm y)\right|\leq C\gamma^{-h},
\label{norm_1_propagator}
\end{equation}
for some $C>0$.
\end{corollary}
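For instance, the bound (\ref{norm_1_propagator}) follows directly from Lemma \ref{lemma_propagator_faster_any_power} with any fixed $N>2$ (recall that spacetime is two-dimensional), by translation invariance and the change of variables $\bm y=\gamma^h\bm x$:
\begin{equation*}
\int d\bm x\, \left|g^{(h)}_\omega(\bm x)\right| \leq C_N\,\gamma^h \int \frac{d\bm x}{1+\left(\gamma^h|\bm x|\right)^N} = C_N\,\gamma^{-h} \int \frac{d\bm y}{1+|\bm y|^N}\leq C\gamma^{-h},
\end{equation*}
since the last integral converges for $N>2$; similarly, (\ref{norm_infty_PBC_propagator}) is just (\ref{bound_propagator_faster_than_any_power}) evaluated at its maximum $\bm x=\bm 0$.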
\paragraph{Estimates of the kernels of the effective potential} Summarizing the analysis of the previous section in a more compact notation, we can write the {\it effective potential} at scale $h$ as
\begin{equation}
\begin{split}
\mathcal{V}^{(h)}\left(\psi^{(\leq h)}\right)=\sum_{n=1}^{\infty} \sum_{\tau\in\mathcal T_{h,n}}\mathcal{V}^{(h)}\left(\tau,\psi^{(\leq h)}\right),\\
\mathcal{V}^{(h)}\left(\tau,\psi^{(\leq h)}\right)=\int d\bm x(I_{v_0})\sum_{P_{v_0}\subset I_{v_0}}\tilde \psi^{(\leq h)}\left(P_{v_0}\right)\mathcal W^{(h)}\left(\tau, P_{v_0},\bm x(I_{v_0})\right)
\end{split}
\label{effective_potential_expanded_in_kernels}
\end{equation}
where, besides all the quantities we have already defined, $\mathcal{W}^{(h)}$ is defined by the latter expression itself, that we use to get the recursive relation
\begin{equation}
\begin{split}
\mathcal W^{(h)}\left(\tau,P_{v_0}, \bm x(I_{v_0})\right)=\\
\sum_{P_{v_1},\dots,P_{v_{s_{v_0}}}}\left( \prod_{j=1}^{s_{v_0}} \mathcal W^{(h+1)}\left(\tau_j, P_{v_j},\bm x(I_{v_j})\right) \right)\\
\frac{1}{s_{v_0}!}\mathcal E^T_{h+1}\left(\tilde{\psi}^{(h+1)}\left(P_{v_1}\setminus Q_{v_1}\right), \dots,\tilde{\psi}^{(h+1)}\left(P_{v_{s_{v_0}}}\setminus Q_{v_{s_{v_0}}} \right) \right)
\end{split}
\end{equation}
where
\begin{equation}
Q_{v_j}=P_{v_0}\cap P_{v_j},\hspace{3mm} j=1,\dots,s_{v_0}.
\end{equation}
It is worth noting that the sets $Q_{v_j}$ are {\it uniquely determined} by the sets $\{P_v\}$ because, for any $v\in V(\tau)$,
\begin{equation}
Q_{v_j}\subseteq P_{v_j} \mbox{ and } P_v=\bigcup_{j=1}^{s_v}Q_{v_j}\Longrightarrow Q_{v_j}=P_v\cap P_{v_j},\hspace{3mm} j=1,\dots, s_v.
\end{equation}
To get an {\it explicit expression} for $\mathcal W^{(h)}$, we can iterate the latter formula going along the tree $\tau$ and getting
\begin{equation}
\begin{split}
\mathcal W^{(h)}\left(\tau,P_{v_0},\bm x(I_{v_0})\right)=\\
= \sum_{\{P_v\}_{v\in V(\tau )}} \left(\prod_{v\notin V_f(\tau)}\frac{1}{s_v!} \mathcal E^T_{h_v+1}\left(\tilde{\psi}^{(h_v)}\left(P_{v_1}\setminus Q_{v_1}\right), \dots,\tilde{\psi}^{(h_v)}\left(P_{v_{s_{v}}}\setminus Q_{v_{s_{v}}} \right) \right)\right)\left(\prod_{v\in V_f(\tau)}r_v\right)
\end{split}
\label{kernels_as_truncated_expectation_values}
\end{equation}
where we repeat that $r_v\in\{\nu,\lambda\}$ and the sum over $\{P_v\}_{v\in V(\tau)}$ is a sum over all the possible choices of the sets $P_v$ corresponding to the vertices of $\tau$, except $P_{v_0}$ (which is fixed). Finally, we can define a more intuitive notation to rewrite (\ref{effective_potential_expanded_in_kernels}) as
\begin{equation}
\begin{split}
\mathcal{V}^{(h)}\left(\psi^{(\leq h)}\right)=\sum_{n=1}^{\infty} \sum_{\tau\in\mathcal T_{h,n}}\mathcal{V}^{(h)}\left(\tau,\psi^{(\leq h)}\right),\\
\mathcal{V}^{(h)}\left(\tau,\psi^{(\leq h)}\right)=\sum_{P_{v_0}\subset I_{v_0}} \int d\bm x(P_{v_0})\tilde \psi^{(\leq h)}\left(P_{v_0}\right)\mathcal W^{(h)}\left(\tau, P_{v_0},\bm x(P_{v_0})\right)
\end{split}
\label{effective_potential_expanded_in_kernels_improved_formalism}
\end{equation}
where the gain is that now the kernel $\mathcal{W}^{(h)}$ depends only on the variables $\bm x(P_{v_0})=\{\bm x(f)\}_{f\in P_{v_0}}$, with
\begin{equation}
\mathcal{W}^{(h)}\left(\tau, P_{v_0},\bm x(P_{v_0})\right)=\int d\bm x(I_{v_0}\setminus P_{v_0})\mathcal{W}^{(h)}\left(\tau, P_{v_0},\bm x(I_{v_0})\right).
\end{equation}
Now we can state the following:
\begin{thm}
\label{theorem_bound_of_kernels}
In the framework described by the Hamiltonian (\ref{hamiltonian_PBC}), we can bound the kernels we have just introduced as
\begin{equation}
\begin{split}
\int d\bm x(P_{v_0})\left| \mathcal W^{(h)} (\tau, P_{v_0}, \bm x(P_{v_0})) \right| \leq \\ \leq \beta L \gamma^{-h D(P_{v_0})}\sum_{\{P_v\}}\left(\prod_{v\notin V_f(\tau)} \gamma^{-(h_v-h_{v'})D(P_v)} \right)\left(\prod_{v\in V_f(\tau)}\gamma^{-h_{v'}(\frac{|I_v|}{2}-2)}\right)\left(C\epsilon\right)^n
\end{split}
\label{kernels_bound}
\end{equation}
where $D(P_v)=|P_v|/2-2$, $\epsilon=\max\{\nu,\lambda\}$ and $C=C_N$ for some fixed $N$ in (\ref{bound_propagator_faster_than_any_power}).
\end{thm}
\begin{rem}
\label{remark_necessity_of_renormalization_procedure}
Of course the estimate (\ref{kernels_bound}) is finite, and it could not have been otherwise: thanks to the cut-off, we are performing a power counting on finite quantities. The troubles come when, in order to reconstruct the original theory, we try to sum over all the scales $h\leq 1$ and to take the infinite volume limit: as soon as $D(P_{v_0})\leq 0$ ({\it i.e.} when $|P_{v_0}|=2,4$) the sum diverges in this limit, since $h_v-h_{v'}>0$. So, although we still have a {\it divergence problem} as in the {\it naive estimate} of Lemma (\ref{lemma_bounds_no_multiscale_no_determinants}), the advantage of the {\it multiscale analysis} is that we can clearly identify the {\it sources} of the problem: {\bf if $n_v^e:=|P_v|\leq 4$ the above sum cannot be performed uniformly in $\beta$ and $|\Lambda|$}. So we must deal in some smart way with the clusters with $2$ or $4$ external lines. Actually, our plan consists in nothing more than a {\it slightly different} expansion of the same quantities, by which we can prove that the sum is well defined.\\
Besides identifying the divergences, the latter estimate also shows how, thanks to the {\it Gram-Hadamard} estimate (Lemma (\ref{lemma_gram_hadamard_inequality})), the combinatorial problem disappears: the sum over $n$ on the right-hand side of (\ref{kernels_bound}) can be performed provided $\epsilon$ is small enough.
\end{rem}
The latter remark is the motivation for the following three definitions:
\begin{itemize}
\item the terms which are well behaved a priori after the dimensional estimate ({\it i.e.} the terms with six or more external legs) are called {\it irrelevant};
\item the terms with $D(P_{v})=0$ are called {\it marginal};
\item the terms with $D(P_{v})=-1$ are called {\it relevant}.
\end{itemize}
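This classification is a one-line computation on the number of external legs. A minimal sketch (names are ours), assuming only the definition $D(P_v)=|P_v|/2-2$:

```python
# Sketch (our own labels): classify a cluster by its scaling dimension
# D(P_v) = |P_v|/2 - 2, where |P_v| is the number of external legs.
def classify(n_external: int) -> str:
    D = n_external / 2 - 2
    if D > 0:
        return "irrelevant"   # six or more external legs: summable as is
    if D == 0:
        return "marginal"     # four external legs
    return "relevant"         # two external legs

assert classify(2) == "relevant"
assert classify(4) == "marginal"
assert classify(6) == "irrelevant"
```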
The multiscale decomposition we just described involves the computation, for any vertex $v\notin V_f(\tau)$, of {\it scale $h_v$ truncated expectations $\mathcal E^T_{h_v}$}, for which the determinant expansion (\ref{expectation_truncated_determinants}) has to be rewritten as
\begin{equation}
\label{expectation_truncated_scale_h_v}
\mathcal{E}_{h_v}^{T_v}\left(\tilde \psi^{( h_v)}\left( P_1\right),\dots, \tilde \psi^{( h_v)} \left( P_s\right)\right)= \sum_{T_v} \alpha_{T_v }\left(\prod_{\ell\in T_v}g^{(h_v)}_{\ell}\right)\int dP_{T_v}(\bm t)\det G^{h_v,T_v}(\bm t),
\end{equation}
where by $g_\ell^{(h_v)}$ we mean that the propagators associated with the lines of the spanning tree $T_v$ live on scale $h_v$, and the matrix $G^{(h_v,T_v)}$ is the analogue of the already described $G^{T}(\bm t)$, but the propagators contributing to the entries $G^{h_v,T_v}_{i,j}$ live on scale $h_v$. So, assuming that $g^{(h_v)}(\bm x,\bm y)$ can be written as a scalar product of vectors belonging to a suitable Hilbert space (as we prove in (\ref{appendix_gram_representation}), looking only at the quantities $A^{(h)}_{2(L+1)}$ and $B^{(h)}_{2(L+1)}$) and using the estimate (\ref{bound_propagator_faster_than_any_power}), we can adapt Lemma (\ref{lemma_gram_hadamard_for_G}) as
\begin{equation}
\begin{split}
||\det G^{h_v,T_v}||_\infty \leq c^{\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)}||g^{(h_v)}||_\infty^{\frac{1}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)\right)}\leq\\ \leq c_1^{\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)}\gamma^{\frac{h_v}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)\right)},
\end{split}
\end{equation}
so that (\ref{expectation_truncated_scale_h_v}) is bounded by
\begin{equation}
\label{expectation_truncated_scale_h_v_bound}
\begin{split}
\left|\mathcal{E}_{h_v}^{T_v}\left(\tilde \psi^{( h_v)}\left( P_1\right),\dots, \tilde \psi^{( h_v)} \left( P_s\right)\right)\right|\leq \\ \leq \sum_{T_v} \prod_{\ell\in T_v}\left| g^{(h_v)}_{\ell}\right|c_1^{\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)}\gamma^{\frac{h_v}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)\right)}.
\end{split}
\end{equation}
\begin{proof}[Proof of Theorem (\ref{theorem_bound_of_kernels})]
The integration variable $d\bm x(P_{v_0})$ means that the integration has to be performed over all the endpoints.\\
First of all, it is convenient to decrease the number of integration points, $n+n_4\rightarrow n$, by integrating, for any endpoint with $4$ external legs, the {\it finite range} potential $v(x-y)\delta(x_0-y_0)$.\\
Then, using (\ref{expectation_truncated_scale_h_v_bound})
we get:
\begin{equation}
\begin{split}
\left| \mathcal E^T_{h_v}\left(\tilde{\psi}^{(h_v)}\left(P_{v_1}\setminus Q_{v_1}, \right) \dots,\tilde{\psi}^{(h_v)}\left(P_{v_{s_{v}}}\setminus Q_{v_{s_{v}}} \right) \right)\right|\leq\\
\leq \sum_T \left(\prod_{\ell\in T}|g_\ell|\right)(CC_n)^{\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)}\gamma^{\frac{h_v}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)\right)},
\end{split}
\end{equation}
where $g_\ell$, $\ell\in T$, are propagators contracted on scale $h_v$. We used that, for each anchored tree $T_v$ with $T=\cup_{v\notin V_f(\tau)}T_v$ contributing to the sum, we can perform $s_v-1$ integrations using the $s_v-1$ propagators $g_\ell\in T$, obtaining, thanks to (\ref{norm_1_propagator}), a factor
\begin{equation}
\gamma^{-h_v(s_v-1)}.
\end{equation}
We rewrite the contribution $\prod_{v\notin V_f(\tau)}\gamma^{-h_v(s_v-1)}$ using the following formula (which can be easily proved by induction):
\begin{equation}
\begin{split}
\sum_{v\notin V_f(\tau)}h_v(s_v-1)=h(n-1)+\sum_{v\notin V_f(\tau)}(h_v-h_{v'})(n^e_v-1),
\end{split}
\end{equation}
where $v'$ is the vertex immediately preceding $v$ on $\tau$, and $n^e_v$ is the number of endpoints following $v$ on $\tau$.\\
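The identity above can be checked mechanically on explicit trees. Below is a small self-contained sketch (our own toy example, not from the text): a tree is encoded as a pair (scale, list of subtrees), and both sides of the identity are computed by the same recursion.

```python
# Toy check (our own example) of the identity
#   sum_v h_v (s_v - 1) = h (n - 1) + sum_v (h_v - h_{v'}) (n_v^e - 1),
# both sums running over the non-endpoint vertices of the tree.
def sides(v, parent_scale):
    """Return (sum h_v(s_v-1), sum (h_v-h_{v'})(n_v^e-1), #endpoints)."""
    scale, children = v
    if not children:                   # endpoint: contributes nothing
        return 0, 0, 1
    lhs = rhs = n = 0
    for c in children:
        l, r, m = sides(c, scale)
        lhs, rhs, n = lhs + l, rhs + r, n + m
    return (lhs + scale * (len(children) - 1),
            rhs + (scale - parent_scale) * (n - 1),
            n)

h = -2                                 # root scale (the root has s_r = 1)
v0 = (-1, [(0, [(1, [])]), (0, [(1, []), (1, [])])])
lhs, rhs, n = sides(v0, h)
assert n == 3 and lhs == h * (n - 1) + rhs   # both sides equal -1
```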
It has to be pointed out that the integral runs over $n$ variables, and that the number of variables involved in the integrals we are performing using the propagators belonging to the spanning tree is given by:
\begin{equation}
\begin{split}
\sum_{v\notin V_f(\tau)}(s_v-1)=|V_f(\tau)|-1=n-1,\\
\sum_{\bar v\in V(\tau_v)}\left[\frac{1}{2}\left(\sum_{i=1}^{s_{\bar v}}|P_{\bar v_i}|-|P_{\bar v}|\right)\right]=\frac{1}{2}\left(|I_v|-|P_v|\right).
\label{number_of_integral_variables_translation_invariant_PBC}
\end{split}
\end{equation}
This means that we exploit the {\it compact support properties} of the propagators to integrate out all the variables but one, whose integration (running over all the available space) gives a factor $\beta L$.\\
Furthermore, by definition we have that each endpoint is associated with either $\nu$ or $\lambda$, and that $|V_f(\tau)|\leq n$, so we have that $$\prod_{v\in V_f(\tau)}|r_v|\leq \epsilon^n.$$
Finally, for the left-hand side of (\ref{kernels_bound}) we get the bound
\begin{equation}
\beta L \left(\prod_{v\notin V_f(\tau)}\gamma^{h_v\left[\frac{1}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|\right)-2(s_v-1)\right]}\right)\left(C\epsilon\right)^n.
\label{bound_proof_theorem_kernel_PBC}
\end{equation}
Using formulae (\ref{number_of_integral_variables_translation_invariant_PBC}) we then get
\begin{equation*}
\begin{split}
\int d\bm x(P_{v_0})\left| \mathcal W^{(h)} (\tau, P_{v_0}, \bm x(P_{v_0})) \right| \leq \\ \leq \beta L \gamma^{-h D(P_{v_0})}\sum_{\{P_v\}}\left(\prod_{v\notin V_f(\tau)} \gamma^{-(h_v-h_{v'})D(P_v)} \right)\left(\prod_{v\in V_f(\tau)}\gamma^{-h_{v'}\left(\frac{|I_v|}{2}-2\right)}\right)\left(C\epsilon\right)^n.
\end{split}
\end{equation*}
\end{proof}
\section{Renormalization Group}
\label{subsection_renormalization_group_PBC}
In Remark (\ref{remark_necessity_of_renormalization_procedure}) we pointed out that, after the multiscale decomposition, the {\it na\"ive cluster expansion} is not enough to conclude that the theory is well defined (meaning that observables such as the {\it specific free energy} and the {\it Schwinger functions} are expressed by well-defined sums), because the sum over all possible trees is not well defined due to the clusters with $2$ and $4$ external legs.\\
The key idea is that, combining the {\it multiscale} and {\it cluster} expansions, we are {\it fragmenting} the quantities we are interested in into infinitely many pieces that are in some way easier to control; once these are individually {\it controlled}, we would like to re-sum them in order to reconstruct the initial quantities in such a way that they are clearly well defined (in particular, as analytic functions of the perturbative parameter $\lambda$ within a radius of convergence $\lambda_0>0$). There is no unique way to perform this cluster expansion, and our plan is to modify the cluster expansion described in subsection (\ref{subsection_non-renormalized_expansion}) so as to {\it cure} the divergences arising from the diagrams with $2$ and $4$ external legs.\\
Morally, no problems come from the {\it harmless part} consisting of all the clusters with $6$ or more external legs; our strategy will be to identify the real source of divergences in the dangerous clusters and to split them further into two contributions: the {\it renormalized part}, which we will put once and for all in the harmless part of the theory, and the {\it local part}, which is properly the dangerous part, with which we will {\it dress} the free theory.
\subsection{Localization and Renormalization operator}
\label{subsection_localization_renormalization_PBC}
It is worth recalling the explicit expression (\ref{effective_potential_scale_h}) of the effective potential at scale $h$. In particular, using the quasi-particle decomposition of the Grassmann variables we can rewrite
\begin{equation}
\begin{split}
\mathcal V^{(h)}(\psi)=\sum_{n=1}^{\infty}\sum_{\substack{ \bm x_1,\dots,\bm x_{2n}\\ \in\\ \Lambda\times [0,\beta)}} \left(\prod_{j=1}^{n} \psi^{(\leq h)+}_{\bm x_{2j-1}}\psi^{(\leq h)-}_{\bm x_{2j}} \right) W^{(h)}_{2n} (\bm x_1,\dots,\bm x_{2n})=\\
=\sum_{n=1}^{\infty}\sum_{\bm \omega}\sum_{\substack{ \bm x_1,\dots,\bm x_{2n}\\ \in\\ \Lambda\times [0,\beta)}} \left(\prod_{j=1}^{n} e^{-i\omega_{2j-1} x_{2j-1}p_F}\psi^{(\leq h)+}_{\omega_{2j-1}\bm x_{2j-1}}e^{i\omega_{2j}x_{2j}p_F}\psi^{(\leq h)-}_{\omega_{2j},\bm x_{2j}} \right)\cdot \\ \cdot W^{(h)}_{2n} (\bm x_1,\dots,\bm x_{2n})=\\
=:\sum_{n=1}^{\infty}\sum_{\bm \omega}\sum_{\substack{ \bm x_1,\dots,\bm x_{2n}\\ \in\\ \Lambda\times [0,\beta)}} \left(\prod_{j=1}^{n} \psi^{(\leq h)+}_{\omega_{2j-1}\bm x_{2j-1}}\psi^{(\leq h)-}_{\omega_{2j},\bm x_{2j}} \right) W^{(h)}_{2n, \bm \omega} (\bm x_1,\dots,\bm x_{2n}),
\end{split}
\label{effective_potential_quasi_particles_scale_h}
\end{equation}
where $W^{(h)}_{2n, \bm \omega} (\bm x_1,\dots,\bm x_{2n})=\left(\prod_{j=1}^{n} e^{-i\omega_{2j-1} x_{2j-1}p_F}e^{i\omega_{2j}x_{2j}p_F} \right)W^{(h)}_{2n} (\bm x_1,\dots,\bm x_{2n})$. Analogously, we can define $\hat W^{(h)}_{2n, \bm \omega}(\bm k'_1,\dots,\bm k'_{2n-1})=\hat W^{(h)}_{2n}(\bm k'_1+\omega_1p_F,\dots,\bm k'_{2n-1}+\omega_{2n-1}p_F)$, with $\bm k'\in\mathcal D^{\omega}_{\Lambda,\beta}$, as the Fourier transform of $W^{(h)}_{2n,\bm \omega} (\bm x_1,\dots,\bm x_{2n})$, where $\hat W^{(h)}_{2n,\bm \omega}$ depends on $2n-1$ momenta because of the momentum conservation, meaning that $\sum_{i=1}^{2n}\left(\bm k_i'+\omega_ip_F\right)=0$.\\
We saw that the diagrams with $2$ and $4$ external legs are {\it dangerous} (respectively relevant and marginal terms in the RG terminology), meaning that they do not allow us to perform the sum over all the scales $h\leq 1$, and hence to conclude that the {\it specific free energy} and the {\it Schwinger functions} defined in (\ref{free_energy_specific_PBC}) and (\ref{schwinger_function_n_points_PBC}) are well defined. So we are forced to {\it manipulate} the dangerous part: the idea is first of all to extract the source of troubles from the diagrams with $2$ and $4$ external legs. In particular, we split the quartic terms into the sum of a {\it marginal} term (which can be controlled by studying the flow of a single running coupling constant $\lambda_h$) and an {\it irrelevant} one, and the terms with two external legs into three contributions: an {\it irrelevant} one, a {\it marginal} one that we properly {\it use to dress} the theory, and a {\it relevant} one, which we compensate thanks to the {\it counterterm} $\nu$ (which has to be fixed in such a way that the running coupling constant $\nu_h$ vanishes as $h\to -\infty$).\\
In particular, we pointed out that the singularities of the propagator $\hat g$ are at $\pm\bm p_F$, so the idea is to expand the kernels in a Taylor series near these singularities; now we can appreciate the choice of the change of variables $\bm k=\bm k'+\omega p_F$: a Taylor expansion around the singularities is a Taylor expansion in $\bm k'\sim 0$.\\
We should keep in mind that the finiteness of the volume gives rise to many technical difficulties: in a finite volume the momenta $\bm k'\in\mathcal{D}_{\Lambda,\beta}^{\omega}$ are quantized and not precisely zero, so we should define a Taylor expansion at finite volume, meaning that even if we morally want to expand around the Fermi points $\pm\bm p_F$, we are forced to localize not precisely at $\pm \bm p_F$ but at {\it the closest possible point}. For pedagogical reasons, we will take care of these problems, giving a precise {\it finite volume localization definition}, only in Appendix (\ref{appendix_real_space-time_localization}), while here we give a more intuitive definition of localization which is correct only in {\it the limit} $\beta,|\Lambda|\nearrow \infty$ (since it neglects the finite volume corrections). \\
Let $\mathcal{L}$ be the {\it localization operator} acting on the {\it effective potentials} in the following way:
\begin{itemize}
\item the terms with $6$ or more external legs cause no problems, so there is nothing to extract:
$$\mathcal{L}W^{(h)}_{2n,\omega}(\bm k'_1,\dots,\bm k'_{2n-1})=0 \mbox{ if } 2n\geq 6,$$
\item on the terms with $4$ external legs,
\begin{equation}
\begin{aligned}
\mathcal L \left(\frac{1}{\left(\beta L\right)^4}\sum_{\bm k'_1,\bm k'_2,\bm k'_3,\bm k'_4 \in \mathcal D^{\bm \omega}_{L,\beta}}\psi^{(\leq h)+}_{\omega_1,\bm k'_1}\psi^{(\leq h)+}_{\omega_2, \bm k'_2}\psi^{(\leq h)-}_{\omega_3, \bm k'_3}\psi^{(\leq h)-}_{\omega_4, \bm k'_4}\right.\\ \left .\hat W^{(h)}_{4,\bm \omega}(\bm k'_1,\bm k'_2,\bm k'_3,\bm k'_4)\delta_{\bm k'_1+\bm k'_2,\bm k'_3+\bm k'_4}\delta_{\omega_1+\omega_2,\omega_3+\omega_4}\right)=\\
\frac{1}{\left(\beta L\right)^4}\sum_{\bm k'_1,\bm k'_2,\bm k'_3,\bm k'_4 \in \mathcal D^{\bm \omega}_{L,\beta}}\psi^{(\leq h)+}_{\omega_1,\bm k'_1}\psi^{(\leq h)+}_{\omega_2, \bm k'_2}\psi^{(\leq h)-}_{\omega_3, \bm k'_3}\psi^{(\leq h)-}_{\omega_4, \bm k'_4}\cdot \\ \cdot \hat W^{(h)}_{4,\bm \omega}(0, 0, 0, 0)\delta_{\bm k'_1+\bm k'_2,\bm k'_3+\bm k'_4}\delta_{\omega_1+\omega_2,\omega_3+\omega_4},
\end{aligned}
\label{localization_4el_finite_volume_limit}
\end{equation}
\item on the terms with $2$ external legs
\begin{equation}
\begin{split}
\mathcal L\left( \frac{1}{L\beta}\sum_{\bm k'\in\mathcal{D}^{\omega}_{L,\beta}} \hat\psi_{\omega,\bm k'}^{(\leq h)+}\hat \psi_{\omega,\bm k'}^{(\leq h)-}\hat W^{(h)}_{2,\omega}(\bm k') \right)
=\frac{1}{L\beta}\sum_{\bm k'\in\mathcal{D}^{\omega}_{L,\beta}} \hat\psi_{\omega,\bm k'}^{(\leq h)+}\hat \psi_{\omega,\bm k'}^{(\leq h)-}\cdot \\
\cdot\left[\hat W^{(h)}_{2, \omega}(\bm 0)+k'\frac{\partial \hat{W}^{(h)}_{2,\omega}}{\partial k}(\bm 0)+k_0 \frac{\partial \hat W^{(h)}_{2,\omega}}{\partial k_0}(\bm 0)\right].
\end{split}
\label{localization_2el_infinite_volume_limit}
\end{equation}
\end{itemize}
\begin{rem}
Let us comment that we factorized $\delta(\sum_{i}\bm k_i)=\delta (\sum_i \omega_i )\delta(\sum_{i} \bm k'_i)$, which is strictly true only if the scale $h$ is small enough.
\end{rem}
Finally, we simply define the {\it renormalization operator}
\begin{equation}
\mathcal R=1-\mathcal L,
\label{renormalization_operator}
\end{equation}
where $1$ has to be read as the {\it identity operator}.\\
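The action of $\mathcal L$ and $\mathcal R=1-\mathcal L$ can be mimicked numerically on a toy one-dimensional kernel; the following sketch (plain Python, with an arbitrary smooth test function standing in for $\hat W^{(h)}_{2,\omega}$) checks that the renormalized part vanishes to second order at the origin, which is the source of the dimensional gain discussed below.

```python
import math

def localize(W, order):
    """First order Taylor localization at k = 0: a toy, one-dimensional
    analogue of the operator L (order=1 for the two-legged terms,
    order=0 for the four-legged ones)."""
    eps = 1e-5
    W0 = W(0.0)
    if order == 0:
        return lambda k: W0
    dW0 = (W(eps) - W(-eps)) / (2.0 * eps)  # central-difference derivative at 0
    return lambda k: W0 + k * dW0

def renormalize(W, order):
    """R = 1 - L: the renormalized part of the kernel."""
    LW = localize(W, order)
    return lambda k: W(k) - LW(k)

# An arbitrary smooth test kernel playing the role of W_2^{(h)}(k').
W = lambda k: 1.0 + math.sin(k) + 0.5 * k ** 2

RW = renormalize(W, order=1)
# R W vanishes to second order at k = 0: RW(k)/k^2 tends to a constant.
print(RW(0.0), RW(0.01) / 0.01 ** 2)
```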
Summarizing, we get:
\begin{equation}
\mathcal L \mathcal V^{(h)}\left(\psi^{(\leq h)}\right)=\gamma^hn_h F_\nu^{(\leq h)} + z_h F_{\zeta}^{(\leq h)}+ a_h F_{\alpha}^{(\leq h)}+ l_h F_{\lambda}^{(\leq h)},
\label{local_effective_potential_scale_h_PBC}
\end{equation}
where $n_h,z_h,a_h,l_h$ are real numbers defined by the equation itself; we remark that they are not yet the running coupling constants we want to study, which will be defined shortly, after a rescaling procedure, and
\begin{eqnarray}
\begin{aligned}
F_\nu^{(\leq h)}&=\frac{1}{L\beta}\sum_{\omega=\pm}\sum_{\bm k'\in\mathcal{D}_{L,\beta}^{\omega}}\hat \psi_{\omega, \bm k'}^{(\leq h)+}\hat \psi_{\omega, \bm k'}^{(\leq h)-},\\
F_{\alpha}^{(\leq h)}&=\frac{1}{L\beta}\sum_{\omega=\pm} \sum_{\bm k'\in\mathcal{D}_{L,\beta}^{\omega}} \omega v_0 k'\hat \psi_{\omega, \bm k'}^{(\leq h)+}\hat \psi_{\omega, \bm k'}^{(\leq h)-},\\
F_{\zeta}^{(\leq h)}&=\frac{1}{L\beta}\sum_{\omega=\pm} \sum_{\bm k'\in\mathcal{D}_{L,\beta}^{\omega}}(-ik_0)\hat \psi_{\omega, \bm k'}^{(\leq h)+}\hat \psi_{\omega, \bm k'}^{(\leq h)-},\\
F_{\lambda}^{(\leq h)}&=\frac{1}{\left(L\beta\right)^4}\sum_{\bm k_1',\dots,\bm k'_4\in\mathcal{D}^{\omega}_{L,\beta}}\hat \psi_{+, \bm k_1'}^{(\leq h)+}\hat \psi_{-, \bm k_2'}^{(\leq h)+}\hat \psi_{+, \bm k_3'}^{(\leq h)-}\hat \psi_{-, \bm k_4'}^{(\leq h)-}\delta(\bm k'_1+\bm k'_2-\bm k'_3-\bm k'_4),
\end{aligned}
\label{local_effective_potential_scale_h_PBC_term_by_term}
\end{eqnarray}
\begin{rem}
\label{remark_uniqueness_of_counterterm_PBC}
We used that, in the vicinity of $\bm k'=\bm 0$, $\hat W^{(h)}_{2,\omega}(\bm k')\simeq -iz_hk_0+\omega v_0 a_h k'$ (see \cite{benfatto1993beta} for details), and we would like to comment on the constants arising from this linearization procedure, especially in order to compare the system we are studying with the one we will study in the next chapter (\ref{chapter_Interacting_fermions_on_the_half_line}).
\begin{itemize}
\item First of all, we underline that $\gamma^h n_h$ is a constant, as expected due to the translation invariance.\\
Besides, we point out that, although $\mathcal L$ is defined as acting on $\bm \omega$-dependent kernels (\ref{localization_4el_real_spacetime}), (\ref{localization_2el_real_spacetime}), so that {\it a priori} there would be two different constants, we get a unique $\bm \omega$-independent constant. Of course, this is due to a symmetry of the kernels: indeed, by the equations (\ref{local_effective_potential_scale_h_PBC}) and (\ref{local_effective_potential_scale_h_PBC_term_by_term}), we actually define
\begin{equation}
\gamma^h n_h=\hat W^{(h)}_{2,\omega}(\bm 0)=\hat W^{(h)}_{2,-\omega}(\bm 0)
\end{equation}
which is true thanks to the momentum conservation and the parity properties of the propagators (\ref{free_propagator_PBC}).
\item We defined $\partial_k\hat W^{(h)}_{2,\omega}(\bm 0)=\omega v_0 a_h$, with $v_0=\sin p_F$, using that $\partial_k\hat W^{(h)}_{2,\omega}(\bm 0)$ is odd in $\omega$.
\item In $F_{\lambda}$ we heavily exploited the anticommutation rules of the Grassmann variables and the momentum conservation to see that the only non-vanishing term is associated with the choice $\bm \omega=(+,-,+,-)$.
\end{itemize}
\end{rem}
Of course, for $h=0$ we have quite explicit control on the constants, and one can check that:
\begin{equation}
\begin{cases}
n_0=\nu+O(\lambda),\\
a_0=O(\lambda),\\
z_0=O(\lambda),\\
l_0=\lambda\left(\hat v(0)-\hat v(2p_F)\right)+O(\lambda^2)
\end{cases}
\end{equation}
\paragraph{Dimensional gain of renormalized clusters}
\subparagraph{Momentum space} As we have seen, for marginal and relevant terms $D(P_v)$ is, respectively, $0$ and $-1$. So after the action of the operator $\mathcal R$, the goal is to gain a factor $\gamma^{-(h_v-h_{v'})z_v}$ where it is enough to have $z_v=1$ in the case of marginal clusters and $z_v=2$ in the case of relevant ones.\\
This is one of the best examples of the power of the {\it multiscale cluster expansion}, because we manage to gain the right $\gamma^{-z_v (h_v-h_{v'})}$ thanks to the hierarchical structure of the clusters, which is, by definition, such that {\it the propagators belonging to a cluster on scale $h$ live on scales $\geq h$, while the external lines are necessarily on scales $<h$, otherwise they would have been included in the cluster}. \\
Let us consider the case of the two external legs terms. As we pointed out, if we stay in Fourier space we can simply represent the localization (and then the renormalization) operator as acting directly on the kernels $\hat W^{(h)}_{2n, \omega}(\bm k_1,\dots,\bm k_{2n-1})$, so in particular we are interested in the kernel $\hat W^{(h)}_{2,\omega}(\bm k')$. First of all, we recall that it depends on only one $\bm k'$ and one $\omega$ because the kernels preserve the momentum: if $h$ is small enough, the entering momentum $\bm k'_{in}+\omega_{in} \bm p_F$ must be equal to the exiting one $\bm k'_{out}+\omega_{out}\bm p_F$, {\it i.e.} $\bm k'_{in}=\bm k'_{out}=:\bm k'$ and $\omega_{in}=\omega_{out}=:\omega$, since $\mathcal D^+_{\Lambda,\beta}\cap \mathcal D_{\Lambda,\beta}^-=\emptyset$. Then we recall that $\bm k'$ and $\omega$ are the momentum and the quasi-particle
index carried by the {\it external lines}. Since we are interested in the {\it dimensional analysis} of the renormalized cluster $\mathcal R\hat W^{(h)}_{2,\omega}(\bm k')$, let us start by recalling that:
\begin{equation}
\mathcal{L}\hat W^{(h)}_{2,\omega}(\bm k')=\hat W^{(h)}_{2,\omega}(\bm 0)+\bm k' \partial_{\bm k'} \hat W^{(h)}_{2,\omega}(\bm 0),
\end{equation}
where the notation is a bit inaccurate, but for our aim it is enough because it is based on the fact that we are expanding around $\bm k'=0$ and we used the {\it linear approximation} (\ref{free_propagator_PBC_linear_approx}). So the kernel can be rewritten as
$$\mathcal{L}\hat W^{(h)}_{2,\omega}(\bm k') + \mathcal R \hat W^{(h)}_{2,\omega}(\bm k')=\hat W^{(h)}_{2,\omega}(\bm k')=\hat W^{(h)}_{2,\omega}(\bm 0)+\bm k' \partial_{\bm k'} \hat W^{(h)}_{2,\omega}(\bm 0)+ \bm k'^2\int_0^1d t\, (1- t)\,\partial^2_{\bm k'} \hat W^{(h)}_{2,\omega}( t\bm k')$$
from which
\begin{equation}
\mathcal R \hat W^{(h)}_{2,\omega}(\bm k')=\bm k'^2\int_0^1d t\, (1- t)\,\partial^2_{\bm k'} \hat W^{(h)}_{2,\omega}( t\bm k'),
\end{equation}
where the second derivative is taken with respect to the argument and evaluated at the intermediate point $t\bm k'$, with $\bm k'$ considered as an external variable.\\
Looking at the cluster representation of the kernel, we see that the external momentum $\bm k'$ is associated with an external leg of the cluster $G_v$, so it lives on scale $h_{v'}$, $|\bm k'|\sim \gamma^{h_{v'}}$. The derivative, the kernel being a {\it convolution of propagators on scales $\geq h_v$}, acts on a propagator on scale $\geq h_v$ and, being a derivative, in the dimensional estimate it produces a (bad) scale factor $\gamma^{-h_v}$. Being a second order remainder, we have a further factor with respect to the usual estimate:
\begin{equation}
\gamma^{2(h_{v'}-h_v)}
\end{equation}
so exactly $z_v=2$, as we wanted.\\ At this point of the presentation it should be clear that, besides the dimensional estimates, we have to be careful also in dealing with combinatorial problems arising from the fact that we are dealing with an infinite number of trees of order $n\to \infty$. This is the problem of the so called {\it encapsulated resonances}, {\it i.e.} a configuration in which the nested clusters $G_{v_1}\supset G_{v_2} \supset\dots \supset G_{v_m}$, corresponding to $v_1\prec v_2 \prec\dots\prec v_m$, all have to be renormalized. So let us imagine iteratively applying the procedure described in (\ref{localization_4el_finite_volume_limit}) and (\ref{localization_2el_infinite_volume_limit}) starting from the outermost cluster $G_{v_1}$, then $G_{v_2}$, and so on until the very last one $G_{v_m}$. The recipe we have given to renormalize the clusters says that the derivatives act on some propagator internal to the cluster, so it is possible that in renormalizing $G_{v_1}$ the derivative acts on a propagator belonging to the innermost cluster $G_{v_m}\subset G_{v_1}$, in renormalizing $G_{v_2}$ on the same one, and so on until the renormalization of the cluster $G_{v_m}$. After $m$ renormalization steps, all the encapsulated clusters $G_{v_1},\dots, G_{v_m}$ have been renormalized but, among all the contributions produced by the renormalization procedure, there are also terms like $\partial^m_{\bm k'}g_\ell$, $\ell\in G_{v_m}$, that, in addition to the right dimensional factor, contribute to the bound with a factor $(m!)^\alpha$, $\alpha\geq 1$.\\
There are several ways to solve this problem, but to convince the reader that this is not a {\it real problem} we present a very simple argument, referring to \cite{benfatto2001renormalization} for details. The main idea is to show that every propagator is derived at most twice, since once a gain has been obtained corresponding to some resonance there is no need to renormalize it anymore. In the cluster configuration we have just described, let us imagine that, in renormalizing the cluster $G_{v_1}$, the derivative acts on a propagator $g_\ell^{(h_{v_n})}$ with $\ell\in G_{v_n}$ but $\ell\notin G_{v_{n+1}}$. Using the result described above, we know that we have a {\it scale jump} $\gamma^{(h_{v'_1}-h_{v_n})}$ that can be rewritten as
\begin{equation}
\gamma^{(h_{v'_1}-h_{v_n})}= \gamma^{(h_{v'_1}-h_{v_1})}\gamma^{(h_{v'_2}-h_{v_2})}\dots\gamma^{(h_{v'_{n}}-h_{v_n})}
\end{equation}
which clearly shows that, as a consequence of a single renormalization at scale $h_{v_n}$, each cluster $G_{v_n}\subset G_{v_j}\subset G_{v_1}$ has a {\it scale jump} $\gamma^{(h_{v'_j}-h_{v_j})}$, so there is no need of a further renormalization.
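The telescoping mechanism above is elementary and can be checked on a toy chain of nested clusters; the sketch below (plain Python, with arbitrary illustrative scale labels) uses that, in a chain of nested clusters, the parent scale $h_{v'_{j+1}}$ coincides with $h_{v_j}$.

```python
# Toy check: a single renormalization producing the jump
# gamma^(h_{v'_1} - h_{v_n}) is equivalent to giving every intermediate
# cluster of the chain its own jump gamma^(h_{v'_j} - h_{v_j}).
gamma = 2.0

h = [-5, -3, -2, -1]        # scales h_{v_1} < h_{v_2} < ... < h_{v_n} of the nested clusters
h_parent = [-7] + h[:-1]    # h_{v'_1}, then h_{v'_{j+1}} = h_{v_j} along the chain

single_jump = gamma ** (h_parent[0] - h[-1])

product_of_jumps = 1.0
for hp, hv in zip(h_parent, h):
    product_of_jumps *= gamma ** (hp - hv)

print(single_jump, product_of_jumps)  # the two factors coincide
```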
\subparagraph{Real space}
It is convenient (especially for the next chapter) to understand the corresponding gain mechanism in real space and the possible sources of problems. Let us consider the {\it first order} remainder (then we can generalize the idea to the {\it second order} remainder), which involves a difference of fields
$$\psi^{(\leq h)\epsilon}_{\omega,\bm x_i}-\psi^{(\leq h)\epsilon}_{\omega,\bm x_4},$$
which we can formally rewrite as
\begin{equation}
\psi^{(\leq h)\epsilon}_{\omega,\bm x_i}-\psi^{(\leq h)\epsilon}_{\omega,\bm x_4}=(\bm x_i-\bm x_4)\cdot \int _0^1 ds\, \bm \partial\psi^{(\leq h)\epsilon}_{\omega,\bm x_4+s(\bm x_i-\bm x_4)},
\end{equation}
where it has to be stressed that the latter equation has to be read in the {\it weak sense}, meaning that it is properly true once we {\it contract the fields}.\\
The key idea is that the factor $\bm x_i-\bm x_4$, $i=1,2,3$, is associated with the kernel $W_{4,\bm \omega}(\bm x_1-\bm x_4,\bm x_2-\bm x_4,\bm x_3-\bm x_4)$. So, we can estimate $|\bm x_i-\bm x_4|\sim \gamma^{-h_v}$ (actually, one should first expand $|\bm x_i-\bm x_4|$ along the spanning tree, and then bound each contribution by $\gamma^{-h_v}$). Similarly, the derivative $\bm \partial$, acting on $\psi_{\omega,\bm x_i}^{(\leq h)\epsilon}$, is contracted at scale $h_i\leq h_{v'}<h_v$, so using (\ref{bound_propagator_faster_than_any_power}) we get a further contribution $\gamma^{h_{v'}}$ to the usual bound. Summarizing, we get, besides the usual dimensional bound, a further factor
\begin{equation}
\gamma^{h_{v'}-h_v}.
\end{equation}
This strategy can be extended also in the case of the second order remainder.\\
The analogue of what we worried about in the momentum space representation could happen here: if some {\it field variable} is, at the same time, the external line of a large number $m$ of {\it dangerous} (marginal or relevant) clusters, it could happen that $m$ derivatives act on the same external line, giving rise (conceptually) to the same combinatorial problem as before. Exploiting the {\it freedom} we have in choosing the localization point (by the translation invariance of the theory it is equivalent to localize at any of the points), it is possible to define a localization procedure such that at most two derivatives act on the same external line. Since the intuitive idea is similar to what we used in the momentum space representation, and the rigorous solution of this {\it problem} is well known in the literature, we refer to (\cite{benfatto2001renormalization}, sections $3.2$--$3.5$).
\subsection{Scale h integration and dressed theory}
\label{subsection_anomalous_integration_PBC}
The starting point to define a Gaussian Grassmann integration is, of course, a quadratic operator (in our case an operator which is quadratic in the field variables, as the free initial Hamiltonian $H_0$). So far, we have identified, scale by scale, the irrelevant part of the theory (given by all the terms of degree $\geq 6$ and by the {\it renormalized part} of the $2$ and $4$ external legs terms) and the {\it local part} of the theory, which is, at the same time, both problematic and the part containing the physical information of the model. In particular, having in mind the explicit form of the local part at scale $h$, (\ref{local_effective_potential_scale_h_PBC}) and (\ref{local_effective_potential_scale_h_PBC_term_by_term}), we notice that:
\begin{itemize}
\item $l_hF^{(\leq h)}_\lambda$ reproduces, on scale $h$, the initial quartic interaction with a different interaction potential encoded in $l_h\delta(\bm k'_1+\bm k'_2-\bm k'_3-\bm k'_4)$. Since this is true scale by scale, it defines a recursive relation between the constants $\{l_h\}_{h\leq 1}$;
\item $n_hF^{(\leq h)}_{\nu}$ reproduces the counterterm operator of the initial Hamiltonian, with the constant value $\nu$ replaced by $n_h$. As before, this explicit shape of the {\it counterterm at scale } $h$ gives us a recursive relation between the constants $\{n_h\}_{h\leq 1}$;
\item the terms $a_h F^{(\leq h)}_\alpha$ and $z_h F^{(\leq h)}_\zeta$ are at first sight {\it new} if considered as part of the interaction, but their sum has the same shape, up to $\mathcal O(k'^2)$ terms, as
\begin{equation*}
\begin{split}
\left(\hat g^{(h)}_\omega(\bm k')\right)^{-1}=\left(-ik_0+(1-\cos k')\cos p_F+\omega v_0\sin k'\right) f^{-1}_h(\bm k')=\\
=\left(-ik_0+\omega v_0 k'+[(1-\cos k')\cos p_F+\omega v_0(\sin k'-k')]\right) f^{-1}_h(\bm k')=:\\
=:\left(-ik_0+\omega v_0 k'+t_{0,\omega}(k')\right) f^{-1}_h(\bm k'),
\end{split}
\end{equation*}
with constants $a_h$ and $z_h$ replacing $1$, and where we called $t_{0,\omega}(k')$ the $\mathcal O(k'^2)$ term.
\end{itemize}
The main idea is to {\it absorb} step by step, in a sense that will be clarified during this paragraph, the quadratic terms $a_h F^{(\leq h)}_\alpha$ and $z_h F^{(\leq h)}_\zeta$ into the integration: this will have the effect of changing the {\it propagator} the Gaussian Grassmann measure is associated with (in RG language we will say that these terms {\it dress the propagator}), and we will encode this {\it dressing} in a new {\it running coupling constant}, called $Z_h$, with $h\leq 0$ and $Z_0=1$, whose flow we will again study in an iterative way. In the following, we will describe the {\it generic $h$-th step} but, to be able to handle these arguments in a technical way, we warmly recommend working out the very first step (from scale $h=0$ to scale $h=-1$), in which all the constants and the computations are quite explicit.\\
Let us introduce a sequence of constants $\left\{Z_h\right\}$, $Z_0=1$ and let us define the function $C_h(\bm k')$ by
\begin{equation}
C_h(\bm k')^{-1}=\sum_{j=h_\beta}^h f_j(\bm k').
\end{equation}
So, after the integration of the degrees of freedom on scales $>h$, we get, up to a constant, a Gaussian Grassmann integral
\begin{equation}
\int P_{Z_h}(d\psi^{(\leq h)})e^{-\mathcal{V}^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)},
\label{integral_rescaled_Zh_PBC}
\end{equation}
where the Gaussian Grassmann measure is defined as
\begin{equation}
\begin{split}
P_{Z_h}(d\psi^{(\leq h)})=\left(\prod_{\omega=\pm}\prod_{\bm k'\in \mathcal{D}_{L,\beta}^\omega} d\psi^{(\leq h)+}_{\omega,\bm k'}d\psi^{(\leq h)-}_{\omega,\bm k'}\right)\\
\exp\left[ -\frac{1}{L\beta}\sum_{\omega=\pm}\sum_{\bm k'\in \mathcal{D}_{L,\beta}^\omega}C_h(\bm k')Z_h \left(-ik_0+\omega v_0 k'+t_{0,\omega}(k')\right) \psi^{(\leq h)+}_{\omega,\bm k'}\psi^{(\leq h)-}_{\omega,\bm k'} \right],
\end{split}
\label{P_Zh_PBC}
\end{equation}
associated with a covariance which is the $\hat g^{(\leq h)}$ we are familiar with, except for the multiplicative factor $Z_h$ due to the {\it wave function renormalization}, as we are going to explain. As we anticipated, we want to move some terms from the interaction into the measure.\\
First of all, let us notice that the interaction is computed in $\sqrt{Z_h}\psi^{(\leq h)}$, so all the terms of the interaction are suitably multiplied by a power of $Z_h$; in particular, in (\ref{local_effective_potential_scale_h_PBC}) and (\ref{local_effective_potential_scale_h_PBC_term_by_term}) $$F_j^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)= Z_h F_j^{(\leq h)}(\psi^{(\leq h)}), \hspace{4mm} F_\lambda^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)= Z_h^2 F_\lambda^{(\leq h)}(\psi^{(\leq h)}),$$ for $j=\alpha, \nu, \zeta$:
\begin{itemize}
\item in order to {\it dress the propagator} ({\it i.e.} to move into the measure a part of the effective potential), we rewrite the {\it local part of the effective potential at scale $h$} (\ref{local_effective_potential_scale_h_PBC}) as
\begin{equation}
\begin{split}
\mathcal L\mathcal V^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)= \\ =\mathcal L \mathcal V^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) + z_h F_{\alpha}^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) - z_h F_{\alpha}^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) =\\
=\gamma^h n_h F_{\nu} ^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) + z_h \left( F_{\zeta}^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) +F_{\alpha}^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) \right)+\\ + \left(a_h-z_h\right) F_{\alpha}^{(\leq h)} \left(\sqrt{Z_h}\psi^{(\leq h)}\right)+ l_h F^{(\leq h)}_{\lambda} \left(\sqrt{Z_h}\psi^{(\leq h)}\right)=:\\
=: \mathcal L \tilde{ \mathcal V}^{(h)} \left(\sqrt{Z_h}\psi^{(\leq h)}\right) + z_h\left(F_\zeta^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right) +F_\alpha^{(\leq h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)\right).
\end{split}
\label{local_tilde_effective_potential_definition}
\end{equation}
where it is worth pointing out that
\begin{equation*}
\begin{split}
z_h \left( F_{\zeta}^{(\leq h)}+F_{\alpha}^{(\leq h)}\right)\left(\sqrt{Z_h}\psi^{(\leq h)}\right)=\\=\frac{1}{L\beta}\sum_{\omega=\pm}\sum_{\bm k'\in\mathcal D_{L,\beta}^{\omega}} z_h Z_h \left(-ik_0+\omega v_0 k'\right) \hat\psi^{(\leq h)+}_{\omega,\bm k'}\hat\psi^{(\leq h)-}_{\omega,\bm k'},
\end{split}
\end{equation*}
{\it i.e.}, except for the constant $z_h$, it has the same shape as the exponent of the Gaussian Grassmann measure (\ref{P_Zh_PBC}).
\item Now, in the integral (\ref{integral_rescaled_Zh_PBC}), using the usual exponential properties we {\it move} the term $$z_h \left( F_{\zeta}^{(\leq h)}+F_{\alpha}^{(\leq h)}\right)$$ into the measure (\ref{P_Zh_PBC}), which becomes
\begin{equation}
\begin{split}
P_{Z_h}(d\psi^{(\leq h)})=\left(\prod_{\omega=\pm}\prod_{\bm k'\in \mathcal{D}_{L,\beta}^\omega} d\psi^{(\leq h)+}_{\omega,\bm k'}d\psi^{(\leq h)-}_{\omega,\bm k'}\right)\\
\exp\Biggl[ -\frac{1}{L\beta}\sum_{\omega=\pm}\sum_{\bm k'\in \mathcal{D}_{L,\beta}^\omega}C_h(\bm k')Z_h\left(1+ C_h(\bm k')^{-1}z_h\right)\cdot\\
\cdot \left(-ik_0+\omega v_0 k'+\vartheta_{h,\omega}(\bm k')\right) \psi^{(\leq h)+}_{\omega,\bm k'}\psi^{(\leq h)-}_{\omega,\bm k'}\Biggr].
\end{split}
\end{equation}
Since we need some recursive relations between the {\it running coupling constants}, by the latter formula we can define
\begin{equation}
Z_{h-1}(\bm k')=Z_h\left(1+C^{-1}_h(\bm k')z_h\right),
\label{Z_h-1(k')_definition}
\end{equation}
and $\vartheta_{h,\omega}(\bm k')$ is defined as
\begin{equation}
\vartheta_{h,\omega}(\bm k')=\begin{cases}
t_{0,\omega}(\bm k') &\mbox{ if } h=0,\\
\frac{Z_{h+1}}{Z_{h}(\bm k')}\vartheta_{h+1,\omega}(\bm k') &\mbox{ if } h<0.
\end{cases}
\end{equation}
Let us underline that we are dressing only the linear part of the covariance, so this rescaling of the $\vartheta_{h,\omega}$ terms simply allows us to keep $\vartheta_{h,\omega}$ inside the brackets.
\item Finally we can rewrite the integral (\ref{integral_rescaled_Zh_PBC}) as
\begin{equation}
\int P_{Z_h}(d\psi^{(\leq h)})e^{-\mathcal{V}^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)}= \frac{1}{\mathcal N_h}\int \tilde P_{Z_{h-1}}(d\psi^{(\leq h)})e^{-\mathcal{\tilde V}^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)}
\label{integral_scale_leqh_dressed_measure_PBC}
\end{equation}
where, of course,
\begin{equation}
\begin{split}
\tilde P_{Z_{h-1}}(d\psi^{(\leq h)})=\left(\prod_{\omega=\pm}\prod_{\bm k'\in \mathcal{D}_{L,\beta}^\omega} d\psi^{(\leq h)+}_{\omega,\bm k'}d\psi^{(\leq h)-}_{\omega,\bm k'}\right)\\
\exp\Biggl[ -\frac{1}{L\beta}\sum_{\omega=\pm}\sum_{\bm k'\in \mathcal{D}_{L,\beta}^\omega}C_h(\bm k')Z_{h-1}(\bm k')\cdot \\ \cdot \left(-ik_0+\omega v_0 k'+\vartheta_{h,\omega}(\bm k')\right) \psi^{(\leq h)+}_{\omega,\bm k'}\psi^{(\leq h)-}_{\omega,\bm k'}\Biggr],
\end{split}
\label{tilde_P_z_h-1}
\end{equation}
and
$$\tilde{\mathcal V}^{(h)}\left(\sqrt {Z_h}\psi^{(\leq h)}\right)=\mathcal L \tilde{\mathcal V}^{(h)}\left(\sqrt {Z_h}\psi^{(\leq h)}\right) + \left(1-\mathcal L\right)\mathcal V^{(h)}\left(\sqrt {Z_h}\psi^{(\leq h)}\right).$$
\item It is worth noticing that, by definition, if $|\bm k'|<\gamma^{h-1}$, $Z_{h-1}(\bm k')$ assumes a constant value, namely $Z_{h-1}(\bm k')=Z_h(1+z_h)$ (this comment will become useful in performing the {\it usual} scale by scale integration, using the {\it addition principle} (\ref{addition_principle})).
\end{itemize}
As it should be clear, the power of this machinery is the possibility of integrating (\ref{integral_rescaled_Zh_PBC}) scale by scale. So we split the measure in the right hand side of (\ref{integral_scale_leqh_dressed_measure_PBC}) as
\begin{equation}
\frac{1}{\mathcal N_h}\int P_{Z_{h-1}}(d\psi^{(\leq h-1)})\int \tilde P_{Z_{h-1}}(d\psi^{( h)})e^{-\mathcal{\tilde V}^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)}
\label{integral_rescaled_splitted}
\end{equation}
which defines, first of all, the measure $P_{Z_{h-1}}(d\psi^{(\leq h-1)})$ as (\ref{tilde_P_z_h-1}) with
\begin{itemize}
\item $Z_{h-1}(\bm k')$ replaced by $Z_{h-1}$ (because of the observation in the last point of the previous list),
\item $C_h(\bm k')$ replaced by $C_{h-1}(\bm k')$,
\item $\psi^{(\leq h)}$ replaced by $\psi^{(\leq h-1)}$,
\end{itemize}
and the {\it single scale measure} $\tilde P_{Z_{h-1}}(d\psi^{(h)})$ is given again by (\ref{P_Zh_PBC}) with
\begin{itemize}
\item $Z_{h-1}(\bm k')$ replaced by $Z_{h-1}$,
\item $C_{h}(\bm k')$ replaced by
\begin{equation}
\tilde f_h (\bm k')=Z_{h-1}\left(\frac{C_h^{-1}(\bm k')}{Z_{h-1}(\bm k')}-\frac{C_{h-1}^{-1}(\bm k')}{Z_{h-1}}\right),
\end{equation}
\item $\psi^{(\leq h)}$ replaced by $\psi^{(h)}$.
\end{itemize}
Finally, and this is the {\it definition of the running coupling constants}, we rescale all the fields by $\sqrt{Z_{h-1}}$, {\it i.e.} we multiply and divide by the same quantity:
$$\sqrt {Z_h}\psi^{(\leq h)}=\left(\frac{\sqrt {Z_h}}{\sqrt {Z_{h-1}}}\right)\sqrt {Z_{h-1}}\psi^{(\leq h)}$$
in order to rewrite $\mathcal L\tilde{\mathcal V}^{(h)}$ in terms of the rescaled potential $\mathcal L\hat{\mathcal V}^{(h)}$:
\begin{equation}
\begin{split}
\mathcal{L}\hat{\mathcal V}^{(h)}\left(\sqrt {Z_{h-1}}\psi^{(\leq h)}\right)=\\
=\gamma^h\nu_hF_{\nu}^{(\leq h)}\left(\sqrt {Z_{h-1}}\psi^{(\leq h)}\right)+\delta_h F_\alpha^{(\leq h)}\left(\sqrt {Z_{h-1}}\psi^{(\leq h)}\right)+\lambda_h F_{\lambda}^{(\leq h)}\left(\sqrt {Z_{h-1}}\psi^{(\leq h)}\right),
\end{split}
\end{equation}
and we rewrite the integral (\ref{integral_rescaled_splitted}) as
\begin{equation}
\frac{1}{\mathcal N_h}\int P_{Z_{h-1}}(d\psi^{(\leq h-1)})\int \tilde P_{Z_{h-1}}(d\psi^{( h)})e^{-\mathcal{\hat V}^{(h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)}
\end{equation}
and, by definition,
\begin{equation}
\begin{split}
\nu_h&=\frac{Z_h}{Z_{h-1}}n_h,\\
\delta_h&=\frac{Z_h}{Z_{h-1}}(a_h-z_h),\\
\lambda_h&=\left(\frac{Z_h}{Z_{h-1}}\right)^2 l_h.
\end{split}
\label{running_coupling_constants_PBC_definition}
\end{equation}
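As a concrete warm-up for the iterative scheme, the very first step is quite explicit: for $h=0$ we have $Z_0=1$ and, for $|\bm k'|<\gamma^{-1}$ (where $C_0^{-1}(\bm k')=1$), the definitions (\ref{Z_h-1(k')_definition}) and (\ref{running_coupling_constants_PBC_definition}) give
\begin{equation*}
Z_{-1}=1+z_0,\qquad
\nu_0=\frac{n_0}{1+z_0},\qquad
\delta_0=\frac{a_0-z_0}{1+z_0},\qquad
\lambda_0=\frac{l_0}{\left(1+z_0\right)^2},
\end{equation*}
so that, using the first order values of $n_0,a_0,z_0,l_0$ computed above, $\lambda_0=\lambda\left(\hat v(0)-\hat v(2p_F)\right)+O(\lambda^2)$.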
Let us introduce a compact notation: the vector $\vec v_h$ collects these three constants on scale $h$: $$\vec v_h=(\nu_h,\delta_h,\lambda_h).$$
Now we can perform the integration with the Gaussian Grassmann measure $\tilde P_{Z_{h-1}}(d\psi^{(h)})$ associated with the propagator
\begin{equation}
\frac{g^{(h)}(\bm x-\bm y)}{Z_{h-1}}=\sum_{\omega=\pm}e^{-i\omega p_F(x-y)}\frac{g^{(h)}_\omega(\bm x-\bm y)}{Z_{h-1}}
\end{equation}
where
\begin{equation}
\frac{g^{(h)}_\omega(\bm x-\bm y)}{Z_{h-1}}=\int \tilde P_{Z_{h-1}}(d\psi^{(h)})\psi^{(h)-}_{\omega,\bm x} \psi^{(h)+}_{\omega,\bm y}
\end{equation}
where, as on the previous scales,
\begin{equation}
g^{(h)}_\omega(\bm x-\bm y)=\frac{1}{L\beta}\sum_{\bm k'\in\mathcal{D}^{\omega}_{L,\beta}}e^{-i\bm k'\cdot (\bm x-\bm y)}\frac{\tilde{f}_{h}(\bm k')}{-ik_0+\omega v_0 k'+\vartheta_{h,\omega}(\bm k')},
\label{dressed_propagator_scale_h_PBC}
\end{equation}
defining the {\it effective potential} on the next scale:
\begin{equation}
\int \tilde P_{Z_{h-1}}(d\psi^{(h)})e^{-\hat{\mathcal V}^{(h)}(\sqrt {Z_{h-1}}\psi^{(\leq h)})}=e^{L\beta e_h-\mathcal V^{(h-1)}(\sqrt {Z_{h-1}}\psi^{(\leq h-1)})}
\end{equation}
where $ e_h$ is a suitable constant and
\begin{equation}
\mathcal L\mathcal V^{(h-1)}(\psi^{(\leq h-1)})=\gamma^{h-1}n_{h-1}F_\nu^{(\leq h-1)}+a_{h-1}F_\alpha^{(\leq h-1)}+z_{h-1}F_\zeta^{(\leq h-1)}+l_{h-1}F_\lambda^{(\leq h-1)},
\end{equation}
so that we can iterate the just described procedure.
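The single scale step just described (dressing of the measure plus rescaling of the couplings) can be sketched as a toy iteration; the constant inputs $n_h, a_h, z_h, l_h$ below are purely illustrative placeholders for what the beta function produces, and we use the small-momentum value $Z_{h-1}=Z_h(1+z_h)$.

```python
def rg_step(Z_h, n_h, a_h, z_h, l_h):
    """One step h -> h-1: Z_{h-1} = Z_h (1 + z_h) (value for small |k'|)
    and the rescaled running coupling constants nu_h, delta_h, lambda_h."""
    Z_hm1 = Z_h * (1.0 + z_h)
    ratio = Z_h / Z_hm1
    return Z_hm1, (ratio * n_h, ratio * (a_h - z_h), ratio ** 2 * l_h)

Z = 1.0  # Z_0 = 1
for _ in range(10):  # ten steps, from scale 0 down to scale -10
    Z, (nu, delta, lam) = rg_step(Z, n_h=0.01, a_h=0.02, z_h=0.05, l_h=0.1)

# A constant z_h > 0 makes the wave function renormalization grow
# geometrically, Z_{-10} = (1 + z)^10: the anomalous dressing of the propagator.
print(Z)
```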
\begin{rem}
First of all, this iterative procedure gives {\it for free} a way to write the {\it running coupling constants} on scale $h$ as a function of the {\it running coupling constants} on higher scales:
\begin{equation}
\vec v_h=\vec \beta(\vec v_{h+1},\dots,\vec v_0),
\label{beta_function_definition}
\end{equation}
where $\vec \beta(\vec v_{h+1},\dots,\vec v_0)$ is called the {\it beta function}.
\end{rem}
\subsection{The renormalized tree expansion and renormalized bounds}
It is convenient to directly look at Fig. (\ref{figure_renormalized_trees}): we write $\mathcal{V}^{(0)}$ knowing that there can be endpoints representing contributions from $\mathcal{L} \mathcal{V}^{(1)}$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
[scale=1, transform shape]
\node at (1,3) {$\mathcal R \mathcal V^{(0)}$ =};
\node at (1,0) {$\mathcal V^{(0)}$ =};
\node at (4.5, 3) {+};
\node at (7.5,3) {+};
\node at (10.5,3) {...};
\node at (4,0) {=};
\node at (7.5, 0) {+};
\node at (10.5,0) {+};
\node at (13.5,0) {...};
\draw [very thick] (2,3) -- ++ (2,0) ++ (1,0) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (1,1) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (-1,1) -- ++ (1,0);
\fill (2,3) circle (0.1);
\fill (2,3) ++ (1,0) circle (0.1);
\node at (3,3.3) {$\mathcal R$};
\fill (2,3) ++ (1,0) ++ (1,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\node at (6,3.3) {$\mathcal R$};
\node at (9,3.3) {$\mathcal R$};
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,1) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,-1) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (2,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,1) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,-1) circle (0.1);
\fill (2,3) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,0) circle (0.1);
\draw [very thick] (2,0) -- ++ (1,0);
\fill (2,0) circle (0.1);
\fill (2,0) ++ (1,0) circle (0.1);
\node at (6,0.3) {$\mathcal L$};
\node at (9,0.3) {$\mathcal L$};
\node at (12,0.3) {$\mathcal L$};
\draw [very thick] (5,0) -- ++ (2,0) ++ (1,0) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (1,1) -- ++ (1,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,-1) ++ (-1,1) -- ++ (1,0);
\fill (5,0) circle (0.1);
\fill (5,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,-1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (2,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,-1) circle (0.1);
\fill (5,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (1,0) ++ (3,0) ++ (1,0) circle (0.1);
\end{tikzpicture}
\caption{Effective potential on scale $h=0$, $\mathcal{V}^{(0)}$, split into the localized and renormalized contributions.}
\label{figure_renormalized_trees}
\end{figure}
Finally, plugging this splitting of $\mathcal V^{(0)}$ into the graphical representation of $\mathcal{V}^{(-1)}$ given in Fig.~(\ref{figure_effective_potentiale_scale_0}), we get the expansion of Fig.~(\ref{figure_local_renormalized_trees_scale_-1})
\begin{figure}
\centering
\begin{tikzpicture}
[scale=0.7, transform shape]
\node at (0,11) {$\mathcal L\mathcal V^{(-1)}=$};
\draw [very thick] (1,11) -- ++ (2, 0) ++ (1, 0) -- ++ (2,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1, -1) ++ (1,1) -- ++ (2,0) -- ++ (1,1) ++ (-1,-1) -- ++ (1,0) ++ (-1,0) -- ++ (1,-1);
\draw [very thick] (1,8) -- ++ (1,0) -- ++ (2,1) ++ (-2,-1) -- ++ (2, -1) ++ (1,1) -- ++ (1,0) -- ++ (2,1) ++ (-2,-1) -- ++ (2,0) ++ (-2,0) -- ++ (2,-1);
\draw [very thick] (1,8) ++ (9,0) -- ++ (1,0) -- ++ (2,1) ++ (-2,-1) -- ++ (2,-1) ++ (-1,0.5) -- ++ (1,0.5);
\foreach \i in {1,2,3,4,5,6,8,9,10,11} {%
\fill (\i, 11) circle (0.1);
}
\fill (7,12) circle (0.1);
\fill (7,10) circle (0.1);
\fill (11,12) circle (0.1);
\fill (11,10) circle (0.1);
\foreach \i in {1,2,5,6,7,8,10,11}{%
\fill (\i,8) circle (0.1);
}
\fill (4,9) circle (0.1);
\fill (4,7) circle (0.1);
\fill (3,8.5) circle (0.1);
\fill (3,7.5) circle (0.1);
\fill (8,9) circle (0.1);
\fill (8,7) circle (0.1);
\fill (7,8.5) circle (0.1);
\fill (7,7.5) circle (0.1);
\fill (13,9) circle (0.1);
\fill (13,7) circle (0.1);
\fill (12,8.5) circle (0.1);
\fill (12,7.5) circle (0.1);
\fill (13,8) circle (0.1);
\foreach \i in {2,5,9}{
\node at (\i, 11.3) {$\mathcal L$};
}
\foreach \i in {6,10}{
\node at (\i, 11.3) {$\mathcal R$};
}
\foreach \i in {2,6,11}{
\node at (\i, 8.3) {$\mathcal L$};
}
\foreach \i in {2,6,11}{
\node at (\i+1.2, 8.8) {$\mathcal R$};
\node at (\i+1.2, 7.8) {$\mathcal R$};
}
\node at (7.2,8.3) {$\mathcal R$};
\node at (3.5,11) {+};
\node at (7.5,11) {+};
\node at (11.5,11) {+};
\node at (4.5,8) {+};
\node at (9,8) {+};
\node at (13.5,8) {...};
\end{tikzpicture}
\caption{Localized part of the effective potential $\mathcal{V}^{(-1)}$. The renormalized one is exactly the same except for the first vertex following the root, which is associated with a label $\mathcal R$.}
\label{figure_local_renormalized_trees_scale_-1}
\end{figure}
which can be described as follows:
\begin{itemize}
\item We associate with each vertex $v\in V(\tau)\setminus V_f(\tau)$ a renormalization operator $\mathcal R$, except for the very first vertex $v_0$, which can carry either an operator $\mathcal R$ or an operator $\mathcal L$ (contributing respectively to the {\it renormalized part} or to the {\it local part} of the effective potential).
\item It is no longer true that each endpoint is at scale $h_{v_f}=1$; indeed, there can be endpoints at a generic scale $h_v$:
\begin{itemize}
\item $h_v<1$ means that a contribution $\mathcal L\mathcal V^{(h_v)}$ is associated with the vertex $v$,
\item $h_v=1$ means that either a contribution $\mathcal L\mathcal{V}^{(0)}$ or a contribution $\mathcal R\mathcal{V}^{(0)}$ is associated with the vertex $v$.
\end{itemize}
\item If $v$ is an endpoint on scale $h_v\leq -1$, then $h_v=h_{v'}+1$, where $v'$ is the nontrivial vertex immediately preceding $v$.
\item The running coupling constants will be denoted by the variable $\rho_v$: for instance if $h=h_{v'}$, and the contribution to the local part of the effective potential $\mathcal{L}\mathcal{V}^{(h)}$ represented by the endpoint is $F_\nu^{(\leq h)}$, we have $\rho_v=\nu_h$, and so on.
\item The Feynman diagram expansion corresponds, in this case, to a usual expansion in which each cluster value is written as a Taylor expansion $\hat W^{(h)}=\mathcal{L}\hat W^{(h)}+\left(1-\mathcal{L}\right)\hat W^{(h)}$ in such a way that the bound for the remainder $\left(1-\mathcal{L}\right)\hat W^{(h)}$ has the gain $\gamma^{z_v\left(h_v-h_{v'}\right)}$ we have just discussed, where
\begin{equation}
z_v=\begin{cases}
1 \mbox{ if } n_v^e=4,\\
2 \mbox{ if } n_v^e=2,\\
0 \mbox{ else},
\end{cases}
\end{equation}
so that $n_v^e/2+m_{2,v}-2+z_v>0$.
\end{itemize}
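As a trivial numerical check of the case definition above, the dimensional gain $z_v$ and the resulting improved scaling dimension can be encoded as follows (an illustrative sketch, not part of the construction; the helper name \texttt{z\_v} is ours):

```python
def z_v(n_e: int) -> int:
    """Dimensional gain produced by the R operator on a cluster
    with n_e external fields: 1 for quartic clusters (n_e = 4),
    2 for quadratic ones (n_e = 2), 0 otherwise."""
    if n_e == 4:
        return 1
    if n_e == 2:
        return 2
    return 0

# With this gain, the scaling dimension n_e/2 + m_2 - 2 + z_v is
# strictly positive for every cluster with m_2 = 0:
for n_e in (2, 4, 6, 8):
    assert n_e / 2 - 2 + z_v(n_e) > 0
```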
\paragraph{Renormalized values of the clusters} Obviously, the renormalization procedure we described is reflected in the bounds on the kernels (the values of the clusters). In particular, in the definition (\ref{effective_potential_scale_h_recursive}) we have to replace $\psi^{(\leq h)}\to \sqrt{Z_h}\psi^{(\leq h)}$, and the kernels $\hat W^{(h)}_{2n, \bm \omega}(\bm k_1', \dots,\bm k'_{2n})$ have to be computed taking into account the {\it renormalization procedure} on previous (higher) scales: we call them the {\it renormalized values of the clusters}. Thus, we can rewrite the effective potential as
\begin{equation}
\begin{split}
\mathcal V^{(h)}(\sqrt{Z_h}\psi^{(\leq h)})&=\sum_{n=1}^\infty\sum_{\tau\in\mathcal T_{h,n}}\mathcal V^{(h)}(\tau, \sqrt{Z_h}\psi^{(\leq h)}),\\
\mathcal V^{(h)}(\tau, \sqrt{Z_h}\psi^{(\leq h)})&=\int d\bm x(I_{v_0})\sum_{P_{v_0}\in I_{v_0}}\sqrt{Z_h}^{|P_{v_0}|}\tilde \psi^{(\leq h)}(P_{v_0})\mathcal W^{(h)}(\tau, P_{v_0}, \bm x(I_{v_0})),
\end{split}
\end{equation}
where the kernels
\begin{equation}
\mathcal W^{(h)}(\tau, P_{v_0},\bm x(P_{v_0}))=\int d\bm x(I_{v_0\setminus P_{v_0}})\mathcal W^{(h)}(\tau, P_{v_0}, \bm x(I_{v_0})),
\end{equation}
are the Fourier transforms of the {\it renormalized values} $\hat W^{(h)}_{2n}(\bm k_1,\dots,\bm k_{2n})$ iteratively defined as follows:
\begin{equation}
\begin{split}
\mathcal R\mathcal V^{(h)}\left(\tau,\sqrt{Z_h}\psi^{(\leq h)}\right)=\\=\int d\bm x(I_{v_0})\sum_{P_{v_0}\subset I_{v_0}}\sum_{T\in\bm T}\sum_{\alpha\in A_T}\sqrt{Z_h}^{|P_{v_0}|}\cdot \\ \cdot\left[\prod_{f\in P_{v_0}}\partial^{b(f)}_{j(f)} \psi^{(\leq h)\epsilon(f)}_{\bm x(f)}(P_{v_0})\right] \mathcal RW^{(h)}_T(\tau, P_{v_0},\bm x(I_{v_0})),
\end{split}
\end{equation}
where $b(f)\in\{0,1,2\}$, $j(f)\in\{0,1\}$ and $\bm T$ is the set of tree graphs on $\bm x_{v_0}$, obtained by putting together an anchored tree graph $T_v$ for each non trivial vertex $v$. $A_T$ is the set of indices which allow us to distinguish the different terms produced by the non trivial $\mathcal R$ operations and by the iterative decomposition of the zeroes. Finally, the kernels $W^{(h)}(\tau, P_{v_0},\bm x(I_{v_0}))$ have to be read as the {\it renormalized values of the clusters}:
\begin{equation}
\begin{split}
\mathcal R W^{(h)}_T(\tau, P_{v_0},\bm x(I_{v_0}))=\\=\left[\prod_{v\notin V_f(\tau)}\left(\frac{Z_{h_v}}{Z_{h_{v}-1}}\right)^{\frac{|P_v|}{2}}\right]\left[\prod_{i=1}^{n}(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right]\cdot\\
\cdot \left\{\prod_{v\notin V_f(\tau)}\frac{1}{s_v!}\int dP_{T_v}(\bm t_v) \left( \det G_\alpha^{h_v, T_v}(\bm t_v)\right)\cdot\right.\\
\left.\cdot \left[\prod_{\ell \in T_v}(\bm x_\ell- \bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right]\right\}
\end{split}
\label{renormalized_kernels_explicit_expression_first_version}
\end{equation}
where $n$ is the number of endpoints of $\tau\in\mathcal T_{h,n}$, $v_1^*,\dots, v_n^*$ are the endpoints of $\tau$, $K^{(h_i)}_{{v^*_{i}}}$ is one of the terms of the local effective potential $\mathcal L \mathcal V^{(h_i)}$, $f_\ell^1$ and $f_\ell^2$ are the labels of the two fields forming the line $\ell$, $b_\alpha(\ell), b_\alpha(v_i^*), q_\alpha(\ell), q_\alpha(v_i^*)\in\{1,2\}$, and the fact that there are as many derivatives as {\it "zeroes"} is technically expressed by the constraint $\sum_{\ell, i}\left(b_\alpha(\ell)+ b_\alpha(v_i^*)- q_\alpha(f_\ell^{(1)})- q_\alpha(f_\ell^{(2)})\right)=0$, while $(\bm x_\ell- \bm y_\ell)^{b_\alpha(\ell)}_{j_\alpha(\ell)}$ are the zeroes we introduced in the definition of the renormalization procedure, where $j_\alpha\in \{0,1\}$ denotes the component of the vector, and $G^{h_v,T_v}$ has to be read by interpreting
\begin{equation}
\begin{split}
G^{h_v,T_v}_{\alpha;ij,i'j'}= t_{v,i,i'}\partial_{j(f_{ij}^1)}^{q(f_{ij}^1)}\partial_{j(f_{ij}^2)}^{q(f_{ij}^2)}g_{\omega_\ell}^{h_v}(\bm x_{ij}-\bm y_{i'j'}).
\end{split}
\end{equation}
It has to be stressed that this expression does not break the Gram structure of the matrix; see \cite{benfatto1993beta} for details. The formula above is a heavy but schematic representation of how the renormalization acts on the clusters.\\
We can now state the {\it main theorem}, {\it i.e.} the bounds on the renormalized expansion we introduced. We will assume some {\it a priori} bounds on the {\it running coupling constants} we use to prove the estimates on the renormalized kernels. After that, we will check that the bounds we assumed hold, and we will fix the counterterm in the initial Hamiltonian.
\begin{figure}
\centering
\begin{tikzpicture}
[scale=0.7, transform shape]
\foreach \i in {1,2,3,4,5,6,7,8,9,10,11,12,13,14} {%
\draw (\i,2.9) -- (\i, 11.2); }
\foreach \j in {1,2,3,4,5} {%
\draw [very thick] (\j,7) -- ++ (1,0);
\fill (\j,7) circle (0.1);
\node at (\j, 6.7) {$\mathcal R$};
\fill (6,7) circle (0.1);
\node at (6, 6.7) {$\mathcal R$};
}
\foreach \j in {0,1,2,3,4,5} {%
\draw [very thick] (6+\j, 7 -\j *0.5) -- +(1,-0.5);
\fill (6+\j,7-\j*0.5) circle (0.1);
\node at (6+\j, 6.7-\j*0.5) {$\mathcal R$};}
\fill (6+6, 7-3) circle (0.1);
\node at (12, 4.7) {$\mathcal L$};
\foreach \j in {0,1,2,3} {%
\draw [very thick] (6+\j, 7 +\j *0.5) -- +(1,+0.5);
\fill (6+\j,7+\j*0.5) circle (0.1);
\node at (6+\j, 6.7+\j*0.5) {$\mathcal R$};}
\fill (6+4, 7+2) circle (0.1);
\node at (10, 8.7) {$\mathcal R$};
\foreach \j in {0,1} {%
\draw [very thick] (10+\j, 9 +\j *0.5) -- +(1,+0.5);
\fill (10+\j,9+\j*0.5) circle (0.1);
\node at (10+\j, 8.7+\j*0.5) {$\mathcal R$};}
\fill (12, 10) circle (0.1);
\node at (12, 9.7) {$\mathcal L$};
\foreach \j in {0,1,2} {%
\draw [very thick] (10+\j, 9 -\j *0.5) -- +(1,-0.5);
\fill (10+\j,9-\j*0.5) circle (0.1);
\node at (10+\j, 8.7-\j*0.5) {$\mathcal R$};}
\fill (13,7.5) circle (0.1);
\node at (13, 7.2) {$\mathcal L$};
\foreach \j in {0,1} {%
\draw [very thick] (12+\j, 8 +\j *0.5) -- +(1,+0.5);
\fill (12+\j,8+\j*0.5) circle (0.1);
\node at (12+\j, 7.7+\j*0.5) {$\mathcal R$};
}
\fill(14,9) circle (0.1);
\node at (14, 8.7) {$\mathcal R$};
\foreach \j in {0} {%
\draw [very thick] (12+\j, 4 +\j *0.5) -- +(1,+0.5);
\fill (12+\j,4+\j*0.5) circle (0.1);
\node at (12+\j, 3.7+\j*0.5) {$\mathcal R$};
}
\foreach \j in {0,1} {%
\draw [very thick] (12+\j, 4 -\j *0.5) -- +(1,-0.5);
\fill (12+\j,4-\j*0.5) circle (0.1);
\node at (12+\j, 3.7-\j*0.5) {$\mathcal R$};}
\fill (14,3) circle (0.1);
\node at (14, 3.3) {$\mathcal L$};
\fill (13,4.5) circle (0.1);
\node at (13, 4.2) {$\mathcal L$};
\draw [very thick] (8,8) -- (9, 7.5);
\fill (9,7.5) circle (0.1);
\node at (9, 7.2) {$\mathcal L$};
\draw [very thick] (11,8.5) -- (12, 9);
\fill (12,9) circle (0.1);
\node at (12, 8.7) {$\mathcal L$};
\draw [very thick] (6,7) -- (11,6);
\fill (11,6) circle (0.1);
\node at (11, 5.7) {$\mathcal R$};
\draw [very thick] (11,6) -- (12, 5);
\fill (12,5) circle (0.1);
\draw [very thick] (11, 6) -- ++ (2,0);
\fill (13,6) circle (0.1);
\node at (13, 5.7) {$\mathcal L$};
\node at (1,2.7) {$\bm h$};
\node at (2,2.7) {$\bm h+1$};
\node at (3,2.7) {$\bm h+2$};
\foreach \i in {4,5,6,7,8} {%
\node at (\i,2.8) {...};}
\node at (9,2.7) {$\bm h_v$};
\node at (10,2.7) {$\bm h_v+1$};
\foreach \i in {11,12,13} {%
\node at (\i,2.8) {...};}
\node at (14,2.7) {$\bm 1$};
\node at (9,8.8) {$ v$};
\node at (1,7.3) {$ r$};
\node at (2,7.3) {$ v_0$};
\fill (7,6.8) circle (0.1);
\node at (7, 7.8) {$\mathcal R$};
\fill (8,6.6) circle (0.1);
\node at (8, 6.3) {$\mathcal R$};
\fill (9,6.4) circle (0.1);
\node at (9, 6.1) {$\mathcal R$};
\fill (10,6.2) circle (0.1);
\node at (10, 5.9) {$\mathcal R$};
\fill (12, 6) circle (0.1);
\node at (12, 5.7) {$\mathcal R$};
\end{tikzpicture}
\caption{Example of a renormalized tree, with $n=9$ endpoints at scales $\leq 1$.}
\label{figure_renormalized_tree}
\end{figure}
\begin{thm}[Renormalized bounds]
\label{theorem_renormalized_bounds}
For renormalized clusters, the following {\it renormalized bounds} hold:
\begin{equation}
\begin{split}
\int d\bm x(P_{v_0})\left|\mathcal{W}^{(h)}(\tau, P_{v_0},\bm x (P_{v_0}))\right|\leq C^n \gamma^{-h\left[D(P_{v_0})+z_{v_0}(P_{v_0})\right]}\\
\left(\prod_{v\notin V_f(\tau)} \gamma^{-\left[D(P_{v})+z_{v}(P_v)\right](h_v-h_{v'})} \right)\left(\prod_{v\in V_f(\tau)\setminus V^*_f(\tau)}|\rho_v|\right)
\end{split}
\end{equation}
where $V^*_f(\tau)$ is the set of endpoints such that no running coupling constant is associated with them, $|\rho_v|\in\{|\nu_{h_v}|,|\delta_{h_v}|, |\lambda_{h_v}|\}$, while $m_{2,v}$ has already been defined as $1$ for $\nu$-type endpoints and $0$ otherwise.
\end{thm}
\begin{corollary}
Let $h>h_\beta$. If, for some constant $c_1>0$, the following bounds are verified:
\begin{equation}
\sup_{h'>h}|\vec v_{h'}|\equiv \epsilon_h,\hspace{3mm} \sup_{h'>h}\left|\frac{Z_{h'}}{Z_{h'-1}}\right|\leq e^{c_1\epsilon_h^2},
\end{equation}
and if there exists a constant $\bar \epsilon$, depending on $c_1$, such that $\epsilon_h\leq \bar \epsilon$, then, for another suitable constant $c_0$ uniform in $c_1, L$ and $\beta$ the following bounds are true:
\begin{eqnarray}
\sum_{\tau\in\mathcal T_{h,n}}\left[|n_h(\tau)|+|z_h(\tau)|+|a_h(\tau)|+|l_h(\tau)|\right]\leq \left(c_0\epsilon_h\right)^n,
\label{bound_coupling_constants_trees_theorem_PBC}
\\
\sum_{\tau\in\mathcal T_{h,n}}\left| \tilde e_{h+1}(\tau) \right|\leq \gamma^{2h}\left(c_0\epsilon_h\right)^n,
\label{bound_tilde_E_PBC}
\\
\frac{1}{L\beta}\sum_{\tau\in\mathcal T_{h,n}} \int d\bm x(P_{v_0})\left| \mathcal R\mathcal W^{(h)}(\tau, P_{v_0},\bm x(P_{v_0})) \right|\leq \gamma^{-\left(D(P_{v_0})+z_{v_0}\right)h}\left(c_0\epsilon_h\right)^n
\label{renormalized_values_PBC}
\end{eqnarray}
\label{theorem_fundamental_PBC}
\end{corollary}
Since we already discussed the {\it non-renormalized bounds}, we comment only on the differences with respect to them.
\begin{proof}
Exploiting the dimensional gains coming from the operator $\mathcal R$ acting as described in equation (\ref{renormalized_kernels_explicit_expression_first_version}), we can repeat the proof of Theorem (\ref{theorem_bound_of_kernels}) by replacing
\begin{equation}
\prod_{v\notin V_f(\tau)}\gamma^{-D(v)(h_v-h_{v'})}\rightarrow \prod_{v\notin V_f(\tau)}\left(\frac{Z_{h_v}}{Z_{h_v-1}}\right)^{|P_v|/2}\gamma^{-[D(v)+z_v](h_v-h_{v'})}
\end{equation}
By the assumption $\sup_{h'>h}Z_{h'}/Z_{h'-1}\leq e^{c_1\epsilon_h^2}$, taking $c_1\epsilon_h^2\leq 1/16$, one gets that
\begin{equation}
\prod_{v\notin V_f(\tau)}(Z_{h_v}/Z_{h_v-1})^{|P_v|/2}\gamma^{-[-2+|P_v|/2+z_v](h_v-h_{v'})}\leq \left(\prod_{\bar v }\gamma^{-\frac{1}{40}(h_{\bar v}-h_{\bar v'})}\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{-|P_v|/40}\right)
\label{bound_product_z_h/z_h-1_gamma}
\end{equation}
where $\bar v$ are the non-trivial vertices, and $\bar v'$ is the non-trivial vertex immediately preceding $\bar v$. Thanks to the product in the first bracket, we can bound the sum over the scale labels by $(const.)^n$. The second factor can be used to bound the remaining sums, using
\begin{equation}
\sum_{\tau\in\mathcal T_{h,n}}\sum_{P_v}\sum_T\prod_{v\notin V_f(\tau)}\frac{1}{s_v!}\gamma^{-|P_v|/40}\leq C^n,
\end{equation}
as discussed in detail in \cite{benfatto2001renormalization}.
\end{proof}
\begin{rem}
As expected, we have just one relevant running coupling constant, $\nu_h$, coming from the fact that each endpoint $v$ with $m_{2,v}=1$ carries a factor $\gamma^{-h_{v'}}$. To obtain a renormalizable power counting, we {\it hope} to kill it by putting a factor $\gamma^{h_{v'}}$ in front of the corresponding running coupling constant, and the strategy is to prove that $\nu_h$ remains bounded if we fix in a proper way the counterterm $\nu$ in the Hamiltonian.\\
\end{rem}
\begin{rem}
The trees involved in (\ref{renormalized_values_PBC}) are the trees such that a renormalization operator $\mathcal R$ is associated with the first vertex, while the trees involved in (\ref{bound_coupling_constants_trees_theorem_PBC}) are trees such that an $\mathcal L$ operation is associated with the first vertex. The bound (\ref{bound_tilde_E_PBC}) is the bound on the constant, {\it i.e.} field independent, contribution to the effective potential.
\end{rem}
\paragraph{Short memory property}
The {\it renormalized bounds} have an important consequence: for any $0<\kappa<1$ fixed a priori, the sum over all the trees with root scale $h$ having at least one vertex $v$ with $h_v=k>h$ is $O(|\lambda|\gamma^{\kappa(h-k)})$: in fact, what we need to prove the convergence of the expansion is $-2+|P_v|/2+z_v>0$, and we can {\it rewrite} $\gamma^{-[-2+|P_v|/2+z_v]}=\gamma^{-\kappa[-2+|P_v|/2+z_v]}\gamma^{-(1-\kappa)[-2+|P_v|/2+z_v]}$, where $\kappa$ has to be chosen in such a way that the bounds over the sums we just described are still valid.\\
As we will see in the next Subsection, this in particular tells us that $\lambda_h$ and $\delta_h$ stay constant because their beta functions vanish.
\subsection{Flow of running coupling constants}
\label{subsection_flow_of_running_coupling_constants_PBC}
From the iterative procedure we set up in this chapter, the flow equations for the running coupling constants $\vec v_h$ ({\it i.e.} the equations linking $\vec v_h$ to $\vec v_k, k\geq h+1$) are
\begin{equation}
\begin{split}
\nu_{h-1}=\gamma \nu_h+\beta_\nu^h (\vec v_h,\dots, \vec v_0),\\
\lambda_{h-1}=\lambda_h+\beta_\lambda^h (\vec v_h,\dots, \vec v_0),\\
\delta_{h-1}=\delta_h+\beta_\delta^h (\vec v_h,\dots, \vec v_0),\\
\frac{Z_{h-1}}{Z_h}=1+\beta_z^h (\vec v_h,\dots, \vec v_0).
\end{split}
\label{running_coupling_constants_flow_PBC}
\end{equation}
The {\it a priori} bounds on the running coupling constants assumed in Theorem (\ref{theorem_fundamental_PBC}) imply first of all the {\it absolute summability and analyticity} of the tree expansion kernels, and also that the beta function itself (\ref{beta_function_definition}) is analytic, the beta function being defined in terms of the {\it local parts of the quadratic and quartic kernels of the effective potential $\mathcal V^{(h)}$}.\\
The analyticity of the beta function suggests a natural way to study the flow of the running coupling constants: truncate the Taylor expansion of the beta function at the lowest non-trivial order, check whether the {\it approximate flow} verifies the hypotheses of Theorem (\ref{theorem_fundamental_PBC}) and, if so, prove that the solution is stable under the addition of higher order Taylor corrections. For a qualitative understanding, let us consider the flow equation of $\lambda_h$, assuming that the second order Taylor approximation is non-trivial:
$$\lambda_{h-1}=\lambda_h+a_h\lambda_h^2+\dots$$
Of course, the main role is played by $a_h$: if $a_h>a>0$ uniformly in $h$, the truncated flow is divergent as $h\to -\infty$ and the same would be true for the non-truncated flow (in this case, one should introduce a critical scale, below which it is no longer possible to apply perturbation theory in $\lambda_h$). If $a_h\leq -a<0$ uniformly in $h$, the truncated flow would be convergent, $\lambda_h\to 0$ as $h\to -\infty$, and so would the non-truncated flow: in this case, we would talk of {\it asymptotic freedom in the RG sense}.\\
The fact that the system we are studying (\ref{hamiltonian_PBC}) belongs to the Luttinger universality class means that it realizes an intermediate {\it scenario}: one can check that, asymptotically as $h\to -\infty$, $a_h\to 0$, meaning that the truncated flow remains analytically close to the initial datum $\lambda_0$ uniformly in $h$. The problem, in this case, is the instability of the truncated flow, so one must show that similar cancellations take place at all orders in perturbation theory. This is a {\it non-trivial} and actually {\it very hard} problem, so it is necessary to use some {\it deep argument}, direct computations being not enough.\\
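The dichotomy just described can be made concrete with a toy numerical sketch of the truncated flow $\lambda_{h-1}=\lambda_h+a\lambda_h^2$, with an artificial constant coefficient $a$ in place of the actual $a_h$ (which would come from the second order computation of the beta function):

```python
def truncated_flow(lam0, a, steps=15):
    """Iterate lam -> lam + a * lam**2 for a fixed number of RG steps."""
    lam = lam0
    for _ in range(steps):
        lam = lam + a * lam * lam
    return lam

lam0 = 0.05
growing = truncated_flow(lam0, a=+1.0)    # a > 0: the flow increases
shrinking = truncated_flow(lam0, a=-1.0)  # a < 0: asymptotic freedom, lam -> 0
assert growing > lam0 and 0.0 < shrinking < lam0
```

In the Luttinger universality class neither alternative occurs: $a_h\to 0$, and the flow stays close to the initial datum.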
The strategy relies on the fact that the model described by the Hamiltonian $H$ (\ref{hamiltonian_PBC}) is, in a RG sense, {\it close to} the {\it Luttinger model}, which verifies a number of symmetries that are not verified by the {\it non-solvable} model we are dealing with (as discussed in the introduction). \\
The idea is to keep the Luttinger model as a {\it reference model}, quantifying in a rigorous way this {\it closeness} through rigorous estimates on the size of the corrections. The first technical step is to recognize that it is possible to rewrite the propagators $g_\omega^{(i.r.)}$ (and all the single scale propagators $g_\omega^{(h)}$) as the sum of the propagator of the {\it infrared Luttinger model} and a remainder.
\begin{lem}
\label{lemma_propagator_luttinger_+_remainder_PBC}
The propagator $g^{(h)}_\omega(\bm x-\bm y)$ in (\ref{dressed_propagator_scale_h_PBC}) can be rewritten as
\begin{equation}
g^{(h)}_\omega(\bm x-\bm y)=g^{(h)}_{0;\omega}(\bm x-\bm y)+C^{(h)}_\omega(\bm x-\bm y),
\label{propagator_as_luttinger_plus_remainder}
\end{equation}
where $C^{(h)}_\omega$ is the remainder of the {\it linear approximation}
\begin{equation}
g^{(h)}_{0;\omega}(\bm x-\bm y)=\frac{1}{L\beta}\sum_{\bm k'\in\mathcal{D}_{L,\beta}^\omega}e^{i\bm k'(\bm x-\bm y)}\frac{\tilde{f}_h(\bm k')}{-ik_0+\omega v_0 k'},
\end{equation}
such that, for any integer $N>1$ we have
\begin{equation}
\left|g_{0;\omega}^{(h)}(\bm x-\bm y)\right|\leq \frac{\gamma^hC_N}{1+(\gamma^h\left|\bm x-\bm y\right|)^N},
\end{equation}
and, with the further assumption $|x-y|\leq L/2$ and $|x_0-y_0|\leq \beta/2$, we can bound the remainder as
\begin{equation}
|C_\omega^{(h)}(\bm x-\bm y)|\leq \frac{\gamma^{2h}C_N}{1+\left(\gamma^h\left|\bm x-\bm y\right|\right)^N}.
\end{equation}
\end{lem}
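To see heuristically where the extra factor $\gamma^{h}$ in the bound on $C^{(h)}_\omega$ comes from, one can note (a one-line computation, sketched here under the convention, introduced only for this remark, that $\varepsilon(k')$ denotes the full dispersion relation appearing in $g^{(h)}_\omega$) that
\begin{equation}
\hat C^{(h)}_\omega(\bm k')=\frac{\tilde f_h(\bm k')}{-ik_0+\omega \varepsilon(k')}-\frac{\tilde f_h(\bm k')}{-ik_0+\omega v_0 k'}=\frac{\tilde f_h(\bm k')\,\omega\left(v_0k'-\varepsilon(k')\right)}{\left(-ik_0+\omega \varepsilon(k')\right)\left(-ik_0+\omega v_0 k'\right)};
\end{equation}
on the support of $\tilde f_h$ one has $|\bm k'|\sim\gamma^h$ and $\varepsilon(k')-v_0k'=O(k'^2)=O(\gamma^{2h})$, so the integrand is $O(1)$ instead of $O(\gamma^{-h})$, and multiplying by the $O(\gamma^{2h})$ volume of the support yields the bound $\gamma^{2h}$ in place of $\gamma^{h}$.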
An immediate consequence of the latter Lemma is that, on scale $h$, any observable can be naturally decomposed as the sum of a dominant part, expressed in terms of Gallavotti-Nicolò trees whose values are computed considering all the single-scale propagators as $g_{0,\omega}^{(h)}$, and a remainder, which can be written as a sum of trees "containing" at least a propagator $C_\omega^{(h)}$. In particular, we group the running coupling constants of the {\it infrared Luttinger model} into the two-component vector
\begin{equation}
\mu_h=(\lambda_h,\delta_h),
\label{running_coupling_constants_infrared_luttinger_model}
\end{equation}
in order to split the $\beta$-functions $\beta_i^{(h)}$ into a {\it Luttinger model part} plus a {\it remainder}. Lemma (\ref{lemma_propagator_luttinger_+_remainder_PBC}) allows us to do so as follows: first of all we split
\begin{equation}
\beta_i^{(h)}(\mu_h,\nu_h;\dots;\mu_0,\nu_0)=\bar\beta_i^{(h)}(\mu_h;\dots;\mu_1)+\hat \beta_i^{(h)}(\mu_h,\nu_h;\dots;\mu_1,\nu_1),
\end{equation}
where $i=\mu,\nu$ and the first term in the right hand side is obtained by putting $\nu_k=0$, $k\geq h$; then we extract from the first term of the right hand side the Luttinger model $\beta$-function:
\begin{equation}
\bar \beta_i^{h}(\mu_h;\dots;\mu_0)=\hat \beta_i^{h,l}(\mu_h;\dots;\mu_0)+\hat \beta_i^{h,nl}(\mu_h;\dots;\mu_0),
\end{equation}
where the labels $l$ and $nl$ we introduced stand for Luttinger and non-Luttinger respectively, and the first term is obtained simply by considering each propagator as the Luttinger one $g_{0,\omega}^{(h)}(\bm x-\bm y)$, so that the $\beta$-function coincides exactly with the $\beta$-function of the infrared Luttinger model.\\
The universal part $\hat \beta_i^{h,l}$ has been studied in deep detail in several papers, so we do not give the complicated details and refer to \cite{benfatto2005ward}; we recall here the main result of that paper: the so-called {\it asymptotic vanishing of the beta function} (\cite{benfatto2005ward}, Theorem 2 and formula (57)).
\begin{prop}
Let $\mu_h:=(\lambda_h,\delta_h)$ and $|\mu_h|$ small enough. Then
\begin{equation}
|\hat \beta_i^{h,l}(\mu_h,\dots,\mu_h)|\leq C_\theta |\lambda_h|^2\gamma^{\theta h},
\end{equation}
for $0<\theta<1$ and a suitable constant $C_\theta>0$.
\end{prop}
Finally, we can state the following
\begin{thm}\label{theorem_lambda_nu_solutions}
If $|\lambda |\leq \lambda_0$ with $\lambda_0$ small enough, we can fix once and for all a counterterm $\nu^*(\lambda)=:\nu_1$, analytic in $\lambda$, such that the running coupling constants $\{\lambda_h,\nu_h\}_{h\leq 1}$ verify $|\nu_h|\leq c |\lambda|\gamma^{(\theta/2)h}$ and $|\lambda_h|\leq c|\lambda|$. Moreover,
$$z_h\leq 1/2\hspace{3mm} \mbox{ and } \hspace{3mm}
e^{-c|\lambda|^2}\leq\left| \frac{Z_h}{Z_{h-1}}\right|\leq e^{c|\lambda|^2}.
$$
\end{thm}
Before proving the theorem, it is worth commenting on this result: it tells us that the running coupling constants $\lambda_h$ and $\delta_h$ stay asymptotically constant, provided we fix an initial datum for $\{\nu_h\}_{h\leq 1}$ such that $\nu_h\to 0$ exponentially fast as $h\to -\infty$ and $\lambda_h, \delta_h$ do not exceed $\bar\epsilon$: in fact, we use the freedom of changing the chemical potential {\it correction} $\nu$ to make sure that this happens. Finally, the vanishing of the beta function tells us that the sequence of running coupling constants $\vec v_h=(\nu_h,\delta_h,\lambda_h)$ exists and converges exponentially fast to $\vec v_{-\infty}=(0,\delta_{-\infty},\lambda_{-\infty})$, where in particular $\delta_{-\infty}$ and $\lambda_{-\infty}$ are analytic in $\lambda$ for $\lambda$ small enough.
\begin{proof}[Proof of theorem \ref{theorem_lambda_nu_solutions}]
Let us consider the Banach space $\mathcal B_\theta$ of {\it real sequences} $\underline \nu=\{\nu_h\}_{h\leq 1}$ with the norm $||\cdot ||_\theta$ defined by
\begin{equation}
||\underline \nu ||_\theta :=\sup_{k\leq 1}|\nu_k|\gamma^{-k \theta/2}.
\end{equation}
Actually, we are interested in a {\it closed ball}, so let us consider the ball
\begin{equation}
\mathcal{M}_\theta:=\{\underline \nu=\{\nu_h\}_{h\leq 1}: |\nu_h|\leq c|\lambda|\gamma^{h\theta/2}\}.
\end{equation}
The strategy of the proof is the following:
\begin{enumerate}
\item we show that, for any $\underline \nu\in\mathcal M_\theta$, both the flow equation for $\lambda_h$ and the property $|\lambda_h(\underline\nu)|\leq c|\lambda|$ for some $c>0$ are verified uniformly in $\underline \nu$,
\item we fix the counterterm $\underline \nu\in\mathcal M_\theta$ via an exponentially convergent iterative procedure in such a way that the flow equation for $\nu_h$ is verified.
\item finally, we solve the flow of $Z_h$.
\end{enumerate}
So let us start:
\begin{enumerate}
\item given $\underline \nu\in\mathcal M_\theta$, let us iteratively suppose
\begin{equation}
|\lambda_{k-1}(\underline \nu)-\lambda_k(\underline \nu)|\leq c_0 |\lambda|^2\gamma^{(\theta/2)k}, \mbox{ for } c_0>0, k > h+1.
\end{equation}
First of all, it is true for $h=1$ and, besides, if it is true for any $k>h$, it implies $|\lambda_k|\leq c|\lambda|$.\\
Looking at the flow equation for $\lambda_h$ and the comments about the beta function written as Luttinger's one {\it plus a remainder}, we can further write,
\begin{equation}
\begin{split}
\beta^h_\lambda(\lambda_h,\nu_h; \dots ; \lambda_1, \nu_1)=\\
=\beta^{h,l}_\lambda(\lambda_h,\dots,\lambda_h)+\sum_{k=h+1}^1 D_\lambda^{h,k}+\beta^{h,nl}_\lambda(\lambda_h,\dots,\lambda_1)+\sum_{k\geq h}\nu_k\tilde \beta_\lambda^{h,k}(\lambda_k,\nu_k;\dots;\lambda_1,\nu_1),
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
|\beta^{h,l}_\lambda|\leq c|\lambda|^2\gamma^{\theta h},\hspace{3mm} |D_{\lambda}^{h,k}|\leq c|\lambda|\gamma^{\theta(h-k)}|\lambda_k-\lambda_h|,\\
|\beta^{h,nl}_\lambda|\leq c|\lambda|^2\gamma^{(\theta/2)h}, \hspace{3mm} |\tilde \beta_\lambda^{h,k}|\leq c|\lambda|\gamma^{\theta(h-k)}
\end{split}
\end{equation}
It is worth remarking that the first of these inequalities follows from the assumption of the vanishing of the Luttinger beta function. So
\begin{equation}
\begin{split}
|\lambda_h(\underline \nu)- \lambda_{h+1}(\underline \nu)|\leq c|\lambda|^2\gamma^{\theta(h+1)}+\sum_{k\geq h+2}c|\lambda|\gamma^{\theta(h+1-k)}\sum_{k'=h+2}^k c_0 |\lambda|^2\gamma^{(\theta/2)k'}+\\
c|\lambda|^2\gamma^{(\theta/2)(h+1)}+\sum_{k\geq h+1}c^2|\lambda|^2\gamma^{(\theta/2)k}\gamma^{\theta(h+1-k)}\leq c_0|\lambda|^2\gamma^{(\theta/2)h},
\end{split}
\end{equation}
for some $c_0$ large enough. Thanks to the iterative assumption, we get also
\begin{equation}
|\lambda_h(\underline \nu)-\lambda_1(\underline \nu)|\leq c_0|\lambda|^2
\end{equation}
Now, we are left with proving that $\lambda_h(\underline \nu)$ is a continuous function of $\underline \nu\in\mathcal M_\theta$:
\begin{equation}
\begin{split}
\lambda_h(\underline\nu)-\lambda_h(\underline\nu')=\lambda_1(\underline\nu)-\lambda_1(\underline\nu')+\\+\sum_{h+1\leq k\leq 1}\left[\beta_\lambda^k(\lambda_k(\underline \nu),\nu_k;\dots;\lambda_1(\underline \nu),\nu_1)-\beta_\lambda^k(\lambda_k(\underline \nu'),\nu'_k;\dots;\lambda_1(\underline \nu'),\nu'_1)\right].
\end{split}
\end{equation}
First of all, we have $|\lambda_1(\underline\nu)-\lambda_1(\underline\nu')|\leq c_0|\lambda||\nu_1-\nu'_1|$. Furthermore, defining $||\underline \nu||_0=\sup_{h\leq 1}|\nu_h|$, if we inductively assume $|\lambda_k(\underline \nu)-\lambda_k(\underline \nu')|\leq 2c_0 |\lambda|\, ||\underline \nu-\underline \nu'||_0$, we find (using the same decomposition strategy as before) that
\begin{equation}
\begin{split}
|\lambda_h(\underline \nu)-\lambda_h(\underline \nu')|\leq c|\lambda| |\nu_1-\nu'_1|+\\+c|\lambda|\sum_{k\geq h+1}\gamma^{(\theta/2)k}\sum_{k'\geq k}\gamma^{\theta(k-k')}\left(2c_0 |\lambda|\, ||\underline \nu-\underline \nu'||_0+|\nu_{k'}-\nu'_{k'}|\right).
\end{split}
\end{equation}
So, we can choose $c_0$ in such a way that
\begin{equation}
|\lambda_h(\underline \nu)-\lambda_h(\underline \nu')|\leq c|\lambda|||\underline \nu-\underline \nu'||_0.
\end{equation}
\item In order to fix the counterterm, we use a {\it fixed point argument}: indeed we will look at the recursive relation for $\nu_h$ as the result of the action of an operator on a Banach space, and we will prove that the operator {\it generating the flow} is a {\it contraction} on this space, so that there exists a unique fixed point, which will be precisely the counterterm.\\
Recall the Banach space $\mathcal B_\theta$ of {\it real sequences} $\underline \nu=\{\nu_h\}_{h\leq 1}$ with the norm $||\cdot ||_\theta$ and the closed ball $\mathcal M_\theta$ introduced above. Since $\mathcal M_\theta$ is closed, the {\it fixed point theorem for contractions} applies within it. We will fix $\underline \nu\in\mathcal M_\theta$ via an exponentially convergent iterative procedure in such a way that the flow equation for $\nu_h$ is satisfied.\\
Let us start from the recursive relation
$$\nu_{h-1}=\gamma \nu_h+\beta_\nu^h(\vec v_h;\dots;\vec v_0)$$
which can be iterated until $h=1$, getting
$$\nu_{h-1}=\gamma^{2-h}\nu_1+\sum_{k=h}^{1}\gamma^{k-h}\beta_\nu^k(\vec v_k;\dots;\vec v_0),$$
meaning that
$$\nu_1=\gamma^{h-2}\nu_{h-1}-\sum_{k=h}^{1}\gamma^{k-2}\beta_\nu^k(\vec v_k;\dots;\vec v_0).$$
The latter equation, since we are trying to fix $\underline \nu$ in such a way that $\nu_{-\infty}=0$, can be read as:
\begin{equation}
\nu_1=-\sum_{k=-\infty}^1\gamma^{k-2}\beta_\nu^k(\vec v_k;\dots;\vec v_1),
\end{equation}
from which we should get
\begin{equation}
\nu_h=-\sum_{k\leq h}\gamma^{k-h-1} \beta_\nu^k(\vec v_k;\dots; \vec v_1).
\end{equation}
In order to look at this equation from a {\it fixed point theorem} point of view, let us introduce the operator $\bm T:\mathcal M_\theta\to \mathcal M_\theta$ defined as
\begin{equation}
\left(\bm T \underline \nu\right)_h=-\sum_{k\leq h}\gamma^{k-h-1} \beta_\nu^k(\vec v_k(\underline \nu);\dots; \vec v_1(\underline \nu)),
\end{equation}
where $\vec v_k(\underline \nu)$ is the vector solution of the equations (\ref{running_coupling_constants_flow_PBC}) as functions of the {\it parameter} $\underline{\nu}$. In this way, we have translated our problem into a fixed point problem for this operator.\\
First of all, we check that the operator is well defined, meaning that it really sends $\mathcal M_\theta$ into itself: thanks to parity cancellations, it is true that
\begin{equation}
\beta_\nu^h(\vec v_h;\dots;\vec v_1)=\beta_{\nu,1}^h(\mu_h;\dots;\mu_1)+\sum_{k\geq h}\nu_k \tilde{\beta}_{\nu}^{h,k}(\mu_h,\nu_h;\dots;\mu_1,\nu_1)
\end{equation}
with
\begin{equation}
|\beta_{\nu,1}^h|\leq c_1|\lambda|\gamma^{\theta h},\hspace{3mm}|\tilde \beta_{\nu}^{h,k}|\leq c_2 |\lambda|\gamma^{\theta(h-k)}
\end{equation}
where $c_1, c_2$ are suitable constants greater than zero. If we fix $c=2c_1$, we get
\begin{equation}
|(\bm T\underline\nu)_h|\leq \sum_{k\leq h}2c_1|\lambda|\gamma^{k(\theta/2+1)-h}\leq c|\lambda|\gamma^{h\theta/2}.
\end{equation}
Finally, we check that it is actually a contraction: $||(\bm T\underline \nu)-(\bm T \underline \nu')||_\theta\leq c''|\lambda|||\underline \nu-\underline \nu'||_\theta$, indeed
\begin{equation}
\begin{split}
|(\bm T\underline \nu)_h-(\bm T\underline \nu')_h|\leq \sum_{k\leq h}\gamma^{k-h-1}|\beta_\nu^{k}(\vec v_k;\dots;\vec v_1)-\beta_\nu^{k}(\vec v'_k;\dots;\vec v'_1)|\leq\\
\leq c \sum_{k\leq h}\gamma^{k-h-1}\left[ \gamma^{\theta k}|\lambda_k(\underline \nu)-\lambda_k(\underline{\nu}')|+\sum_{k'=k}^1\gamma^{\theta(k-k')}|\lambda||\nu_{k'}-\nu'_{k'}|\right]\leq\\
\leq c \sum_{k\leq h}\gamma^{k-h-1}\left[ |k|\gamma^{\theta k}|\lambda|\, ||\underline \nu-\underline{\nu}'||_0+\sum_{k'=k}^1\gamma^{\theta(k-k')}|\lambda|\gamma^{k'\theta/2}||\underline \nu-\underline \nu'||_\theta\right]\leq\\
\leq c''|\lambda|\gamma^{h\theta/2}||\underline \nu-\underline \nu '||_\theta.
\end{split}
\end{equation}
So $\bm T$ is a contraction, and there exists a unique fixed point $\underline{\nu}^*$ for $\bm T$ in the closed ball $\mathcal M_\theta$.
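The fixed point argument above can be illustrated with a toy iteration. The map below is a sketch only: it mimics $(\bm T\underline\nu)_h=-\sum_{k\leq h}\gamma^{k-h-1}\beta_\nu^k$ with a hypothetical contractive $\beta_\nu^k$, on a sequence space truncated at a hypothetical lowest scale.

```python
# A toy Banach fixed-point iteration (not the actual beta function): the
# map T below mimics (T nu)_h = -sum_{k<=h} gamma^{k-h-1} beta_nu^k,
# with a hypothetical contractive beta_nu^k, truncated at a lowest scale.
import numpy as np

gamma, lam, hmin = 2.0, 0.1, -30
hs = np.arange(1, hmin - 1, -1)              # scales h = 1, 0, ..., hmin

def T(nu):
    beta = lam * (gamma ** (0.5 * hs) + 0.1 * nu)   # hypothetical beta_nu^k
    out = np.empty_like(nu)
    for i, h in enumerate(hs):
        out[i] = -np.sum(gamma ** (hs[i:] - h - 1) * beta[i:])  # all k <= h
    return out

nu = np.zeros(len(hs))
for _ in range(100):                         # exponentially convergent iteration
    nu = T(nu)

assert np.allclose(nu, T(nu))                # nu is (numerically) a fixed point
```

The Lipschitz constant of this toy map is $O(\lambda)$, matching the contraction estimate above, so the iterates converge exponentially fast.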
\item Now we can use the previous results to claim that there exist two $O(\lambda)$ functions $\eta_z, F_\zeta^h$ such that
\begin{equation}
Z_h=\gamma^{\eta_z(h-1)+F_\zeta^h}.
\end{equation}
Indeed, knowing that $|z_h|\leq c|\lambda|^2$ uniformly in $h$, we can define
$$\gamma^{-\eta_z}:=\lim_{h\to -\infty}\left(1+z_h\right),$$
so that
\begin{equation}
\log_\gamma Z_h=\sum_{k\geq h+1}\log_\gamma\left(1+z_k\right)=\eta_z(h-1)+\sum_{k\geq h+1}r_\zeta^k, \hspace{3mm} r_\zeta^k:=\log_\gamma\left(1+\frac{z_k-z_{-\infty}}{1+z_{-\infty}}\right).
\end{equation}
Now, knowing that $z_{k-1}-z_k$ is either proportional to $\lambda_{k-1}-\lambda_k$ or to $\nu_{k-1}-\nu_k$, we can bound
\begin{equation}
|r_\zeta^k|\leq c \sum_{k'\leq k}|z_{k'-1}-z_{k'}|\leq c|\lambda|^2\gamma^{(\theta/2)k}.
\end{equation}
Finally, if we define $F_\zeta^h:=\sum_{k\geq h+1}r_\zeta^k$ and $F_\zeta^1=0$, then $F_\zeta^h=O(\lambda)$ and $Z_h=\gamma^{\eta_z(h-1)+F_\zeta^h}$.
\end{enumerate}
\end{proof}
\begin{rem}
In light of that, in Corollary (\ref{theorem_bounds_kernels}), we can replace the assumption $Z_h/Z_{h-1}\leq e^{c_1\epsilon^2}$ by $Z_h\simeq A\gamma^{-h\eta}$ for some suitable $A>0$, where $\eta=a\lambda^2+\mathcal O(\lambda^3)$ and the symbol $\simeq$ means that the equivalence is asymptotically true for $h\to -\infty$, and improve the bounds we got in this chapter.
\end{rem}
\chapter{Interacting Fermions on the half line}
\label{chapter_Interacting_fermions_on_the_half_line}
\section{The model}
\label{the_model_DBB}
\subsection{Definition and main result}
We are interested in constructing the ground state of interacting spinless fermions living in a discrete one-dimensional box of mesh size $a=1$ and volume $L\gg 1$ with {\it open boundary conditions}, meaning that the system is defined on a segment instead of on a torus. \\
Let $\mathcal F=\oplus_{n=0}^\infty H^{\wedge n}$ be the standard {\it antisymmetric Fock space}, where $\wedge$ denotes the antisymmetric tensor product, and let $\psi^\pm_x$ be the {\it fermionic creation and annihilation} operators defined on $\mathcal F$, where $x\in\Lambda:=\left\{x\in\mathbb Z: 1\leq x\leq L\right\}$, $L\in \mathbb N$, is the space coordinate. Let us define the Hamiltonian
\begin{equation}
H=H_0+\lambda V+ \varpi \mathcal N,
\end{equation}
where
\begin{equation}
\begin{split}
H_0&=T_0-\mu_0 N_0,\\
T_0&=\sum_{x\in\Lambda}\psi^+_x\left(-\Delta^d \psi^-_x\right)=\sum_{x\in\Lambda}\frac{1}{2}\left(-\psi^+_{x+1}\psi^-_x-\psi^+_{x-1}\psi^-_x+2 \psi^+_x\psi^-_x\right),\\
N_0&=\sum_{x\in \Lambda}\psi^+_x\psi^-_x,
\end{split}
\end{equation}
where, in the formula of $T_0$, we have to interpret $\psi^{\pm}_0=\psi^{\pm}_{L+1}=0$, and $\mu_0$ is the chemical potential, chosen in such a way that, calling $\sigma(T_0):=[e_-, e_+]$ the spectral band of the kinetic operator, $\mu_0\in [e_-+\kappa, e_+-\kappa]$ for some $\kappa>0$ fixed once and for all. The interaction of {\it strength} $\lambda$ is
\begin{equation}
V=\sum_{x,y \in\Lambda}\psi^+_x\psi^-_x v(x,y)\psi^+_y\psi^-_y,
\end{equation}
where $v(x,y)=v(y,x)$ is a real, compactly supported function, and satisfies what we call {\it Dirichlet property}, {\it i.e.} it can be written as
\begin{equation}
v(x,y)=\frac{2}{L+1}\sum_{k \in \mathcal{D}^d_{\Lambda}}\sin(kx)\sin(ky)\hat v(k),
\label{potential_v_DBC}
\end{equation}
where $\mathcal D_\Lambda^d:=\left\{k=\frac{n\pi}{L+1}, n=1,\dots,L\right\}$. We stress that the {\it Dirichlet property} of $v( x, y)$ (\ref{potential_v_DBC}) is not crucial, but it simplifies some technical aspects of the proof.\\
Finally, the {\it boundary defect} of size $\varpi = \mathcal O(\lambda)$ is
\begin{equation}
\mathcal N =\sum_{x,y\in \Lambda}\psi^+_x\psi^-_y \pi(x,y),
\end{equation}
where $\pi(x,y)$ is a Hermitian matrix such that $\sup_{x\in\Lambda}\sum_{y\in\Lambda}|\pi(x,y)|=1$.\\
We recall here the main result we prove in this section: let $\beta\geq 0$ be the {\it inverse temperature} and let
\begin{equation}
f_{\Lambda,\beta}=-\frac{1}{|\Lambda|\beta}\log \left(Tr \left(e^{-\beta H}\right)\right)
\end{equation}
be the {\it finite volume specific free energy}. Let also
\begin{equation}
f_{\Lambda}=-\frac{1}{|\Lambda|}\lim_{\beta\nearrow \infty}\frac{1}{\beta}\log \left(Tr \left(e^{-\beta H}\right)\right),\hspace{3mm} f_{\infty}=-\lim_{|\Lambda|\nearrow \infty}\frac{1}{|\Lambda|}\lim_{\beta\nearrow \infty} \frac{1}{\beta}\log \left(Tr \left( e^{-\beta H}\right)\right);
\end{equation}
we prove the following result.
\begin{thm}
\label{theorem_main_DBC}
In this framework, there exists a radius $\lambda_0>0$ such that, for any $|\lambda|\leq \lambda_0$, it is possible to fix the {\it boundary defect} $\pi(x,y)$ and its strength $\varpi=\varpi(\lambda)$ in such a way that, for any $\theta\in (0,1)$, there exists a constant $C_\theta$ such that
\begin{equation}
\sum_{y\in\Lambda} \left|\pi(x,y)\right| \leq C_\theta \left(\frac{1}{\left(1+|x|\right)^\theta}+\frac{1}{\left(1+|L-x|\right)^\theta}\right),
\end{equation}
and in such a way that $f_\Lambda$ admits a convergent expansion in $\lambda$ and $\varpi$.\\
Moreover
\begin{equation}
\left| f_\Lambda-f_\infty \right|\leq |\lambda|\frac{C_\theta}{L^\theta}.
\end{equation}
\end{thm}
Even though it is not explicitly investigated in this thesis, we stress that a straightforward extension of the proof of this theorem would allow one to construct the correlation functions of the Hamiltonian and to control their boundary corrections.
\subsection{Free Hamiltonian diagonalization and free propagator}
\label{section_the_non_interacting_system_DBC}
It is well known that the Laplacian problem with DBC is {\it diagonalized} by a {\it sine Fourier transform}. Indeed, if we introduce the transformation:
\begin{equation}
\hat \psi^\pm_k=\sum_{x\in\Lambda}\sin (kx)\psi^\pm_x, \hspace{5mm} \psi^\pm_x=\frac{2}{L+1}\sum_{k\in \mathcal D^d_\Lambda}\sin (kx)\hat\psi^\pm_k,
\label{sine_fourier_transform_dbc}
\end{equation}
where $\hat \psi_k^\pm$ creates and annihilates a spinless electron with momentum $k$, the Hamiltonian $H_0$ can be written as a diagonal matrix in the {\it dual space}:
\begin{equation}
H_0=\frac{2}{L+1}\sum_{k\in\mathcal D^d_\Lambda}\hat \psi^+_ke(k)\hat \psi_k^-,
\end{equation}
where $e(k)$ is the dispersion relation:
\begin{eqnarray}
e(k)=1-\cos k -\mu_0,
\label{dispersion_relation_DBC}\\
\mathcal D^d_\Lambda=\left\{k=\frac{n\pi}{L+1},\ n=1,\dots, L\right\}.
\label{momentum_space_DBC}
\end{eqnarray}
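The diagonalization of $H_0$ by the sine modes can be checked directly on the one-particle kinetic matrix. The following is a minimal numerical sketch ($L=8$ is an arbitrary choice):

```python
# Numerical check that the sine modes sin(kx), k = n*pi/(L+1), diagonalize
# the Dirichlet kinetic term T_0, with eigenvalues 1 - cos k.
import numpy as np

L = 8
x = np.arange(1, L + 1)
# one-particle kinetic matrix: (T_0 f)(x) = f(x) - (f(x+1) + f(x-1))/2,
# with the Dirichlet conditions f(0) = f(L+1) = 0 built in
T0 = np.eye(L) - 0.5 * (np.eye(L, k=1) + np.eye(L, k=-1))

for n in range(1, L + 1):
    k = n * np.pi / (L + 1)
    f = np.sin(k * x)
    assert np.allclose(T0 @ f, (1 - np.cos(k)) * f)
```

The identity behind the check is $\sin(k(x+1))+\sin(k(x-1))=2\cos k\,\sin(kx)$, together with $\sin(k\cdot 0)=\sin(k(L+1))=0$ for $k\in\mathcal D^d_\Lambda$.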
In particular, we choose $\mu_0$ in such a way that there exists $p_F\in\mathcal D^d_{\Lambda}$ such that $e(p_F)=\mathcal O(1/L)$.
\begin{rem}
\label{remark_quasi_particles_issue_DBC}
When we perform the limit $L\to \infty$, $\mathcal{D}^d_\Lambda\to [0,\pi]$ and of course $e(k)$, which is a cosine up to a constant, becomes a function defined in a semi-period of the cosine: $e(\cdot):[0,\pi]\to [-\mu_0, 2-\mu_0]$. This means that there is a unique point of the domain, and we call it $p_F\in [0,\pi]$, such that $e(p_F)=0$. In light of the previous chapter, it is clear that the zeros of the dispersion relation are fundamental because they correspond to the singularities at zero temperature of the propagator and, since the interesting physics happens near the Fermi points, we are interested in the excitations around these Fermi points.\\
We stress that, while in the translation invariant system there are two symmetric Fermi points $\pm p_F$ and we introduced two different quasi-particles $\{\hat \psi^\pm_\omega\}_{ \omega=\pm}$, in this case the theory naturally suggests the definition of a unique quasi-particle around the unique Fermi point $p_F$.
\end{rem}
\paragraph{Schwinger functions and Free Propagator}
Let $x_0\in[0,\beta)$ be the {\it imaginary time}, let $\bm x=(x,x_0)\in\Lambda\times [0,\beta)$ and let us consider the {\it time-evolved operator} $\psi^{\pm}_{\bm x}=e^{Hx_0}\psi^\pm_x e^{-Hx_0}$. So we can define the $m$-point Schwinger function at finite temperature $T=\beta^{-1}$ as
\begin{equation}
S_{\Lambda,\beta}(\bm x_1, \epsilon_1;\dots;\bm x_m, \epsilon_m):=\left< \psi^{\epsilon_1}(\bm x_1)\dots \psi^{\epsilon_m}(\bm x_m)\right>_{\Lambda,\beta}:=\frac{Tr\left( e^{-\beta H}\bm T \left( \psi^{\epsilon_1}(\bm x_1)\dots \psi^{\epsilon_m}(\bm x_m)\right)\right)}{Tr\left( e^{-\beta H}\right)},
\label{schwinger_function_n_points_DBC}
\end{equation}
where $\epsilon_i\in \{\pm\}$ for $i=1,\dots, m$, $\bm T$ is the {\it fermionic time ordering operator}, and the {\it time variables} $x_{0,i}$ belong to $\left[0,\beta\right)$ for every $i=1,\dots,m$. The strategy we follow is the same as in the previous chapter: we want to derive {\it convergent expansions} for $f_{\Lambda,\beta}$, uniformly in the volume $|\Lambda|$ and in the inverse temperature $\beta$, and then take the infinite volume and zero temperature limit $|\Lambda|,\beta\to \infty$ (the thermodynamic limit, from the statistical mechanics point of view). In particular, we want to keep track of the {\it finite volume boundary corrections}.
\paragraph{Free Propagator}
The non-interacting model described by the Hamiltonian $H_0$ is exactly solvable, meaning that all the Schwinger functions can be exactly computed by simply using the anticommutative (fermionic) {\it Wick rule}, starting from the {\it two point Schwinger function}, {\it i.e.} the {\it propagator}:
\begin{equation}
\begin{split}
\left<\bm T\left(\psi^{-}_{\bm x_1}\dots \psi^{+}_{\bm x_m}\right)\right>_{0,\Lambda,\beta} =\det G ,\\
G_{ij}=\left<\bm T\left(\psi^-_{\bm x_i}\psi^+_{\bm x_j}\right)\right>_{0,\Lambda,\beta}=S^0_{L,\beta}(\bm x_i, -;\bm x_j, +),
\end{split}
\end{equation}
where the subscript $0$ means that the expectation value is calculated with respect to the {\it free measure}, and in the first line there are as many creation as annihilation operators. We stress that every $n$-point Schwinger function with $\sum_{i=1}^n\epsilon_i\neq 0$ is identically zero. \\
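The fermionic Wick rule stated above can be checked in miniature: summing over all pairings of $n$ fields $\psi^-$ with $n$ fields $\psi^+$, each weighted by the sign of the pairing times the product of propagators, reproduces $\det G$. The matrix $G$ below is random test data, not an actual propagator.

```python
# The fermionic Wick rule in miniature: the signed sum over pairings of
# n psi^- with n psi^+ fields reproduces det G (Leibniz formula).
from itertools import permutations
import numpy as np

def parity(p):
    """Sign of a permutation, computed by sorting with transpositions."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

rng = np.random.default_rng(0)
n = 4
G = rng.normal(size=(n, n))            # G[i, j] = <T psi^-_i psi^+_j> (test data)

wick_sum = sum(parity(perm) * np.prod([G[i, perm[i]] for i in range(n)])
               for perm in permutations(range(n)))

assert np.isclose(wick_sum, np.linalg.det(G))
```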
We do not repeat the discussion of the {\it two point Schwinger function}, which is exactly the same as in the previous chapter (\ref{subsection_free_propagator}), with the only difference that
\begin{equation}
\psi^{\pm}_x=\frac{2}{L+1}\sum_{k\in\mathcal D^d_{\Lambda}}\sin(kx) \hat \psi^{\pm}_{k}.
\end{equation}
So if we use the notation $\bm x=(x,x_0), \bm y=(y,y_0)\in\Lambda\times [0,\beta)$ and $\bm k=(k,k_0)\in \mathcal D^d_\Lambda\times \mathcal D_{\beta,M}=:\mathcal D^d_{\Lambda,\beta,M}$, where $\mathcal{D}_{\beta,M}$ has already been defined in (\ref{momenta_space_time}) and $\mathcal D^d_\Lambda$ in (\ref{momentum_space_DBC}),
\begin{equation}
\begin{split}
S^0_{L,\beta}(\bm x,-;\bm y,+):= g(\bm x,\bm y)=\\=\frac{2}{\beta(L+1)}\lim_{M\to \infty}\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta,M}}e^{i\delta_Mk_0}e^{-ik_0(x_0-y_0)}\sin(kx)\sin(ky)\hat g(\bm k)
\end{split}
\label{free_propagator_DBC}
\end{equation}
where
\begin{equation}
\hat g(\bm k):=\frac{1}{-ik_0+e(k)}, \hspace{5mm}\bm k\in\mathcal{D}_{\Lambda,\beta}^d
\label{propagator_momentum_DBC}
\end{equation}
is the same function as the translation invariant case, but the domain changes as already commented: $\hat g$ is singular only in $\bm p_F=(p_F,0)$.\\
As in the previous chapter, the constant $\delta_M=\beta/\sqrt{M}$ is introduced in order to take correctly into account the discontinuity of the propagator $g(\bm x,\bm y)$ at $\bm x=\bm y$, where it has to be defined as $\lim_{x_0-y_0\to 0^-}g(x,x;x_0-y_0)$. Indeed, this definition guarantees that $\lim_{M\to \infty}g_M(\bm x,\bm y):=g(\bm x,\bm y)$ for $\bm x\neq\bm y$, while $\lim_{M\to \infty}g_M(\bm x, \bm x):=g(x,x;0^-)$ at equal points.\\
As we already commented in Remark (\ref{remark_quasi_particles_issue_DBC}), $\hat g(\bm k)$ is singular when $\bm k=(p_F, 0)$. Since the introduction of an interaction between the fermions could move this singularity, it is convenient to rewrite
$$\mu_0=\mu+\nu,$$
where $\nu$ is a counterterm which will eventually be suitably chosen in order to fix the position of the singularity
at some interaction-independent point.
\paragraph{Symmetries and Fermi points}
\begin{lem}[Reflection rule]
$\forall$ $\bm x,\bm y \in \Lambda\times\left[0,\beta\right)$
\begin{equation}
g(\bm x,\bm y)=g_{2(L+1)}(x-y, x_0-y_0)-g_{2(L+1)}(x+y, x_0-y_0),
\end{equation}
where $g_{2(L+1)}$ is the free propagator of a system described by a hopping Hamiltonian $H_0$ defined on a box of size $2(L+1)$ with {\it periodic boundary conditions, i.e. }
\begin{equation}
g_{2(L+1)}(x, x_0):= \frac{1}{\beta 2(L+1)}\lim_{M\to \infty}\sum_{k_0\in\mathcal{D}_{\beta,M}}\sum_{k\in\mathcal{D}_{2(L+1)}}e^{-i\bm k\cdot \bm x}\hat g(\bm k),
\end{equation}
where $\mathcal{D}_{2(L+1)}:=\left\{k=\frac{n\pi}{L+1}, n=-(L+1),\dots,L\right\}$ and $\mathcal D_{\beta,M}$ has already been defined in (\ref{momenta_space_time}).
\label{lemma_reflection_trick}
\end{lem}
\begin{proof}
Let us note first of all that
\begin{equation}
\hat{g}(-k,k_0)=\hat g(k,k_0).
\label{hat_g(k)_parity}
\end{equation}
So (\ref{free_propagator_DBC}) is
\begin{equation}
\begin{split}
g(\bm x,\bm y)=\frac{2}{\beta (L+1)}\lim_{M\to \infty}\sum_{\bm k \in\mathcal{D}^d_{\Lambda,\beta,M}}e^{-ik_0(x_0-y_0)}\sin(kx)\sin(k y)\hat g(\bm k)=\\
=\frac{2}{\beta (L+1)}\lim_{M\to \infty}\sum_{k_0\in\mathcal{ D}_{\beta,M}}e^{-ik_0(x_0-y_0)}\cdot \\ \cdot \sum_{k \in\mathcal{D}^d_{\Lambda}}\frac{e^{ik(x-y)}+e^{-ik(x-y)}-e^{ik(x+y)}-e^{-ik(x+y)}}{4}\hat g(\bm k).
\end{split}
\end{equation}
Using formula (\ref{hat_g(k)_parity}) and the fact that the argument of the second sum vanishes if $k\in \pi\mathbb Z$, {\it i.e.} if $n\in (L+1)\mathbb Z$, we can rewrite
\begin{equation}
\begin{split}
g(\bm x,\bm y)= \frac{1}{\beta 2(L+1)}\lim_{M\to \infty}\sum_{k_0\in\mathcal{D}_{\beta,M}}e^{-ik_0(x_0-y_0)}\sum_{k\in\mathcal{D}_{2(L+1)}}e^{-i k(x-y)}\hat g(\bm k)+\\
- \frac{1}{\beta 2(L+1)}\lim_{M\to \infty}\sum_{k_0\in\mathcal{D}_{\beta,M}}e^{-ik_0(x_0-y_0)}\sum_{k\in\mathcal{D}_{2(L+1)}}e^{-i k(x+y)}\hat g(\bm k)=\\
=: g_{2(L+1)}(x-y, x_0-y_0)-g_{2(L+1)}(x+y, x_0-y_0).
\end{split}
\end{equation}
\label{proof_lemma_replicas}
\end{proof}
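The reflection rule can be verified numerically in its static analogue, dropping the time variable: the Dirichlet sine kernel equals the difference of two translation invariant kernels on the doubled box of size $2(L+1)$. Here $\hat g$ is replaced by a hypothetical even, nonsingular test function, playing the role of the propagator at fixed $k_0$.

```python
# Numerical check of the reflection rule (static analogue): the Dirichlet
# sine kernel equals the difference of two 2(L+1)-periodic kernels.
import numpy as np

L = 7
def ghat(k):                      # hypothetical even, nonsingular test function
    return 1.0 / (2.0 - np.cos(k))

ks_d = np.pi * np.arange(1, L + 1) / (L + 1)          # Dirichlet momenta
ks_p = np.pi * np.arange(-(L + 1), L + 1) / (L + 1)   # doubled-box momenta

def g_dirichlet(x, y):
    return 2.0 / (L + 1) * np.sum(np.sin(ks_d * x) * np.sin(ks_d * y) * ghat(ks_d))

def g_periodic(x):                # 2(L+1)-periodic translation invariant kernel
    return np.real(np.sum(np.exp(-1j * ks_p * x) * ghat(ks_p))) / (2 * (L + 1))

for x in range(1, L + 1):
    for y in range(1, L + 1):
        assert np.isclose(g_dirichlet(x, y), g_periodic(x - y) - g_periodic(x + y))
```

The check relies on exactly the manipulations of the proof: $\sin(kx)\sin(ky)=\frac{1}{2}\left(\cos k(x-y)-\cos k(x+y)\right)$, the evenness of $\hat g$, and the vanishing of the summand at $k\in\pi\mathbb Z$.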
Let us call
\begin{eqnarray}
g_{2(L+1)}(x-y, x_0-y_0):=g_P(\bm x, \bm y),
\label{propagators_P_definition_noscales}\\
-g_{2(L+1)}(x+y, x_0-y_0):=g_R(\bm x, \bm y),
\label{propagators_PR_definition_noscales}
\end{eqnarray}
where $P$ stands for {\it periodic}, referring to the $2(L+1)$ periodicity in the real-space direction, while $R$ stands for {\it remainder} (we will clarify why it is a remainder after the multiscale decomposition), in such a way that
\begin{equation}
g(\bm x,\bm y)=\sum_{\sigma\in\{P,R\}}g_\sigma(\bm x,\bm y).
\label{propagator_DBC_as_sum_PR}
\end{equation}
\begin{rem}
Since the parameter $L$ enters only in the real-space component of the problem, from now on, whenever we mention the $2(L+1)-$periodicity, it will stand for ``$2(L+1)-$periodicity in the real-space direction'', even when not explicitly specified.
\end{rem}
\begin{rem}
Following the same ideas used in the proof of Lemma (\ref{lemma_reflection_trick}) we can rewrite, $\forall \bm x,\bm y\in \Lambda\times \left[ 0,\beta \right)$,
\begin{equation}
g(\bm x,\bm y)=\sum_{n\in\mathbb {Z}}(-1)^n g_{\infty}(\bm x-r_n\bm y),
\end{equation}
where $g_{\infty}$ is the propagator of a system described by a hopping Hamiltonian defined on $\mathbb Z$ (so translation invariant) and the operator $r_n: \Lambda \times \left[0,\beta\right)\to \mathbb Z \times \left[0,\beta\right)$ is defined as follows
\begin{equation}
\begin{split}
r_ny&=\begin{cases}
y+n(L+1) \mbox{ if } n \mbox{ is even},\\
-y+(1+n)(L+1) \mbox{ if } n \mbox{ is odd},
\end{cases}\\
r_ny_0&=y_0 \hspace{3mm} \forall n \in \mathbb{Z}.
\end{split}
\end{equation}
\end{rem}
\begin{rem}
\label{antisymmertic_reflection_remark}
It is worth noting that we could have obtained the same result acting directly on the Grassmann variables $\hat \psi^{\pm}_{\bm k}, \bm k \in \mathcal D_{\Lambda}\times\mathcal D_{\beta}$. Indeed, let us imagine extending the Grassmann variables to $\mathcal D_{2(L+1)}$, defining $\hat \psi^{\pm}_{2(L+1)}(\bm k)$ in such a way that
\begin{equation}
\begin{cases}
\hat \psi^{\pm}_{2(L+1)}(k,k_0)=\hat\psi^{\pm}(k,k_0), \mbox{ if } k \in\{ k \in \mathcal D_{2(L+1)}: k> 0\}\equiv \mathcal{D}_{\Lambda},\\
\hat \psi^{\pm}_{2(L+1)}(k,k_0)=-\hat \psi^{\pm}_{2(L+1)}(-k,k_0), \mbox{ if } k \in\{ k \in \mathcal D_{2(L+1)}: k\leq 0\}.
\end{cases}
\label{antisymmetric_reflection_particles}
\end{equation}
Because of this symmetry property, from now on, with a little abuse of notation, we will call all the momentum-space Grassmann variables $\hat \psi^{\pm}_{\bm k}$, and we will take care to specify the domain of $\bm k$ in order to distinguish the original variables from the extended ones.
\end{rem}
\section{Interacting case}
\label{section_the_interacting_case_DBC}
\subsection{Trotter's formula and Grassmann integration}
\paragraph{Formal perturbation theory}
After switching on the interaction, the first step is to derive a {\it formal perturbation theory} for the specific free energy: we want to compute the generic perturbative order in $\lambda$ of
$$f_{\Lambda,\beta}:=-\frac{1}{|\Lambda|\beta}\log \left(Tr \left(e^{-\beta H}\right)\right).$$
Recalling that $H=H_0+\lambda V+\nu N+\varpi\mathcal N=: H_0+ U$, where after the substitution $\mu_0=\mu+\nu$ we re-define $H_0=T_0-\mu N$, we use Trotter's product formula
\begin{equation}
\label{trotter's_formula}
e^{-\beta H}=\lim_{n\to \infty} \left[e^{-\beta H_0/n}\left(1-\frac{\beta}{n} U \right)\right]^n
\end{equation}
so that, if we define $$U(t):=e^{t H_0}U e^{-t H_0},$$
we can rewrite
\begin{equation}
\begin{split}
\frac{Tr\left(e^{-\beta H}\right)}{Tr\left(e^{-\beta H_0}\right)}
=1+\sum_{N\geq 1}\left(-1\right)^N \int_0^\beta dt_1\int_0^{t_1} dt_2 \dots \int_0^{t_{N-1}}dt_N \frac{Tr \left(e^{-\beta H_0}U(t_1)\dots U(t_N)\right)}{Tr\left(e^{-\beta H_0}\right)}.
\end{split}
\end{equation}
The {\it fermionic time-ordering operator} allows us to further rewrite
\begin{equation}
\frac{Tr\left(e^{-\beta H}\right)}{Tr\left(e^{-\beta H_0}\right)}=1+\sum_{N\geq 1}\frac{\left(-1\right)^N}{N!}\left<\bm T \left(\left( U_\beta(\psi)\right)^N\right)\right>_{0,\Lambda,\beta},
\label{expansion_trotter_formula}
\end{equation}
where again $\left<\cdot \right>_{0,\Lambda,\beta}=Tr\left(e^{-\beta H_0}\cdot\right)/Tr\left(e^{-\beta H_0}\right)$, and
\begin{equation}
\begin{split}
U_\beta(\psi)=\lambda \int_{[0,\beta)}d x_0 \sum_{x\in\Lambda}\int_{[0,\beta)}d y_0 \sum_{y\in\Lambda} \psi^+_{\bm x}\psi^-_{\bm x}v(x,y)\delta_{x_0,y_0}\psi^{+}_{\bm y}\psi^-_{\bm y}+\\
+\varpi \int_{[0,\beta)}d x_0 \sum_{x\in\Lambda}\int_{[0,\beta)}d y_0 \sum_{y\in\Lambda} \psi^+_{\bm x} \pi(x,y)\delta_{x_0,y_0}\psi^{-}_{\bm y}+ \nu\int_{[0,\beta)}dx_0\sum_{x\in \Lambda}\psi^+_{\bm x}\psi^-_{\bm x}.
\end{split}
\end{equation}
Now, the $N$-th term of (\ref{expansion_trotter_formula}) can be computed by the {\it fermionic Wick rule} knowing explicitly the free propagator, and following the Feynman rules.
\subparagraph{Feynman rules}
\begin{figure}
\begin{center}
\begin{tikzpicture}
[thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\node at (-2,1.4) {{\bf x}};
\node at (0,1.4) {{\bf y}};
\fill (-2,1) circle (0.06);
\fill (0,1) circle (0.06);
\draw [postaction={decorate}] (-3,0) -- ++(1,1);
\draw [postaction={decorate}](-3,0) ++ (1,1)-- ++ (-1,1);
\draw [postaction={decorate}] (-3,0) ++ (1,1)++ ( 2,0) ++(1,1) ++ (-1,-1) --++ (1,-1);
\draw [postaction={decorate}] (-3,0) ++ (1,1)++ ( 2,0) ++(1,1) ++ (-1,-1) ++ (1,-1) ++ (-1,1) -- ++(1,1);
\draw [-,decorate,decoration=snake] (-3,0) ++ (1,1) ++ (-1,1)++ (1,-1) -- ++(2,0);
\node at (-4,3.3) {\bf x};
\fill (-4,3) circle (0.1);
\draw [postaction={decorate}] (-5,3) -- ++ (1,0);
\draw [postaction={decorate}] (-4,3) -- ++ (1,0);
\node at (1,3.3) {\bf x};
\fill (1,3) circle (0.06);
\node at (3,3.3) {\bf y};
\fill (3,3) circle (0.06);
\draw [postaction={decorate}] (0,3) -- ++ (1,0);
\draw [-,decorate,decoration={coil, aspect=2}] (0,3) ++ (1,0) --++ (2,0);
\draw [postaction={decorate}](0,3) ++ (1,0) ++ (2,0) --++(1,0);
\end{tikzpicture}
\end{center}
\caption{Graph elements; note that there is one more graph element than in (\ref{figure_graph_elements_PBC}), which represents the {\it boundary defect} $\varpi \int_{[0,\beta)} dx_0dy_0\sum_{x,y\in\Lambda}\psi_{\bm x}^+\pi(x,y)\delta_{x_0,y_0}\psi_{\bm y}^-$.}
\label{figure_graph_elements_DBC}
\end{figure}
In order to compute $\left<\bm T\left(\left(U_\beta(\psi)\right)^N\right)\right>_0$, it is easy to check that one can follow these steps:
\begin{itemize}
\item $\forall k,h,l$ such that $0 \leq k,h,l\leq N $ and $k+h+l=N$, draw $k$ graph elements consisting of {\it four legged vertices}, $l$ graph elements consisting of {\it two legged local vertices} and $h$ graph elements consisting of {\it two legged non-local vertices}, with the vertices associated to labels $\bm x_i$, $i=1,\dots,N$, in such a way that each {\it four legged vertex} is composed by two entering and two exiting fields, while each {\it two legged vertex} is associated with one exiting and one entering leg; in the case of the {\it local vertices} the two lines touch the same point (one line enters the point from which the other exits), while in the case of the non-local vertices the two legs are attached to the endpoints of a further graph element $(\bm x,\bm y)$ (Figure (\ref{figure_graph_elements_DBC}));
\item pair the fields in all possible ways, in such a way that every pair is obtained by contracting an entering and an exiting leg;
\item associate to every pairing the {\it right sign}, which is the sign of the permutation needed to bring every pair of contracted fields next to each other;
\item associate to every linked pair of fields $\left(\psi^-(\bm x_i),\psi^+(\bm x_j)\right)$ an {\it oriented} line connecting the $i-$th with the $j-$th vertex, oriented from $j$ to $i$ ({\it i.e.} from $+$ to $-$ field);
\item associate to every oriented line from $j$ to $i$ value $g(\bm x_i,\bm x_j)$ given by (\ref{free_propagator_DBC});
\item associate to every configuration of pairings, which is called a {\it Feynman graph}, a value equal to the product of the sign of the pairing, times $\lambda^k\varpi^h\nu^l$, times the product of the values of all the oriented lines (see, for instance, Figure (\ref{figure_second_order_feynman_graph}));
\item integrate over $\bm x_i$, then perform the sum over all the possible pairings, over $k, h, l$ and over $N$;
\end{itemize}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
[scale=0.8, transform shape, thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\fill (-1,0) circle (0.06);
\fill (2,0) circle (0.06);
\fill (-3,0) circle (0.06);
\fill (4,0) circle (0.06);
\draw [postaction={decorate}] (-4,1) -- ++(1,-1);
\draw [postaction={decorate}] (-4,-1)--++ (1,1);
\draw [-,decorate, decoration={snake}] (-3,0) -- ++(2,0);
\draw [postaction={decorate}] (-1,0) to [out=90, in=90, looseness=1] (2,0);
\draw [postaction={decorate}] (-1,0) to [out=-90, in=-90, looseness=1] (2,0);
\draw [-,decorate, decoration={snake}] (2,0) -- ++(2,0);
\draw [postaction={decorate}] (4,0) -- ++(1,-1);
\draw [postaction={decorate}] (4,0)--++ (1,1);
\end{tikzpicture}
\end{center}
\caption{Example of a second order Feynman graph, obtained by contracting two $\lambda$-type endpoints.}
\label{figure_second_order_feynman_graph}
\end{figure}
\paragraph{Grassmann integration}
As explained in the previous chapter, it is convenient to re-write the {\it free energy} and the {\it Schwinger functions} in terms of Grassmann integrals: first of all we introduce a finite set of {\it Grassmann variables} $\{\hat \psi^{\pm}_{\bm k}\}_{\bm k \in \mathcal D^d_{\Lambda,\beta, M}}$, hence we define the {\it Grassmann integration}
\begin{equation}
P_M(d\psi)=\left(\prod_{\bm k\in\mathcal D^d_{\Lambda,\beta,M}} \left(\frac{\beta(L+1)}{2}\hat g(\bm k)\right)\hat \psi^+_{\bm k}\hat \psi^-_{\bm k} \right) e^{-\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta,M}}\left(\frac{\beta(L+1)}{2}\hat g(\bm k)\right)^{-1}\hat \psi^+_{\bm k}\hat \psi^-_{\bm k}},
\end{equation}
and, by introducing the sine Fourier transform
\begin{equation}
\psi^{\pm}_{\bm x}=\frac{2}{\beta(L+1)}\sum_{\bm k\in \mathcal D^d_{\Lambda,\beta,M}}\hat \psi^\pm_{\bm k} e^{\pm ik_0x_0}\sin (kx),
\end{equation}
we can define the {\it integral}
\begin{equation}
\int P_M(d\psi) \psi^-_{\bm x}\psi^+_{\bm y}=\frac{2}{\beta(L+1)}\sum_{\bm k\in \mathcal D^d_{\Lambda,\beta,M}}\hat g(\bm k) e^{-ik_0(x_0-y_0)}\sin(kx)\sin(ky),
\end{equation}
while the average of any monomial in the Grassmann variables with respect to the Grassmann integration $P(d\psi)$ is given by the fermionic Wick rule with propagator $g(\bm x,\bm y)$. Using the definitions of Grassmann integration and the Feynman rules just described, it follows that
\begin{equation}
\frac{Tr\left(e^{-\beta H}\right)}{Tr\left(e^{-\beta H_0}\right)}=\int P(d\psi)e^{-\mathcal V(\psi)},
\end{equation}
where
\begin{equation}
\begin{split}
\mathcal V(\psi)=\lambda\int_{[0,\beta)}dx_0\sum_{x\in\Lambda}\int_{[0,\beta)}dy_0\sum_{y\in\Lambda}\psi^+_{\bm x}\psi^-_{\bm x}v(x,y)\delta_{x_0,y_0}\psi^+_{\bm y}\psi^-_{\bm y}+\\
+\varpi\int_{[0,\beta)}dx_0\sum_{x\in\Lambda}\int_{[0,\beta)}dy_0\sum_{y\in\Lambda}\psi^+_{\bm x}\pi(x,y)\delta_{x_0,y_0}\psi^-_{\bm y}+\nu\int_{[0,\beta)}dx_0\sum_{x\in\Lambda}\psi^+_{\bm x}\psi^-_{\bm x}.
\end{split}
\end{equation}
\begin{rem}
Starting from that, we can repeat formally the same construction as the previous chapter, getting first of all the formal equation for the free energy (\ref{free_energy_as_sum_of_trunc_expec}) that we recall
\begin{equation*}
f_{\Lambda,\beta}=-\frac{1}{|\Lambda|\beta}\sum_{N\geq 1}\frac{(-1)^N}{N!}\mathcal E^T(\mathcal V;N)=:\sum_{N\geq 1}f^{(N)}_{\Lambda,\beta}.
\end{equation*}
Of course, we can repeat the same argument we used to prove Lemma (\ref{lemma_bounds_no_multiscale_no_determinants}) to get the same rough bound. As in the previous chapter, we can solve the combinatorial problem by using the determinant expansion (the fact that the free propagator (\ref{free_propagator_DBC}) can be expressed as a proper scalar product is proved in Appendix (\ref{appendix_gram_representation}), looking only at the functions $A^{d}$ and $B^{d}$):
\begin{equation*}
f_{\Lambda,\beta}^{(N)}=-\frac{1}{\beta|\Lambda|}\frac{(-1)^N}{N!}\epsilon^N \sum_{T\in \mathcal T_N}\alpha_T\int d\bm x_1\dots d\bm x_N \prod_{\ell\in T} g_\ell\int dP_T(\bm t)\det G^T(\bm t).
\end{equation*}
In order to solve the divergence problem arising when we take the infinite volume limit, we introduce a multiscale expansion.
\end{rem}
As we already commented in the previous chapter, with a little abuse of notation we called $f_{\Lambda,\beta}$ the difference between the specific free energy of the interacting system and the one of the free system.
\subsection{Multiscale decomposition}
\paragraph{Multiscale decomposition and quasi-particles}
\label{ingoing_outgoing_quasiparticles_subsection}
Let us recall the result of Lemma \ref{lemma_reflection_trick}:
\begin{equation*}
g(\bm x,\bm y)=g_P(\bm x,\bm y)+g_R(\bm x,\bm y),
\end{equation*}
where $g_P$ and $g_R$ have been defined in (\ref{propagators_P_definition_noscales}) and (\ref{propagators_PR_definition_noscales}), both starting from the periodic propagator on the extended box:
\begin{equation}
g_{2(L+1)}(\bm x)=\frac{1}{\beta2(L+1)}\lim_{M\to \infty}\sum_{\bm k\in\mathcal{D}_{2(L+1)}\times\mathcal{D}_{\beta,M}} e^{-i\bm k\cdot \bm x}\hat g(\bm k).
\label{free_propagator_2(L+1)_PBC)}
\end{equation}
Given that, we can separately perform, on both the propagators $g_P$ and $g_R$, first of all the multiscale decomposition and then the quasi-particle decomposition.
\subparagraph{Infrared and ultraviolet regime}
First of all, let us introduce a smooth function $\chi\in C^{\infty}(\mathbb R^2;[0,1])$ defined in such a way that
\begin{equation}
\chi(\bm k')=
\begin{cases}
1, \mbox{ if } |\bm k'|\leq \gamma^{-1} p_F/2 ,\\
0, \mbox{ if } |\bm k'|\geq p_F/2,
\end{cases}
\label{cut_off_chi_definition_DBC}
\end{equation}
where $\gamma >1$ and $|\bm k'|=\sqrt{k_0^2+k^2}$. We know that $\cos p_F-\cos k=0$ if $k=\pm p_F\in \mathcal{D}_{2(L+1)}$, so we rewrite the propagator as:
\begin{equation}
\hat g(\bm k)= \frac{1-\chi(k_0,k+p_F)-\chi(k_0,k-p_F)}{-ik_0+\cos p_F-\cos k}+\frac{\chi(k_0,k+p_F)+\chi(k_0,k-p_F)}{-ik_0+\cos p_F-\cos k}
\end{equation}
defining the {\it ultraviolet} and the {\it infrared} propagator, respectively $\hat g^{(u.v)}$ and $\hat g^{(i.r.)}$:
\begin{eqnarray}
\hat g^{(u.v.)}(\bm k)=\frac{1-\chi(k_0,k+p_F)-\chi(k_0,k-p_F)}{-ik_0+\cos p_F-\cos k},
\label{ultraviolet_momentum_propagator}
\\
\hat g^{(i.r.)}(\bm k)=\frac{\chi(k_0,k+p_F)+\chi(k_0,k-p_F)}{-ik_0+\cos p_F-\cos k}.
\label{infrared_momentum_propagator}
\end{eqnarray}
This decomposition induces a natural decomposition in {\it real space}
\begin{equation}
\begin{split}
g^{(u.v.)}_P(\bm x, \bm y)=\frac{1}{\beta 2(L+1)} \lim_{M\to \infty}\sum_{\substack{\bm k \in \mathcal D_{2(L+1)}^{\beta,M}}} e^{-i k_0(x_0-y_0)} e^{-i k(x-y)}\hat g^{(u.v.)}(\bm k),\\
g^{(i.r.)}_P(\bm x, \bm y)=\frac{1}{\beta 2(L+1)} \lim_{M\to \infty}\sum_{\substack{\bm k \in \mathcal D_{2(L+1)}^{\beta,M}}} e^{-i k_0(x_0-y_0)} e^{-i k(x-y)}\hat g^{(i.r.)}(\bm k),\\
g^{(u.v.)}_R(\bm x, \bm y)=\frac{1}{\beta 2(L+1)} \lim_{M\to \infty}\sum_{\substack{\bm k \in \mathcal D_{2(L+1)}^{\beta,M}}} e^{-i k_0(x_0-y_0)} e^{-i k(x+y)}\hat g^{(u.v.)}(\bm k),\\
g^{(i.r.)}_R(\bm x, \bm y)=\frac{1}{\beta 2(L+1)} \lim_{M\to \infty}\sum_{\substack{\bm k \in \mathcal D_{2(L+1)}^{\beta,M}}} e^{-i k_0(x_0-y_0)} e^{-i k(x+y)}\hat g^{(i.r.)}(\bm k).
\end{split}
\end{equation}
where we defined $\mathcal D_{2(L+1)}^{\beta,M}:=\mathcal D_{2(L+1)}\times \mathcal D_{\beta,M},$
and in particular we can introduce the label $\sigma$ in such a way that:
\begin{equation}
\begin{split}
g^{(u.v.)}(\bm x,\bm y)=\sum_{\sigma\in\{P,R\}}g_\sigma^{(u.v.)}(\bm x,\bm y),\\
g^{(i.r.)}(\bm x,\bm y)=\sum_{\sigma\in\{P,R\}}g_\sigma^{(i.r.)}(\bm x,\bm y).
\end{split}
\end{equation}
Using the addition principle (\ref{addition_principle}) we introduce two different sets of Grassmann fields $\{\psi_{\bm x}^{(u.v.)\pm}\}$ and $\{\psi_{\bm x}^{(i.r.)\pm}\}$, with $\bm x\in \Lambda\times [0,\beta)$, and the Gaussian integrations $P_M(d\psi^{(u.v.)})$ and $P_M(d\psi^{(i.r.)})$, in such a way that
\begin{equation}
\begin{split}
\int P(d \psi^{(u.v.)})\psi^{(u.v.)-}_{\bm x}\psi^{(u.v.)+}_{\bm y}= g^{(u.v.)}(\bm x,\bm y),\\
\int P(d \psi^{(i.r.)})\psi^{(i.r.)-}_{\bm x}\psi^{(i.r.)+}_{\bm y}= g^{(i.r.)}(\bm x,\bm y),
\end{split}
\end{equation}
implying that
\begin{equation}
\int P(d\psi) e^{-\mathcal V(\psi)}=\int P(d\psi^{(i.r.)})\int P(d\psi^{(u.v.)}) e^{-\mathcal V\left(\psi^{(i.r.)}+\psi^{(u.v.)}\right)},
\end{equation}
so that
\begin{equation}
\begin{split}
e^{-\beta |\Lambda| f^{(M)}_{\Lambda,\beta}}=\int P(d\psi^{(i.r.)}) \exp \left( \sum_{n\geq 1}\frac{1}{n!}\mathcal E_{u.v.}^T\left(-\mathcal V\left(\psi^{(i.r.)}+\cdot\right);n\right)\right)=:\\ =: e^{-\beta |\Lambda| e_{M,0}}\int P(d\psi^{(i.r.)})e^{-\mathcal V_0(\psi^{(i.r.)})},
\end{split}
\end{equation}
where the effective potential $\mathcal V_0(\psi)$ can be written as
\begin{equation}
\mathcal V_0(\psi)=\sum_{n=1}^{\infty}\sum_{\substack{ \bm x_1,\dots,\bm x_{2n}\\ \in\\ \Lambda\times [0,\beta)}} \left(\prod_{j=1}^{n} \psi^{(i.r.)+}_{\bm x_{2j-1}}\psi^{(i.r.)-}_{\bm x_{2j}} \right) W_{M,2n} (\bm x_1,\dots,\bm x_{2n}).
\label{effective_potential_scale_0_DBC}
\end{equation}
\begin{lem}[Ultraviolet integration]
The kernels $W_{M,2n}(\bm x_1,\dots, \bm x_{2n})$ in the previous expansion are given by power series in $\lambda$ convergent in the complex disc $|\lambda|\leq \lambda_0$ for $\lambda_0$ small enough and independent of $M,\Lambda, \beta$, and satisfy the following bound
\begin{equation}
\frac{1}{\beta |\Lambda|}\int d\bm x_1 \dots d\bm x_{2n} \left| W_{M,2n}(\bm x_1,\dots,\bm x_{2n}) \right|\leq C^n |\lambda|^{\max\{1,n-1\}}.
\end{equation}
Moreover, the limits $e_0=\lim_{M\to \infty} e_{M,0}$ and $W_{2n}(\bm x_1,\dots,\bm x_{2n})=\lim_{M\to \infty}W_{M,2n}(\bm x_1,\dots,\bm x_{2n})$ exist and are reached uniformly in $M$.
\end{lem}
\begin{rem}
The fact that the limits are reached {\it uniformly} in $M$ tells us that the infrared problem is essentially independent of $M$. Since in the infrared region $M$ does not play any role, from now on we drop the label $M$.
\end{rem}
As in the previous chapter, we will not prove this Lemma, and we refer, for instance, to \cite{benfatto1993beta}.
\paragraph{Multiscale expansion of the infrared scales and quasi-particles}
\subparagraph{Infrared regime and quasi-particles}
After having integrated the ultraviolet degrees of freedom, we are left with the {\it infrared propagator}
\begin{equation}
\hat g^{(i.r.)}(\bm k)=\frac{\chi(k_0,k+p_F)+\chi(k_0,k-p_F)}{-ik_0+\cos p_F-\cos k}=:\sum_{\omega\in\{\pm1\}}\hat g^{(i.r.)}_{\omega}(\bm k),
\end{equation}
where
\begin{equation}
\hat{g}^{(i.r.)}_{\omega}(\bm k):=\frac{\chi(k_0,k-\omega p_F)}{-ik_0+\cos p_F-\cos k}.
\end{equation}
We can now define the {\it infrared propagator in real space-time}
\begin{equation}
\begin{split}
g^{(i.r.)}_{2(L+1)}(\bm x)=\frac{1}{\beta2(L+1)}\sum_{\omega\in\{\pm 1\}}\sum_{\bm k\in \mathcal{D}_{2(L+1)}\times\mathcal{D}_{\beta}}e^{-i\bm k\bm x}\frac{\chi(k_0,k-\omega p_F)}{-ik_0+\cos p_F-\cos k}=\\
=\frac{1}{\beta2(L+1)}\sum_{\omega\in\{\pm 1\}}\sum_{\bm k'\in \mathcal{D}^{\omega}_{2(L+1)}\times\mathcal{D}_{\beta}}e^{-i\omega p_F x}e^{-i\bm k'\bm x}\hat g^{(i.r.)}_\omega(\bm k'),
\end{split}
\end{equation}
where we have performed the change of variables $k-\omega p_F=k'$, so $\mathcal{D}^{\omega}_{2(L+1)}=\mathcal{D}_{2(L+1)}-\omega p_F$ and
\begin{equation}
\hat g^{(i.r.)}_\omega(\bm k')=\hat g^{(i.r.)}_{\omega}(k'+\omega p_F, k_0).
\end{equation}
Finally, we can define
\begin{equation}
g^{(i.r.)}_{2(L+1)}(\bm x)=\sum_{\omega\in\{\pm 1\}}e^{-i\omega p_F x}g^{(i.r.)}_{2(L+1),\omega}(\bm x),
\end{equation}
where
\begin{equation}
g^{(i.r.)}_{2(L+1),\omega}(\bm x)=\frac{1}{\beta2(L+1)}\sum_{\bm k'\in \mathcal{D}^{\omega}_{2(L+1)}\times\mathcal{D}_{\beta}}e^{-i\bm k'\bm x}\hat g^{(i.r.)}_\omega(\bm k').
\end{equation}
\begin{rem}
From the latter expression we understand the behaviour of the propagator near the singularity. Indeed, since $\cos p_F-\cos(k'+\omega p_F)=\omega k'\sin p_F+O(k'^2)$, when the momentum is close to the Fermi momentum $p_F$ ({\it i.e.} $k'\sim 0$) we have $g^{(i.r.)}_{\omega}(\bm k')\sim \frac{\chi(k_0,k')}{-ik_0+\omega k' \sin p_F}$, that is, a quasi-linear dispersion near the singularity.
\end{rem}
So, we can finally decompose the propagator $g^{(i.r.)}(\bm x,\bm y)$ as
\begin{equation}
\begin{split}
g^{(i.r.)}(\bm x,\bm y)=g_P^{(i.r.)}(\bm x,\bm y)+g_R^{(i.r.)}(\bm x,\bm y)=\\
=\sum_{\omega=\pm}e^{-i\omega p_F(x-y)}g_{P,\omega}^{(i.r.)}(\bm x,\bm y)+\sum_{\omega=\pm}e^{-i\omega p_F(x+y)}g_{R,\omega}^{(i.r.)}(\bm x,\bm y),
\end{split}
\end{equation}
where
\begin{eqnarray}
g^{(i.r)}_{P,\omega}(\bm x, \bm y):=g^{(i.r.)}_{2(L+1),\omega}(x-y, x_0-y_0),
\label{propagators_P_i.r._defn}\\
g^{(i.r)}_{R,\omega}(\bm x, \bm y):=g^{(i.r.)}_{2(L+1),\omega}(x+y, x_0-y_0),
\label{propagators_PR_i.r._defn}
\end{eqnarray}
that suggests to rewrite the propagator as
\begin{equation}
g^{(i.r.)}(\bm x, \bm y)=\sum_{\sigma\in\{P,R\}}\sum_{\omega=\pm}g^{(i.r.)}_{\sigma,\omega}(\bm x,\bm y)\left(e^{-i\omega p_F(x-y)}\delta_{\sigma,P}+e^{-i\omega p_F(x+y)}\delta_{\sigma,R}\right).
\label{propagator_DBC_as_sum_of_propagators_sigma_omega_infrared}
\end{equation}
Using the {\it addition property} of Gaussian Grassmann measures (\ref{addition_principle}), we can split
\begin{equation}
\psi^{(i.r.)\pm}_{\bm x}=\sum_{\sigma\in\{P,R\}}\sum_{\omega=\pm}e^{\mp ip_F \omega x}\psi^{(i.r.)\pm}_{\sigma,\omega,\bm x},
\label{quasi_particles_decomposition_infrared_DBC}
\end{equation}
associated with the Feynman contraction rule
\begin{equation}
\left<\psi^{(i.r.)-}_{\sigma,\omega}(\bm x)\psi^{(i.r.)+}_{\sigma',\omega'}(\bm y)\right>=g^{(i.r.)}_{\sigma,\omega}(\bm x,\bm y)\delta_{\sigma,\sigma'}\left(\delta_{\omega,\omega'}\delta_{\sigma,P}+\delta_{\omega,-\omega'}\delta_{\sigma,R} \right).
\label{feynman_rules_propagators_sigma_omega_infrared}
\end{equation}
\subparagraph{Decomposition on scales $h\leq 0$}
Once we have defined the infrared scale, we can take a step further and rewrite the propagators $\{g^{(i.r.)}_{\sigma,\omega}(\bm x,\bm y)\}_{\sigma\in \{P,R\}}^{\omega=\pm}$ as an infinite sum of {\it single-scale propagators}, which we now define. The only trick we use is rewriting the cutoff function $\chi$ as the telescopic series:
\begin{equation}
\chi\left(\bm k'\right)=\sum_{h\leq 0}\left[\chi\left(\gamma^{-h}\bm k'\right)-\chi\left(\gamma^{-h+1}\bm k'\right)\right]=:\sum_{h\leq 0}f_h(\bm k').
\end{equation}
Using it in the definition of the infrared quasi-particle propagator we get
\begin{equation}
\hat g_{\omega}^{(i.r.)}(\bm k')=\frac{\chi(k_0,k')}{-ik_0+\cos p_F-\cos (k'+\omega p_F)}=\sum_{h\leq 0}f_{h}(\bm k')\hat g_{\omega}(\bm k') =:\sum_{h\leq 0}\hat g^{(h)}_{\omega}(\bm k').
\label{propagator_decomposition_scale_h_momentum_space}
\end{equation}
A direct consequence of this decomposition is that we can define
\begin{equation}
\hat g_{\omega}^{(\leq h)}(\bm k')=\sum_{j\leq h}\hat g_{\omega}^{(j)}(\bm k').
\label{propagator_leq_h_definition_momentum_space}
\end{equation}
It is appropriate now to introduce the real space-time representation of the single-scale propagators by Fourier transforming $\hat g^{(h)}_\omega(\bm k')$,
\begin{equation}
g_{2(L+1),\omega}^{(h)}(\bm x)=\frac{1}{\beta 2 (L+1)}\sum_{\bm k'\in\mathcal{D}^{\omega}_{2(L+1)}\times\mathcal{D}_{\beta}}e^{-i\bm k'\cdot \bm x}\hat g^{(h)}_{\omega}(\bm k'),
\label{propagator_single_scale_real_space}
\end{equation}
which implies, of course,
\begin{equation}
\label{g^d(h)_definition}
g^{(h)}(\bm x,\bm y)=g_P^{(h)}(\bm x,\bm y)+g_R^{(h)}(\bm x,\bm y)
\end{equation}
and, in a way formally analogous to what we did in the infrared region:
\begin{equation}
g^{(h)}(\bm x,\bm y)=\sum_{\sigma\in\{P,R\}}\sum_{\omega\in \pm}g^{(h)}_{\sigma,\omega}(\bm x, \bm y)\left(e^{-i p_F\omega (x-y)}\delta_{\sigma,P}+e^{-i p_F\omega (x+y)}\delta_{\sigma,R}\right),
\label{gd_sum_of_quasi_particle_propagators}
\end{equation}
where $g^{(h)}_{\sigma,\omega}$ is the analogue of the propagators appearing in (\ref{propagators_PR_i.r._defn}), with $\chi_{i.r.}(\bm k)$ replaced by $f_h(\bm k)$. Of course, we can introduce the quasi-particle fields analogously to what we did in the infrared case (\ref{quasi_particles_decomposition_infrared_DBC}):
\begin{equation}
\psi_{\bm x}^{(h)\pm}=\sum_{\omega=\pm}\sum_{\sigma\in\{P,R\}}e^{\mp i\omega p_F x}\psi^{(h)\pm}_{\sigma,\omega,\bm x},
\label{quasi_particles_decomposition_scale_h_DBC}
\end{equation}
contracting with the Feynman contraction rule
\begin{equation}
\left<\psi^{(h)-}_{\sigma,\omega,\bm x}\psi^{(h)+}_{\sigma',\omega',\bm y}\right>=g^{(h)}_{\sigma,\omega}(\bm x, \bm y)\delta_{\sigma,\sigma'}\left(\delta_{\omega,\omega'}\delta_{\sigma,P}+\delta_{\omega,-\omega'}\delta_{\sigma,R} \right).
\label{feynman_rules_scale_h_DBC}
\end{equation}
Finally, it is useful to decompose:
\begin{eqnarray}
g^{(\leq h)}(\bm x,\bm y)=\sum_{j\leq h}g^{(j)}(\bm x,\bm y),\\
g^{(\leq h)}_{\sigma,\omega}(\bm x,\bm y)=\sum_{j\leq h}g^{(j)}_{\sigma,\omega}(\bm x,\bm y).
\label{propagator_scale_decomposition_DBC_sigma_omega}
\end{eqnarray}
\begin{rem}
When we switch from the original representation in terms of the Grassmann variables $\{\psi^{\pm}\}$ to the {\it quasi-particle representation} in terms of $\{\psi^{\pm}_{\sigma,\omega}\}$, we break the Dirichlet boundary conditions: indeed, the only information we get from (\ref{remark_quasi_particles_issue_DBC}), knowing that $\psi^{\pm}(0, x_0)=\psi^{\pm}(L+1,x_0)=0$ for all $x_0\in \left[0,\beta\right)$, is
\begin{equation}
\begin{split}
\sum_{\omega}\sum_{\sigma}\psi^{\pm}_{\sigma, \omega}(0,x_0)=0,\\
\sum_{\omega}\sum_{\sigma}e^{\mp ip_F\omega (L+1)}\psi^{\pm}_{\sigma, \omega}(L+1,x_0)=0.\\
\end{split}
\end{equation}
\end{rem}
\subsection{Properties of single-scale free propagators}
\label{subsection_estimates_on_single-scale_free_propagator}
\paragraph{Estimates}
The multiscale analysis involves the norms of the single-scale propagators $g^{(h)}$, so it is useful to note that, in the case of $g_{2(L+1),\omega}^{(h)}$, we can directly apply the result of Lemma (\ref{lemma_propagator_faster_any_power}), which we recall here:
\begin{equation}
\left| g^{(h)}_{2(L+1),\omega}(\bm x)\right|\leq \gamma^{h} \frac{C_N}{1+\left(\gamma^h|\bm x|\right)^N}, \hspace{3mm} \forall N\in \mathbb{N}.
\end{equation}
In subsection (\ref{ingoing_outgoing_quasiparticles_subsection}) we showed how, at each scale $h$, we can rewrite
\begin{equation}
g^{(h)}(\bm x,\bm y)=\sum_{\omega\in\{\pm 1\}}\left[e^{-i\omega p_F(x-y)}g_{P,\omega}^{(h)}(\bm x, \bm y)+e^{-i\omega p_F(x+y)}g_{R,\omega}^{(h)}(\bm x,\bm y)\right]
\label{propagator_decomposed_in_quasiparticles_DBC}
\end{equation}
where $g^{(h)}_{P,\omega}$ and $g^{(h)}_{R,\omega}$ are defined in terms of $g_{2(L+1),\omega}^{(h)}$, so, again for all $N\in\mathbb{N}$, we can estimate
\begin{equation}
\begin{cases}
| g_{P,\omega}^{(h)}(x-y,x_0-y_0)|\leq \gamma^h\frac{C_N}{1+\left(\gamma^h \left|(x-y,x_0-y_0)\right|\right)^N},\\
| g_{R,\omega}^{(h)}(x+y,x_0-y_0)|\leq \gamma^h\frac{C_N}{1+\left(\gamma^h \left|(x+y,x_0-y_0)\right|\right)^N}.
\end{cases}
\label{bounds_propagator_faster_than_any_power_DBC}
\end{equation}
\begin{corollary}
\label{corollary_norms_propagators_DBC}
Thanks to (\ref{bounds_propagator_faster_than_any_power_DBC}), we can bound the norms $||\cdot||_\infty$ and $||\cdot||_1$ of the quasi-particle propagators $\{g_{\sigma,\omega}\}_{\sigma\in\{P,R\}}^{\omega=\pm}$ as
\begin{eqnarray}
||g^{(h)}_{P,\omega}||_\infty:=\sup_{\bm x,\bm y}|g^{(h)}_{P,\omega}(\bm x,\bm y)|\leq C\gamma^h, \label{infty_norm_diagonal_DBC}\\
||g^{(h)}_{R,\omega}||_\infty:=\sup_{\bm x,\bm y}|g^{(h)}_{R,\omega}(\bm x,\bm y)|\leq C\gamma^h,\label{infty_norm_off_diagonal_DBC}\\
||g^{(h)}_{P,\omega}||_1:=\left|\frac{1}{2(L+1)\beta}\int d\bm x d\bm y g^{(h)}_{P,\omega}(\bm x,\bm y)\right|\leq C\gamma^{-h},\label{1_norm_diagonal_DBC}\\
||g^{(h)}_{R,\omega}||_1:=\left|\frac{1}{2(L+1)\beta}\int d\bm x d\bm y g^{(h)}_{R,\omega}(\bm x,\bm y)\right|\leq C\gamma^{h_L-h}\gamma^{-h},\label{1_norm_off_diagonal_DBC}
\end{eqnarray}
where ${h_L}:=\lfloor \log_\gamma \frac{1}{L+1}\rfloor$, {\it i.e.} $\gamma^{h_L}\sim \frac{1}{L+1}$ and $C>0$.
\end{corollary}
In light of this, we can define, for each $N=1,2,\dots$, a function that will be useful in what follows:
\begin{equation}
\label{definition_rho_h^N}
\rho^{(N)}_h(x)=\sup_{\substack{y\in\Lambda, \\ x_0,y_0\in [0,\beta)}}\frac{C_N}{\left(1+\gamma^h|(x+y,x_0-y_0)|\right)^N}\leq \frac{C_N}{(1+\gamma^hd_{L}(x))^N},
\end{equation}
where $d_L(x)=\min\{|x|,|x-L|\}$, so that
\begin{equation}
\left| g^{(h)}_R(\bm x,\bm y)\right|\leq ||g^{(h)}||_\infty \rho^{(N)}_h(x).
\label{bound_g_R_g_infty_rho}
\end{equation}
From now on, with a slight abuse of notation, we will denote
$$\frac{1}{(1+\gamma^hd_{L}(x))} =: \frac{1}{(1+\gamma^h|x|)}.$$
\begin{rem}
\label{remark_anchorage_property_norm_1_infty}
We will say that the dimensional gain $\gamma^{h_L-h}$ (scale jump) in (\ref{1_norm_off_diagonal_DBC}), with respect to (\ref{1_norm_diagonal_DBC}) and (\ref{norm_1_propagator}), is due to the {\it ``anchorage property''}: the propagator $g^{(h)}_{R,\omega}$ depends on the sum, not on the difference, of its space arguments, so it is anchored to the boundary and we can perform both integrals over the positions using the decay properties of the propagator, without picking up a volume factor.
\end{rem}
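The mechanism behind the anchorage property can be illustrated by a toy computation. In the Python sketch below we keep only the space direction and use a hypothetical decay profile mimicking the bound (\ref{bounds_propagator_faster_than_any_power_DBC}); the values of $\gamma$, $h$, $N$ and the box size are illustrative assumptions, not the ones of the text:

```python
# Toy illustration (space direction only) of the "anchorage property".
# f mimics the single-scale decay gamma^h / (1 + (gamma^h |u|)^N);
# gamma, h, N and the box size L are hypothetical choices.
gamma, h, N = 2.0, -4, 3
L = 200  # so that gamma^{h_L} ~ 1/L

def f(u):
    return gamma**h / (1.0 + (gamma**h * abs(u))**N)

# "P-type" kernel: depends on the difference x - y.
norm_P = sum(f(x - y) for x in range(1, L + 1) for y in range(1, L + 1)) / L
# "R-type" kernel: depends on the sum x + y, hence it is anchored to the
# boundary, and both position sums run over the decay of f: no volume factor.
norm_R = sum(f(x + y) for x in range(1, L + 1) for y in range(1, L + 1)) / L

# The R-type 1-norm is suppressed by roughly gamma^{-h} / L ~ gamma^{h_L - h}.
assert norm_R < norm_P
assert norm_R < 10 * (gamma**(-h) / L) * norm_P
```

The suppression factor observed numerically is the one-dimensional analogue of the scale jump $\gamma^{h_L-h}$ of (\ref{1_norm_off_diagonal_DBC}).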
\paragraph{Gram representation} The propagator $g^{(h)}_P(\bm x-\bm y)$ has already been studied in the previous chapter, but we still have to check that $g^{(h)}_R(\bm x,\bm y)$ also admits a Gram representation.
\begin{lem}[Gram estimate]
Let $M$ be a square matrix whose entries are $M_{ij}=\left<A_i, B_j\right>$, where $A_i$ and $B_j$ are vectors in a Hilbert space with scalar product $\left<\cdot,\cdot \right>$; then
\begin{equation}
|\det M|\leq \prod_{i}||A_i||\, ||B_i||,
\end{equation}
where $||\cdot ||$ is the norm induced by the scalar product.
\end{lem}
We do not prove this lemma, and we refer to \cite{gentile2001renormalization}, Theorem A.1.
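As a sanity check, the Gram bound can be verified on random vectors of a finite-dimensional Hilbert space. The following minimal sketch uses the standard inner product of $\mathbb R^{dim}$; the dimensions and the seed are arbitrary illustrative choices:

```python
import math
import random

# Random Gram-type matrix M_ij = <A_i, B_j> built from vectors of R^dim.
random.seed(0)
n, dim = 4, 6
A = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
B = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

M = [[dot(A[i], B[j]) for j in range(n)] for i in range(n)]

def det(m):
    """Laplace expansion along the first row (fine for small n)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Gram estimate: |det M| <= prod_i ||A_i|| ||B_i||.
bound = math.prod(math.sqrt(dot(A[i], A[i])) * math.sqrt(dot(B[i], B[i]))
                  for i in range(n))
assert abs(det(M)) <= bound
```

The bound holds for every choice of vectors, which is what makes it useful for controlling the determinants appearing in the fermionic expansion.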
\section{Non-renormalized expansion and properties of kernels}
\label{section_Non-renormalized expansion and properties of kernels}
By combining the multiscale expansion of the free propagator and the properties of the Grassmann integration, our goal is to compute
\begin{equation}
\begin{split}
e^{-\beta |\Lambda| f_{\Lambda,\beta}}=\int P(d \psi^{(\leq h)})\int P(d \psi^{(h+1)})\int P(d \psi^{(h+2)})\dots \int P(d\psi^{(0)})e^{-\mathcal V_0(\psi^{(0)}+\psi^{(\leq -1)})}=\\
=e^{-\beta |\Lambda| e_h}\int P(d\psi^{(\leq h)})e^{-\mathcal V^{(h)}(\psi^{(\leq h)})},
\end{split}
\end{equation}
where the Grassmann integration is
\begin{equation}
\begin{split}
P(d\psi^{(\leq h)})=\prod_{\sigma\in\{P,R\}}\prod_{\omega=\pm}\prod_{\bm x\in\mathcal{D}^{d}_{\Lambda,\beta}}\left(\left[ \det g^{(\leq h)}\right]^{-1} d\psi^{(\leq h)+}_{\sigma,\omega}(\bm x)\, d\psi^{(\leq h)-}_{\sigma,\omega} (\bm x)\right)\\ \exp\left[-\sum_{\sigma\in\{P,R\}}\sum_{\omega=\pm}\sum_{\bm x,\bm y\in \Lambda\times [0,\beta)}\psi^{(\leq h)+}_{\sigma,\omega}(\bm x)\left[g^{(\leq h)}\right]^{-1}_{\sigma,\omega}(\bm x,\bm y)\psi_{\sigma,\omega}^{(\leq h)-}(\bm y)\right],
\label{measure_quasi_particles_real_space_DBC}
\end{split}
\end{equation}
and the effective potentials can be written as:
\begin{equation}
\label{non_renormalized_effective_potential_PBD}
\mathcal V^{(h)}(\psi^{(\leq h)})=\sum_{n=1}^{\infty}\int d\bm x_1\dots d\bm x_{2n}\left(\prod_{j=1}^{n}\psi^{(\leq h)+}_{\bm x_{2j-1}}\psi^{(\leq h)-}_{\bm x_{2j}}\right) W_{2n}^{(h)}(\bm x_1,\dots, \bm x_{2n}),
\end{equation}
where the integrals have to be interpreted as
$$\int d\bm x=\int_{[0,\beta)}dx_0\sum_{x\in\Lambda}.$$
From now on, each integral has to be interpreted in this way.
\begin{rem}
\label{remark_comparison_norms_diagonal_off_diagonal_DBC}
From Corollary (\ref{corollary_norms_propagators_DBC}) we see that in the $||\cdot||_\infty$ norm there is no difference between the translation-invariant propagators $g^{(h)}_{P,\omega}$ and the non-translation-invariant ones $g^{(h)}_{R,\omega}$: both satisfy the same {\it dimensional bound} as the translation-invariant propagator $||g^{(h)}_{\omega}||_\infty$. By comparing the $||\cdot||_1$ norms, instead, we see that $||g^{(h)}_{R,\omega}||_1$ gains a factor $\gamma^{h_L-h}$ with respect to $||g^{(h)}_{P,\omega}||_1$, which scales as the translation-invariant $||g^{(h)}_\omega||_1$.\\
Looking at the measure (\ref{measure_quasi_particles_real_space_DBC}), associated with a propagator labeled by two indices, and using the {\it addition principle} (\ref{addition_principle}), we see that, in constructing the trees, we have to assign to each half-line a further label $\sigma\in\{P,R\}$.
\end{rem}
\paragraph{Non-renormalized expansion}
Taking into account what we have pointed out in Remark (\ref{remark_comparison_norms_diagonal_off_diagonal_DBC}), it is straightforward to check that Theorem (\ref{theorem_bound_of_kernels}) also holds in the framework described by the Hamiltonian $H$, with $\epsilon=\max\{|\nu|,|\lambda|,|\varpi|\}$. Indeed, it is enough to notice that:
\begin{itemize}
\item we can roughly localize the non-local endpoints:
$$ \int dy \left| \pi(x,y)\right|\leq \frac{C_\theta}{1+|x|^\theta}\leq C_\theta, \hspace{5mm} \forall \hspace{3mm} 0<\theta\leq 1, $$
and treat the $\varpi$-type endpoints as $\nu$-type endpoints, with only the associated constant changing ($\varpi$ instead of $\nu$);
\item after Remark (\ref{remark_comparison_norms_diagonal_off_diagonal_DBC}), it is clear that the dominant part of the propagator is, as expected, the {\it translation-invariant one}, which scales, in the sense of the norms $||\cdot||_\infty$ and $||\cdot||_1$, like the translation-invariant propagator on which the dimensional analysis of Section (\ref{section_multiscale_analysis}) is based. So, in order to get the dimensional estimate for the {\it non-renormalized kernels} as in Theorem (\ref{theorem_bound_of_kernels}), we can roughly treat each propagator as a $g^{(h)}_{P,\omega}$ one. This choice amounts to estimating only the dominant part (the part which survives when $\beta, L\to \infty$). Of course, since we are interested in the finite-size corrections to the specific free energy and to the Schwinger functions, the off-diagonal propagators will have to be treated more carefully in the {\it renormalization procedure}, where we will be forced to take into account the $\bm x$-dependence of the propagators.
\end{itemize}
After these two simplifications, we can retrace the proof of Theorem (\ref{theorem_bound_of_kernels}), after having redefined $\epsilon=\max\{|\nu|,|\lambda|,|\varpi|\}$, and get exactly the same bounds. Consequently, we have exactly the same {\it a priori} classification of the clusters into {\it marginal}, {\it relevant} and {\it irrelevant} ones. So again, in order to express the {\it specific free energy} and the {\it Schwinger functions} as convergent series, we have to expand in a different way the kernels with two and four external legs, properly identifying the source of trouble. We stress that we cannot use as a black box the machinery we set up in the previous chapter (\ref{chapter_fermions_PBC}), because the localization procedure is crucially based on the translation invariance of the system.
\subsection{Properties of the kernels}
First of all, let us stress that we can rewrite the non-renormalized effective potential (\ref{non_renormalized_effective_potential_PBD}) using (\ref{quasi_particles_decomposition_scale_h_DBC}):
\begin{equation}
\begin{split}
\mathcal V^{(h)}(\psi^{(\leq h)})=\sum_{n=1}^\infty\sum_{\bm \omega}\sum_{\bm \sigma}\int d\bm x_1\dots d\bm x_{2n}\cdot \\ \cdot\left(\prod_{j=1}^n\psi^{(\leq h)+}_{\omega_{2j-1},\sigma_{2j-1},\bm x_{2j-1}}\psi^{(\leq h)-}_{\omega_{2j},\sigma_{2j},\bm x_{2j}}e^{-ip_F(\omega_{2j-1}x_{2j-1}-\omega_{2j}x_{2j})}\right) W^{(h)}_{2n}(\bm x_1,\dots,\bm x_{2n}),
\end{split}
\label{effective_potential_scale_h_DBC}
\end{equation}
where $\bm \omega=\left\{\omega_1,\dots,\omega_{2n}\right\}$, $\bm \sigma=\left\{\sigma_1,\dots,\sigma_{2n}\right\}$.
\begin{rem}\label{remark_kernels_independent_of_quasi_particles} By construction, the kernels are independent of the {\it external configuration of $\bm \omega$ and $\bm \sigma$}.
The only thing that could break this structure is the renormalization and localization procedure, so we will take care to prove that this is in fact not the case.
\end{rem}
\paragraph{Boundary conditions not invariant under RG integration}
\begin{prop}
\label{proposition_kernels_not_diagonal_in_k_DBC}
The Dirichlet boundary conditions are not invariant under a single {\it renormalization group step}, meaning that $W^{(h)}_2(\bm x,\bm y)$ in (\ref{effective_potential_scale_h_DBC}) {\it is not diagonal in the sine Fourier basis}, {\it i.e.}:
\begin{equation}
W^{(h)}_2(\bm x,\bm y)=\left[\frac{2}{\beta(L+1)}\right]^2\sum_{\bm k_1,\bm k_2\in\mathcal D^d_{\Lambda,\beta}}e^{i(k_{1_0}x_0-k_{2_0}y_0)}\sin (k_1x) \sin (k_2y)\hat W_2^{(h)}(\bm k_1, \bm k_2)\delta_{k_{1_0},k_{2_0}}
\end{equation}
where $\hat W_2^{(h)}(\bm k_1,\bm k_2)\neq 0$ for some $\bm k_1\neq \bm k_2$, and we used the notation $\bm k_1=(k_1,k_{1_0})$ and $\bm k_2=(k_2,k_{2_0})$.
\end{prop}
Of course, it is enough to exhibit a counter-example: the first-order {\it non-local tadpole} (see Figure (\ref{figure_tadpoles}), the graph on the right) already breaks the diagonal form. We construct the counter-example in full detail in Appendix (\ref{appendix_non_local_tadpole}).
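Independently of the tadpole computation, the statement ``not diagonal in the sine basis'' can be illustrated on a toy kernel. In the sketch below, both the factorized form $W(x,y)=h(x)h(y)$ and the profile $h$ are hypothetical, chosen only so that $W$ is not translation invariant:

```python
import math

# Toy non-translation-invariant kernel W(x, y) = h(x) h(y); both the
# factorized form and the profile h are illustrative assumptions.
L = 8

def k(n):
    """Sine-basis momenta compatible with Dirichlet conditions, n = 1..L."""
    return math.pi * n / (L + 1)

def h(x):
    return math.exp(-x)  # anything that is not a single sine mode

W = [[h(x) * h(y) for y in range(1, L + 1)] for x in range(1, L + 1)]

def W_hat(n1, n2):
    """Matrix element of W between two sine modes."""
    return (2.0 / (L + 1))**2 * sum(
        math.sin(k(n1) * x) * math.sin(k(n2) * y) * W[x - 1][y - 1]
        for x in range(1, L + 1) for y in range(1, L + 1))

# Off-diagonal matrix elements survive: W is not diagonal in the sine basis.
assert abs(W_hat(1, 2)) > 1e-6
```

A translation-invariant kernel, by contrast, would only couple each sine mode to itself (up to the reflections discussed above), which is why the boundary is what generates the off-diagonal terms.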
\begin{figure}
\centering
\begin{tikzpicture}
[thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\fill (1,1) circle (0.06);
\fill (1,2) circle (0.06);
\fill (6,1) circle (0.06);
\fill (8,1) circle (0.06);
\node at (1,0.8) {\bf x};
\node at (6,0.8) {\bf x};
\node at (8,0.8) {\bf y};
\draw [postaction={decorate}] (0,1) -- ++(1,0);
\draw [postaction={decorate}] (1,1) -- ++(1,0);
\draw [-, decorate, decoration={snake}] (1,1) -- ++(0,1);
\draw [->] (1,2) to [out=0, in=0, looseness=1] (1,3);
\draw (1,3) to [out=180, in=180, looseness=1] (1,2);
\draw [postaction={decorate}] (5,1) -- ++ (1,0);
\draw [-,decorate,decoration={snake}] (6,1) -- ++ (2,0);
\draw [postaction={decorate}] (8,1) -- ++ (1,0);
\draw [postaction={decorate}] (6,1) to [out=60, in=120, looseness=1.5] (8,1);
\end{tikzpicture}
\caption{Two first order Feynman diagrams: the local tadpole on the left, and the non-local tadpole on the right.}
\label{figure_tadpoles}
\end{figure}
\begin{rem}
\label{remark_scale_decomposition_main_difference_DBC}
This deep difference with respect to the translation-invariant theory we studied in Chapter (\ref{chapter_fermions_PBC}) underlines a technical problem. We know that there exists a Fourier basis in which the free propagator can be written in diagonal form (\ref{free_propagator_DBC}). This allows us to define the single-scale propagators in a very natural way, by rewriting the two-dimensional space $\mathcal D^d_{\Lambda,\beta}$ as a union of annuli of radii $\gamma^{h}$ and $\gamma^{h-1}$, exactly in the same conceptual way we followed in the translation-invariant case (\ref{cut_off_chi_definition}).\\
The novelty comes when we try to dress the theory: the previous Proposition (\ref{proposition_kernels_not_diagonal_in_k_DBC}) tells us that we cannot trivially extend this localization procedure to the case we are dealing with in this chapter:
\begin{itemize}
\item in the translation invariant case, the $\delta(\cdot)$ function in the definition (\ref{effective_potential_scale_h_recursive}) guarantees that the entering and exiting fields $\{\hat \psi^{(\leq h)\pm}_{\bm k}\}$ carry the same momentum and so, in particular, live at the same scale $h$. The renormalization process preserves this structure, so in the dressing procedure (see Subsection (\ref{subsection_anomalous_integration_PBC})) we move the first-order localized term from the interaction to the Grassmann integration in a trivial way, because it has the same property as the covariance and there is no ambiguity about the momentum and the scale splitting;
\item in the case we are treating in this chapter, on the one hand the free propagator, hence the covariance of the Gaussian Grassmann integration, is diagonal in $\bm k$, and consequently we can rewrite it via a trivial {\it scale decomposition}, as in fact we have done; on the other hand, the quadratic term we would like (by analogy with the already known case) to move from the interaction to the measure is not diagonal in $\bm k$ space, so there is no well-defined notion of scale comparable with the one we used for the propagator.
\end{itemize}
\end{rem}
\section{Renormalization Group}
\label{section_renormalization_group_DBC}
The idea we follow is the same as in the previous chapter (\ref{chapter_fermions_PBC}): so far we have expanded the quantities we are interested in using a trivial cluster expansion, which does not conclude the analysis, because of the divergences that arise when we sum, over all the possible trees, the {\it marginal} and {\it relevant} contributions, corresponding respectively to the {\it quartic} and {\it quadratic} diagrams in the expansion. The trick will be to slightly change our point of view on this expansion.\\
We will use the results we obtained as a black box for the {\it dominant theory}, and some new ideas will be introduced in order to deal with the technical problems arising from the presence of the boundary. The strategy we will follow is this:
\begin{itemize}
\item in order to use the dimensional bounds we showed in Corollary (\ref{corollary_norms_propagators_DBC}), we rewrite each propagator as the combination of four quasi-particle propagators as in (\ref{gd_sum_of_quasi_particle_propagators}). This means that, in the Gallavotti-Nicol\`o trees, the {\it field labels} $f$ are associated with $\left\{\bm x(f), \epsilon(f),\omega(f), \sigma(f)\right\}$, where $\bm x, \epsilon$ and $\omega$ are the same labels we introduced in the previous chapter, while $\sigma(f)\in\{P,R\}$;
\item we will rewrite the quartic terms as the sum of the {\it bulk quartic terms} ({\it i.e.} the translation-invariant ones we introduced in the previous chapter (\ref{chapter_fermions_PBC})), consisting of the clusters containing only $P$-type propagators, integrated over a properly extended domain in order to get the right quantities, and a {\it remainder}, consisting of clusters containing at least either one $R$-labeled propagator or a $\varpi$-type endpoint. \\
We will show that the presence of the $R$-labeled propagators makes the quartic clusters irrelevant, so we do not need to renormalize them. Hence the study of the running coupling constant $\lambda_h$ associated with the quartic term will be exactly the same as for the translation-invariant part;
\item we apply the same idea to the quadratic term, but we discover a more complicated situation:
\begin{itemize}
\item by construction, the {\it bulk quadratic term} will be treated as in the previous chapter: we will split it into an {\it order zero localized term}, which will be compensated by a properly chosen constant counterterm $\nu$, and an {\it order one localized term}, with which we will dress the propagator,
\item by a more sophisticated dimensional analysis, we will show that the remainder part of the quadratic terms is {\it marginal} (not {\it relevant}) so, differently from the quartic term case, it is still necessary to perform an {\it order zero localization}. At this point the boundary shows its effects: since the kernels are not translation invariant (Proposition (\ref{proposition_kernels_not_diagonal_in_k_DBC})), the localization process gives rise to a {\it running coupling function} instead of a running coupling constant. The role played by the counterterm $ \mathcal N$ is to compensate, step by step, these {\it marginal non-local} terms.
\end{itemize}
\end{itemize}
We want to look at the {\it localization operator} as the {\it composition of two localization operators}: one extracting the bulk contribution, the other extracting the dominant contributions to the Taylor expansion, as we did in the previous chapter. In particular:
\begin{itemize}
\item we want to keep as reference a theory with Dirichlet boundary conditions, so we want to extract from the quadratic terms the contribution that can be written as
\begin{equation*}
W^{d(h)}(\bm x,\bm y)=\frac{2}{\beta(L+1)}\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta}} e^{-ik_0(x_0-y_0)} \sin (kx) \sin(ky)\hat W^{d(h)}(\bm k),
\end{equation*}
for some $\hat W^{d(h)}$ we will define in the next section.
\item since, as will be clear {\it a posteriori}, the boundary corrections to the quartic term are irrelevant, we want to extract from the quartic kernel the contribution
$$\bar W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4)=\bar W^{(h)}_4(\bm x_1-\bm x_4, \bm x_2-\bm x_4,\bm x_3- \bm x_4),$$
which is $2(L+1)$-periodic in the space direction of all the variables.
\end{itemize}
\subsection{``Preliminary'' localization: $\mathcal L_{\mathcal B}$}
Let us first define the localization operator $\mathcal L_{\mathcal B}$ extracting the bulk terms and the renormalization operator $\mathcal R_{\mathcal B}$, where $\mathcal B$ stands for {\it bulk}, and then explain how they operate on the trees.
\begin{itemize}
\item
if $2n=2$
\begin{eqnarray}
\mathcal L_{\mathcal B} W^{(h)}_2(\bm x,\bm y):= &W_2^{d(h)}(\bm x,\bm y), \label{localization_L_D_definition}\\
\mathcal R_{\mathcal B}W^{(h)}_2(\bm x,\bm y) :=&\mathcal W_2^{(h)}(\bm x,\bm y)\label{remainder_R_D_definition},
\end{eqnarray}
where $W_2^{d(h)}(\bm x, \bm y)$ can be written as
\begin{equation}
W_2^{d(h)}(\bm x,\bm y)=\frac{2}{\beta(L+1)}\sum_{\bm k\in\mathcal D_{\Lambda,\beta}^d}e^{-ik_0(x_0-y_0)}\sin(kx) \sin(ky) \hat W^{d(h)}_2(\bm k),
\end{equation}
for a suitable $\hat W^{d(h)}_2(\cdot)$ that we will explicitly build up in the next paragraph, while $$\mathcal R_{\mathcal B}W_2^{(h)}=\mathcal W^{(h)}_2(\bm x,\bm y)=W^{(h)}_2(\bm x,\bm y)-W^{d(h)}_2(\bm x,\bm y)$$ contains, in its cluster representation, at least one {\it non-translation-invariant} graph element, {\it i.e.} either a remainder propagator $g^{(k)}_R(\bm x,\bm y)$, $k\geq h$, or a $\varpi$-type endpoint.
\item if $2n=4$,
\begin{eqnarray}
\mathcal L_{\mathcal{B}}W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3, \bm x_4):=&\bar W^{(h)}_4(\bm x_1-\bm x_4,\bm x_2-\bm x_4,\bm x_3-\bm x_4),\label{localization_L_T_definition}\\
\mathcal R_{\mathcal{B}}W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3, \bm x_4):=&\mathcal W_4^{(h)}(\bm x_1, \bm x_2,\bm x_3, \bm x_4),\label{remainder_R_T_definition}
\end{eqnarray}
while $\mathcal R_{\mathcal B}W_4^{(h)}=\mathcal W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4)=W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4)-\bar W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4)$ contains, in its cluster representation, at least one {\it non-translation-invariant} graph element, {\it i.e.} either a remainder propagator $g^{(k)}_R(\bm x,\bm y)$, $k\geq h$, or a $\varpi$-type endpoint.
\end{itemize}
\paragraph{Rigorous definition of $\mathcal L_{\mathcal B}$}
Let us recall that
$$g^{(h)}(\bm x,\bm y)=g^{(h)}_P(\bm x-\bm y)+g^{(h)}_R(\bm x, \bm y),$$
where $g_P^{(h)}(\bm x-\bm y)$ is $2(L+1)$-periodic in the {\it real-space direction}. Besides, we stress that
\begin{equation}
\begin{split}
g_P^{(h)}((x,x_0),(y,y_0))=-g_R^{(h)}((-x,x_0),(y,y_0))=\\
=-g_R^{(h)}((x,x_0),(-y,y_0))=g_P^{(h)}((-x,x_0),(-y,y_0)).
\end{split}
\label{g_P_g_R_symmetry}
\end{equation}
\subparagraph{Quadratic terms}
Let us first write down a complete decomposition of $W_2^{(h)}$, that we are going to comment term by term in the following:
\begin{equation}
\label{expansion_of_quadratic_terms}
\begin{split}
W_2^{(h)}(\bm x,\bm y)= W_2^{diff(h)}(\bm x,\bm y)+\left(W_2^{(h)}(\bm x,\bm y)-W_2^{diff(h)}(\bm x,\bm y)\right)=\\
=\bar W_2^{(h)}(\bm x-\bm y)+\left(W_2^{(h)}(\bm x,\bm y)-W_2^{diff(h)}(\bm x,\bm y)\right)+\left(W_2^{diff(h)}(\bm x,\bm y)-\bar W_2^{(h)}(\bm x-\bm y)\right)=\\
=W_2^{d(h)}(\bm x,\bm y)+\left(W_2^{(h)}(\bm x,\bm y)-W_2^{diff(h)}(\bm x,\bm y)\right)+\\+
\left(W_2^{diff(h)}(\bm x,\bm y)-\bar W_2^{(h)}(\bm x-\bm y)\right)+ \bar W_2^{(h)}(x+y, x_0-y_0),
\end{split}
\end{equation}
where
\begin{itemize}
\item $W^{diff(h)}_2(\bm x,\bm y)$ is defined as the kernel associated to {\it the sum of all those trees such that there are no $\varpi$-type endpoints and $\sigma(f)=P$ for each $f\in\bigcup_{v\in V_f(\tau)} I_v$}. This implies, by construction, that $$\left(W_2^{(h)}(\bm x,\bm y)-W_2^{diff(h)}(\bm x,\bm y)\right)$$ is associated with the trees containing {\it at least either two field labels with $\sigma(f)=R$ or a $\varpi$-type endpoint}; using Lemma (\ref{lemma_gram_hadamard_scalar_product_off_diagonal_propagators}), it can be rewritten by means of the determinant expansion we introduced in order to overcome the combinatorial problem. We anticipate, and it will be discussed later, that it may happen that no remainder propagator is associated with any $\ell\in T$, while the remainder propagators belong only to the matrix $G^{h_v,T_v}$.\\
Recall that $g^{(h)}_P$ is $2(L+1)$-periodic in the space direction, while the integrals over the {\it inner points} of the cluster are performed on $\Lambda$. So, in order to get a translation invariant $2(L+1)$-periodic function, we have to extend in a proper way the integration domain of each inner point $\bm z_i$:
\begin{equation*}
\sum_{z_i\in \Lambda}\to \sum_{z_i=-L-1}^L.
\end{equation*}
\item $\bar W^{(h)}_2(\bm x-\bm y)$ is obtained starting from $W^{diff(h)}_2$ in the following way:
\begin{enumerate}
\item let $\{\bm x, \bm z_1,\dots, \bm z_n, \bm y\}$ be the {\it space-time} variables associated with the endpoints of the tree related to $W^{diff(h)}$,
\item keeping fixed the endpoints associated with $\bm x$ and $ \bm y$, for each possible unordered ({\it i.e.} the order of the elements does not play any role) $k$-tuple, $1\leq k \leq n$, of endpoints we perform the following operation on the position labels:
$$\left\{(z_{i_1},z_{{i_1}_0});\dots; (z_{i_k},z_{{i_k}_0})\right\}\to \left\{(-z_{i_1},z_{{i_1}_0});\dots; (-z_{i_k},z_{{i_k}_0})\right\}, \hspace{3mm} \forall \hspace{3mm} 1\leq k \leq n,$$
\item if we add to $W^{diff(h)}_2$ all the $(\bm x,\bm y)$-depending kernels associated with the trees obtained in the previous point, we obtain $\bar W^{(h)}_2(\bm x,\bm y)=\bar W^{(h)}_2(\bm x-\bm y)$ that, by construction, is a $2(L+1)$-periodic function in the space direction, depending on the difference of its arguments.
\end{enumerate}
By construction, and thanks to the symmetry (\ref{g_P_g_R_symmetry}), $\bar W^{(h)}_2(\bm x-\bm y)-W^{diff(h)}_2(\bm x,\bm y)$ contains at least one remainder propagator $g_R(\bm z_i,\bm z_j)$.
\item $\bar W_2^{(h)}(x+y, x_0-y_0)$ is obtained simply by changing $(y, y_0)\to (-y, y_0)$ in all the trees involved in the previous step, so $\bar W_2^{(h)}(x+y, x_0-y_0)$ also contains at least one remainder propagator.
\end{itemize}
So, by a straightforward calculation analogous to what we showed in the proof of Lemma (\ref{lemma_reflection_trick}), we obtain that
\begin{equation}
\begin{split}
W^{d(h)}_2(\bm x,\bm y):=&\bar W^{(h)}_2(\bm x- \bm y)- \bar W^{(h)}_2(x+y; x_0-y_0)=\\=&\frac{2}{\beta(L+1)}\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta}} e^{-ik_0(x_0-y_0)} \sin(kx) \sin(k y) \hat W^{d(h)}(\bm k),
\end{split}
\end{equation}
thanks to the same argument we used in Lemma (\ref{lemma_reflection_trick}).
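The diagonal form can be traced back to an elementary product-to-sum identity: for any $k$,
\begin{equation*}
\cos\left(k(x-y)\right)-\cos\left(k(x+y)\right)=2\sin(kx)\sin(ky),
\end{equation*}
so the difference between a translation invariant kernel and its reflected counterpart, once expanded in Fourier cosines, reassembles precisely into the Dirichlet basis $\sin(kx)\sin(ky)$ appearing above.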
\paragraph{Quartic term}
Since the manipulations of the tree are basically the same as in the quadratic case, let us underline only the differences:
\begin{itemize}
\item now there are four external points to be kept fixed: $(\bm x_1,\bm x_2, \bm x_3, \bm x_4)$,
\item we can analogously define $W^{diff(h)}_4$ as the sum of all the trees such that there are no $\varpi$-type endpoints and such that $\sigma(f)=P$ for all the field labels $f\in\bigcup_{v\in V_f(\tau)} I_v$.
\item we can define $\bar W^{(h)}_4$ following the same procedure over the endpoints as before, so that
\begin{equation}
\label{explicit_expansion_quartic_terms}
\begin{split}
W^{(h)}_4(\bm x_1,\bm x_2, \bm x_3, \bm x_4)= \bar W^{(h)}_4(\bm x_1-\bm x_4, \bm x_2-\bm x_4, \bm x_3-\bm x_4)+\\+ \left( W^{diff(h)}_4(\bm x_1,\bm x_2, \bm x_3, \bm x_4)-\bar W^{(h)}_4(\bm x_1-\bm x_4, \bm x_2-\bm x_4, \bm x_3-\bm x_4)\right)+\\+ \left(W^{(h)}_4(\bm x_1,\bm x_2, \bm x_3, \bm x_4)- W^{diff(h)}_4(\bm x_1,\bm x_2, \bm x_3, \bm x_4)\right).
\end{split}
\end{equation}
\end{itemize}
\begin{rem}
\label{remark_R_B_decomposition_R1_R2}
We introduce a notation that will make the proof of the main theorem more readable:
\begin{equation}
\mathcal R_\mathcal B=\mathcal R^{(1)}_\mathcal B+\mathcal R^{(2)}_\mathcal B,
\end{equation}
acting as follows:
\begin{eqnarray}
\begin{aligned}
\mathcal{R}^{(1)}_\mathcal B W_2^{(h)}(\bm x,\bm y)&=W_2^{(h)}(\bm x,\bm y)-W_2^{diff(h)}(\bm x,\bm y),\\
\mathcal{R}^{(2)}_\mathcal B W_2^{(h)}(\bm x,\bm y)&= W_2^{diff(h)}(\bm x,\bm y)-\bar W_2^{(h)}(\bm x-\bm y)+\bar W^{(h)}_2(x+y,x_0-y_0),\\
\mathcal{R}^{(1)}_\mathcal B W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3, \bm x_4)&=W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3,\bm x_4)-W_4^{diff(h)}(\bm x_1,\bm x_2,\bm x_3,\bm x_4),\\
\mathcal{R}^{(2)}_\mathcal B W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3, \bm x_4)&= W_4^{diff(h)}(\bm x_1,\bm x_2,\bm x_3,\bm x_4)-\bar W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3,\bm x_4).
\end{aligned}
\end{eqnarray}
Furthermore, we underline that
\begin{enumerate}
\item the operator $\mathcal R^{(1)}_\mathcal B$ simply {\bf selects} the trees such that at least
\begin{itemize}
\item either two $f\in I_{v_0}$ are associated with $\sigma (f)=R$,
\item or an endpoint is a $\varpi-$type endpoint,
\end{itemize}
but it does not modify anything of the tree,
\item the operator $\mathcal R^{(2)}_\mathcal B$ operates on the trees that contain neither field labels $f\in I_{v_0}$ with $\sigma(f)=R$ nor $\varpi$-type endpoints, and {\bf modifies} the coordinate labels of the tree.
\end{enumerate}
\end{rem}
\subsection{Definition of localization}
\paragraph{$\mathcal L_\mathcal T$ and $\tilde{\mathcal L}_\mathcal T$ localization operators}
Let us recall that
\begin{equation*}
\begin{split}
\mathcal L_{\mathcal B}W^{(h)}_2(\bm x,\bm y)&=W^{d(h)}_2(\bm x,\bm y),\\
\mathcal R_{\mathcal B}W^{(h)}_2(\bm x,\bm y)&=\mathcal W^{(h)}_2(\bm x,\bm y),\\
\mathcal L_{\mathcal B}W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4)&=\bar W^{(h)}_4(\bm x_1-\bm x_4,\bm x_2-\bm x_4,\bm x_3-\bm x_4),\\
\mathcal R_{\mathcal B}W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4)&=\mathcal W^{(h)}_4(\bm x_1,\bm x_2,\bm x_3,\bm x_4),
\end{split}
\end{equation*}
where we recall that $W^{d(h)}_2$ is {\it diagonal in the Dirichlet basis} and $\bar W^{(h)}_4$ is translation invariant and $2(L+1)\times \beta$-periodic. Plugging these decompositions into the expression of the effective potential, we obtain
\begin{equation}
\begin{split}
\mathcal V^{(h)}(\psi^{(\leq h)})=\int d\bm x d\bm y\psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}W^{d(h)}_2(\bm x,\bm y)+
\int d\bm x d\bm y\psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}\mathcal W^{(h)}_2(\bm x,\bm y)+\\
+\int d\bm x_1\dots d\bm x_4\psi^{(\leq h)+}_{\bm x_1}\psi^{(\leq h)+}_{\bm x_2}\psi^{(\leq h)-}_{\bm x_3}\psi^{(\leq h)-}_{\bm x_4} \bar W_4^{(h)}(\bm x_1-\bm x_4,\bm x_2-\bm x_4,\bm x_3-\bm x_4)+\\
+\int d\bm x_1\dots \bm x_4\psi^{(\leq h)+}_{\bm x_1}\psi^{(\leq h)+}_{\bm x_2}\psi^{(\leq h)-}_{\bm x_3}\psi^{(\leq h)-}_{\bm x_4}\mathcal W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3,\bm x_4)+\\
+\sum_{n\geq 3}\int d\bm x_1\dots d\bm x_{2n}\left[\prod_{j=1}^n\psi^{(\leq h)+}_{\bm x_{2j-1}}\psi^{(\leq h)-}_{\bm x_{2j}}\right] W^{(h)}_{2n}(\bm x_1,\dots,\bm x_{2n}).
\end{split}
\label{effective_potential_decomposed_2-4_kerlels_DBC}
\end{equation}
Let us recall that,
\begin{equation}
\mathcal L_{\mathcal B}W^{(h)}_{2}(\bm x,\bm y)=W^{d(h)}_2(\bm x,\bm y)=\frac{2}{\beta(L+1)}\sum_{\bm k\in \mathcal{D}^d_{\Lambda,\beta}}e^{-ik_0(x_0-y_0)}\sin (kx)\sin (ky) \hat W^{d(h)}_2(\bm k),
\end{equation}
which implies that the quadratic term can be rewritten in momentum representation as
\begin{equation}
\int d\bm x d\bm y\psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}W^{d(h)}_2(\bm x,\bm y)=\frac{2}{\beta(L+1)}\sum_{\bm k\in\mathcal{D}^d_{\Lambda,\beta}}\hat \psi^{(\leq h)+}_{\bm k}\hat \psi^{(\leq h)-}_{\bm k}\hat W^{d(h)}_{2}(\bm k).
\label{2el_W^d_in_diagonal_form}
\end{equation}
It is worth noting that, by writing this quadratic term in this diagonal form, we overcome the {\it quasi-particles definition} issue we pointed out in Remark (\ref{remark_quasi_particles_issue_DBC}): indeed we know that in the {\it original dual space} $\mathcal D^d_{\Lambda,\beta}$ there is only one Fermi point $\bm p_F=(p_F,0)$, so we can perform the change of variable $\bm k=\bm k'+\bm p_F$ and rewrite the latter expression as
\begin{equation}
\begin{split}
\frac{2}{\beta(L+1)}\sum_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}}\hat \psi^{(\leq h)+}_{\bm k'+\bm p_F}\hat \psi^{(\leq h)-}_{\bm k'+\bm p_F}\hat W^{d(h)}_{2}(\bm k'+\bm p_F).
\end{split}
\label{2el_W^d_in_diagonal_quasiparticles_form}
\end{equation}
Of course, this expression suggests a {\it natural way to localize} directly in {\it momentum space}, by Taylor expanding the kernel around the Fermi point $\bm p_F$, analogously to what we have done in (\ref{localization_2el_infinite_volume_limit}).\\
As in the previous Chapter (\ref{chapter_fermions_PBC}), for the sake of simplicity we give here the definition of localization at infinite volume, and we refer to Appendix (\ref{appendix_finite_volume_loc_DBC}) for the rigorous definitions that take into account the finite volume corrections.
\begin{itemize}
\item {\bf Case $2n=2$, kernel $W^{d(h)}_2=\mathcal L_\mathcal BW^{(h)}_2$} As we already commented, formulae (\ref{2el_W^d_in_diagonal_form}) and (\ref{2el_W^d_in_diagonal_quasiparticles_form}) allow us to localize proceeding by analogy with the translation invariant case, defining a localization procedure directly in the dual space:
\begin{equation}
\begin{split}
\mathcal L_\mathcal T \left[\mathcal L_\mathcal B\left(\int d\bm x d\bm y \psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}W^{(h)}_2(\bm x,\bm y)\right)\right]=\\
\\\mathcal L_\mathcal T \left(\frac{2}{\beta(L+1)}\sum_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}} \hat \psi^{(\leq h)+}_{\bm k'+p_F}\hat \psi^{(\leq h)-}_{\bm k'+p_F}\hat W^{d(h)}_2(\bm k'+\bm p_F)\right)=\\
=\frac{2}{\beta(L+1)}\sum_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}} \hat \psi^{(\leq h)+}_{\bm k'+p_F}\hat \psi^{(\leq h)-}_{\bm k'+p_F}\left(\hat W^{d(h)}_2(\bm p_F)+k_0\partial_{k_0} \hat W^{d(h)}_2(\bm p_F)+k'\partial_k \hat W^{d(h)}_2(\bm p_F)\right)
\end{split}
\end{equation}
where we stress once more that this localization definition is independent of the quasi-particles.
\item {\bf Case $2n=2$, kernel $\mathcal W^{(h)}_2=\mathcal R_\mathcal B W^{(h)}_2$} Because of the non-diagonality of the kernel $\mathcal W^{(h)}_2$, there is no advantage in defining a localization procedure in $\bm k$-space, so we work directly in real space-time:
\begin{equation}
\begin{split}
\tilde{\mathcal L}_\mathcal T \mathcal R_\mathcal B\left(\int d\bm x d\bm y \psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}W^{(h)}_2(\bm x,\bm y)\right)=\\
=\tilde{\mathcal L}_\mathcal T\int d\bm x d\bm y \psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}\mathcal W^{(h)}_2(\bm x,\bm y)=\\
=\left.\int d\bm x d\bm y \psi^{(\leq h)+}_{\bm x}\psi^{(\leq h)-}_{\bm y}\right|_{x_0=y_0} \mathcal W^{(h)}_2(\bm x,\bm y).
\end{split}
\end{equation}
We use the tilde to stress the fact that $\tilde{\mathcal L}_\mathcal T$ operates only on the time variables.
\item {\bf Case $2n=4$, kernel $\bar W^{(h)}_4=\mathcal L_\mathcal B W^{(h)}_4$}
In this case, as in the previous chapter, we define
\begin{equation}
\begin{split}
\mathcal L_\mathcal T\left( \mathcal L_\mathcal B \int d\bm x_1\dots d\bm x_4 W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3, \bm x_4) e^{-ip_F(\omega_1 x_1 +\omega_2 x_2 - \omega_3 x_3 - \omega_4 x_4)}\right. \cdot \\\left. \cdot
\psi^{(\leq h)+}_{\sigma_1,\omega_1,\bm x_1}\psi^{(\leq h)+}_{\sigma_2,\omega_2,\bm x_2}\psi^{(\leq h)-}_{\sigma_3,\omega_3,\bm x_3}\psi^{(\leq h)-}_{\sigma_4,\omega_4,\bm x_4}\right)=\\
=\int d\bm x_1\dots d\bm x_4 \mathcal L_\mathcal B W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3, \bm x_4) e^{-ip_F(\omega_1 x_1 +\omega_2 x_2 - \omega_3 x_3 - \omega_4 x_4)} \cdot \\ \cdot
\psi^{(\leq h)+}_{\sigma_1,\omega_1,\bm x_4}\psi^{(\leq h)+}_{\sigma_2,\omega_2,\bm x_4}\psi^{(\leq h)-}_{\sigma_3,\omega_3,\bm x_4}\psi^{(\leq h)-}_{\sigma_4,\omega_4,\bm x_4}.
\end{split}
\end{equation}
\item {\bf Case $2n=4$, kernel $\mathcal W^{(h)}_4=\mathcal R_\mathcal B W^{(h)}_4$}
In this case we do not need to renormalize, since the localization vanishes:
\begin{equation}
\begin{split}
\mathcal L_\mathcal T\left( \mathcal R_\mathcal B \int d\bm x_1\dots d\bm x_4 W_4^{(h)}(\bm x_1,\bm x_2,\bm x_3, \bm x_4) e^{-ip_F(\omega_1 x_1 +\omega_2 x_2 - \omega_3 x_3 - \omega_4 x_4)}\right. \cdot \\ \cdot
\psi^{(\leq h)+}_{\sigma_1,\omega_1,\bm x_1}\psi^{(\leq h)+}_{\sigma_2,\omega_2,\bm x_2}\psi^{(\leq h)-}_{\sigma_3,\omega_3,\bm x_3}\psi^{(\leq h)-}_{\sigma_4,\omega_4,\bm x_4}\right)=0,
\end{split}
\end{equation}
so $\mathcal R_\mathcal T\mathcal R_\mathcal B=\mathcal R_\mathcal B$ when it operates on a quartic term (the same holds for $\tilde{\mathcal L}_\mathcal T$, hence for $\tilde{\mathcal R}_\mathcal T\mathcal R_\mathcal B$).
\item Finally, if $2n\geq 6$
\begin{equation}
\mathcal L_\mathcal T\int d\bm x_1\dots d \bm x_{2n}\left(\prod_{j=1}^n\psi^{(\leq h)+}_{\bm x_{2j-1}}\psi^{(\leq h)-}_{\bm x_{2j}}\right)W_{2n}^{(h)}(\bm x_1,\dots, \bm x_{2n})=0,
\end{equation}
and the same holds for $\tilde {\mathcal L}_\mathcal T$.
\end{itemize}
\paragraph{Composition of $\mathcal L_{\mathcal B}$, $\mathcal L_\mathcal T$ and $\tilde{\mathcal L}_\mathcal T$}
By composing the operators we introduced, we define a linear operator $\mathcal L$, and consequently $\mathcal R=\left(1-\mathcal L\right)$, acting as follows:
\begin{itemize}
\item if $2n=2$:
\begin{eqnarray}
\mathcal L=\mathcal L_\mathcal T\mathcal L_{\mathcal B}+\tilde {\mathcal L}_\mathcal T\mathcal R_{\mathcal B},\\
\mathcal R= \mathcal R_\mathcal T\mathcal L_{\mathcal B}+\tilde {\mathcal R}_\mathcal T\mathcal R_{\mathcal B},
\end{eqnarray}
\item if $2n=4$
\begin{eqnarray}
\mathcal L=\mathcal L_\mathcal T\mathcal L_{\mathcal B},\\
\mathcal R= \mathcal R_{\mathcal T}\mathcal L_{\mathcal B}+\mathcal R_{\mathcal B},
\end{eqnarray}
\item if $2n\geq 6$, $\mathcal R$ acts as the identity.
\end{itemize}
\begin{rem}\label{remark_commutation_renormalization operators}
As we commented in Remark (\ref{remark_R_B_decomposition_R1_R2}), the operator $\mathcal R^{(1)}_\mathcal B$ does not modify the trees, so
$$\tilde{\mathcal R}_\mathcal T\mathcal R^{(1)}_\mathcal B=\mathcal R^{(1)}_\mathcal B\tilde{\mathcal R}_\mathcal T.$$
On the other hand, the operator $\tilde {\mathcal R}_\mathcal T$ also commutes with $\mathcal R^{(2)}_\mathcal B$, since the former acts on the time-component of the coordinates, while the latter acts on the space-component, so
$$\tilde{\mathcal R}_\mathcal T\mathcal R^{(2)}_\mathcal B=\mathcal R^{(2)}_\mathcal B\tilde{\mathcal R}_\mathcal T.$$
\end{rem}
The local part of the effective potential at scale $h$ is
\begin{equation}
\mathcal L\mathcal V^{(h)}(\psi^{(\leq h)})=\gamma^h n_h F_\nu^{(\leq h)}+\gamma^h \left(\pi_h\ast F_{\varpi}^{(\leq h)}\right)+z_h F_{\zeta}^{(\leq h)}+a_h F_{\alpha}^{(\leq h)}+ l_h F_{\lambda}^{(\leq h)},
\label{linearized_effective_potential_DBC}
\end{equation}
where
\begin{eqnarray}
F_\nu^{(\leq h)}=\frac{2}{\beta(L+1)}\sum_{\bm k'\in\mathcal D'^d_{\Lambda,\beta}}\hat \psi^{(\leq h)+}_{\bm k'+\bm p_F}\hat \psi^{(\leq h)-}_{\bm k'+\bm p_F},
\label{linearized_effective_potentia_nu_DBC}\\
\pi_h\ast F_{\varpi}^{(\leq h)}=\int d\bm x\int dy\,\psi^{(\leq h)+}_{\bm x}\left.\psi^{(\leq h)-}_{\bm y}\right|_{y_0=x_0} \pi_h(x,y),
\label{linearized_effective_potentia_tildenu_DBC}\\
F_{\zeta}^{(\leq h)}=\frac{2}{\beta(L+1)}\sum_{\bm k'\in \mathcal D'^d_{\Lambda,\beta}}(-ik_0)\hat \psi^{(\leq h)+}_{\bm k'+\bm p_F}\hat \psi^{(\leq h)-}_{\bm k'+\bm p_F},
\label{linearized_effective_potentia_zeta_DBC}\\
F_{\alpha}^{(\leq h)}=\frac{2}{\beta(L+1)}\sum_{\bm k'\in\mathcal D'^d_{\Lambda,\beta}} v_0k'\hat \psi^{(\leq h)+}_{\bm k'+\bm p_F}\hat \psi^{(\leq h)-}_{\bm k'+\bm p_F},
\label{linearized_effective_potentia_alpha_DBC}\\
F_{\lambda}^{(\leq h)}=\sum_{\bm \omega}\sum_{\bm \sigma}\int_{[0,\beta)} dx_0\sum_{ x\in\Lambda}\psi^{(\leq h)+}_{\sigma_1,\omega_1,\bm x}\psi^{(\leq h)+}_{\sigma_2,\omega_2,\bm x}\psi^{(\leq h)-}_{\sigma_3,\omega_3,\bm x}\psi^{(\leq h)-}_{\sigma_4,\omega_4,\bm x},
\label{linearized_effective_potentia_lambda_DBC}
\end{eqnarray}
where $v_0=\sin p_F$.\\
The constants $(n_h, z_h, a_h, l_h)$ behave as the analogous quantities appearing in (\ref{local_effective_potential_scale_h_PBC}), so we will call them the {\it (non-rescaled) bulk running coupling constants}; the novelty is the function $\pi_h(x, y)$, due to the boundary effects, defined as
\begin{equation}
\gamma^h \pi_h(x,y):=\int_{[0,\beta)} dy_0\mathcal W^{(h)}_2(x,y;x_0-y_0).
\label{definition_tilde_n}
\end{equation}
\begin{rem}
\label{remark_arbitrariness_localization_point_DBC}
We point out once more that we defined the linear operator $\mathcal L$ to rewrite $\mathcal V^{(h)}(\psi^{(h)})=\mathcal L\mathcal V^{(h)}(\psi^{(h)})+\left(1-\mathcal L\right)\mathcal V^{(h)}(\psi^{(h)})$ in such a way that the term $\left(1-\mathcal L\right)\mathcal V^{(h)}(\psi^{(h)})$ is irrelevant. In defining $\pi_h(x,y)$ we have chosen to localize only in the time variable, while we could have chosen a localization combining zeroth order localizations both in space and in time: since, as far as the proof of the theorem is concerned, this is merely an aesthetic choice, we prefer the present definition, which drops the dependence on the quasi-particle labels and gives, scale by scale, a term $\pi_h\ast F_{\varpi}^{(\leq h)}$ having formally the same shape as $\mathcal N$.
\end{rem}
\subsection{Scale h integration and dressed theory}
\label{subsection_anomalous_integration_and_dressed_theory_DBC}
Let us comment on the terms of which the local part consists:
\begin{itemize}
\item $l_h F_\lambda^{(\leq h)}$ reproduces, on scale $h$, the initial two-body interaction with a different interaction potential, which effectively behaves as a {\it bulk potential}, being the same as the one we found in the translation invariant setting;
\item $n_hF_\nu^{(\leq h)}$ reproduces the counterterm operator of the initial $\mathcal V(\psi)$, where the constant value $\nu$ is replaced by $n_h$;
\item the sum of $a_h F_\alpha^{(\leq h)}$ and $z_hF_\zeta^{(\leq h)}$ has the same shape, up to $\mathcal O(k'^2)$ terms, as
\begin{equation*}
\begin{split}
\left(\hat g^{(h)}(\bm k'+\bm p_F)\right)^{-1}=\left(-ik_0+(1-\cos k')\cos p_F+ v_0\sin k'\right) f^{-1}_h(\bm k')=\\
=\left(-ik_0+v_0k'+(1-\cos k')\cos p_F+ v_0(\sin k'-k')\right) f^{-1}_h(\bm k')=:\\
=:\left(-ik_0+ v_0 k'+t_{h}(k')\right) f^{-1}_h(\bm k'),
\end{split}
\end{equation*}
with constants $a_h$ and $z_h$ replacing $1$, and where we called $t_{h}(k')$ the $\mathcal O(k'^2)$ term (even though the subscript $h$ does not play any role at the moment, it is a convenient notation for what we will do in the next steps).
\item the term $\pi_h \ast F_{\varpi}^{(\leq h)}$ deserves some deeper comment. Indeed, its role is the same as that played by the counterterm
$$\varpi \int_{[0,\beta)}dx_0\sum_{ x\in\Lambda}\int_{[0,\beta)}dy_0\sum_{ y\in\Lambda}\psi^{(\leq 0)}_{\bm x}\psi^{(\leq 0)}_{\bm y} \pi(x,y)\delta_{x_0,y_0},$$
in the scale zero effective potential $\mathcal V^{(0)}$. The reader may be surprised by the fact that we localized only in the {\it time variable}, keeping track of a {\it non-local} counterterm: a localization in the real-space variables (analogous to the one we defined in $F^{(\leq h)}_\nu$) would not only give us the right scale gain ({\it i.e.} an irrelevant remainder), but would also produce a {\it local} counterterm. However, one should recall that, in order to get the right scale gains, the {\it real-space localization} has to be performed on the {\it quasi-particle} fields: so a priori, after a real-space localization, since the momentum is no longer preserved, we would get {\it four different} running coupling functions $\{\pi_{\omega,\omega'}\}_{\omega,\omega'\in\{\pm\}}$. Analogously to what we did in Remark (\ref{remark_uniqueness_of_counterterm_PBC}), we should then exploit some symmetry of the kernels to show that these four functions actually originate from the same function: in fact this is not the case, as can be checked at lowest order (the tadpole Feynman graph), and the fact that these functions are different obviously reflects the fact that these kernels are non-local; so, in order to compensate them, it is necessary to introduce a non-local counterterm. We refer to the next chapter for some heuristic discussion about how to deal with these terms.
\end{itemize}
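The claim above that $t_h(k')$ is $\mathcal O(k'^2)$ can be verified by a one-line Taylor expansion:
\begin{equation*}
t_h(k')=(1-\cos k')\cos p_F+v_0\left(\sin k'-k'\right)=\frac{\cos p_F}{2}\,k'^2-\frac{v_0}{6}\,k'^3+\mathcal O(k'^4).
\end{equation*}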
\paragraph{Dressed theory}
So, we can iteratively perform the integral
$$\int P(d\psi^{(\leq h)})e^{-\mathcal V^{(h)}(\psi^{\leq h})},$$
by including, step by step, the terms (\ref{linearized_effective_potentia_zeta_DBC}) and (\ref{linearized_effective_potentia_alpha_DBC}) into the Gaussian Grassmann measure, as follows.\\
Let us introduce a sequence of constants $\{Z_h\}_{h\leq 0}$, with $Z_0=1$, and let us define the function
\begin{equation}
C_h(\bm k')^{-1}=\sum_{j=h_L}^hf_j(\bm k')
\end{equation}
where, as already mentioned, $h_L=-\lfloor \log_\gamma L \rfloor$. Let us imagine that the Grassmann variables $\psi^{(0)},\dots, \psi^{(h+1)}$ have already been integrated, so that we are left with
\begin{equation}
\int P_{Z_h}\left(d\psi^{(\leq h)}\right)e^{-\mathcal V^{(h)}\left(\sqrt{ Z_h} \psi^{(\leq h)}\right)},
\end{equation}
where, up to a constant, the Gaussian Grassmann measure is
\begin{equation}
\begin{split}
P_{Z_h}\left(d\psi^{(\leq h)}\right)=\left(\prod_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}}d\psi^{(\leq h)+}_{\bm k',+}d\psi^{(\leq h)-}_{\bm k',+}\right)\\
\exp\left\{-\frac{2}{\beta(L+1)} \sum_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}}C_h(\bm k')Z_h \left[-ik_0+v_0 k'+t_{h}(k')\right]\psi^{(\leq h)+}_{\bm k',+}\psi^{(\leq h)-}_{\bm k',+}\right\}.
\end{split}
\end{equation}
Now we recall that, in the previous subsection, we rewrote
$$\mathcal V^{(h)}=\mathcal L\mathcal V^{(h)}+\mathcal R \mathcal V^{(h)},$$
with $\mathcal R:=\left( 1-\mathcal L\right)$; up to the linear coefficients, the linear combination $a_hF^{(\leq h)}_\alpha+z_h F^{(\leq h)}_{\zeta}$ has the same structure as the linear part of the covariance the integration is associated with. So, after proper manipulations that are included in the following definitions, we move these two terms into the measure, leaving in the interaction the terms $n_hF^{(\leq h)}_\nu$ and $\pi_h\ast F^{(\leq h)}_\varpi$, besides the (harmless) renormalized part $\mathcal R \mathcal V^{(h)}$.\\
So, if $\mathcal N_h$ is a suitable renormalization constant, we rewrite
\begin{equation}
\int P_{Z_h}(d\psi^{(\leq h)})e^{-\mathcal V^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)}= \frac{1}{\mathcal N_h} \int \tilde P_{Z_{h-1}}(d\psi^{(\leq h)})e^{-\tilde{\mathcal V}^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)},
\label{integral_dressed_theory_leqh_DBC}
\end{equation}
with
\begin{equation}
\begin{split}
\tilde P_{Z_{h-1}}\left(d\psi^{(\leq h)}\right)=\left(\prod_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}}d\psi^{(\leq h)+}_{\bm k'+p_F}d\psi^{(\leq h)-}_{\bm k'+p_F}\right)\\
\exp\left\{-\frac{2}{\beta(L+1)} \sum_{\bm k'\in \mathcal D'^{d}_{\Lambda,\beta}}C_h(\bm k')Z_{h-1}(\bm k') \left[-ik_0+v_0 k'+\vartheta_h(\bm k')\right]\psi^{(\leq h)+}_{\bm k'+p_F}\psi^{(\leq h)-}_{\bm k'+p_F}\right\},
\end{split}
\label{measure_tildePZ_h-1_DBC}
\end{equation}
where:
\begin{equation}
\label{dressing_measure_defn_tildeV_DBC}
\begin{split}
Z_{h-1}(\bm k')&=Z_h\left(1+C_h^{-1}(\bm k')z_h\right),\\
Z_{h-1}&=Z_h(1+z_h),\\
\tilde{\mathcal V}^{(h)}&=\mathcal L\tilde{\mathcal V}^{(h)}+\left(1-\mathcal L\right)\mathcal V^{(h)},\\
\mathcal L\tilde{\mathcal V}^{(h)}&=\gamma^hn_h F^{(\leq h)}_\nu+\gamma^h \pi_h\ast F^{(\leq h)}_\varpi+(a_h-z_h)F^{(\leq h)}_\alpha+l_hF^{(\leq h)}_\lambda.
\end{split}
\end{equation}
and $\vartheta_h(\bm k')$ is iteratively defined as follows
\begin{equation}
\vartheta_h(\bm k')=
\begin{cases}
t_0( k') &\mbox{ if } h=0,\\
\frac{Z_{h+1}}{Z_{h}(\bm k')}\vartheta_{h+1}(\bm k') &\mbox{ if } h<0.
\end{cases}
\end{equation}
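Unrolling the recursion, $\vartheta_h$ can be written in closed form as
\begin{equation*}
\vartheta_h(\bm k')=t_0(k')\prod_{j=h}^{-1}\frac{Z_{j+1}}{Z_{j}(\bm k')},\qquad h<0,
\end{equation*}
which makes apparent that $\vartheta_h$ differs from $t_0$ only through ratios of the renormalization constants.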
Of course, in this way we {\it dressed} the whole infrared theory ({\it i.e.} the linear part of the {\it whole propagator} $g^{(\leq h)}$), so in order to perform the {\it next single scale integration} in the iterative procedure we have to define a single scale measure: using the addition principle (\ref{addition_principle}) and the change of the Gaussian Grassmann integration measure (\ref{change_of_integration_measure_property}), we rewrite the integral (\ref{integral_dressed_theory_leqh_DBC}) as
\begin{equation}
\frac{1}{\mathcal N_h}\int P_{Z_{h-1}}(d\psi^{(\leq h-1)})\int \tilde P_{Z_{h-1}}(d\psi^{(h)})e^{-\tilde{\mathcal V}^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)}
\end{equation}
where, on the one hand, $P_{Z_{h-1}}(d\psi^{(\leq h-1)})$ is given by formula (\ref{measure_tildePZ_h-1_DBC}) with
\begin{itemize}
\item $Z_{h-1}(\bm k')$ replaced by $Z_{h-1}$,
\item $C_h(\bm k')$ replaced by $C_{h-1}(\bm k')$,
\item $\psi^{(\leq h)}$ replaced by $\psi^{(\leq h-1)}$,
\end{itemize}
while on the other hand the {\it single scale dressed measure} $\tilde P_{Z_{h-1}}(d\psi^{(h)})$ is also given by (\ref{measure_quasi_particles_real_space_DBC}) with
\begin{itemize}
\item $Z_{h-1}(\bm k')$ replaced by $Z_{h-1}$,
\item $C_h(\bm k')$ replaced by $\tilde f^{-1}_h(\bm k')$, where
\begin{equation}
\tilde f^{-1}_h(\bm k')=Z_{h-1}\left(\frac{C^{-1}_h(\bm k')}{Z_{h-1}(\bm k')}-\frac{C^{-1}_{h-1}(\bm k')}{Z_{h-1}}\right),
\end{equation}
\item $\psi^{(\leq h)}$ replaced by $\psi^{(h)}$.
\end{itemize}
It is worth remarking that the {\it scaling properties} of $\tilde f_h(\bm k')$ are the same as those of $f_h(\bm k')$, {\it i.e.} $\tilde f_h(\bm k')$ is a compactly supported function, with support of width $O\left(\gamma^h\right)$ and at a distance $O\left(\gamma^h\right)$ from $\bm p_F$.\\
Finally, we rescale the Grassman fields in such a way that (\ref{integral_dressed_theory_leqh_DBC}) can be rewritten as
\begin{equation}
\frac{1}{\mathcal N_h}\int P_{Z_{h-1}}\left(d\psi^{(\leq h-1)}\right)\int \tilde P_{Z_{h-1}}\left(d\psi^{(h)}\right)e^{-\hat{\mathcal V}^{(h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)}
\end{equation}
where $\hat{ \mathcal V}^{(h)}$ is such that its local part is given by
\begin{equation}
\begin{split}
\mathcal L\hat{\mathcal V}^{(h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)=\gamma^h\nu_h F_\nu^{(\leq h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)+\\+\gamma^h\left(\varpi_h\ast F_\varpi^{(\leq h)}\right)\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)+\delta_h F_\alpha^{(\leq h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)+\lambda_h F_\lambda^{(\leq h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)
\end{split}
\label{localized_effective_potential_DBC}
\end{equation}
defining the {\it running coupling constants} and the new {\it running coupling functions}:
\begin{equation}
\label{running_coupling_functions_DBC}
\begin{split}
\nu_h&=\frac{Z_h}{Z_{h-1}}n_h,\\
\delta_h&=\frac{Z_h}{Z_{h-1}}(a_h-z_h),\\
\lambda_h&=\left(\frac{Z_h}{Z_{h-1}}\right)^2l_h,\\
\varpi_h(x,y)&=\frac{Z_h}{Z_{h-1}}\pi_{h}(x,y),
\end{split}
\end{equation}
which we group together by defining
\begin{equation}
\label{vec_v_h(x)}
\vec v_h(x,y)=\left(\nu_h, \delta_h, \lambda_h, \varpi_{h}(x,y)\right).
\end{equation}
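Purely as a numerical illustration of the bookkeeping in (\ref{running_coupling_functions_DBC}) (the inputs $z_h,n_h,a_h,l_h$ below are made-up constants, not the actual output of the beta function), the flow $Z_{h-1}=Z_h(1+z_h)$ and the rescaling of the running coupling constants can be sketched as:

```python
def dress_flow(z, n, a, l, h_min):
    """Toy iteration of Z_{h-1} = Z_h (1 + z_h) together with the rescalings
    nu_h = (Z_h/Z_{h-1}) n_h,  delta_h = (Z_h/Z_{h-1}) (a_h - z_h),
    lambda_h = (Z_h/Z_{h-1})^2 l_h.
    z, n, a, l: dicts scale -> value for h = 0, -1, ..., h_min+1 (made-up inputs)."""
    Z = {0: 1.0}
    rcc = {}  # h -> (nu_h, delta_h, lambda_h)
    for h in range(0, h_min, -1):
        Z[h - 1] = Z[h] * (1 + z[h])
        r = Z[h] / Z[h - 1]
        rcc[h] = (r * n[h], r * (a[h] - z[h]), r ** 2 * l[h])
    return Z, rcc
```

For a constant $z_h=z$ the toy flow gives $Z_h=(1+z)^{-h}$, mimicking the anomalous growth of the wave function renormalization along the scales.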
At this point, we can perform the {\it single scale integration}
\begin{equation}
\int \tilde P_{Z_{h-1}}(d\psi^{(h)})e^{-\hat{\mathcal V}^{(h)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h)}\right)}=e^{-\mathcal{V}^{(h-1)}\left(\sqrt{Z_{h-1}}\psi^{(\leq h-1)}\right)+L\beta {e}_h}
\end{equation}
where of course we have reconstructed the formal situation of (\ref{linearized_effective_potential_DBC}), with $h\to h-1$:
\begin{equation}
\begin{split}
\mathcal L\mathcal V^{(h-1)}(\psi^{(\leq h-1)})=\gamma^{h-1} n_{h-1} F_\nu^{(\leq h-1)}(\psi^{(\leq h-1)})+\gamma^{h-1} \left(\pi_{h-1}\ast F_{\varpi}^{(\leq h-1)}(\psi^{(\leq h-1)})\right)+\\+z_{h-1} F_{\zeta}^{(\leq h-1)}(\psi^{(\leq h-1)})+a_{h-1} F_{\alpha}^{(\leq h-1)}(\psi^{(\leq h-1)})+ l_{h-1} F_{\lambda}^{(\leq h-1)}(\psi^{(\leq h-1)})
\end{split}
\end{equation}
so that we can apply iteratively the same scheme. We remark that, iterating this procedure, we can write $\vec v_h(x,y)$ in terms of $\vec v_{h'}(x,y)$, $h'\geq h+1$:
\begin{equation}
\vec v_h(x,y)=\vec \beta\left(\vec v_{h+1}(x,y),\dots,\vec v_0(x,y); x,y\right)
\end{equation}
where $\vec \beta\left(\vec v_{h+1}(x,y),\dots,\vec v_0(x,y); x,y\right)$ is called the {\it beta function}.\\
In order to set up an iterative integration process, we have to be sure that the {\it dressing procedure} we just defined does not break the property of the kernels we pointed out in Remark (\ref{remark_kernels_independent_of_quasi_particles}), {\it i.e.} the fact that the kernels are independent of the {\it quasi-particles}. Indeed:
\begin{prop}
The interaction $\hat{\mathcal V}$ has the form
\begin{equation}
\hat{\mathcal V}\left(\psi^{(\leq h)}\right)=\sum_{n\geq 1}\int d\bm x_1\dots d\bm x_{2n}\left(\prod_{j=1}^n \psi^{(\leq h)+}_{\bm x_{2j-1}}\psi^{(\leq h)-}_{\bm x_{2j}}\right)W^{(h)}_{2n}(\bm x_1,\dots,\bm x_{2n}),
\end{equation}
{\it i.e.} the kernels $W_{2n}^{(h)}$ are independent of quasi-particles.
\end{prop}
\begin{proof}
We can proceed iteratively. By construction, $\mathcal V^{(\leq 0)}$ is independent of quasi-particles and, in general, if we went on integrating with respect to the {\it bare integration} $P_h(\psi^{(h)})$, all the effective interactions $\mathcal V^{(\leq h)}$ would remain independent of quasi-particles.\\
So let us suppose that $\hat {\mathcal V}^{(\leq h)}$ is independent of quasi-particles, and let us check that the dressing procedure we just described does not break this structure.
First of all, let us note that the integration $P(\psi^{(\leq h)})$ (\ref{measure_quasi_particles_real_space_DBC}) is associated with the propagator $g^{(\leq h)}$, which is independent of quasi-particles. Besides, we dress this propagator with the local part (\ref{localization_W^d_DBC}) that, as we already pointed out, is independent of quasi-particles, as is the remainder
$$\left(1-\mathcal L_\mathcal T\right)\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta}}\hat\psi^{(\leq h)+}_{\bm k}\hat\psi^{(\leq h)-}_{\bm k}\hat W^{d(h)}(\bm k),$$
that we left in the effective interaction $\tilde{\mathcal V}^{(h)}$.\\
Finally:
\begin{itemize}
\item the dressed measure $\tilde P(\psi^{(\leq h)})$ is still associated with a propagator independent of quasi-particles,
\item the effective potential $\tilde{\mathcal V}^{(h)}$ is obtained, starting from $\mathcal{V}^{(h)}$, by replacing $$\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta}}\hat\psi^{(\leq h)+}_{\bm k}\hat\psi^{(\leq h)-}_{\bm k}\hat W^{d(h)}(\bm k)\to \left(1-\mathcal L_\mathcal T\right)\sum_{\bm k\in\mathcal D^d_{\Lambda,\beta}}\hat\psi^{(\leq h)+}_{\bm k}\hat\psi^{(\leq h)-}_{\bm k}\hat W^{d(h)}(\bm k),$$ which is also independent of quasi-particles as we just commented.
\end{itemize}
\end{proof}
\paragraph{Renormalized propagator and renormalized effective potential} The propagator associated with the {\it dressed Gaussian Grassmann measure} $\tilde P_{Z_{h-1}}$, in the real-space representation, is
\begin{equation}
\begin{split}
\frac{g^{(h)}(\bm x,\bm y)}{Z_{h-1}}=\sum_{\omega=\pm}\frac{\left(e^{-i\omega (x- y)}g_{P,\omega}^{(h)}(\bm x-\bm y)+e^{-i\omega (x+ y)}g_{R,\omega}^{(h)}(\bm x,\bm y)\right)}{Z_{h-1}},
\end{split}
\end{equation}
where, with a slight abuse of notation, we call $g_{\sigma,\omega}^{(h)}$ the analogue of the propagator already defined in (\ref{propagator_decomposed_in_quasiparticles_DBC}), with $f_h(\bm k')$ replaced by $\tilde f_h(\bm k')$:
\begin{equation}
\begin{split}
g_{P,\omega}^{(h)}(\bm x-\bm y)=\frac{1}{\beta 2(L+1)}\sum_{\bm k'\in \mathcal D_{\Lambda,\beta}}e^{-i\bm k'\cdot (\bm x-\bm y)}\frac{\tilde f_h(\bm k')}{-ik_0+e(\bm k'+\omega p_F)},\\
g_{R,\omega}^{(h)}(\bm x,\bm y)=\frac{1}{\beta 2(L+1)}\sum_{\bm k'\in \mathcal D_{\Lambda,\beta}}e^{-ik'(x+y)}e^{-ik'_0(x_0-y_0)}\frac{\tilde f_h(\bm k')}{-ik_0+e(\bm k'+\omega p_F)}.
\end{split}
\end{equation}
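For orientation, it may help to record the size one expects these single-scale propagators to have. The following is a heuristic sketch, not stated in the text: it assumes, as is standard in multiscale expansions, that $\tilde f_h(\bm k')$ is supported on momenta of size of order $\gamma^h$, and the constants $C_N$, $C$ are unspecified.

```latex
% Heuristic sketch (assumption: \tilde f_h supported where |\bm k'| \sim \gamma^h).
% Counting O(\gamma^{2h}\beta(L+1)) momenta in the support, each summand of size
% O(\gamma^{-h}), and integrating by parts N times in \bm k', one expects
\left|g_{P,\omega}^{(h)}(\bm x-\bm y)\right|
  \leq \frac{C_N\,\gamma^{h}}{1+\left(\gamma^{h}|\bm x-\bm y|\right)^{N}},
\qquad\text{so that}\qquad
\left\|g_{P,\omega}^{(h)}\right\|_{\infty}\leq C\gamma^{h},
\quad
\int d\bm z\,\left|g_{P,\omega}^{(h)}(\bm z)\right|\leq C\gamma^{-h}.
```

These are the dimensional bounds that the norms recalled later (Corollary (\ref{corollary_norms_propagators_DBC})) refine for the remainder part $g_{R,\omega}^{(h)}$.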
Besides, it is important to remember that, had we not performed the renormalization procedure just described, the effective potential $\mathcal V^{(h)}$ would have the same form already shown in (\ref{effective_potential_decomposed_2-4_kerlels_DBC}). The renormalization procedure instead produces a new sequence of effective potentials, which we call {\it renormalized effective potentials}; they have the same form as (\ref{effective_potential_decomposed_2-4_kerlels_DBC}), with $\psi^{(\leq h)}$ replaced by $\sqrt{Z_h}\psi^{(\leq h)}$ and the kernels $W^{(h)}_{2n}$ replaced by what we call the {\it renormalized values of the clusters}. More precisely, the effective potentials can be written as
\begin{equation}
\begin{split}
\mathcal V^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)=\sum_{n=1}^{\infty}\sum_{\tau\in\mathcal T_{h,n}}\mathcal V^{(h)}\left(\tau, \sqrt{Z_h}\psi^{(\leq h)}\right),\\
\mathcal V^{(h)}\left(\tau,\sqrt{Z_h}\psi^{(\leq h)}\right)=\int d\bm x(I_{v_0})\sum_{P_{v_0}\subset I_{v_0}}\sqrt{Z_h}^{|P_{v_0}|}\tilde \psi^{(\leq h)}(P_{v_0}) W^{(h)}(\tau, P_{v_0},\bm x(I_{v_0}))
\end{split}
\end{equation}
where, in fact, the kernels $W^{(h)}(\tau, P_{v_0},\bm x(I_{v_0}))$ have to be read as the {\it renormalized values of the clusters}, that we discuss in the following subsection.
\subsection{The renormalized tree expansion}
\label{subsection_the_renormalized_tree_expansion}
\begin{figure}
\centering
\begin{tikzpicture}
[scale=0.7, transform shape]
\foreach \i in {1,2,3,4,5,6,7,8,9,10,11,12,13,14} {%
\draw (\i,2.9) -- (\i, 11.2); }
\foreach \j in {1,2,3,4,5} {%
\draw [very thick] (\j,7) -- ++ (1,0);
\fill (\j,7) circle (0.1);
\node at (\j, 6.7) {$\mathcal R$};
\fill (6,7) circle (0.1);
\node at (6, 6.7) {$\mathcal R$};
}
\foreach \j in {0,1,2,3,4,5} {%
\draw [very thick] (6+\j, 7 -\j *0.5) -- +(1,-0.5);
\fill (6+\j,7-\j*0.5) circle (0.1);
\node at (6+\j, 6.7-\j*0.5) {$\mathcal R$};}
\fill (6+6, 7-3) circle (0.1);
\node at (12, 4.7) {$\mathcal L$};
\foreach \j in {0,1,2,3} {%
\draw [very thick] (6+\j, 7 +\j *0.5) -- +(1,+0.5);
\fill (6+\j,7+\j*0.5) circle (0.1);
\node at (6+\j, 6.7+\j*0.5) {$\mathcal R$};}
\fill (6+4, 7+2) circle (0.1);
\node at (10, 8.7) {$\mathcal R$};
\foreach \j in {0,1} {%
\draw [very thick] (10+\j, 9 +\j *0.5) -- +(1,+0.5);
\fill (10+\j,9+\j*0.5) circle (0.1);
\node at (10+\j, 8.7+\j*0.5) {$\mathcal R$};}
\fill (12, 10) circle (0.1);
\node at (12, 9.7) {$\mathcal L$};
\foreach \j in {0,1,2} {%
\draw [very thick] (10+\j, 9 -\j *0.5) -- +(1,-0.5);
\fill (10+\j,9-\j*0.5) circle (0.1);
\node at (10+\j, 8.7-\j*0.5) {$\mathcal R$};}
\fill (13,7.5) circle (0.1);
\node at (13, 7.2) {$\mathcal L$};
\foreach \j in {0,1} {%
\draw [very thick] (12+\j, 8 +\j *0.5) -- +(1,+0.5);
\fill (12+\j,8+\j*0.5) circle (0.1);
\node at (12+\j, 7.7+\j*0.5) {$\mathcal R$};
}
\fill(14,9) circle (0.1);
\node at (14, 8.7) {$\mathcal L$};
\foreach \j in {0} {%
\draw [very thick] (12+\j, 4 +\j *0.5) -- +(1,+0.5);
\fill (12+\j,4+\j*0.5) circle (0.1);
\node at (12+\j, 3.7+\j*0.5) {$\mathcal R$};
}
\foreach \j in {0,1} {%
\draw [very thick] (12+\j, 4 -\j *0.5) -- +(1,-0.5);
\fill (12+\j,4-\j*0.5) circle (0.1);
\node at (12+\j, 3.7-\j*0.5) {$\mathcal R$};}
\fill (14,3) circle (0.1);
\node at (14, 3.3) {$\mathcal L$};
\fill (13,4.5) circle (0.1);
\node at (13, 4.2) {$\mathcal L$};
\draw [very thick] (8,8) -- (9, 7.5);
\fill (9,7.5) circle (0.1);
\node at (9, 7.2) {$\mathcal L$};
\draw [very thick] (11,8.5) -- (12, 9);
\fill (12,9) circle (0.1);
\node at (12, 8.7) {$\mathcal L$};
\draw [very thick] (6,7) -- (11,6);
\fill (11,6) circle (0.1);
\node at (11, 5.7) {$\mathcal R$};
\draw [very thick] (11,6) -- (12, 5);
\fill (12,5) circle (0.1);
\draw [very thick] (11, 6) -- ++ (2,0);
\fill (13,6) circle (0.1);
\node at (13, 5.7) {$\mathcal L$};
\node at (1,2.7) {$\bm h$};
\node at (2,2.7) {$\bm h+1$};
\node at (3,2.7) {$\bm h+2$};
\foreach \i in {4,5,6,7,8} {%
\node at (\i,2.8) {...};}
\node at (9,2.7) {$\bm h_v$};
\node at (10,2.7) {$\bm h_v+1$};
\foreach \i in {11,12,13} {%
\node at (\i,2.8) {...};}
\node at (14,2.7) {$\bm 1$};
\node at (9,8.8) {$ v$};
\node at (1,7.3) {$ r$};
\node at (2,7.3) {$ v_0$};
\fill (7,6.8) circle (0.1);
\node at (7, 7.8) {$\mathcal R$};
\fill (8,6.6) circle (0.1);
\node at (8, 6.3) {$\mathcal R$};
\fill (9,6.4) circle (0.1);
\node at (9, 6.1) {$\mathcal R$};
\fill (10,6.2) circle (0.1);
\node at (10, 5.9) {$\mathcal R$};
\fill (12, 6) circle (0.1);
\node at (12, 5.7) {$\mathcal R$};
\end{tikzpicture}
\label{figure_renormalized_tree_DBC}
\caption{Example of a renormalized tree, with $n=9$ endpoints at scales $\leq 1$.}
\end{figure}
As usual, it is convenient to give a graphical representation of the renormalized and localized effective potentials in terms of trees. One starts by drawing $\mathcal V^{(-1)}$ as in Figure (\ref{figure_effective_potentiale_scale_0}), using the representation of $\mathcal V^{(0)}$ as the sum of a renormalized and a localized part, and then iterates the procedure on $\mathcal V^{(-1)}$ itself. Finally, one gets the family of renormalized trees, given by the same trees we obtained by naively expanding $\mathcal V^{(h)}$, with the following differences:
\begin{itemize}
\item Each vertex $v\notin V_f(\tau)$ is labeled by an operator
$$\mathcal R\in\left\{\mathcal R_\mathcal T\mathcal L_{\mathcal B}, \tilde{\mathcal R}_\mathcal T\mathcal R_{\mathcal B}, \mathcal R_\mathcal B\right\},$$
up to the first vertex $v_0$ which can be labeled either by $$\mathcal R\in\left\{\mathcal R_\mathcal T\mathcal L_{\mathcal B}, \tilde{\mathcal R}_\mathcal T\mathcal R_{\mathcal B}, \mathcal R_\mathcal B\right\},\mbox{ or by }\mathcal L\in \left\{ \mathcal L_\mathcal T\mathcal L_\mathcal{B}, \tilde{ \mathcal L}_\mathcal T\mathcal L_\mathcal{B}\right\}.$$
\item There are endpoints $v\in V_f(\tau)$ with scale label $h_v\leq 1$ (while, in the non-renormalized expansion, each endpoint lives at scale $h_v=1$). If $v\in V_f(\tau)$ and $h_v<0$, a contribution $\mathcal L \mathcal V^{(h)}$ among (\ref{linearized_effective_potentia_nu_DBC})-(\ref{linearized_effective_potentia_lambda_DBC}) is associated with $v$, while if $h_v=0$ either a contribution $\mathcal L \mathcal V^{(0)}$ or a contribution $\mathcal R\mathcal V^{(0)}$ is associated with $v$. In this way the endpoints are associated with running coupling constants and functions through the label $r_v$: if $r_v=\nu_h$ and $h=h_{v'}$, $v$ is associated with $F^{(\leq h)}_\nu$; if $r_v=\varpi_h$ and $h=h_{v'}$, $v$ is associated with $\left(\varpi \ast F^{(\leq h)}_\varpi\right)$; and so on.
\item the hierarchical structure of the tree, together with the very definition of the operators $\mathcal L_{\mathcal B}$, $\mathcal R_{\mathcal B}$, implies some ordering constraints on the remainder operators labeling the vertices: let us suppose that some vertex $v\in V(\tau)$ is labeled by a renormalization operator $\mathcal R_\mathcal T\mathcal R_{\mathcal B}$ or $\mathcal R_\mathcal B$. Then, for each $w\prec v$,
\begin{equation}
\mathcal L_{\mathcal B} \mathcal W^{(h_w)}(\tau_w,P_w,\bm x(P_w))= 0.
\end{equation}
This means that, given $v\in V(\tau)$ labeled by $\mathcal R_\mathcal T\mathcal R_{\mathcal B}$ or $\mathcal R_\mathcal B$, any $w\prec v$ is necessarily labeled by $\mathcal R_\mathcal T\mathcal R_{\mathcal B}$ or $\mathcal R_\mathcal B$. {\it Vice versa}, it is not possible that a vertex labeled by $\mathcal R\in\{\mathcal R_\mathcal T\mathcal L_{\mathcal B}, \mathcal L_{\mathcal B}\}$ is an ancestor of vertices labeled by $\mathcal R\in\{\mathcal R_\mathcal T\mathcal R_{\mathcal B}\}$.
\end{itemize}
This inductive definition of the renormalized effective potential is convenient since it makes it possible to obtain estimates on the kernels of the effective potential, which we use to show that the multiscale expansion is well behaved. We already pointed out that the multiscale integration induces a natural definition of the so-called {\it running coupling constants and functions} $\vec v_k(x,y)$, $h<k\leq 1$, of which the effective kernels can be thought of as functions. Our strategy is the following:
\begin{enumerate}
\item first of all, we will consider $\vec v_k$ as an arbitrary sequence on which we will make some smallness assumptions, without requiring that it is a solution of the beta function (\ref{vec_v_h(x)}),
\item once we know that, under these smallness assumptions, the kernels of the effective potential are well defined, we prove that the {\it running coupling constants and functions} are indeed solutions of the beta function.
\end{enumerate}
\subsection{Renormalized bounds}
Let us recap what we have done so far and what we want to do. In Section (\ref{section_Non-renormalized expansion and properties of kernels}) we inferred, knowing that the {\it bulk contributions dominate over the remainder contributions}, that the {\it sources of problems} for the convergence of the expansion are the {\it quadratic and quartic kernels}, as in the previous chapter (Theorem (\ref{theorem_bound_of_kernels})). In light of this fact we set up a renormalization procedure, which consists in dressing, scale by scale, the Grassmann integration with a marginal contribution coming from the effective potential. Of course, we are left with proving that this procedure improves the bounds on the kernels in such a way that we can perform the sum over all the scales, {\it i.e.} we want to prove the following theorem.
\begin{thm}
\label{theorem_renormalized_bounds_DBC}
Let $\tau\in\mathcal T_{h,n}$ be a renormalized tree, $h>h_L$, and let $\mathcal W^{(h)}(\tau, P_{v_0}, \bm x(P_{v_0}))$ be the corresponding renormalized kernel. Then:
\begin{equation}
\label{bounds_renormalized_kernels_DBC}
\begin{split}
\frac{1}{|\Lambda|\beta}\int d\bm x(P_{v_0})\left|\mathcal W^{(h)}(\tau,P_{v_0},\bm x(P_{v_0}))\right|\leq C^n\gamma^{-h[D(P_{v_0})+z_{v_0}]}\\
\left(\prod_{v\notin V_f(\tau)}\gamma^{-[D(P_{v})+z_{v}](h_v-h_{v'})}\right)\left(\prod_{v\in V_f(\tau)}r_v\right)
\end{split}
\end{equation}
where $r_v\in \{|\nu_h|,|\lambda_h|, |\delta_h|, \sup_{x\in\Lambda}\int_\Lambda dy|\varpi_h(x,y)|\},$
$z_v=\theta$ if $G_v$ has four external lines, $z_v=1+\theta$ if $G_v$ has two external lines, and $C$ is a suitable constant.
\end{thm}
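To see why the exponents $z_v$ renormalize the expansion, the following check may help; it is a sketch under the assumption (standard for one-dimensional fermions, and not restated here) that the scaling dimension is $D(P_v)=-2+|P_v|/2$.

```latex
% Illustrative check (assumption: D(P_v) = -2 + |P_v|/2, as standard in d=1+1):
%   |P_v| = 4:  D(P_v) = 0,   z_v = \theta      =>  D(P_v)+z_v = \theta > 0,
%   |P_v| = 2:  D(P_v) = -1,  z_v = 1 + \theta  =>  D(P_v)+z_v = \theta > 0,
% so each factor in the product over vertices is summable over the scale differences:
\sum_{h_v-h_{v'}\geq 1}\gamma^{-[D(P_v)+z_v](h_v-h_{v'})}
  \leq \sum_{n\geq 1}\gamma^{-\theta n}
  = \frac{\gamma^{-\theta}}{1-\gamma^{-\theta}} < \infty.
```

Without the gains $z_v$, the clusters with two or four external lines would carry $D(P_v)\leq 0$ and the sum over scales would diverge; this is precisely what the renormalization procedure cures.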
It is convenient to start by looking at the {\it explicit formula} for the renormalized effective potentials:
\begin{equation}
\begin{split}
\mathcal V^{(h)}\left(\sqrt{Z_h}\psi^{(\leq h)}\right)=\sum_{n=1}^{\infty}\sum_{\tau\in\mathcal T_{h,n}}\mathcal V^{(h)}\left(\tau, \sqrt{Z_h}\psi^{(\leq h)}\right),
\end{split}
\end{equation}
where, if $v_0$ is the first vertex of $\tau$ and $\tau_1,\dots, \tau_{s_{v_0}}$ are the subtrees of $\tau$ with root $v_0$, $\mathcal V^{(h)}(\tau,\cdot)$ is inductively defined as
\begin{equation}
\begin{split}
\mathcal V^{(h)}(\tau,\sqrt{Z_h}\psi^{(\leq h)})=\\
\frac{(-1)^{s_{v_0}+1}}{s_{v_0}!}\mathcal E_{h+1}^T\left(\mathcal V^{(h+1)}(\tau_1, \sqrt{Z_h}\psi^{(\leq h+1)});\dots ; \mathcal V^{(h+1)}(\tau_{s_{v_0}}, \sqrt{Z_h}\psi^{(\leq h+1)})\right)
\end{split}
\end{equation}
where
\begin{equation}
\mathcal E_{h_v}^T\left(\tilde \psi(P_{v_1}), \dots, \tilde \psi(P_{v_{s_v}})\right)=\sum_{T_v}\alpha_{T_v}\prod_{\ell \in T_v}g^{(h_\ell)}_{\ell}\int dP_{T_{v}}(\bm t) \det G^{T_v}(\bm t)
\end{equation}
where $T_v$ is a set of lines forming an {\it anchored tree} between the clusters of points $P_{v_1}, \dots, P_{v_{s_v}}$, $\alpha_{T_v}$ is a sign, and $g^{(h_\ell)}_{\ell}=g_{\sigma(\ell),\omega(\ell)}^{(h_\ell)}(\bm x(f_\ell),\bm y(f'_{\ell}))$, so that:
\begin{equation}
\begin{split}
\mathcal R\mathcal V^{(h)}\left(\tau,\sqrt{Z_h}\psi^{(\leq h)}\right)=\\=\int d\bm x(I_{v_0})\sum_{P_{v_0}\subset I_{v_0}}\sum_{T\in\bm T}\sum_{\alpha\in A_T}\sqrt{Z_h}^{|P_{v_0}|}\left[\prod_{f\in P_{v_0}}\partial^{b_\alpha(f)}_{j_\alpha(f)} \psi^{(\leq h)\epsilon(f)}_{\bm x(f)}(P_{v_0})\right] \mathcal R_{\alpha}W^{(h)}(\tau, P_{v_0},\bm x(I_{v_0})),
\end{split}
\end{equation}
where $b_\alpha(f)\in\{0,1,2\}$, $j_\alpha(f)\in\{0,1\}$, and $A_T$ is a set of indices that formally allows us to distinguish the different terms produced by the non-trivial $\mathcal R\in\{\mathcal R_\mathcal T\mathcal L_\mathcal B, \tilde{\mathcal R}_\mathcal T\mathcal R_{\mathcal B},\mathcal R_\mathcal B\}$.
\paragraph{Explicit expression of renormalized kernels}
\begin{rem}When $\mathcal R=\mathcal R_\mathcal T\mathcal L_\mathcal B$, thanks to the very structure of the renormalized trees explained in Subsection (\ref{subsection_the_renormalized_tree_expansion}), we are in the same situation as in Theorem (\ref{theorem_renormalized_bounds}), so there is nothing new to comment on.
So we are left with controlling the cases $\mathcal R_\alpha\in\{\tilde{\mathcal R}_\mathcal T\mathcal R^{(1)}_\mathcal B, \tilde{\mathcal R}_\mathcal T\mathcal R^{(2)}_ \mathcal B, \mathcal R^{(1)}_\mathcal B, \mathcal R^{(2)}_\mathcal B\}$.
\end{rem}
Let us now discuss in detail the multiscale structure of $\mathcal R_\alpha W^{(h)}$ in these four cases.\\
In general, using Remarks (\ref{remark_R_B_decomposition_R1_R2}) and (\ref{remark_commutation_renormalization operators}), we can re-write
\begin{equation}
\begin{split}
\mathcal R_\alpha W^{(h)}(\tau, P_{v_0},\bm x(I_{v_0}))=\\=\left[\prod_{v\notin V_f(\tau)}\left(\frac{Z_{h_v}}{Z_{h_{v}-1}}\right)^{\frac{|P_v|}{2}}\right]
\mathcal R^{(\tau)}_{v_0,\alpha,\mathcal B}\left( \left\{\prod_{v\notin V_f(\tau)}\frac{1}{s_v!}\int dP_{T_v}(\bm t_v) \left( \det G_\alpha^{h_v, T_v}(\bm t_v)\right)\cdot\right.\right.\\\left.
\left.\cdot \left[\prod_{\ell \in T_v}(\bm x_\ell- \bm y_\ell)^{b_\alpha(\ell)}_{j_\alpha(\ell)}\partial^{q_\alpha(f_\ell^1)}_{j_\alpha(f_\ell^1)}\partial^{q_\alpha(f_\ell^2)}_{j_\alpha(f_\ell^2)} g^{(h_\ell)}_{\ell}\right]\right\}\left[\prod_{i=1}^{n}(\bm x^i-\bm y^i)^{b_\alpha(v^*_i)}_{j_\alpha(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i}))\right]\right)
\end{split}
\label{renormalized_kernels_explicit_expression}
\end{equation}
where
\begin{itemize}
\item we used the commutation of the renormalization operators (Remark (\ref{remark_commutation_renormalization operators})), acting first with the {\it Taylor renormalization operators}, so that $b_\alpha(\ell), b_\alpha(v_i^*), q_\alpha(\ell), q_\alpha(v_i^*)\in\{1,2\}$; the fact that there are as many derivatives as {\it "zeroes"} is technically expressed by the constraint $\sum_{\ell, i}\left(b_\alpha(\ell)+ b_\alpha(v_i^*)- q_\alpha(f_\ell^{(1)})- q_\alpha(f_\ell^{(2)})\right)=0$. Here $(\bm x_\ell- \bm y_\ell)^{b_\alpha(\ell)}_{j_\alpha(\ell)}$ are the zeroes introduced in the definition of the renormalization procedure, where $j_\alpha\in \{0,1\}$ denotes the component of the vector, $K^{(h_i)}_{{v^*_{i}}}$ is one of the terms of the local effective potential $\mathcal L \mathcal V^{(h_i)}$, and $G_\alpha^{h_v,T_v}$ is the matrix whose entries are
\begin{equation}
\begin{split}
G^{h_v,T_v}_{\alpha, ij,i'j'}= t_{v,i,i'}\partial_{j_\alpha(f_{ij}^1)}^{q_\alpha(f_{ij}^1)}\partial_{j_\alpha(f_{ij}^2)}^{q_\alpha(f_{ij}^2)}g_{\sigma_\ell,\omega_\ell}^{h_v}(\bm x_{ij},\bm y_{i'j'}).
\end{split}
\label{matrix_G_h_v_t_v}
\end{equation}
\item $R_{v_0,\alpha,\mathcal B}^{(\tau)}$ is a formal way to represent the {\it bulk renormalization} operations; it has to be interpreted as being iteratively defined, in a way that depends on the structure of the renormalized tree, as discussed in the definition of the {\it localization and renormalization operators}, Subsection (\ref{subsection_the_renormalized_tree_expansion}).
\end{itemize}
In particular, $R_{v_0,\alpha,\mathcal B}^{(\tau)}$ has to be thought of as a {\it composition} of operators $\mathcal R^{(i)}_\mathcal B$, $i=1,2$, acting on the vertices of $V(\tau)$ in the following way:
\begin{itemize}
\item {\bf Case $\mathcal R^{(1)}_\mathcal B$} As we commented in Remark (\ref{remark_R_B_decomposition_R1_R2}), $\mathcal R^{(1)}_\mathcal B$ does not modify the trees, but it selects the trees having at least one non-translation-invariant element (either $g_R$ propagators or $\varpi$-endpoints).
\item {\bf Case $\mathcal R^{(2)}_\mathcal B$} As we commented in Remark (\ref{remark_R_B_decomposition_R1_R2}), $\mathcal R^{(2)}_\mathcal B$ modifies the trees that do not contain any non-translation-invariant element. So, in the right-hand side of (\ref{renormalized_kernels_explicit_expression_first_version}), the product within the last brackets does not include, by construction, $\varpi$-type endpoints; $\mathcal R_\mathcal B^{(2)}$, by modifying the coordinates associated with the endpoints, {\it in general} also modifies the coordinates of some propagators and the coordinates involved in the determinant.
\end{itemize}
\paragraph{Definition of the weight functions}
It is worth pointing out that, for the purposes of the estimates we are looking for, we can associate the {\it decay properties} of the non-local counterterms $\gamma^h \varpi_h(x,y)$ and of the remainder propagators $g_{R,\omega}^{(h)}(\bm x,\bm y)$ with vertices instead of lines of the spanning tree; this will be useful during the proof. Indeed:
\begin{itemize}
\item the inductive hypothesis on $\varpi_h(\cdot, \cdot)$ reads
\begin{equation}
\int dy \left|\varpi_h(x,y)\right|\leq |\lambda| \frac{C_\theta}{1+\gamma^{\theta h}|x|^\theta}, \hspace{3mm} 0<\theta \leq 1,
\label{varpi_endpoints_decay}
\end{equation}
and holds symmetrically in $\bm x \leftrightarrow \bm y$. Let us define
\begin{equation}
||\varpi_h||^{(\theta)}_{\infty,1}=\sup_{x\in\Lambda}(1+\gamma^h|x|)^\theta\int dy \left|\varpi_h(x,y)\right|.
\end{equation}
\begin{figure}
\centering
\begin{tikzpicture}
[thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\fill (0,1) circle (0.06);
\node at (0,0.6) {\bf x};
\fill (2,1) circle (0.06);
\node at (2,0.6) {\bf y};
\fill (7,1) circle (0.06);
\node at (7,0.6) {\bf x};
\node at (7,1.4) {$\varpi_h(x)$};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (7,1) {};
\draw [postaction={decorate}] (-1,1) -- ++ (1,0);
\draw [-,decorate, decoration={coil, aspect=2}] (0,1) --++ (2,0);
\draw [postaction={decorate}] (2,1) -- ++ (1,0);
\draw [postaction={decorate}] (6,1) -- ++ (1,0);
\draw [postaction={decorate}] (7,1) -- ++ (1,0);
\draw [->, very thick] (3.5,1) -- ++ (2,0);
\end{tikzpicture}
\centering
\begin{tikzpicture}
[thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\fill (0,1) circle (0.06);
\node at (0,0.6) {\bf x};
\fill (2,1) circle (0.06);
\node at (2,0.6) {\bf y};
\fill (8,1) circle (0.06);
\node at (6,0.6) {\bf x};
\node at (8,0.6) {\bf y};
\node at (6,1.4) {$\rho_h(x)$};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (6,1) {};
\draw [very thick, dashed] (0,1) --++ (2,0);
\draw [very thick] (6,1) -- ++ (2,0);
\draw [->, very thick] (3.5,1) -- ++ (2,0);
\end{tikzpicture}
\caption{First line: localization of a $\varpi$-type endpoint: $\varpi_h(x)=\int_{\Lambda}dy|\varpi_h(x,y)|$. In the second line we replace an $R$-labeled propagator (dashed line) by a $P$-labeled propagator, dressing one of the two vertices with a weight function $\rho$.}
\label{figure_localization_varpi_endpoints}
\end{figure}
This means that, in studying a {\it renormalized} or {\it linearized} tree $\tau \in \mathcal T_{h,n}$ (meaning that the endpoints can live at any scale $h< k \leq 0$ and represent the terms appearing in the linearized part of the effective potential $\mathcal L \mathcal V^{(k)}$ (\ref{localized_effective_potential_DBC})), if we call $n_{\varpi}$ the number of $\varpi$-type endpoints (which are the only non-local endpoints), we can reduce the number of integration points by integrating out $n_\varpi$ of them as in (\ref{varpi_endpoints_decay}); see Figure (\ref{figure_localization_varpi_endpoints}). In this way, for the purpose of an upper bound, we replace a non-local graph element representing $\gamma^h \varpi_h(x,y)$ by a local graph element, {\it i.e.} a vertex, and we associate with this vertex a {\it weight} that, with a slight abuse of notation and after suitably fixing a $\bar \theta$ in formula (\ref{varpi_endpoints_decay}), we call $\gamma^h \varpi_h(x)$, defined as
$$ \varpi_h(x)= |\lambda|\frac{C_{\bar \theta}}{1+\gamma^{\bar \theta h}|x|^{\bar{\theta}}},$$
\item following the same idea of associating a {\it weight} with vertices, let us recall the result of Corollary (\ref{corollary_norms_propagators_DBC})
\begin{equation}
\frac{1}{\beta}\left| \int d\bm x d\bm y g_{R,\omega}(\bm x,\bm y)\right| \leq \gamma^{- 2h},
\end{equation}
so, for the purpose of a dimensional estimate, we can replace the propagator associated with the line $(\bm x,\bm y)$ by a translation-invariant propagator $g_P$, provided we {\it dress} one of the two vertices linked by the propagator with a proper weight: let us recall the definition of $\rho_h^{(N)}$ (\ref{definition_rho_h^N}), for each $N=1,2,\dots$:
$$\rho^{(N)}_h(x)= \frac{C_N}{1+\left(\gamma^h |x|\right)^N},$$ where $C_N$ is the same constant as in Corollary (\ref{corollary_norms_propagators_DBC}) (again, we can arbitrarily choose $x$ or $y$); indeed:
\begin{equation}
\begin{split}
\frac{1}{\beta}\left| \int d\bm x d\bm y g^{(h)}_{R,\omega}(\bm x,\bm y)\right| \leq C \gamma^{-2h},\\
\frac{1}{\beta}\left| \int d\bm x d\bm y g^{(h)}_{P,\omega}(\bm x-\bm y)\rho_h(x)\right|= \left|\int d\bm y' g_{P,\omega}^{(h)}(\bm y')\int dx \rho_h(x) \right| \leq C \gamma^{- 2h},
\end{split}
\label{reminder_propagators_as_periodic_and_weight}
\end{equation}
where we set $\rho_h(\cdot):=\rho_h^{(\bar N)}$ with $\bar N$ suitably fixed {\it a priori} (see Figure \ref{figure_localization_varpi_endpoints}).
\item during the proof we will use the estimate (\ref{bound_g_R_g_infty_rho}), already commented on, which we recall here:
$$|g_R^{(h)}(\bm x,\bm y)|\leq ||g_R^{(h)}||_\infty \rho^{(N)}_h(x),\hspace{5mm}\forall \hspace{3mm} N=1,2,\dots$$
\end{itemize}
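As a consistency check of (\ref{reminder_propagators_as_periodic_and_weight}), one can verify directly that the weight $\rho_h$ supplies the missing factor $\gamma^{-h}$; the following one-line computation is a sketch, assuming only the change of variables $u=\gamma^h x$ and $\bar N\geq 2$ so that the integral converges.

```latex
% Sketch: the weight alone carries a factor \gamma^{-h} (substituting u = \gamma^h x):
\int dx\,\rho_h^{(\bar N)}(x)
  = \int dx\,\frac{C_{\bar N}}{1+(\gamma^{h}|x|)^{\bar N}}
  = \gamma^{-h}\int du\,\frac{C_{\bar N}}{1+|u|^{\bar N}}
  \leq C'\,\gamma^{-h}, \qquad \bar N\geq 2,
```

which, combined with the standard bound $\left|\int d\bm y'\, g_{P,\omega}^{(h)}(\bm y')\right|\leq C\gamma^{-h}$, reproduces the $\gamma^{-2h}$ bound satisfied by the genuine remainder propagator.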
\begin{rem}
From now on we use the symbol $\rho_h(\cdot)$ to denote $\rho^{(\bar N)}_h$ for some suitably fixed $\bar N$, so in particular we can use the bound:
\begin{equation}
\rho_h(x)\leq \frac{C_{\bar N}}{\left(1+\gamma^h |x|\right)^{\bar N}}.
\end{equation}
We stress that for any $\theta\in (0,1)$ there exists a constant $C_\theta>0$ such that
\begin{equation}
\sup_x \left| \rho_h(x) \right|\leq C_\theta,\hspace{3mm} \sup_x\left| \varpi_h(x)\right|\leq C_\theta |\lambda|,
\end{equation}
meaning that, if we did not take advantage of the decay properties of $\rho_h(\cdot)$ and $\varpi_h(\cdot)$, we would get, for the clusters containing at least one element which breaks translation invariance, the same bound as for the dominant (translation-invariant) part.
\end{rem}
Let us unify the notation by introducing the weight function
\begin{equation}
w_h(x)\in\{\rho_h(x), \varpi_h(x)\}.
\label{definition_weight_functions}
\end{equation}
\paragraph{"Simplification" of the tree and definition of $V_\mathcal B(\tau)$} We can {\it simplify} the hierarchical renormalization structure of the tree $\tau$ by suitably using Corollary (\ref{corollary_norms_propagators_DBC}) in bounding the norm $||\cdot||_1$ of the renormalized kernels:
\begin{equation}
\frac{1}{|\Lambda|\beta}\int d\bm x(P_{v_0})\left|\mathcal W^{(h)}\left(\tau,P_{v_0}, \bm x(P_{v_0})\right)\right|.
\end{equation}
\begin{figure}
\begin{tikzpicture}
[scale=0.5,transform shape,thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\node at (1,7.3) {$r$};
\node at (3,7.3) {$v_0$};
\foreach \i in {-1,1} {%
\foreach \j in {-1,1}{%
\foreach \l in {-1,1}{%
\draw [very thick] (1,7) -- ++ (2,0) -- ++ (2,\i*2) -- ++ (2,\j*1) -- ++ (2,\l*0.5);
\fill (1,7) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) ++ (2,\l*0.5) circle (0.1);
\node at (1+2+2+2+2,7+\i*2+\j+\l*0.5-0.3) {$\mathcal L$};
}
\fill (1,7) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) circle (0.1);
\node at (1+2+2+2, 7+\i*2+\j - 0.3){$\mathcal R_\mathcal B$};
}
\fill (1,7) ++ (2,0) ++ (2,\i*2) circle (0.1);
\node at (1+2+2,7+\i*2-0.3) {$\mathcal R_\mathcal B$};
}
\fill (1,7) ++ (2,0) circle (0.1);
\node at (1+2,7-0.3) {$\mathcal R_\mathcal B$};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (9, 9.5) {};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (9, 5.5) {};
\foreach \i in {-1,1} {%
\foreach \j in {-1,1}{%
\foreach \l in {-1,1}{%
\fill (1,7) ++ (4,0) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) ++ (2,\l*0.5) circle (0.1);
}}}
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (13, 9.5) {};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (13, 5.5) {};
\foreach \i in {-1,1}{%
\foreach \j in {-1,1}{%
\draw [green, very thick] (1,7.05) ++ (4,0) ++ (2,0) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) ellipse (0.6 and 0.9);
\node at (1+4+2+2+2+2+0.4, 7.05+\i*2+\j) {\textcolor{green}{\bf h+2}};
}
\draw [red, very thick] (1,7.05) ++ (4,0) ++ (4,0) ++ (2,0) ++ (2,\i*2) ellipse (1.1 and 2);
\node at (1+4+2+2+2+2+0.7, 7.05+\i*2) {\textcolor{red}{\bf h+1}};
}
\draw [blue, very thick] (1,7) ++ (10,0) ++ (2,0) ellipse (2.2 and 4.2);
\node at (1+4+2+2+2+2+2, 7.05) {\textcolor{blue}{\bf h}};
\foreach \i in {0,1,2,3,4,5,6} {%
\draw (13,10.5 - \i) to [out=-45, in=45, looseness=1] (13,10.5 -1 -\i);}
\draw [very thick, dashed] (13,10.5)to [out=225, in=-225, looseness=1] (13,8.5);
\draw [very thick, dashed] (13,6.5)to [out=225, in=-225, looseness=1] (13,4.5);
\draw [very thick, dashed](13,7.5)to [out=225, in=-225, looseness=1] (13,6.5);
\draw [very thick, dashed] (13,8.5)to [out=225, in=-225, looseness=1] (13,7.5);
\draw [very thick, dashed] (13,4.5)to [out=225, in=-225, looseness=1] (13,3.5);
\draw [postaction={decorate}] (12,12) -- (13,10.5);
\draw [postaction={decorate}] (13,10.5) -- (14,12);
\draw [postaction={decorate}] (12,2) -- (13,3.5);
\draw [postaction={decorate}] (13,3.5) -- (14,2);
\end{tikzpicture}
\begin{tikzpicture}
[scale=0.5,transform shape, thick,decoration={
markings,
mark=at position 0.5 with {\arrow{>}}}]
\node at (1,7.3) {$r$};
\node at (3,7.3) {$v_0$};
\foreach \i in {-1,1} {%
\foreach \j in {-1,1}{%
\foreach \l in {-1,1}{%
\draw [very thick] (1,7) -- ++ (2,0) -- ++ (2,\i*2) -- ++ (2,\j*1) -- ++ (2,\l*0.5);
\fill (1,7) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) ++ (2,\l*0.5) circle (0.1);
\node at (1+2+2+2+2,7+\i*2+\j+\l*0.5-0.3) {$\mathcal L$};
}
\fill (1,7) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) circle (0.1);
\node at (1+2+2+2, 7+\i*2+\j - 0.3){$\mathcal R_\mathcal B$};
}
\fill (1,7) ++ (2,0) ++ (2,\i*2) circle (0.1);
\node at (1+2+2,7+\i*2-0.3) {$\mathbb I$};
}
\fill (1,7) ++ (2,0) circle (0.1);
\node at (1+2,7-0.3) {$\mathbb I$};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (9, 9.5) {};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (9, 5.5) {};
\foreach \i in {-1,1} {%
\foreach \j in {-1,1}{%
\foreach \l in {-1,1}{%
\fill (1,7) ++ (4,0) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) ++ (2,\l*0.5) circle (0.1);
}}}
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (13, 9.5) {};
\node [regular polygon, regular polygon sides=4,
minimum size=3mm, fill] at (13, 5.5) {};
\foreach \i in {-1,1}{%
\foreach \j in {-1,1}{%
\draw [green, very thick] (1,7.05) ++ (4,0) ++ (2,0) ++ (2,0) ++ (2,\i*2) ++ (2,\j*1) ellipse (0.6 and 0.9);
\node at (1+4+2+2+2+2+0.4, 7.05+\i*2+\j) {\textcolor{green}{\bf h+2}};
}
\draw [red, very thick] (1,7.05) ++ (4,0) ++ (4,0) ++ (2,0) ++ (2,\i*2) ellipse (1.1 and 2);
\node at (1+4+2+2+2+2+0.7, 7.05+\i*2) {\textcolor{red}{\bf h+1}};
}
\draw [blue, very thick] (1,7) ++ (10,0) ++ (2,0) ellipse (2.2 and 4.2);
\node at (1+4+2+2+2+2+2, 7.05) {\textcolor{blue}{\bf h}};
\foreach \i in {0,1,2,3,4,5,6} {%
\draw (13,10.5 - \i) to [out=-45, in=45, looseness=1] (13,10.5 -1 -\i);}
\draw (13,10.5)to [out=225, in=-225, looseness=1] (13,8.5);
\draw (13,6.5)to [out=225, in=-225, looseness=1] (13,4.5);
\draw (13,7.5)to [out=225, in=-225, looseness=1] (13,6.5);
\draw [very thick, dashed] (13,8.5)to [out=225, in=-225, looseness=1] (13,7.5);
\draw [very thick, dashed] (13,4.5)to [out=225, in=-225, looseness=1] (13,3.5);
\draw [postaction={decorate}] (12,12) -- (13,10.5);
\draw [postaction={decorate}] (13,10.5) -- (14,12);
\draw [postaction={decorate}] (12,2) -- (13,3.5);
\draw [postaction={decorate}] (13,3.5) -- (14,2);
\draw [purple] (7,7) ellipse (0.8 and 4.2);
\node at (7,12) {\textcolor{purple}{$\bm{V_\mathcal B(\tau)\subset V(\tau)}$}};
\end{tikzpicture}
\caption{Simplification process of the hierarchical structure of the Renormalization Operators. The first two figures on the left represent the original renormalized tree we consider and the respective {\it Feynman diagrams structure}; on the right there are the "simplified tree" and the respective {\it Feynman diagrams structure}. In the {\it Feynman diagrams structure}, the black lines are the $P$-type propagators, the dashed lines are the $R$-type propagators, the black dots are $\lambda$-type endpoints and the black squares are $\varpi$-type endpoints. For the sake of simplicity, each of the clusters has $4$ external legs and contains one non-translation-invariant element which is not contained in any subcluster, so in this particular graph there is no need for Taylor renormalization. After the simplification procedure, only the "{\it innermost}" non-translation-invariant elements survive, and the main goal of this section will be to show that they are enough to renormalize the whole tree. Finally, the vertices of the tree surrounded by the purple line belong to the set $V_\mathcal B(\tau)\subset V(\tau)$.}
\label{figure_simplification_figure_after}
\end{figure}
First of all, using Remark (\ref{remark_commutation_renormalization operators}), as in (\ref{renormalized_kernels_explicit_expression}), we imagine having already {\it applied} the {\it Taylor renormalization operators} $\mathcal R_\mathcal T, \tilde{\mathcal R}_{\mathcal T}$; so, for the sake of simplicity, in Figure (\ref{figure_simplification_figure_after}) we drop the symbols $\mathcal R_\mathcal T, \tilde{\mathcal R}_\mathcal T$.\\
Starting from each leaf of the Gallavotti-Nicolo ({\it "Taylor renormalized"}) tree, we descend the tree toward the root until we meet for the first time a vertex $v\in V(\tau)$ labeled by $\mathcal R_{\mathcal B}\in\{\mathcal R^{(1)}_\mathcal B, \mathcal R^{(2)}_ \mathcal B\}$: from this point on all the ancestors $w\prec v$ are labeled by $\mathcal R_{\mathcal B}$, and there are two possibilities:
\begin{itemize}
\item either there are {\it neither} remainder propagators {\it nor} $\varpi$-type endpoints at scale $h_w$\footnote{For the sake of clarity we repeat that, by {\it having a propagator (resp. an endpoint) at scale $h_w$}, we properly mean that the propagator (resp. the endpoint) is an element of the cluster $G_w$, but it is not an element of any of the subclusters $G_{\bar w}\subset G_{w}$, where $\bar w$ is a descendant of $w$.}, and we do nothing,
\item or there is at least {\it either} a remainder propagator {\it or} a $\varpi$-type endpoint at scale $h_w< h_v$, and for each of them we use Corollary (\ref{corollary_norms_propagators_DBC}) $$||g_R^{(h_w)}||_\infty\leq ||g_P^{(h_w)}||_\infty,\hspace{3mm} ||g_R^{(h_w)}||_1\leq ||g_P^{(h_w)}||_1,\hspace{3mm} \sup_{x,y}|\varpi_{h_w}(x)|\leq C_{\theta'},$$
for some $\theta'$ suitably fixed {\it a priori}, to replace, respectively, $g_R^{(h_w)}$ by $g^{(h_w)}_P$ and $|\varpi_{h_w}(x)|$ by $C_{\theta'}$.
\end{itemize}
\begin{rem}
From now on, we will say that $\mathcal R_\mathcal T\mathcal R_\mathcal B$ (or $\tilde{\mathcal R}_\mathcal T\mathcal R_\mathcal B$) {\it acts in a non-trivial way} only on those clusters $G_v$ containing a non-trivial weight function exactly at scale $h_v$, and we call $V_{\mathcal B}(\tau)\subset V(\tau)$ the set of the vertices of the tree labeled in a non-trivial way by $\mathcal R_\mathcal T\mathcal R_{\mathcal B}$ (or $\tilde{\mathcal R}_\mathcal T\mathcal R_\mathcal B$).
\end{rem}
\paragraph{Proof of Theorem \ref{theorem_renormalized_bounds_DBC}}
Having introduced all the useful simplifications and fixed the notation, we can now prove Theorem (\ref{theorem_renormalized_bounds_DBC}). The proof rests on two fundamental Lemmata:
\begin{itemize}
\item thanks to Lemma (\ref{lemma_bound_determinants_inside_renormalized_kernels}), we will reduce the problem of {\it bounding the kernels in Theorem (\ref{theorem_renormalized_bounds_DBC})} to the problem of {\it bounding the integral over a spanning tree whose vertices are weighted by the weight functions $w_h(\cdot)$} we just introduced.
\item thanks to Lemma (\ref{lemma_effective_gain}) we exploit the presence of these weight functions to get the dimensional gains that renormalize the tree. In particular, we will prove Lemma (\ref{lemma_effective_gain}) {\it via} two auxiliary lemmata: \begin{itemize}
\item Lemma (\ref{lemma_transfer}) tells us that we can re-arrange the spanning tree as we need, by moving each weight function $w_h(\cdot)$ within the cluster at scale $h$ it belongs to,
\item Lemma (\ref{lemma_integral_w_g}) tells us where the dimensional gains actually come from,
\end{itemize}
\item putting together Lemmata (\ref{lemma_bound_determinants_inside_renormalized_kernels}) and (\ref{lemma_effective_gain}), we obtain in a straightforward way the desired bound in Theorem (\ref{theorem_renormalized_bounds_DBC}).
\end{itemize}
\begin{lem}
\label{lemma_bound_determinants_inside_renormalized_kernels}
\begin{equation}
\begin{split}
\left|\int d\bm x(P_{v_0}) \mathcal R_{\alpha} W^{(h)}(\tau, P_{v_0},\bm x(I_{v_0}))\right|
\leq C \left[\prod_{v\notin V_f(\tau)}\left(\frac{Z_{h_v}}{Z_{h_{v}-1}}\right)^{\frac{|P_v|}{2}}\right]\int d\bm x(P_{v_0}) \cdot
\\ \cdot \left\{\prod_{v\notin V_f(\tau)}\frac{1}{s_v!}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}||g^{(h_v)}||^{\frac{1}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)\right)}_{\infty}\cdot\right.\\
\left.\cdot \left[\prod_{\ell \in T_v}\left|(\bm x_\ell- \bm y_\ell)^{b_\alpha(\ell)}_{j_\alpha(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right|\right]\right\}\cdot \\ \cdot\left[\prod_{i=1}^{n}\left|(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right|\right]\left(\prod_{v\in V_\mathcal{B}(\tau)}w_h(x_v)\right),
\end{split}
\end{equation}
where the terms $\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}$ take into account the dimensional gains coming from the derivatives in (\ref{matrix_G_h_v_t_v}), the argument of the square brackets in the last line has to be read as in (\ref{renormalized_kernels_explicit_expression}) where we replaced $\varpi_h(x)$ by a constant, and the argument of the last brackets are the weight functions we defined in (\ref{definition_weight_functions}).
\end{lem}
We will use another Lemma to bound the integral appearing in the r.h.s. of the latter formula.
\begin{lem}
\label{lemma_effective_gain}
\begin{equation}
\begin{split}
\frac{1}{|\Lambda|\beta}\int d\bm x(P_{v_0})\prod_{v\notin V_f(\tau)}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\left(\left[\prod_{\ell \in T_v}\left|(\bm x_\ell- \bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right|\right]\right)\cdot \\ \cdot\left[\prod_{i=1}^{n}\left|(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right|\right]\left(\prod_{v\in V_\mathcal{B}(\tau)}w_h(x_v)\right) \leq \\
\leq \left( \prod_{v\in V_f(\tau)}\rho_v\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{-h_v(s_v-1)}\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{-z_v(h_v-h_{v'})}\right)
\end{split}
\end{equation}
where
\begin{equation}
z_v=\begin{cases}
\theta &\mbox{ if } |P_v|=4,\\
1+\theta &\mbox{ if } |P_v|=2,
\end{cases} \hspace{5mm} m_{2,v}=\begin{cases}
1 &\mbox{ if $v$ is of type $\nu$ or $\varpi$},\\
0 &\mbox{ otherwise}.
\end{cases}
\end{equation}
\end{lem}
\begin{proof}[Proof of Theorem (\ref{theorem_renormalized_bounds_DBC})]
By putting together the results of Lemmata (\ref{lemma_bound_determinants_inside_renormalized_kernels}) and (\ref{lemma_effective_gain}) we can bound the right hand side of (\ref{bounds_renormalized_kernels_DBC}) by
\begin{equation}
\begin{split}
\left(\prod_{v\in V_f(\tau)}\rho_v\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{\frac{h_v}{2}\left(\sum_{j=1}^{s_v}|P_{v_j}|-|P_v|-2(s_v-1)\right)}\gamma^{-h_v(s_v-1)}\right)\cdot \\ \cdot\left(\prod_{v\notin V_f(\tau)}\gamma^{-z_v(h_v-h_{v'})}\right)
\end{split}
\end{equation}
\end{proof}
\subparagraph{Proofs of Lemmata (\ref{lemma_bound_determinants_inside_renormalized_kernels}) and (\ref{lemma_effective_gain})}
\begin{proof}[Proof of Lemma (\ref{lemma_bound_determinants_inside_renormalized_kernels})]
Let us start by considering the action of the operator $\mathcal R_{\mathcal{B}}^{(1)}$ on a vertex $v\in V_\mathcal B(\tau)$.
\paragraph{Action of $\mathcal R^{(1)}_{\alpha,\mathcal B}$} We refer again to Remarks (\ref{remark_R_B_decomposition_R1_R2}) and (\ref{remark_commutation_renormalization operators}), and we recall the formal representation
\begin{equation*}
\begin{split}
\mathcal R^{(1)}_{\alpha,v,\mathcal B}\left\{\int dP_{T_v}(\bm t_v) \left( \det G_\alpha^{h_v, T_v}(\bm t_v)\right)\cdot\right.\\\left.
\cdot \left[\prod_{\ell \in T_v}\left|(\bm x_\ell- \bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right|\right]\right\}\\ \left[\prod_{v_i^*\in V_f^{(\varpi)}}\left|(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right|\right]\left[\prod_{v_i^*\in V_f\setminus V_f^{(\varpi)}}\left|(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right|\right]
\end{split}
\end{equation*}
where $\mathcal R^{(1)}_{\alpha,v,\mathcal B}$ formally means that the vertex $v\in V_\mathcal B(\tau)$ is renormalized by $\mathcal R^{(1)}_\mathcal B$, with the constraints given by the structure of the subtree $\tau_v$ encoded in $\alpha$.
\begin{itemize}
\item {\bf Case 0} The argument of the first square brackets of the last line gives us directly the weight functions $\varpi_h(\cdot)$ coming from the $\varpi-$type endpoints, and they are independent of $\det G_\alpha^{h_v,T_v}$. To bound $\det G_\alpha^{h_v,T_v}$, we use as usual the Gram-Hadamard inequality (\ref{lemma_gram_hadamard_for_G}).
\end{itemize}
So we are left with showing that, for any $v\in V_\mathcal B(\tau)$, we can extract a weight function $\rho_{h_v}(\cdot)$ from the propagators. There are two different cases: given $v\in V_\mathcal B(\tau)$, either there is at least one remainder propagator belonging to the spanning tree $T_v$, or each of the propagators belonging to the spanning tree is a $P$-labeled propagator and $G_\alpha^{h_v,T_v}$ has a block of remainder propagators.
\begin{itemize}
\item {\bf Case 1: given $v\in V_\mathcal B(\tau)$, there is at least one $R$-labeled propagator belonging to the spanning tree $T_v$:} first of all, we use the Gram-Hadamard inequality (\ref{lemma_gram_hadamard_for_G}) to bound the determinant. To extract the weight function from the remainder propagator belonging to the spanning tree, we use (\ref{reminder_propagators_as_periodic_and_weight}) to bound
$$\int d\bm x_\ell d\bm y_\ell |g_{\ell,R}|\leq c \int d\bm x_\ell d\bm y_\ell |g_{\ell,P}| \rho_{h_\ell}(x_\ell), \hspace{3mm} \ell\in T_v.$$
\item {\bf Case 2: given $v\in V_\mathcal B(\tau)$, there are no $R$-labeled propagators belonging to the spanning tree $T_v$, and $G_\alpha^{h_v,T_v}$ has a block of remainder propagators.} Let us call this set of vertices $\bar V_\mathcal B(\tau)\subseteq V_\mathcal B(\tau)$. The basic idea is to expand, for each $v\in\bar V_\mathcal B(\tau)$, $\det G_\alpha^{h_v,T_v}(\bm t_v)$ along a row of remainder propagators, using the Laplace expansion of the {\it determinant}:
\begin{equation}
\det G_\alpha^{h_v,T_v}=\sum_{i=1}^{s_v}(-1)^{i+j}t_{ij}\partial_{j_\alpha(f_{ij}^1)}^{q_\alpha(f_{ij}^1)}\partial_{j_\alpha(f_{ij}^2)}^{q_\alpha(f_{ij}^2)}g_R^{(h_v)}(\bm x(i),\bm x(j)) G^{h_v, T_v}_{\alpha, ij},
\label{detG_expanded_along_a_line}
\end{equation}
where we recall that $T_v$ is the set of lines such that $T=\cup_{v\notin V_f(\tau)} T_{v}$, and where $G_{\alpha, ij}^{h_v,T_v}$ is the determinant of the matrix obtained from $G_\alpha^{h_v,T_v}$ by erasing row $i$ and column $j$. Once we have extracted the remainder propagators, by using the bound (\ref{bound_g_R_g_infty_rho}) we get:
\begin{equation}
\begin{split}
\left| \int d\bm x(P_{v_0})\prod_{\ell\in T_v} g_\ell \int P(d\bm t) \det G_\alpha^{h_v, T_v}(\bm t)\right|\leq \\ \leq c_{v,0} \left|\int d\bm x(P_{v_0})\prod_{\ell\in T_v} g_\ell \cdot \right.\\ \left.\cdot \int P(d\bm t) \sum_{i} (-1)^{i+j}t_{ij} \partial_{j_\alpha(f_{ij}^1)}^{q_\alpha(f_{ij}^1)}\partial_{j_\alpha(f_{ij}^2)}^{q_\alpha(f_{ij}^2)}g_R^{(h_v)}(\bm x(i),\bm x(j))G_{\alpha,ij}^{h_v, T_v}(\bm t) \right| \leq \\
\leq c_{v,1} \int d\bm x(P_{v_0})\prod_{\ell\in T_v} |g_\ell | \cdot \\ \cdot\int P(d\bm t) \sum_{i} \rho_{h_v}(x(j)) ||\partial_{j_\alpha(f_{ij}^1)}^{q_\alpha(f_{ij}^1)}\partial_{j_\alpha(f_{ij}^2)}^{q_\alpha(f_{ij}^2)}g_R^{(h_v)}||_\infty ||G_{\alpha, ij}^{h_v, T_v}(\bm t)||_\infty \leq\\
\leq C_{v} ||g^{(h_v)}||^{\frac{1}{2}\left(\sum_{i=1}^{s_v}|P_{v_i}|-|P_v|-2(s_v-1)\right)}_\infty \gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\int d\bm x(P_{v_0})\left(\prod_{\ell\in T_v} \left|g_\ell \right| \right)\rho_{h_v}(x(j)).
\end{split}
\label{bound_spanning_tree_propagator_expanded}
\end{equation}
where $C_v$ depends on the size of (number of propagators belonging to) the cluster $G_v$. Since there is, in general, more than one vertex of this type, we are left with controlling $\prod_{v\in \bar V_\mathcal B(\tau)}C_v$.
\end{itemize}
The worst possible case is when $\bar V_\mathcal B(\tau)=V_\mathcal B(\tau)$, so we are forced to expand, for each $v\in V_{ \mathcal B}(\tau)$, the determinant of the $\left(\sum_{i=1}^{s_v}|P_{v_i}|-|P_{v}|\right)/2\times\left(\sum_{i=1}^{s_v}|P_{v_i}|-|P_{v}|\right)/2$ matrix $G^{h_v,T_v}$, so that $\prod_{v\in V_\mathcal B(\tau)}C_v\leq \prod_{v\in V_{{\mathcal B}}(\tau)}\left(\frac{\sum_{i=1}^{s_v}|P_{v_i}|-|P_{v}|}{2}\right)$. We want to prove that:
\begin{equation}
\prod_{v\in V_{{\mathcal B}}(\tau)}\left(\frac{\sum_{i=1}^{s_v}|P_{v_i}|-|P_{v}|}{2}\right)\leq c e^n, \hspace{3mm} \forall \tau\in\mathcal T_{h,n},
\end{equation}
where $n$ is the number of the endpoints. Let us prove the latter bound: thanks to the hierarchical structure of the set of vertices $V_{\mathcal B}(\tau)$ we just explained,
\begin{equation}
\prod_{v\in V_{\mathcal B}(\tau)}\left(\frac{\sum_{i=1}^{s_v}|P_{v_i}|-|P_{v}|}{2}\right)\leq c_1 \prod_{i=1}^k s_{v_i},\mbox{ with the constraints: } \begin{cases} 1\leq k\leq n,\\ \sum_{i=1}^k s_{v_i}=n.\end{cases}
\end{equation}
So
\begin{equation}
\begin{split}
\prod_{i=1}^k s_{v_i}=e^{\sum_{i=1}^k \log s_{v_i}}=e^{k\left(\frac{1}{k}\sum_{i=1}^k \log s_{v_i}\right)}\leq e^{k \log \left(\frac{1}{k}\sum_{i=1}^k s_{v_i}\right)}= \\ =e^{k\log \frac{n}{k}}\leq c e^n, \hspace{2mm} \forall 1\leq k \leq n.
\end{split}
\end{equation}
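As a quick numerical sanity check of the combinatorial bound above (an illustration only, not part of the proof; the function name below is ours), one can verify that the maximum of $\prod_{i=1}^k s_{v_i}$ over all splittings $\sum_{i=1}^k s_{v_i}=n$ indeed stays below $e^n$, already with $c=1$:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def max_product(n):
    # maximum of prod(s_i) over all splittings n = s_1 + ... + s_k, s_i >= 1
    best = n  # k = 1: the single part s_1 = n gives product n
    for first in range(1, n):
        best = max(best, first * max_product(n - first))
    return best

# the bound prod s_i <= c * e^n holds here already with c = 1
for n in range(1, 16):
    assert max_product(n) <= math.exp(n)
```

The true maximum is attained by splitting $n$ into parts of size $2$ and $3$ (e.g. $\max$ for $n=10$ is $3\cdot 3\cdot 2\cdot 2=36$, well below $e^{10}\approx 2.2\cdot 10^4$), consistent with the sharper estimate $e^{k\log(n/k)}\leq e^{n/e}$.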
\paragraph{Action of $\mathcal R^{(2)}_{\mathcal B}$} Remark (\ref{remark_R_B_decomposition_R1_R2}) and the fact that
\begin{itemize}
\item all the propagators of the starting tree are $g_P$,
\item $T=\cup_{v\notin V_f(\tau)}T_{v}$,
\item the action of $\mathcal R_\mathcal B^{(2)}$ keeps fixed at least one among $\bm x$ and $\bm y$,
\end{itemize}
ensure that, as soon as we change the sign of some space variable, at least one remainder propagator appears on the spanning tree $T_v$ for each $v\in V_\mathcal B(\tau)$. This can be proved iteratively: starting from the innermost subclusters ({\it i.e.} the vertices of the tree immediately preceding the leaves), we can regard the action of $\mathcal R^{(2)}_\mathcal B$ as giving three different subcases:
\begin{enumerate}
\item $\mathcal R^{(2)}_\mathcal B$ does not change the sign of the coordinate of any vertex belonging to the clusters, so nothing changes,
\item $\mathcal R^{(2)}_\mathcal B$ changes the sign of the coordinate of each of the vertices belonging to the clusters, so nothing changes by symmetry with the previous point,
\item $\mathcal R^{(2)}_\mathcal B$ changes the sign of the coordinate of a subset of the vertices belonging to the clusters, leaving at least one of the vertices unchanged: so at least one of the propagators belonging to the spanning tree becomes, by the symmetry properties of the propagators, a remainder propagator.
\end{enumerate}
Of course, in cases 1 and 2 the subclusters have to be, if necessary, ``Taylor renormalized''.\\
Iteratively, we apply these three points to the bigger clusters, with one more ingredient: there is at least one line of the spanning tree connecting the vertices of the cluster we are considering with a vertex belonging to some subcluster, so even in cases 1 and 2 some $R$-labeled propagator may appear on the spanning tree. This mechanism ensures that, if in some cluster the $P$ symmetry is broken, it is necessarily broken on the spanning tree. Besides, the fact that at least one among $\bm x, \bm y$ is kept fixed ensures that at some point of the tree this symmetry is broken.\\
Analogously to {\bf Case 1}, by using (\ref{reminder_propagators_as_periodic_and_weight}) and the Gram-Hadamard (\ref{lemma_gram_hadamard_for_G}) inequality we get the result.
\end{proof}
Now we prove Lemma (\ref{lemma_effective_gain}).
\begin{proof}[Proof of Lemma (\ref{lemma_effective_gain})]
Let us recall that we want to prove
\begin{equation*}
\begin{split}
\frac{1}{|\Lambda|\beta}\int d\bm x(P_{v_0})\prod_{v\notin V_f(\tau)}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\left(\left[\prod_{\ell \in T_v}\left|(\bm x_\ell- \bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right|\right]\right)\cdot \\ \cdot\left[\prod_{i=1}^{n}\left|(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right|\right]\left(\prod_{v\in V_\mathcal{B}(\tau)}w_h(x_v)\right) \leq \\
\leq \left( \prod_{v\in V_f(\tau)}\rho_v\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{-h_v(s_v-1)}\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{-z_v(h_v-h_{v'})}\right)
\end{split}
\end{equation*}
{\bf Observation 1:} if we bounded, for each $v\in V_\mathcal B(\tau)$, $|w_{h_v}(x_v)|\leq C_\theta$ for a suitably fixed $\theta\in (0,1)$, we would get an analogous bound, provided we replaced $z_v$ by
\begin{equation*}
\tilde z_v=\begin{cases}
2 \mbox{ if } |P_v|=2 \mbox{ and } v\in V(\tau)\setminus V_\mathcal B(\tau),\\
1 \mbox{ if } |P_v|=4 \mbox{ and } v\in V(\tau)\setminus V_\mathcal B(\tau),\\
1 \mbox{ if } |P_v|=2 \mbox{ and } v\in V_\mathcal B(\tau),\\
0 \mbox{ if } |P_v|=4 \mbox{ and } v\in V_\mathcal B(\tau).
\end{cases}
\end{equation*}
This observation is a consequence of Remark (\ref{remark_commutation_renormalization operators}) and of the dimensional gains coming from the {\it Taylor} renormalization operators we described in the previous chapter. We will get the further gains by exploiting the presence of $\left(\prod_{v\in V_\mathcal{B}(\tau)}w_{h_v}(x_v)\right)$. \\
{\bf Observation 2:} once we have reconstructed the bound of the determinants, we are left with computing an integral along the spanning tree $T=\cup_{v}T_v$ formally analogous to the one we bounded in proving Theorem (\ref{theorem_renormalized_bounds}), with the only difference that {\it some of the vertices of the spanning tree are weighted by a weight function $w_{h_v}(x_v)$}. Moreover, let us recall that the spanning tree $T=\cup_{v}T_v$ has a hierarchical structure.
In fact, we will exploit this hierarchical structure to obtain the dimensional gains we need, and we will proceed in two steps:
\begin{enumerate}
\item first of all, we show that we can arbitrarily transfer the function $w_h(\cdot)$ from a vertex belonging to a cluster at scale $h$ to any vertex belonging to the same cluster at scale $h$ (Lemma (\ref{lemma_transfer}));
\item then, we prove that in fact we can transfer the function $w_h(\cdot)$ to a vertex belonging to some cluster at lower scale $\bar h< h$ which contains the cluster at scale $h$ as a subcluster, gaining a dimensional factor $\gamma^{\theta(\bar h- h)}$ (Lemma (\ref{lemma_integral_w_g})).
\end{enumerate}
\begin{figure}
\begin{tikzpicture}
[scale=0.5, transform shape]
\draw [blue, very thick] (7.5,7.5) ellipse (7 and 3);
\node at (5,10) {\textcolor{blue}{$\bm h$}};
\node at (12.8, 10.2) {\textcolor{red}{{$\bar {\bm h}$}}};
\fill (6, 6) circle (0.1);
\fill (8, 8.5) circle (0.1);
\fill (12, 9) circle (0.1);
\fill (4,7) circle (0.1);
\fill (10, 6) circle (0.1);
\fill (14,11) circle (0.1);
\draw [very thick] (4,7) -- (8,8.5);
\draw [-,decorate, decoration={snake}] (4,7) -- (3,6);
\draw [-,decorate, decoration={snake}] (4,7) -- (3,8);
\draw [very thick, loosely dotted] (3.5, 6.7) -- (3.5,7.3);
\draw [-,decorate, decoration={snake}] (6,6) -- (5,6);
\draw [-,decorate, decoration={snake}] (6,6) -- (6,5);
\draw [very thick, loosely dotted] (5.3, 5.7) -- (5.7,5.3);
\draw [-,decorate, decoration={snake}] (8,8.5) -- (7.5,9.3);
\draw [-,decorate, decoration={snake}] (8,8.5) -- (8.8,9.3);
\draw [very thick, loosely dotted] (7.85,9) -- (8.45,9);
\draw [-,decorate, decoration={snake}] (10,6) -- (9,5);
\draw [-,decorate, decoration={snake}] (10,6) -- (11,5);
\draw [very thick, loosely dotted] (9.4,5.3) -- (10.6,5.3);
\draw [-,decorate, decoration={snake}] (12,9) -- (13,9);
\draw [-,decorate, decoration={snake}] (12,9) -- (12,8);
\draw [very thick, loosely dotted] (12.1,8.2) -- (12.9,8.8);
\draw [very thick] (8,8.5) -- (6,6);
\draw [very thick] (8,8.5) -- (6,6);
\draw [very thick] (8,8.5) -- (10,6);
\draw [very thick] (8,8.5) -- (12,9);
\draw [very thick, red] (12,9) -- (14,11);
\node [regular polygon, blue, regular polygon sides=4,
minimum size=3mm, fill] at (6, 6) {};
\end{tikzpicture}
\begin{tikzpicture}
[scale=0.5, transform shape]
\draw [blue, very thick] (7.5,7.5) ellipse (7 and 3);
\node at (5,10) {\textcolor{blue}{$\bm h$}};
\node at (12.8, 10.2) {\textcolor{red}{{$\bar {\bm h}$}}};
\fill (6, 6) circle (0.1);
\fill (8, 8.5) circle (0.1);
\fill (12, 9) circle (0.1);
\fill (4,7) circle (0.1);
\fill (10, 6) circle (0.1);
\fill (14,11) circle (0.1);
\draw [very thick] (4,7) -- (8,8.5);
\draw [-,decorate, decoration={snake}] (4,7) -- (3,6);
\draw [-,decorate, decoration={snake}] (4,7) -- (3,8);
\draw [very thick, loosely dotted] (3.5, 6.7) -- (3.5,7.3);
\draw [-,decorate, decoration={snake}] (6,6) -- (5,6);
\draw [-,decorate, decoration={snake}] (6,6) -- (6,5);
\draw [very thick, loosely dotted] (5.3, 5.7) -- (5.7,5.3);
\draw [-,decorate, decoration={snake}] (8,8.5) -- (7.5,9.3);
\draw [-,decorate, decoration={snake}] (8,8.5) -- (8.8,9.3);
\draw [very thick, loosely dotted] (7.85,9) -- (8.45,9);
\draw [-,decorate, decoration={snake}] (10,6) -- (9,5);
\draw [-,decorate, decoration={snake}] (10,6) -- (11,5);
\draw [very thick, loosely dotted] (9.4,5.3) -- (10.6,5.3);
\draw [-,decorate, decoration={snake}] (12,9) -- (13,9);
\draw [-,decorate, decoration={snake}] (12,9) -- (12,8);
\draw [very thick, loosely dotted] (12.1,8.2) -- (12.9,8.8);
\draw [very thick] (8,8.5) -- (6,6);
\draw [very thick] (8,8.5) -- (6,6);
\draw [very thick] (8,8.5) -- (10,6);
\draw [very thick] (8,8.5) -- (12,9);
\draw [red, very thick] (12,9) -- (14,11);
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (12, 9) {};
\end{tikzpicture}
\caption{Graphical explanation of Lemma (\ref{lemma_transfer}): the blue square represents a weight function at scale $h$, and our goal is to {\it move it} along the spanning tree to the vertex shared with the red propagator, living at scale $\bar h\leq h$.}
\label{figure_lemma_transfer}
\end{figure}
Let $$\rho_h(\bm x,\bm y)=\gamma^h\frac{C_{\bar N}}{1+(\gamma^h|\bm x-\bm y|)^{\bar N}},\hspace{3mm} \rho^{(q_1,q_2;j_1,j_2)}_\ell=\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)}\gamma^{h_\ell}\frac{C_{\bar N}}{1+(\gamma^{h_\ell}|\bm x(f_\ell^1)-\bm x(f_\ell^2)|)^{\bar N}},$$
for some suitably fixed $\bar N$, and let $\tilde K^{(h_i)}_{{v'^*_{i}}}(\bm x_{v'^*_i})$ be the contribution obtained by replacing, in the iterative definition of the endpoints contributions $K^{(h_i)}_{{v'^*_{i}}}(\bm x_{v'^*_i})$, each of the propagators by the suitable $\rho^{(q_1,q_2;j_1,j_2)}_\ell$.
\begin{lem}
\label{lemma_transfer}
Let $v\in V_\mathcal B(\tau)$, let $\tau_v\subset \tau$ be the subtree whose first vertex is $v$, and let $\bm s\in \bm x(P_v)$. Then
\begin{equation}
\begin{split}
\int d\bm x(P_{v})\prod_{v'\notin V_f(\tau_v)}\left[\prod_{\ell \in T_{v'}}\left|(\bm x_\ell- \bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right|\right]\cdot \\ \cdot\left[\prod_{i=1}^{n}\left|(\bm x^i-\bm y^i)^{b(v'^*_i)}_{j(v'^*_i)}K^{(h_i)}_{{v'^*_{i}}}(\bm x_{v'^*_i})\right|\right]w_{h_{v}}(s) \leq \\
\leq C^{n^0_v-1} \int d\bm x(P_{v})\prod_{v'\notin V_f(\tau_v)}\left[\prod_{\ell \in T_{v'}}\left|(\bm x_\ell- \bm y_\ell)^{b(\ell)}_{j(\ell)}\rho^{(q_1,q_2;j_1,j_2)}_\ell\right|\right]\cdot \\ \cdot\left[\prod_{i=1}^{n}\left|(\bm x^i-\bm y^i)^{b(v'^*_i)}_{j(v'^*_i)} \tilde K^{(h_i)}_{{v'^*_{i}}}(\bm x_{v'^*_i})\right|\right]w_{h_v}(x)
\end{split}
\label{integral_lemma_transfer}
\end{equation}
$\forall \hspace{2mm} \bm x\in \bm x(P_v)$, where $n^0_v$ is the number of endpoints following $v$.
\end{lem}
\begin{proof}
Let us prove the Lemma by considering $w_h(s)=\varpi_h(s)$, where $s$ is the space-component of the integration point $\bm s$ (the proof for $w_h(s)=\rho_h(s)$ is conceptually the same).\\
Since $|w_h(s)|\leq C_\theta/(1+\gamma^h|s|)^\theta$ for any $\theta\in (0,1)$ and a suitable $C_\theta$, and
\begin{equation}
\frac{C_\theta}{(1+\gamma^h|s|)^{\theta}}=\frac{C_\theta}{(1+\gamma^{ h}|x|)^{\theta}}\left(\frac{1+\gamma^{ h}|x|}{1+\gamma^{h}|s|}\right)^\theta
\end{equation}
we can bound the integral on the left hand side of (\ref{integral_lemma_transfer}) by
\begin{equation*}
\begin{split}
C \int d\bm x(P_{v})\left[\prod_{\ell \in T_v}\left|(\bm x_\ell-\bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right|\right]\cdot \\ \cdot\left[\prod_{i=1}^{n}\left|(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right|\right]w_{h_v}(x)\left(\frac{1+\gamma^{ h}|x|}{1+\gamma^{h}|s|}\right)^\theta
\end{split}
\end{equation*}
so the main goal is to replace the factor within the last brackets by a constant. By definition of the spanning tree $T_v=\cup_{v'\notin V_f(\tau_v)}T_{v'}$, there exists a connected path of lines belonging to the spanning tree that connects $s$ to $x$, so we can expand $|x|^\theta$ along this path:
\begin{equation}
|x|^\theta\leq C_\theta \left(|s|^\theta+\sum_{i=0}^{m-1}|z_{i+1}-z_i|^\theta\right),
\end{equation}
where $z_0:=x$, $z_m:=s$ and $z_1,\dots,z_{m-1}$ are the real space coordinates associated with the vertices of the path. So
\begin{equation}
\left(\frac{1+\gamma^{\theta h}|x|^{\theta}}{1+\gamma^{\theta h}|s|^{\theta}}\right)\leq C_\theta\left(1 + \sum_{i=0}^{m-1}\gamma^{\theta h}|z_{i+1}-z_i|^\theta\right).
\end{equation}
By construction, for each couple of points $(z_{i+1}, z_i)$ there is a propagator $g_{P,\omega}^{(k)}(\bm z_{i+1}- \bm z_i)$ with $k\geq h$ such that
$$\int d(z_{i+1_0}-z_{i_0}) \int d(z_{i+1}-z_i)|z_{i+1}-z_i| |g^{(k)}_{P,\omega}(\bm z_{i+1}-\bm z_i)|\leq C\, \gamma^{-k} \gamma^{-k},$$
so that, in order to get the bound we are interested in, we can bound, inside the integral, $$\gamma^{\theta h}|z_{i+1}-z_i|^\theta\leq c_1 \gamma^{\theta (h-k)}\leq c_1,$$ since $h-k\leq 0$.
\end{proof}
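The elementary inequality $|x|^\theta\leq C_\theta\left(|s|^\theta+\sum_i|z_{i+1}-z_i|^\theta\right)$ used above rests on the triangle inequality along the path together with the subadditivity of $t\mapsto t^\theta$ for $\theta\in(0,1)$, for which $C_\theta=1$ already works. A quick numerical check (an illustration only, with paths on the real line):

```python
import random

random.seed(0)
theta = 0.5  # any exponent in (0, 1) works

for _ in range(1000):
    # a random path z_0 = x, z_1, ..., z_m = s on the line
    z = [random.uniform(-10.0, 10.0) for _ in range(6)]
    x, s = z[0], z[-1]
    steps = sum(abs(z[i + 1] - z[i]) ** theta for i in range(len(z) - 1))
    # |x|^theta <= |s|^theta + sum_i |z_{i+1} - z_i|^theta  (here C_theta = 1)
    assert abs(x) ** theta <= abs(s) ** theta + steps + 1e-12
```

Indeed $|x|\leq |s|+\sum_i|z_{i+1}-z_i|$ by the triangle inequality, and $(a+b)^\theta\leq a^\theta+b^\theta$ for $a,b\geq 0$ and $\theta\in(0,1)$.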
\begin{figure}
\centering
\begin{tikzpicture}
[scale=0.5, transform shape]
\draw [blue, very thick] (7.5,7.5) ellipse (7 and 3);
\node at (5,10) {\textcolor{blue}{$\bm h$}};
\node at (12.8, 10.2) {\textcolor{red}{{$\bar {\bm h}$}}};
\node at (13.7,9.9) {\textcolor{red}{$\gamma^{\theta(h-\bar h)}$}};
\fill (6, 6) circle (0.1);
\fill (8, 8.5) circle (0.1);
\fill (12, 9) circle (0.1);
\fill (4,7) circle (0.1);
\fill (10, 6) circle (0.1);
\fill (14,11) circle (0.1);
\draw [very thick] (4,7) -- (8,8.5);
\draw [-,decorate, decoration={snake}] (4,7) -- (3,6);
\draw [-,decorate, decoration={snake}] (4,7) -- (3,8);
\draw [very thick, loosely dotted] (3.5, 6.7) -- (3.5,7.3);
\draw [-,decorate, decoration={snake}] (6,6) -- (5,6);
\draw [-,decorate, decoration={snake}] (6,6) -- (6,5);
\draw [very thick, loosely dotted] (5.3, 5.7) -- (5.7,5.3);
\draw [-,decorate, decoration={snake}] (8,8.5) -- (7.5,9.3);
\draw [-,decorate, decoration={snake}] (8,8.5) -- (8.8,9.3);
\draw [very thick, loosely dotted] (7.85,9) -- (8.45,9);
\draw [-,decorate, decoration={snake}] (10,6) -- (9,5);
\draw [-,decorate, decoration={snake}] (10,6) -- (11,5);
\draw [very thick, loosely dotted] (9.4,5.3) -- (10.6,5.3);
\draw [-,decorate, decoration={snake}] (12,9) -- (13,9);
\draw [-,decorate, decoration={snake}] (12,9) -- (12,8);
\draw [very thick, loosely dotted] (12.1,8.2) -- (12.9,8.8);
\draw [very thick] (8,8.5) -- (6,6);
\draw [very thick] (8,8.5) -- (6,6);
\draw [very thick] (8,8.5) -- (10,6);
\draw [very thick] (8,8.5) -- (12,9);
\draw [red, very thick] (12,9) -- (14,11);
\node [regular polygon,red, regular polygon sides=4,
minimum size=3mm, fill] at (14, 11) {};
\end{tikzpicture}
\caption{This figure should be read together with Figure (\ref{figure_lemma_transfer}): by integrating the blue square together with the red propagator of Figure (\ref{figure_lemma_transfer}), we can transfer the weight function to the red vertex outside the cluster and get a dimensional gain: this is the content of Lemma (\ref{lemma_integral_w_g}).}
\end{figure}
\begin{lem}
\label{lemma_integral_w_g}
Let $w_h(x)\in\{\rho_h(x),\varpi_h(x)\}$ and $\bar h< h$. Then
\begin{equation}
\left| \int d\bm y w_h(y)g^{(\bar h)}_{P,\omega}(\bm y-\bm x)\right|\leq C_\theta \gamma^{-\bar h} \gamma^{\alpha(\bar h- h)}w_{\bar h}(x),\mbox{ where } \alpha=\begin{cases}
1 \mbox{ if } w_h(\cdot)=\rho_h(\cdot),\\
\theta \mbox{ if } w_h(\cdot)=\varpi_h(\cdot).
\end{cases}
\end{equation}
\end{lem}
We postpone the detailed proof of this Lemma to Appendix (\ref{appendix_proof_lemma_effective_gain}) because, even though simple, it is long and would make the present proof harder to follow.
\begin{rem}
\label{remark_gain_renormalization_reminder}
Lemma (\ref{lemma_transfer}) tells us that, if there is some weight function at scale $h$, we can associate it with the vertex shared by the kernel at scale $h$ and some propagator at scale $\bar h \leq h$. It is then natural to use Lemma (\ref{lemma_integral_w_g}), which tells us that by integrating a propagator $g^{(\bar h)}_{P,\omega}$ ``against'' a weight function living at some higher scale $h>\bar h$ (in particular, the one ``coming'' from the kernel $\mathcal W^{(h)}$), we {\it improve the usual bound} $\gamma^{-\bar h}$ by a factor $\gamma^{\theta(\bar h- h)} w_{\bar h}(\cdot)$, where in particular $\sup_x|w_{\bar h}(x)|\leq C$:
\begin{itemize}
\item we can ``associate'' the factor $\gamma^{\theta(\bar h-h)}$ to the cluster at scale $h$ the weight function came from: this means that we can extract, from the presence of a weight function, a scale gain in RG language, thanks to which the marginal terms become irrelevant and the relevant ones become marginal, since $0< \theta< 1$,
\item moreover, we transferred the weight function to scale $\bar h$ and, if needed, we can further transfer $w_{\bar h}(\cdot)$ to smaller scales, getting some scale gain that iteratively improves the power counting.
\end{itemize}
\end{rem}
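The reason a strictly positive exponent $\theta$ suffices is that the gains $\gamma^{-\theta d}$ are summable over the scale differences $d=h_v-h_{v'}\geq 1$: $\sum_{d\geq 1}\gamma^{-\theta d}=\gamma^{-\theta}/(1-\gamma^{-\theta})<\infty$. A minimal numerical check (an illustration only, with the illustrative values $\gamma=2$, $\theta=1/2$):

```python
gamma, theta = 2.0, 0.5  # illustrative values: gamma > 1, theta in (0, 1)
r = gamma ** (-theta)

# partial sum of the scale gains gamma^(-theta * d) over scale differences d >= 1
partial = sum(r ** d for d in range(1, 200))
closed = r / (1 - r)  # closed form of the geometric series

assert abs(partial - closed) < 1e-12
```

This is the standard mechanism by which a factor $\gamma^{-z_v(h_v-h_{v'})}$ with $z_v>0$ makes the sum over the scale labels of the tree convergent.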
We are now left with using these technical Lemmata to prove the bound of Lemma (\ref{lemma_effective_gain}): the core of the proof consists of a precise integration prescription that systematically uses Lemmata (\ref{lemma_transfer}) and (\ref{lemma_integral_w_g}) to iteratively renormalize the kernels.
\subparagraph{Integration procedure}
\begin{figure}
\centering
\begin{tikzpicture}
[very thick, scale=0.8, transform shape]
\draw [thick, green] (13,9) -- (13,10);
\draw (0,7) -- ++ (2,0) -- ++ (2,4) ++ (-2,-4) -- ++ (2, 0) ++ (-2,0) -- ++ (2, -4);
\fill (0,7) circle (0.1);
\fill (2,7) circle (0.1);
\fill (4,11) circle (0.1);
\fill (4,7) circle (0.1);
\fill (4,3) circle (0.1);
\draw (4,11) -- ++ (2,2) -- ++ (2,1) -- ++ (1,0.5) ++ (-1,-0.5) -- ++ (1,-0.5);
\draw (4,11) ++ (2,2) -- ++ (2,-1) -- ++ (1,0.5) ++ (-1,-0.5) -- ++ (1,-0.5) ++ (-1, 0.5) -- ++ (1,0);
\draw (4,11) -- ++ (2,-1) -- ++ (2,1) ++ (-2,-1) -- ++ (2,0) -- ++ (1,0.3) ++ (-1,-0.3) -- ++ (1,0) ++ (-1,0) -- ++ (1,-0.3) ++ (-3, 0.3) -- ++ (2,-1) -- ++ (1,0.3) ++ (-1,-0.3) -- ++ (1,0) ++ (-1,0) -- ++ (1,-0.3);
\draw (4,7) -- ++ (2,1) ++ (-2,-1) -- ++ (2,-1) -- ++ (2,0.5) ++ (-2,-0.5) -- ++ (2,-0.5);
\draw (4,3) -- ++ (2,1) ++ (-2,-1) -- ++ (2,-1) -- ++ (2,0) -- ++ (1,0.3) ++ (-1,-0.3) -- ++ (1,0) ++ (-1,0) -- ++ (1,-0.3) ++ (-3, 0.3) -- ++ (2,-1) -- ++ (1,0.3) ++ (-1,-0.3) -- ++ (1,0) ++ (-1,0) -- ++ (1,-0.3);
\draw (6,2) -- ++ (2,1);
\foreach \i in {2,4,6,8,10,13} {
\fill (6,\i) circle (0.1);
}
\foreach \i in {1,2,3,5.5,6.5,9,10,11,12,14}{
\fill (8,\i) circle (0.1);
}
\foreach \i in {0.7,1,1.3,1.7,2,2.3,8.7,9,9.3,9.7,10,10.3,11.5,12,12.5,13.5,14.5}{
\fill (9,\i) circle (0.1);
}
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (8, 3) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (9, 1) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (6, 8) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (8, 6.5) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (8, 10) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (9, 12) {};\node [regular polygon,blue, regular polygon sides=4,
minimum size=3mm, fill] at (9, 13.5) {};
\foreach \i in {0.7,1,1.3,1.7,2,2.3,3,4,5.5,6.5,8,8.7,9,9.3,9.7,10,10.3,11,11.5,12,12.5,13.5,14.5}
{
\fill (13, \i) circle (0.1);
}
\foreach \i in {1,2,9,10}{
\draw (13,\i) ellipse (0.3 and 0.45);
}
\draw (13,6) ellipse (0.3 and 0.65);
\draw (13,12) ellipse (0.3 and 0.65);
\draw (13,14) ellipse (0.3 and 0.65);
\draw (13,14) ellipse (0.3 and 0.65);
\draw (13,13) ellipse (0.5 and 1.75);
\draw (13,9.8) ellipse (0.45 and 1.35);
\draw (13,6.8) ellipse (0.45 and 1.55);
\draw (13,1.5) ellipse (0.45 and 1);
\draw (13,1.8) ellipse (0.6 and 1.35);
\draw (13,2.3) ellipse (0.7 and 2);
\draw (13, 11.7) ellipse (0.8 and 3.3);
\draw (13, 7.5) ellipse (2 and 8);
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 3) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 1) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 8) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 6.5) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 10) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 12) {};\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 13.5) {};
\draw [thick, green] (13,13.5) -- ++ (0,1);
\draw [thick, green] (13,12.5) -- ++ (0,-0.5) -- ++ (0,-0.5);
\draw [thick, green] (13,11) -- ++ (0,-0.7) ++ (0,-0.3) -- ++ (0,-0.3) -- ++ (0,-0.4) ++ (0,-0.3) -- ++ (0,-0.3) -- ++ (0,-0.7) -- ++ (0,-1.5) -- ++ (0,-1) ++ (0,-1.5) -- ++ (0,-1) -- ++ (0,-0.7) -- ++ (0,-0.3) -- ++ (0,-0.3) ++ (0,-0.4) -- ++ (0,-0.3) -- ++ (0,-0.3);
\draw [green ](13,1.3) to [out =45, in = -45 ] (13, 2);
\draw [green ](13,14.5) to [out =-45, in = 45, looseness=0.5 ] (13, 11.5);
\draw [green ](13,12.5) to [out =225, in = -225, looseness=0.5 ] (13, 10.3);
\draw [green, thick] (13,10.3) -- ++ (0,-0.3);
\draw [green ](13,9) to [out =225, in = -225, looseness=1 ] (13, 4);
\draw (13, 14.5) -- (11,15);
\draw (13, 12.5) -- (15,13);
\draw (13, 9.03) -- (10,10.5);
\draw (13, 2) -- (15,1);
\node at (8,14.5) {$\mathcal R_\mathcal B$};
\node at (8,12.5) {$\mathcal R_\mathcal B$};
\node at (6,10.5) {$\mathcal R_\mathcal B$};
\node at (6,6.5) {$\mathcal R_\mathcal B$};
\node at (8,1.5) {$\mathcal R_\mathcal B$};
\node at (0,7.2) {$r$};
\node at (1.85,7.2) {$v_0$};
\end{tikzpicture}
\caption{Example of a renormalized tree (left) and its respective cluster structure (right). Only the $\mathcal R_\mathcal B$ operators are explicitly written, and the blue squares are the weight functions $w_h(\cdot)$. The union of the green lines on the right represents the {\it spanning tree}, while the four black lines are the four external legs. In Figure (\ref{figure_lemma_integration_prescription_step_two}) we describe the first step of the integration.}
\label{figure_integration_prescription_step_one}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
[very thick, scale=1, transform shape]
\foreach \i in {0.7,1,1.3,1.7,2,2.3,3,4,5.5,6.5,8,8.7,9,9.3,9.7,10,10.3,11,11.5,12,12.5,13.5,14.5}
{
\fill (13, \i) circle (0.1);
}
\foreach \i in {1,2,9,10}{
\draw (13,\i) ellipse (0.3 and 0.45);
}
\draw (13,6) ellipse (0.3 and 0.65);
\draw (13,12) ellipse (0.3 and 0.65);
\draw (13,14) ellipse (0.3 and 0.65);
\draw (13,13) ellipse (0.5 and 1.75);
\draw (13,9.8) ellipse (0.45 and 1.35);
\draw (13,6.8) ellipse (0.45 and 1.55);
\draw (13,1.5) ellipse (0.45 and 1);
\draw (13,1.8) ellipse (0.6 and 1.35);
\draw (13,2.3) ellipse (0.7 and 2);
\draw (13, 11.7) ellipse (0.8 and 3.3);
\draw (13, 7.5) ellipse (2 and 8);
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 3) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 1.3) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 8) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 6.5) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 9.7) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 12.5) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 14.5) {};
\draw [very thick, magenta] (13, 13.5) -- (13,14.5);
\draw [very thick, green] (13, 12) -- ++ (0,-0.5);
\draw [very thick, green] (13, 9.3) -- ++ (0,-0.3);
\draw [very thick, magenta] (13,12.5) -- ++ (0,-0.5);
\draw [very thick, magenta] (13,10) -- ++ (0,-0.3);
\draw [very thick, magenta] (13,1) -- ++ (0,0.3);
\draw [very thick, green] (13,11) -- ++ (0,-0.7) ++ (0,-0.3) ++ (0,-0.3) -- ++ (0,-0.4) ++ (0,-0.3) -- ++ (0,-0.3) -- ++ (0,-0.7) -- ++ (0,-1.5) -- ++ (0,-1) ++ (0,-1.5) -- ++ (0,-1) -- ++ (0,-0.7) -- ++ (0,-0.3) -- ++ (0,-0.3) ++ (0,-0.4) ++ (0,-0.3) -- ++ (0,-0.3);
\draw [green ](13,1.3) to [out =45, in = -45 ] (13, 2);
\draw [green ](13,14.5) to [out =-45, in = 45, looseness=0.5 ] (13, 11.5);
\draw [green ](13,12.5) to [out =225, in = -225, looseness=0.5 ] (13, 10.3);
\draw [green, thick] (13,10.3) -- ++ (0,-0.3);
\draw [green ](13,9) to [out =225, in = -225, looseness=1 ] (13, 4);
\draw (13, 14.5) -- (11,15);
\draw (13, 12.5) -- (15,13);
\draw [yellow] (13, 9.03) -- (10,10.5);
\draw (13, 2) -- (15,1);
\end{tikzpicture}
\centering
\begin{tikzpicture}
[very thick,scale=1, transform shape]
\foreach \i in {0.7,1,1.3,1.7,2,2.3,3,4,5.5,6.5,8,8.7,9,9.3,9.7,10,10.3,11,11.5,12,12.5,13.5,14.5}
{
\fill (13, \i) circle (0.1);
}
\foreach \i in {1,2,9,10}{
\draw (13,\i) ellipse (0.3 and 0.45);
}
\draw [->, red] (13,6.5) -- (13,8);
\draw [->, red] (13,8) -- (13,8.7);
\draw [->, red] (13,9.7) -- (13,9.3);
\draw (13,6) ellipse (0.3 and 0.65);
\draw (13,12) ellipse (0.3 and 0.65);
\draw (13,14) ellipse (0.3 and 0.65);
\draw (13,13) ellipse (0.5 and 1.75);
\draw (13,9.8) ellipse (0.45 and 1.35);
\draw (13,6.8) ellipse (0.45 and 1.55);
\draw (13,1.5) ellipse (0.45 and 1);
\draw (13,1.8) ellipse (0.6 and 1.35);
\draw (13,2.3) ellipse (0.7 and 2);
\draw (13, 11.7) ellipse (0.8 and 3.3);
\draw (13, 7.5) ellipse (2 and 8);
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 3) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 1.3) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 8) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 6.5) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 9.7) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 12.5) {};
\node [regular polygon,blue, regular polygon sides=4,
minimum size=1mm, fill] at (13, 14.5) {};
\draw [very thick, magenta] (13, 13.5) -- (13,14.5);
\draw [very thick, green] (13, 12) -- ++ (0,-0.5);
\draw [very thick, green] (13, 9.3) -- ++ (0,-0.3);
\draw [very thick, magenta] (13,12.5) -- ++ (0,-0.5);
\draw [very thick, magenta] (13,10) -- ++ (0,-0.3);
\draw [very thick, magenta] (13,1) -- ++ (0,0.3);
\draw [very thick, green] (13,11) -- ++ (0,-0.7) ++ (0,-0.3) ++ (0,-0.3) -- ++ (0,-0.4) ++ (0,-0.3) -- ++ (0,-0.3) ++ (0,-0.7) ++ (0,-1.5) -- ++ (0,-1) ++ (0,-1.5) ++ (0,-1) -- ++ (0,-0.7) -- ++ (0,-0.3) -- ++ (0,-0.3) ++ (0,-0.4) ++ (0,-0.3) -- ++ (0,-0.3);
\draw [red, -> ](13,3) -- (13, 4);
\draw [red, -> ](13,1.3) to [out =45, in = -45 ] (13, 2);
\draw [red, -> ](13,14.5) to [out =-45, in = 45, looseness=0.5 ] (13, 11.5);
\draw [red, -> ](13,12.5) to [out =225, in = -225, looseness=0.5 ] (13, 10.3);
\draw [green, thick] (13,10.3) -- ++ (0,-0.3);
\draw [green ](13,9) to [out =225, in = -225, looseness=1 ] (13, 4);
\draw (13, 14.5) -- (11,15);
\draw (13, 12.5) -- (15,13);
\draw [yellow] (13, 9.03) -- (10,10.5);
\draw (13, 2) -- (15,1);
\end{tikzpicture}
\caption{Step one of the integration procedure. The starting point is the cluster structure of figure (\ref{figure_integration_prescription_step_one}). First of all we fixed one of the external legs, the yellow one, as the root. Using Lemma (\ref{lemma_transfer}), we moved the weight functions, {\it i.e.} the blue squares, along the spanning tree (along the lines that were green in Figure (\ref{figure_integration_prescription_step_one}) and that are {\it magenta} in this Figure), in order to use Lemma (\ref{lemma_effective_gain}) getting the gain and going toward the root of the spanning tree. The first step of the integration is done along the red arrows (on the right), "{\it living}" at a higher scale with respect to the blue dots they are attached to.}
\label{figure_lemma_integration_prescription_step_two}
\end{figure}
At this point we can describe the integration procedure to bound:
\begin{equation*}
\begin{split}
\frac{1}{|\Lambda|\beta}\left|\int d\bm x(P_{v_0})\prod_{v\notin V_f(\tau)}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\left(\left[\prod_{\ell \in T_v}(\bm x_\ell-\bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right]\right)\cdot \right.\\\left. \cdot\left[\prod_{i=1}^{n}(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i}))\right]\left(\prod_{v\in V_\mathcal{B}(\tau)}w_{h_v}(x_v)\right)\right|
\end{split}
\end{equation*}
{\bf Step 1: re-arranging the weight functions} Using Lemma (\ref{lemma_transfer}), we can move, for each $v\in V_\mathcal B(\tau)$, the weight function $w_{h_v}(\cdot)$ along the spanning tree, re-arranging the weight functions in such a way that
\begin{itemize}
\item all the weight functions decorate vertices which are shared by propagators at different scales (in order to apply the result of Lemma (\ref{lemma_integral_w_g})),
\item if we imagine erasing the propagator at the higher scale, the weighted vertex is still connected to the root of the rooted spanning tree by a subtree including the propagator at the lower scale (of course this choice is still somewhat arbitrary, because in general there is more than one vertex of this kind; in particular there could be vertices having the properties we require while being attached to propagators at different scales: we comment on this in the remark at the end of this proof).
\end{itemize}
{\bf Step 2: integration order}
\begin{itemize}
\item we can arbitrarily choose an external line of the cluster $G_{v_0}$, and consider it as the {\it root} of the tree, giving a natural order relation to the tree;
\item moreover, we use the {\it quantifier "for each $\theta\in (0,1)$"} appearing in the hypothesis on the weight functions $\varpi_{h}(\cdot)$: considering the {\it rooted spanning tree} rooted in the selected external leg we just defined, we pick the {\it closest vertex}, in the sense of the tree distance, to the root (if there is more than one at the same distance, we can arbitrarily choose one of them).
\begin{itemize}
\item if this vertex is associated with $w_{h^*}(\cdot)=\rho_{h^*}(\cdot)$, for some $h^*>h$, we do nothing,
\item if this vertex is associated with $w_{h^*}(\cdot)=\varpi_{h^*}(\cdot)$, for some $h^*>h$, we {\it fix once for all}, for all the other weight functions associated with the vertices of the spanning tree, $\theta\equiv \theta'$ in the decay hypothesis, except for the selected vertex, for which we keep the quantifier {\it "for each $\theta\in (0,1)$"}.
\end{itemize}
\end{itemize}
After these manipulations, the integral we are bounding has to be read as follows:
\begin{itemize}
\item all the propagators have to be read, for the purposes of an upper bound, as $P$-type propagators,
\item each of the weight functions $w_{h_v}(\cdot)$, $v\in V_\mathcal B(\tau)$, is associated with some special vertex of the spanning tree: {\it i.e.} it is associated with the exiting point of some propagator $g^{(h_\ell)}_\ell$ at scale $h_\ell<h_v$, so that in the integration procedure we can think of all the weight functions as associated to some propagator:
\end{itemize}
\begin{equation}
|w_{h'} \ast g_\ell|
\end{equation}
where $h'>h_\ell$. In particular, $h'$ is the scale of the cluster we want to {\it renormalize}, while $h_\ell$ is the scale of the line $\ell$ exiting this cluster.\\
{\bf Iterative integration} At this point, starting from the leaves of the rooted spanning tree we start the integration procedure, observing the following prescriptions:
\begin{itemize}
\item if none of the vertices linked by $\ell\in T_v$ is {\it weighted}, we get the same bound as {\it usual} from the integration of the single line of the spanning tree,
\item if one of the two vertices linked by $\ell\in T_v$ is weighted, we use Lemma (\ref{lemma_integral_w_g}) to integrate $\int d\bm x_\ell |w_{h'}\ast g_\ell|$, both transferring the weight function to the lower scale $h_\ell$ and getting the {\it scale jump} $\gamma^{\theta(h_\ell-h')}$.
\begin{rem}
Let us stress that this procedure (see Figures (\ref{figure_integration_prescription_step_one}) and (\ref{figure_lemma_integration_prescription_step_two})) is constructed in such a way that, in using Lemma (\ref{lemma_transfer}), we never transfer the weight function along a line of the spanning tree that we already used to transfer weight functions. This fact ensures that the constants $C^{n_v^0-1}$ appearing in the bound in Lemma (\ref{lemma_transfer}) do not accumulate at all.
\end{rem}
{\bf Observation:}
\begin{itemize}
\item the scale jump $\gamma^{\theta(h_\ell-h')}$ is exactly the gain factor associated to the operators $\mathcal R_{\mathcal B}$ acting on the cluster at scale $h'$,
\item we transferred the weight function to scale $h_\ell$, and so we can again use Lemma (\ref{lemma_transfer}) to move the weight function following the rules described in Step 1. We stress that it could of course be the case that the propagator at scale $h_\ell$ we use to {\it transfer the weight function} links two vertices at scales $(h',h^*)$, so that the weight function $w_{h_\ell}(\cdot)$ would be attached to some vertex belonging to some subcluster at scale $h^*>h_{\ell}$, but it is not an issue: of course we consider this vertex as a vertex of the cluster at scale $h_\ell$, and we can still use Lemma (\ref{lemma_transfer}), as is clear from the proof of the Lemma.
\end{itemize}
\item in this way we are left with computing an integral over a {\it pruned spanning tree} having the same formal structure as the one we started from, so we can iterate the procedure with the following further prescription, since it could happen that at some scale $\bar h$ we collect several weight functions $w_{\bar h}(\cdot)$:
\begin{itemize}
\item if one of these weight functions is the one associated with the {\it special vertex}, we bound all the others by $C_{\theta'}$, keeping the weight function associated with the quantifier "{\it for each $\theta\in (0,1)$}".
\item otherwise, we arbitrarily bound all of them but one by $C_{\theta'}$, and we use Lemma (\ref{lemma_transfer}) to move the weight function following the rule of Step 1.
\end{itemize}
\item the very last non-trivial step consists of integrating the cluster $G_v$ containing the weight function $\varpi_{h_v}$ we used in order to preserve the {\it quantifier "for each $\theta\in (0,1)$"}: we perform this integral as follows, getting a {\it gain factor} $\gamma^{\theta(h_L-h_{v})}$.\\
We use the coordinate associated with the weight function $w_h(\cdot)$ as the root of a change of variables by which we associate an effective variable with each $\ell\in T_v$. We use all the propagators $g_\ell$, $\ell\in T_v$, to integrate these effective variables, {\it i.e.} we bound each line of the spanning tree by $||g_\ell^{(h_\ell)}||_\infty$, and we use the weight function to integrate the left-over variable, getting:
\begin{equation}
\frac{1}{L} \int_0^L dx \left| w_{h}(x)\right| \leq c_{|\lambda|,w} \gamma^{\theta\left(h_L-h\right)}
\end{equation}
where $h_L$ has already been defined after (\ref{1_norm_off_diagonal_DBC}), and
\begin{equation}
c_{|\lambda|,w}= \begin{cases}
|\lambda| C_\theta, &\mbox{ if } w_h(\cdot)=\varpi_h(\cdot),\\
c, &\mbox{ if } w_h(\cdot)=\rho_h(\cdot).
\end{cases}
\end{equation}
\begin{proof}
From the hypothesis it follows that, when $w_h(\cdot)=\varpi_h(\cdot)$:
\begin{equation}
\begin{split}
\frac{1}{L}\int_{\Lambda} dx \left|\varpi_{h}(x)\right| \leq \frac{1}{L} \int_\Lambda dx \frac{c |\lambda|}{1+\gamma^{\theta h}|x|^{\theta}}\leq C_\theta |\lambda| \gamma^{-\theta h}\gamma^{h_L\theta}= C_\theta |\lambda| \gamma^{\theta(h_L-h)},
\end{split}
\end{equation}
while, if $w_h(\cdot)=\rho_h(\cdot)$, for each $N=1,2,\dots$:
\begin{equation}
\begin{split}
\frac{1}{L}\int_{\Lambda} dx \rho_{h}(x)= \frac{1}{L} \int_\Lambda dx \frac{C_N}{1+\gamma^{N h}|x|^{N}}\leq C'_N \frac{1}{L}\gamma^{-h}\leq C \gamma^{h_L-h}.
\end{split}
\end{equation}
\end{proof}
Of course, we can rewrite this gain factor as
\begin{equation}
\label{scale_jump}
\gamma^{\theta(h_L-h_v)}=\gamma^{\theta(h_L-h)}\left[\gamma^{\theta(h-h_1)}\gamma^{\theta(h_1-h_2)}\cdots \gamma^{\theta(h_m-h_v)} \right]
\end{equation}
where $h<h_1<\dots <h_m< h_v$, and the factor in square brackets renormalizes all the nested subclusters $G_1\supset G_2\supset\dots\supset G_m\supset G_v$, while the left-over scale jump $\gamma^{\theta(h_L-h)}$, which is not needed in order to prove this theorem, will become crucial in proving the {\it main theorem}.
\end{itemize}
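For concreteness, the telescoping in (\ref{scale_jump}) can be checked on a toy chain of scales: with the purely illustrative values $h_L=-8$, $h=-5$, $h_1=-4$, $h_2=-3$, $h_v=-2$ (so $m=2$), one has

```latex
\begin{equation*}
\gamma^{\theta(h_L-h_v)}=\gamma^{-6\theta}=
\underbrace{\gamma^{-3\theta}}_{\gamma^{\theta(h_L-h)}}
\left[\underbrace{\gamma^{-\theta}}_{\gamma^{\theta(h-h_1)}}\,
\underbrace{\gamma^{-\theta}}_{\gamma^{\theta(h_1-h_2)}}\,
\underbrace{\gamma^{-\theta}}_{\gamma^{\theta(h_2-h_v)}}\right],
\end{equation*}
```

so that each of the nested subclusters absorbs one factor $\gamma^{-\theta}$, while the factor $\gamma^{\theta(h_L-h)}=\gamma^{-3\theta}$ is left over.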
\begin{rem}
First of all, it is worth noting that, following this procedure, having a {\it non translation invariant element} at scale $h_v$ corresponding to the vertex $v\in V(\tau)$ is enough to get a dimensional gain for all the ancestors of $v$: this fact justifies the "simplification" procedure we introduced over the hierarchical organization of the renormalization operators.\\
We already pointed out that there could be several vertices to which we could glue the weight function, attached to different propagators living at different scales, so one might think that the most convenient choice is the propagator, among those, living at the lowest scale, in order to maximize the scale jump in the gain of Lemma (\ref{lemma_integral_w_g}). In fact, all the possible choices are completely equivalent for our purposes:
\begin{itemize}
\item on the one hand, if we choose the propagator living at the lowest scale, we can rewrite the gain we get
$$\gamma^{\theta (h^{\min}_\ell-h')}=\gamma^{\theta (h^{\min}_\ell-h_1)}\gamma^{\theta (h_1-h_2)}\dots \gamma^{\theta (h_n-h')}, \mbox{ where } h_\ell^{\min}<h_1<h_2<\dots<h_n<h'$$
are the scales of all the subclusters contained in the cluster at scale $h_\ell^{\min}$, in order to get the {\it right} gain related to the renormalization operators $\mathcal R_{\mathcal B}$,
\item on the other hand, thanks to the {\it transfer of the weight function} to lower scales (Lemma (\ref{lemma_integral_w_g})), we can get the same factor step by step (or splitting the scale jump as we want).
\end{itemize}
\end{rem}
\end{proof}
\begin{corollary}
\label{theorem_bounds_kernels}
Let $\tau\in\mathcal T_{h,n}$ be a renormalized tree, $h>h_L$, and let $\mathcal W^{(h)}(\tau, P_{v_0}, \bm x(P_{v_0}))$ be the respective kernel. Suppose that, for some constant $c_1>0$ and for any $\theta\in (0,1)$, there exists a constant $C_\theta$ such that the following bounds are verified
\begin{equation}
\begin{split}
\int_{y\in\Lambda}dy \left| \varpi_{h'}(x,y)\right|\leq |\lambda| \frac{C_\theta}{(1+\gamma^{h'}|x|)^\theta}, \\
\sup_{h'>h}\left(\max\{|\nu_{h'}|,|\delta_{h'}|, |\lambda_{h'}|, |z_{h'}|\}\right)\equiv \epsilon_h, \hspace{3mm} \sup_{h'>h}\left| \frac{Z_{h'}}{Z_{h'-1}} \right|\leq e^{c_1\epsilon_h^2}
\end{split}
\end{equation}
and if there exists a constant $\bar \epsilon$, depending on $c_1$, such that $\epsilon_h\leq \bar \epsilon$, then, for another suitable constant $c_0$, uniform in $c_1$, $L$ and $\beta$, the following bounds hold
\begin{equation}
\sum_{\tau\in\mathcal T_{h,n}}\left[|n_h(\tau)|+|z_h(\tau)|+|a_h(\tau)|+|l_h(\tau)|+||\varpi_h(\tau)||_{\infty,1}\right]\leq \left(c_0\epsilon_h\right)^n,
\end{equation}
\begin{equation}
\sum_{\tau\in\mathcal T_{h,n}}|e_{h+1}|\leq \gamma^{2h}(c_0\epsilon_h)^n
\end{equation}
and
\begin{equation}
\begin{split}
\frac{1}{|\Lambda|\beta}\sum_{\tau\in\mathcal T_{hn}}\int d\bm x(P_{v_0})\left|\mathcal R\mathcal W^{(h)}(\tau,P_{v_0},\bm x(P_{v_0}))\right|\leq \gamma^{-h\left(D(P_{v_0})+z_{v_0}\right)}(c_0\epsilon_h)^n
\end{split}
\end{equation}
where for each $\theta \in (0,1)$,
\begin{equation*}
z_{v_0}=\begin{cases}
1+\theta, &\mbox{ if $G_{v_0}$ has two external lines},\\
\theta, &\mbox{ if $G_{v_0}$ has four external lines}.
\end{cases}
\end{equation*}
\end{corollary}
\begin{proof}
Exploiting the dimensional gains coming from the operator $\mathcal R$ acting as described in equation (\ref{renormalized_kernels_explicit_expression}), we can repeat the proof of Theorem (\ref{theorem_bound_of_kernels}) by replacing
\begin{equation}
\prod_{v\notin V_f(\tau)}\gamma^{-D(v)(h_v-h_{v'})}\rightarrow \prod_{v\notin V_f(\tau)}\left(\frac{Z_{h_v}}{Z_{h_v-1}}\right)^{|P_v|/2}\gamma^{-[D(v)+z_v](h_v-h_{v'})}
\end{equation}
By the assumption $\sup_{h'>h}Z_{h'}/Z_{h'-1}\leq e^{c_1\epsilon_h^2}$, taking $c_1\epsilon_h^2\leq 1/16$, one gets that
\begin{equation}
\prod_{v\notin V_f(\tau)}(Z_{h_v}/Z_{h_v-1})^{|P_v|/2}\gamma^{-[-2+|P_v|/2+z_v](h_v-h_{v'})}\leq \left(\prod_{\bar v }\gamma^{-\frac{1}{40}(h_{\bar v}-h_{\bar v'})}\right)\left(\prod_{v\notin V_f(\tau)}\gamma^{-|P_v|/40}\right)
\label{bound_product_z_h/z_h-1_gamma_DBC}
\end{equation}
where $\bar v$ are the non-trivial vertices, and $\bar v'$ is the non-trivial vertex immediately preceding $\bar v$. Thanks to the product in the first bracket, we bound the sum over the scale labels by $(const.)^n$. The second factor can be used to bound the remaining sums, using
\begin{equation}
\sum_{\tau\in\mathcal T_{h,n}}\sum_{P_v}\sum_T\prod_{v\notin V_f(\tau)}\frac{1}{s_v!}\gamma^{-|P_v|/40}\leq C^n,
\end{equation}
we refer to \cite{benfatto2001renormalization} for details.
\end{proof}
\subsection{Proof of the main theorem}
Let us recall the main result:
\begin{thm}
\label{theorem_main_theorem_introduction}
There exists a radius $\lambda_0>0$ such that, for any $|\lambda|\leq \lambda_0$, it is possible to fix the {\it boundary defect} $\pi(x,y)$ and its strength $\varpi=\varpi(\lambda)$ in such a way that, for any $\theta\in (0,1)$, there exists a constant $C_\theta$ such that
\begin{equation}
\sum_{y\in\Lambda} \left|\pi(x,y)\right| \leq C_\theta \left(\frac{1}{\left(1+|x|\right)^\theta}+\frac{1}{\left(1+|L-x|\right)^\theta}\right),
\end{equation}
in such a way that $f_\Lambda$ admits a convergent expansion in $\lambda$ and $\varpi$.\\
Moreover
\begin{equation}
\left| f_\Lambda-f_\infty \right|\leq |\lambda|\frac{C_\theta}{L^\theta}.
\end{equation}
\end{thm}
First of all, let us recall that the {\it diagrams} that contribute to the {\it specific free energy} are the so called {\it vacuum diagrams}, {\it i.e.} the diagrams such that $\left| P_{v_0}\right|=0$.\\
As we have done in the case of the kernels $W^{(h)}_2$ and $W_4^{(h)}$, we can split the {\it free fermi energy} into the {\it bulk term} and a {\it remainder}:
\begin{equation}
f_{\Lambda,\beta}=f^{(P)}_{\Lambda,\beta}+f^{(R)}_{\Lambda,\beta},
\end{equation}
where, by construction, all the diagrams contributing to $f^{(R)}_{\Lambda,\beta}$ contain at least either a remainder propagator or a non-local endpoint.\\
In order to explicitly control the boundary corrections, we define
\begin{equation}
f_{\Lambda}=\lim_{\beta\nearrow \infty}f_{\Lambda,\beta},\hspace{3mm} f_{\infty}=\lim_{|\Lambda|\nearrow \infty} f_{\Lambda}.
\end{equation}
and we study the difference:
\begin{equation}
|f_\infty-f_{\Lambda}|,
\end{equation}
knowing that $|f_\infty-f^{(P)}_{\Lambda}|\leq \frac{C}{L^2}$, which can be proved by proceeding as in \cite{giuliani2013universal}.\\
Using {\it exactly the same technique} as in proving Theorem (\ref{theorem_bounds_kernels}) with the constraint $\left| P_{v_0}\right|=0$, and keeping track of the {\it scale jump} $\gamma^{\theta(h_L-h)}$ (\ref{scale_jump}) we already commented on in the proof of Theorem (\ref{theorem_renormalized_bounds_DBC}), we get, for each $\theta\in (0,1)$,
\begin{equation}
|f_\infty-f_{\Lambda}|\leq |\lambda|c_\theta\sum_{h\leq 1} \gamma^{2h} \gamma^{\theta(h_L-h)}\leq |\lambda|\frac{C_\theta}{L^{\theta}}.
\end{equation}
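The scaling in this last bound can be checked numerically. The sketch below is illustrative only: we take $\gamma=2$, $\theta=1/2$, identify $\gamma^{h_L}=1/L$, and truncate the sum at a hypothetical lowest scale; it verifies that the geometric sum indeed behaves as $L^{-\theta}$.

```python
import math

# Toy check that S(L) = sum_{h <= 1} gamma^{2h} * gamma^{theta*(h_L - h)},
# with gamma^{h_L} = 1/L, scales like L^{-theta}; gamma, theta and the
# truncation h_min are illustrative choices, not taken from the model.
gamma, theta = 2.0, 0.5

def S(L, h_min=-200):
    h_L = -math.log(L, gamma)  # scale at which gamma^{h_L} = 1/L
    return sum(gamma**(2 * h) * gamma**(theta * (h_L - h)) for h in range(h_min, 2))

# S(L) * L^theta should be essentially independent of L
vals = [S(L) * L**theta for L in (2**4, 2**8, 2**12)]
print(vals)
```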
The boundary defect $\pi(x,y)$ and its strength $\varpi$ will be fixed in the next subsection (\ref{subsection_flow_rcc_DBC}).
\subsection{Flow of running coupling constants and functions}
\label{subsection_flow_rcc_DBC}
From the iterative procedure we set up, we can write the flow equations for the quantities $\vec v_h(x)$ we defined in (\ref{running_coupling_functions_DBC}):
\begin{equation}
\label{flows_running_coupling_DBC}
\begin{split}
\nu_{h-1}&=\gamma \nu_h+\beta_\nu^h(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y),\\
\lambda_{h-1}&=\lambda_h+\beta_\lambda^h(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y),\\
\delta_{h-1}&=\delta_h+\beta_\delta^h(\vec v(x,y),\dots,\vec v_0(x,y);x,y),\\
\frac{Z_{h-1}}{Z_h}&=1+\beta^h_z(\vec v_h(x,y),\dots, \vec v_0(x,y);x,y),\\
\varpi_{h-1}(x,y)&=\gamma \varpi_{h}(x,y)+\beta^h_\varpi(\vec v_h(x,y),\dots, \vec v_0(x,y);x,y).
\end{split}
\end{equation}
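The factor $\gamma>1$ in the first of (\ref{flows_running_coupling_DBC}) makes $\nu$ a relevant coupling, which is why the counterterm has to be fine-tuned. A minimal numerical caricature illustrates the mechanism; here the beta function is replaced by a hypothetical constant $b$, and $\gamma$, $b$ are toy values (the actual $\beta^h_\nu$ is of course neither constant nor scale-independent).

```python
# Toy version of the flow nu_{h-1} = gamma * nu_h + b: since gamma > 1,
# a generic initial datum nu_1 runs away exponentially in the number of RG
# steps, while the fine-tuned initial datum stays on the fixed point.
gamma, b = 2.0, 0.1
nu_star = -b / (gamma - 1.0)  # fixed point of nu -> gamma * nu + b

def run_flow(nu_1, n_steps=40):
    nu = nu_1
    for _ in range(n_steps):  # h = 1, 0, -1, ... going down in scale
        nu = gamma * nu + b
    return nu

tuned = run_flow(nu_star)            # stays at the fixed point
detuned = run_flow(nu_star + 1e-6)   # mistuning amplified by gamma^40
print(tuned, detuned)
```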
The convergence of the multiscale expansion has been proved under the hypothesis that the running coupling constants and functions are small enough. Now, we have to show that, choosing $\lambda$ small enough and fixing once and for all the counterterm $\nu$ (which is an analytic function of $\lambda$, as we have already seen in subsection (\ref{subsection_flow_of_running_coupling_constants_PBC})) and the boundary defect $\pi(x,y)$ as functions of $\lambda$, such hypotheses are verified.\\
The strategy is to write down the Taylor expansion for the beta function (convergent as long as the hypotheses are fulfilled), truncate this Taylor expansion at the lowest non-trivial order, check whether the {\it approximate flow} still verifies the hypotheses, and finally prove that the solution of this approximate flow is stable under the addition of the higher-order Taylor corrections.\\
The idea is that the beta function of this model is asymptotically close to the beta function of the Luttinger model with an ultraviolet cut-off, so it belongs to the Luttinger liquid universality class (which we introduced in the Introduction). The main difference, as we already mentioned in the introduction, is that the reference model enjoys more symmetries than the models of the universality class, which can be used to show that the beta function $\beta_\lambda^{(h)}$, in the reference model, is asymptotically vanishing. Thanks to the {\it asymptotic closeness} of the models, the same holds for the model we are studying.\\
It is worth stressing that, by the very definition of running coupling constants and functions, we can rewrite (\ref{flows_running_coupling_DBC}) as
\begin{equation}
\begin{split}
\nu_{h-1}&=\gamma \nu_h+\beta_\nu^h(\vec v_h,\dots,\vec v_0),\\
\lambda_{h-1}&=\lambda_h+\beta_\lambda^h(\vec v_h,\dots,\vec v_0),\\
\delta_{h-1}&=\delta_h+\beta_\delta^h(\vec v,\dots,\vec v_0),\\
\frac{Z_{h-1}}{Z_h}&=1+\beta^h_z(\vec v_h,\dots, \vec v_0),\\
\varpi_{h-1}(x,y)&=\gamma \varpi_{h}(x,y)+\beta^h_\varpi(\vec v_h(x,y),\dots, \vec v_0(x,y);x,y),
\end{split}
\end{equation}
which basically means that, while the bulk constants enter the flow equation of $\varpi_h(x,y)$, $\varpi_h(x,y)$ does not enter the flow equations of the bulk running coupling constants. As a consequence, we can assume that we have already studied the bulk flow equations of the running coupling constants $(\nu_h,\lambda_h,\delta_h,Z_h)$, and study the flow equation of the running coupling function.
\paragraph{Fixing the non local counterterm}
Let us study the flow of $\varpi_{h}(x,y)$, that has already been defined as
\begin{equation}
\gamma^h\varpi_h(x,y):=\frac{Z_{h-1}}{Z_h}\int_0^\beta dy_0 \mathcal W^{(h)}(\bm x,\bm y)
\end{equation}
and let us recall the flow
\begin{equation}
\varpi_{h-1}(x,y)=\gamma \varpi_{h}(x,y)+\beta^h_\varpi(\vec v_h(x,y),\dots, \vec v_0(x,y);x,y).
\label{flow_varpi(x,y)}
\end{equation}
So, from the very last of (\ref{flows_running_coupling_DBC}), we get
\begin{equation}
\varpi_1(x,y)= -\sum_{k\leq 1}\gamma^{k-2}\beta^{(k)}_\varpi(\vec v_k(x,y),\dots, \vec v_0(x,y);x,y),
\end{equation}
so
\begin{equation}
\varpi_h(x,y)=-\sum_{k\leq h}\gamma^{k-h-1}\beta^{(k)}_\varpi(\vec v_k(x,y),\dots, \vec v_0(x,y);x,y).
\label{varpi_as_sum_over_betas}
\end{equation}
Let us recall the definition of the norm:
\begin{equation}
||\varpi_h||^{(\theta)}_{\infty,1}=\sup_{x\in \Lambda}\left(1+\gamma^{ h}|x|\right)^\theta\sum_{y\in\Lambda}|\varpi_h(x,y)|,
\end{equation}
allowing us to define the Banach space $\mathcal B$ as follows.
\begin{defn}
Let $\mathcal B$ be the set of the real sequences $\underline \varpi (x,y):=\left\{\varpi_h(x,y)\right\}_{h\leq 1}$ with norm $$|| \underline \varpi||^{(\theta)}:=\sup_{h\leq 1}||\varpi_h||^{(\theta)}_{\infty,1}.$$
Besides, let us define the closed ball $\mathcal M_{\bar \theta}=:\mathcal M\subset \mathcal B$: let us fix $\bar \theta$, and let us define
\begin{equation}
\label{closed_ball_banach_space}
\mathcal M:=\left\{\underline \varpi(x,y): \forall\hspace{1mm} \theta\leq \bar \theta, \hspace{1mm} ||\underline \varpi ||^{(\theta)}\leq |\lambda| C\right\}.
\end{equation}
\end{defn}
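To make the norm $||\cdot||^{(\theta)}_{\infty,1}$ concrete, the sketch below evaluates it on a finite lattice for a model kernel with the decay assumed throughout; the kernel, the lattice size and all numerical values are hypothetical stand-ins, not the actual $\varpi_h$.

```python
import math

# Weighted norm ||varpi_h||^{(theta)}_{inf,1}
#   = sup_x (1 + gamma^h |x|)^theta * sum_y |varpi_h(x,y)|
# evaluated on Lambda = {0, ..., L-1} for the model kernel
#   varpi_h(x,y) = lam * gamma^h * exp(-gamma^h |x - y|) / (1 + gamma^h x)^theta.
gamma, theta, lam, L = 2.0, 0.5, 0.01, 256

def norm_inf1(h):
    g = gamma**h
    best = 0.0
    for x in range(L):
        s = sum(lam * g * math.exp(-g * abs(x - y)) / (1 + g * x)**theta
                for y in range(L))
        best = max(best, (1 + g * x)**theta * s)
    return best

norms = [norm_inf1(h) for h in (-1, -3, -5)]
print(norms)  # each of order lam, uniformly in the scale h
```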
\begin{rem}
We define such a Banach space because, to fix the initial value of
\begin{equation}
\pi(x,y)=:\varpi_1(x,y),
\end{equation}
we will look for a fixed point of the flow equation (\ref{flow_varpi(x,y)}) using the Banach fixed point theorem; in particular, we are interested in the elements belonging to the closed ball $\mathcal M$, and the closedness guarantees that, starting from an initial datum inside the ball $\mathcal M$, the fixed point belongs to $\mathcal M$.
\end{rem}
Let us start with defining an operator $\bm T$ acting on $\mathcal M$ as
\begin{equation}
\left(\bm T \underline \varpi(x,y)\right)_h=-\sum_{k\leq h}\gamma^{k-h-1}\beta^{(k)}_\varpi(\vec v_h(\underline \varpi;x,y), \dots, \vec v_0(\underline \varpi;x,y);x,y).
\label{flow_varpi_as_operator}
\end{equation}
{\bf Claim} If we find a fixed point $\underline \varpi^*(x,y)$ of (\ref{flow_varpi_as_operator}), the solution will be such that $\varpi_h(x,y)$ is as {\it small} as desired.\\ \noindent
In order to find the fixed point for the operator $\bm T$:
\begin{enumerate}
\item we have to check that it leaves $\mathcal M$ invariant, {\it i.e.} that \begin{equation}
\bm T:\mathcal M\to \mathcal M,
\end{equation}
\item we have to check that $\bm T$ is a contraction in $\mathcal M$, {\it i.e.}
\begin{equation}
||\bm T \underline\varpi-\bm T\underline\varpi'||^{(\theta)}\leq C\, ||\underline \varpi - \underline \varpi'||^{(\theta)}, \hspace{3mm} \mbox{for some constant } C<1.
\end{equation}
\end{enumerate}
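Before proving the Claim, the two conditions can be visualized on a drastically simplified, finite caricature of the operator (\ref{flow_varpi_as_operator}): below, the beta function is replaced by a hypothetical scalar $\beta^{(k)}(w)=\lambda(1+w_k/2)$, which is Lipschitz in $w$ with small constant, so that the resulting toy $\bm T$ is a contraction for $\lambda$ small; the scale range is truncated at an arbitrary $h_{\min}$.

```python
# Toy Banach fixed-point iteration for an operator with the structure of T:
#   (T w)_h = - sum_{h_min <= k <= h} gamma^(k - h - 1) * beta_k(w),
# with the hypothetical beta function beta_k(w) = lam * (1 + w[k] / 2).
gamma, lam, h_min = 2.0, 0.05, -30
scales = list(range(1, h_min - 1, -1))  # h = 1, 0, ..., h_min

def T(w):
    return {h: -sum(gamma**(k - h - 1) * lam * (1 + w[k] / 2)
                    for k in scales if k <= h)
            for h in scales}

w = {h: 0.0 for h in scales}
for _ in range(60):  # iterate the contraction to its fixed point
    w = T(w)

residual = max(abs(T(w)[h] - w[h]) for h in scales)
print(residual)  # essentially zero: w is the fixed point, and |w_h| = O(lam)
```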
Let us prove the Claim.
\begin{proof} [Proof of Claim]
\begin{enumerate}
\item Let us prove that $\bm T:\mathcal M\to \mathcal M$. Let us recall that, by definition, $$\beta_{\varpi}^{(h)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)$$ is the sum of all possible Feynman graphs whose internal lines live at scale $\geq h$, such that there is at least one line living right at scale $h$ and at least one element breaking translation invariance, {\it i.e.} either $\gamma^k\varpi_k(x,y)\delta_{x_0,y_0}$ or $g^{(k)}_{R,\omega}(\bm x,\bm y)$, $k\geq h$. \\
So, first of all, let us check that, for each $\theta \leq \bar \theta$,
\begin{equation}
\sup_{x\in\Lambda}\left(1+\gamma^{\theta h}|x|^\theta\right)\sum_{y\in\Lambda}\left|\beta_{\varpi}^{(h)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)\right|\leq C |\lambda|.
\label{norm_infty1_beta_functions}
\end{equation}
As we already explained, we can assume without loss of generality that:
\begin{itemize}
\item all the remainder propagators belong to the spanning tree,
\item we transferred the {\it anchorage property} to the vertices, localizing the non-local counterterms and rewriting $g_{R,\omega}^{(h)}\to g_{P,\omega}^{(h)}\ast w_h$, and we call $n_w\geq 1$ the number of {\it weighted vertices}.
\end{itemize}
We keep track of only one {\it weight function}, bounding the contribution of the other $n_w-1$ non-local endpoints using the hypothesis: for each $\theta'\leq \bar \theta$ (resp. $N=1,2,\dots$) there exists a constant $C_{\theta'}$ (resp. $C_N$) such that
\begin{equation}
|w_h(x)|\leq \begin{cases} |\lambda|\frac{C_{\theta'}}{\left(1+\gamma^h|x|\right)^{\theta'}}\leq |\lambda| C_{\theta'}, \hspace{3mm} &\mbox{ if } w_h(\cdot) =\varpi_h(\cdot),\\
\frac{C_N}{(1+\gamma^h|x|)^N}\leq C_N, & \mbox{ if }w_h(\cdot)=\rho_h(\cdot).
\end{cases}
\end{equation}
We stress that the factor $|\lambda|$, if there are no non-local endpoints but only remainder propagators, arises from the fact that there must be at least one {\it four-external-legs} endpoint contributing to $\beta_\varpi^{(h)}$. In light of this fact, from now on we assume without loss of generality that $w_h(\cdot)=\varpi_h(\cdot)$.
So, by construction, there is a vertex of the spanning tree associated with the {\it weight function} $\varpi_h(\cdot)$. As in the proof of Theorem (\ref{theorem_renormalized_bounds}), the determinant expansion is the same as in the {\it translation invariant case}, while the novelty is the weight function $\varpi_h(\cdot)$ appearing in the integration over the spanning tree: so we are interested in bounding, for each $\theta\leq \bar \theta$:
\begin{equation*}
\begin{split}
\sup_{x\in \Lambda}(1+\gamma^h|x|)^\theta \left| \int d(\underline{ \bm z}\setminus \bm x)\prod_{v\notin V_f(\tau)}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\left(\left[\prod_{\ell \in T_v}(\bm x_\ell-\bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right]\right)\cdot \right.\\\left. \cdot\left[\prod_{i=1}^{n}(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right]\varpi_k(s)\right|\leq\\
\leq \sup_{x\in \Lambda}(1+\gamma^h|x|)^\theta \left| \int d(\underline{ \bm z}\setminus \bm x)\prod_{v\notin V_f(\tau)}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\left(\left[\prod_{\ell \in T_v}(\bm x_\ell-\bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right]\right)\cdot \right.\\\left. \cdot\left[\prod_{i=1}^{n}(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right] \frac{|\lambda|C_{\theta'}}{(1+\gamma^k|s|)^{\theta'}}\right|\leq \\
\leq \sup_{x\in \Lambda} \left| \int d(\underline{ \bm z}\setminus \bm x)\prod_{v\notin V_f(\tau)}\gamma^{h_vq_{\alpha,G^{h_v,T_v}}}\left(\left[\prod_{\ell \in T_v}(\bm x_\ell-\bm y_\ell)^{b(\ell)}_{j(\ell)}\partial^{q(f_\ell^1)}_{j(f_\ell^1)}\partial^{q(f_\ell^2)}_{j(f_\ell^2)} g^{(h_\ell)}_{\ell}\right]\right)\cdot \right.\\\left. \cdot\left[\prod_{i=1}^{n}(\bm x^i-\bm y^i)^{b(v^*_i)}_{j(v^*_i)}K^{(h_i)}_{{v^*_{i}}}(\bm x_{v^*_i})\right] \frac{|\lambda|C_{\theta'}(1+\gamma^h|x|)^\theta}{(1+\gamma^k|s|)^{\theta'}}\right|
\end{split}
\end{equation*}
The strategy is the same we used in proving Lemma (\ref{lemma_transfer}): by definition of the spanning tree $T$, there must be a connected path of lines $\ell \in T$ connecting $x$ (the non-integrated, hence anchored, point) to the point $s$: $\{(x,z_1),(z_1,z_2),\dots,(z_{i-1},s)\}\subseteq T$. So, setting $z_0:=x$ and $z_i:=s$,
\begin{equation}
|x|^\theta\leq c_{0,\theta}\left(\sum_{j=1}^i |z_{j-1}-z_j|^\theta+|s|^\theta\right),
\end{equation}
so that
\begin{equation}
\left(1+\gamma^h|x|\right)^\theta\leq c_{1,\theta}\left[ \left(1+\gamma^h|s|\right)^\theta+\sum_{j=1}^i\gamma^{\theta h} |z_{j-1}-z_j|^\theta\right]
\end{equation}
Each term $|z_{j-1}-z_j|^\theta$ of the sum is associated to a line of the spanning tree, hence to a propagator $g^{(k)}_{P,\omega}(\bm z_{j-1}-\bm z_j)$, so that, analogously to what we did in proving Lemma (\ref{lemma_transfer}), in order to get the bounds we want we can replace, inside the integrals, $|z_{j-1}-z_j|^\theta\leq \gamma^{-\theta k}$, so $$\gamma^{\theta h}|z_{j-1}-z_j|^\theta \leq \gamma^{\theta (h-k)}\leq 1,$$ being, by construction, $h\leq k$.\\
So we are left with proving that
$$\frac{\left(1+\gamma^h|s|\right)^\theta}{(1+\gamma^k|s|)^{\theta'}}\leq c, \hspace{3mm} \mbox{ for any } \theta\leq \theta'\leq \bar \theta,$$
and indeed it is:
\begin{equation}
\frac{\left(1+\gamma^h|s|\right)^\theta}{(1+\gamma^k|s|)^{\theta'}}\leq \begin{cases}
\left(1+\gamma^{h-k}\right)^\theta\leq 2,& \mbox{ if } |s|\leq \gamma^{-k},\\
\frac{\left(1+\gamma^h\gamma^{-h}\right)^\theta}{(1+\gamma^k{\gamma^{-k}})^{\theta'}}\leq 1, &\mbox{ if } \gamma^{-k}\leq |s|\leq \gamma^{-h},\\
c \frac{\gamma^{\theta h}|s|^\theta}{\gamma^{\theta' k}|s|^{\theta'}}\leq c \gamma^{\theta'(h-k)}\leq c, &\mbox{ if } \gamma^{-h}\leq |s|.
\end{cases}
\end{equation}
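As an illustrative numerical sanity check (outside the formalism, with arbitrarily chosen values of $\gamma$, of the scales $h\leq k$, and of the exponents $\theta\leq\theta'$), one can sample $|s|$ across the three regimes and confirm a uniform bound:

```python
# Illustrative check (arbitrary parameter choices) of the three-regime bound
#   (1 + gamma^h |s|)^theta / (1 + gamma^k |s|)^theta'  <=  c
# for scales h <= k and exponents theta <= theta' <= bar-theta.
gamma = 2.0
h, k = -8, -3                 # h <= k, as in the multiscale construction
theta, thetap = 0.5, 0.7      # theta <= theta'

def ratio(s):
    return (1 + gamma**h * s)**theta / (1 + gamma**k * s)**thetap

# sample |s| over many orders of magnitude, covering all three regimes:
# |s| <= gamma^{-k},  gamma^{-k} <= |s| <= gamma^{-h},  gamma^{-h} <= |s|
samples = [10.0**(j / 10.0) for j in range(-40, 81)]
sup_ratio = max(ratio(s) for s in samples)
```

For these parameters the sampled supremum stays below the constant $c=2$ appearing in the case analysis.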
Now, we use this bound to verify that $||\left(\bm T \underline \varpi(x,y)\right)_h||_{\infty,1}\leq |\lambda| c$. Indeed
\begin{equation}
\begin{split}
\sup_{x\in\Lambda}\left(1+\gamma^{\theta h}|x|^{\theta}\right)\sum_{k\leq h}\gamma^{k-h-1}\left(\sum_{y\in\Lambda}\beta_{\varpi}^{(k)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)\right)\leq \\
\leq \sum_{k\leq h}\sup_{x\in\Lambda}\left(\gamma^{k-h-1}+\gamma^{k-h-1}\gamma^{\theta h}|x|^{\theta}\right)\left(\sum_{y\in\Lambda}\beta_{\varpi}^{(k)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)\right)\leq \\
\leq \sum_{k\leq h}\Biggl(C_{\theta'} c_{2,\theta}|\lambda|\gamma^{k-h-1}+\\+\gamma^{(k-h)(1-\theta)}\sup_{x\in\Lambda}\Bigl(\sum_{y\in\Lambda}\gamma^{\theta k}|x|^{\theta}\beta_{\varpi}^{(k)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)\Bigr)\Biggr)\leq c_{3,\theta}C_{\theta'}|\lambda|,
\end{split}
\end{equation}
so $\left(\bm T \underline \varpi(x,y)\right)\in \mathcal M$ if we choose $C= c_{3,\theta}C_{\theta'}$.
\item Let us check that $\bm T$ is a contraction, {\it i.e.} $||\bm T\underline \varpi- \bm T\underline \varpi'||\leq C|\lambda|\, ||\underline \varpi-\underline \varpi'||$, with $C|\lambda|<1$ for $\lambda$ small enough. First of all, let us remark that, by the very definition of the running coupling functions (\ref{vec_v_h(x)}) $$\vec v_h(x,y)=\left(\nu_h,\delta_h,\lambda_h,\varpi_h(x,y)\right),$$
the running coupling constants $(\nu_h,\delta_h,\lambda_h)$ do not depend on the running coupling functions $\varpi_{h}(x)$. So we can split
\begin{equation}
\beta_{\varpi}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)=\beta_{\varpi=0}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)+\bar\beta_{\varpi}^{(k)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)
\end{equation}
where $\bar \beta_{\varpi}^{(k)}$ corresponds, in the Feynman graphs picture, to the contribution of all the diagrams containing at least a $\varpi_{h}$ term, and $\beta_{\varpi=0}^{(k)}$ is the remainder. So, if we use the notation
$$\vec v_h(x,y)=\left(\nu_h,\delta_h,\lambda_h,\varpi_h(x,y)\right), \hspace{5mm} \vec v'_h(x,y)=\left(\nu_h,\delta_h,\lambda_h,\varpi'_h(x,y)\right),$$
the difference between the beta functions depending on different {\it running coupling functions} depends only on the diagrams containing at least a $\varpi_{h}$ term:
\begin{equation}
\begin{split}
\beta_{\varpi}^{(k)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)-\beta_{\varpi}^{(k)}(\vec v'_h(x,y),\dots,\vec v'_0(x,y);x,y)=\\=\bar\beta_{\varpi}^{(k)}(\vec v_h(x,y),\dots,\vec v_0(x,y);x,y)-\bar\beta_{\varpi}^{(k)}(\vec v'_h(x,y),\dots,\vec v'_0(x,y);x,y)
\end{split}
\end{equation}
So, using the notation $\vec v_k(x,y)=:\vec v_k$
\begin{equation}
\begin{split}
||\left(\bm T\underline \varpi\right)_h- \left(\bm T\underline \varpi'\right)_h||^{(\theta)}\leq \\
\leq \sum_{k\leq h}\sup_{x\in\Lambda}(1+\gamma^{\theta h}|x|^\theta)\gamma^{k-h-1}\sum_{y\in\Lambda}\left|\beta_{\varpi}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)-\beta_{\varpi}^{(k)}(\vec v'_h,\dots,\vec v'_0;x,y)\right|=\\
=\sum_{k\leq h}\gamma^{k-h-1} \sum_{y\in\Lambda}\left| \bar \beta_{\varpi}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)-\bar \beta_{\varpi}^{(k)}(\vec v'_h,\dots,\vec v'_0;x,y)\right|+\\+\sup_{x\in\Lambda}\gamma^{h\theta} |x|^\theta \sum_{k\leq h}\gamma^{k-h-1} \sum_{y\in\Lambda}\left|\bar \beta_{\varpi}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)-\bar \beta_{\varpi}^{(k)}(\vec v'_h,\dots,\vec v'_0;x,y)\right|\leq \\
\leq c' \sum_{k\leq h}\gamma^{k-h-1} \sum_{y\in\Lambda}\left|\bar \beta_{\varpi}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)-\bar \beta_{\varpi}^{(k)}(\vec v'_h,\dots,\vec v'_0;x,y)\right|+\\ + c'\sum_{k\leq h}\gamma^{(k-h)(1-\theta)-1} \sup_{x\in\Lambda}\gamma^{k \theta} |x|^\theta \sum_{y\in\Lambda}\left|\bar \beta_{\varpi}^{(k)}(\vec v_h,\dots,\vec v_0;x,y)-\bar \beta_{\varpi}^{(k)}(\vec v'_h,\dots,\vec v'_0;x,y)\right|\leq \\
\leq c'' \sum_{k\leq h}\gamma^{k-h-1} |\lambda| \left| \varpi_k(x)-\varpi'_k(x)\right| \sum_{k'\geq k} \gamma^{\theta(k-k')}+
\\
+\sum_{k\leq h}\gamma^{(k-h)(1-\theta)-1} \sup_{x\in\Lambda}\gamma^{k \theta} |x|^\theta |\lambda| \left| \varpi_k(x)-\varpi'_k(x)\right| \sum_{k'\geq k} \gamma^{\theta(k-k')} \leq C |\lambda|\, ||\underline \varpi-\underline \varpi'||^{(\theta)}.
\end{split}
\end{equation}
\end{enumerate}
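The fixed-point scheme above can be illustrated on a toy model. The map below is {\it not} the actual beta function: it is only a linear caricature with the same scale structure, in which a small parameter $\varepsilon$ plays the role of $|\lambda|$ and supplies the contraction constant.

```python
# Toy caricature (NOT the actual beta function) of the fixed-point scheme:
# levels i = 0,...,H-1 mimic the scales, and the map
#   (T w)_i = sum_{j >= i} gamma^(i-j-1) * eps * (1 + w_j)
# mirrors (T varpi)_h = sum_{k <= h} gamma^(k-h-1) beta_varpi^(k).
# For gamma = 2 the Lipschitz constant is eps * sum_{m>=0} 2^(-m-1) = eps,
# so T contracts as soon as eps < 1; eps plays the role of |lambda|.
gamma, eps, H = 2.0, 0.1, 30

def T(w):
    return [sum(gamma**(i - j - 1) * eps * (1 + w[j]) for j in range(i, H))
            for i in range(H)]

w, diffs = [0.0] * H, []
for _ in range(25):
    w_new = T(w)
    diffs.append(max(abs(a - b) for a, b in zip(w_new, w)))
    w = w_new

# sup-norm residual of the (approximate) fixed point
fixed_point_residual = max(abs(a - b) for a, b in zip(T(w), w))
```

The successive differences shrink geometrically with rate $\varepsilon$, and the iteration converges to a fixed point, exactly as in the Banach fixed-point argument used in the proof.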
Finally, we set $$\varpi_1(x,y)=\varpi\, \pi(x,y),$$ where the constant $\varpi$ is fixed by imposing that
\begin{equation}
\varpi =\sup_{x\in \Lambda}\left|\int_\Lambda dy \varpi_1(x,y)\right|.
\end{equation}
\end{proof}
\paragraph{Bulk running coupling constants}
Since the bulk running coupling constants are {\it space independent}, the strategy to study them is conceptually the same as in the previous chapter (\ref{chapter_fermions_PBC}), even though at finite volume they are not exactly the same constants. Let us denote by $\{\bar \nu_h,\bar \delta_h,\bar \lambda_h\}$ the finite volume running coupling constants of the translation invariant setting described by the Hamiltonian (\ref{hamiltonian_PBC}) in a volume $|\bar \Lambda|=2(L+1)$.
So, let us recall where the bulk running coupling constants come from: $\lambda_h$ comes from $\mathcal L_\mathcal T \mathcal L_{\mathcal B}\mathcal V^{(h)}(\psi^{(\leq h)})$. Thanks to Theorem (\ref{theorem_renormalized_bounds_DBC}), we infer that, even at finite volume, $\lambda_h=\bar \lambda_h$, since the difference between the quartic terms in the two settings is an irrelevant term. The running coupling constants coming from the localization of the quadratic terms deserve a deeper comment. Indeed, by construction $\mathcal L_{\mathcal B} W^{(h)}_2=\bar W^{(h)}_2(x-y, x_0-y_0)-\bar W^{(h)}_2(x+y,x_0-y_0)$ consists of a first term, which is exactly the {\it quadratic kernel} of the translation invariant theory defined in the box $\bar \Lambda$, and a second one, which is a {\it remainder term} in the sense of the norm $||\cdot||_1$. This suggests treating $\delta_h$ and $z_h$, coming from $\mathcal L_\mathcal T\mathcal L_{\mathcal B} W^{(h)}_2$, morally as we treated $\lambda_h$, {\it i.e.} by inferring that $|\delta_h-\bar \delta_h|$ and $|z_h-\bar z_h|$ are irrelevant quantities, so that we can actually reduce our study to the translation invariant one.\\
So we are left with fixing the {\it constant counter-term} $\nu_h$: at a formal technical level ({\it i.e.} the fixed point argument), there is no difference with respect to what we have done in the very last section of the previous chapter (\ref{subsection_flow_of_running_coupling_constants_PBC}); anyway, since $\nu_h$ is a {\it relevant running coupling constant}, we cannot proceed as we did for the other constants, because $|\nu_h-\bar \nu_h|$ is a {\it marginal} quantity. At this point it should be clear that, following the {\it definition} of $\nu_h$ that we have chosen, the actually {\it relevant} contribution to $\nu_h$ comes from the linearization of the operators associated with the integral kernel $\bar W_2^{(h)}(\bm x-\bm y)$ and, by applying the same estimates of Theorem (\ref{theorem_renormalized_bounds_DBC}), we get, at finite volume, $|\bar \nu_h-\nu_h|\leq \gamma^{h_L-h}$. To conclude, only in the thermodynamic limit does the bulk counterterm on the half-line coincide with the one of the system defined on the whole line.\\
Since all these considerations have a meaning only at finite volume, we underline that the difference between the finite volume and infinite volume running coupling constants has already been rigorously studied in \cite{giuliani2013universal}, during the study of the flow of running coupling constants.
\chapter{Conclusion}
\label{chapter_conlcusion}
\section{Summary}
\label{section_summary}
With the purpose of extending the Constructive Renormalization Group formalism to systems defined in general domains, we attacked a multiscale problem that breaks the {\it translation invariance symmetry} in the simplest possible way: we considered a system of spinless fermions hopping on a 1D semi-infinite lattice with Dirichlet boundary conditions, in the presence of a {\it weak density-density interaction} (of size $\lambda$).\\
We showed, by rigorous Renormalization Group methods, that, if the perturbation is {\it weak enough}, it is possible to fix a {\it quadratic boundary counterterm}, localized at the boundary as ${|\lambda|C_\theta}/{\left(1+|x|\right)^\theta}$ for each $\theta\in(0,1)$, in such a way that the specific free energy is expressed as an analytic function of the perturbation size. In particular, we derived constructive bounds on the difference between the {\it finite volume} specific free energy $f_\Lambda$ and its thermodynamic limit $f=\lim_{|\Lambda|\nearrow \infty } f_\Lambda$:
$$|f-f_\Lambda|\leq |\lambda| \frac{C_\theta}{L^\theta}, \hspace{5mm} \forall \hspace{5mm} 0<\theta<1.$$\\
Our proof involves a systematic treatment of what we call {\it boundary terms}. In particular we developed a method thanks to which, in a {\it multiscale} language, given a family of nested clusters, the presence of a {\it non-translation invariant} element in the innermost cluster $G_v$ is enough to get a {\it dimensional gain}, improving the renormalization analysis, for each of the clusters $G_w \supseteq G_v$ containing $G_v$.\\
If, on the one hand, this improvement is enough to renormalize the quartic boundary contributions, on the other hand it allows us to conclude that the quadratic boundary contributions are {\it marginal}. The fact that the boundary conditions are not invariant under RG integrations makes it technically difficult to absorb these quadratic boundary contributions into the Grassmann integration, so we decided to introduce a quadratic {\it boundary correction}, localized at the boundary as $|\lambda|C_\theta/(1+|x|)^\theta$ for each $\theta\in (0,1)$, to control them.\\
It is worth pointing out that we did not keep track of the $\theta$-dependent constants coming from the bounding procedure, by means of which one could find {\it more explicit} bounds for the corrections to the free energy. \\
In particular one expects, in this way, to be able to express $C_\theta$ as
\begin{equation}
C_\theta=C\frac{1}{(1-\theta)^\alpha},
\end{equation}
for some suitable $\alpha>0$.\\
This would allow us to choose {\it the optimal $\theta\in (0,1)$} by fixing $\theta$ in such a way that
\begin{equation}
\frac{d}{d\theta}\left(\frac{C_\theta}{L^\theta}\right)=C\frac{d}{d\theta}\left(\frac{1}{(1-\theta)^{\alpha}L^\theta}\right)=0
\end{equation}
so that the {\it optimal} bound reads
\begin{equation}
|f-f_\Lambda|\leq C |\lambda| \frac{\left(\log L\right)^\alpha}{L},
\end{equation}
for some $C>0$.\\
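This optimization can be checked numerically. The sketch below uses hypothetical values $\alpha=2$, $L=10^6$ and $C=1$ (the actual constants are not computed in this thesis): a grid search over $\theta\in(0,1)$ reproduces the stationary value at $\theta^*=1-\alpha/\log L$, which is exactly the claimed $(\log L)^\alpha/L$ behaviour.

```python
# Numerical check of the optimization over theta, for hypothetical values
# alpha = 2, L = 10**6 (and C = 1): the grid minimum of
#   C_theta / L^theta = 1 / ((1-theta)^alpha * L^theta)
# matches the stationary value at theta* = 1 - alpha/log(L), namely
#   (log(L)/alpha)^alpha * e^alpha / L   ~   (log L)^alpha / L.
import math

alpha, L = 2.0, 10.0**6

def f(theta):
    return 1.0 / ((1.0 - theta)**alpha * L**theta)

grid_min = min(f(j / 100000.0) for j in range(1, 100000))  # theta in (0,1)
theta_star = 1.0 - alpha / math.log(L)
analytic = (math.log(L) / alpha)**alpha * math.exp(alpha) / L
```

Note that $L^{\alpha/\log L}=e^\alpha$, which is where the constant $e^\alpha$ in the stationary value comes from.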
In order not to burden the analysis, we decided neither to give these details nor to discuss the construction of the Schwinger functions, not even in the translation invariant case (Chapter (\ref{chapter_fermions_PBC})). Indeed, even in the translation invariant setting, the construction of Schwinger functions requires an {\it adapted multiscale analysis} slightly different from the one we set up to construct the specific free energy (see \cite{gentile2001renormalization} Section $12$ for an introduction, \cite{benfatto1993beta} for the details). A modification of this {\it multiscale argument}, in the spirit of the modification we introduced in Chapter (\ref{chapter_Interacting_fermions_on_the_half_line}) with respect to Chapter (\ref{chapter_fermions_PBC}), would extend the control of the boundary correction to the case of Schwinger functions: as already mentioned, one expects different behaviours depending on the {\it comparison} between the mutual distance and the {\it distance from the boundary.}
\section{Outlook}
\label{section_outlook}
The next natural step is the program we started this thesis with: to {\it invert the counterterm}, that is, to properly {\it build the ground state} of a system described by the Hamiltonian:
\begin{equation*}
H=H_0+\lambda V,
\end{equation*}
with Dirichlet boundary conditions.\\
This corresponds to finding a way to re-sum the quadratic boundary contributions into the Grassmann integration, absorbing the boundary effects into the dressed propagator. The expected result would be encoded in some {\it space dependent} critical exponent $\eta(x)$ in the long distance behaviour of the dressed propagator. In the context of Luttinger liquids some results have been obtained via {\it non-rigorous} bosonization techniques in \cite{fabrizio1995interacting, meden2000luttinger, grap2009renormalization,mattsson1997properties}, where the presence of {\it space dependent} critical exponents is investigated by comparing the two-point correlation functions in different asymptotic regimes: both variables {\it well inside the bulk}, one {\it well inside the bulk} and the other {\it close to the boundary}, or both of them {\it close to the boundary}.\\
As we pointed out, the main technical problem in doing this is the fact that {\it the boundary conditions are not invariant under RG iterations}, so absorbing the boundary corrections into the propagator would break the multiscale structure, {\it mixing up} different scales in a {\it non-hierarchical way}. This complication can basically be summarized by saying that the momentum, being non-preserved, is not the {\it right quantum number} to look at, so in order to solve this problem one should be able to {\it diagonalize}, scale by scale, the single scale Laplacian with covariance $\left( g^{(h)}\right)^{-1}$ perturbed by a {\it weak and localized potential}:
$$\left(\left(g^{(h)}\right)^{-1}-W_2^{(h)}\right)^{-1}(\bm x, \bm y)= \tilde g^{(h)}(\bm x,\bm y)=\sum_{j\in \mathcal D}\hat{\tilde g}^{(h)}_j \varphi_j^*(\bm x)\varphi_j(\bm y),$$
for some suitable {\it dual space} we denote by $\mathcal D$ and orthonormal basis $\left\{\varphi_j(\bm x)\right\}_{j\in\mathcal D}^{\bm x\in\Lambda\times [0,\beta)}$, where $W_2^{(h)}(\bm x, \bm y)$ is such that
$$\frac{1}{\beta}\int_{[0,\beta)} dx_0 \int_{[0,\beta)} dy_0 \sum_{y\in\Lambda} \left| W_2^{(h)}(\bm x,\bm y)\right|\leq |\lambda| \gamma^h e^{-\gamma^h|x|}.$$
In particular, one should imagine the dual space $\mathcal{D}$ as the {\it energy-space}, the energy being preserved. In other words, it seems that the main difficulty of the problem is to solve, scale by scale, a {\it scattering problem}, or a {\it perturbed Schr\"odinger equation}, getting explicit expressions both for the eigenfunctions and for the spectrum of the system. Since this is a quite common problem, there is a vast literature about it, mostly concerned with the spectral properties of the {\it perturbed system} (see {\it e.g.} the review about rank-one perturbations of the Laplacian \cite{simon1995spectral}, or \cite{kato2013perturbation} for a {\it scattering theory} point of view). However, at this point it should be clear that, in order to {\it construct} the observables we are interested in via a RG method, we need an {\it ``explicit enough''} representation of the covariance, allowing us to exploit the {\it self-similar structure} of the theory at {\it different scales} in order to iterate this procedure. This quite natural, but challenging, method seems in fact to match with a {\it multiscale implementation of} the ideas used first by Symanzik, then by Diehl {\it et al.}, to study $\phi^4_{4-\epsilon}$ theories in non trivial domains \cite{symanzik1981schrodinger, diehl1981field2,diehl1983universality} via non-rigorous RG methods, in order to investigate the Casimir effect. Even though, on the one hand, the single scale problem seems to be reasonable, we expect its multiscale implementation to be non-trivial. \\
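A finite-dimensional caricature of the sought spectral representation can be set up in a few lines. The matrix, the mass and the coupling below are arbitrary toy choices (they are not the actual single-scale covariance): a symmetric ``inverse covariance'', standing in for $(g^{(h)})^{-1}$, is perturbed by a potential localized at the boundary site, diagonalized, and the spectral sum is checked to reconstruct the perturbed covariance.

```python
# Finite-dimensional caricature (arbitrary toy choices): a symmetric
# "inverse covariance" A, standing in for (g^(h))^{-1}, perturbed by a weak
# potential W localized at the boundary site 0; check that the spectral sum
#   g_tilde = sum_j (1/e_j) phi_j phi_j^T
# reconstructs the perturbed covariance (A - W)^{-1}.
import numpy as np

n, mass, lam = 50, 1.0, 0.1
A = (2.0 + mass) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
W = np.zeros((n, n))
W[0, 0] = lam                        # boundary-localized perturbation

eigvals, V = np.linalg.eigh(A - W)   # columns of V: orthonormal basis phi_j
g_tilde = np.linalg.inv(A - W)       # perturbed covariance
g_spec = (V / eigvals) @ V.T         # spectral reconstruction
```

The hard part of the program sketched in the text is, of course, obtaining an {\it explicit enough} analogue of this diagonalization scale by scale, not the linear algebra itself.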
Let us stress that, in light of the fact that the boundary corrections to the {\it quadratic part of the effective action are marginal}, the same novelties we met in dealing with {\it 1D spinless fermions} would come out even in the case of systems with {\it irrelevant interactions} ({\it e.g.} the Ising model, see \cite{mastropietro2008non} Chapter 9 for a RG language treatment of this topic). In fact we expect that our analysis is adaptable to 2D statistical models such as Ising, dimers {\it etc.}, provided one is able to find a {\it manageable} fermionic representation of the starting model (see again \cite{mastropietro2008non} Chapter 9 for the Ising model, see \cite{giuliani2015height} for dimers, both in translation invariant settings).\\
A further challenging topic related to a full understanding of the problem we started to study is the investigation of the Kondo model \cite{kondo1964resistance} (both the original and the {\it multichannel} one) around the {\it strong coupling regime fixed point}. Indeed, a natural way to study the {\it Kondo effect} seems to be a {\it conformal field theory approach} \cite{affleck1995conformal}: studying the strong coupling regime basically means assuming that the interaction with the impurity (which we assume to be sitting at the origin) is much stronger than the kinetic part. This assumption would imply that the fermion sitting at the origin is bound to the impurity, forming a singlet state with it. So an arbitrary electron configuration occurs on all other sites, but other electrons are {\it forbidden} to enter the origin, since that would destroy the singlet state at a {\it large energy} cost: that is the reason why the impurity at the origin has roughly the same effect as a Dirichlet boundary condition. Anyway, a rigorous understanding of the Kondo model is still far away, even though a first step, based on rigorous hierarchical RG methods, has been completed in \cite{benfatto2015kondo}.
\section{Introduction}
\noindent
Let $(M,g)$ be a Lorentzian spacetime and $(N,h)$ a Riemannian manifold. In this paper, we study wave maps $u:(M,g) \longrightarrow (N,h)$, that is, critical points of the geometric action functional
\begin{align*}
S_{g}[u]:=\frac{1}{2} \int _{M} |d_{g} u|^{2}~d \mu_{g}.
\end{align*}
Here,
\begin{align*}
|d_{g}u(x)|^{2} \equiv |d_{g}u (x)|_{ T^{\star}_{x}M \otimes T_{u(x)} N }^{2}:= \text{tr}_{g} \left ( u^{\star} \left( h \right) \right)
\end{align*}
is the trace (with respect to $g$) of the pullback metric on $(M,g)$ via the map $u$. The integral is understood with respect to the standard measure $d\mu _{g}$ on the domain manifold. In local coordinates $(x_{\mu})$ on $(M,g)$, this expression reads
\begin{align*}
S_{g}[u]= \frac{1}{2} \int_{M} g ^{\mu \nu} (\partial _{\mu} u^{a}) (\partial _{\nu} u^{b}) \, h_{a b} \circ u~ d \mu_{g}
\end{align*}
where the Einstein summation convention is used. The Euler-Lagrange equations associated to this functional are
\begin{align} \label{WM}
\Box _{g} u^{a} + g ^{\mu \nu} (\Gamma ^{a}_{b c} \circ u) (\partial _{\mu} u^{b} ) (\partial _{\nu} u^{c}) =0
\end{align}
and they constitute a system of semi-linear wave equations. Here, $\Box _{g}$ is the Laplace-Beltrami operator on $(M,g)$
\begin{align*}
\Box_{g} := \frac{1}{|g|} \partial _{\mu} (g^{\mu \nu} |g| \partial _{\nu}),\quad |g|:=\sqrt{ \left| \text{det}(g_{\mu \nu}) \right| }
\end{align*}
and $\Gamma ^{a}_{b c}$ are the Christoffel symbols associated to the metric $h$ on the target manifold. Eq.~\eqref{WM} is called the wave maps equation (known in the physics literature as non-linear $\sigma$ model) and is the analog of harmonic maps between Riemannian manifolds in the case where the domain is a Lorentzian manifold instead. For more details, we refer the reader to \cite{Ren08} and \cite{Str97}.
\subsection{Intuition} Recently, the wave maps equation has attracted a lot of interest. On the one hand, the wave maps equation is a rich source for understanding nonlinear geometric equations since it is a nonlinear generalization of the standard wave equation on Minkowski space. In addition, the wave maps equation has a pure geometric interpretation: it generalizes the notion of geodesic curves. Notice that, if $M = (\alpha, \beta)$ is an open interval and $(N,h)$ any curved Riemannian manifold, the wave maps equation is the geodesic equation
\begin{align*}
\frac{d^2 u^{a}}{dt ^2} (t) + (\Gamma ^{a}_{b c} \circ u(t)) \frac {d u^{b} }{dt} (t) \frac{d u^{c} }{dt} (t)=0.
\end{align*}
On the other hand, the Cauchy problem for the wave maps system provides an attractive toy-model for more complicated relativistic field equations. Specifically, wave maps contain many features of the more complex Einstein equations but are simple enough to be accessible for rigorous mathematical analysis.
Further details on the correlation between the wave maps system and the Einstein equations can be found in \cite{Mis78, Mon89, Wei90, Kla97}.
Since the wave maps equation is a time evolution equation, the fundamental problem is the Cauchy problem: given smooth initial data, does there exist a unique smooth solution to the wave maps equation attaining this initial data? Furthermore, does the solution exist for all times? If, on the contrary, the solution only exists up to some finite time $T$, how does the solution blow up as $t$ approaches $T$?
The investigation of questions of global existence and formation of singularities for the wave maps equation can give insight into the analogous, but much more difficult, problems in general relativity.
\subsection{Equivariant wave maps} Now, we turn our attention to the Cauchy problem in the case where the domain is the Minkowski spacetime $(\mathbb{R}^{1+d},g)$ and the target manifold is the sphere $(\mathbb{S}^{d},h)$ for $d \geq 3$. Hence, we pick $g =$diag$(-1,1,\dots,1)$ and $h$ to be the standard metric on the sphere. Furthermore, we choose standard spherical coordinates on Minkowski space and hyper-spherical coordinates on the sphere. The respective metrics are given by
\begin{align*}
g = - dt^2 + dr^2 + r^2 d \omega ^2, \quad h= d \Psi ^2 + \sin^2(\Psi) d \Omega ^2,
\end{align*}
where $d\omega ^2$ and $d \Omega ^2$ are the standard metrics on $\mathbb{S}^{d-1}$. Moreover, a map $u:(\mathbb{R}^{1+d},g) \longrightarrow (\mathbb{S}^{d},h)$ can be written as
\begin{align*}
u (t,r,\omega) = \big( \Psi (t,r,\omega ), \Omega (t,r,\omega) \big).
\end{align*}
We restrict our attention to the special subclass known as 1-equivariant or co-rotational, that is
\begin{align*}
\Psi (t,r,\omega ) \equiv \psi (t,r),~~~ \Omega (t,r, \omega ) = \omega.
\end{align*}
Under this ansatz, the wave maps system for functions $u:(\mathbb{R}^{1+d},g) \longrightarrow (\mathbb{S}^{d},h)$ reduces to the single semi-linear wave equation
\begin{equation}
\label{eq:main} \psi_{tt}-\psi_{rr}-\frac{d-1}{r}\psi_r+\frac{d-1}{2}\frac{\sin(2\psi)}{r^2}=0.
\end{equation}
By finite speed of propagation and radial symmetry it is natural to
study this equation in backward light-cones with vertex $(T,0)$, that is
\begin{align*}
C_{T} :=\left \{ (t,r) : 0<t<T,~0 \leq r \leq T-t \right \}
\end{align*}
where $T>0$.
Consequently, we consider the Cauchy problem \\
\begin{align} \label{cauchy}
\begin{cases}
\psi _{tt} (t,r) - \Delta ^{ \text{rad} }_{r,d} \psi (t,r) = - \frac{d-1}{2} \frac{ \sin ( 2 \psi (t,r) ) }{r^2}, &\quad \text{in } C_{T} \\
\psi (0,r)= f(r),~~~ \psi _{t} (0,r)= g(r), &\quad \text{on } \{ t=0 \} \times [0,+\infty) \
\end{cases}
\end{align}
where $\Delta ^{ \text{rad} }_{r,d} $ stands for the radial Laplacian
\begin{align*}
\Delta ^{ \text{rad} }_{r,d} \psi (t,r) := \psi _{rr} (t,r) + \frac{d-1}{r} \psi _{r} (t,r).
\end{align*}
To ensure regularity of solutions, equations $\eqref{cauchy}$ must be supplemented by the boundary condition
\begin{align} \label{reg}
\psi (t,0)=0,\quad \text{~for all~} t \in (0,T).
\end{align}
\subsection{Self-similar solutions}
A basic question for the Cauchy problem $\eqref{cauchy}$ is whether solutions starting from smooth initial data
\begin{align*}
(f,g)=\left( \psi (0, \cdot), \partial _{t} \psi (0, \cdot) \right)
\end{align*}
can become singular in the future. Note that Eq.~\eqref{eq:main} has the conserved energy
\begin{align*}
E[\psi]:=\int_{0}^{\infty} \left( \psi _{t}^2 + \psi _{r}^2 + (d-1)\frac{\sin ^2(\psi) }{r^2} \right) r^2 dr .
\end{align*}
However, the energy cannot be used to control the evolution since Eq.~\eqref{cauchy} is not well-posed at energy regularity, cf.~\cite{ShaTah94}.
Indeed, Eq.~\eqref{eq:main} is invariant under dilations
\begin{align} \label{dilation}
\psi _{\lambda} (t,r):=\psi \left( \frac{t}{\lambda},\frac{r}{\lambda} \right),~ \lambda >0
\end{align}
and the critical Sobolev space for the pair $(\psi(t,\cdot),\partial_t \psi(t,\cdot))$ is $\dot H^{\frac{d}{2}}\times \dot H^{\frac{d}{2}-1}$.
Consequently, Eq.~\eqref{eq:main} is energy-supercritical for $d\geq 3$.
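The dilation invariance can be verified directly on an arbitrary smooth profile: applying the wave-maps operator to $\psi_\lambda$ must reproduce $\lambda^{-2}$ times the operator applied to $\psi$ at the rescaled point. The following finite-difference sketch checks this (the profile, $\lambda$, and the sample point are arbitrary choices):

```python
# Finite-difference check of the dilation invariance of Eq. (eq:main):
#   E[psi_lambda](t,r) = lambda^{-2} E[psi](t/lambda, r/lambda),
# where E[psi] = psi_tt - psi_rr - (d-1)/r psi_r + (d-1)/2 sin(2 psi)/r^2.
import math

def wave_map_op(psi, d, t, r, h=1e-4):
    d2t = (psi(t + h, r) - 2 * psi(t, r) + psi(t - h, r)) / h**2
    d2r = (psi(t, r + h) - 2 * psi(t, r) + psi(t, r - h)) / h**2
    d1r = (psi(t, r + h) - psi(t, r - h)) / (2 * h)
    return d2t - d2r - (d - 1) / r * d1r \
        + (d - 1) / 2.0 * math.sin(2 * psi(t, r)) / r**2

psi = lambda t, r: math.exp(-t) * math.sin(r)   # arbitrary smooth profile
lam, d, t, r = 1.7, 3, 0.4, 0.9
psi_lam = lambda tt, rr: psi(tt / lam, rr / lam)

gap = wave_map_op(psi_lam, d, t, r) \
    - wave_map_op(psi, d, t / lam, r / lam) / lam**2
```

The identity holds exactly for any smooth profile; the finite-difference gap is only limited by the step size.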
In fact, due to the scaling $\eqref{dilation}$ and the supercritical character, it is natural to expect self-similar solutions, and indeed it is well known that there exist smooth initial data which lead to solutions that blow up in finite time in a self-similar fashion. Specifically, Eq.~\eqref{eq:main} admits the self-similar solution
\begin{align*}
\psi ^{T} (t,r) := f_{0} \Big ( \frac{r}{T-t } \Big) = 2 \arctan \Bigg( \frac{r}{\sqrt{d-2}(T-t) } \Bigg),\qquad T>0.
\end{align*}
This example is due to Shatah \cite{Sha88}, Turok-Spergel \cite{TurSpe90} for $d=3$, and Bizo\'n-Biernat \cite{BizBie15} for $d \geq 4$ and provides an explicit example for singularity formation from smooth initial data. Indeed, the self-similar solution $\psi^T$ is perfectly smooth for all $0<t < T$ but breaks down at $t = T$ in the sense that
\begin{align*}
\partial_r\psi ^{T} (t,r)|_{r=0}
\simeq \frac{1}{T-t} \longrightarrow + \infty,~~~\text{as}~ t \longrightarrow T^{-}.
\end{align*}
We note in passing that for $d\in \{3,4,5,6\}$, $\psi^T$ is just one member of a countable family of self-similar solutions, see \cite{Biz00, BieBizMal16}.
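The stated self-similar solution can also be checked directly. The Python sketch below evaluates the left-hand side of Eq.~\eqref{eq:main} on $\psi^T$ by central finite differences (the sample points, dimensions, and step size are arbitrary choices):

```python
# Direct finite-difference check that psi^T(t,r) = 2 arctan(r/(sqrt(d-2)(T-t)))
# annihilates the left-hand side of
#   psi_tt - psi_rr - (d-1)/r psi_r + (d-1)/2 sin(2 psi)/r^2 = 0.
import math

def psi_T(d, T, t, r):
    return 2.0 * math.atan(r / (math.sqrt(d - 2) * (T - t)))

def residual(d, T, t, r, h=1e-4):
    p = lambda tt, rr: psi_T(d, T, tt, rr)
    p_tt = (p(t + h, r) - 2 * p(t, r) + p(t - h, r)) / h**2
    p_rr = (p(t, r + h) - 2 * p(t, r) + p(t, r - h)) / h**2
    p_r = (p(t, r + h) - p(t, r - h)) / (2 * h)
    return p_tt - p_rr - (d - 1) / r * p_r \
        + (d - 1) / 2.0 * math.sin(2 * p(t, r)) / r**2

# interior points of the light-cone C_T, for d = 3 and d = 5
checks = [residual(d, 1.0, 0.3, 0.5) for d in (3, 5)]
```

A short computation with $u=r/(\sqrt{d-2}\,(T-t))$ confirms the cancellation analytically as well; the residuals above vanish up to finite-difference error.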
\subsection{The main result}
By finite speed of propagation one can use $\psi ^{T}$ to construct smooth, compactly supported initial data which lead to a solution that blows up as $t \longrightarrow T$. Our main theorem is concerned with the asymptotic nonlinear stability of $\psi^T$. In other words, we prove the existence of an open set of radial data which lead to blowup via $\psi ^{T}$. In this sense, the blowup described by $\psi ^{T}$ is stable. To state our main result, we will need the notion of the blowup time at the origin.
From now on we use the abbreviation $\psi[t]=(\psi(t,\cdot),\partial_t \psi(t,\cdot))$.
\begin{definition}
Given initial data $(\psi_{0},\psi_{1})$, we define
\begin{align*}
T_{(\psi_{0},\psi_{1})} := \sup \left\{
T >0 \middle|
\begin{subarray}{c}
\exists \text{~solution~} \psi : C_{T} \longrightarrow \mathbb{R} \text{~to~} \eqref{cauchy} \text{~in the sense of} \\
\text{~Definition~} \ref{def} \text{~with initial data~} \psi[0]=(\psi_{0},\psi_{1}) |_{\mathbb{B}_{T}^{d}}
\end{subarray} \right\} \cup \{0\}.
\end{align*}
In the case where $T_{(\psi_{0},\psi_{1})} < \infty$, we call $T \equiv T_{(\psi_{0},\psi_{1})}$ the blowup time at the origin.
\end{definition}
We remark that the effective spatial dimension for the problem \eqref{cauchy} is $d+2$.
To see this, recall that, by regularity, we get the boundary condition $\eqref{reg}$. Therefore, it is natural to switch to the variable $\widehat \psi(t,r):=r^{-1}\psi (t,r)$. Then \eqref{cauchy} transforms into
\begin{align*}
\begin{cases}
\widehat{\psi} _{tt} (t,r) - \Delta ^{ \text{rad} }_{r,d+2} \widehat{\psi} (t,r) = - \frac{d-1}{2} \frac{ \sin ( 2 r \widehat{\psi} (t,r) ) -2r \widehat{\psi} (t,r)}{r^3}, &\quad \text{in } C_{T} \\
\widehat{\psi} (0,r)= \frac{f(r)}{r},~~~ \widehat{\psi} _{t} (0,r)= \frac{g(r)}{r}, &\quad \text{on } \{ t=0 \} \times [0,+\infty) \
\end{cases}
\end{align*}
Note that the nonlinearity is now generated by a smooth function and the radial Laplacian is in $d+2$ dimensions.
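The identity behind this substitution, namely $r^{-1}\big[-\Delta^{\text{rad}}_{r,d}(r\widehat\psi)+\tfrac{d-1}{2}\tfrac{\sin(2r\widehat\psi)}{r^2}\big] = -\Delta^{\text{rad}}_{r,d+2}\widehat\psi + \tfrac{d-1}{2}\tfrac{\sin(2r\widehat\psi)-2r\widehat\psi}{r^3}$, can be verified numerically on an arbitrary smooth test profile, as in the sketch below (the profile and sample point are arbitrary choices):

```python
# Numerical check (arbitrary smooth test profile w) of the identity behind
# the substitution psi = r * w:
#   (1/r) [ -psi_rr - (d-1)/r psi_r + (d-1)/2 sin(2 psi)/r^2 ]
#     = -w_rr - (d+1)/r w_r + (d-1)/2 (sin(2 r w) - 2 r w)/r^3,
# i.e. the radial Laplacian becomes (d+2)-dimensional and the nonlinearity
# is generated by a smooth function of r*w.
import math

def transform_gap(d, r, h=1e-4):
    w = lambda x: math.cos(x) + 0.3          # arbitrary smooth profile
    psi = lambda x: x * w(x)                 # psi = r * w
    d2 = lambda f, x: (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    d1 = lambda f, x: (f(x + h) - f(x - h)) / (2 * h)
    lhs = (-d2(psi, r) - (d - 1) / r * d1(psi, r)
           + (d - 1) / 2.0 * math.sin(2 * psi(r)) / r**2) / r
    rhs = (-d2(w, r) - (d + 1) / r * d1(w, r)
           + (d - 1) / 2.0 * (math.sin(2 * r * w(r)) - 2 * r * w(r)) / r**3)
    return lhs - rhs

gaps = [transform_gap(d, 0.7) for d in (3, 5)]
```

The linear term $-(d-1)\widehat\psi/r^2$ produced by the substitution is exactly what converts $\sin(2r\widehat\psi)/r^3$ into the smooth combination $(\sin(2r\widehat\psi)-2r\widehat\psi)/r^3$.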
\begin{theorem}
Fix $T_{0}>0$ and $d\geq 3$ odd. Then there exist constants $M,\delta,\epsilon >0$ such that for any radial initial data
$\psi [0]$
satisfying
\begin{align*}
\Big \| |\cdot|^{-1} \Big( \psi[0] -\psi^{T_{0}}[0] \Big) \Big \|_{ H^{\frac{d+3}{2}} (\mathbb{B}_{T_{0}+\delta}^{d+2}) \times H^{\frac{d+1}{2}} (\mathbb{B}_{T_{0}+\delta}^{d+2} )} \leq \frac{\delta}{M}
\end{align*}
the following statements hold:
\begin{enumerate}
\item $T \equiv T_{\psi[0]} \in [T_{0}-\delta,T_{0}+\delta]$,
\item the solution $\psi :C_{T} \longrightarrow \mathbb{R}$ satisfies
\begin{align*}
(T-t)^{k-\frac{d}{2}} \Big \| |\cdot|^{-1} \Big( \psi (t,\cdot) - \psi ^{T} (t, \cdot) \Big) \Big \|_{ \dot{H}^{k}(\mathbb{B}^{d+2}_{T-t} ) } &\leq\delta (T-t)^{\epsilon} \\
(T-t)^{\ell+1-\frac{d}{2}} \Big \| |\cdot|^{-1} \Big( \partial_t \psi (t,\cdot) - \partial_t \psi ^{T} (t, \cdot) \Big) \Big \|_{ \dot{H}^{\ell}(\mathbb{B}^{d+2}_{T-t} ) } &\leq \delta (T-t)^{\epsilon}
\end{align*}
for all $k=0,1,2, \dots, \frac{d+3}{2}$ and $\ell=0,1,2\dots,\frac{d+1}{2}$.
\end{enumerate}
\end{theorem}
\begin{remark}
Note that the normalizing factors on the left-hand sides appear naturally and reflect the behavior of the self-similar solution $\psi^T$ in the respective homogeneous Sobolev norms,
i.e.,
\begin{align*}
\||\cdot|^{-1}\psi^T(t,\cdot)\|_{\dot H^k(\mathbb B^{d+2}_{T-t})}&=\left \||\cdot|^{-1} f_0 \left(\frac{|\cdot|}{T-t} \right) \right \|_{ \dot{H}^{k}( \mathbb{B}^{d+2}_{T-t} ) } =(T-t)^{\frac{d}{2}-k} \||\cdot|^{-1} f_0 \left(|\cdot| \right) \|_{ \dot{H}^{k}( \mathbb{B}^{d+2}_{1} ) }
\end{align*}
and
\begin{align*}
\||\cdot|^{-1}\partial_t \psi^T(t,\cdot)\|_{\dot H^\ell(\mathbb B^{d+2}_{T-t})}&=(T-t)^{-2}\left \|f_0' \left(\frac{|\cdot|}{T-t} \right) \right \|_{ \dot{H}^{\ell}( \mathbb{B}^{d+2}_{T-t} ) } \\
& =(T-t)^{\frac{d}{2}-\ell-1} \| f_0' \left(|\cdot| \right) \|_{ \dot{H}^{\ell}( \mathbb{B}^{d+2}_{1} ) } .
\end{align*}
\end{remark}
\subsection{Related results}
The question of singularity formation for the wave maps equation attracted a lot of interest in the recent past, in particular in the energy-critical case $d=2$.
Bizo\'n-Chmaj-Tabor \cite{BizChmTab01} were the first to provide numerical evidence for the existence of blowup for critical wave maps with $\mathbb S^2$ target.
Rigorous constructions of blowup solutions for this model are due to Krieger-Schlag-Tataru \cite{KriSchTat08}, Rodnianski-Sterbenz \cite{RodSte10}, and Rapha\"el-Rodnianski \cite{RapRod12}. Struwe \cite{Str03} showed that blowup for equivariant critical wave maps takes place via shrinking of a harmonic map.
This result was considerably generalized to the nonequivariant setting by Sterbenz-Tataru \cite{SteTat10a, SteTat10b}, see also Krieger-Schlag \cite{KriSch12} for a different approach to the large-data problem and e.g.~\cite{CotKenLawSch15a, CotKenLawSch15b, Cot15,
CanKri15, Sha16, LawOh16} for more recent results on blowup and large-data global existence.
The energy-supercritical regime $d\geq 3$ is less understood. The small-data theory at minimal regularity is due to Shatah-Tahvildar-Zadeh \cite{ShaTah94} in the equivariant setting whereas Tataru \cite{Tat98, Tat01} and Tao \cite{Tao01a, Tao01b} treat the general case, see also \cite{KlaRod01, ShaStr02, NahSteUhl03, Kri03, Tat05}.
Self-similar blowup solutions were found by Shatah \cite{Sha88}, Turok-Spergel \cite{TurSpe90}, Cazenave-Shatah-Tahvildar-Zadeh \cite{CazShaTah98}, and Bizo\'n-Biernat \cite{BizBie15}.
The stability of self-similar blowup was investigated numerically in \cite{BizChmTab00, BizBie15, BieBizMal16} and proved rigorously in \cite{Don11, DonSchAic12, CosDonXia16, CosDonGlo16} in the case $d=3$. Furthermore, Dodson-Lawrie \cite{DodLaw15} proved that solutions with bounded critical norm scatter.
Finally, concerning the method, we remark that our proof relies on the techniques developed in the series of papers \cite{Don11, DonSchAic12, DonSch12, DonSch14, Don14, DonSch16, DonSch16a}.
However, we would like to emphasize that the present paper is not just a straightforward continuation of these works. In fact, new interesting issues arise, e.g.~in the spectral theory part, see Proposition \ref{projection} below.
\section{Radial wave equation in similarity coordinates}
\label{sec:sim}
\noindent
To start our analysis, we rewrite the initial value problem $\eqref{cauchy}$ as an abstract Cauchy problem in a Hilbert space. First, we rescale the variable $\psi \equiv \psi (t,r)$ and switch to similarity coordinates. Then, we linearize around the rescaled blowup solution and derive the evolution problem satisfied by the perturbation.
\subsection{Rescaled variables}
We define
\begin{align*}
\chi _{1} (t,r) := \frac{T-t}{r} \psi (t,r),\qquad \chi _{2} (t,r) := \frac{(T-t)^2}{r} \psi _{t} (t,r).
\end{align*}
Using the fact that $\psi$ is a solution to \eqref{cauchy}, we get
\begin{align*}
\partial _{t} \chi _{1} (t,r) = & - \frac{1}{T-t} \chi _{1} (t,r) + \frac{1}{T-t} \chi _{2} (t,r), \\
\partial _{t} \chi _{2} (t,r) = & - \frac{2}{T-t} \chi _{2} (t,r) + (T-t) \Delta ^{ \text{rad} }_{r,d} \chi _{1} (t,r) + \frac{2(T-t)}{r} \partial _{r} \chi _{1} (t,r) \\
& + (d-1) \frac{T-t}{r^2} \chi _{1} (t,r) - \frac{d-1}{2} (T-t)^2 \frac{ \sin \big( \frac{2r}{T-t} \chi _{1} (t,r) \big) }{r^3}.
\end{align*}
We introduce similarity coordinates
\begin{align*}
\mu: C_{T} \longrightarrow \mathcal{C} ,~~ (t,r) \longmapsto \mu (t, r) =(\tau,\rho) := \Big( \log \Big ( \frac{T}{T-t} \Big ), \frac{r}{T-t} \Big ),
\end{align*}
which maps the backward light-cone $C_T$ to the cylinder $\mathcal{C}:=(0,+ \infty) \times [0,1]$.
By the chain rule, the derivatives transform according to
\begin{align*}
\partial _{t} = \frac{e^{\tau}}{T} (\partial _{\tau} + \rho \partial _{\rho}),~ \partial _{r} = \frac{e^{\tau}}{T} \partial _{\rho},~ \partial ^{2}_{r} = \frac{e^{2 \tau}}{T^{2}} \partial ^{2}_{\rho},~ \Delta ^{ \text{rad} }_{r,d} = \frac{e^{2 \tau}}{T^{2}} \Delta ^{ \text{rad} }_{\rho,d}.
\end{align*}
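For instance, the first identity follows directly from
\begin{align*}
\partial _{t} \tau = \frac{1}{T-t} = \frac{e^{\tau}}{T},\qquad \partial _{t} \rho = \frac{r}{(T-t)^2} = \frac{e^{\tau}}{T} \rho,\qquad \partial _{r} \tau =0,\qquad \partial _{r} \rho = \frac{1}{T-t}= \frac{e^{\tau}}{T},
\end{align*}
so that $\partial_t = (\partial_t \tau) \partial_\tau + (\partial_t \rho) \partial_\rho = \frac{e^{\tau}}{T} ( \partial_\tau + \rho \partial_\rho )$; the remaining identities follow by iteration.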
Finally, setting
\begin{align*}
\psi _{j} (\tau, \rho) := \chi _{j} (t(\tau,\rho),r(\tau,\rho) ) = \chi _{j} (T(1-e^{-\tau}), T \rho e^{-\tau}),
\end{align*}
for $j=1,2$, we obtain the system
\begin{align} \label{free}
& \begin{pmatrix}
\partial _{\tau} \psi _{1} (\tau,\rho) \\
\partial _{\tau} \psi _{2} (\tau,\rho)
\end{pmatrix}
=
\begin{pmatrix}
- \psi _{1} (\tau,\rho) + \psi _{2} (\tau,\rho) - \rho \partial _{\rho} \psi _{1} (\tau,\rho) \\
\Delta ^{ \text{rad} }_{\rho,d+2} \psi _{1} (\tau,\rho) - \rho \partial _{\rho} \psi _{2} (\tau,\rho) - 2 \psi _{2} (\tau,\rho)
\end{pmatrix} \\ \nonumber
& \quad \quad \quad \quad \quad \quad - \frac{d-1}{ 2 \rho ^3}
\begin{pmatrix}
0 \\
\sin ( 2 \rho \psi _{1} (\tau,\rho) ) -2 \rho \psi _{1} (\tau,\rho)
\end{pmatrix},
\end{align}
for $(\tau,\rho) \in \mathcal{C}$.
Note that the linear part is the free operator of the $(d+2)$-dimensional wave equation in similarity coordinates and the nonlinearity is perfectly smooth. Furthermore, the initial data transform according to
\begin{align*}
\begin{pmatrix}
\psi _{1} (0,\rho) \\
\psi _{2} (0,\rho)
\end{pmatrix}
= \frac{1}{\rho}
\begin{pmatrix}
f(T\rho) \\
T g (T\rho)
\end{pmatrix}
= \frac{1}{\rho}
\begin{pmatrix}
\psi^{T_{0}}(0,T\rho) \\
T\partial_0 \psi ^{T_{0}} (0,T\rho)
\end{pmatrix}+
\frac{1}{\rho}
\begin{pmatrix}
F(T\rho) \\
T G(T\rho)
\end{pmatrix},
\end{align*}
for all $\rho \in [0,1]$. Here, $T_{0}>0$ is a fixed parameter and
\begin{align*}
& \psi ^{T_{0}} (0,T\rho) = 2 \arctan \left( \frac{T }{T_{0}} \frac{\rho}{ \sqrt{d-2} } \right),\quad \rho \equiv \rho (t,r):=\frac{r}{T-t}, \\
& F:=f-\psi ^{T_{0}}(0,\cdot), \quad G:=g-\partial_0 \psi^{T_{0}}(0,\cdot).
\end{align*}
We emphasize that the only trace of the parameter $T$ is in the initial data.
\subsection{Perturbations of the rescaled blowup solution} We linearize around the rescaled blowup solution and use the initial value problem for $(\psi_{1},\psi_{2})^{T}$ to obtain an initial value problem for the perturbation as an abstract Cauchy problem in a Hilbert space. For notational convenience we set
\begin{align*}
\mathbf{ \Psi } (\tau) (\rho) :=
\begin{pmatrix}
\psi _{1} (\tau, \rho) \\
\psi _{2} (\tau, \rho)
\end{pmatrix} .
\end{align*}
The blowup solution is given by
\begin{align*}
\mathbf{ \Psi } ^{ \text{res} } (\tau) (\rho)
=
\begin{pmatrix}
\frac{T-t}{r} \psi ^{T} (t,r) \\
\frac{(T-t)^2}{r} \psi _{t}^{T} (t,r)
\end{pmatrix} \Bigg | _{
(t,r)=\mu ^{-1} (\tau,\rho)
} =
\begin{pmatrix}
\frac{1}{\rho} f_0 (\rho) \\
f_0'(\rho)
\end{pmatrix},
\end{align*}
i.e., it is static.
We linearize around $\mathbf{\Psi}^{\text{res}}$ by inserting the ansatz $\mathbf{ \Psi }= \mathbf{ \Psi } ^{ \text{res} } + \mathbf{ \Phi } $ into \eqref{free}.
For brevity we write
\begin{align*}
\eta (x):= \sin(2x) - 2x,\quad x \in \mathbb{R}
\end{align*}
and use Taylor's theorem to expand the nonlinearity around
$\frac{1}{\rho}f_0(\rho)$. We get
\begin{align*}
\sin \left( 2 \rho \psi _{1} \right) - 2 \rho \psi _{1} & = \eta \left( \rho \psi _{1} \right) = \eta \left( f_0 + \rho \phi _{1} \right) = \eta \left( f_0 \right) + \eta ^{\prime} \left( f_0 \right) \rho \phi _{1} + N(\rho \phi _{1}),
\end{align*}
where, by definition,
\begin{align*}
N (\rho \phi _{1} ) := \eta( f_0 + \rho \phi _{1}) - \eta ( f_0 ) - \eta ^{\prime} ( f_0 ) \rho \phi _{1}.
\end{align*}
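By the integral form of the Taylor remainder,
\begin{align*}
N (\rho \phi _{1} ) = (\rho \phi_1)^2 \int_0^1 (1-s)\, \eta'' \big( f_0 + s \rho \phi_1 \big)\, ds,
\end{align*}
and since $\eta''(x)=-4\sin(2x)$ is bounded, the nonlinearity vanishes quadratically, i.e., $|N(\rho \phi_1)| \leq 2 |\rho \phi_1|^2$.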
We plug the ansatz and the Taylor expansion into Eq.~\eqref{free} which yields the abstract evolution equation
\begin{align} \label{Evolution}
\left\{
\begin{array}{ll}
\partial _{\tau} \mathbf{ \Phi } (\tau) = \widetilde{\mathbf{ L } } \big( \mathbf{ \Phi } (\tau) \big) + \mathbf{ N } \big( \mathbf{ \Phi }( \tau ) \big ), & \mbox{for } \tau \in (0,+\infty) \\
\mathbf{ \Phi } (0)= \mathbf{U}(\mathbf{v},T),
\end{array}
\right.
\end{align}
for the perturbation
\begin{align*}
\mathbf{ \Phi } (\tau) (\rho)=
\begin{pmatrix}
\phi _{1} (\tau, \rho) \\
\phi _{2} (\tau, \rho)
\end{pmatrix}
=
\begin{pmatrix}
\psi _{1} (\tau, \rho) -\frac{1}{\rho} f_0 (\rho) \\
\psi _{2} (\tau, \rho) - f_0' (\rho)
\end{pmatrix}
\end{align*}
where
\begin{align}
&\widetilde{ \mathbf{ L } } := \widetilde{ \mathbf{ L } }_{0} + \mathbf{ L ^{\prime} }, \label{1} \\
& \widetilde{ \mathbf{ L } }_{0} \mathbf u (\rho):=
\begin{pmatrix}
- \rho u_1'(\rho)-u_1(\rho)+ u_2(\rho) \\
\Delta _{\rho,d+2}^{ \text{rad} } u_1(\rho) - \rho u_2'(\rho) - 2 u_{2}(\rho)
\end{pmatrix}, \label{2} \\
& \mathbf{ L ^{\prime} } \mathbf u(\rho):=
\begin{pmatrix}
0 \\
- \frac{d-1}{2} \frac{\eta' ( f_0(\rho) ) }{\rho ^2} u_{1}(\rho)
\end{pmatrix}, \label{3} \\
& \mathbf{ N }(\mathbf u) (\rho):=
\begin{pmatrix}
0 \\
- \frac{d-1}{2} \frac{N( \rho u_{1}(\rho) )}{\rho ^3}
\end{pmatrix}, \label{4}
\end{align}
for $\mathbf u=(u_1,u_2)$ and
\begin{align*}
\eta'(f_0(\rho))=2 \cos (2f_0(\rho)) -2 = -16(d-2) \frac{\rho ^2}{ \left( \rho^2 +d-2 \right)^2 }.
\end{align*}
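The last identity follows from the double-angle formula: with $x:=\frac{\rho}{\sqrt{d-2}}$ we have $f_0(\rho)=2\arctan x$ and therefore
\begin{align*}
\eta'(f_0(\rho)) = 2 \cos (4 \arctan x) - 2 = 2\, \frac{(1-x^2)^2-4x^2 - (1+x^2)^2}{(1+x^2)^2} = \frac{-16 x^2}{(1+x^2)^2} = -16(d-2) \frac{\rho^2}{(\rho^2+d-2)^2}.
\end{align*}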
Furthermore, the initial data are given by
\begin{align}
\label{5}
\mathbf \Phi(0)(\rho)=\mathbf U(\mathbf v,T)(\rho)=
\left (\begin{array}{c}
\frac{1}{\rho}f_0(\frac{T}{T_0}\rho) \\
\frac{T^2}{T_0^2}f_0'(\frac{T}{T_0}\rho) \end{array} \right )
-\left ( \begin{array}{c}
\frac{1}{\rho}f_0(\rho) \\ f_0'(\rho) \end{array} \right )
+\mathbf V(\mathbf v,T)(\rho)
\end{align}
where
\[ \mathbf{ V } (\mathbf{v},T) (\rho):=
\begin{pmatrix}
\frac{1}{\rho} F (T \rho) \\
\frac{T}{\rho} G (T \rho)
\end{pmatrix},~ \mathbf{v} :=
\begin{pmatrix}
F \\
G
\end{pmatrix} .
\]
\subsection{Strong light-cone solutions}
To proceed, we need to define what it means to be a solution to the evolution problem $\eqref{Evolution}$. We introduce the Hilbert space
\begin{align*}
\mathcal{H} := H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2} ) \times H_{\text{rad}}^{\frac{d+1}{2}} (\mathbb{B}^{d+2} ).
\end{align*}
Below we prove that the closure of the operator $\widetilde{\mathbf L}$, augmented with a suitable domain, generates a semigroup $\mathbf S(\tau)$ on $\mathcal H$.
This allows us to formulate $\eqref{Evolution}$ as an abstract integral equation via Duhamel's formula,
\begin{align} \label{Duhamel}
\mathbf{\Phi}(\tau)= \mathbf{S}(\tau) \mathbf{U}(\mathbf{v},T)+ \int _{0}^{\tau} \mathbf{S}(\tau - s) \mathbf{N} \big( \mathbf{\Phi} (s) \big)\, ds.
\end{align}
Eq.~\eqref{Duhamel} yields a natural notion of strong solutions in light-cones.
\begin{definition} \label{def}
We say that $\psi:C_{T} \longrightarrow \mathbb{R}$ is a solution to $\eqref{cauchy}$ if the corresponding $\mathbf{\Phi}:[0,\infty) \longrightarrow \mathcal{H}$ belongs to $C \big( [0,\infty);\mathcal{H} \big)$ and satisfies $\eqref{Duhamel}$ for all $\tau \ge 0$.
\end{definition}
\section{Proof of the theorem}
\subsection{Notation} Throughout we denote by $\sigma(\mathbf{L}),~\sigma_{p}(\mathbf{L})$ and $\sigma_{e}(\mathbf{L})$ the spectrum, point spectrum, and essential spectrum, respectively, of a linear operator $\mathbf{L}$. Furthermore, we write $\mathbf{R}_{\mathbf{L}}(\lambda):=\left(\lambda - \mathbf{L} \right)^{-1}$, $\lambda \in \rho(\mathbf{L})$, for the resolvent operator where $\rho (\mathbf{L}):=\mathbb{C} \setminus \sigma(\mathbf{L})$ stands for the resolvent set. As usual, $a \lesssim b$ means $a \leq cb$ for an absolute, strictly positive constant $c$ which may change from line to line. Similarly, we write $a \simeq b$ if $a \lesssim b$ and $b \lesssim a$.
\subsection{Functional setting}
In the following we consider radial Sobolev functions $\hat{u} : \mathbb{B}_{R}^{d+2} \rightarrow \mathbb{C}$, that is, $\hat{u} (\xi)=u(|\xi|)$ for all $\xi \in \mathbb{B}_{R}^{d+2}$ where
$u:(0,R) \rightarrow \mathbb{C}$. In particular, we define
\begin{align*}
u \in
H_{\text{rad}}^{m} ( \mathbb{B}_{R}^{d+2} ) \iff \hat{u} \in H^{m} (\mathbb{B}_{R}^{d+2} ) := W^{m,2} (\mathbb{B}_{R}^{d+2} ).
\end{align*}
The function space $H_{\text{rad}}^{m} (\mathbb{B}_R^{d+2} )$ becomes a Banach space endowed with the norm
\begin{align*}
\| u \|_{{H^{m}_{\text{rad}}} ( \mathbb{B}_{R}^{d+2} )} = \| \hat{u} \|_{ {H}^{m} (\mathbb{B}_{R}^{d+2} )}.
\end{align*}
From now on, we shall not distinguish between $u(|\cdot|)$ and $\hat{u}$. In addition, we introduce the Hilbert space
\begin{align} \label{H}
\mathcal{H} := H_{\text{rad}}^{m} (\mathbb{B}^{d+2} ) \times H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ),\quad m \equiv m_{d}:=\frac{d+3}{2}
\end{align}
associated with the induced norm
\begin{align*}
\| \mathbf{u} \|^{2} = \left \| (u_{1},u_{2}) \right \|^{2} := \| u_{1} \|_{H_{\text{rad}}^{m} (\mathbb{B}^{d+2} )}^{2} + \| u_{2} \|_{H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) }^{2}.
\end{align*}
\subsection{Well-posedness of the linearized problem}
We start with the study of the linearized problem and we convince ourselves that it is well-posed. Recall that the linear operator is given by $\eqref{1}$. To proceed, we follow \cite{DonSch16} and define the domain of the free part by
\begin{align*}
\mathcal{D} (\widetilde{ \mathbf{L} }_{0}) := \Big \{ \mathbf{u} \in C^{\infty}(0,1)^2 \cap \mathcal H: w_{2} \in C^{2}\left([0,1]\right),~w_{1} \in C^{3} \left([0,1] \right),~ w_{1}^{\prime \prime} (0)=0 \Big \},
\end{align*}
where, for all $\rho \in [0,1]$ and $j=1,2$,
\begin{align*}
w_{j} (\rho) := D_{d+2} u_{j} (\rho) := \Big ( \frac{1}{\rho} \frac{d}{d\rho} \Big )^{ \frac{d-1}{2} } \big( \rho ^{d} u_{j}(\rho) \big) = \sum _{n=0}^{ \frac{d-1}{2} } c _{n} \rho^{n+1} u_{j}^{(n)} (\rho),
\end{align*}
for some strictly positive constants $c_{n}~ (n=0,1,\dots,\frac{d-1}{2})$. Note that the density of $C^{\infty} ( \overline{ \mathbb{B}^{d+2} } )$ in $H^{m}(\mathbb{B}^{d+2})$ implies the density of
\begin{align*}
\big( C_{\text{even}}^{\infty} [0,1] \big)^2 := \Big \{ \mathbf{u} \in \big( C^{\infty} [0,1] \big)^2:~~\mathbf{u}^{(2k+1)}(0)=0,~~k=0,1,2,\dots \Big \} \subset \mathcal{D} ( \widetilde{ \mathbf{L} }_{0})
\end{align*}
in $\mathcal{H}$ which in turn proves the density of $\mathcal{D} ( \widetilde{ \mathbf{L} }_{0})$ in $\mathcal{H}$. In other words, $\overline{ \mathcal{D} ( \widetilde{ \mathbf{L} }_{0}) } = \mathcal{H}$ and $\widetilde{ \mathbf{L} }_{0}$ is densely defined.
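To illustrate the operator $D_{d+2}$, note that in the lowest admissible dimension $d=3$,
\begin{align*}
D_{5} u_{j} (\rho) = \frac{1}{\rho} \frac{d}{d\rho} \big( \rho^{3} u_{j}(\rho) \big) = 3 \rho\, u_{j}(\rho) + \rho^{2} u_{j}'(\rho),
\end{align*}
i.e., $c_{0}=3$ and $c_{1}=1$.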
\begin{prop} \label{growthestimatelinear}
The operator $\widetilde{ \mathbf{L} }_{0}: \mathcal{D}(\widetilde{ \mathbf{L} }_{0}) \subset \mathcal{H} \longrightarrow \mathcal{H} $ is closable and its closure $ \mathbf{L} _{0} : \mathcal{D}(\mathbf{L} _{0}) \subset \mathcal{H} \longrightarrow \mathcal{H} $ generates a strongly continuous one-parameter semigroup $( \mathbf{S}_{0}(\tau) )_{\tau \ge 0}$ of bounded operators on $\mathcal{H}$ satisfying the growth estimate
\begin{align} \label{growth}
\| \mathbf{S} _{0} (\tau) \| \leq M e^{-\tau}
\end{align}
for all $\tau\geq 0$ and some constant $M\geq 1$.
In addition, the operator $\mathbf{L}:= \mathbf{L}_{0} + \mathbf{L}^{\prime}: \mathcal{D}(\mathbf{L}) \subset \mathcal{H} \longrightarrow \mathcal{H},~\mathcal{D}(\mathbf{L}) =\mathcal{D}(\mathbf{L}_{0})$, is the generator of a strongly continuous semigroup $( \mathbf{S}(\tau) )_{\tau \ge 0}$ on $\mathcal H$
and $\mathbf L': \mathcal H\to \mathcal H$ is compact.
\end{prop}
\begin{proof}
The fact that $\widetilde{ \mathbf{L} }_{0}$ is closable and its closure generates a semigroup satisfying the growth estimate $\eqref{growth}$ follows from Proposition 4.9 in \cite{DonSch16} by replacing $d$ in \cite{DonSch16} with $d+2$ and setting $p=3$. It remains to apply the Bounded Perturbation Theorem to show that $\mathbf{L}:=\mathbf{L}_{0} + \mathbf{L}^{\prime}$ is the generator of a strongly continuous semigroup $( \mathbf{S}(\tau) )_{\tau \ge 0}$. In fact, we prove that $\mathbf{L}^{\prime}: \mathcal{H} \longrightarrow \mathcal{H}$, defined in $\eqref{3}$, is compact. We pick an arbitrary sequence $(\mathbf{u}_{n})_{n \in \mathbb{N}} \subseteq \mathcal{H}$ that is uniformly bounded. By Lemma 4.2 in \cite{DonSch16}, $(D_{d+2} u_{1,n} )_{n \in \mathbb{N}}$ is uniformly bounded in $H^{2} (0,1)$ and the compactness of the Sobolev embedding $H^{2}(0,1) \xhookrightarrow{} H^{1}(0,1)$ implies the existence of a subsequence, again denoted by $(D_{d+2} u_{1,n} )_{n \in \mathbb{N}}$, which is Cauchy in $H^{1}(0,1)$. Hence, for any $n,m \in \mathbb{N}$ sufficiently large, we get
\begin{align*}
\| \mathbf{L}^{\prime} \mathbf{u}_{n} - \mathbf{L}^{\prime}
\mathbf{u}_{m} \| & \lesssim \left \| \frac{ \eta'
\circ f_0 }{|\cdot|^2} \right \|_{W^{1,\infty}(0,1)} \| D_{d+2} u_{1,n} - D_{d+2} u_{1,m} \|_{H^{1}(0,1)} \\
& \simeq \left \| \frac{ 1}{ (|\cdot|^{2} + d-2)^2 }\right \|_{W^{1,\infty}(0,1)} \|D_{d+2} u_{1,n} - D_{d+2} u_{1,m} \|_{H^{1}(0,1)} \\
& \simeq \| D_{d+2} u_{1,n} - D_{d+2} u_{1,m} \|_{H^{1}(0,1)},
\end{align*}
which shows that $(\mathbf{L}^{\prime} \mathbf{u}_{n})_{n \in \mathbb{N}}$ is Cauchy in $\mathcal{H}$. This proves that $\mathbf{L}^{\prime}$ is compact.
\end{proof}
\subsection{The spectrum of the free operator} \label{O1}
We can use the previous decay estimate for the semigroup $( \mathbf{S}_{0}(\tau) )_{\tau \ge 0}$ to locate the spectrum of the free operator $\mathbf L_0$. Indeed, by \cite{EngNag00}, p.~55, Theorem 1.10, we immediately infer
\begin{equation}
\label{spectrumLo}
\sigma(\mathbf L_0)\subseteq \{\lambda\in \mathbb C: \text{Re}\lambda\leq -1\}.
\end{equation}
\subsection{The spectrum of the full linear operator} Next, we need to derive a suitable growth estimate for the semigroup $\mathbf{S}(\tau)$ and therefore turn our attention to the spectrum of the operator $\mathbf{L}$. To begin with, we consider the point spectrum.
\begin{prop} \label{OhaioProp}
We have
\begin{align*}
\sigma_p (\mathbf{L}) \subseteq \{ \lambda \in \mathbb{C}:~~\mathrm{Re} \lambda <0 \} \cup \{1\}.
\end{align*}
\end{prop}
\begin{proof}
We argue by contradiction and assume there exists a $\lambda \in \sigma_p(\mathbf{L}) \setminus \{ 1\}$ with $\mathrm{Re}\lambda \geq 0$. The latter means that there exists an element $\mathbf{u}=(u_{1},u_{2}) \in \mathcal{D} (\mathbf{L}) \setminus \{ 0 \}$ such that $\mathbf{u} \in$ $\ker(\lambda-\mathbf{L})$. A straightforward calculation shows that the spectral equation $(\lambda-\mathbf{L}) \mathbf{u} =0$ implies
\begin{small}
\begin{align*}
\big (1-\rho ^2 \big) u_{1}^{\prime \prime} (\rho) + \Bigg( \frac{d+1}{\rho} - 2(\lambda +2) \rho \Bigg)u_{1}^{\prime} (\rho)- \Bigg( (\lambda + 1) (\lambda +2)+ \frac{d-1}{2} V(\rho) \Bigg ) u_{1} (\rho) =0,
\end{align*}
\end{small}
for $\rho \in (0,1)$, where
\begin{align*}
V(\rho):=\frac{\eta'(f_0(\rho))}{\rho ^2} = \frac{-16(d-2)}{(\rho^2+d-2 )^2}.
\end{align*}
Since $\mathbf u\in \mathcal H$, we see that $u_{1}$ must lie in $H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2})$. To proceed, we set $v_{1}(\rho):=\rho u_{1}(\rho)$. A straightforward computation implies that $v_{1}$ solves the second order ordinary differential equation
\begin{align}
\label{eq:specode}
\big (1-\rho ^2 \big) v_{1}^{\prime \prime} (\rho) + \Bigg( \frac{d-1}{\rho} - 2(\lambda +1) \rho \Bigg)v_{1}^{\prime} (\rho)- \Bigg( \lambda (\lambda +1)+ \frac{d-1}{2}\hat{V}(\rho) \Bigg ) v_{1} (\rho) =0,
\end{align}
for $\rho \in (0,1)$, where
\begin{align*}
\hat{V}(\rho):= 2\frac{ \rho ^4 -6(d-2)\rho ^2 +(d-2)^2 }{\rho ^2 ( \rho^2 +d-2 )^2 }.
\end{align*}
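In fact, the substitution $v_{1}(\rho)=\rho u_{1}(\rho)$ merely shifts the potential by the centrifugal term $\frac{2}{\rho^2}$,
\begin{align*}
\hat{V}(\rho) = \frac{2}{\rho^{2}} + V(\rho) = \frac{2}{\rho^{2}} - \frac{16(d-2)}{(\rho^{2}+d-2)^{2}} = 2\frac{ \rho ^4 -6(d-2)\rho ^2 +(d-2)^2 }{\rho ^2 ( \rho^2 +d-2 )^2 }.
\end{align*}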
We remark that this is the spectral equation studied in \cite{CosDonXia16, CosDonGlo16}.
Since all coefficients in \eqref{eq:specode} are smooth functions in $(0,1)$, we immediately get the a priori regularity $v_{1} \in C^{\infty}(0,1)$.
We claim that $v_1\in C^\infty[0,1]$. To prove this, we employ Frobenius' method. The point $\rho =0$ is a regular singularity with Frobenius indices $s_{1}=1$ and $s_{2}=-(d-1)$. Therefore, by Frobenius theory, there exists a solution of the form
\begin{align*}
v^{1}_{1} (\rho)=\rho \sum _{i=0}^{\infty} x_{i} \rho ^{i} = \sum _{i=0}^{\infty} x_{i} \rho ^{i+1},
\end{align*}
which is analytic locally around $\rho=0$. Moreover, since $s_{1}-s_{2}=d \in \mathbb{N}_{\text{odd}}$, there exists a second linearly independent solution of the form
\begin{align*}
v^{2}_{1} (\rho) &=C \log(\rho) v^{1}_{1} (\rho) + \rho ^{-(d-1)} \sum _{i=0}^{\infty} y_{i} \rho ^{i}
\end{align*}
for some constant $C\in \mathbb C$ and $y_0=1$.
However, $v^{2}_{1}(\rho)/\rho$ does not lie in the Sobolev space $H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2} )$ due to the strong singularity in the second term, no matter the value of the constant $C$. Consequently, $v_1$ must be a multiple of $v_1^1$ and we infer $v_1\in C^\infty[0,1)$. Similarly, the point $\rho =1$ is a regular singularity with Frobenius indices
$s_{1}=0$ and $s_{2}=\frac{d-1}{2}-\lambda$.
Now we need to distinguish different cases. If $\frac{d-1}{2} -\lambda \notin \mathbb{Z}$, we have two linearly independent solutions of the form
\begin{align*}
& v_{1} ^{1}(\rho) =\sum_{i=0}^\infty x_i (1-\rho)^i, \\
& v_{1} ^{2}(\rho)=(1-\rho)^{\frac{d-1}{2}-\lambda}\sum_{i=0}^\infty y_i (1-\rho)^i
\end{align*}
with $x_0=y_0=1$. The solution $v_{1} ^{2}(\rho)/\rho$ does not belong to the Sobolev space $H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2} )$ and thus, $v_1\in C^\infty[0,1]$. In the case
$\frac{d-1}{2} - \lambda=:k \in \mathbb{N}_{0}$, we have two fundamental solutions of the form
\begin{align*}
v_{1} ^{1}(\rho)&=(1-\rho)^{k}\sum_{i=0}^\infty x_i(1-\rho)^i,\qquad x_0=1 \\
v_{1} ^{2}(\rho)&= \sum_{i=0}^\infty y_i (1-\rho)^i+C\log (1-\rho)v_1^1(\rho),\qquad y_0=1
\end{align*}
near $\rho= 1$. By assumption, $\mathrm{Re}\lambda\geq 0$ and thus, $k \leq \frac{d-1}{2}$. Hence, $v_1^2(\rho)/\rho$ does not lie in the Sobolev space $H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2} )$ unless $C=0$ and we conclude $v_1\in C^\infty[0,1]$.
Finally, if $\frac{d-1}{2}-\lambda=:-k$ is a negative integer, the fundamental system around $\rho=1$ has the form
\begin{align*}
v_1^1(\rho)&=\sum_{i=0}^\infty x_i(1-\rho)^i \\
v_1^2(\rho)&=C\log(1-\rho)v_1^1(\rho)+(1-\rho)^{-k}\sum_{i=0}^\infty y_i (1-\rho)^i
\end{align*}
with $x_0=y_0=1$.
Again, $v_1^2(\rho)/\rho$ does not belong to $H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2} )$ and we infer $v_1\in C^\infty[0,1]$ also in this case.
In summary, we have found a nontrivial solution $v_1\in C^\infty[0,1]$ to Eq.~\eqref{eq:specode} with $\mathrm{Re}\lambda \geq 0$, $\lambda\not=1$, but this contradicts \cite{CosDonXia16, CosDonGlo16}.
\end{proof}
The fact that $\mathbf L'$ is compact implies that the result on the point spectrum from Proposition \ref{OhaioProp} is already sufficient to obtain the same information on the full spectrum.
\begin{cor}
\label{cor:spec}
We have
\begin{align*}
\sigma (\mathbf{L}) \subseteq \{ \lambda \in \mathbb{C}:~~\mathrm{Re} \lambda <0 \} \cup \{1\}.
\end{align*}
\end{cor}
\begin{proof}
Suppose there exists a $\lambda\in \sigma(\mathbf L)\setminus \{1\}$ with $\mathrm{Re}\lambda\geq 0$.
Then $\lambda\notin\sigma(\mathbf L_0)$ and thus, $\mathbf R_{\mathbf L_0}(\lambda)$ exists.
From the identity $\lambda-\mathbf L=[1-\mathbf L'\mathbf R_{\mathbf L_0}(\lambda)](\lambda-\mathbf L_0)$ we see that $1\in \sigma(\mathbf L'\mathbf R_{\mathbf L_0}(\lambda))$. Since $\mathbf L'\mathbf R_{\mathbf L_0}(\lambda)$ is compact, it follows that $1\in \sigma_p(\mathbf L'\mathbf R_{\mathbf L_0}(\lambda))$ and thus, there exists a nontrivial $\mathbf f \in \mathcal H$ such that $[1-\mathbf L'\mathbf R_{\mathbf L_0}(\lambda)]\mathbf f=0$.
Consequently, $\mathbf u:=\mathbf R_{\mathbf L_0}(\lambda)\mathbf f\not= 0$
satisfies $(\lambda-\mathbf L)\mathbf u=0$ and thus, $\lambda\in \sigma_p(\mathbf L)$.
This contradicts Proposition \ref{OhaioProp}.
\end{proof}
Next, we provide a uniform bound on the resolvent.
To this end, we define
\begin{align*}
\Omega _{\epsilon,R} := \{ \lambda \in \mathbb{C}:~~~\text{Re}\lambda \geq -1+\epsilon, |\lambda|\geq R \}
\end{align*}
for $\epsilon, R >0$.
\begin{prop} \label{O2}
Let $\epsilon >0$. Then there exist constants $R_{\epsilon}, C_{\epsilon}>0$ such that the resolvent $\mathbf{R}_{\mathbf{L}}$ exists on $\Omega _{\epsilon,R_\epsilon}$ and satisfies
\begin{align*}
\| \mathbf{R} _{\mathbf{L} } (\lambda)\| \leq C_{\epsilon}
\end{align*}
for all $\lambda\in \Omega_{\epsilon,R_\epsilon}$.
\end{prop}
\begin{proof}
Fix $\epsilon>0$ and take $\lambda\in \Omega_{\epsilon,R}$ for an arbitrary $R>0$.
Then $\lambda\in \rho(\mathbf L_0)$ and the identity $(\lambda-\mathbf L)=[1-\mathbf L' \mathbf R_{\mathbf L_0}(\lambda)](\lambda-\mathbf L_0)$ shows that
$\mathbf R_{\mathbf L}(\lambda)$ exists if and only if
$1-\mathbf L'\mathbf R_{\mathbf L_0}(\lambda)$ is invertible.
By a Neumann series argument this is the case if $\|\mathbf L'\mathbf R_{\mathbf L_0}(\lambda)\|<1$.
To prove smallness of $\mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L}_{0}} (\lambda)$, we recall the definition of $\mathbf L'$, Eq.~\eqref{3},
\begin{align*}
\mathbf{ L ^{\prime} } \mathbf{u} (\rho)=
\begin{pmatrix}
0 \\
- \frac{d-1}{2} V(\rho )u _{1} (\rho)
\end{pmatrix},
\quad V(\rho)=\frac{\eta'(f_0(\rho))}{\rho ^2} = \frac{-16(d-2)}{(\rho^2+d-2 )^2}.
\end{align*}
Let $\mathbf u=\mathbf R_{\mathbf L_0}(\lambda)\mathbf f$ or, equivalently,
$(\lambda - \mathbf{L}_{0}) \mathbf{u} =\mathbf{f}$. The latter equation implies
\begin{align*}
(\lambda +1) u_{1} (\rho ) =u_{2} (\rho)- \rho u_{1}^{\prime} (\rho) + f_{1} (\rho).
\end{align*}
Now we use Lemma 4.1 from \cite{DonSch16} and $\|V^{(k)}\|_{L^\infty(0,1)}\lesssim 1$ for all $k\in \{0,1,\dots,m-1\}$ to obtain
\begin{align*}
|\lambda +1| \| \mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L} _{0}} (\lambda) \mathbf{f} \| & = |\lambda +1| \| \mathbf{L}^{\prime} \mathbf{u} \| \simeq \big \| V \big( u_{2} - (\cdot) u_{1}^{\prime} +f_{1} \big) \big \|_{ H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) } \\
& \lesssim \| u_{2} \|_{ H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) } + \| (\cdot) u_{1}^{\prime} \|_{ H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) } + \| f_{1} \|_{ H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) } \\
& \lesssim \| u_{2} \|_{ H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) } + \| u_{1} \|_{ H_{\text{rad}}^{m} (\mathbb{B}^{d+2} ) } + \| f_{1} \|_{ H_{\text{rad}}^{m-1} (\mathbb{B}^{d+2} ) } \\
& \simeq \| \mathbf{u} \| + \| \mathbf{f} \| \lesssim \Big( \frac{1}{\text{Re}\lambda +1} + 1 \Big) \| \mathbf{f} \| \\
&\lesssim \|\mathbf f\|,
\end{align*}
where we have used the bound
\[ \|\mathbf u\|=\|\mathbf R_{\mathbf L_0}(\lambda)\mathbf f\|\leq \frac{M}{\mathrm {Re}\lambda+1}\|\mathbf f\| \]
which follows from semigroup theory, see \cite{EngNag00}, p.~55, Theorem 1.10.
In other words,
\begin{align*}
\| \mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L} _{0}} (\lambda) \| \lesssim \frac{1}{ |\lambda +1| } \leq \frac{1}{|\lambda| -1} \leq \frac{1}{R-1}
\end{align*}
and by choosing $R$ sufficiently large, we can achieve the desired $\|\mathbf L'\mathbf R_{\mathbf L_0}(\lambda)\|<1$.
As a consequence, $[1- \mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L} _{0}} (\lambda) ]^{-1}$
exists and we obtain the bound
\begin{align*}
\| \mathbf{R}_{\mathbf{L}} (\lambda) \| & = \| \mathbf{R}_{\mathbf{L}_{0}} (\lambda) [1- \mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L} _{0}} (\lambda) ]^{-1} \| \\
& \leq \| \mathbf{R}_{\mathbf{L}_{0}} (\lambda) \| \| [1- \mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L} _{0}} (\lambda) ]^{-1} \| \\
& \leq \| \mathbf{R}_{\mathbf{L}_{0}} (\lambda) \| \sum _{i=0}^{\infty} \| \mathbf{L}^{\prime} \mathbf{R}_{\mathbf{L} _{0}} (\lambda) \|^{i} \\
& \leq C_\epsilon.
\end{align*}
\end{proof}
\subsection{The eigenspace of the isolated eigenvalue} In this section, we convince ourselves that the eigenspace of the isolated eigenvalue $\lambda=1$ for the full linear operator $\mathbf{L}$ is spanned by
\begin{align} \label{g}
\mathbf{g} (\rho):=
\begin{pmatrix}
g_{1} (\rho) \\
g_{2} (\rho)
\end{pmatrix}
=
\begin{pmatrix}
\frac{1}{\rho ^2 +d-2} \\
\frac{2(d-2)}{(\rho ^2 + d-2)^2}
\end{pmatrix},
~~\rho \in [0,1].
\end{align}
Consequently, we are looking for all $\mathbf{u}=(u_{1},u_{2}) \in \mathcal{D} (\mathbf{L}) \setminus \{ 0 \}$ such that $\mathbf{u} \in \ker(1-\mathbf{L})$. A straightforward calculation shows that the spectral equation $(1-\mathbf{L}) \mathbf{u} =0$ is equivalent to the following system of ordinary differential equations,
\begin{align*}
\begin{cases}
u_{2} (\rho)= \rho u_{1}^{\prime} (\rho)+ 2 u_{1} (\rho), & \\
\big (1-\rho ^2 \big ) u_{1}^{\prime \prime} (\rho) + \Big( \frac{d+1}{\rho} - 6\rho \Big)u_{1}^{\prime} (\rho)- \Big( 6+ \frac{d-1}{2} \frac{\eta'(f_0(\rho))}{\rho ^2} \Big ) u_{1} (\rho) =0, \
\end{cases}
\end{align*}
for $\rho \in (0,1)$. One can verify that a fundamental system of the second equation is given by
\begin{align*}
\Big \{ \frac{1}{\rho ^2 + d-2},~~\frac{Q_{d-1}(\rho)}{\rho ^d (\rho ^2 + d-2)} \Big \}
\end{align*}
where $Q_{d-1}$ is a polynomial of degree $d-1$ with non-vanishing constant term. We can write the general solution for the second equation as
\begin{align*}
u_{1} (\rho) = C_{1} \frac{1}{\rho ^2 + d-2} + C_{2} \frac{Q_{d-1}(\rho)}{\rho ^d (\rho ^2 + d-2)}.
\end{align*}
We must ensure that $\mathbf{u} \in \mathcal{D}( \mathbf{L})$ which in particular implies that $u_{1}$ must lie in the Sobolev space $H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2} )$. This requirement yields $C_{2}=0$ which in turn gives $\mathbf u \in \langle \mathbf{g} \rangle$. In conclusion,
\begin{align} \label{ker}
\text{ker}(1-\mathbf{L}) = \langle \mathbf{g} \rangle,
\end{align}
as initially claimed.
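As a consistency check, note that $\mathbf{g}$ indeed satisfies the first equation of the system,
\begin{align*}
\rho g_{1}'(\rho) + 2 g_{1}(\rho) = \frac{-2\rho^{2}}{(\rho^{2}+d-2)^{2}} + \frac{2(\rho^{2}+d-2)}{(\rho^{2}+d-2)^{2}} = \frac{2(d-2)}{(\rho^{2}+d-2)^{2}} = g_{2}(\rho).
\end{align*}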
\subsection{Time evolution for the linearized problem}
We now focus on the time evolution for the linearized problem \eqref{Evolution}. Due to the presence of the eigenvalue $\lambda =1$, there exists a one-dimensional subspace $\langle \mathbf{g} \rangle$ of initial data for which the solution grows exponentially in time. We call this subspace the unstable space. On the other hand, initial data from the stable subspace lead to solutions that decay exponentially in time. As we will show now, these time evolution estimates can be established using semigroup theory together with the previous results on the spectra of the linear operators $\mathbf{L}_{0}$ and $\mathbf{L}$. To make this rigorous, we follow \cite{DonSch16} and use the fact that the unstable eigenvalue $\lambda=1$ is isolated to introduce a (non-orthogonal) projection $\mathbf{P}$. This projection decomposes the Hilbert space of initial data $\mathcal{H}$ into the stable and the unstable space.
Most importantly, we must ensure that $\langle \mathbf{g}\rangle$ is the only unstable direction in $\mathcal{H}$. This is the key statement of the following proposition and it is equivalent to the fact that
the algebraic multiplicity of the isolated eigenvalue $\lambda =1$,
\begin{align*}
m_{a} (\lambda =1):=\mathrm{rank}\, \mathbf{P} =\dim\mathrm{rg}\, \mathbf{P},
\end{align*}
is equal to one. We denote by $\mathcal{B}(\mathcal{H})$ the set of bounded operators from $\mathcal{H}$ to itself and prove the following result.
\begin{prop} \label{projection}
There exists a projection
\begin{align*}
\mathbf{P} \in \mathcal{B} (\mathcal{H}),\quad \mathbf{P}: \mathcal{H} \longrightarrow \langle \mathbf{g} \rangle,
\end{align*}
which commutes with the semigroup $\big( \mathbf{S}(\tau) \big) _{\tau \ge 0}$. In addition, we have
\begin{equation}
\mathbf{S}(\tau) \mathbf{P} \mathbf{f} = e ^{\tau} \mathbf{P} \mathbf {f}, \label{semigroup1}
\end{equation}
and there exist constants $C, \epsilon>0$ such that
\begin{equation}
\| (1-\mathbf{P}) \mathbf{S}(\tau) \mathbf{f} \| \leq C e^{-\epsilon \tau } \| (1- \mathbf{P}) \mathbf{f} \|,\label{semigroup2}
\end{equation}
for all $\mathbf {f} \in \mathcal{H}$ and $\tau \geq 0$.
\end{prop}
\begin{proof}
We argue along the lines of \cite{DonSch16}. Since the eigenvalue $\lambda =1$ is isolated, we can define the spectral projection
\begin{align*}
\mathbf{P}: \mathcal{H} \longrightarrow \mathcal{H}, \quad \mathbf{P} := \frac{1}{2 \pi i} \int _{\gamma} \mathbf{R}_{\mathbf{L}} (\mu) d \mu,
\end{align*}
where $\gamma : [0,2\pi] \longrightarrow \mathbb{C}$ is a positively oriented circle around $\lambda =1$ with radius so small that $\gamma \big( [0,2 \pi] \big) \subseteq \rho (\mathbf{L})$, see e.g.~\cite{Kat95}. The projection $\mathbf P$ commutes with the operator $\mathbf{L}$ and thus with the semigroup $\mathbf S(\tau)$. Moreover, $\mathbf{P}$ decomposes the Hilbert space as $\mathcal{H} =\mathcal M \oplus \mathcal N$, where $\mathcal M:=\rg \mathbf P$ and $\mathcal N:=\rg(1-\mathbf P)=\ker \mathbf{P}$. Most importantly, the operator $\mathbf{L}$ is decomposed accordingly into the parts $\mathbf{L}_{\mathcal M}$ and $\mathbf{L}_{\mathcal N}$ on $\mathcal M$ and $\mathcal N$, respectively. The spectra of these operators are given by
\begin{align} \label{spectrum}
\sigma \left( \mathbf L_{\mathcal N} \right ) = \sigma (\mathbf{L}) \setminus \{1\},\qquad \sigma \left( \mathbf{L}_{\mathcal M} \right ) = \{1\}.
\end{align}
We refer the reader to \cite{Kat95} for these standard results.
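The Riesz projection construction above can be illustrated numerically in finite dimensions. The following sketch (purely illustrative, not part of the proof; the matrix, contour radius, and node count are arbitrary choices) integrates the resolvent of a small matrix around an isolated eigenvalue and checks that the result is a rank-one projection commuting with the operator.

```python
import numpy as np

# Finite-dimensional analogue of the Riesz projection: integrate the
# resolvent (mu - L)^{-1} over a small circle around the isolated
# eigenvalue 1. The trapezoidal rule in the angle is spectrally accurate
# for this smooth periodic integrand.
L = np.diag([1.0, -0.5, -2.0])     # toy operator with spectrum {1, -0.5, -2}
n, r = 64, 0.25                    # quadrature nodes and contour radius
theta = 2 * np.pi * np.arange(n) / n
# P = (1 / 2 pi i) * contour integral of (mu - L)^{-1} dmu
P = (r / n) * sum(
    np.exp(1j * t) * np.linalg.inv((1.0 + r * np.exp(1j * t)) * np.eye(3) - L)
    for t in theta
)

assert np.allclose(P @ P, P, atol=1e-8)         # idempotent, hence a projection
assert np.linalg.matrix_rank(P, tol=1e-6) == 1  # rank P = algebraic multiplicity
assert np.allclose(P @ L, L @ P, atol=1e-8)     # P commutes with L
```

The same contour-integral formula defines the projection in the infinite-dimensional setting; only the rank computation requires the compactness argument of Step 1 below.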
To proceed, we break down the proof into the following steps: \\ \\
Step 1: We prove that $\rank\mathbf{P}:=\dim\rg\mathbf{P}<+\infty$. We argue by contradiction and assume that $\rank\mathbf{P}=+\infty$. Using \cite{Kat95}, p.~239, Theorem 5.28, the fact that $\mathbf{L}^{\prime}$ is compact (see Proposition $\ref{growthestimatelinear}$), and the fact that the essential spectrum is stable under compact perturbations (\cite{Kat95}, p.~244, Theorem 5.35), we obtain
\begin{align*}
\mathrm{rank}\,\mathbf{P} = +\infty \Longrightarrow 1 \in \sigma _{e} (\mathbf{L}) = \sigma _{e} (\mathbf{L} -\mathbf{L}^{\prime})=\sigma _{e}(\mathbf{L}_{0}) \subseteq \sigma (\mathbf{L}_{0}).
\end{align*}
This contradicts \eqref{spectrumLo}.
\\ \\
Step 2: We prove that $\langle \mathbf{g} \rangle=\mathrm{rg}\,\mathbf{P}$. It suffices to show $\mathrm{rg}\,\mathbf{P} \subseteq \langle \mathbf{g} \rangle$ since the reverse inclusion follows from the abstract theory. From Step 1, the operator $1-\mathbf{L}_{\mathcal M}$ acts on the finite-dimensional Hilbert space $\mathcal M=\rg \mathbf P$ and, from $\eqref{spectrum}$, $\lambda =0$ is its only spectral point. Hence, $1-\mathbf{L}_{\mathcal M}$ is nilpotent, i.e., there exists a $k\in \mathbb N$ such that
\begin{align*}
\big( 1-\mathbf{L}_{\mathcal M} \big)^{k} \mathbf{u}= 0
\end{align*}
for all $\mathbf{u} \in \mathrm{rg}\, \mathbf{P}$
and we assume $k$ to be minimal.
Recall $\eqref{ker}$ to see that the claim follows immediately if $k=1$. We proceed by contradiction and assume that $k\geq 2$. Then, there exists a nontrivial function $\mathbf{u} \in \rg \mathbf{P} \subseteq \mathcal{D}( \mathbf{L})$ such that
$(1-\mathbf L_{\mathcal M})\mathbf u$ is nonzero and belongs to $\ker(1-\mathbf L_{\mathcal M})\subseteq \ker(1-\mathbf L)=\langle\mathbf g\rangle$.
This means that $\mathbf{ u } \in \rg\mathbf{P} \subseteq \mathcal{D} (\mathbf{L})$ satisfies $(1- \mathbf{ L }) \mathbf{ u } = \alpha \mathbf{ g }$, for some $\alpha \in \mathbb{ C }\setminus \{0\}$. Without loss of generality we set $\alpha=-1$ and a straightforward computation shows that the first component of $\mathbf u$ solves the second order differential equation
\begin{align*}
\left(1-\rho ^2\right) u_{1}^{\prime \prime} (\rho) + \left( \frac{d+1}{\rho} -6 \rho \right ) u_{1} ^{\prime} (\rho) - \left ( 6 + \frac{d-1}{2} \frac{\eta'(f_0(\rho) ) }{\rho ^2} \right ) u_{1} (\rho) = G(\rho),
\end{align*}
for $\rho \in (0,1)$, where
\begin{align*}
G(\rho):= \frac{\rho ^2 +5(d-2)}{ (\rho ^2 + d-2)^2},~~\rho \in [0,1].
\end{align*}
In order to find the general solution to this equation, recall $\eqref{g}$ to see that
\begin{align*}
\hat{u}_{1} (\rho):= g_{1} (\rho) = \frac{1}{\rho ^2 +d-2},~~~\rho \in (0,1)
\end{align*}
is a particular solution to the homogeneous equation
\begin{align*}
\left(1-\rho ^2 \right) u_{1}^{\prime \prime} (\rho) + \left( \frac{d+1}{\rho} -6 \rho \right ) u_{1} ^{\prime} (\rho) - \left ( 6 + \frac{d-1}{2} \frac{\eta'(f_0(\rho) ) }{\rho ^2} \right ) u_{1} (\rho) = 0.
\end{align*}
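The fact that $\hat u_1$ solves the homogeneous equation can be checked numerically. Note that $\eta$ is not defined in this excerpt; the sketch below assumes $\eta(s)=\sin(2s)-2s$, a choice consistent with the potential here and with the kernel $4(d-1)\cos(2\,\cdot\,)$ of the nonlinearity derived later in this section (which forces $\eta'''(s)=-8\cos(2s)$).

```python
import numpy as np

# Verify that u1(rho) = 1/(rho^2 + d - 2) solves the homogeneous equation.
# Assumption (eta is not given in this excerpt): eta(s) = sin(2s) - 2s,
# so eta'(s) = 2 cos(2s) - 2.
d = 5
rho = np.linspace(0.05, 0.95, 200)
q = rho**2 + d - 2
u, du, d2u = 1 / q, -2 * rho / q**2, -2 / q**2 + 8 * rho**2 / q**3
f0 = 2 * np.arctan(rho / np.sqrt(d - 2))
potential = 6 + (d - 1) / 2 * (2 * np.cos(2 * f0) - 2) / rho**2
residual = (1 - rho**2) * d2u + ((d + 1) / rho - 6 * rho) * du - potential * u

assert np.max(np.abs(residual)) < 1e-10   # vanishes to machine precision
```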
To find another linearly independent solution, we use the Wronskian
\begin{align*}
\mathcal{W} (\rho) := (1-\rho ^2)^{\frac{d-5}{2} } \rho ^{-d-1}
\end{align*}
to obtain
\begin{align*}
\hat{u}_{2} (\rho) := \hat{u}_{1} (\rho) \int _{\rho _1}^{\rho} (1- x ^2)^{\frac{d-5}{2}} x^{-d-1} (x^2 + d-2)^2 dx,
\end{align*}
for some constant $\rho_{1} \in (0,1]$ and for all $\rho \in (0,1)$.
Note that we have the expansion
\[ \hat u_2(\rho)=\rho^{-d}\sum_{j=0}^\infty a_j\rho^j,\quad a_0\not= 0 \]
near $\rho=0$. Furthermore, if $d\geq 5$, $\hat u_2\in C^\infty(0,1]$ and we choose $\rho_1=1$ which yields the expansion
\[ \hat u_2(\rho)=(1-\rho)^{\frac{d-3}{2}}\sum_{j=0}^\infty b_j (1-\rho)^j,\qquad b_0\not= 0 \]
near $\rho=1$.
For $d=3$, we set $\rho_1=\frac12$ and the expansion of $\hat u_2$ near $\rho=1$ contains a term $\log(1-\rho)$.
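For the reader's convenience, we record the standard Frobenius computation behind these exponents (the potential term is regular at both endpoints and hence does not enter at leading order). Inserting the ansatz $u_1(\rho)=\rho^{s}\big(1+O(\rho)\big)$ at $\rho=0$, respectively $u_1(\rho)=(1-\rho)^{s}\big(1+O(1-\rho)\big)$ at $\rho=1$, into the homogeneous equation yields the indicial equations
\begin{align*}
\rho=0:&\quad s(s-1)+(d+1)s=0 \;\Longrightarrow\; s\in\{0,-d\}, \\
\rho=1:&\quad 2s(s-1)+(5-d)s=0 \;\Longrightarrow\; s\in\Big\{0,\tfrac{d-3}{2}\Big\}.
\end{align*}
This explains the exponent $-d$ in the expansion of $\hat u_2$ at $\rho=0$ and the exponent $\frac{d-3}{2}$ at $\rho=1$, as well as the logarithmic term for $d=3$, where the two indices at $\rho=1$ coincide.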
We invoke the variation of constants formula to see that $u_1$ can be expressed as
\begin{align*}
u_{1} (\rho) & = c_{1} \hat{u}_{1} (\rho) + c_{2} \hat{u}_{2} (\rho) \\
& + \hat{u}_{2} (\rho) \int _{0}^{\rho} \frac{ \hat{u}_{1}(y)G(y)y^{d+1} }{(1-y^2)^{ \frac{d-3}{2} } } dy - \hat{u}_{1} (\rho) \int _{0}^{\rho} \frac{ \hat{u}_{2}(y)G(y)y^{d+1} }{(1-y^2)^{ \frac{d-3}{2} } } dy,
\end{align*}
for some constants $c_{1}, c_{2} \in \mathbb{C}$ and for all $\rho \in (0,1)$.
The fact that $u_1\in H^{\frac{d+3}{2}}_\mathrm{rad}(\mathbb B^{d+2})$ implies
$c_2=0$ and we are left with
\begin{equation}
\label{eq:expru1}
u_{1} (\rho) = c_{1} \hat{u}_{1} (\rho)
+ \hat{u}_{2} (\rho) \int _{0}^{\rho} \frac{ \hat{u}_{1}(y)G(y)y^{d+1} }{(1-y^2)^{ \frac{d-3}{2} } } dy - \hat{u}_{1} (\rho) \int _{0}^{\rho} \frac{ \hat{u}_{2}(y)G(y)y^{d+1} }{(1-y^2)^{ \frac{d-3}{2} } } dy.
\end{equation}
If $d=3$, $\hat u_2(\rho)\simeq \log(1-\rho)$ near $\rho=1$ and thus, the last term in Eq.~\eqref{eq:expru1} stays bounded as $\rho\to 1-$ whereas the second term diverges unless
\[ \int_0^1 \frac{ \hat{u}_{1}(y)G(y)y^{d+1} }{(1-y^2)^{ \frac{d-3}{2} } } dy=0, \]
which, however, is impossible since the integrand is strictly positive on $(0,1)$.
This contradicts $u_1\in H^{\frac{d+3}{2}}_\mathrm{rad}(\mathbb B^{d+2})$ and we arrive at the desired $k=1$.
Next, we focus on $d\geq 5$, where the last term in Eq.~\eqref{eq:expru1} is smooth on $[0,1]$. To analyze the second term, we set
\begin{align}
\label{def:Id}
\mathcal{I} _{d}(\rho) := \hat{u}_{2} (\rho) \int _{0}^{\rho} \frac{ F_{d}(y) }{(1-y)^{ \frac{d-3}{2} } } dy, \quad F_{d}(y):= \frac{ \hat{u}_{1}(y)G(y)y^{d+1} }{(1+y)^{\frac{d-3}{2}}}=\frac{y^{d+1}(y^2+5(d-2))}{(1+y)^{\frac{d-3}{2}}(y^2+d-2)^3}.
\end{align}
Note that $F_{5}(1)\not= 0$ and thus, the expansion of $\mathcal I_5(\rho)$ near $\rho=1$ contains a term of the form $(1-\rho)\log(1-\rho)$.
Consequently, $\mathcal I_5''\notin L^2(\frac12,1)$ and this is a contradiction to $u_1\in H^4_{\mathrm{rad}}(\mathbb B^7)$.
The general case is postponed to the appendix (Proposition \ref{prop:Id}) where it is shown that the function $\mathcal I_d$ is not analytic at $\rho=1$. This implies that the expansion of $\mathcal{I}_d(\rho)$ near $\rho =1$ contains a term $(1-\rho)^{\frac{d-3}{2}}\log(1-\rho)$ which again contradicts $u_1\in H^{\frac{d+3}{2}}_\mathrm{rad}(\mathbb B^{d+2})$.\\ \\
Step 3: Finally, we prove the estimates $\eqref{semigroup1}$ and $\eqref{semigroup2}$ for the semigroup. First, note that $\eqref{semigroup1}$ follows immediately from the facts that $\lambda =1$ is an eigenvalue of $\mathbf{L}$ with eigenfunction $\mathbf{g}$ and $\rg\mathbf{P}=\langle \mathbf{g} \rangle$.
Furthermore, from Corollary \ref{cor:spec} and Proposition \ref{O2} we infer the existence of $C,\epsilon>0$ such that
\[ \|\mathbf R_{\mathbf L}(\lambda)(1-\mathbf P)\|\leq C \]
for all $\lambda \in \mathbb C$ with $\mathrm{Re}\lambda\geq -2\epsilon$.
Consequently, the Gearhart-Pr\"uss Theorem, see \cite{EngNag00}, p.~302, Theorem 1.11, yields the bound \eqref{semigroup2}.
\end{proof}
\subsection{Estimates for the nonlinearity}
The aim of this section is to establish a Lipschitz-type estimate for the nonlinearity. Recall that the nonlinear term in $\eqref{Evolution}$ is given by
\begin{align*}
\mathbf{ N } ( \mathbf{u} ) (\rho) =
\begin{pmatrix}
0 \\
\hat N (\rho, u_{1} (\rho))
\end{pmatrix}
:=
\begin{pmatrix}
0 \\
- \frac{d-1}{2} \frac{N( \rho u _{1} (\rho) )}{\rho ^3}
\end{pmatrix}.
\end{align*}
To begin with, we claim that
\begin{align*}
&\hat N(\rho,u_1(\rho)) \\
&=4 (d-1) u_{1}^{2} (\rho) \int _{0}^{1} \int _{0}^{1}\int _{0}^{1} \cos \left( 2z \left( f_0(\rho) +xy \rho u_{1} (\rho) \right) \right) \left( \frac{ f_0 (\rho)}{\rho} + xy u_{1} (\rho)\right) x dz dy dx. \\
\end{align*}
To see this, we use the fundamental theorem of calculus and the fact that $\eta ^{\prime \prime} (0)=0$ to write
\begin{align*}
N( \rho u _{1} (\rho) ) & = \eta( f_0 (\rho) + \rho u _{1} (\rho) ) - \eta ( f_0 (\rho)) - \eta ^{\prime} ( f_0 (\rho)) \rho u _{1} (\rho) \\
&= \int _{f_0 (\rho)}^{f_0(\rho) + \rho u_{1} (\rho)} \eta ^{\prime} (s) ds - \eta ^{\prime} ( f_0 (\rho)) \rho u _{1} (\rho) \\
&=\rho u_{1} (\rho) \int _{0}^{1} \eta ^{\prime} (f_0 (\rho) + x \rho u_{1} (\rho)) dx - \eta ^{\prime} ( f_0 (\rho)) \rho u _{1} (\rho) \\
&=\rho u_{1} (\rho) \int _{0}^{1}\left( \eta ^{\prime} (f_0 (\rho) + x \rho u_{1} (\rho)) - \eta ^{\prime} ( f_0(\rho)) \right) dx \\
&=\rho u_{1} (\rho) \int _{0}^{1}\left( \int _{f_0(\rho)}^{f_0 (\rho) + x\rho u_{1} (\rho)} \eta ^{\prime \prime} (s) ds \right) dx \\
&=\rho^{2} u_{1}^{2} (\rho) \int _{0}^{1}x \int _{0}^{1} \eta ^{\prime \prime} (f_0 (\rho) + xy \rho u_{1} (\rho)) dy dx \\
&=\rho^{2} u_{1}^{2} (\rho) \int _{0}^{1}x \int _{0}^{1} \int _{0}^{f_0(\rho) + xy \rho u_{1}(\rho)} \eta ^{\prime \prime \prime } (s) ds dy dx \\
&=\rho^{2} u_{1}^{2} (\rho) \int _{0}^{1}x \int _{0}^{1} \int _{0}^{1} \eta ^{\prime \prime \prime } \left( (f_0(\rho) +xy \rho u_{1} (\rho))z \right) \left( f_0 (\rho) + xy \rho u_{1} (\rho)\right) dz dy dx \\
&=\rho^{3} u_{1}^{2} (\rho) \int _{0}^{1} x \int _{0}^{1} \int _{0}^{1} \eta ^{\prime \prime \prime } \left( (f_0(\rho) +xy \rho u_{1} (\rho))z \right) \left(\frac{ f_0 (\rho)}{\rho} + xy u_{1} (\rho)\right) dz dy dx. \\
\end{align*}
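The triple-integral representation can be cross-checked numerically against the closed form it was derived from. As before, $\eta$ is not defined in this excerpt; the sketch assumes $\eta(s)=\sin(2s)-2s$, for which the Taylor remainder is $N(x)=\sin(2f_0+2x)-\sin(2f_0)-2x\cos(2f_0)$.

```python
import numpy as np

# Compare the triple-integral formula for Nhat with the closed form,
# under the assumption eta(s) = sin(2s) - 2s (eta is not in this excerpt).
d, rho, zeta = 5, 0.7, 0.3
f0 = 2 * np.arctan(rho / np.sqrt(d - 2))
x = rho * zeta
closed = -(d - 1) / 2 * (np.sin(2 * f0 + 2 * x) - np.sin(2 * f0)
                         - 2 * x * np.cos(2 * f0)) / rho**3

# Gauss-Legendre quadrature, tensorized over [0,1]^3
nodes, w = np.polynomial.legendre.leggauss(20)
t, w = (nodes + 1) / 2, w / 2
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
W = w[:, None, None] * w[None, :, None] * w[None, None, :]
integrand = np.cos(2 * Z * (f0 + X * Y * rho * zeta)) * (f0 / rho + X * Y * zeta) * X
triple = 4 * (d - 1) * zeta**2 * np.sum(W * integrand)

assert abs(triple - closed) < 1e-10
```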
For later purposes, we note that the function
\begin{align*}
\hat N (\rho, \zeta)=4 (d-1) \zeta ^{2} \int _{0}^{1} \int _{0}^{1} \int _{0}^{1} \cos \left( 2z \left( f_0(\rho) +xy \rho \zeta \right) \right) \left( \frac{ f_0 (\rho)}{\rho} + xy \zeta \right) x dz dy dx,
\end{align*}
defined for all $(\rho, \zeta) \in [0,1] \times \mathbb{R},$ is perfectly smooth in both variables since
\begin{align*}
\frac{f_0 (\rho)}{\rho} = \frac{2}{\rho} \arctan \left ( \frac{\rho}{\sqrt{d-2}} \right)
\end{align*}
is smooth at $\rho=0$.
Moreover, we define
\begin{align} \label{DefM}
M (\rho, \zeta) := \partial _{\zeta} \hat N (\rho, \zeta) = 4 (d-1) \left ( A(\rho, \zeta) + B(\rho, \zeta) +C(\rho, \zeta) +D(\rho, \zeta) \right),
\end{align}
where
\begin{align*}
& A(\rho,\zeta):= 2 \frac{f_0(\rho)}{\rho} \zeta \int _{0}^{1} \int_{0}^{1} \int _{0}^{1} \cos \left( 2z \left( f_0 (\rho) +xy \rho \zeta \right) \right) x dz dy dx, \\
& B(\rho, \zeta):= -2 f_0(\rho) \zeta ^{2} \int_{0}^{1} \int_{0}^{1} \int_{0}^{1} \sin \left( 2z \left(f_0 (\rho) + xy \rho \zeta \right) \right) x^{2} y z dz dy dx, \\
& C(\rho, \zeta):= 3 \zeta ^{2} \int_{0}^{1} \int_{0}^{1} \int_{0}^{1} \cos \left( 2z \left( f_0(\rho) + xy \rho \zeta \right) \right) x^{2} y dz dy dx, \\
& D(\rho, \zeta):= -2 \rho \zeta ^{3} \int_{0}^{1} \int_{0}^{1} \int_{0}^{1} \sin \left( 2z \left( f_0 (\rho) + xy \rho \zeta \right) \right) x^{3} y^{2} zdz dy dx.
\end{align*}
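As a sanity check on the four terms $A$, $B$, $C$, $D$, one can confirm numerically that their sum reproduces $\partial_\zeta \hat N$ by comparing with a central finite difference of the triple-integral representation (illustrative only; same quadrature setup as above).

```python
import numpy as np

# Verify M(rho, zeta) = d/dzeta Nhat(rho, zeta) via a central difference.
d, rho = 5, 0.7
f0 = 2 * np.arctan(rho / np.sqrt(d - 2))
nodes, w = np.polynomial.legendre.leggauss(20)
t, w = (nodes + 1) / 2, w / 2
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
W = w[:, None, None] * w[None, :, None] * w[None, None, :]

def Nhat(zeta):
    arg = 2 * Z * (f0 + X * Y * rho * zeta)
    return 4 * (d - 1) * zeta**2 * np.sum(W * np.cos(arg) * (f0 / rho + X * Y * zeta) * X)

def M(zeta):
    arg = 2 * Z * (f0 + X * Y * rho * zeta)
    A = 2 * (f0 / rho) * zeta * np.sum(W * np.cos(arg) * X)
    B = -2 * f0 * zeta**2 * np.sum(W * np.sin(arg) * X**2 * Y * Z)
    C = 3 * zeta**2 * np.sum(W * np.cos(arg) * X**2 * Y)
    D = -2 * rho * zeta**3 * np.sum(W * np.sin(arg) * X**3 * Y**2 * Z)
    return 4 * (d - 1) * (A + B + C + D)

h, zeta = 1e-5, 0.3
assert abs((Nhat(zeta + h) - Nhat(zeta - h)) / (2 * h) - M(zeta)) < 1e-8
```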
We denote by $\mathcal{B}_{\delta} \subseteq \mathcal{H}$ the ball of radius $\delta$ in $\mathcal{H}$ centered at zero, i.e.,
\begin{align*}
\mathcal{B}_{\delta}:= \left \{\mathbf{u} \in \mathcal{H}:~\left \| \mathbf{u} \right \|= \left \| (u_{1},u_{2}) \right \|_{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) \times H_{\text{rad}}^{\frac{d+1}{2}} (\mathbb{B}^{d+2}) }
\leq \delta \right \}.
\end{align*}
The main result of this section is the following Lipschitz-type estimate.
\begin{lemma}
Let $\delta>0$.
Then we have
\begin{align} \label{Lipschitz}
\big \| \mathbf{N (u)} - \mathbf{N(v)}\big \| \lesssim (\| \mathbf{u} \| +\| \mathbf{v} \| ) \| \mathbf{u}-\mathbf{v} \|
\end{align}
for all $\mathbf{u}, \mathbf{v} \in \mathcal{B}_{\delta}$.
\end{lemma}
\begin{proof}
We fix $\delta>0$, pick two elements $\mathbf{u}, \mathbf{v} \in \mathcal{B}_{\delta}$, and define the auxiliary function
\begin{align*}
\zeta (\sigma)(\rho)= \sigma u_{1}(\rho) + (1-\sigma) v_{1} (\rho),
\end{align*}
for $\rho \in (0,1)$ and $\sigma \in [0,1]$. The triangle inequality implies
\begin{align*}
\mathbf{u},\mathbf{v} \in \mathcal{B}_{\delta} \Longrightarrow \left \| u_{1} \right \|_{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2})} \leq \delta ,~\left \| v_{1} \right \|_{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2})} \leq \delta
\Longrightarrow \left \| \zeta (\sigma) \right \| _{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \leq \delta,
\end{align*}
for all $\sigma \in [0,1]$. In other words,
\begin{align*}
\zeta (\sigma) \in \mathscr{B}_{\delta}:= \left \{f \in H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) :~\left \| f \right \|_{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \leq \delta \right \},
\end{align*}
for all $\sigma \in [0,1]$. Now, we claim that to show $\eqref{Lipschitz}$, it suffices to establish the estimate
\begin{align} \label{M}
\left \| M(\cdot, f(\cdot) ) \right \|_{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \lesssim \left \| f \right \|_{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) }
\end{align}
for all $f \in \mathscr{B}_{\delta}$,
where $M$ is given by $\eqref{DefM}$. To see this, we use the algebra property
\begin{align*}
\| fg \|_{ H ^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \lesssim \| f \|_{ H ^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \| g \|_{ H ^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } ,
\end{align*}
which holds since $\frac{d+3}{2}>\frac{d+2}{2}$,
to estimate
\begin{align*}
\big \| \mathbf{N(u)} -\mathbf{N(v)} \big\| &= \big \| \hat N (\cdot, u_{1}(\cdot) ) - \hat N (\cdot , v_{1}(\cdot)) \big \| _{H_{\text{rad}}^{\frac{d+1}{2}} (\mathbb{B}^{d+2}) } \\
&\leq
\big \| \hat N (\cdot, u_{1}(\cdot) ) - \hat N (\cdot , v_{1}(\cdot)) \big \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \\
&=\left \| \int _{v_{1}(\cdot)}^{u_{1}(\cdot)} \partial _{2} \hat N (\cdot, \zeta ) d \zeta \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2})} \\
&=\left \| \left( u_{1}(\cdot) - v_{1}(\cdot) \right) \int _{0}^{1} \partial _{2} \hat N (\cdot , \underbrace{ \sigma u_{1}(\cdot)+(1- \sigma) v_{1}(\cdot)}_{\zeta(\sigma) }) d \sigma \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \\
\nonumber
&\lesssim \left \| u_{1} - v_{1} \right \| _{ H ^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \left \| \int _{0}^{1} \partial _{2} \hat N (\cdot , \zeta (\sigma) ) d \sigma \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \\
& \lesssim \left \| u_{1} - v_{1} \right \| _{ H ^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \int _{0}^{1} \left \| M (\cdot , \zeta (\sigma)(\cdot) ) \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } d \sigma \\
& \lesssim \left \| u_{1} - v_{1} \right \| _{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \int _{0}^{1} \left \| \zeta (\sigma) \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } d \sigma \\
& \lesssim \left \| u_{1} - v_{1} \right \| _{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \int _{0}^{1} \left( \sigma \left \| u_{1} \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } +(1-\sigma) \left \| v_{1} \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \right) d \sigma \\
& \lesssim \left \| u_{1} - v_{1} \right \| _{ H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \left( \left \| u_{1} \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } + \left \| v_{1} \right \| _{H_{\text{rad}}^{\frac{d+3}{2}} (\mathbb{B}^{d+2}) } \right) \\
&\lesssim \left \| \mathbf{u} - \mathbf{v} \right \| \left( \left \| \mathbf{u} \right \| + \left \| \mathbf{v} \right \| \right).
\end{align*}
It remains to prove $\eqref{M}$. To this end we use a simple extension argument (see e.g.~Lemmas B.1 and B.2 in \cite{DonSch16}) and Moser's inequality (\cite{Rau12}, p.~224, Theorem 6.4.1) to infer the existence of a smooth function $h: [0,\infty)\to [0,\infty)$ such that
\[ \|M(\cdot,f(\cdot))\|_{H^{\frac{d+3}{2}}_\mathrm{rad}(\mathbb B^{d+2})}\leq h\left (\|f\|_{L^\infty(\mathbb B^{d+2})}\right )\|f\|_{H^{\frac{d+3}{2}}_\mathrm{rad}(\mathbb B^{d+2})} \]
for all $f\in \mathscr B_\delta$.
By Sobolev embedding we have $\|f\|_{L^\infty(\mathbb B^{d+2})}\lesssim \|f\|_{H^{\frac{d+3}{2}}_\mathrm{rad}(\mathbb B^{d+2})}\leq \delta$
for all $f\in \mathscr B_\delta$ and \eqref{M} follows.
This concludes the proof.
\end{proof}
\subsection{The abstract nonlinear Cauchy problem} In this section, we focus on the existence and uniqueness of solutions to the Cauchy problem $\eqref{Evolution}$. In fact, by appealing to Definition $\ref{def}$, we consider the integral equation
\begin{align} \label{integralequation}
\Phi(\tau)= \mathbf{S}(\tau) \mathbf{u} + \int _{0}^{\tau} \mathbf{S}(\tau - s) \mathbf{N} \big( \Phi (s) \big) ds,
\end{align}
for all $\tau \ge 0$ and $\mathbf u\in \mathcal H$. We introduce the Banach space
\begin{align*}
\mathcal{X}:= \{ \Phi \in C( [0,\infty);\mathcal{H}) : ~~\| \Phi \|_{\mathcal{X}} := \sup _{\tau >0} e^{\epsilon \tau} \| \Phi(\tau) \| < + \infty \}
\end{align*}
with $\epsilon>0$ from Proposition \ref{projection}. Moreover, we denote by $\mathcal{X}_{\delta}$ the closed ball
\begin{align*}
\mathcal{X}_{\delta} :=\left \{ \Phi \in \mathcal{X}: \| \Phi \|_{\mathcal{X}} \leq \delta \right \} = \left \{ \Phi \in C( [0,\infty);\mathcal{H}): \| \Phi \| \leq \delta e^{-\epsilon \tau},~~\forall \tau >0 \right \}.
\end{align*}
In the following, we will only sketch the rest of the proof and discuss the main arguments since they are analogous to \cite{Don11, DonSch12, DonSch14, Don14, DonSch16}. To prove the main theorem, we would like to apply a fixed point argument to the integral equation $\eqref{integralequation}$. However, the exponential growth of the solution operator on the unstable subspace prevents us from doing this directly. We overcome this obstruction by subtracting the correction term\footnote{All integrals here exist as Riemann integrals over continuous functions.}
\begin{align} \label{Correction}
\mathbf{C}(\Phi,\mathbf{u}) := \mathbf{P} \left( \mathbf{u} + \int_{0}^{\infty} e^{-s} \mathbf{N} \big( \Phi (s) \big) ds \right)
\end{align}
from the initial data. Consequently, we consider the fixed point problem
\begin{align} \label{modified}
\Phi (\tau)= \mathbf{K} ( \Phi, \mathbf{u})(\tau)
\end{align}
where
\begin{align} \label{K}
\mathbf{K} (\Phi, \mathbf{u}) (\tau):=\mathbf{S} (\tau) [\mathbf{u} - \mathbf{C}(\Phi,\mathbf{u})] + \int_{0}^{\tau} \mathbf{S} (\tau - s) \mathbf{N} \big( \Phi(s) \big) ds.
\end{align}
This modification stabilizes the evolution as the following result shows.
\begin{theorem} \label{th2}
There exist constants $\delta,C>0$ such that for every $\mathbf{u} \in \mathcal{H}$ with $\| \mathbf{u} \| \leq \frac{\delta}{C}$, there exists a unique $\mathbf{\Phi} (\mathbf{u}) \in \mathcal{X}_{\delta}$ that satisfies
\begin{align*}
\mathbf{\Phi} (\mathbf{u}) = \mathbf{K} (\mathbf{\Phi} (\mathbf{u}),\mathbf{u}).
\end{align*}
In addition, $\mathbf{\Phi}(\mathbf u)$ is unique in the whole space $\mathcal X$ and the solution map $\mathbf{u} \mapsto \mathbf{\Phi }(\mathbf{u})$ is Lipschitz continuous.
\end{theorem}
\begin{proof}
The proof is based on a fixed point argument and the essential ingredient is the Lipschitz estimate \eqref{Lipschitz} for the nonlinearity. Although the proof coincides with that of Theorem 4.13 in \cite{DonSch16}, we sketch the main points for the sake of completeness. We pick $\delta>0$ sufficiently small and fix $\mathbf{u} \in \mathcal{H}$ with $\| \mathbf{u} \| \leq \frac{\delta}{C}$, where $C>0$ is sufficiently large. First, note that the continuity of the map
\begin{align*}
\mathbf{K} (\Phi, \mathbf{u}): [0,\infty) \longrightarrow \mathcal{H},\quad \tau \longmapsto \mathbf{K} (\Phi, \mathbf{u})(\tau)
\end{align*}
follows immediately from the strong continuity of the semigroup $\left( \mathbf{S}(\tau) \right)_{\tau >0}$. Next, to show that $\mathbf K(\cdot,\mathbf u)$ maps $\mathcal X_\delta$ to itself,
we pick an arbitrary $\Phi \in \mathcal{X}_{\delta} $ and decompose the operator according to
\begin{align*}
\mathbf{K} ( \Phi, \mathbf{u})(\tau) = \mathbf{P} \mathbf{K} ( \Phi, \mathbf{u})(\tau) + (1-\mathbf{P} ) \mathbf{K} ( \Phi, \mathbf{u})(\tau).
\end{align*}
The Lipschitz bound \eqref{Lipschitz} implies
\begin{align*}
\left \| \mathbf{N} \left( \Phi (\tau) \right) \right \| \lesssim \delta^2 e^{-2 \epsilon \tau}
\end{align*}
and together with the time evolution estimates for the semigroup on the unstable and stable subspaces (see Proposition \ref{projection}), we get
\begin{align*}
\left \| \mathbf{P} \mathbf{K} \left( \Phi, \mathbf{u} \right) (\tau) \right \| \lesssim \delta ^{2} e^{-2 \epsilon \tau}, \quad \left \| \left(1- \mathbf{P} \right) \mathbf{K} \left( \Phi, \mathbf{u} \right) (\tau) \right \| \lesssim (\tfrac{\delta}{C}+\delta^2) e^{-\epsilon \tau} .
\end{align*}
Clearly, these estimates imply that $ \mathbf{K} ( \Phi, \mathbf{u}) \in \mathcal{X}_{\delta}$ for sufficiently small $\delta$ and sufficiently large $C>0$. Finally, we need to show the contraction property. To this end, we pick two elements $\Phi,\widetilde{\Phi} \in \mathcal{X}_{\delta}$.
As before, the Lipschitz estimate \eqref{Lipschitz} together with Proposition \ref{projection} imply
\begin{align*}
\left \| \mathbf{P} \left( \mathbf{K} ( \Phi, \mathbf{u})(\tau) - \mathbf{K} ( \widetilde{ \Phi }, \mathbf{u})(\tau) \right) \right \| &\lesssim \delta e^{-\epsilon \tau} \left \| \Phi - \widetilde{\Phi} \right \|_{\mathcal X}, \\
\left \| \left(1- \mathbf{P}\right) \left( \mathbf{K} ( \Phi, \mathbf{u})(\tau) - \mathbf{K} ( \widetilde{ \Phi }, \mathbf{u})(\tau) \right) \right \| &\lesssim \delta e^{-\epsilon\tau} \left \| \Phi - \widetilde{\Phi} \right \|_{\mathcal X}
\end{align*}
and by choosing $\delta$ sufficiently small we conclude
\begin{align*}
\left \| \mathbf{K} ( \Phi, \mathbf{u}) - \mathbf{K} ( \widetilde{ \Phi }, \mathbf{u}) \right \|_{\mathcal{X}} \leq \frac{1}{2} \left \| \Phi - \widetilde{\Phi} \right \|_{\mathcal{X}}.
\end{align*}
Consequently, the claim follows by the contraction mapping principle. Uniqueness in the whole space $\mathcal X$ and the Lipschitz continuity of the solution map are routine and we omit the details.
\end{proof}
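The contraction argument can be illustrated by a toy scalar analogue of the Duhamel equation. The sketch below (illustrative only; here the semigroup is simply $e^{-\tau}$, the unstable subspace is trivial, so no correction term is needed) solves $\varphi(\tau)=e^{-\tau}u_0+\int_0^\tau e^{-(\tau-s)}\varphi(s)^2\,ds$ by Picard iteration on a grid and checks that the iterates stay small and decay.

```python
import numpy as np

# Picard iteration for phi(tau) = e^{-tau} u0 + int_0^tau e^{-(tau-s)} phi(s)^2 ds,
# a scalar stand-in for the fixed point problem Phi = K(Phi, u).
tau = np.linspace(0.0, 10.0, 1001)
dt = tau[1] - tau[0]
u0 = 0.1
phi = np.zeros_like(tau)
for _ in range(60):
    # cumulative trapezoid of e^{s} phi(s)^2, then damp by e^{-tau}
    g = np.exp(tau) * phi**2
    cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) * (dt / 2))))
    phi_new = np.exp(-tau) * u0 + np.exp(-tau) * cum
    if np.max(np.abs(phi_new - phi)) < 1e-12:   # contraction has converged
        phi = phi_new
        break
    phi = phi_new

assert np.max(np.abs(phi)) <= 2 * u0            # iterates stay in a small ball
assert phi[-1] <= u0 * np.exp(-0.5 * tau[-1])   # exponential decay of the solution
```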
Now we turn to the particular initial data we prescribe. To this end, we define the space
\begin{align*}
\mathcal{H}^{R} := H_{\text{rad}}^{m} (\mathbb{B}_{R}^{d+2} ) \times H_{\text{rad}}^{m-1} (\mathbb{B}_{R}^{d+2}), \quad m\equiv m_{d} =\frac{d+3}{2}
\end{align*}
for $R>0$, endowed with the induced norm
\begin{align*}
\left \| \mathbf{w} \right \|_{\mathcal{H}^{R}}^{2} = \left \| (w_{1},w_{2}) \right \|_{\mathcal{H}^{R}}^{2} =\left \| w_{1}\right \|_{ H_{\text{rad}}^{m} \left( \mathbb{B}^{d+2}_R \right) }^{2} +
\left \| w_{2} \right \|_{ H_{\text{rad}}^{m-1} \left( \mathbb{B}^{d+2}_R \right) }^{2}.
\end{align*}
Recall the definition of the initial data operator $\mathbf U(\mathbf v, T)$ from Eq.~\eqref{5}.
\begin{lemma} \label{th1}
Fix $T_0>0$.
Let $\delta>0$ be sufficiently small and $\mathbf{v}$ with $|\cdot|^{-1} \mathbf{v} \in \mathcal{H}^{T_{0} + \delta}$. Then, the map
\begin{align*}
\mathbf{U} (\mathbf{v},\cdot):[T_{0}-\delta,T_{0}+\delta] \longrightarrow \mathcal{H},\quad T \longmapsto \mathbf{U} (\mathbf{v},T)
\end{align*}
is continuous. Furthermore, for all $T \in [T_{0} -\delta,T_{0}+\delta]$,
\begin{align*}
\big \| |\cdot|^{-1} \mathbf{v}\big \|_{\mathcal{H}^{T_{0}+\delta} } \leq \delta \Longrightarrow \big \| \mathbf{U} (\mathbf{v},T)\big \| \lesssim \delta.
\end{align*}
\end{lemma}
\begin{proof}
The statements are straightforward consequences of the very definition of $\mathbf U(\mathbf v,T)$, the smoothness of $\frac{f_0(\rho)}{\rho}$, and the continuity of rescaling in Sobolev spaces. We omit the details.
\end{proof}
Finally, given $T_{0}>0$ and $\mathbf{v}$ with $|\cdot|^{-1} \mathbf{v} \in \mathcal{H}^{T_{0} + \delta}$ and $\| |\cdot|^{-1} \mathbf{v} \|_{\mathcal{H}^{T_{0}+\delta} } \leq \frac{\delta}{M}$ for $\delta>0$ sufficiently small and $M>0$ sufficiently large, we apply Lemma \ref{th1} to see that $\mathbf{u}:=\mathbf{U} (\mathbf{v},T)$ satisfies the assumptions of Theorem \ref{th2} for all $T\in [T_0-\delta,T_0+\delta]$. Hence, for all $T \in [T_{0}-\delta,T_{0}+\delta]$, the map $\mathbf K(\cdot,\mathbf U(\mathbf v,T))$ has a fixed point $\Phi_{T}:= \mathbf{\Phi} (\mathbf U(\mathbf v,T)) \in \mathcal{X}_{\delta}$. In the last step we now argue that for each $\mathbf v$, there exists a particular $T_{\mathbf{v}} \in [T_{0}-\delta,T_{0}+\delta]$ that makes the correction term vanish, i.e., $\mathbf C(\Phi_{T_{\mathbf v}},\mathbf U(\mathbf v,T_{\mathbf v}))=0$. Since $\mathbf C$ has values in $\rg \mathbf P=\langle\mathbf g\rangle$, the latter is equivalent to
\begin{align}
\label{eq:Tv}
\exists T_{\mathbf{v}} \in [T_{0}-\delta,T_{0}+\delta]: \quad \Big< \mathbf{C} \left( \Phi _{T_{\mathbf{v}}},\mathbf{U} \left( \mathbf{v},T_{\mathbf{v}} \right) \right),\mathbf{g} \Big>_{\mathcal H} = 0.
\end{align}
The key observation now is that
\[ \partial_T \left . \left ( \begin{array}{c}
\frac{1}{\rho}f_0(\frac{T}{T_0}\rho) \\
\frac{T^2}{T_0^2}f_0'(\frac{T}{T_0}\rho) \end{array} \right )\right |_{T=T_0}=\frac{2\sqrt{d-2}}{T_0}\,\mathbf g(\rho) \]
and thus, we have the expansion
\[ \Big< \mathbf{C} \left( \Phi_T,\mathbf U (\mathbf{v},T ) \right),\mathbf{g} \Big>_{\mathcal H}=\frac{2\sqrt{d-2}}{T_0}\|\mathbf g\|^2(T-T_0)
+O((T-T_0)^2)+O(\tfrac{\delta}{M}T^0)+O(\delta^2T^0).
\]
Consequently, a simple fixed point argument proves \eqref{eq:Tv}, see \cite{DonSch16}, Theorem 4.15 for full details.
In summary, we arrive at the following result.
\begin{theorem} \label{correction}
Fix $T_0>0$. Then there exist $\delta,M >0$ such that for any $\mathbf{v}$ with
\[ \| |\cdot|^{-1} \mathbf{v} \|_{\mathcal{H}^{T_{0}+\delta} } \leq \frac{\delta}{M} \] there exists a $T \in [T_{0}-\delta,T_{0}+\delta]$ and a function $\Phi \in \mathcal{X}_{\delta}$ which satisfies
\begin{align}
\label{eq:int}
\Phi (\tau) = \mathbf{S} (\tau) \mathbf{U} (\mathbf{v},T) + \int_{0}^{\tau} \mathbf{S} (\tau-s) \mathbf{N} \big( \Phi (s) \big) ds
\end{align}
for all $\tau \geq 0$. Furthermore, $\Phi$ is unique in $C \big( [0,\infty);\mathcal{H} \big)$.
\end{theorem}
\subsection{Proof of the main theorem} With the results of the previous section at hand, we can now prove the main theorem. Fix $T_0>0$ and suppose the radial initial data
$\psi [0]$ satisfy
\begin{align*}
\left \| |\cdot|^{-1} \Big( \psi [0] -\psi^{T_{0}}[0] \Big) \right \|_{ H^{\frac{d+3}{2}} (\mathbb{B}_{T_{0}+\delta}^{d+2}) \times H^{\frac{d+1}{2}} (\mathbb{B}_{T_{0}+\delta}^{d+2} )} \leq \frac{\delta}{M}
\end{align*}
with $\delta,M>0$ from Theorem \ref{correction}.
We set $\mathbf v:=\psi[0]-\psi^{T_0}[0]$, cf.~Section \ref{sec:sim}.
Then we have
\begin{align*}
\left \| |\cdot|^{-1} \mathbf{v} \right \|_{\mathcal{H}^{T_{0}+\delta}} = \left \| |\cdot|^{-1} \Big( \psi [0] -\psi^{T_{0}}[0] \Big)
\right \|_{\mathcal{H}^{T_{0}+\delta}} \leq \frac{\delta}{M}
\end{align*}
and Theorem $\ref{correction}$ yields the existence of
$T \in [T_{0}-\delta,T_{0}+\delta]$ such that Eq.~\eqref{eq:int} has a unique solution $\Phi \in \mathcal{X}$ that satisfies $\| \Phi (\tau) \| \leq \delta e^{-\epsilon \tau}$ for all $\tau\geq 0$.
By construction,
\[ \psi(t,r)=\psi^T(t,r)+\frac{r}{T-t}\phi_1\left (\log\frac{T}{T-t},\frac{r}{T-t}\right ) \]
is a solution to the original wave maps problem \eqref{cauchy}. Furthermore,
\[ \partial_t \psi(t,r)=\partial_t \psi^T(t,r)+\frac{r}{(T-t)^2}\phi_2 \left (\log\frac{T}{T-t},\frac{r}{T-t}\right ). \]
Consequently,
\begin{align*}
(T-t)^{k-\frac{d}{2}}&\left \||\cdot|^{-1}\left (\psi(t,\cdot)-\psi^T(t,\cdot)\right )
\right \|_{\dot H^k(\mathbb B^{d+2}_{T-t})} \\
&=(T-t)^{k-\frac{d}{2}-1}\left \|\phi_1\left (\log\frac{T}{T-t},\frac{|\cdot|}{T-t}\right ) \right \|_{\dot H^k(\mathbb B^{d+2}_{T-t})} \\
&=\left \|\phi_1\left (\log\frac{T}{T-t},\cdot\right ) \right \|_{\dot H^k(\mathbb B^{d+2})}
\leq \left \|\Phi\left (\log\frac{T}{T-t}\right )\right \| \\
&\leq \delta (T-t)^\epsilon
\end{align*}
for all $t\in [0,T)$ and $k=0,1,2,\dots,\frac{d+3}{2}$.
Analogously,
\begin{align*}
(T-t)^{\ell-\frac{d}{2}+1}&\left \||\cdot|^{-1}\left (\partial_t \psi(t,\cdot)-\partial_t \psi^T(t,\cdot)\right )\right \|_{\dot H^\ell(\mathbb B^{d+2}_{T-t})} \\
&=(T-t)^{\ell-\frac{d}{2}-1}\left \|\phi_2\left (\log\frac{T}{T-t},\frac{|\cdot|}{T-t}\right ) \right \|_{\dot H^\ell(\mathbb B^{d+2}_{T-t})} \\
&=\left \|\phi_2\left (\log\frac{T}{T-t},\cdot\right ) \right \|_{\dot H^\ell(\mathbb B^{d+2})}
\leq \left \|\Phi\left (\log\frac{T}{T-t}\right )\right \| \\
&\leq \delta (T-t)^\epsilon
\end{align*}
for all $\ell=0,1,2,\dots,\frac{d+1}{2}$.
\subsection*{Acknowledgements}
We would like to thank Thomas Keller for his assistance with the Canadian traveler problem,
and Rajesh Ranganath for helpful feedback on configuring RMSProp for black-box variational inference.
Frank Wood is supported under DARPA PPAML through the U.S. AFRL under Cooperative Agreement number FA8750-14-2-0006, Sub Award number 61160290-111668.
\section{Anglican}
All case studies are implemented in Anglican, a probabilistic programming language that is closely integrated into the Clojure language. In Anglican, the macro \lsi{defquery} is used to define a probabilistic model. Programs may make use of user-written Clojure functions (defined with \lsi{defn}) as well as user-written Anglican functions (defined with \lsi{defm}). The difference between the two is that Anglican functions may make use of the model special forms \lsi{sample}, \lsi{observe}, and \lsi{predict}, which interrupt execution and require action by the inference back end. In Clojure functions, \lsi{sample} is a primitive procedure that generates a random value, \lsi{observe} returns a log probability, and \lsi{predict} is not available.
Full documentation for Anglican can be found at
\begin{verbatim}
http://www.robots.ox.ac.uk/~fwood/anglican
\end{verbatim}
The complete source code for the case studies can be found at
\begin{verbatim}
https://bitbucket.org/probprog/black-box-policy-search
\end{verbatim}
\section{Canadian Traveler Problem}
The complete results for the Canadian traveler problem, showing the performance and convergence for the learned policies
for multiple graphs of different sizes and topologies,
are presented in Figures~\ref{fig:ctp-supplementary-20}~and~\ref{fig:ctp-supplementary-50}.
\begin{figure}[p]
\begin{center}
\includegraphics[width=\textwidth]{plots/ctp_travel_grid_20_edge-policy.png}
\includegraphics[width=\textwidth]{plots/ctp_distance_vs_steps_individual_20_edge-policy.pdf}
\end{center}
\caption{\label{fig:ctp-supplementary-20} Canadian traveler problem: edge weights, indicating average travel frequency under the learned policy, and convergence for individual instances with 20 nodes.}
\end{figure}
\begin{figure}[p]
\begin{center}
\includegraphics[width=\textwidth]{plots/ctp_travel_grid_50_edge-policy.png}
\includegraphics[width=\textwidth]{plots/ctp_distance_vs_steps_individual_50_edge-policy.pdf}
\end{center}
\caption{\label{fig:ctp-supplementary-50} Canadian traveler problem: edge weights, indicating average travel frequency under the learned policy, and convergence for individual instances with 50 nodes.}
\end{figure}
\section{RockSample}
The RockSample problem was formulated as a benchmark for value iteration algorithms and is normally evaluated in an infinite horizon setting where the discount factor penalizes sensing and movement. In the original formulation of the problem, movement and sensing incur no explicit cost. The agent receives a reward of 10 for each good rock, as well as for reaching the right edge, but incurs a penalty of 10 when sampling a bad rock.
Here we consider an adaptation of RockSample to a finite horizon setting. We assume sensing is free, and movement incurs a cost of -1. We structure the policy by moving along rocks in a left-to-right order. At each rock the agent sense the closest next rock and chooses to move to it, or discard it and consider the next closest rock. When the agent gets to a rock, it only samples the rock if the rock is good. The parameters describe the prior over the probability of moving to a rock conditioned on the current location and the sensor reading.
\section{Guess Who}
In Table~\ref{table:ontology} we provide as reference the complete ontology for the Guess Who domain.
At each turn, the player asks whether the unknown individual has a particular value of a single attribute.
\begin{sidewaystable}
\begin{tabular}{l|lllllllllllll}
\hline
id & beard & ear-rings & eye-color & gender & glasses & hair-color & hair-length & hair-type & hat & moustache & mouth-size & nose-size & red-cheeks\\
\hline
alex & false & false & brown & male & false & black & short & straight & false & true & large & small & false\\
alfred & false & false & blue & male & false & ginger & long & straight & false & true & small & small & false\\
anita & false & false & blue & female & false & blonde & long & straight & false & false & small & small & true\\
anne & false & true & brown & female & false & black & short & curly & false & false & small & large & false\\
bernard & false & false & brown & male & false & brown & short & straight & true & false & small & large & false\\
bill & true & false & brown & male & false & ginger & bald & straight & false & false & small & small & true\\
charles & false & false & brown & male & false & blonde & short & straight & false & true & large & small & false\\
claire & false & false & brown & female & true & ginger & short & straight & true & false & small & small & false\\
david & true & false & brown & male & false & blonde & short & straight & false & false & large & small & false\\
eric & false & false & brown & male & false & blonde & short & straight & true & false & large & small & false\\
frans & false & false & brown & male & false & ginger & short & curly & false & false & small & small & false\\
george & false & false & brown & male & false & white & short & straight & true & false & large & small & false\\
herman & false & false & brown & male & false & ginger & bald & curly & false & false & small & large & false\\
joe & false & false & brown & male & true & blonde & short & curly & false & false & small & small & false\\
maria & false & true & brown & female & false & brown & long & straight & true & false & small & small & false\\
max & false & false & brown & male & false & black & short & curly & false & true & large & large & false\\
paul & false & false & brown & male & true & white & short & straight & false & false & small & small & false\\
peter & false & false & blue & male & false & white & short & straight & false & false & large & large & false\\
philip & true & false & brown & male & false & black & short & curly & false & false & large & small & true\\
richard & true & false & brown & male & false & brown & bald & straight & false & true & small & small & false\\
robert & false & false & blue & male & false & brown & short & straight & false & false & small & large & true\\
sam & false & false & brown & male & true & white & bald & straight & false & false & small & small & false\\
susan & false & false & brown & female & false & white & long & straight & false & false & large & small & true\\
tom & false & false & blue & male & true & black & bald & straight & false & false & small & small & false\\
\end{tabular}
\caption{Ontology for the Guess Who domain, consisting of 24 individuals, characterized by 11 binary attributes and two multi-class attributes.}
\label{table:ontology}
\end{sidewaystable}
\section{Policies as Programs}
Probabilistic programming systems \cite[]{milch_chap_2007,goodman_uai_2008,minka_software_2010,pfeffer_rep_2009,mansinghka_arxiv_2014,wood_aistats_2014,GHNR14} represent generative models as programs in a language that provides specialized syntax to instantiate random variables, as well as syntax to impose conditions on these random variables.
The goal of inference in a probabilistic program is to characterize the distribution on its random variables subject to the imposed conditions, which is done using one or more generic methods provided by an inference backend.
In sequential decision problems we must define a stochastic simulator of an agent, which chooses actions based on current contextual information, and a stochastic simulator of the world, which may have some internal variables that are opaque to the agent, but provides new contextual information after each action. For sufficiently simple problems, both the agent and the world simulator can be adequately described as graphical models. Here we are interested in using probabilistic programs as simulators of both the world and the agent. The trade-off made in this approach is that we can incorporate more detailed assumptions about the structure of the problem into our simulator of the agent, which decreases the size of the search space, at the expense of having to treat these simulators as black boxes from the perspective of the learning algorithm.
In Figure \ref{fig:ctp-overview} we show an example of a program, written in the language Anglican \cite[]{wood_aistats_2014}, which simulates an agent in the Canadian traveler problem (CTP) domain. This agent traverses a graph using depth first search (DFS) as a base strategy, choosing edges either at random, or according to sampled preferences. Probabilistic programs can describe a family of algorithmic policies, which may make use of programming constructs such as recursion, higher-order functions, and arbitrary deterministic operations. This allows us to define structured policies that enforce basic constraints, such as the rule that you should never travel the same edge twice.
Given a base policy program, we can define different parametrizations that encode additional structure, such as the typical travel distance starting from each edge. We can then formulate a Bayesian approach to policy learning, in which we place a prior on the policy parameters and optimize its hyperparameters to maximize the reward. To do so we employ a planning as inference interpretation \cite[]{toussaint_nc_2006,rawlik_rss_2012,neumann_icml_2011,hoffman2009expectation,hoffman2009new,levine_icml_2013} that casts policy search as stochastic gradient ascent on the marginal likelihood.
A challenge in devising methods for approximate inference in probabilistic programs is that such methods must deal gracefully with programs that may not instantiate the same set of random variables in each execution. For example, the random policy in Figure~\ref{fig:ctp-overview} will generate a different set of categorical variables in each execution, depending on the path followed through the graph. Similarly, the edge based policy samples values \lsi{(Q u v)} lazily, depending on the visited nodes.
In this paper we develop an approach to policy learning based on black box variational inference (BBVI) \cite[]{ranganath_aistats_2014,wingate_arxiv_2013}, a technique for variational approximation of the posterior in Bayesian models. We begin by reviewing planning as inference formulations of policy search. We then show how BBVI can be adapted to perform hyperparameter optimization. In a planning as inference interpretation this method, which we call black box policy learning (BBPL), is equivalent to classic policy gradient methods. We then describe how BBPL may be implemented in the context of probabilistic programs with varying numbers of random variables, and provide a language-agnostic definition of the interface between the program and the inference back end.
\section{Policy Search as Bayesian Inference}
In sequential decision problems, an agent draws an action $\u_t$ from a policy distribution $\pi(\u_t \,|\, \ensuremath{x}_t)$, which may be deterministic, conditioned on a context $\ensuremath{x}_t$. The agent then observes a new context $\ensuremath{x}_{t+1}$ drawn from a distribution $p(\ensuremath{x}_{t+1} \,|\, \u_t, \ensuremath{x}_t)$. In the finite horizon case, an agent performs a fixed number of actions $T$, resulting in a sequence $\t = (\ensuremath{x}_0,\u_0,\ensuremath{x}_1,\u_1,\ensuremath{x}_2,\ldots,\u_{T-1},\ensuremath{x}_T)$, known as a trajectory, or roll-out. Each trajectory receives a reward $R(\t)$. Policy search methods maximize the expected reward $J_\q = \ensuremath{\mathbb{E}}_{p_\q}[R(\t)]$ for a family of stochastic policies $\pi_\q$ with parameters $\q$
\begin{align}
J_\q
&=
\int
R(\t)
p_\q(\t)
\: d \t
,
\\
p_\q(\t)
&:=
p(\ensuremath{x}_0)
\prod_{t=0}^{T-1}
\pi(\u_{t} \,|\, \ensuremath{x}_{t}, \q)
\p{\ensuremath{x}_{t+1}}{\u_{t},\ensuremath{x}_{t}}
.
\end{align}
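Since $p_\q(\t)$ is typically only available as a simulator, $J_\q$ is in practice estimated from Monte Carlo roll-outs. The following sketch (in Python rather than Anglican; the toy random-walk world and all function names are illustrative assumptions) estimates the expected reward of a one-parameter Bernoulli policy:

```python
import random

def rollout(p, T=10, rng=random):
    """One trajectory under a Bernoulli policy with parameter p.

    The context x_t is a position on the integer line; the action u_t
    is +1 with probability p and -1 otherwise; the world dynamics
    p(x_{t+1} | u_t, x_t) are deterministic; the reward R(tau) is the
    final position."""
    x = 0
    for _ in range(T):
        u = 1 if rng.random() < p else -1
        x += u
    return x

def expected_reward(p, T=10, n=20000, rng=random):
    """Monte Carlo estimate of J_theta = E_{p_theta}[R(tau)]."""
    return sum(rollout(p, T, rng) for _ in range(n)) / n
```

For this toy chain the expectation is available in closed form, $J = T(2p - 1)$, so the estimate is easy to sanity-check.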
We are interested in performing upper-level policy search, a variant of the problem defined in terms of the hyperparameters $\l$ of a distribution $p_\l(\t,\q)$ that places a prior $p_\l(\q)$ on the policy parameters
\begin{align}
J_\l
&=
\int
R(\t)
p_\l(\t,\q)
\: d \t \, d \q
,
\\
p_{\l}(\t, \q)
&:=
p_\l(\q)
\p{\t}{\q}
.
\end{align}
Upper-level policy search can be interpreted as maximization of the normalizing constant $Z_\l$ of an unnormalized density
\begin{align}
\label{eq:bbps-density}
\gamma_{\l}(\t,\q)
&=
p_{\l}(\t,\q) \exp(\beta R(\t))
,
\\
\label{eq:Zl}
Z_{\l}
&=
\int
\gamma_{\l}(\t,\q)
\:
d\t
\,
d\q
\\
&=
\ensuremath{\mathbb{E}}_{p_{\l}}[\exp(\beta R(\t))]
.
\end{align}
The constant $\beta > 0$ has the interpretation of an `inverse temperature' that controls how strongly the density penalizes sub-optimal actions. The normalization constant $Z_{\l}$ is the expected value of the exponentiated reward $\exp(\beta R(\t))$, which is known as the desirability in the context of optimal control \cite[]{kappen_jsmtm_2005,todorov_pnas_2009}. It is not a priori obvious that maximization of the expected reward $J_\l$ yields the same policy hyperparameters as maximization of the desirability $Z_\l$, but it turns out that the two are in fact equivalent, as we will explain in section \ref{sec:methodology}.
In planning as inference formulations,
$\gamma_\l(\t,\q)/Z_\l$ is often interpreted as a posterior $p_\l(\t,\q \,|\, r)$ conditioned on a pseudo observable $r=1$ that is Bernoulli distributed with probability $p(r=1 \,|\, \t) \propto \exp(\beta R(\t))$, resulting in a joint distribution that is proportional to $\gamma_\l(\t,\q)$,
\begin{align}
p(r=1,\t,\q)
&\propto
p_\l(\t,\q) \exp(\beta R(\t))
=
\gamma_\l(\t,\q).
\end{align}
Maximization of $Z_\l$ is then equivalent to the maximization of the marginal likelihood $p_\l(r=1)$ with respect to the hyperparameters $\l$. In a Bayesian context this is known as empirical Bayes (EB) \cite[]{maritz_mono_1989}, or type II maximum likelihood estimation.
\section{Black-box Variational Inference}
Variational Bayesian methods \cite[]{wainwright_ftml_2008} approximate an intractable posterior with a more tractable family of distributions. For purposes of exposition we consider the case of a posterior $\p{\ensuremath{z},\q}{\ensuremath{y}}$, in which $\ensuremath{y}$ is a set of observations, $\q$ is a set of model parameters, and $\ensuremath{z}$ is a set of latent variables. We write $\p{\ensuremath{z},\q}{\ensuremath{y}} = \gamma(\ensuremath{z},\q)/Z$ with
\begin{align}
\label{eq:bbvi-density}
\gamma(\ensuremath{z},\q)
&=
\p{\ensuremath{y}}{\ensuremath{z},\q}p(\ensuremath{z} \,|\, \q)p(\q)
,
\\
Z
&=
\int
\gamma(\ensuremath{z},\q)
\:
d\ensuremath{z}
\,
d\q
.
\end{align}
Variational methods approximate the posterior using a parametric family of distributions $q_\l$ by maximizing a lower bound on $\log Z$ with respect to $\l$
\begin{align}
\label{eq:bbvi-bound}
\L_\lambda
&=
\ensuremath{\mathbb{E}}_{q_\l}[\log \gamma(\ensuremath{z},\q) - \log q_\l(z,\q)]
\\
&=
\log Z
-
\Dkl{q_{\l}(\ensuremath{z},\q)}{\gamma(\ensuremath{z},\q)/Z}
\le
\log Z
.
\end{align}
This objective may be optimized with stochastic gradient ascent \cite[]{hoffman_jmlr_2013}
\begin{align}
\label{eq:bbvi-grad}
\l_{k+1}
&=
\l_{k} + \rho_k \nabla_\l \L_\l
\big|_{\l = \l_k}
,
\\
\nabla_\l \L_\l
&=
\ensuremath{\mathbb{E}}_{q_\l}
\left[
\nabla_\l \log q_\l(\ensuremath{z},\q)
\log \frac{\gamma(\ensuremath{z},\q)}{q_\l(\ensuremath{z},\q)}
\right]
.
\end{align}
Here $\rho_k$ is a sequence of step sizes that satisfies the conditions $\sum_{k=1}^\infty \rho_k = \infty$ and $\sum_{k=1}^\infty \rho_k^2 < \infty$, e.g.~$\rho_k = a / (b + k)$ for constants $a, b > 0$.
The calculation of the gradient $\nabla_\l \L_\l$ requires an integral over $q_\l$. For certain models, specifically those where the likelihood and prior are in the conjugate exponential family \cite[]{hoffman_jmlr_2013}, this integral can be performed analytically.
Black box variational inference targets a much broader class of models by sampling $\ensuremath{z}^{[n]},\q^{[n]} \sim q_\l$ and replacing the gradient for each component $i$ with a sample-based estimate \cite[]{ranganath_aistats_2014}
\begin{align}
\label{eq:bbvi-grad-est}
\hat{\nabla}_{\l_i} \L_\l
&=
\frac{1}{N}
\sum_{n=1}^N
\nabla_{\l_i} \log q_{\l}(\ensuremath{z}^{[n]},\q^{[n]}) (\log w^{[n]} - \hat{b}_i)
,
\\
w^{[n]}
&=
\gamma(\ensuremath{z}^{[n]},\q^{[n]}) / q_\l(\ensuremath{z}^{[n]},\q^{[n]})
,
\end{align}
in which $\hat{b}_i$ is a control variate that reduces the variance of the estimator
\begin{align}
\label{eq:bbvi-cv}
\hat{b}_i
&=
\frac{\sum_{n=1}^N (\nabla_{\l_i} \log q_{\l}(\ensuremath{z}^{[n]},\q^{[n]}))^2 w^{[n]}}
{\sum_{n=1}^N (\nabla_{\l_i} \log q_{\l}(\ensuremath{z}^{[n]},\q^{[n]}))^2}
.
\end{align}
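The estimator in Equations~\ref{eq:bbvi-grad-est} and~\ref{eq:bbvi-cv} is straightforward to implement. Below is a minimal sketch (in Python rather than Anglican; the conjugate normal model is an illustrative assumption, and for brevity the control variate is computed from the same samples as the gradient):

```python
import random

def elbo_gradient(lam, y, n=20000, rng=random):
    """Score-function (BBVI) estimate of dL/d(lambda) for a toy model
        theta ~ N(0, 1),   y ~ N(theta, 1),   q_lambda(theta) = N(lambda, 1).
    Additive constants in log w shift the baseline by the same amount
    and cancel, so they are dropped from the log densities."""
    grads, logws = [], []
    for _ in range(n):
        theta = rng.gauss(lam, 1.0)                     # theta ~ q_lambda
        log_gamma = -0.5 * theta**2 - 0.5 * (y - theta)**2
        log_q = -0.5 * (theta - lam)**2
        grads.append(theta - lam)                       # grad_lambda log q_lambda
        logws.append(log_gamma - log_q)                 # log w
    # control variate b-hat: weighted average of log w, weights grad^2
    bhat = sum(g * g * w for g, w in zip(grads, logws)) \
        / sum(g * g for g in grads)
    return sum(g * (w - bhat) for g, w in zip(grads, logws)) / n
```

For this conjugate model the bound can be differentiated exactly, giving $\nabla_\lambda \L_\lambda = y - 2\lambda$, against which the estimate can be checked.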
\section{Introduction}
\input{introduction}
\begin{figure*}[!t]
\begin{minipage}[t]{0.5\textwidth}
\lstinputlisting[frame=bottomline, firstline=47, lastline=64]{src/ctp/fig1-v2.clj}
\end{minipage}
~~~
\begin{minipage}[t]{0.45\textwidth}
\lstinputlisting[frame=bottomline, firstline=1, lastline=18]{src/ctp/fig1-v2.clj}
\end{minipage}
~~~
\begin{minipage}[t]{\textwidth}
\includegraphics[width=1.6in,trim={0.2in 0.3in 0 0.1in},clip]{plots/{ctp_travel_graph_50e_edge-policy_1.0}.png}
\hspace{0.055in}
\includegraphics[width=1.6in,trim={0.2in 0.3in 0 0.1in},clip]{plots/{ctp_travel_graph_50e_edge-policy_0.9}.png}
\hspace{0.055in}
\includegraphics[width=1.6in,trim={0.2in 0.3in 0 0.1in},clip]{plots/{ctp_travel_graph_50e_edge-policy_0.8}.png}
\hspace{0.055in}
\includegraphics[width=1.6in,trim={0.2in 0.3in 0 0.1in},clip]{plots/{ctp_travel_graph_50e_edge-policy_0.7}.png}
\end{minipage}
\caption{\label{fig:ctp-overview} A Canadian traveler problem (CTP) implementation in Anglican. In the CTP, an agent must travel along a graph, which represents a network of roads, to get from the start node (green) to the target node (red). Due to bad weather some roads are blocked, but the agent does not know which in advance.
Upon arrival at each node the agent observes the set of open edges. The function \lsi{dfs-agent} walks the graph by performing depth-first search, calling a function \lsi{policy} to choose the next destination based on the current and unvisited locations. The function \lsi{make-random-policy} returns a \lsi{policy} function that selects destinations uniformly at random, whereas \lsi{make-edge-policy} constructs a \lsi{policy} that selects according to sampled edge preferences \lsi{(Q u v)}. By learning a distribution on each value \lsi{(Q u v)} through gradient ascent on the marginal likelihood, we obtain a heuristic offline policy that follows the shortest path when all edges are open, and explores more alternate routes as more edges are closed.}
\end{figure*}
\input{background}
\section{Black-box Policy Search}
\input{methodology}
\section{Learning Probabilistic Programs}
\input{programs}
\section{Case Studies}
\input{studies}
\section{Discussion}
\input{discussion}
\input{acknowledgements}
\small
\bibliographystyle{abbrvnat}
\subsection{Example: Canadian Traveler Problem}
An implementation of BBVI and BBPL for probabilistic program inference needs to address two domain-specific issues. The first is that probabilistic programs need not always instantiate the same set of random variables; the second is that we need to distinguish between distributions that define model parameters $\theta$ and those that define latent variables $z$, or variables that are part of the context $x$ in the case of decision problems.
Let us refer back to the program in Figure \ref{fig:ctp-overview}. The function \lsi{dfs-agent} performs a recursive loop until a stopping criterion is met: either the target node is reached, or there are no more paths left to try. At each step \lsi{dfs-agent} makes a call to \lsi{policy}, which is created by either calling \lsi{make-random-policy} or \lsi{make-edge-policy}. A random policy samples uniformly from unexplored directions. When depth first search is performed with this policy, we are defining a model in which the number of context variables is random, since the number of steps required to reach the goal state will vary. In the case of the edge policy, we use a memoized function to sample edge preference values as needed, choosing the unexplored edge with the highest preference at each step. In this case the number of parameter variables is random, since we only instantiate preferences for edges that are (a) open, and (b) connect to the current location of the agent.
As has been noted by \cite{wingate_arxiv_2013}, BBVI can deal with varying sets of random variables quite naturally. Since the gradient is computed from a sample estimate, we can compute gradients for each random variable by simply averaging over those executions in which the variable exists. Sampling variables as needed can in fact be more statistically efficient, since irrelevant variables that never affect the trajectory of the agent will not contribute to the gradient estimate. BBVI has the additional advantage of having relatively light-weight implementation requirements; it only requires differentiation of the log proposal density, which is a product over primitive distributions of a limited number of types, for which derivatives can be computed analytically. This is in contrast to implementations based on (reverse-mode) automatic differentiation \cite[]{pearlmutter_chap_2008}, as is used in Stan \cite[]{kucukelbir_nips_2015}, which store derivative terms for the entire computation graph.
To provide a language-agnostic definition of BBVI and BBPL, we formalize learning in probabilistic programs as the interaction between a program $\P$ and an inference back end $\ensuremath{{\mathcal B}}$. The program $\P$ represents all deterministic steps in the computation and has internal state (e.g.~its environment variables). The back end $\ensuremath{{\mathcal B}}$ performs all inference-related tasks.
A program $\P$ executes as normal, but delegates to the inference back end whenever it needs to instantiate a random variable, or evaluate a conditioning statement. The back end $\ensuremath{{\mathcal B}}$ then supplies a value for the random variable, or makes note of the probability associated with the conditioning statement, and then delegates back to $\P$ to continue execution. We will assume that the programming language provides some way to differentiate between latent variables $\ensuremath{z}$, which are simply to be sampled, and parameters $\q$ for which a distribution is to be learned. In Anglican the syntax \lsi{(sample (tag :policy d))}, as used in Fig.~\ref{fig:ctp-overview}, is used as a general-purpose mechanism to label distributions on random variables. An inference back end can simply ignore these labels, or implement algorithm-specific actions for labeled subsets.
In order for the learning algorithm to be well-defined in programs that instantiate varying numbers of random variables, we require that each random variable $z_a$ is uniquely identified by an address $a$, which may either be generated automatically by the language runtime, or specified by the programmer. Each model parameter $\theta_b$ is similarly identified by an address $b$.
In BBVI, the interface between a program $\P$ and the back end $\ensuremath{{\mathcal B}}$ can be formalized with the following rules:
\begin{itemize}
\item Initially $\ensuremath{{\mathcal B}}$ calls $\P$ with no arguments $\P()$.
\item A call to $\P$ returns one of four responses to $\ensuremath{{\mathcal B}}$:
\begin{itemize}
\item[1.] $({\tt sample},a,f, \phi)$: Identifies a latent random variable (not a policy parameter) $\ensuremath{z}_a$ with unique address $a$, distributed according to $f_a(\cdot \,|\, \phi_a)$. The back end generates a value $\ensuremath{z}_a \sim f_a(\cdot \,|\, \phi_a)$ and calls $\P(\ensuremath{z}_a)$.
\item[2.] $({\tt learn},b,f,\eta)$: For policy parameters, the address $b$ identifies a random variable $\q_b$ in the model, distributed according to a distribution $f_b$ with parameters $\eta_b$. The back end generates $\q_b \sim f_b(\cdot \,|\, \lambda_b)$ conditioned on a learned variational parameter $\lambda_b$ and registers an importance weight $w_b = f_b(\q_{b} \,|\, \eta_b) / f_b(\q_{b} \,|\, \l_b)$. Execution continues by calling $\P(\q_b)$.
\item[3.] $({\tt factor},c,l)$: Here $c$ is a unique address for a factor with log probability $l_c$ and importance weight $w_c = \exp(l_c)$. Execution continues by calling $\P()$.
\item[4.] $({\tt return},v)$: Execution completes, returning a value $v$.
\end{itemize}
\end{itemize}
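The four-message interface above can be prototyped in any language with coroutines. The sketch below (Python; the toy program and all names are illustrative assumptions, not the Anglican implementation) models the program $\P$ as a generator that yields interface messages, and the back end $\ensuremath{{\mathcal B}}$ as a driver loop that supplies values and accumulates the log importance weight:

```python
import random

def program():
    """A toy probabilistic program P, written as a coroutine.
    It yields interface messages and receives values via send()."""
    theta = yield ("learn", "b0", "gauss", (0.0, 1.0))   # rule 2: policy parameter
    z = yield ("sample", "a0", "gauss", (theta, 1.0))    # rule 1: latent variable
    yield ("factor", "c0", -abs(z - 1.0))                # rule 3: log probability
    return z                                             # rule 4: return value

def run(prog, lam, rng=random):
    """A minimal back end B: drives P once, returning (value, log weight)."""
    gen = prog()
    log_w = 0.0
    msg = gen.send(None)                                 # initial call P()
    try:
        while True:
            kind = msg[0]
            if kind == "sample":
                _, a, _, (mu, sd) = msg
                msg = gen.send(rng.gauss(mu, sd))        # z_a ~ f_a(. | phi_a)
            elif kind == "learn":
                _, b, _, (mu, sd) = msg                  # prior parameters eta_b
                theta = rng.gauss(lam[b], sd)            # theta_b ~ f_b(. | lambda_b)
                # log of w_b = f_b(theta_b | eta_b) / f_b(theta_b | lambda_b),
                # assuming equal scales for prior and variational Gaussians
                log_w += ((theta - lam[b])**2 - (theta - mu)**2) / (2 * sd * sd)
                msg = gen.send(theta)
            elif kind == "factor":
                log_w += msg[2]                          # w_c = exp(l_c)
                msg = gen.send(None)
    except StopIteration as stop:
        return stop.value, log_w
```

Repeated runs with different variational parameters produce the weighted samples used by the gradient estimator.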
Because each call to $\P$ is deterministic, an execution history is fully characterized by the values for each random variable that are generated by $\ensuremath{{\mathcal B}}$. However the set of random variables that is instantiated may vary from execution to execution. We write $A, B, C$ for the set of addresses of each type visited in a given execution. The program $\P$ now defines an unnormalized density $\gamma_{\P}$ of the form
\begin{align}
\gamma_{\P}(\ensuremath{z},\q)
&:=
p_{\P}(\ensuremath{z},\q)
\prod_{c \in C} \exp(l_{c})
,
\\
p_{\P}(\ensuremath{z},\q)
&:=
\prod_{a \in A} f_{a}(\ensuremath{z}_a \,|\, \phi_{a})
\prod_{b \in B} f_{b}(\q_b \,|\, \eta_{b})
~.
\end{align}
Implicit in this notation is the fact that the distribution types $f_a(\cdot \,|\, \phi_a)$ and $f_b(\cdot \,|\, \eta_b)$ are return values from calls to $\P$, which implies that both the parameter values and the distribution type may vary from execution to execution. While $f_a(\cdot \,|\, \phi_a)$ and $f_b(\cdot \,|\, \eta_b)$ are fully determined by preceding values for $\ensuremath{z}$ and $\q$, we assume they are opaque to the inference algorithm, in the sense that no analysis is performed to characterize the conditional dependence of each $\phi_a$ or $\eta_b$ on other random variables in the program.
Given the above definition of a target density $\gamma_{\P}(\ensuremath{z},\q)$, we are now in a position to define the density of a variational approximation $\ensuremath{\mathcal{Q}}_\l$ to the program. In this density, the runtime values $\eta_b$ are replaced by variational parameters $\lambda_b$
\begin{align}
\label{eq:Qlambda}
p_{\ensuremath{\mathcal{Q}}_\lambda}(z,\q)
&:=
\prod_{a \in A} f_{a}(\ensuremath{z}_a \,|\, \phi_{a})
\prod_{b \in B} f_{b}(\q_b \,|\, \lambda_{b})
~.
\end{align}
This density corresponds to that of a mean-field probabilistic program, where the dependency of each $\theta_b$ on other random variables is ignored.
Repeated execution of $\P$ given the interface described above results in a sequence of weighted samples $(w^{[n]},\q^{[n]},z^{[n]})$, whose importance weight $w^{[n]}$ is defined as
\begin{align}
\label{eq:bbpl-weights}
w^{[n]}
&:=
\gamma_{\P}(z^{[n]},\q^{[n]}) ~/~
p_{\ensuremath{\mathcal{Q}}_\l}(z^{[n]},\q^{[n]})
\nonumber \\
&=
\prod_{b \in B}
\frac{f(\theta^{[n]}_b \,|\, \eta_b)}
{f(\theta^{[n]}_b \,|\, \lambda_b)}
\prod_{c \in C}
\exp l^{[n]}_c
.
\end{align}
With this notation in place, it is clear that we can define a lower bound $\L_{\ensuremath{\mathcal{Q}}_\l,\ensuremath{\mathcal{Q}}_{\l_k}}$ analogous to that of Equation~\ref{eq:bbpl-bound}, and a gradient estimator analogous to that of Equation~\ref{eq:bbvi-grad-est}, in which the latent variables $z$ take the role of the trajectory variables $\tau$. In summary, we can describe a sequential decision problem as a probabilistic program $\P$ in which the log probabilities $l_c$ are interpreted as rewards, parameters $\q_b$ define the policy and all other latent variables $z_a$ are trajectory variables. EB inference can then be used to learn the hyperparameters $\l$ that maximize the expected reward,
as described in Algorithm~\ref{alg:bbpl}.
An assumption that we made when deriving BBPL is that the variational distribution $q_\l(\t,\q)$ must have the same analytical form as the prior $p_{\l_0}(\t,\q)$. Practically this requirement means that a program $\P$ must be written in such a way that the hyperparameters $\eta_b$ take the same constant values in every execution, since their values may not depend on those of random variables. One way to enforce this is to pass $\eta$ as a parameter in the initial call $\P(\eta)$ by $\ensuremath{{\mathcal B}}$, though we do not formalize such a requirement here.
\begin{algorithm}[t]
\caption{Black-box Policy Learning}
\label{alg:bbpl}
\begin{algorithmic}
\State {\bf initialize} parameters $\lambda_{0,b} \leftarrow \eta_b$, iteration $k=0$
\Repeat
\State Set initial $\lambda_{k+1} = \{ \lambda_{k,b} \}_{b \in B}$
\State Run $N$ executions of program $\ensuremath{\mathcal{Q}}_{\lambda_{k}}$, generating
\State \hspace{1.5em}$(w^{[n]}, \theta^{[n]}, z^{[n]})$ according to Eqns.~\ref{eq:Qlambda}, \ref{eq:bbpl-weights}
\For {each address $b$}
\State Let $N_b \le N$ be the \# of runs containing $b$
\State Let $g_b^{[n]} := \nabla_{\lambda_{k,b}} \log f(\theta_b^{[n]} | \lambda_{k,b})$
\State
Compute baseline $\hat b_{\lambda_{k,b}}$ from Eq.~\ref{eq:bbvi-cv}
\State $\hat \nabla_{\lambda_{k,b}} J_{\lambda_k} \leftarrow {N_b}^{-1} \sum g_b^{[n]} (\log w^{[n]} - \hat b_{\lambda_{k,b}})$
\State Update $\lambda_{k+1,b} \leftarrow \lambda_{k,b} + \rho_k \hat \nabla_{\lambda_{k,b}} J_{\lambda_k}$
\EndFor
\State $k \leftarrow k + 1$
\Until parameters $\lambda_b$ converge
\end{algorithmic}
\end{algorithm}
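On a problem small enough to solve analytically, the algorithm can be exercised end to end. The sketch below (Python; the one-parameter model, the reward, and all names are illustrative assumptions) learns the mean $\lambda$ of a Gaussian prior on a single policy parameter:

```python
import random

def bbpl(beta=1.0, steps=400, n=200, rho=0.05, seed=0):
    """Black-box policy learning on a toy upper-level problem:
        theta ~ N(lambda, 1),   R(theta) = -(theta - 3)**2,
    with prior hyperparameter eta = 0. Each iteration follows the
    algorithm: sample from the variational program, compute log
    importance weights, estimate the gradient with a baseline, and
    take a gradient step."""
    rng = random.Random(seed)
    eta, lam = 0.0, 0.0                        # initialize lambda_0 <- eta
    for _ in range(steps):
        thetas = [rng.gauss(lam, 1.0) for _ in range(n)]
        # log w = log f(theta|eta) - log f(theta|lambda) + beta * R(theta)
        logws = [-0.5 * (t - eta)**2 + 0.5 * (t - lam)**2
                 - beta * (t - 3.0)**2 for t in thetas]
        grads = [t - lam for t in thetas]      # score of N(lambda, 1)
        bhat = sum(g * g * w for g, w in zip(grads, logws)) \
            / sum(g * g for g in grads)        # control-variate baseline
        grad = sum(g * (w - bhat) for g, w in zip(grads, logws)) / n
        lam += rho * grad                      # gradient step
    return lam
```

For this model the stationary point can be computed in closed form: the bound is maximized at $\lambda = 2$, the mean of the prior $\mathcal{N}(0,1)$ reweighted by $\exp(\beta R(\theta))$, and the iteration settles close to this value.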
\subsection{Planning and Control as Inference}
Our formulation of policy search as expectation maximization in probabilistic programs fits into a long history of approaches that frame planning and control problems as inference. Path integral methods \cite[]{kappen_jsmtm_2005,kappen_cbns_2007,broek_jair_2008,todorov_nips_2009,todorov_pnas_2009,kappen_ml_2012} express the expected cost-to-go as a KL-divergence between a posterior, controlled, distribution on $x_{1:T}$ and a distribution defined as the product of the prior, uncontrolled, dynamics and the exponent of the reward. Minimization of this KL divergence is then equivalent to minimization of the expected cost-to-go. In the context of these types of approaches, minimization of the expected exponentially weighted cost-to-go is known as risk-sensitive optimal control \cite[]{vandenbroek_uai_2010}. Path integral approaches have been applied to reinforcement learning problems in a number of studies \cite[]{theodorou_aistats_2010,theodorou_cdc_2012,azar_jmlr_2012}.
Related approaches \cite[]{rawlik_nips_2010,rawlik_rss_2012} minimize the KL divergence between the distribution on trajectories $\t$ induced by a stochastic policy $\pi$ relative to that induced by an exponentially weighted posterior distribution with policy $\pi_0$. This is equivalent to an optimal control problem in which the reward is discounted by the log probability of the actions under $\pi_0$. Iterative solution, in which each $\pi_i$ is obtained by KL minimization relative to the posterior induced by $\pi_{i-1}$, converges to the optimal policy in this setting, which is consistent with our observation that expectation maximization is equivalent to maximization of the expected reward.
The use of variational inference in reinforcement learning problems has also been explored in a number of ways. When the reward is non-negative, variational Bayes and expectation propagation can be used to approximate a posterior in a (non-exponentiated) reward weighted model \cite[]{furmston_icml_2010}. The exponentiated reward case has similarly been considered \cite[]{neumann_icml_2011}. In this setting, it has been observed that minimization of the M-projection (i.e. the KL divergence between posterior and variational distribution, as used in expectation propagation) instead of the I-projection (i.e. the KL divergence between the variational distribution and the posterior) leads to an objective that is equivalent to that used in Monte-Carlo EM approaches \cite[]{kober_ml_2011,vlassis_ar_2009}. As noted, the variant of BBVI used here is closely related to the one proposed by \cite[]{wingate_arxiv_2013} for probabilistic programs.
\subsection{Policies as Programs}
\subsection{Canadian Traveler Problem}
In the Canadian Traveler Problem (CTP)~\cite[]{papadimitriou_tcs_1991}, an agent must traverse a graph $G=(V,E)$, in which edges may be missing at random. It is assumed the agent knows the distance $d: E \to \ensuremath{\mathbb{R}}^+$ associated with each edge, as well as the probability $p: E \to (0,1]$ that the edge is open, but has no advance knowledge of the edges that are blocked. The problem is NP-hard~\cite[]{fried_tcs_2013}, and heuristic online and offline approaches~\cite[]{eyerich_aaai_2010} are used to solve problem instances.
The results in Figure \ref{fig:ctp-overview} show that the learned policy behaves in a reasonable manner. When edges are open with high probability, the policy takes the shortest path from the start node, marked in green, to the target node, marked in red. As the fraction of closed edges increases, the policy makes more frequent use of alternate routes. Note that each edge has a fixed probability of being open in our set-up, resulting in a preference for routes that traverse fewer edges.
Figure \ref{fig:ctp-convergence} shows convergence as a function of the number of gradient steps. Results are averaged over 5 domains of 20 and 50 nodes respectively. Convergence plots for each individual domain can be found in the supplementary material. We compare the learned policies against the optimistic policy, a heuristic that selects edges according to the shortest path, assuming that all unobserved edges are open. We observe that mean traveled distance for the learned policy converges to that of the optimistic policy, which is close to optimal.
\begin{figure}
\begin{center}
\includegraphics[width=1.4in]{plots/rockwalk_flow_4x4.png}
~~~
\includegraphics[width=1.4in]{plots/rockwalk_flow_5x5.png}
\\
\vspace{4pt}
\includegraphics[width=1.4in]{plots/rockwalk_flow_7x8.png}
~~~
\includegraphics[width=1.4in]{plots/rockwalk_flow_10x10.png}
\end{center}
\caption{\label{fig:rw-graph} Learned policies for the Rock Sample domain. Edge weights indicate the frequency at which the agent moves between each pair of rocks. Starting points are in green, exit paths in red.\vspace{-0.7em}}
\end{figure}
\subsection{RockSample POMDP}
In the RockSample POMDP~\cite[]{smith_uai_04}, an $N \times N$ square field with $M$ rocks is given. A rover is initially located in the middle of the left edge of the square. Each of the rocks can be either good or bad; the rover must traverse the field and collect samples of good rocks while minimizing the traveled distance. The rover can sense the quality of a rock remotely with an accuracy decreasing with the distance to the rock. We consider a finite-horizon variant of the RockSample domain, described in the supplementary material, with a structured policy in which a robot travels along rocks in a left-to-right order.
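The distance-dependent sensor can be made concrete with a short sketch. In the usual RockSample sensor model the reading is a coin flip whose accuracy decays exponentially with distance; the half-distance constant below is an illustrative assumption (Python; the paper's implementation is in Anglican):

```python
import random

def sense(distance, rock_is_good, d0=4.0, rng=random):
    """Noisy long-range sensor: the reading is correct with probability
    0.5 + 0.5 * 2**(-distance / d0), i.e. perfect at the rock itself and
    approaching a fair coin flip far away. The half-distance d0 is an
    illustrative choice."""
    accuracy = 0.5 + 0.5 * 2.0 ** (-distance / d0)
    return rock_is_good if rng.random() < accuracy else not rock_is_good
```

Under this model a rover sensing nearby rocks gets near-perfect readings, while distant readings carry little information, which is what makes the move-or-discard decision in the structured policy non-trivial.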
The policy plots in Figure \ref{fig:rw-graph} show that this simple policy results in sensible movement preferences. In particular we point out that in the $5 \times 5$ instance, the agent always visits the top-left rock when traveling to the top-middle rock, since doing so incurs no additional cost. Similarly, the agent follows an almost deterministic trajectory along the left-most 5 rocks in the $10 \times 10$ instance, but does not always make the detour towards the lower rocks afterwards.
\subsection{Guess Who}
\begin{figure}
\begin{center}
\includegraphics[width=3.1in]{plots/guess_who_reward_vs_questions.pdf}
\vspace{-1.2em}
\end{center}
\caption{\label{fig:guess-who} (left) Average reward in Guess Who as a function of the number of questions. (right) Convergence of rewards as a function of the number of gradient steps. Each dot marks an independent restart.\vspace{-0.8em}}
\end{figure}
Guess Who is a classic game in which players pick a card depicting a face, belonging to a set that is known to both players. The players then take turns asking questions until they identify the card of the other player \cite[]{coster_game_1979}. We here consider a single-player setting where an agent asks a pre-determined number of questions, but the responses are inaccurate with some probability. This is sometimes known as a measurement selection, or optimal diagnosis problem. We make use of a feature set based on the original game, consisting of 24 individuals, characterized by 11 binary attributes and two multi-class attributes, resulting in a total of 19 possible questions. We assume a response accuracy of 0.9. By design, the structure of the domain is such that there is no clear winning opening question. However, the best question at any point is highly contextual.
We assume that the agent knows the reliability of the response and has an accurate representation of the posterior belief $b_t(s) = \p{s}{ \ensuremath{x}_t}$ for each candidate $s$ given the questions and responses so far. The agent selects randomly among the highest ranked candidates after the final question. We consider 3 policy variants, two of which are parameter-free baselines. In the first baseline, questions are asked uniformly at random. In the second, questions are asked according to a myopic estimate of the value of information \cite[]{hay_uai_2012}, i.e.~the change in expected reward relative to the current best candidates, which is myopically optimal in this setting. Finally, we consider a policy that empirically samples questions $q$ according to a weight $v_q = \gamma^{n_q} (A b)_q $, based on the current belief $b$, a weight matrix $A$, and a discount factor $\gamma^{n_q}$ based on the number of times $n_q$ a question was previously asked.
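A minimal sketch of this sampling rule follows; the belief vector, the weight matrix $A$, and the discount value used here are illustrative placeholders, not the learned parameters from the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_question(b, A, n_asked, gamma=0.8):
    """Sample a question index q with probability proportional to
    v_q = gamma**n_q * (A @ b)_q.  Here b is the belief over candidates,
    A a (questions x candidates) weight matrix (illustrative values, not
    learned parameters), and n_asked[q] counts prior uses of question q."""
    v = (gamma ** n_asked) * (A @ b)
    p = v / v.sum()
    return rng.choice(len(v), p=p)

# toy instance: 4 candidates, 3 questions, uniform belief
b = np.full(4, 0.25)
A = np.abs(rng.normal(size=(3, 4)))  # nonnegative so the weights are valid
n_asked = np.zeros(3)
q = sample_question(b, A, n_asked)
n_asked[q] += 1  # repeated questions are then discounted by gamma**n_q
```

The discount term is what prevents the policy from asking the same highest-weight question repeatedly, matching the ``shrinking'' interpretation discussed next.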
Intuitively, this algorithm can be understood as learning a small set of $\alpha$-vectors, one for each question, similar to those learned in point-based value iteration \cite[]{pineau_ijcai_2003}. The discounting effectively ``shrinks'' the belief-space volume associated with the $\alpha$-vector of the current best question, allowing the agent to select the next-best question.
The results in Figure~\ref{fig:guess-who} show that the learned policy clearly outperforms both baselines, which is a surprising result given the complexity of the problem and the relatively simplistic form of this heuristic policy. While these results should not be expected to be in any way optimal, they are encouraging in that they illustrate how probabilistic programming can be used to implement and test policies that rely on transformations of the belief or information state in a straightforward manner.
% arXiv:1212.3896
\section{Introduction}
The theory of critical phenomena and the emergent notion of
universality was one of the singular developments of physics in the
twentieth century. With a known order parameter and symmetries of the
problem, calculation of long-range, measurable behaviors of
equilibrium physical quantities becomes a rather straightforward
task. The success has turned out to be hard to replicate for
non-equilibrium systems and systems where symmetry properties are
similar in the phases on both sides of the transition
\cite{Brazhkin:2012}. Here it is often unclear which quantity can
serve as a good order parameter, and the developed theoretical
machinery does not apply. Where progress has been made, order
parameters have been very specific, making it difficult to identify
universal properties. For example, in reaction-diffusion problems with
absorption, one commonly uses linear superposition of particle
concentrations as order parameters \cite{Tch:2010,vanWijland:1998},
while particle current is a better choice for jamming problems
\cite{Garrahan:2007}. Further, the order parameters often have
nontrivial relations to easily observable quantities. For example,
phase transitions in some systems with dynamic heterogeneities often
must be described with four-point correlation functions of particle
densities \cite{Biroli:2007}, or a multitude of correlation functions
\cite{Binder:1990,LeDoussal:2004ua}. Similarly, dynamical phase
transitions require one to study the space of trajectories instead of
the state space \cite{Lecomte:2007}.
Whatever the choice, the order parameter is a statistic averaged over
a distribution of microscopic states. A continuous or discontinuous
change in its value at a transition indicates a similar change in the
underlying probability distribution. Therefore, it is natural to shift
attention to the distribution itself, and, specific to nonequilibrium
systems, to how it converges to the steady state.
Intuitively, different phases (often with different symmetries)
manifest themselves by changes in our ability to use local
experimental measurements for long-range predictions. For example,
nonzero magnetization in an Ising magnet allows us to predict with
some certainty orientation of far away spins based on the value of the
spin at the origin. Similarly, different crystalline phases of solids
have different density autocorrelation functions, and hence existence
of an atom at the origin translates into different predictions about
the presence of an atom a certain distance away. Then instead of a
specific statistic characterizing the predictability, namely the
order parameter, it might be useful to study one's ability to use
local measurements to predict states of the rest of the system {\em
directly}.
This prediction ability is naturally quantified using the language of
Shannon's information theory \cite{Shannon:1998ti}. In previous work,
we have termed it the {\em predictive information}
\cite{Bialek:2001wv,Bialek:2001wz}. Briefly, in information theory,
the total uncertainty in a system specified by a state $\vec{x}\in X$,
$\dim \vec{x}=N$, is measured by the (differential) entropy,
\begin{equation}
S[X]=-\int d^Nx P(\vec{x})\log_2P(\vec{x}).
\end{equation}
Then observing the state of another variable $\vec{y}\in Y$, $\dim \vec{y}=M$,
may reduce the uncertainty about $\vec{x}$, and hence provide the
information about it
\begin{eqnarray}
I[X;Y]&=&S[X]-\langle S[X|Y]\rangle_Y= \int d^Nx\,d^My\,P(\vec{x},\vec{y})
\log_2 \frac{P(\vec{x},\vec{y})}{P(\vec{x})P(\vec{y})}\nonumber\\
&=& \left<
\log_2 \frac{P(\vec{x},\vec{y})}{P(\vec{x})P(\vec{y})}\right>_{X,Y}
=I[Y;X].
\label{eq:pred_Defined}
\end{eqnarray}
Importantly, $I[X;Y]$ depends on the entire probability distribution
$P(\vec{x},\vec{y})$, not just on specific statistics of it, and it
is zero iff $X$ and $Y$ are statistically independent.
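These two properties can be illustrated with a short numerical check on toy discrete distributions (hypothetical examples, not from the text): the mutual information of an exactly independent joint distribution vanishes, while a perfectly correlated binary pair carries one full bit.

```python
import numpy as np

def mutual_information(pxy):
    """Mutual information (in bits) of a discrete joint distribution pxy[i, j]."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal P(y)
    mask = pxy > 0                        # 0 log 0 := 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# independent variables carry zero information ...
p_ind = np.outer([0.5, 0.5], [0.25, 0.75])
# ... while a perfectly correlated binary pair carries one bit
p_cor = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(p_ind))  # → 0.0
print(mutual_information(p_cor))  # → 1.0
```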
One can consider $X$ and $Y$ to be states of a physical process, such
that $X$ are the measured quantities, and $Y$ are the quantities that
one wants to predict \cite{Bialek:2001wv}. For example, $X$ can be the
state of spins on one segment of an Ising chain, and $Y$ be the state
of spins far away. Similarly, for time series and for nonequilibrium
processes, $X$ can be the past of the process of duration $N$, and $Y$
part of its future of duration $M$. Then the information becomes the
{\em predictive information}:
\begin{equation}
I_{\rm pred}(N,M)=I[X;Y].
\label{eq:Ipred}
\end{equation}
Since the quantification of the intrinsic state of the system should
not depend on which specific set of variables $Y$ one wants to
predict, it makes sense to define predictive information as
\begin{equation}
I_{\rm pred}[X]\equiv I_{\rm pred}(N)=\lim_{M\to\infty} I_{\rm pred}(N,M).
\label{eq:IN}
\end{equation}
That is, one quantifies how much information the local observations
$X$ provide about an entire, infinitely large physical system.
Predictive information is subextensive, $\lim_{N\to\infty} I_{\rm pred}(N) /
N=0$ \cite{Bialek:2001wv}. It tends to a handful of universal
behaviors for large systems, $N\to\infty$, intuitively correlating
with the complexity of the underlying physical process. In particular,
$\lim_{N\to\infty}I_{\rm pred}(N)={\rm const}$ indicates an easily predictable
deterministic, or a short correlation length probabilistic dynamics
(``simple'' long range prediction can be perfect, or it is impossible,
respectively). Further, $\lim_{N\to\infty}I_{\rm pred}(N)\propto \log N$ is
indicative of a second order equilibrium phase transition (power-law
decaying correlations allow for complex, multiscale, partially
predictable patterns over very long distances). Finally, $
\lim_{N\to\infty}I_{\rm pred}(N)\propto N^\alpha$, $\alpha<1$ may correspond to
more exotic phase transitions with infinite-dimensional order
parameters, but this case is not well understood.
The dependence of $I_{\rm pred}$ on the full underlying probability
distribution and the relation to phase transitions make it natural to
explore $I_{\rm pred}$ as a ``universal order parameter'', also usable in the
nonequilibrium context. However, we are not aware of calculations of
predictive information for nonstationary processes, where $P(\vec{x})$
is explicitly or implicitly time dependent. Further, even for
equilibrium systems, the transition between $I_{\rm pred}={\rm const}$ and
$I_{\rm pred}\propto \log N$ in the vicinity of a phase transition has not been
studied.
In this paper, we study predictive information in a context of a
simple nonequilibrium, continuous-time Markov process, which ages and
develops an absorbing state at a certain critical value of a
parameter. This process can be viewed as a toy model, which is likely
to possess some features of more complex systems. We calculate the
expression for predictive information at the critical point and, for
the first time for any system, near the critical point. The
calculation reveals the need to modify the definition,
Eq.~(\ref{eq:IN}), to remove an ultraviolet divergence emerging due to
the continuous-time nature of the process. Similar modifications will
likely allow extension of predictive information methodology to
multidimensional systems. We demonstrate explicitly the logarithmic
divergence of $I_{\rm pred}$ at the transition, and we show that the divergent
term in the information is insensitive to temporally local, invertible
transformations of the state space. This makes predictive information,
and specifically its divergent term, a great candidate to characterize
nonequilibrium phase transitions.
\section{The model}
We consider a Markovian system governed by the following Langevin
equation:
\begin{align}
\label{eqn:lang}
&\partial_tx(t)=-x(x^2+\tau)+\sqrt{2}\sigma |x|^{\alpha/2}\eta ,\\
&x(t=0)=x_0, \;\mbox {sampled from } P(x_0)\equiv P_0,
\end{align}
where $\langle\eta(t)\eta(t')\rangle=\delta(t-t')$. We will treat this
equation in the Ito sense. Without the noise term, $x$ relaxes from
the initial value $x_0$ to either $0$ (for $\tau>0$) or $\pm \sqrt{-\tau}$
(for $\tau<0$). The transition happens at $\tau=0$. For large noise near
$x=0$ (that is, small $\alpha$), $x$ gets kicked out from $x\approx0$
region, and the system equilibrates. For small noise (large $\alpha$),
a near-deterministic relaxation to the absorbing state at $x=0$
persists. This is probably the simplest example of nonequilibrium,
stochastic relaxation dynamics, and it is a natural starting point for
the analysis.
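A direct way to explore this dynamics is an Euler--Maruyama integration of Eq.~(\ref{eqn:lang}) in the Ito convention. This is only a sketch; the parameter values below are illustrative choices, not taken from the text.

```python
import numpy as np

def simulate(x0, tau, sigma, alpha, dt=1e-3, steps=20000, seed=0):
    """Euler-Maruyama (Ito) integration of
    dx = -x (x^2 + tau) dt + sqrt(2) sigma |x|**(alpha/2) dW.
    All parameter values used below are illustrative, not from the text."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for n in range(steps):
        drift = -x[n] * (x[n] ** 2 + tau)
        noise = np.sqrt(2.0) * sigma * abs(x[n]) ** (alpha / 2.0)
        x[n + 1] = x[n] + drift * dt + noise * np.sqrt(dt) * rng.normal()
    return x

# tau > 0: near-deterministic relaxation toward the absorbing state x = 0
path = simulate(x0=1.0, tau=0.5, sigma=0.1, alpha=3.0)
```

For $\tau<0$ one can check that the same routine instead relaxes toward the deterministic fixed points $\pm\sqrt{-\tau}$, with the multiplicative noise keeping the trajectory away from $x=0$.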
We note that we can view this equation as describing dynamics of
magnetization, $x$, along a line normal to a boundary of an Ising
ferromagnet in some number of spatial dimensions. The coordinate is
$t=0$ at the boundary, and increases into the bulk. The deterministic
cubic dynamics in Eq.~(\ref{eqn:lang}) is the usual coarse-grained
model of such a ferromagnet. In such a model, the variance of the noise
increases with $x$, and $\alpha$ would depend on the overall
dimensionality of the problem.
To calculate predictive information, Eq.~(\ref{eq:Ipred}), we
discretize the time $t$, $t_n=n\Delta t$, and $x_n=x(t_n)$. We choose
$\Delta t\to 0$, and yet $N\Delta t=T_{\rm p}\to\infty$, and $M\Delta
t=T_{\rm f}\to\infty$, where p and f stand for {\em past} and {\em
future}, respectively. Then Eq.~(\ref{eqn:lang}) is equivalent to
the following Markovian dynamics:
\begin{multline}
P(x_{n+1}|x_0,x_1,...,x_n)=P(x_{n+1}|x_n)\\=
\frac{1}{\sqrt{4\pi\Delta t}\sigma |x_n|^{\alpha/2}}\exp\left\{-\frac{\left[x_{n+1}-\left(x_n-x_n(x_n^2+\tau)\Delta
t\right)\right]^2}
{4\sigma^2 |x_n|^\alpha\Delta t} \right\}.
\end{multline}
To simplify the notation, we define
\begin{align}
&P_{n|n-1}\equiv P(x_n|x_{n-1}),\\
&P_{n}\equiv P(x_n)=\int dx_{n-1}P(x_{n-1})P(x_{n}|x_{n-1}).
\end{align}
Then:
\begin{multline}
\label{eqn:minminfo}
I_{\rm pred}(N,M)=
\left\langle \log_2\frac{ P_0\prod_{n=1}^{N+M-1}P_{n|n-1}} {
P_0\prod_{n=1}^{N-1}P_{n|n-1} P_N
\prod_{m=N+1}^{N+M-1}P_{m|m-1}}\right\rangle \\
=\left\langle\log_2 \frac{P_{N|N-1}}{P_N}\right\rangle=I[x_N;x_{N-1}].
\end{multline}
Not surprisingly for a Markovian process, predictive information
is the mutual information between two successive measurements and does
not depend on the length of the future sequence, $M$, so that the
limit, Eq.~(\ref{eq:IN}), is trivial. However, the information can
depend on $N$ since the system is not stationary, and not
time-translation invariant. Specifically, for small noise, each
subsequent $x$ is more narrowly distributed. This allows the
information to increase unboundedly with $N$, unlike in typical
finite-dimensional Markov processes with constant transition
probabilities, where $I_{\rm pred}$ is always finite
\cite{Bialek:2001wv}. These considerations also point out that one
must take the sequence of $N$ observations starting from exactly the
same time when calculating the averages.
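This prescription can be mimicked numerically: restart an ensemble of trajectories from the same $x_0$ at $t=0$, integrate the discretized dynamics, and estimate the mutual information between two successive time slices with a crude histogram estimator. All parameter values below are illustrative.

```python
import numpy as np

def mi_hist(x, y, bins=16):
    """Crude plug-in (histogram) estimate of I[X;Y] in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return float(np.sum(pxy[m] * np.log2(pxy[m] / (px @ py)[m])))

# ensemble of trajectories, all restarted from the same x0 at t=0, as the
# averaging described in the text requires; parameters are illustrative
rng = np.random.default_rng(2)
dt, nsteps, ntraj = 0.01, 200, 4000
tau, sigma, alpha = 0.0, 0.3, 3.0
x = np.full(ntraj, 1.0)
prev = x.copy()
for _ in range(nsteps):
    prev = x.copy()
    x = x + (-x * (x**2 + tau)) * dt \
        + np.sqrt(2 * dt) * sigma * np.abs(x) ** (alpha / 2) * rng.normal(size=ntraj)

mi_est = mi_hist(prev, x)  # estimate of I[x_N; x_{N-1}] across the ensemble
```

The estimator is biased and bin-dependent, but it suffices to see that consecutive slices of the process share a finite, positive amount of information at fixed $\Delta t$.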
Since $x(t)$ is continuous, $x_{N}\to x_{N-1}$ as $\Delta t\to 0$. The
state of the process at the next time step becomes exactly known, and
predictive information diverges. However, this is a superficial
ultraviolet divergence, while we are interested in studying the
infrared behavior. Interestingly, this interfacial effect has been the
primary reason behind the inability to apply predictive information
ideas to systems in more than one dimension, where the size of the
interface diverges with the system size. This makes it difficult to
disambiguate divergences in predictive information coming from
long-range prediction from those produced by short range interfacial
effects.
We thus need to introduce the cutoff scale into the system, at which
predictive information is computed, similarly to how one does this in
the renormalization group theory. For this, we redefine predictive
information as mutual information between the past of duration $T_{\rm
p}=N\Delta t$ and the future of duration $T_{\rm f}=M\Delta t$,
separated by a ``scale'' gap of duration $T_{\rm s}=L\Delta t$, which
remains finite as $\Delta t\to0$. That is
\begin{multline}
\label{eqn:geninfo}
I_{\rm pred}(N,M|L)= \\
\left\langle \log_2\frac{ P_0 \prod_{n=1}^{N-1}P_{n|n-1}P_{N+L|N-1}\prod_{m=N+L+1}^{N+L+M-1} P_{m|m-1}}
{P_0 \prod_{n=1}^{N-1}P_{n|n-1}P_{N+L}\prod_{m=N+L+1}^{N+L+M-1} P_{m|m-1}}\right\rangle \\ =\left\langle\log_2
\frac{P_{N+L|N-1}}{P_{N+L}}\right\rangle=I[x_{N+L};x_{N-1}].
\end{multline}
Here
\begin{equation}
P_{N+L|N-1}=\int \prod_{n=N}^{N+L-1}dx_n\prod_{m=N}^{N+L} P_{m|m-1}.
\end{equation}
\section{Invariance of predictive information}
From Eq.~(\ref{eqn:geninfo}), it is clear that predictive
information is invariant under reparameterization of $x$. This is a
desired property for any potential universal order parameter. Further,
any experimental device measuring $x(t)$ will act as a temporal
filter, so that the measured values will be convolutions of true $x$'s
at nearby time points. Thus it is also desirable for the
nonequilibrium order parameter to be invariant to temporally local
invertible transformations of data \cite{Bialek:2001wv}. Does the
predictive information obey this property?
The filter, represented by ${\mathcal F}$, maps the sequences of true
states of the system $\{x\}$ into measured data $\{\chi\}$. We
require that the filter does not inject additional information into
the dynamics. This means that the extraneous parameters of the mapping
$\mathcal{F}$ must be known. In a real-life experiment, this means
that we would like to be able to separate the behavior of the observed
system from any artifacts associated with the experimental setup.
In general terms, such a filter can be represented by a convolution
kernel $\mathcal{L}(t-t')$. Since a convolution mixes the past and the
future, the measured data $\{\chi\}$ is no longer Markovian. We
require that the so-introduced statistical dependences are short
lived, i.e., the kernel $\mathcal{L}(t-t')$ has compact support
or decreases with time exponentially or faster. This is our definition
of temporal locality.
Convolutions are reductions in rank and therefore (potentially)
invertible only for infinitely long data sequences. Therefore, we can
define invertibility only in the $t\to\infty$ limit. To this end, let
$\mathfrak{V}=\bigcup_n \mathbb{R}^n$ be the space of all
temporally discretized, finite length trajectories, that is the space
of all $n$-tuples of $x$, $n<\infty$. Let $\mathcal{F}:
\mathfrak{V}\to\mathfrak{V}$ be a function such that $\mathcal{F}(
\mathbb{R}^{N+\nu})\subset\mathbb{R}^N$. That is, a sequence of $N$
data points is defined from $N+\nu$ points through some filtering
procedure. We consider this mapping to be invertible if the
Radon-Nikodym derivative over the set $\mathcal{F}^{-1}\left(\mathbf
x\in \mathbb{R}^N\right)$ converges to a delta function for
$N\to\infty$. More specifically, the probability of observing a
trajectory $\{\chi_i\}_{i=1}^N$ is given by
\begin{multline}
\label{eqn:probchi}
P(\{\chi_j\}_{j=1}^N)= \int\,d^{N+\nu}xP(\{x_j\}_{j=-\nu}^N)
\prod_{j=1}^N \delta\left(\chi_j-\sum_k \mathcal{L}(j-k)x_k \right) \\
=\int\,d^{N+\nu}x\,d^N\lambda\times \\ \times \exp\left[
-i\sum_{j=1}^N\lambda_j \left(\chi_j-\sum_k \mathcal{L}(j-k)x_k \right)
+\ln P(\{x_j\}_{j=-\nu}^N) \right].
\end{multline}
Thus invertibility requires that the Hessian matrix of the exponent in
this equation diverges, defining a dominant stationary solution of the
corresponding ``action''. With this requirement, $\{\chi_i\}$ are
simply reparameterizations of $\{x_i\}$, and predictive
information is invariant under the change. While this requirement is
very general, we suspect that, in practice, it will be equivalent to
the asymptotic properties of trajectory-averaged quantities, for which
there are already well established results \cite{jones:2004}. We leave
exploration of these conditions to future work.
\section{Solving the model}
To calculate predictive information in the model, we first
calculate the Green's functions (the marginal and the conditional
distributions) of Eq.~(\ref{eqn:lang}). For this, we write the
Fokker-Planck equation corresponding to the Langevin dynamics
\begin{equation}
\label{eqn:FPgen}
\partial_tp(x,t)=\partial_x\left[x(x^2+\tau)p(x,t)+\sigma^2 \partial_x\left(|x|^{\alpha} p\left(x,t\right)\right)\right].
\end{equation}
This equation immediately confirms our earlier statement that
$p(x,t)=\delta(x)$ is a stationary state, stability of which depends
on the strength of the noise, which in turn is controlled by
$\alpha$. As a result, the equation can develop a singularity near
$x=0$. Fortunately, the probability current at $x=0$ is zero. Thus
for $x_0>0$, we can consider $x(t)>0$ for any $t$. Further, we seek
the solution for $\tau>0$, hoping further to analytically continue to
the entire real axis of $\tau$. With these caveats, we make the
following simplifying transformations:
\begin{align}
&\bar{\tau}\equiv\frac{\beta^2}{\sigma^2}\hat\tau=\beta\tau/\sigma^2,\\
&\hat{t}=t\tau/\beta,\\
&\hat y\equiv y\hat{\tau}^{1/2}=x^{-1/\beta}\hat{\tau}^{1/2},\\
&f=y^{-\beta\alpha}p\left(x\left(y\right),t\right),\\
&\beta=2/(\alpha-2),\\
&n=2(\alpha-1)/(\alpha-2).\label{eq:n}
\end{align}
Then Eq.~(\ref{eqn:FPgen}) becomes
\begin{equation}
\label{eqn:al5diff}
\hat{y}^{n-1}\partial_{\hat{t}} f=-\partial_{\hat{y}}\left[\left(
\hat{y}^n+\frac{\beta\bar{\tau}^{(n-3)}}{\sigma^2}\hat{y}^{4-n}\right)f\right]+
\partial_{\hat{y}}\left(\hat{y}^{n-1}\partial_{\hat{y}} f\right) .
\end{equation}
The solution should obey the boundary conditions $p(\hat y=0,t)=p(\hat y\to\infty,t)=0$. The
former condition is a result of the inverse relationship between $x$
and $\hat y$, while the latter is due to $x=0$ being the absorbing state.
It is important to discuss the allowed values of $\alpha$ at this
point. From Eq.~(\ref{eq:n}), $n$ becomes divergent at $\alpha=2$.
This corresponds to a large noise, which hides the phase transition.
On the other hand, for large $\alpha$, the noise is negligible, and
the system is in an effectively deterministic regime. This happens at
$n\le 3$, where the second term in
Eq.~(\ref{eqn:al5diff}) is suppressed as $\bar\tau\to 0$. Thus we are
interested in $3< n<\infty$, which corresponds to $2<\alpha<4$. In
this regime, the $\bar{\tau}$ term in Eq.~(\ref{eqn:al5diff}) is
negligibly small, and can be dropped.
With this, we notice that Eq.~(\ref{eqn:al5diff}) is the radial part
of the diffusion equation in $n$ dimensions. Thus our strategy is to
solve it first for $n$ integer, hoping to analytically continue to all
$n$ later on. Assuming an integer $n$, we rewrite
Eq.~(\ref{eqn:al5diff}):
\begin{equation}
\label{eqn:al6diff}
\partial_{\hat{t}} f=- nf -{\hat y}\partial_{\hat y} f
+ \frac{1}{{\hat y}^{n-1}}\partial_{\hat y}\left({\hat y}^{n-1}\partial_{\hat y} f\right).
\end{equation}
Therefore, $f(\hat y)$ is the radially symmetric part of the solution of
the following equation
\begin{equation}
\label{eqn:al7diff}
\partial_{\hat{t}} f=-n f -
\mathbf{\hat y}\cdot\mathbf{\nabla} f + \nabla^2 f.
\end{equation}
We solve this equation in Appendix A, resulting in:
\begin{multline}
\label{eqn:ugrn}
G(t,y,z)= C(n)z^{n-1}\left(\frac{\hat\tau}{2\pi (e^{2 \hat\tau
t}-1)}\right)^{n/2}\times\\
\int_{-1}^1 \,d \lambda \exp\left(-\frac{\hat\tau}{2(e^{2\hat\tau
t}-1)}(y^2-2yze^{\hat\tau t}\lambda+z^2e^{2\hat\tau t})\right) K(\lambda),
\end{multline}
where $K(\lambda)$ is a kernel, which, for integer $n$, is the Jacobian of
the $n$-dimensional change of variables from Cartesian to spherical
coordinates. We still need to determine it for non-integer
dimensions. For this, we substitute the expression of
Eq.~(\ref{eqn:ugrn}) in Eq.~(\ref{eqn:al6diff}) (for general $n$) and
find that it is satisfied if and only if $K$ obeys
\begin{equation}
\label{eqn:K}
\partial_{\lambda}^2[(1-\lambda^2)K(\lambda)]+(n-1)\partial_{\lambda}(\lambda K(\lambda))=0.
\end{equation}
To guarantee regularity at $\lambda=\pm 1$ (and in analogy with the
integer dimensional cases), we additionally impose the condition
that $K(\pm 1)=0$, leading to the solution
\begin{equation}
K(\lambda)=(1-\lambda^2)^{\frac{n-3}{2}}.
\end{equation}
The normalization constant $C(n)$ can be determined from the
requirement that the integral over $y$ for a fixed $z$ is unity when
$t\to 0$. In the case of an integer $n$, $C(n)$ is the area of the
unit sphere in $n-1$ dimensions. To verify this for any value $n$, we
need to perform the integration explicitly. To this end, it is
convenient to introduce $\Delta=[(e^{2\hat\tau t}-1)/\hat\tau]^{1/2}$,
and $z'=ze^{\hat\tau t}$. Then integrating Eq.~(\ref{eqn:ugrn}), we get
\begin{multline}
\label{eqn:intgrn}
\int_0^{\infty}G(t,y,z)\,dy=C(n)z^{n-1}\left(\frac{1}{\sqrt{2\pi}
\Delta}\right)\int_{0}^{\infty} dy \exp\left(-\frac{(y-z')^2}{2\Delta^2}\right)\\
\int_{-1}^1(\sqrt{2\pi})^{1-n}\Delta^{1-n}\exp\left(-\frac{yz'(1-\lambda)}{\Delta^2}\right) K(\lambda) \,d\lambda.
\end{multline}
We concentrate on the inner integral first. We perform the
substitution $\xi=yz'(1-\lambda)/\Delta^2$ which leads to
\begin{multline}
\label{eqn:int1grn}
\int_0^{2yz'/\Delta^2} (yz')^{-\frac{n-1}{2}}(\sqrt{2\pi})^{1-n}e^{-\xi}\left[\xi\left(2-\frac{\Delta^2 \xi}{yz'}\right)\right]^{\frac{n-3}{2}}\,d\xi\xrightarrow[\Delta\to 0]{}\\
\frac{(yz)^{-\frac{n-1}{2}}}{2\pi^{(n-1)/2}}\int_{0}^{\infty}e^{-\xi}
\xi^{\frac{n-3}{2}}\,d\xi
=\frac{1}{2\pi^{(n-1)/2}}(yz)^{-\frac{n-1}{2}}\Gamma\left(\frac{n-1}{2}\right).
\end{multline}
By dominated convergence, the limit is valid for any $y$ and all $n>
1$. (The cases $3\ge n> 1$ follow from the fact that
$\xi(2-\Delta^2 \xi/yz')\ge \xi$ for $0<\xi\le
yz'/\Delta^2$, while the portion of the integral in Eq. \ref{eqn:int1grn}
between $yz'/\Delta^2<\xi\le 2yz'/\Delta^2$ converges to $0$
as $\Delta\to 0$).
Furthermore, since $yz'/\Delta^2$ controls the convergence in a
monotonic fashion, the limit
is uniform on any semi-infinite interval not containing 0. Since the
convergence is dominated by a multiple of $(yz)^{-(n-1)/2}$,
particularly for the values of $y$ close to zero, we recognize the
outer integral in Eq.~(\ref{eqn:intgrn}) as a delta function.
Therefore, in order to bring the value of Eq.~(\ref{eqn:intgrn}) to
unity, we need that
\begin{equation}
C(n)=\frac{2\pi^{(n-1)/2}}{\Gamma((n-1)/2)},
\end{equation}
which is the area of the $n-1$ dimensional unit sphere when $n$ is
integer.
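As a quick sanity check (not part of the derivation above), one can verify numerically that $C(n)\int_{-1}^{1}K(\lambda)\,d\lambda$ reproduces the full surface area $2\pi^{n/2}/\Gamma(n/2)$ of the unit sphere in $\mathbb{R}^n$, including at non-integer $n$.

```python
import math

def C(n):
    """Normalization constant C(n) = 2 pi^((n-1)/2) / Gamma((n-1)/2)."""
    return 2.0 * math.pi ** ((n - 1) / 2.0) / math.gamma((n - 1) / 2.0)

def kernel_integral(n, m=200000):
    """Midpoint rule for int_{-1}^{1} (1 - l^2)^((n-3)/2) dl, rewritten
    via l = cos(theta) as int_0^pi sin(theta)^(n-2) dtheta."""
    h = math.pi / m
    return h * sum(math.sin((k + 0.5) * h) ** (n - 2) for k in range(m))

def sphere_area(n):
    """Surface area 2 pi^(n/2) / Gamma(n/2) of the unit sphere in R^n."""
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

# C(n) * int K(l) dl reproduces the full solid angle, also at non-integer n
residuals = {n: C(n) * kernel_integral(n) - sphere_area(n)
             for n in (3.0, 3.5, 4.0, 4.7)}
```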
By reverting back to the original coordinate $x$, we can rewrite
Eq.~(\ref{eqn:ugrn}) and obtain the solution in these coordinates.
However, for the purposes of the next section, it is more convenient
to stay in the $y$ space instead. Notice that if we make the
substitution $\tilde p=y^{-\alpha\beta/2}p$ in Eq.~(\ref{eqn:FPgen}),
we obtain
\begin{equation}
\label{eqn:normdiff}
\partial_t\tilde p=-\frac{1}{\beta}\partial_y\left((\hat\tau
y+\frac{\alpha\sigma^2}{2}y^{-1}+y^{5-2n})\tilde p\right)+
\frac{\sigma^2}{\beta^2}\partial_y^2\tilde p .
\end{equation}
The advantage of $\tilde p$ over $f$ calculated earlier is that
$\tilde p$ is a probability distribution. We can immediately write its
Green's function from Eq.~(\ref{eqn:ugrn}) since $\tilde
p(t,y)=y^{n-1}f(t,y)$:
\begin{multline}
\label{eqn:fgrn}
\tilde G(t,y,z)=C(n)(y)^{n-1}\left(\frac{\hat\tau}{2\pi (e^{2 \hat\tau t}-1)}\right)^{n/2} \times\\
\times\int_{-1}^1 \,d \lambda
\exp\left(-\frac{\hat\tau}{2(e^{2\hat\tau t}-1)}(y^2-2yze^{\hat\tau
t}\lambda+z^2e^{2\hat\tau t})\right) K(\lambda) .
\end{multline}
This is the main result of this section, which we will use in order
to calculate predictive information for our model. One can verify by
explicit substitution that the expression in Eq.~(\ref{eqn:fgrn})
satisfies the Fokker-Planck equation, Eq.~(\ref{eqn:normdiff}), and it
reduces to a delta function as $t\to 0$. Thus it represents the
conditional distribution of $y$ given $z$.
\section{Predictive information for the model}
Predictive information is reparameterization invariant. Thus we
can calculate it for $y$ instead of $x$ and use the expression,
Eq.~(\ref{eqn:fgrn}), when applying the Eq.~(\ref{eqn:geninfo}) to our
model. Without loss of generality, we assume that the initial
condition is a delta function. Then the continuous form of
Eq.~(\ref{eqn:geninfo}) is
\begin{equation}
\label{eqn:geninfocont}
I_{\rm pred} (t)=\left\langle
\log_2\frac{\tilde G(\tilde t,y,z)}{\tilde G(t+\tilde t,y,w)}
\right\rangle,
\end{equation}
where $w$, $z$, and $y$ are the values of the observable at times $0$,
$t=(N-1)\Delta t$, and $T\equiv t+\tilde t=(N+L)\Delta t$
respectively, i.e., $w=x_0^{-1/\beta}$, $z=x_{N-1}^{-1/\beta}$, and
$y=x_{N+L}^{-1/\beta}$. Equation (\ref{eqn:geninfocont}) involves an
integral with complex time and $\hat\tau$ dependences. In the
following, we would like to find the leading orders of these
dependences. Defining $\Delta(t)=[(e^{2\hat\tau t}-1)/\hat\tau]^{1/2}$
(cf.~Eq.~(\ref{eqn:ugrn})), it is also convenient to introduce
$\Xi(t;\lambda,y,w)=\exp[-(y^2-2ywe^{\hat\tau t}\lambda+w^2e^{2\hat\tau
t})/ (2\Delta(t)^2)]$, so that Eq.~(\ref{eqn:fgrn}) takes on the form
\begin{equation}
\label{eqn:fgrn1}
\tilde G(t,y,z)=C(n)\frac{(2\pi)^{-n/2}}{\Delta(t)^n}
\int_{-1}^1 \,d\lambda\, K(\lambda)\, \Xi (t;\lambda,y,z).
\end{equation}
Then Eq.~(\ref{eqn:geninfocont}) becomes
\begin{multline}
\label{eqn:geninfocont1}
I_{\rm pred}(t)=n\log_2\frac{\Delta(T)}{\Delta(\tilde t)}+
\left\langle\log_2
\int_{-1}^1 \,d\lambda K(\lambda) \Xi (\tilde t;\lambda,y,z)
\right\rangle- \\
\left\langle\log_2
\int_{-1}^1 \,d\lambda K(\lambda) \Xi (T;\lambda,y,w)
\right\rangle.
\end{multline}
In Appendix C, we show that the last two terms in
Eq.~(\ref{eqn:geninfocont1}) are asymptotically constant when
$T\to\infty$ if $t$ is large and $\hat\tau$ is small. Therefore, to
the leading order, predictive information is
\begin{equation}
\label{eqn:infofinal}
I_{\rm pred}(t)\approx n\log_2\frac{\Delta(T)}{\Delta(\tilde t)}= \frac{n}{2}\log_2\frac{\exp[2\hat\tau(t+\tilde t)]-1}{\exp(2\hat\tau \tilde t)-1}.
\end{equation}
At the critical point, when the absorbing state is just starting to
emerge, $\hat\tau\to 0$, this expression reduces to
\begin{equation}
\label{eqn:infofinal0}
I_{\rm pred}(t)\approx \frac{n}{2}\log_2\frac{t+\tilde t}{\tilde t}.
\end{equation}
This logarithmic growth with the system size $t$ has been anticipated
for a critical point in Ref.~\cite{Bialek:2001wv}, but has not been
calculated before for any nonequilibrium stochastic dynamical
system. A plot of Eq.~(\ref{eqn:infofinal}) is given for different
parameter values in Fig.~\ref{fig:1}.
Notice that the prefactor $n=2(\alpha-1)/(\alpha -2)$ increases with
the effect of the noise, which corresponds to more partially
predictable variability in the dynamics, and hence to an intuitively
higher complexity. Further, as $\alpha\to2$, or $n\to\infty$, the
leading term in predictive information becomes extensive, and
hence it would cancel out in the difference of entropies in
Eq.~(\ref{eq:pred_Defined}), leading to $I_{\rm pred}(t)={\rm const}$.
Equation~(\ref{eqn:infofinal}) also allows calculation of the
asymptotic away from the phase transition. For large negative
$\hat{\tau}$, $I_{\rm pred}(t)={\rm const}$. For large positive $\hat\tau$,
$I_{\rm pred}(t)\propto t$, since perfect prediction is possible in the
absorbing state. Hence this extensive term cancels out as well, leading
to the constant limit, and indicating the absence of a phase transition. These
results illustrate that divergence of predictive information correctly
captures the existence of the phase transition (emergence of the
absorbing state) at $\tau\to0$.
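These three regimes can be seen directly by evaluating the leading term $n\log_2[\Delta(T)/\Delta(\tilde t)]$ with $\Delta(t)^2=(e^{2\hat\tau t}-1)/\hat\tau$; the prefactor $n$ and the times used below are arbitrary illustrative values.

```python
import math

LOG2E = 1.0 / math.log(2.0)

def log2_abs_expm1(a):
    """log2|exp(a) - 1|, stable against overflow for large positive a."""
    if a > 700.0:
        return a * LOG2E
    return math.log2(abs(math.expm1(a)))

def i_pred(t, tau_hat, n=4.0, t_tilde=1.0):
    """Leading-order predictive information n*log2(Delta(T)/Delta(t~)),
    with Delta(t)^2 = (exp(2 tau t) - 1)/tau and T = t + t_tilde.
    The prefactor n = 4 is an arbitrary illustrative value."""
    if tau_hat == 0.0:
        # tau -> 0 limit: Delta(t)^2 -> 2 t, so the ratio becomes (T/t~)^(1/2)
        return 0.5 * n * math.log2((t + t_tilde) / t_tilde)
    return 0.5 * n * (log2_abs_expm1(2 * tau_hat * (t + t_tilde))
                      - log2_abs_expm1(2 * tau_hat * t_tilde))

# tau < 0 saturates to a constant, tau = 0 grows as log t, tau > 0 grows ~ t
table = {tau: [i_pred(t, tau) for t in (10.0, 100.0, 1000.0)]
         for tau in (-1.0, 0.0, 1.0)}
```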
\begin{figure}
\includegraphics[width=0.75\textwidth]{info1.eps}
\caption{\label{fig:1} A plot of $I_{\rm pred}/n$ for different values of $\tau<0$ at
different times $T$ for a fixed $\tilde t=1$.}
\end{figure}
\section{Discussion}
Predictive information was introduced in Ref.~\cite{Bialek:2001wv} as
information between the past and the future of a time series, or
between left and right parts of a physical system. It was argued, in
particular, that the behavior of predictive information as the system
size grows can signal existence of a phase transition. As an example,
Ref.~\cite{Bialek:2001wv} calculated the information numerically for
an equilibrium long-range one-dimensional Ising magnet. In the
current work, we argue that predictive information can be used as a
universal order parameter in more complicated scenarios, such as in
nonequilibrium contexts, where traditional symmetry arguments fail to
identify low-order correlation functions that can serve this role. For
the first time, we calculate predictive information for a
nonequilibrium Markov process, which exhibits a phase transition at
certain values of parameters. Divergence of predictive information
correctly captures this phase transition. In addition to results {\em at}
and {\em far away} from the critical point, our calculations reveal how
predictive information behaves {\em near} a phase transition,
exhibiting a smooth crossover from an asymptotically constant to an
asymptotically divergent regime. To our knowledge, this has not been
calculated before, either for equilibrium or for nonequilibrium
systems.
One important technical difference between this work and the previous
ones is the introduction of an additional ``renormalization'' scale,
$L$ or $\tilde{t}$, in the definition of predictive information,
so that the information is calculated between the past and the future
that are separated by a finite distance. This removed the ultraviolet
divergences associated with information at the interface between the
past and the future of a trajectory. While this modification was
precipitated by the continuous time/space nature of the stochastic
process, we believe that it will additionally resolve difficulties with
application of predictive information ideas to systems with more than
one dimension. Indeed, there the main problem is that the interface
between two parts of a system diverges with the system size, and hence
the interfacial contribution to predictive information diverges
even away from a critical point. This will not happen if direct
interfaces are eliminated.
In summary, in this paper, we provide the first example of a direct
analytical calculation of predictive information for a nonequilibrium
stochastic process. This example argues further for using predictive
information as a universal order parameter for studying phase
transitions.
\begin{acknowledgements}
This work has been supported in part by a James S.\ McDonnell
Foundation Complex Systems Grant No.~220020321. We would like to
thank HGE Hentschel for stimulating discussions.
\end{acknowledgements}
\section{Introduction}
Since the first conclusive detection~\cite{Smoot:1992td} of the $\Delta T/T={\cal O}(10^{-5})$ temperature fluctuations in the cosmic microwave background radiation (CMB), a new concordance picture of cosmology has been established. This is supported by vastly increased observational precision in CMB measurements~\cite{Komatsu:2010fb,Dunkley:2010ge,Story:2012wx,Ade:2013uln,Ade:2013zuv} as well as measurements of the redshift-distance relation for large samples of distant type IA supernovae~\cite{Riess:1998cb,Perlmutter:1998np}, baryon acoustic oscillations (BAO)~\cite{Percival:2009xn}, and the Hubble parameter $H_0$ by the Hubble Space Telescope key project~\cite{Riess:2011yx}. The results paint a Universe very close to being spatially flat, where large-scale structure originates from a pattern of coherent acoustic oscillations in the early dense plasma which was seeded by an almost scale-invariant power spectrum of super-horizon size curvature perturbations with Gaussian distribution. These initial
conditions arise as a direct
consequence of a wide class of models of cosmological inflation driven by the potential energy of a scalar field. An inflationary origin of the observed curvature perturbation spectrum predicts in addition the existence of a similarly almost scale-invariant power spectrum of super-horizon size primordial gravitational waves. The magnitude of this `tensor mode' power spectrum, and in turn its detectability, is determined by the energy scale at which inflation took place.
Such almost scale-invariant power spectra of super-horizon size curvature perturbations and tensor mode perturbations with Gaussian distribution can be described at the Gaussian level by just three observational quantities: the overall normalization $\Delta_s^2$ of the curvature perturbation power spectrum (known since the COBE measurements~\cite{Smoot:1992td}), its spectral tilt $n_s$, describing the (small) deviations from scale invariance expected in most models of inflation, and the fractional power $r$ in tensor modes. $n_s$ has been constrained by various combinations of the WMAP satellite CMB results~\cite{Komatsu:2010fb} with type IA SN, BAO and $H_0$ data. In combination with the recently released PLANCK CMB temperature data, covering about two and a half full-sky surveys~\cite{Ade:2013uln,Ade:2013zuv}, and with the earlier 2012 Atacama Cosmology Telescope~\cite{Dunkley:2010ge} and South Pole Telescope~\cite{Story:2012wx} CMB data, this led to an unambiguous $>5$-$\sigma$ detection of a red tilt $n_s<1$. The
tensor mode power fraction $r$ is so far subject to an upper bound, most recently improved to $r<0.12\,(95\%)$ by the PLANCK analysis~\cite{Ade:2013uln,Ade:2013zuv}. A future analysis of
the PLANCK satellite's CMB B-mode polarization data, as well as data from upcoming polarized ground-based CMB detectors, may substantially sharpen this upper bound in the next few years.
Inflationary theory determines these three numbers in terms of the value of the scalar potential $V_0$ at the time when the largest observable scales exited the inflationary horizon (about 60 e-folds before the end of inflation), and its first and second derivatives $V'_0$, $V''_0$ with respect to the inflation scalar field $\phi$ at that time. This implies that there are huge classes of scalar potentials $V(\phi)$ even for single-field models which yield identical predictions for $\Delta_s^2$, $n_s$, and $r$.
In any attempt to connect data with theory, potential degeneracies must be taken into account before any conclusions can be drawn. In this context it is important to understand the structure of this very large model space, and look for degeneracies between large classes of inflationary models with respect to the three observable quantities. We will restrict our attention here to single-field models of inflation which partition into two large classes: models with a canonically normalized kinetic term $\frac12 (\partial_\mu\phi)^2$, and so-called non-canonical inflation models with Lagrangian ${\cal L}=p\left((\partial_\mu\phi)^2,\phi\right)$. Non-canonical inflation has been studied field-theoretically in the context of k-inflation~\cite{Garriga:1999vw}, and within string theory in DBI-inflation~\cite{Silverstein:2003hf}. In both cases the function $p\left((\partial_\mu\phi)^2,\phi\right)$ can be written as an (infinite) sum over higher powers of the derivative $(\partial_\mu\phi)^2$ with potentially field-
dependent pre-factors. These terms can lead to additional effective friction terms in the equations of motion for the inflaton. They
can slow down the rolling of the scalar field into a regime of vacuum energy domination for potentials which would be too steep to do so in presence of a canonically normalized kinetic term alone. More general studies of such non-canonical models of inflation can be found in~\cite{Franche:2009gk}, while the effective field theory of inflationary quantum fluctuations in such general settings is discussed in~\cite{Cheung:2007st}. Non-canonical inflation quite generally leads to appreciable levels of non-Gaussianity of the inflationary quantum fluctuations~\cite{Garriga:1999vw,Silverstein:2003hf}, which has been analyzed more generally in~\cite{Chen:2006nt}, and has its full effective field theory treatment in~\cite{Cheung:2007st}.
We will look at the question of whether there are degeneracies between canonical and non-canonical models of inflation with respect to the three observational quantities describing their predicted power spectra at the Gaussian level. This question has been attacked from the point of view of reconstructing the inflationary action from observables using Monte Carlo simulations in \cite{Easson:2012id}.\footnote{Earlier work towards reconstructing the inflationary potential was done for a canonical scalar field in \cite{Easther:2002rw}, and for a general action with noncanonical kinetic terms in \cite{Bean:2008ga, Gauthier:2008mq, Powell:2008bi}.}
The method of canonical transformations for transforming noncanonical kinetic terms into canonical kinetic terms, even in $0+1$D, appears to be limited to the case where the noncanonical theory has a quadratic potential, as we elucidate in Appendix \ref{sec:AppB}. Therefore we work here at the level of the action and of the inflationary solution itself. While formally non-canonical 2-derivative models of the form ${\cal L}=f(\phi)(\partial_\mu\phi)^2-V(\phi)$ can always be transformed off-shell by a local field redefinition into a canonical model with a transformed scalar potential, this question is rather non-trivial in the presence of
higher-power kinetic terms. As the inflationary behavior of a given model is described in terms of a generalized slow-roll attractor solution in phase space, we will look at possible on-shell transformations of a given non-canonical model on its inflationary attractor into an equivalent canonical slow-roll inflation model. We find the general formalism for performing this matching of trajectories, which will give the canonical potential $V_{can}$ leading to slow-roll inflation in a canonical theory, with inflationary trajectory $X_{inf} (\phi)$ matching exactly that in the given non-canonical model. This matching is quite general.
Furthermore, the 2-point observables $\Delta_s^2$, $n_s$, and $r$ are shown, numerically and analytically, to match in the case of DBI inflation, over a range of efolds. This degeneracy is nontrivial, and seen for a large range of field values well outside of the canonical regime of DBI. It could not be resolved with the currently available data at the 2-point level, requiring a measurement of the ratio of $r$ and $n_T$ to distinguish the two theories.
Note that 3-point function observables, i.e. non-Gaussianities, while generally negligible in single-field canonical inflationary models, can be appreciable in certain special cases. A sum of oscillating terms in the potential can lead to an approximately equilateral-type non-Gaussian signal \cite{Gwyn:2012pb}, while coupling of the inflaton to gauge quanta can also give rise to large equilateral-type non-Gaussianity \cite{Barnaby:2010vf, Barnaby:2011qe}. \footnote{Note that such models may be subject to a strong bound on the power spectrum coming from the non-detection of primordial black holes \cite{Linde:2012bt}.} This becomes even more interesting given that the analysis of the recent PLANCK CMB temperature data constrained local-shape non-Gaussianity arising from multi-field inflation models with $f_{NL}^{loc.}=2.7\pm 5.8$~\cite{Ade:2013ydc} down to non-primordial foreground levels, while leaving a considerable window for equilateral non-Gaussianity with $f_{NL}^{equil.}=42\pm75$~\cite{Ade:2013ydc}.
Hence, a matching of the 2-point function observables can in principle be extended to 3-point function observables by adding additional couplings or features to the potential of the canonical theory. We find matching of the 2-point function observables to be possible precisely for the case of DBI inflation while failing for simple classes of DBI-inspired generalizations. This may point to a special status for DBI inflation as a member of the non-canonical class in that it can be related to a canonical model of inflation with matching 2-point function observables.
Our discussion proceeds as follows. In Section~\ref{sec:review} we review briefly the relevant aspects of non-canonical inflation, while in Section~\ref{findcanpot_sec} we discuss the on-shell transformation of a non-canonical model into a canonical one on the inflationary attractor of the non-canonical model. Section~\ref{theoriesspeedlimit_sec} discusses the relation of the 2-point function observables under the transformation between several classes of non-canonical inflation with a speed limit inspired by and including DBI inflation and their associated canonical models. Our main example, DBI inflation, we analyze in Section~\ref{DBIinflectionexample_sec}. Section~\ref{csnot1_sec} treats the corrections from typically the reduced speed of sound in non-canonical inflation to the 2-point function observables, and we conclude in Section~\ref{sec:conc}. There are two appendices which contain a short discussion of the accessibility of the non-canonical regime for DBI inflation (Appendix~\ref{sec:AppA}), and
an analysis of possible off-shell transformations between non-canonical and canonical theories using a form of canonical transformations (Appendix~\ref{sec:AppB}).
\section{Review: Non-canonical inflation}\label{sec:review}
We study inflationary dynamics of a single scalar field $\phi$ minimally coupled to gravity via
\begin{equation}
S = \int d^4 x \sqrt{g} \left[\frac{M_p^2}{2}\mathcal{R} + p(X,\phi) \right]\,,\label{Leffgen}
\end{equation}
with $X\equiv -(\partial_\mu \phi)^2 = \dot \phi^2/2$ in a homogeneous FLRW background $ds^2 = -dt^2 + a(t)^2 dx^2$.
From an effective field theory point of view, we expect the function $p(X,\phi)$ to have the form
\begin{equation}
p(X,\phi)=\sum_{n\geq 0} c_n(\phi) \frac{X^{n+1}}{\Lambda^{4n}} - V(\phi) = \Lambda^4 S(X,\phi)-V(\phi)\,,\label{psum}
\end{equation}
with some cutoff scale $\Lambda$. In this work, we will restrict ourselves to the case where the coefficients $c_n$ are not field dependent, i.e. $c_n(\phi)=c_n$, such that $p(X,\phi)$ is separable, i.e.
\begin{equation}
p(X,\phi)= \Lambda^4 S(X) - V(\phi)\,.
\end{equation}
A theory is intrinsically non-canonical if the higher-order kinetic terms $X^n$ with $n>1$ play a significant role in the dynamics. Note that this is qualitatively different from theories with non-canonical kinetic terms where one can, at least in principle, find a field redefinition transforming to a canonical theory.
The inflationary dynamics and observables are described in terms of the generalized slow-roll parameters~\cite{Garriga:1999vw,Chen:2006nt} given as
\begin{equation}
H=\frac{\dot{a}}{a}\,,\quad \epsilon=-\frac{\dot{H}}{H^2}\,,\quad \eta=\frac{\dot \epsilon}{H\,\epsilon}\,,\quad \kappa=\frac{\dot c_s}{H\,c_s}\,,\quad c_s^2 = \left(1+2X\frac{\partial^2 p / \partial X^2}{\partial p / \partial X} \right)^{-1}\,,
\end{equation}
which reduce to
\begin{equation}
H=\frac{\dot{a}}{a}\,,\quad \epsilon=\epsilon_{V}=\frac{1}{2} \left(\frac{V'}{V}\right)^2\,,\quad \eta= 4 \epsilon_{V} - 2\eta_{V}\,, \quad \eta_{V} = \frac{V''}{V}\,,\quad \kappa=0\,,\quad c_s^2 = 1\,,
\end{equation}
in the canonical case $p(X,\phi)= X - V(\phi)$. The equations of motion can be derived as the Friedmann equations of a perfect fluid
\begin{align}
\begin{aligned}
&H^2 = \frac{1}{3M_p^2}\rho\,,\\
&\frac{\ddot a}{a} = - \frac{1}{6 M_p^2} (\rho + 3p)\,,\label{Friedmanneq}
\end{aligned}
\end{align}
with pressure $p = p(X,\phi)$ and energy density
\begin{equation}
\rho = 2 X \frac{\partial p}{\partial X} - p\,.
\end{equation}
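As a quick consistency check (our own sketch, not part of the original text), this energy density can be verified symbolically for both the canonical Lagrangian and the DBI-type kinetic term used later in the paper:

```python
import sympy as sp

X, V, Lam = sp.symbols('X V Lambda', positive=True)

def rho(p):
    """Energy density of a perfect fluid with pressure p(X): rho = 2 X dp/dX - p."""
    return sp.simplify(2 * X * sp.diff(p, X) - p)

# canonical case p = X - V: recovers kinetic plus potential energy, rho = X + V
rho_can = rho(X - V)

# DBI-type case p = Lam^4 (1 - sqrt(1 - 2 X/Lam^4)) - V:
# rho = Lam^4 (1/sqrt(1 - 2 X/Lam^4) - 1) + V, diverging at the speed limit
rho_dbi = rho(Lam**4 * (1 - sp.sqrt(1 - 2 * X / Lam**4)) - V)
```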
Inflationary solutions $p_{inf}\simeq - \rho_{inf}$ to eq.~\eqref{Friedmanneq} can be found as algebraic solutions $X_{inf}=X(A)$ to the equation~\cite{Franche:2009gk}
\begin{equation}
\sqrt{2\frac{X}{\Lambda^4}} \frac{\partial p}{\partial X} = A\,,\label{eqofminf}
\end{equation}
with the `non-canonicalness' parameter
\begin{equation}
A(\phi)=\frac{V'}{3\,H\,\Lambda^2}\,.\label{Adefgen}
\end{equation}
For $A\ll1$ the theory is in its canonical limit, i.e. $p(X,\phi)\simeq X - V(\phi)$ while for $A\gg1$ the theory shows its non-canonical nature, i.e. the terms $X^n$ with $n>1$ dominate the Lagrangian.
For theories with a finite convergence radius $X/\Lambda^4<R$ of $S(X)$, it was shown that a truly non-canonical inflationary solution of eq.~\eqref{eqofminf} with $A\gg1$ exists under the following conditions:
\begin{itemize}
\item The derivative $\partial_X S(X)$ diverges at the radius of convergence $R$.
\item The potential is large in units of the cutoff scale, i.e. $V\gg\Lambda^4$ such that the energy density of the potential always dominates that of the kinetic terms~\footnote{Note that the effective field theory description is valid as long as $H<\Lambda$. This generally allows large values of the potential in terms of the cutoff scale since $\frac{H}{\Lambda} \simeq \left( \frac{V}{\Lambda^4} \right)^{1/2}\frac{\Lambda}{M_p}$.}.
\end{itemize}
Note that a finite radius of convergence implies a speed limit $X<R\Lambda^4$. Theories without a speed limit, with $p(X,\phi)$ monotonically increasing in $X$, might lose validity as effective field theories for $X > \Lambda^4$.
The scalar power spectrum $\Delta_s^2$, the tensor power spectrum $\Delta_t^2$, the scalar spectral index $n_s$ and the tensor spectral index $n_t$ can then be calculated via~\cite{Garriga:1999vw,Chen:2006nt}
\begin{align}
\begin{aligned}
\Delta_s^2(k) &= \left. \frac{1}{8\pi^2} \frac{H^2}{M_p^2} \frac{1}{c_s \epsilon} \right|_{c_s k = a H}\,,\\
\Delta_t^2(k) &= \left. \frac{2}{\pi^2} \frac{H^2}{M_p^2} \right|_{k = a H}\,,\\
n_s(k) -1 &= \left.-2 \epsilon - \eta - \kappa\right|_{c_s k = a H}\,,\\
n_t(k) &= \left. -2 \epsilon\right|_{k = a H}\,.\label{obsgen}
\end{aligned}
\end{align}
In the canonical case, this reduces to
\begin{align}
\begin{aligned}
\Delta_s^2(k) &= \left. \frac{1}{8\pi^2} \frac{H^2}{M_p^2} \frac{1}{\epsilon} \right|_{k = a H}\,,\\
\Delta_t^2(k) &= \left. \frac{2}{\pi^2} \frac{H^2}{M_p^2} \right|_{k = a H}\,,\\
n_s(k) -1 &= \left.-2 \epsilon - \eta \right|_{k = a H}\,,\\
n_t(k) &= \left. -2 \epsilon\right|_{k = a H}\,.
\end{aligned}
\end{align}
\section{On-shell transformation of inflationary solutions} \label{findcanpot_sec}
In any theory, canonical or non-canonical, with scalar field $\chi$, the inflationary solution can be expressed as a function $X_{inf}(\chi)$. We want to obtain the solution $X_{inf}(\phi)$ from a canonically normalized Lagrangian with scalar field $\phi$ and potential $V_{can}(\phi)$. In other words, we want to find $V_{can}(\phi)$ such that the slow-roll inflationary solution of the action $P = X - V_{can}(\phi)$, $X_{inf}^{can}(\phi)$, has the same functional form as the inflationary solution $X_{inf}(\chi)$ coming from a general $P(X, \chi)$. In the following we describe how to construct $V_{can}(\phi)$.
In a canonically normalized theory that allows slow-roll inflation, the equations of motion are approximately
\begin{equation}
\dot \phi \simeq - \frac{V_{can}'(\phi)}{3 H(\phi)}\,,\qquad H^2(\phi) \simeq \frac{V_{can}(\phi)}{3}\,,\label{eomcan}
\end{equation}
where $'$ denotes the derivative with respect to $\phi$. Using $\dot \phi = - \sqrt{2 X}$ we obtain
\begin{eqnarray}
X & \simeq & \frac{1}{6} \frac{(V_{can}')^2}{V_{can}}\\
\sqrt{6 X}\,d \phi &=& \frac{1}{\sqrt{V_{can}}} \,d V_{can}\,,\label{dphidV}
\end{eqnarray}
where the first expression is a slow-roll approximation (see e.g. \cite{Franche:2009gk}). At this point we replace the approximation with an equal sign, since we are looking for a potential which satisfies the slow-roll conditions.
Now, going on-shell $X=X_{inf}(\chi)$ and hence $d\phi=d\chi$ we can integrate both sides of eq.~\eqref{dphidV} to solve for $V_{can}$:
\begin{align}
\begin{aligned}
&\int_{\phi_0}^\phi \sqrt{6X_{inf}(\chi)} \, d \chi = \int_{{V_0}_{can}}^V \frac{d V_{can}}{\sqrt{V_{can}}}\,,\\
\Rightarrow \quad &V_{can}(\phi) = \left( \sqrt{{V_0}_{can}} + \int_{\phi_0}^\phi \sqrt{\frac{3}{2}\,X_{inf}(\chi)} \, d \chi \right)^2\,,
\end{aligned}
\label{Vtransformed}
\end{align}
with ${V_0}_{can} = V_{can}(\phi_0)$. Eq.~\eqref{Vtransformed} can be seen as an on-shell transformation of the original, possibly non-canonical, theory to a canonical theory. It gives us the potential $V_{can}$ whose dynamics, described by eq.~\eqref{eomcan}, produce exactly the same trajectory in phase space as the original theory. In other words, given an inflationary trajectory in a theory with a general kinetic term, we have derived the form of the potential in a theory with a canonical kinetic term which gives rise to the same inflationary trajectory. We assume that the kinetic term is canonical and $X = X_{inf}^{can} = X_{inf}^{noncan}= X_{inf}$, and find the corresponding $V_{can}$. This is not a field transformation, since we simply match the inflationary trajectory in two different theories. Hence, for any property of the inflationary background solution the fields $\chi$ and $\phi$ are the same, while their general dynamics, governed respectively by the non-canonical and canonical Lagrangians, are
different. Note that we are free to choose ${V_0}_{can}$ (an integration constant) to satisfy the slow-roll conditions, since we are explicitly looking for a slow-roll solution in a canonical theory with the same inflationary trajectory as that arising from some given non-canonical theory.\footnote{Here we work on-shell, which is to say at the level of the background equation of motion, rather than performing an off-shell field transformation at the level of the action. Off-shell transformations between canonical and non-canonical theories are discussed in Appendix~\ref{sec:AppB}, where we show that canonical transformations can be used to transform between canonical and non-canonical theories in the case that the theory with a non-canonical kinetic term has a dominantly quadratic potential. This method thus appears to be somewhat limited.}
If the original theory is canonical with potential $V(\chi)$, the inflationary trajectory is given by~\cite{Franche:2009gk}
\begin{equation}
X_{inf}^{can}\,\, = \,\,\frac{(V')^2}{6\,V} \,\, = \,\, \frac{(V_{can}')^2}{6\,V_{can}}\, ,
\end{equation}
such that $V_{can}(\phi) = V(\chi)$.
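This identity can be illustrated with a short symbolic computation (our own example, assuming a quadratic potential $V(\chi)=\frac{1}{2}m^2\chi^2$): applying the construction of eq.~\eqref{Vtransformed} to an already-canonical theory returns the same potential.

```python
import sympy as sp

m, chi, phi, phi0 = sp.symbols('m chi phi phi0', positive=True)

V = m**2 * chi**2 / 2                              # original (canonical) potential
Xinf = sp.simplify(sp.diff(V, chi)**2 / (6 * V))   # slow-roll trajectory: m^2/3

V0 = V.subs(chi, phi0)                             # V_can(phi0) = V(phi0)
# eq. (Vtransformed): V_can = ( sqrt(V0) + int_{phi0}^{phi} sqrt(3 X_inf/2) dchi )^2
Vcan = (sp.sqrt(V0)
        + sp.integrate(sp.sqrt(sp.Rational(3, 2) * Xinf), (chi, phi0, phi)))**2
Vcan = sp.simplify(sp.expand(Vcan))
# Vcan == m^2 phi^2 / 2: the construction is the identity on canonical models
```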
\section{Comparison of observables}
In this section, we compare the number of efolds $N_e$, the scalar power spectrum $\Delta_s^2$, the tensor power spectrum $\Delta_t^2$ and the scalar spectral index $n_s$ of non-canonical and canonical inflation. We discuss under what conditions these observables will match for a non-canonical theory and a canonical theory whose potential is obtained via eq.~\eqref{Vtransformed} such that it describes the same dynamics as the non-canonical theory.
The natural time measure during inflation is the number of efolds $N_e$ that inflation produces in the time interval $[t_i,t_f]$. It is defined as
\begin{equation}
N_e = \int_{t_i}^{t_f} H(t) \,dt = \int_{\phi_{N_e}}^{\phi_{end}} \frac{H(\phi)}{\dot \phi} \,d \phi = \int^{\phi_{N_e}}_{\phi_{end}} \left(\frac{V(\phi)}{6 \, X_{inf}(\phi)}\right)^{1/2} \,d \phi\,, \label{Negen}
\end{equation}
where in the last equation we have used $H^2\simeq V/3$~\footnote{{In the following we restrict our analysis to non-canonical theories where the energy density is dominated by the potential, i.e., $H^2 \simeq V/3$.}} and $\dot \phi = -\sqrt{2X}$ on the inflationary trajectory in phase space and $\phi_{end}$ is the field value when inflation ends. In the case of a canonically normalized Lagrangian, this reduces to
\begin{equation}
N_e = \int_{\phi_{end}}^{\phi_{N_e}} \frac{V(\phi)}{V'(\phi)} \,d\phi = \int_{\phi_{end}}^{\phi_{N_e}} \frac{1}{\sqrt{2\epsilon}} \,d\phi\,.
\end{equation}
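As a simple worked example (ours, in $M_p=1$ units with an assumed quadratic potential), the canonical efold integral can be evaluated in closed form:

```python
import sympy as sp

m, phi, phi_end, phi_N = sp.symbols('m phi phi_end phi_N', positive=True)

V = m**2 * phi**2 / 2
# N_e = int V/V' dphi between the end of inflation and N_e efolds before it
Ne = sp.integrate(V / sp.diff(V, phi), (phi, phi_end, phi_N))
# Ne = (phi_N^2 - phi_end^2)/4; e.g. phi_N ~ 15.5 with phi_end ~ sqrt(2)
# gives roughly the ~60 efolds relevant for CMB scales
```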
The observables are evaluated as functions of the comoving momentum $k$. Due to the fact that the sound speed $c_s$ is generically different from one, the time of horizon crossing for scalar modes is different from the time of horizon crossing for tensor modes. In terms of efolds $N_e$, the different times of horizon crossing are determined via
\begin{eqnarray}
\begin{aligned}
&\text{scalar modes: }\qquad c_s k = a H \,\,&\Leftrightarrow&\,\, \ln k = (N_e -\ln c_s) + \ln H\,,\\
&\text{tensor modes: }\qquad k = a H \,\,&\Leftrightarrow&\,\, \ln k = N_e + \ln H\,.\label{cskaHandkaH}
\end{aligned}
\end{eqnarray}
Hence, the moment of horizon crossing of the scalar modes is earlier than that of the tensor modes and the correction is logarithmic in $c_s$ with $\ln c_s <0$ due to $c_s <1$.
The speed of sound is constrained from the non-observation of non-Gaussianities of the equilateral type to be $c_s \gtrsim 0.1$ such that the correction to horizon crossing is of the order of one efold. We will ignore this correction in the remainder of this section but will discuss its significance in section~\ref{csnot1_sec}. It will turn out that the correction is negligible for $\Delta_s^2$ and $\Delta_t^2$ while it is significant for $n_s$.
\subsection{Theories with a speed limit} \label{theoriesspeedlimit_sec}
Let us examine under which conditions the observables of non-canonical inflation and canonical inflation obtained as a function of $N_e$, as discussed in section~\ref{findcanpot_sec}, will agree. Let us make two assumptions:
\begin{itemize}
\item The non-canonical theory has a canonical branch where $V_{can} \simeq V$.
\item The non-canonical theory has a speed limit $R$ such that $X_{inf} \simeq \Lambda^4 R$ for $A\gg 1$.
\end{itemize}
We can perform the integration in eq.~\eqref{Vtransformed} analytically and obtain
\begin{equation}
V_{can}(\phi) = \frac{3}{2}R\,\Lambda^4(\phi-C)^2\,,
\end{equation}
with a constant $C$ for the canonical potential in the limit for $A\gg 1$. This implies
\begin{equation}
\epsilon_{can} = \frac{1}{2} \left(\frac{V_{can}'}{V_{can}} \right)^{2} = \frac{3 R \Lambda^4}{V_{can}(\phi)}\,.\label{epscanAlarge}
\end{equation}
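Both slow-roll parameters of this quadratic $V_{can}$ can be verified symbolically (a sketch of ours); in particular $\eta_{V,can}=\epsilon_{can}=3R\Lambda^4/V_{can}$, which enters when the spectral indices are evaluated later in this section:

```python
import sympy as sp

phi, C, R, Lam = sp.symbols('phi C R Lambda', positive=True)

Vcan = sp.Rational(3, 2) * R * Lam**4 * (phi - C)**2   # canonical potential for A >> 1
eps_can = sp.simplify(sp.diff(Vcan, phi)**2 / (2 * Vcan**2))
eta_Vcan = sp.simplify(sp.diff(Vcan, phi, 2) / Vcan)
# both slow-roll parameters equal 3 R Lam^4 / Vcan = 2/(phi - C)^2,
# confirming eq. (epscanAlarge)
```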
It was shown in~\cite{Franche:2009gk} that the first slow-roll parameter becomes
\begin{equation}
\epsilon = \sqrt{2R} \,\frac{\epsilon_{V}}{A}
\end{equation}
for $A\gg 1$. Using the definition of $A$, eq.~\eqref{Adefgen}, and eq.~\eqref{epscanAlarge}, the agreement of $\Delta_s^2$ and $\Delta_t^2$ as a function of $\phi$ can be phrased as conditions on the potentials and the speed of sound, i.e.
\begin{equation}
V_{can} \simeq V \qquad \text{and} \qquad c_s = \frac{\sqrt{2R}}{A} \qquad \text{for } A\gg1\,. \label{condcsVagree}
\end{equation}
Note that the first condition in eq.~\eqref{condcsVagree} is trivially satisfied in the canonical limit $A\ll1$. In the non-canonical limit $A\gg1$, the derivative $V'$ will generically have large values while $V_{can}'$ has to be small in order to support slow-roll inflation. Thus, at some value $A^*$ in the $A\gg1$ limit, $V$ and $V_{can}$ will not agree anymore. However, there can be an intermediate regime $A\in[1,A^*]$ with $V_{can} \simeq V$ and $V_{can}'\ll V'$. This intermediate regime can even serve to describe the complete phenomenologically interesting region if $c_s(A^*)<0.1$, such that only the region $A>A^*$ is excluded due to non-observation of equilateral non-Gaussianities.
The first condition in eq.~\eqref{condcsVagree} implies an agreement as a function of $N_e$ as well since according to eq.~\eqref{Negen}
\begin{align}
\begin{aligned}
&\text{canonical: } \qquad N_e=\int_{\phi_{end}}^{\phi_{N_e}} \frac{1}{\sqrt{2\epsilon_{can}}} \,d\phi = \int_{\phi_{end}}^{\phi_{N_e}} \left(\frac{V_{can}(\phi)}{6R\Lambda^4}\right)^{1/2} \,d\phi\,,\\
&\text{non-canonical: } \qquad N_e = \int^{\phi_{N_e}}_{\phi_{end}} \left(\frac{V(\phi)}{6 \, X_{inf}(\phi)}\right)^{1/2} \,d \phi = \int^{\phi_{N_e}}_{\phi_{end}} \left(\frac{V(\phi)}{6R\Lambda^4}\right)^{1/2} \,d \phi\,.
\end{aligned}
\end{align}
As far as the spectral indices $n_s$ and $n_t$ are concerned we do not find agreement in the limit $A\gg 1$ since
\begin{align}
\begin{aligned}
\text{canonical: } \qquad &n_s-1=-6\epsilon_{can} + 2\eta_{can} = -\frac{12R\Lambda^4}{V_{can}}\,,\\
&n_t=-2\epsilon_{can}=-\frac{6R\Lambda^4}{V_{can}}\,,\\
\text{non-canonical: } \qquad &n_s-1=-2 \epsilon - \eta - \kappa = \frac{\sqrt{2R}}{A}(-6 \epsilon_{V}+2\eta_{V})-\kappa\,,\\
&n_t=-2\epsilon=-\frac{2\sqrt{2R}}{A}\epsilon_{V}\,,\\
\end{aligned}
\end{align}
using $\eta=\frac{\sqrt{2R}}{A}\left(4\epsilon_{V}-2\eta_{V}\right)$ as was shown in~\cite{Franche:2009gk}. However, this does not exclude an agreement in an intermediate region $A\gtrsim1$. Furthermore, the scalar spectral index $n_s$ receives significant corrections from the fact that $c_s<1$ in non-canonical theories. This can improve the agreement, as we will show in section~\ref{csnot1_sec}.
Let us now investigate with some examples when the second condition in eq.~\eqref{condcsVagree} on the speed of sound $c_s$ can be fulfilled. First, we note that using eq.~\eqref{eqofminf} the speed of sound can be expressed as
\begin{equation}
c_s^2(A) = \frac{A\, \partial X_{inf} / \partial A}{2 X_{inf}}\,.\label{csXinfA}
\end{equation}
Hence, we need to know the functional dependence $X_{inf}(A)$ in order to decide whether the observables $\Delta_s^2$ and $\Delta_t^2$ of the canonical and non-canonical theory agree. For $p(X,\phi) = \Lambda^4 S(X)-V(\phi)$ as defined in eq.~\eqref{psum} this dependence is determined by the identity
\begin{equation}
2\frac{X}{\Lambda^4}\left(\sum_{n\geq 0} (n+1)\, c_n \left(\frac{X}{\Lambda^4} \right)^n \right)^2 = A^2\,,\label{XinfAinv}
\end{equation}
using the algebraic equation for the inflationary solution, eq.~\eqref{eqofminf}. To obtain $X_{inf}(A)$ we have to invert eq.~\eqref{XinfAinv}, which is impossible for most general coefficients $c_n$. However, we will discuss some closed form expressions for $p(X,\phi)$ in the following.
Consider the class of non-canonical Lagrangians defined by
\begin{equation}
p(X,\phi)= \Lambda^4 \left[1-\left(1-\frac{1}{a}\,\frac{X}{\Lambda^4} \right)^a \right] - V(\phi)\,,\label{classpa}
\end{equation}
with $0<a<1$ such that $\partial p / \partial X$ diverges at the radius of convergence $R_a = a$. This class of non-canonical Lagrangians includes the DBI action via the case $a=1/2$, i.e.
\begin{equation}
p(X,\phi)= \Lambda^4 \left[1-\left(1-2\,\frac{X}{\Lambda^4} \right)^{1/2} \right] - V(\phi)\,.
\end{equation}
Squaring the equation for the inflationary solution, eq.~\eqref{eqofminf} becomes
\begin{equation}
2 \frac{X}{\Lambda^4} = A^2 \left(1- \frac{1}{a}\,\frac{X}{\Lambda^4} \right)^{2-2a}\,.\label{infsola}
\end{equation}
If $2-2a$ is not an integer, one has to raise both sides of the equation to the power of the denominator of $2-2a$ (written as a reduced fraction) in order to solve for $X_{inf}(A)$. In fact, the only value of $0<a<1$ for which $2-2a$ is an integer is $a=1/2$, i.e. the DBI case, with solution
\begin{equation}
X_{inf} = \frac{\Lambda^4}{2}\,\frac{A^2}{1+A^2}\,.\label{XinfDBI}
\end{equation}
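One can confirm symbolically (our own check) that this expression solves eq.~\eqref{infsola} for $a=1/2$ and saturates the speed limit $X\to\Lambda^4/2$ as $A\to\infty$:

```python
import sympy as sp

A, Lam = sp.symbols('A Lambda', positive=True)

Xinf = Lam**4 / 2 * A**2 / (1 + A**2)    # proposed DBI solution, eq. (XinfDBI)
# eq. (infsola) with a = 1/2: 2 X/Lam^4 = A^2 (1 - 2 X/Lam^4)
residual = sp.simplify(2 * Xinf / Lam**4 - A**2 * (1 - 2 * Xinf / Lam**4))
# residual == 0; Xinf -> Lam^4/2 (the speed limit R_a Lam^4 with R_a = 1/2) as A -> oo
speed_limit = sp.limit(Xinf, A, sp.oo)
```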
For all $a\neq 1/2$, $X_{inf}$ will be some function of $A^n$ with integer $n>2$. For instance, for $a=3/4$ we have to square eq.~\eqref{infsola} to obtain the solution
\begin{equation}
X_{inf} = \frac{\Lambda^4}{6}\,A^4 \left(\sqrt{1+\frac{9}{A^4}} - 1 \right)\,.
\end{equation}
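Again, a short symbolic check (ours) confirms that this solves the squared form of eq.~\eqref{infsola} at $a=3/4$, with the speed limit $X\to\frac{3}{4}\Lambda^4=R_a\Lambda^4$:

```python
import sympy as sp

A, Lam = sp.symbols('A Lambda', positive=True)

Xinf = Lam**4 / 6 * A**4 * (sp.sqrt(1 + 9 / A**4) - 1)   # proposed a = 3/4 solution
# squared form of eq. (infsola) at a = 3/4: (2 X/Lam^4)^2 = A^4 (1 - (4/3) X/Lam^4)
residual = sp.simplify((2 * Xinf / Lam**4)**2
                       - A**4 * (1 - sp.Rational(4, 3) * Xinf / Lam**4))
# residual == 0; the speed limit X -> (3/4) Lam^4 is approached as A -> oo
limit_X = sp.limit(Xinf, A, sp.oo)
```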
Note that for $X_{inf}(A^n)$, $c_s^2$ is also a function of $A^n$ since
\begin{equation}
c_s^2 = \frac{n A^n \,X_{inf}'(A^n)}{2X_{inf}(A^n)}\,,
\end{equation}
where $'$ denotes the derivative with respect to $A^n$. Hence, the dominating term in $c_s$ will be of the order
\begin{equation}
c_s^2 \sim \frac{1}{A^n}
\end{equation}
up to an $\mathcal{O}(1)$ coefficient. For the DBI case, we find
\begin{equation}
c_s^2 = \frac{1}{1+A^2} \simeq \frac{1}{A^2} \qquad \text{for } A\gg1\,,\label{csDBI}
\end{equation}
which fulfills the criterion eq.~\eqref{condcsVagree} on $c_s$ for the agreement of the observables ($R=1/2$). However, this is the only member of the class of non-canonical theories defined by eq.~\eqref{classpa} for which the observables can agree: for all other values of $a$ one finds $c_s^2 \sim 1/A^n$ with $n>2$, so the condition on $c_s$ in eq.~\eqref{condcsVagree} cannot be satisfied. For example, for $a=3/4$ we find
\begin{equation}
c_s^2 = 1- \frac{1}{\sqrt{1+9\,A^{-4}}} \simeq \frac{9}{2}\, \frac{1}{A^4} \qquad \text{for } A\gg1\,.
\end{equation}
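Both speed-of-sound results can be re-derived from the respective trajectories via eq.~\eqref{csXinfA} (a verification sketch of ours, not part of the original analysis):

```python
import sympy as sp

A, Lam = sp.symbols('A Lambda', positive=True)

def cs2(Xinf):
    """Speed of sound from the trajectory, eq. (csXinfA): c_s^2 = A X_inf'(A)/(2 X_inf)."""
    return sp.simplify(A * sp.diff(Xinf, A) / (2 * Xinf))

# DBI, a = 1/2: X_inf from eq. (XinfDBI) gives c_s^2 = 1/(1 + A^2)
cs2_dbi = cs2(Lam**4 / 2 * A**2 / (1 + A**2))

# a = 3/4: c_s^2 = 1 - (1 + 9 A^{-4})^{-1/2} ~ (9/2) A^{-4} for A >> 1
cs2_34 = cs2(Lam**4 / 6 * A**4 * (sp.sqrt(1 + 9 / A**4) - 1))
```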
There are of course plenty of other models apart from those defined in eq.~\eqref{classpa} that fulfill the conditions of a canonical branch and a speed limit. The question of whether there could be other examples than DBI where the conditions on the potential and speed of sound eq.~\eqref{condcsVagree} for an agreement of $\Delta_s^2$ and $\Delta_t^2$ are fulfilled is hard to answer in full generality. Consider for example the class of functions
\begin{equation}
p(X,\phi) = X\left[1- a \left(\frac{X}{\Lambda^4} \right)^{b} \right]^{c}\,.
\end{equation}
For $a=4$, $b=4$ and $c=1/2$ we numerically find a solution $X_{inf}(A)$ of the equations of motion eq.~\eqref{eqofminf} with
\begin{equation}
c_s^2 \simeq \frac{\sqrt{2}}{A^2} = \frac{2 R}{A^2} \qquad \text{ for } A\gg1\,,
\end{equation}
such that the second condition in eq.~\eqref{condcsVagree} on the speed of sound is fulfilled. However, this solution suffers from the absence of a canonical limit $X_{inf} \sim A^2$ for $A<1$ and a violation of the null-energy condition $\partial p/ \partial X > 0$. Due to the lack of other working examples where the agreement conditions eq.~\eqref{condcsVagree} are matched, we suspect that the description in terms of a canonical theory may be special to the DBI case. We will study this case more explicitly in the following section. We note at this point that the matching of the background equation of motion does not necessarily mean that fluctuations around this background in the two different theories should match. One should thus not expect agreement of the inflationary observables in general, even if the inflationary trajectory is the same. This makes the agreement in the DBI case all the more remarkable. \footnote{We thank Bret Underwood for discussions on this point.}
\subsection{DBI inflation with an inflection point potential} \label{DBIinflectionexample_sec}
We now want to give an example of our general considerations in section~\ref{theoriesspeedlimit_sec}. We consider the DBI action together with an inflection point potential:
\begin{equation}
p(X,\phi)=-\frac{1}{f(\phi)} \left(\sqrt{1- 2f(\phi)X} -1 \right) - V(\phi)\,,\label{DBIinflp}
\end{equation}
with
\begin{equation}
V(\phi)= V_0 + \lambda (\phi-\phi_0)+\beta (\phi-\phi_0)^3\,.\label{DBIinflpointpotential}
\end{equation}
We fix the parameters of this theory to be
\begin{equation}
V_0 = 3.7 \cdot 10^{-16}\,,\quad \lambda = 1.13\cdot 10^{-20}\,,\quad \beta = 1.09\cdot 10^{-15}\,,\quad \phi_0 = 0.01\,,\quad f = 1.6\cdot 10^{21} \,.\label{DBIinflparam}
\end{equation}
These are the values that were considered in~\cite{Franche:2009gk}. In particular, the field-dependent warp factor has been set to a constant $f=\Lambda^{-4}$ which is justified if the range of field values that $\phi$ travels during inflation is small. The parameters in eq.~\eqref{DBIinflparam} have been chosen such that for a canonical kinetic term $p(X,\phi)=X-V$ the amplitude of the scalar fluctuations and the spectral index agree with observations, i.e. $\Delta_s^2 = 2.41 \cdot 10^{-9}$ and $n_s = 0.961$.
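For readers who want to reproduce the canonical benchmark, the following minimal sketch (ours; it assumes reduced Planck units $M_\mathrm{Pl}=1$ and evaluates at $\phi=\phi_0$ purely for illustration) computes the potential slow-roll parameters from the parameter set above:

```python
# Inflection point potential of the example and its potential slow-roll
# parameters eps_V = (V'/V)^2 / 2 and eta_V = V''/V (reduced Planck units).
V0, lam, beta, phi0 = 3.7e-16, 1.13e-20, 1.09e-15, 0.01

def V(phi):   return V0 + lam * (phi - phi0) + beta * (phi - phi0)**3
def dV(phi):  return lam + 3.0 * beta * (phi - phi0)**2
def d2V(phi): return 6.0 * beta * (phi - phi0)

phi = phi0                     # at the inflection point, where V'' vanishes
eps_V = 0.5 * (dV(phi) / V(phi))**2
eta_V = d2V(phi) / V(phi)
print(f"eps_V = {eps_V:.2e}, eta_V = {eta_V:.2e}")  # both << 1: slow roll holds
```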
Let us first see when eq.~\eqref{DBIinflp} is in the non-canonical regime by evaluating the `non-canonicalness' parameter $A$.
We find that for $\phi \lesssim 0.025$ we are in the canonical regime $A\leq1$, while for $\phi \gtrsim 0.025$ we enter the non-canonical regime $A>1$, see Figure~\ref{epsiloncompare_fig}.
The phase space trajectory (see also Figure~\ref{Xinfaplot_DBI_fig}) for eq.~\eqref{DBIinflp} is determined by eq.~\eqref{XinfDBI}.
\begin{figure}[h!]
\centering
\includegraphics[width= 0.5\linewidth]{Xinfaplot_DBI.pdf}
\caption{The phase space trajectory $X_{inf}(\phi)$ for the DBI action eq.~\eqref{DBIinflp} with the numerical values given in eq.~\eqref{DBIinflparam}. For large $\phi$ the trajectory approaches the limit $(2f)^{-1}$, see eq.~\eqref{XinfDBI}.}
\label{Xinfaplot_DBI_fig}
\end{figure}
This determines the potential $V_{can}(\phi)$ that resembles the trajectory from a canonical kinetic term via eq.~\eqref{Vtransformed}. We perform the integration numerically and show $V_{can}(\phi)$ compared to the original inflection point potential $V(\phi)$ in Figure~\ref{VcanvsVinf_fig}.
\begin{figure}[h!]
\vskip -3mm
\centering
\includegraphics[width= 0.49\linewidth]{VinfVcanplotsmallplot_DBI.pdf}
\includegraphics[width= 0.49\linewidth]{VinfVcanplotlargeplot_DBI.pdf}
\caption{Comparison of the inflection point potential $V\equiv V_{infl}$ of eq.~\eqref{DBIinflpointpotential} and the potential of the canonical theory $V_{can}(\phi)$ obtained via eq.~\eqref{Vtransformed} for $\phi \in [0,0.025]$ (left) and $\phi \in [0,0.12]$ (right).}
\label{VcanvsVinf_fig}
\end{figure}
We see that, as expected, $V_{can}$ agrees with $V$ in the canonical regime while it is flatter than $V$ in the non-canonical regime. To see that $V_{can}$ actually supports slow-roll inflation we check $\epsilon$ and $\eta$ as functions of $\phi$ in Figure~\ref{epsiloncompare_fig}.
\begin{figure}[h!]
\centering
\includegraphics[width= \linewidth]{epsilonetaA_DBI.pdf}
\caption{The `non-canonicalness' parameter $A$ (top left), the parameters $\epsilon_V$ and $\eta_V$ (top right) and the generalized slow-roll parameters $\epsilon$, $\eta$ and $\kappa$ (bottom left) for the DBI action eq.~\eqref{DBIinflp} with the numerical values of the parameters given in eq.~\eqref{DBIinflparam}. Also the slow roll parameters $\epsilon_{can}$ and $\eta_{can}$ of the canonical theory are shown (bottom right).}
\label{epsiloncompare_fig}
\end{figure}
\subsubsection*{Comparison of observables}
We compare the observables of the canonical and non-canonical theory in Figure~\ref{Necompare_fig}. The agreement in $\Delta_s$ and $\Delta_t$ at the level of $\sim 1 \%$ persists up to values $\phi<0.2$, which is roughly one order of magnitude beyond the value of $\phi$ where the non-canonical regime begins. So as discussed after eq.~\eqref{condcsVagree} there is indeed an intermediate regime where the observables agree even though $V_{can}$ is much flatter than $V$. Furthermore, since $c_s < 0.1$ for $\phi > 0.06$ the phenomenologically viable region is included in this intermediate regime.
The agreement of $n_s-1$ of the two theories as functions of $N_e$ holds only up to $\phi \leq 0.05$, see Figure~\ref{nscscorr_fig}. However, there are important corrections to $n_s-1$ induced by the fact that the speed of sound $c_s$ in the non-canonical theory is smaller than one. We will discuss these corrections in detail in section~\ref{csnot1_sec}.
Note that there is an additional upper bound on $c_s$ which has to be fulfilled in order to treat the inflationary quantum fluctuations perturbatively~\cite{Cheung:2007st,Leblond:2008gg,Shandera:2008ai}. If the speed of sound becomes too small the perturbations become strongly coupled and in particular the expressions for the inflationary observables eq.~\eqref{obsgen} are not valid. For DBI this can be expressed as a bound on the `non-canonicalness' parameter~\cite{Franche:2009gk}
\begin{equation}
A < \left( \frac{3\,\epsilon}{V}\right)^{1/5}\,.
\end{equation}
For our numerical example, this implies $A < \mathcal{O}(100)$ and hence $\phi \lesssim 0.2$. Note that this is exactly the region where we find agreement between the non-canonical and transformed canonical theory.
\begin{figure}[h!]
\centering
\includegraphics[width= \linewidth]{Obs2_DBI.pdf}
\caption{Comparison of the observables $\Delta_s^2$ (top right), $n_s$ (bottom left) and $\Delta_t^2$ (bottom right) of the non-canonical DBI and the transformed canonical theory. Since the number of efolds (top left) of the two theories agrees as a function of $\phi$, the agreement of the observables as a function of $\phi$ can be read as an agreement as a function of $N_e$.}
\label{Necompare_fig}
\end{figure}
We can prove the agreement of $\Delta_s^2$ and $\Delta_t^2$ in the whole intermediate region (note that in section~\ref{theoriesspeedlimit_sec} this was shown only in the limit $A\gg1$). Using the exact expression for the speed of sound in eq.~\eqref{csDBI}, together with eq.~\eqref{epscanAlarge}, we find
\begin{equation}
\epsilon_{can} = \frac{3\Lambda^4}{2 V_{can}} \frac{A^2}{1+A^2}\, ,
\end{equation}
and the exact expression for $\epsilon$ that was found in~\cite{Franche:2009gk} is
\begin{equation}
\epsilon = \frac{3}{2}\, \frac{A^2}{1+A^2}\, \frac{1}{1+\frac{V/\Lambda^4-1}{\sqrt{1+A^2}}}\,.
\end{equation}
Now the condition $c_s \epsilon = \epsilon_{can}$ can be rephrased as
\begin{equation}
\frac{V}{\Lambda^4} + \sqrt{1+A^2} - 1 = \frac{V_{can}}{\Lambda^4}\,.\label{agreementDBIanalytic}
\end{equation}
This condition will be fulfilled up to very large $A$ for $V\simeq V_{can}$, since $V\gg \Lambda^4$ as we demanded at the beginning of section~\ref{theoriesspeedlimit_sec}. For instance, in the numerical example described in eq.~\eqref{DBIinflparam} we have $V/\Lambda^4 \simeq 10^5$ such that eq.~\eqref{agreementDBIanalytic} would hold up to $A\lesssim 10^4$, assuming the condition $V\simeq V_{can}$ is not violated before $A$ reaches this value.
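The size of the mismatch term in eq.~\eqref{agreementDBIanalytic} can be made concrete with a two-line estimate (ours, using the quoted $V/\Lambda^4 \simeq 10^5$):

```python
import numpy as np

# Relative size of the term sqrt(1+A^2)-1 against V/Lambda^4 ~ 1e5:
# once this ratio becomes O(0.1), V_can starts to deviate from V.
V_over_Lambda4 = 1.0e5
for A in [1e2, 1e3, 1e4, 1e5]:
    mismatch = np.sqrt(1.0 + A**2) - 1.0
    print(f"A = {A:.0e}:  mismatch / (V/Lambda^4) = {mismatch / V_over_Lambda4:.1e}")
# the ratio reaches ~10% at A ~ 1e4, in line with the bound A <~ 1e4 above
```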
The agreement works out as well for the DBI action with a Coulomb type potential
\begin{equation}
V(\phi) = V_0 - \frac{T}{(\phi+\phi_0)^n}\,,
\end{equation}
instead of an inflection point potential. The non-canonical regime is accessed for $\phi < \phi_0$ while the canonical regime is given by $\phi > \phi_0$. Hence, the agreement with the transformed canonical theory is trivially found for $\phi < \phi_0$ and extends to the non-canonical regime until the condition $V \simeq V_{can}$ is violated.
\subsubsection*{Consistency relation}
Canonical and non-canonical theories are usually assumed to be distinguishable, not only because of the possibility of equilateral-type non-gaussianity in the latter, but because of the consistency relation between the tensor-to-scalar ratio $r = \Delta_t^2/\Delta_s^2$ and the tensor spectral index $n_t$. The relation in the noncanonical case has an additional factor of $c_s$ \cite{Garriga:1999vw}:
\begin{eqnarray*}
r_{can} & = & - 8 n_t;\\
r_{nc} & = & - 8 c_s n_t.
\end{eqnarray*}
Because of the appearance of $c_s$, a sufficiently precise measurement of the ratio $r/n_t$ would therefore resolve the degeneracy we have found. However, we currently have no bound on $n_t$ and only an upper bound on $r$: $r< 0.12$ \cite{Ade:2013uln}. With the current state of observational bounds, these models remain indistinguishable at the 2-point function level.
\section{Corrections from $c_s < 1$} \label{csnot1_sec}
As we discussed in eq.~\eqref{cskaHandkaH}, the observables have to be evaluated as functions of the comoving momentum $k$ which implies different times of horizon crossing for scalar and tensor modes respectively. Assuming $H_{can} \simeq H_{non-can}$ which actually follows from the condition $V_{can} \simeq V$, an agreement of tensor observables $T$ as functions of $\ln k$ is equivalent to
\begin{equation}
T_{can} (N_e) = T_{non-can} (N_e)\,,
\end{equation}
having used $N_e = \ln k - \ln H$.
For scalar observables $S$ however, we have to take into account that $N_e - \ln c_s = \ln k - \ln H$ in the non-canonical theory while $N_e = \ln k - \ln H$ in the canonical theory. Hence, we have to check for the equality
\begin{equation}
S_{can} (N_e) = S_{non-can} (N_e - \ln c_s)\,.
\end{equation}
Since the non-observation of equilateral non-Gaussianities implies $|\ln c_s| \ll N_e^t$, it is sufficient to expand $S_{non-can}$ to first order in $\ln c_s$, i.e.
\begin{equation}
S_{non-can} (N_e - \ln c_s) \simeq S_{non-can} (N_e) - S_{non-can}' (N_e)\, \ln c_s\,.
\end{equation}
In the following, we discuss this expansion for the scalar power spectrum $\Delta_s^2$ and the scalar spectral index $n_s$.
Using the definition of $\Delta_s^2$ in eq.~\eqref{obsgen}, we find
\begin{align}
\begin{aligned}
\frac{\partial \Delta_s^2}{\partial N_e} &= \Delta_s^2 \,\frac{\partial \ln \Delta_s^2}{\partial N_e} = \Delta_s^2 \left(2\frac{\partial \ln H}{\partial N_e} - \frac{\partial \ln \epsilon}{\partial N_e} - \frac{\partial \ln c_s}{\partial N_e}\right)\,,\\
&= \Delta_s^2 \cdot \left(-2\epsilon - \eta - \kappa \right) = \Delta_s^2 \cdot (n_s - 1)\,,
\end{aligned}
\end{align}
having used
\begin{equation}
\epsilon = - \frac{\partial \ln H}{\partial N_e}\,,\qquad \eta= \frac{\partial \ln \epsilon}{\partial N_e} \,,\qquad \kappa = \frac{\partial \ln c_s}{\partial N_e}\,.
\end{equation}
This implies
\begin{equation}
\Delta_s^2(N_e - \ln c_s) \simeq \Delta_s^2(N_e) \left[1- (n_s-1)\ln c_s \right]\,.
\end{equation}
Hence the correction that is induced by $\ln c_s$ is suppressed by the slow-roll parameters and we can approximate $\Delta_s^2(N_e - \ln c_s) \simeq \Delta_s^2(N_e)$.
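The suppression of this shift can be illustrated with a toy constant-tilt spectrum (our construction, with $n_s-1=-0.04$ and $c_s=0.1$ as representative values):

```python
import numpy as np

# For Delta_s^2(Ne) = A0 exp[(ns-1) Ne], the shift Ne -> Ne - ln(cs) gives an
# exact factor exp[-(ns-1) ln(cs)], to be compared with 1 - (ns-1) ln(cs).
ns_minus_1, cs = -0.04, 0.1
lncs = np.log(cs)
exact_factor = np.exp(-ns_minus_1 * lncs)
first_order  = 1.0 - ns_minus_1 * lncs
print(f"exact = {exact_factor:.4f}, first order = {first_order:.4f}")
# the correction itself is only ~10%, and the expansion error is sub-percent
```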
\begin{figure}[t!]
\centering
\includegraphics[width= 0.8\linewidth]{nscscorr.pdf}
\caption{The agreement $\delta_{n_s-1}$ of the canonical and non-canonical theory in $n_s$ defined in eq.~\eqref{nscscorr} as a function of $\phi$ with and without $c_s$ corrections. The fluctuations in the uncorrected $\delta_{n_s-1}$ for small $\phi$ are due to numerical inaccuracies when obtaining $V_{can}$ via numerical integration.}
\label{nscscorr_fig}
\end{figure}
For the spectral index $n_s$ the corrections induced by $\ln c_s$ are significant. Using the definition of $n_s$ in eq.~\eqref{obsgen} we find to first order in the slow-roll parameters
\begin{equation}
\frac{\partial n_s}{\partial N_e} = - \eta \frac{\partial \ln \eta}{\partial N_e}- \kappa \frac{\partial \ln \kappa}{\partial N_e}\,,
\end{equation}
which implies
\begin{equation}
n_s(N_e - \ln c_s) -1 \simeq -2 \epsilon - \eta - \kappa + \left( \eta \frac{\partial \ln \eta}{\partial N_e} + \kappa \frac{\partial \ln \kappa}{\partial N_e} \right) \ln c_s\,.\label{nscscorr}
\end{equation}
Note that $\partial \ln \eta / \partial N_e$ corresponds to third derivative terms of the potential $V$, while the $\partial \ln \kappa / \partial N_e$ term corresponds to second derivative terms of the speed of sound $c_s$.
We show in Figure~\ref{nscscorr_fig} numerically that the agreement in
\begin{equation}
\delta_{n_s-1} \equiv \frac{(n_s-1)_{can} - (n_s-1)_{non-can}}{(n_s-1)_{can}}\,,
\end{equation}
for the DBI example considered in section~\ref{DBIinflectionexample_sec} improves if one takes the corrections described in eq.~\eqref{nscscorr} into account. We find that the regime where $n_s-1$ of the canonical and non-canonical theory agree at the level of $1\%$ is increased from $\phi \leq 0.05$ to $\phi \leq 0.08$. Consequently, the phenomenologically interesting region where $c_s > 0.1$ given by $\phi \leq 0.06$ is included due to the inclusion of this correction.
\section{Conclusions}\label{sec:conc}
Cosmological inflation generates almost scale-invariant power spectra of super-horizon size curvature perturbations and tensor mode perturbations with Gaussian statistics. At the Gaussian level they can be described by just three observational quantities: the overall normalization $\Delta_s^2$ of the curvature perturbation power spectrum, its spectral tilt $n_s$, describing the (small) deviations from scale invariance expected in most models of inflation, and the fractional power $r$ in tensor modes.
In this context it is important to understand the structure of this very large model space, and look for degeneracies between large classes of inflationary models with respect to the three observable quantities. We have restricted our attention here to single-field models of inflation which partition into two large classes: models with a canonically normalized kinetic term $\frac12 (\partial_\mu\phi)^2$, and so-called non-canonical inflation models with Lagrangian ${\cal L}=p\left((\partial_\mu\phi)^2,\phi\right)$.
We have explored the degeneracies between canonical and non-canonical models of inflation with respect to the three observational quantities describing their predicted power spectra at the Gaussian level. While formally non-canonical 2-derivative models of the form ${\cal L}=f(\phi)(\partial_\mu\phi)^2-V(\phi)$ can always be transformed off-shell by a local field redefinition into a canonical model with a transformed scalar potential, this question is rather non-trivial in the presence of higher-power kinetic terms. We have elucidated the method of canonical transformations for transforming noncanonical kinetic terms into canonical kinetic terms which, even in $0+1$D, appears to be limited to the case where the noncanonical theory has a quadratic potential, see Appendix \ref{sec:AppB}.
As the inflationary behavior of a given model is described in terms of a generalized slow-roll attractor solution in phase space, we have therefore looked at possible on-shell transformations of a given non-canonical model on its inflationary attractor into an equivalent canonical slow-roll inflation model. We have constructed such on-shell transformations in general, so that given a non-canonical lagrangian which supports inflation, the potential required to reproduce the inflationary trajectory $X_{inf}(\phi)$ in a canonical theory can be found.
Furthermore, we checked for the matching of the 2-point function observables $\Delta_s^2$, $n_s$, and $r$. We find a full on-shell match for all 2-point function quantities precisely for the case of DBI inflation, while the matching fails for the DBI-inspired generalizations. This can be shown analytically and numerically. This may point to a special status of DBI inflation as a member of the non-canonical class in that it can be related to a canonical model of inflation with matching 2-point function observables.
Lastly, in the light of the much-awaited Planck data on nongaussianity, we would like to point out that given the data we have, there remains a large degree of degeneracy between inflationary models, which we have to bear in mind when interpreting that data. Since it is often claimed that canonical and noncanonical theories can be distinguished using the data, we feel this added degeneracy serves as a warning that this may not be the case, particularly if large NG is not observed. Unless data on nongaussianities improves drastically and reveals a non-negligible signal of equilateral nongaussianity, or the consistency relation between $r$ and $n_t$ can be accurately measured, one may never be able to distinguish between non-canonical inflation and slow-roll inflation in some canonical theory. In fact even with the observation of NG, this differentiation may not be possible: note that appreciable non-Gaussianity can arise in single scalar field theories of inflation with a canonical kinetic term, from features in the potential \cite{Gwyn:2012pb}, or from coupling of the inflaton to gauge quanta \cite{Barnaby:2010vf, Barnaby:2011qe}. It is possible that by adding additional couplings or features to the potential of the canonical theory one could match observables at the 3-point function level as well. We have not addressed the question of matching 3-point observables such as non-Gaussianity here, and leave investigation of this question for future work. Also the deeper reason for the agreement of DBI with its canonical transform at the level of the 2-point function is yet to be understood on a more fundamental level. This degeneracy thus opens many questions for future study.
\acknowledgments We thank Jan Louis, Raquel Ribeiro and especially Bret Underwood for valuable and enlightening discussions. This work was supported by the Impuls und Vernetzungsfond of the Helmholtz Association of German Research Centers under grant HZ-NG-603, the German Science Foundation (DFG) within the Collaborative Research Center 676 ``Particles, Strings and the Early Universe'' and the Research Training Group 1670. R.G. is grateful for support by the European Research Council via the Starting Grant numbered 256994. R.G. was also supported during the initial stages of this work by an SFB fellowship within the Collaborative Research Center 676 ``Particles, Strings and the Early Universe'' and would like to thank the theory groups at DESY and the University of Hamburg for their hospitality at this time.
% arXiv:1904.06211
\section{Introduction}
Denial of Service (DoS) attacks are a pressing problem in computer networks, and they become an even more complex problem in cloud computing environments. According to the Verisign report of 2018, these kinds of attacks directly impact business, financial, information technology and telecommunications services \cite{verisign-ddos}.
The migration of traditional services and applications to centralized cloud environments amplifies their vulnerability to disruption, in comparison to the impact of DoS attacks performed over conventional architectures. Thus, there is an urgent need to perform the identification and characterization of DoS traffic in cloud environments. In general, there are three ways to perform traffic classification: i) identification using TCP/UDP port numbers; ii) inspecting the contents of the network packets; and, finally, iii) applying Machine Learning (ML) techniques to network data sources \cite{ml-sdn-2016}. This paper advances the latter and proposes the use of ML algorithms to distinguish malicious traffic from legitimate client traffic in a cloud environment. ML algorithms have already been used for traffic identification and classification \cite{dl-survey}, including malicious traffic, to spot the occurrence of a DoS attack.
According to a recent survey \cite{Boutaba2018}, the traditional data sources used by ML algorithms are traces of traffic in the network (packets). It is well-known that DoS attacks systemically affect the usage of cloud computing resources. Differently from traditional approaches based on traffic traces, this work proposes the use of the telemetry from the cloud (such as resource usage from physical and virtual hosts) as the data source for ML algorithms.
Monitoring traffic at large scale in conventional networks usually involves costly and complex architectures, probe packets and other artifices. In contrast, clouds have native telemetry, i.e., data collection services that gather metrics from both physical and virtual hosts. Moreover, such a rich data set can be used with little or no overhead to the cloud. Therefore, the research hypothesis of this paper is that it is possible to improve the identification accuracy of DoS attacks using information on the usage of computing resources by the physical and virtual hosts allocated to the applications and services.
\section{Evaluation}
As an initial scenario for this proposal, we monitor the usage of computing resources of a cloud under a TCP SYN Flood attack. This attack has been chosen because it is simple to perform and reproduce. An experimental setup has been created with an OpenStack cloud (Rocky version), containing a web server using Apache2 and a virtual machine simulating legitimate web clients with Siege\footnote{https://www.joedog.org/siege-home/}. Finally, another host performs the role of malicious users and generates the SYN Flood attack with hping3\footnote{https://linux.die.net/man/8/hping3}.
\begin{figure*}[ht!]
\centering
\hfill
\subfigure[CPU\label{fig:memory}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-cpu.pdf}}
\hfill
\subfigure[Memory\label{fig:memory}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-memory.pdf}}
\hfill
\subfigure[HD Request read\label{fig:hd-read}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-disk-request-read.pdf}}
\hfill
\subfigure[HD Request write\label{fig:hd-write}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-disk-request-write.pdf}}
\hfill
\vspace{-0.2cm}
\subfigure[Interface Bytes in\label{fig:bytes-in}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-interface-bytes-in.pdf}}
\hfill
\subfigure[Interface Bytes out\label{fig:bytes-out}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-interface-bytes-out.pdf}}
\hfill
\subfigure[Interface packets in\label{fig:packets-in}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-interface-packets-in.pdf}}
\hfill
\subfigure[Interface packets out\label{fig:packets-out}]{
\includegraphics[width=0.22\textwidth,trim={0cm 0cm 1.5cm 1.3cm},clip]{figures/comparative-interface-packets-out.pdf}}
\hfill
\vspace{-0.4cm}
\caption{Comparison between the metrics in both scenarios.}
\label{fig:gra}
\vspace{-0.3cm}
\end{figure*}
Two test scenarios have been performed. In the first test there are only legitimate clients, generating user traffic at a rate that the web server can sustain. In the second test, however, along with the legitimate clients, the SYN Flood attack is executed. During both tests, the following metrics are monitored by the OpenStack cloud: CPU and memory usage, number of disk read requests, number of disk write requests, rate of incoming and outgoing Bytes in each network interface, and rate of incoming and outgoing packets in each network interface.
In both tests, the total duration is 30 minutes, with 10-second gaps at the beginning and at the end of the trace containing neither legitimate clients' nor attacker's traffic. Metrics are collected every 5 seconds, yielding a total of 360 samples. The attack is divided into three phases of 10 minutes each. In the first phase, the attack happens at a small rate, i.e., one packet every 300 milliseconds. In the second phase, 10 minutes later, the malicious packets are sent every 250 milliseconds. Finally, in the last phase, the attack is generated at the maximum rate that the attacking host can achieve.
Given the measurements of the two experiments, a Principal Component Analysis (PCA) was used to verify which metrics are relevant in the case of a TCP SYN Flood attack. A sample comparison of these metrics is shown in Figure~\ref{fig:gra}. In all the results, the blue lines represent the tests performed only in the client scenario and the red lines in the client and attacker scenario. According to these results the following metrics were chosen: CPU usage, number of disk write requests, rate of incoming and outgoing Bytes, rate of incoming and outgoing packets.
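A minimal sketch of such a PCA screening (entirely our own illustration with synthetic telemetry; the actual analysis used the 360 collected samples and the full metric set) could look as follows:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the telemetry matrix: one row per 5-second sample,
# one column per metric.  Three metrics track a hidden attack intensity,
# one (disk reads) is pure noise and should get a small PC1 loading.
rng = np.random.default_rng(1)
load = rng.uniform(0, 1, 360)                      # hidden attack intensity
X = np.column_stack([
    30 + 50 * load + rng.normal(0, 3, 360),        # cpu usage
    50 + rng.normal(0, 5, 360),                    # disk reads (uninformative)
    1e4 + 7e4 * load + rng.normal(0, 2e3, 360),    # bytes in
    100 + 5e3 * load + rng.normal(0, 50, 360),     # packets in
])
Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Xs)
print(pca.explained_variance_ratio_)
print(np.abs(pca.components_[0]))  # large loadings flag attack-sensitive metrics
```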
After the selection of the most important characteristics, the labeling of the traffic in the data is performed. These samples are used as a training dataset for two different ML algorithms: k-Nearest Neighbors (kNN) and a decision tree (CART). These simple algorithms were chosen to obtain preliminary results in order to verify our hypothesis that ML can exploit cloud telemetry. The data from this dataset has been labeled in two classes: ``attack'' and ``no attack''.
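A possible training and evaluation pipeline for this step, sketched with scikit-learn on synthetic stand-in telemetry (the feature columns and numbers below are our assumptions, not the paper's data):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(0)
n = 360  # one sample every 5 s over 30 min, as in the experiment
# columns: cpu, disk_writes, bytes_in, bytes_out, pkts_in, pkts_out
benign = rng.normal([20, 50, 1e4, 1e4, 100, 100], [5, 10, 2e3, 2e3, 20, 20], (n, 6))
attack = rng.normal([80, 60, 8e4, 2e4, 5000, 200], [10, 10, 1e4, 4e3, 800, 40], (n, 6))
X = np.vstack([benign, attack])
y = np.array([0] * n + [1] * n)       # 0 = "no attack", 1 = "attack"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
results = {}
for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("CART", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    y_hat = clf.predict(X_te)
    acc = accuracy_score(y_te, y_hat)
    p, r, f1, _ = precision_recall_fscore_support(y_te, y_hat, average="weighted")
    results[name] = acc
    print(f"{name}: acc={acc:.3f} prec={p:.3f} rec={r:.3f} f1={f1:.3f}")
```

On well-separated synthetic data like this both classifiers score near 100\%, so the meaningful evaluation is the one on the real mixed trace described below.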
In addition, another experiment was carried out, with a duration of 120 minutes, alternating between periods of traffic with only legitimate clients, only the attacker, and a combination of both legitimate clients and attackers. With the metrics collected in this experiment, the trained kNN and CART models try to identify the attack periods.
Table~\ref{tab:ml} presents the results (in percentage) of the Accuracy, Precision, Recall and F1-Score metrics obtained on the dataset using the kNN and CART algorithms, with the first line for kNN and the last line for the decision tree algorithm (CART). In these preliminary tests a high accuracy was verified, with a low incidence of false positives and false negatives, especially for the kNN algorithm.
\begin{table}
\vspace{-0.2cm}
\begin{center}
\caption{Result of ML algorithms}
\vspace{-0.4cm}
\label{tab:ml}
\small
\begin{tabular}{|c|c|c|c|c|}
\hline
ML Algorithms & Accuracy & Precision & Recall & F1-Score \\
\hline
kNN & 99.30 & 99.33 & 99.31 & 99.31 \\
\hline
CART & 91.07 & 95.53 & 91.08 & 92.38 \\
\hline
\end{tabular}
\end{center}
\vspace{-0.5cm}
\end{table}
\section{Conclusion}
Given the promising results, the proposal to use data on resource usage from the cloud computing environment is a feasible hypothesis.
As future work, we aim at performing the characterization of other types of DoS attacks, as well as combinations of these types. We intend not only to detect that an attack is occurring (anomaly detection), but also, through ML algorithms, to identify in real time what kind of attack is going on. Another future work is to explore different ML algorithms, including the use of deep learning.
% arXiv:1806.05954
\section{Introduction}
\label{sec:intro}
Plasma-based electron acceleration methods take advantage of wakefield excitation by either a relativistic electron bunch for plasma wakefield acceleration (PWFA) \cite{Chen1985, Rosenzweig1988} or an intense laser pulse for laser wakefield acceleration (LWFA) \cite{Tajima1979}. In both cases the longitudinal electric wakefield gradient is on the order of 100 GV/m, which is orders of magnitude higher than in conventional accelerators \cite{Kostyukov2015, Esarey2009}. If the laser pulse intensity reaches a certain threshold value, the wakefield breaks and a solitary electronic cavity, called the bubble, is formed \cite{Pukhov2002, Jansen2014, Lu2007}. It is a nearly spherical region with uniform accelerating fields that propagates at almost the speed of light $c$ \cite{Kostyukov2004}. A similar structure can be created if a dense relativistic electron beam excites a so-called ``blow-out''~\cite{Rosenzweig1991, Lotov2004}. In both cases the expelled electrons gather in a thin sheath on the border of the cavity, while those electrons which become trapped inside the wakefield form a dense witness bunch -- the so-called beam load.
The major feature that characterizes the bubble and the blow-out regime is the quasi-monoenergetic energy spectrum of the fast electrons inside the beam load. However, if the total charge of the beam load exceeds a certain threshold, the plasma cavity
structure is reshaped and the effective accelerating field is modified. This in turn affects final beam properties like maximum energy, energy spread and transverse emittance \cite{Golovanov2016b, Lu2007, Tzoufras2009, Couperus2017}. As a consequence, it is necessary to find a beam loading technique which gives maximum control over injection parameters like total charge, initial momenta and initial positions inside the wakefield. An especially effective method is the lateral or on-axis injection of pre-accelerated electron bunches.
When using the on-axis injection technique, the driver and the electron bunch propagate on the same axis. In the case of an intense laser driver this may lead to bunch scattering if the laser pulse passes through the electron bunch in vacuum. The resulting limitation of the number of trapped particles would then lower the quality of the accelerated bunch. To bring this problem under control, the side injection method has been proposed \cite{Luttikhof2009} and has already been applied to proton-driven wakefields \cite{Pukhov2011}.
In this work, we consider side injection of pre-accelerated electron bunches into a blow-out at a small angle $\vartheta$ (see Fig.\ref{pic:3d}). We study the dependence of the critical injection angle $\vartheta_\text{crit}$, for which at least 90\% of the injected particles are trapped, on the injection position, the initial electron energy and the radial plasma density profile in a deep channel. Our work is done in the scope of a semi-analytical blow-out model and compared to analytical approximations and particle-in-cell simulations. We show that external injection into blow-outs is less critical in deep channels than in homogeneous plasma. A comparison of our results from analytical predictions to test particle simulations in a quasi-static blow-out model and to PIC simulations indicates that in homogeneous plasma it is favorable to inject electron bunches on-axis while in channeled plasma it is possible to trap bunches which have been injected off-axis. However, this advantage is compensated by the need for a higher initial focussing of the injected electron beam.
In section \ref{sec:modell} we present the semi-analytical model and derive the equations which are solved numerically in section \ref{sec:simu}. In \ref{sec:ana} we derive an analytical description of $\vartheta_\text{crit}$ which is compared to test particle simulations in section \ref{sec:simu}. A further comparison to simulations with the fully electromagnetic version of the three-dimensional PIC code VLPL \cite{Pukhov1999,Pukhov2016} is given in section \ref{sec:compare}.
\section{The analytical model}
\label{sec:modell}
Besides analytical and semi-analytical bubble and blow-out models for homogeneous plasma \cite{Kostyukov2009, Kostyukov2010, Lu2006a, Yi2011, Yi2013, Pak2010, Zeng2012, Thomas2014}, there are also more general models for channeled plasmas \cite{Thomas2016, Golovanov2017b, Golovanov2016b, Golovanov2016c} describing the blow-out envelope and the fields in terms of the radial distance $r_b$ to the symmetry axis in a moving frame of reference. In this frame all fields and sources are quasi-static, which means that they depend solely on $\xi=ct-z$. In the following we normalize coordinates to the inverse electron wave number $k_p^{-1}=c/\omega_p$, velocities to the speed of light $c$, fields to $E_0 = m_ec\omega_p/e$ and time to the inverse electron plasma frequency $\omega_p^{-1} = \sqrt{m_e/(4\pi e^2 n_0)}$, where $m_e$ is the electron mass and $n_0$ is a certain density in the system to which the electron density $n_e(r)$ and the ion density $\rho_{ion}$ are normalized. After this transformation we further take the cylindrical symmetry of the system into account and write the fields within a large blow-out ($r\leq r_b$) as \cite{Thomas2016}
\begin{align}
B_\varphi = \frac{ S_{Ib}}{2}\frac{r}{r_b}\left( \frac{r_b^{\prime 2}}{r_b} - r_b^{\prime\prime} \right) - \frac{1}{2} s_i(r_b) r r_b^{\prime 2} - \frac{\Lambda(\xi)}{r}
\\
E_r = B_\varphi - \frac{S_I(r)}{r},
\qquad
E_z = - S_{Ib} \frac{r_b^{\prime}}{r_b}
\label{eq:EundB}.
\end{align}
Here, $s_{ib} = s_i(r_b)$ and $S_{Ib} = S_I(r_b)$ are abbreviations for the negative ion density $s_i(r) = -\rho_\text{ion}(r)$ and its weighted integral $S_I(r) = \int_0^rs_i(r^\prime)r^\prime \, dr^\prime$. The function $r_b(\xi)$ can be calculated from the differential equation
\begin{align}
& S_{Ib}r_br_b'' + s_{ib}r_b^2r_b'^2 + S_{Ib} = -2\Lambda(\xi), \label{ODE}
\end{align}
where $\Lambda(\xi) = -\int_0^{r_b}J_z(\xi,r')r'dr'$ is the weighted integral of the longitudinal current density created by
electron bunches within the blow-out. In figure \ref{pic:3d} the solution to this equation is shown for $\rho_{ion} \propto r^2$. The red line is the trajectory of a test particle starting with initial momentum $\vec{p}_0=p_{r}\vec{e}_r +p_{||}\vec{e}_z$ inside the witness bunch (small red cloud). The surrounding electron sheath is indicated by a black layer.
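For illustration, Eq.~(\ref{ODE}) can also be integrated numerically. The following sketch (with assumed example values: homogeneous plasma, $s_i = 1$ and hence $S_I(r) = r^2/2$, vanishing beam current $\Lambda = 0$ behind the driver, and a normalized maximum radius $r_{b,\text{max}} = 4$) reduces the ODE to $r_b r_b'' + 2 r_b'^2 + 1 = 0$ and follows the envelope from its widest point to closure:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Homogeneous plasma (s_i = 1, S_I = r^2/2) and Lambda = 0 behind the driver
# reduce the envelope ODE to  r_b r_b'' + 2 r_b'^2 + 1 = 0.
def rhs(xi, y):
    rb, drb = y
    return [drb, -(2.0 * drb**2 + 1.0) / rb]

def closure(xi, y):          # stop when the sheath nearly reaches the axis
    return y[0] - 0.05
closure.terminal = True

rb_max = 4.0                 # assumed normalized maximum blow-out radius
sol = solve_ivp(rhs, (0.0, 5.0 * rb_max), [rb_max, 0.0],
                events=closure, max_step=1e-2, rtol=1e-8)
xi_close = sol.t[-1]         # xi at which the blow-out closes
```

The derivative steepens strongly near closure ($r_b'^2 = (r_{b,\text{max}}^4/r_b^4 - 1)/2$ follows from the reduced ODE), so the back of the cavity is sharper than a sphere.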
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{3d_bild_rot_Impulse_beschriftet.png}
\caption{Semi-analytic model of a radially inhomogeneous $s_i(r) \propto r^2$ blow-out. On the left is the driving bunch, on the right the injected bunch. The trajectory of a test particle (red solid line) and its injection angle $\tan(\vartheta) = p_r/p_\parallel$ (in blue) are shown.}
\label{pic:3d}
\end{figure}
In this figure the driving electron bunch (large red cloud) moves along the negative $\xi$-axis.
The ODE (\ref{ODE}) is valid in the relativistic approximation $r_b/(S_{Ib}M_1(0))\ll\Delta\ll r_b$, where $\Delta$ is the thickness of the sheath and $M_0(x) = \int_x^\infty g(y) \,dy$ is the zeroth moment of an arbitrary function $g(y)$ describing the shape of the boundary \cite{Golovanov2016}. In our present work we follow \cite{Lu2006, Thomas2016}, where $g(y) = \Theta(1-y)$ was used to model a rectangular layer profile with the Heaviside step-function $\Theta(x)$. With this model for the electron layer we follow the argumentation of Golovanov et al. \cite{Golovanov2017b} and strongly simplify the fields inside the sheath by a lowest-order perturbation theory with respect to the relative sheath thickness $\epsilon = \Delta/r_b$.
In general, the first order wakefield potential for $r>r_b$ can be expressed as
\begin{align}
\Psi(\xi,r) = -S_{Ib}\epsilon\int_{R(r,r_b)}^\infty M_0(Y) \, dY,
\end{align}
where $R(r,r_b) = (r-r_b)/\Delta$. Since $g(y) = \Theta(1-y)$, we find
\begin{align}
\Psi(\xi,r) = & -\frac{S_{Ib}}{2\epsilon} \left(\zeta-\epsilon\right)^2,
\end{align}
with $\zeta = (r-r_b)/r_b$ and thus
\begin{align}
E_z= \frac{\partial \Psi}{\partial \xi}= -\frac{\epsilon - \zeta}{\epsilon}\frac{S_{Ib}r_b^\prime}{r_b},
\label{eq:ez_rand}
\end{align}
for $r>r_b$. This result is equivalent to Golovanov et al. \cite{Golovanov2017b}, where an exponentially decaying sheath source was assumed. The other two field components are $B_\varphi = - \partial_\xi A_r - \partial_r A_z$ and $E_r = - \partial_r \Psi + B_\varphi$, where
\begin{align}
\frac{\partial A_r}{\partial \xi} = \frac{1}{r} \int_0^r \frac{S_{Ib}} {\epsilon} \frac{r_b^{\prime 2}}{r_b^2} r^\prime dr^\prime
\label{darxi}
\end{align}
and
\begin{align}
\frac{\partial A_z}{\partial r} = - \frac{1}{r} \int_0^r J_z(\xi,r^\prime)r^\prime dr^\prime.
\label{dazr}
\end{align}
To calculate the plasma return current $J_s(\xi)$ we follow the sheath model presented in \cite{Thomas2016} and see that it is connected to the wakefield potential by Ampere's circuit law
\begin{align}
\oint_{\partial P} \vec{B} \cdot d\vec{l} = \int_P \left( \vec{J} + \frac{\partial \vec{E}}{\partial t} \right) \cdot d\vec{s}.
\end{align}
If we decide that the surface $P$ is the transverse plane with the unit normal vector $\hat{n} = \vec{e}_z$, integrating over the whole plane gives
\begin{align}
\int_{\mathbb{R}^2} \left( J_z + \frac{\partial E_z}{\partial t} \right) dx \, dy = 0.
\end{align}
We know that $\Psi$ and $J_z$ vanish outside the surface and we know that $\int_0^{r_b} J_z r \, dr = - \Lambda (\xi)$ connects the sheath current to the wakefield potential. Thus
\begin{align}
\int_0^{r_b+\Delta} J_z r \, dr = - \int_0^{r_b+\Delta} \frac{\partial^2 \Psi}{\partial \xi^2} r \, dr
\end{align}
and the plasma return current can be written as
\begin{align}
J_s(\xi) = & \frac{-2}{(r_b+\Delta)^2 - r_b^2} \left( \int_0^{r_b+\Delta} \frac{\partial^2 \Psi}{\partial \xi^2} r \, dr -\Lambda(\xi) \right).
\label{eq:js_jo}
\end{align}
Using the Lorenz gauge, the normalized Poisson equations and equation \eqref{eq:js_jo} for $r_b < r < r_b + \Delta$, and writing $X = (r_b+\Delta)/r_b = 1+\epsilon$, we get
\begin{align}
r B_\varphi = - \frac{X^2 - (1+\zeta)^2}{ X^2 - 1} \left( \int_0^{r_b} \frac{\partial^2 \Psi}{\partial \xi^2}r \, dr - \Lambda(\xi) \right)
\nonumber
\\
+ \int_{r_b}^r\frac{\partial^2\Psi}{\partial \xi^2}r \, dr + \frac{1 - (1+\zeta)^2}{X^2 - 1} \int_{r_b}^{r_b+\Delta} \frac{\partial^2\Psi}{\partial \xi^2}r \, dr.
\label{eq:lange_b_phi}
\end{align}
At our level of precision
\begin{align}
\frac{X^2 - (1+\zeta)^2}{ X^2 - 1} \approx & \frac{\epsilon - \zeta}{\epsilon},
\quad
\frac{\partial^2 \Psi}{\partial \xi^2} \approx \frac{S_{Ib}}{\epsilon}\frac{r_b^{\prime 2}}{r_b^2}
\end{align}
and the last two terms in equation \eqref{eq:lange_b_phi} vanish so that
\begin{align}
B_\varphi = \frac{\zeta - \epsilon}{\epsilon} \left(\frac{S_{Ib}}{2 \epsilon} r_b^{\prime 2} + \Lambda(\xi)\right)\frac{1}{r}. \label{Bphi}
\end{align}
Finally, with $\partial_r \Psi = -(\zeta-\epsilon)S_{Ib}/(\epsilon r_b)$, the radial component of the electric field can be expressed as
\begin{align}
E_r = B_\varphi + \frac{\zeta - \epsilon}{\epsilon} \frac{S_{Ib}}{r_b}. \label{Er}
\end{align}
The fields in (\ref{eq:EundB}) for $r\leq r_b$ and the fields in (\ref{eq:ez_rand}), (\ref{Bphi}) and (\ref{Er}) for $r>r_b$ form the basis of our blow-out model. In section \ref{sec:simu} we will solve the equations of motion of a test particle in these fields to find the maximum injection angle as a function of the particle's position, the particle's initial energy, the blow-out radius and the plasma density profile. In the following section we derive an approximation for the maximum injection angle assuming a spherical blow-out shape. A comparison between the predictions of this approximation, the simulations from section \ref{sec:simu} and PIC simulations is given in section \ref{sec:compare}.
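For reference, the piecewise field model can be collected in a single routine. This is only a sketch for the homogeneous case ($s_i = 1$, $S_I(r) = r^2/2$); the function and argument names are ours and not taken from any particular code:

```python
# Piecewise blow-out fields, homogeneous plasma assumed (s_i = 1, S_I = r^2/2).
# rb, drb, ddrb: r_b and its first two xi-derivatives; Lam: Lambda(xi);
# eps: relative sheath thickness Delta / r_b.
def fields(r, rb, drb, ddrb, Lam, eps):
    S_Ib = rb**2 / 2.0
    if r <= rb:                                   # inner fields, Eq. (1)
        Bphi = (S_Ib / 2.0) * (r / rb) * (drb**2 / rb - ddrb) \
               - 0.5 * r * drb**2 - Lam / r
        Er = Bphi - r / 2.0                       # S_I(r)/r = r/2 for s_i = 1
        Ez = -S_Ib * drb / rb
    else:                                         # sheath fields, Eqs. (4), (13), (14)
        zeta = (r - rb) / rb
        Bphi = (zeta - eps) / eps * (S_Ib / (2.0 * eps) * drb**2 + Lam) / r
        Er = Bphi + (zeta - eps) / eps * S_Ib / rb
        Ez = -(eps - zeta) / eps * S_Ib * drb / rb
    return Er, Bphi, Ez
```

Note that $E_z$ is independent of $r$ inside the cavity and decays linearly across the sheath, vanishing at its outer edge $\zeta = \epsilon$.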
\section{Estimation of the maximum injection angle}
\label{sec:ana}
In the following we estimate a trapping condition for test particles with initial radial momentum $p_{r,0}$, initial parallel momentum $p_{||,0}\gg p_{r,0}\gg1$, and initial position $(\xi_0,0)$ in terms of the injection angle $\vartheta =\arctan(p_{r,0} / p_{||,0})$. We derive a closed formula for the critical angle $\vartheta_\text{crit}=\max\vartheta$ which barely allows for trapping of a test electron.
For simplicity we assume that trapping occurs during a characteristic time which is approximately the first quarter of a betatron oscillation. In this time a particle reaches its maximal distance to the symmetry axis $r_\text{max}$ and has traveled a certain distance $\Delta\xi$ on the $\xi$-axis. In our analysis we assume that the blow-out is a perfect sphere with radius $R$, so that the simplest trapping condition is
\begin{align}
& r_\text{max}\leq \sqrt{R^2 -\xi_\text{max}^2}, && \xi_\text{max}=\xi_0+\Delta\xi \label{condition}.
\end{align}
The kinetic energy of the particle changes adiabatically slowly during the first quarter oscillation. Thus we conclude that the radial momentum is completely transferred into parallel momentum during this time.
To calculate $r_\text{max}$ we solve the equations of motion for a test particle within the blow-out. For a circular shaped electron sheath, in cylindrical coordinates and in a co-moving frame of reference they are
\begin{align}
\dot{\textbf{p}} = - \left( \textbf{E} + \dot{\textbf{r}}_\bot \times \textbf{B} \right), && \dot{\textbf{r}} = \frac{\textbf{p}_\bot}{\gamma} + \vec{e_z} \left( 1 -\frac{p_{||}}{\gamma} \right) \label{motion}
\end{align}
with $E_z=\xi/2$ and $E_r=-B_\varphi=r/4$ (also compare \cite{Kostyukov2004, Kostyukov2009}). Solving the radial motion for $p_\parallel/\gamma \approx 1$ we find $\ddot{r} = - r/(2\gamma)$, which describes the betatron motion of an electron with frequency $\omega=1/\sqrt{2\gamma}$. Using the boundary conditions $r(t=0) = 0$ and $\dot{r}(t=0) = p_{r,0}/\gamma$ the betatron motion in the first quarter can be approximated by
\begin{align}
r(t) = \sqrt{\frac{2}{\gamma}} p_{r,0} \sin{\left( \frac{t}{\sqrt{2\gamma}}\right)},
\label{eq:r}
\end{align}
which shows that the test particle reaches its maximum distance to the symmetry axis $r_\text{max} = \sqrt{2/\gamma}\, p_{r,0}$ at time $t_\text{max} = \sqrt{\gamma/2}\,\pi$.
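The amplitude $r_\text{max} = \sqrt{2/\gamma}\,p_{r,0}$ and timing $t_\text{max} = \sqrt{\gamma/2}\,\pi$ of Eq.~(\ref{eq:r}) can be checked by integrating $\ddot{r} = -r/(2\gamma)$ directly; the values $\gamma = 1000$ and $p_{r,0} = 5$ below are assumed examples only:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, pr0 = 1000.0, 5.0                 # assumed example values
t_max = np.sqrt(gamma / 2.0) * np.pi     # predicted quarter-period
r_max = np.sqrt(2.0 / gamma) * pr0       # predicted maximum excursion

# Linearized betatron equation r'' = -r/(2 gamma), r(0)=0, r'(0)=pr0/gamma
sol = solve_ivp(lambda t, y: [y[1], -y[0] / (2.0 * gamma)],
                (0.0, t_max), [0.0, pr0 / gamma],
                max_step=0.05, rtol=1e-10, atol=1e-12)
r_end, dr_end = sol.y[0][-1], sol.y[1][-1]
```

At $t = t_\text{max}$ the numerical orbit reaches $r \approx r_\text{max}$ with $\dot{r} \approx 0$, i.e. the turning point of the first quarter oscillation.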
To calculate how far the particle has moved on the $\xi$-axis during the time $t_\text{max}$, we solve the equations of motion for $\xi$. Since $p_{||,0}\gg p_{r,0}\gg1$ and $\gamma$ changes adiabatically slowly during the first quarter of the first betatron oscillation, the equations of motion (\ref{motion}) can be combined to
\begin{align}
\dot{\xi} & = 1 - \sqrt{ 1 - \dot{r}^2}.
\end{align}
Eq.\eqref{eq:r} and the boundary condition $\xi(t=0)=\xi_0$ then yield
\begin{align}
\xi(t) & = t - \sqrt{\frac{2}{\gamma}}p_\parallel\int_0^{t/\sqrt{2\gamma}} dt^{\prime} \sqrt{ 1 + \frac{p_{r,0}^2}{ p_{\parallel,0}^2 } \sin^2{\left( t^{\prime}\right)} } + \xi_0 \nonumber
\\
& = t - \sqrt{\frac{2}{\gamma}}p_\parallel E\left(\frac{t}{\sqrt{2 \gamma}} \middle| - \frac{p_{r,0}^2}{ p_{\parallel,0}^2 }\right) + \xi_0, \label{xi}
\end{align}
where
\begin{align}
E(\varphi|m) = \int_0^\varphi \sqrt{1-m\sin^2(\vartheta)}d\vartheta \label{incomplete}
\end{align}
is the incomplete elliptic integral of the second kind. If we substitute $t=t_\text{max}$ in Eq.(\ref{xi}) $E$ becomes a complete elliptic integral and we can make use of the relation $E(-m)=\sqrt{1+m}E(m/(1+m))$ so that
\begin{align}
\xi_\text{max} = \sqrt{\frac{\gamma}{2}}\pi - \sqrt{2\gamma} E\left(\frac{ p_{r,0}^2}{\gamma^2}\right) + \xi_0
\label{eq:xi_max_komplett}
\end{align}
with
\begin{align}
E(m) = \frac{\pi}{2}\left[ 1-\sum_{n=1}^\infty\left(\frac{(2n-1)!!}{(2n)!!}\right)^2 \frac{m^n}{2n-1} \right].
\end{align}
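The truncated series converges rapidly for the small arguments $m = p_{r,0}^2/\gamma^2 \ll 1$ relevant here. As a quick check, it can be compared against \texttt{scipy.special.ellipe}, which uses the same parameter convention $E(m)$:

```python
import numpy as np
from scipy.special import ellipe

def E_series(m, nmax=60):
    # Complete elliptic integral of the second kind via the truncated series
    total, coeff = 1.0, 1.0
    for n in range(1, nmax + 1):
        coeff *= (2 * n - 1) / (2 * n)        # builds (2n-1)!!/(2n)!!
        total -= coeff**2 * m**n / (2 * n - 1)
    return 0.5 * np.pi * total

errors = [abs(E_series(m) - ellipe(m)) for m in (1e-4, 1e-2, 0.1)]
```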
If we expand $E(p_{r,0}^2/\gamma^2)$ for $p_{r,0}^2/\gamma^2\approx 0$ up to the second non-vanishing order, Eq.\eqref{eq:xi_max_komplett} becomes
\begin{align}
\xi_\text{max} \approx \frac{\pi}{4}\frac{p_{r,0}^2}{\sqrt{2\gamma^3}} + \xi_0
\label{eq:xi(t_max)}
\end{align}
and a formula for the initial radial momentum $p_{r,0}$ can be calculated from the trapping condition (\ref{condition}). Since we assumed that $p_{r,0}\ll p_{||,0}$, we take only terms up to second order in $p_{r,0}/\gamma$ into account so that
\begin{align}
p_{r,0}^2 \leq \frac{R^2 - \xi_0^2 }{\left( 4\sqrt{2\gamma} +\pi\xi_0\right)}\sqrt{2\gamma}^3 .
\label{eq:pr0quadrat}
\end{align}
With Eq.(\ref{eq:pr0quadrat}) it is possible to express the critical angle in terms of the initial position of a test electron inside a spherical blow-out as
\begin{align}
\tan{(\vartheta_\text{crit})} = \left( \frac{\left( 4\sqrt{2\gamma} + \pi\xi_0\right) \sqrt{\gamma}} {(R^2 - \xi_0^2) \sqrt{2}^3} -1\right)^{-1/2}.
\label{eq:tancrit}
\end{align}
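For orientation, Eq.~(\ref{eq:tancrit}) is easy to evaluate numerically; the parameters below ($\gamma = 10^3$, i.e. an electron of roughly 500 MeV, and a normalized blow-out radius $R = 4$) are assumed example values only:

```python
import numpy as np

def tan_theta_crit(xi0, R, gamma):
    # Critical injection angle for a spherical blow-out, Eq. (tancrit)
    num = (4.0 * np.sqrt(2.0 * gamma) + np.pi * xi0) * np.sqrt(gamma)
    den = (R**2 - xi0**2) * 2.0**1.5
    return (num / den - 1.0) ** -0.5

# assumed example: on-axis injection at the blow-out center, xi0 = 0
tan_theta_crit(0.0, 4.0, 1000.0)
```

For these assumed numbers the on-axis value is of order $0.09$ and decreases towards the back of the cavity (larger $\xi_0$), consistent with the trend in Fig.~\ref{pic:ana_quasi_static_angle}.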
In figure \ref{pic:ana_quasi_static_angle} we compare this strongly simplified trapping condition to numerical solutions of the equations of motion (\ref{motion}) for a blow-out with an electron sheath calculated from Eq.(\ref{ODE}). For our simulations we assumed the fields (\ref{eq:EundB}) for $r\leq r_b$, as well as (\ref{eq:ez_rand}), (\ref{Bphi}) and (\ref{Er}) for $r>r_b$.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{ana_num_vergleich_tan_krit.png}
\caption{Analytical solution (red dotted line) and numerical solution (blue line) of the critical angle of injection $\tan{(\vartheta_\text{crit})}$ in dependence of the initial position $(\xi_0,0)$ of a test electron inside a blow-out.}
\label{pic:ana_quasi_static_angle}
\end{figure}
The initial positions of the test electrons are located on axis ($r_0=0$) in the accelerating blow-out phase. As figure \ref{pic:ana_quasi_static_angle} shows, both results are in good agreement as long as the blow-out can be approximated by a sphere i.e. near the blow-out center. In the rear part, i.e. for large $\xi$, strong deviations occur due to the non-spherical form of the electron sheath.
\section{Simulations of test particles}
\label{sec:simu}
At the end of the last section we compared an analytical estimate of the critical injection angle $\vartheta_\text{crit}$ to numerical simulations for the special case of electrons which are initially located on the symmetry axis of the blow-out. In this section we systematically study the dependence of $\vartheta_\text{crit}$ on the initial position anywhere in the blow-out, on the initial electron energy and on the plasma density profile. We discuss results from numerical simulations solving the equations of motion (\ref{motion}) for a blow-out with an electron sheath calculated from Eq.(\ref{ODE}). For our simulations we consider the blow-out model introduced in section \ref{sec:modell}, where the fields in (\ref{eq:EundB}) belong to the inner wakefield ($r\leq r_b$), and those in (\ref{eq:ez_rand}), (\ref{Bphi}) and (\ref{Er}) determine the electron motion inside the surrounding sheath, whose thickness $\Delta$ is approximately 1\% of the maximum blow-out radius $r_{b,\text{max}}$. To calculate the integral current $\Lambda(\xi)$ we assumed a cylindrically symmetric electron driver with parabolic density profile
\begin{align}
n_b = n_{b,0} \left(1-\left(\frac{2r}{\sigma_r}\right)^2 -\left(\frac{2(\xi-\xi_d)}{\sigma_\xi}\right)^2\right)
\label{eq:flat_top}
\end{align}
and $n_{b,0} = 8.8$, $\sigma_r = 4.8$ and $\sigma_\xi = 7.2$. The shift $\xi_d$ is chosen such that $r_b(\xi=0)=r_{b,\text{max}}$. The ion density of the background plasma is modeled in polynomial form
\begin{align}
\rho_\text{ion}(r) & = \alpha r^n,
\end{align}
where $\alpha$ is chosen such that the plasma density at distance $r_{b,\text{max}}$ from the symmetry axis is approximately $1$.
For our simulations we subdivide the accelerating part of the blow-out into small boxes (see Fig.\ref{pic:winkel}a and Fig.\ref{pic:winkel}b), each containing 2000 non-interacting test electrons. The largest angle which allows for trapping of at least 90 \% of the particles inside a box defines the critical injection angle for that particular box and can be compared to the analytical prediction from Eq.(\ref{eq:tancrit}). In figures \ref{pic:winkel}a and \ref{pic:winkel}b the box colors indicate the value of $\vartheta_\text{crit}$ which is high in the wakefield center (red boxes) and low near the border (blue boxes). The numbered ellipses mark injection positions of bunches we simulated with PIC. They are discussed in the following section.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{02_1000_10000.png}
\caption{Blow-out (gray sphere) driven by a dense particle bunch (ellipse on the left-hand side) and subdivision of the accelerating phase into boxes for a) homogeneous plasma and b) channeled plasma with $\rho_\text{ion}\propto r^2$. The color of a box represents the critical angle of injection for which at least 90\% of all electrons could be trapped. Numbered ellipses on the right-hand side mark injection positions of pre-accelerated electron bunches we simulated in PIC.}
\label{pic:winkel}
\end{figure*}
For homogeneous plasma ($\rho_\text{ion}=1$) Fig.~\ref{pic:winkel}a shows that the inner critical injection angles are twice as large as the outer ones but are still in a linear regime, i.e. in a range where $\tan(\vartheta_\text{crit})\approx \vartheta_\text{crit}$. Since $\vartheta_\text{crit}$ is maximal in the center of the blow-out and declines radially, it is favorable to inject pre-accelerated electron bunches on-axis into blow-outs in homogeneous plasma. For a blow-out in a channeled plasma with $\rho_\text{ion}\propto r^2$, Fig.~\ref{pic:winkel}b shows a slightly different result. As can be seen, $\vartheta_\text{crit}$ reaches its maximum in a much wider range in the central part of the wake. As a consequence, the distance to the symmetry axis is less important for trapping than in homogeneous plasmas. However, the color scaling in Fig.~\ref{pic:winkel}a and Fig.~\ref{pic:winkel}b shows that the maximum of $\vartheta_\text{crit}$ is 25\% smaller for channeled plasmas than for homogeneous plasmas. This is important because the possibility to trap bunches which have been injected off-axis in channeled plasma is compensated by the need for a stronger initial focusing.
In the next section we compare results from PIC simulations to the analytical predictions from section \ref{sec:ana} and the results from test particle simulations presented in this section.
\section{Comparison to PIC simulations}
\label{sec:compare}
\begin{figure}[b]
\centering
\includegraphics[width=0.47\textwidth]{vlpl_10e_total_transparent.png}
\caption{PIC simulation of a laterally injected electron beam (yellow-framed bunch) into a blow-out in a parabolic $\rho_\text{ion}\propto r^2$ plasma density channel. The longitudinal electric field $E_z$ accelerates the trapped electrons $n_b$. The wakefield is excited by a dense electron bunch (high-density region on the right-hand side) in the background electron density $n_e$.}
\label{pic:pic_quadratic}
\end{figure}
Our PIC simulations are carried out using the fully electromagnetic version of the three-dimensional PIC code VLPL \cite{Pukhov2016, Pukhov1999}. An exemplary simulation in a channeled plasma with parabolic density profile $\rho_\text{ion} \propto r^2$ is shown in Fig.~\ref{pic:pic_quadratic}. Here, the longitudinal electric field $E_z$, the density of the injected electron bunch $n_b$ and the electron plasma density $n_e$ are shown. Both the injected and the driving bunch are modeled by the parabolic density profile \eqref{eq:flat_top}. For the driver, $n_{b,0}$ is chosen such that its total charge is $1.7\,\text{nC}$ for $\sigma_r = 0.8 \lambda_p$ and $\sigma_\xi = 1.2 \lambda_p$. The injected electron bunch has a lower total charge of $10\,\text{pC}$ and a smaller spatial extension of $\sigma_r = 0.3 \lambda_p$ and $\sigma_\xi = 0.6 \lambda_p$. The electrons inside the driver have an energy of $E_d = 5$ GeV each, while the witness bunch consists of pre-accelerated electrons with an initial energy of $E_w = 500$ MeV. The injection angle $\tan(\vartheta)=p_r/p_{||}\approx 0.08$ is large enough to cause large-amplitude betatron oscillations of the injected beam and small enough to trap 99.8\% of the pre-accelerated electrons.
\begin{figure*}
\centering
\includegraphics[width=0.47\textwidth]{homogen_normalisiert_in_mev.png}
\includegraphics[width=0.47\textwidth]{quadratisch_in_mev.png}
\caption{Comparison of the critical injection angles $\vartheta_\text{crit}$ from PIC simulations, test particle simulations and analytical approximations for (a) homogeneous plasma and (b) channeled plasma with parabolic density profile $\rho_\text{ion}\propto r^2$. The blue (solid) and red (dashed) lines belong to analytical predictions from formula Eq.(\ref{eq:tancrit}) for position I and II in Fig.\ref{pic:winkel}a. The markers $\times$, $+$, $\ast$ and $\diamond$ are interpolated values for $\tan(\vartheta_\text{crit})$ from simulations of test particles in the semi-analytical model, while the enclosed symbols $\bigcirc$, $\square$, $\otimes$ and $\oplus$ are interpolated values from PIC simulations. The numbers I to IV refer to the initial positions in Fig.\ref{pic:winkel}.}
\label{pic:theta_vlg}
\end{figure*}
Other PIC simulations were performed for different initial energies of injected electrons, ranging from $50$ MeV to $5$ GeV. The initial witness bunch positions relative to the driver are labeled by the numbered positions in Fig.\ref{pic:winkel}a and \ref{pic:winkel}b. In both simulation series positions I and II are chosen such that the center of mass of the witness bunch is located on the blow-out symmetry axis. For homogeneous plasma (Fig.\ref{pic:winkel}a) the center of mass in position III has the same $\xi$-coordinate as in I and an additional radial shift close to the electron sheath. For the channeled plasma (Fig.\ref{pic:winkel}b) the center of mass in positions III and IV have the same $\xi$-coordinates as in I and II respectively. Similar to simulations for homogeneous plasma both bunches are radially shifted close to the electron sheath.
To compare the critical injection angles observed in PIC simulations to those discussed in section \ref{sec:simu} and to the analytical predictions from Eq.(\ref{eq:tancrit}), we plot this information in one cumulative diagram. The result for homogeneous background plasma is presented in Fig.~\ref{pic:theta_vlg}a, whereas Fig.~\ref{pic:theta_vlg}b is an evaluation for parabolically channeled plasma. In both diagrams $\tan(\vartheta_\text{crit})$ is plotted versus the initial energy of injected electrons. The blue (solid) line in Fig.~\ref{pic:theta_vlg}a belongs to the analytical prediction from Eq.(\ref{eq:tancrit}) for position I, while the red (dashed) line belongs to the analytical prediction for position II.
The markers from the semi-analytical model are interpolated values for $\tan(\vartheta_\text{crit})$ from the test particle simulations discussed in the previous section. A comparison to the analytical predictions shows that the markers representing simulations in homogeneous plasma with initial positions near the symmetry axis match the predictions for all energies, while the markers representing simulations with initial electron positions near the blow-out border lie well below the lines. In contrast, we observe that the markers for test particle simulations in channeled plasma are much closer to the predictions. This coincides with the discussion of Fig.~\ref{pic:winkel}a and \ref{pic:winkel}b in the previous section, so that we conclude that injection of pre-accelerated electron bunches into a blow-out in homogeneous plasma should preferably be done near the symmetry axis. In channeled plasma small deviations in the radial direction are possible as long as the injected bunch is well focused.
The critical angles we observe in PIC simulations are represented by enclosed markers in Fig.~\ref{pic:theta_vlg}a and Fig.~\ref{pic:theta_vlg}b. In both figures we clearly see that the $\bigcirc$ markers are close to the analytical predictions and the test particle simulations. This indicates that external injection of pre-accelerated electron bunches on-axis and near the bubble center is a promising method for both homogeneous and channeled plasma. For on-axis injection near the bubble back, the $\square$ markers in Fig.~\ref{pic:theta_vlg}b show a similar agreement with the simplified bubble model. In contrast, the $\square$ markers in Fig.~\ref{pic:theta_vlg}a lie well below the predictions of the simplified models but are still of the same order. From this we conclude that the trapping process for on-axis injection is modeled quite well by the simplified models and that for this kind of injection there is not much difference between deep plasma channels and homogeneous plasmas.
For off-axis injection of pre-accelerated electron beams (see PIC II and PIC III markers) our PIC simulations confirm what we already observed: in homogeneous plasma it is favorable to inject on-axis, while in channeled plasma with $\rho_\text{ion}\propto r^2$ the distance to the symmetry axis is less important for trapping. However, the possibility to trap bunches which have been injected off-axis in channeled plasma is compensated by the need for a stronger initial focusing of the witness beam.
\section{Conclusion}
\label{sec:conclusion}
In this work we study electron side injection into a blow-out in homogeneous and channeled plasma. We discuss the critical injection angle $\vartheta_\text{crit}$ for which at least 90\% of the injected particles are trapped and show that external injection into blow-outs is less critical in deep channels than in homogeneous plasma. A comparison of analytical predictions with test particle simulations in a quasi-static blow-out model and with PIC simulations shows that in homogeneous plasma it is favorable to inject on-axis, while in channeled plasma it is possible to trap bunches which have been injected off-axis. However, this advantage is compensated by the need for a stronger initial focusing of the injected electron beam.
\begin{acknowledgments}
This work has been supported in parts by DFG project PU-213 and BMBF project 05K2016.
\end{acknowledgments}
\bibliographystyle{prsty}
% arXiv:1806.06074
\section{Introduction} \label{Intro}
Carbon stars were historically thought to lie on the asymptotic giant branch (AGB), having dredged up triple-$\alpha$
burning products to their surface. This gives rise to distinct atmospheric chemistry when the C/O ratio exceeds unity,
revealing strong molecular absorption bands of $\rm{{C}}_{2}$, CH, and CN. Intriguingly, it is now known that dwarf
stars can exhibit the same distinct absorption features, thus indicating that there exists a carbon-enriched,
main-sequence stellar population.
The first dwarf carbon (dC) star discovered was G77-61, a $T_{\textrm{eff}} \approx 4100$\,K, high proper-motion star
that was at first assumed to be an M dwarf. A discrepancy was noticed when the $M_{\textrm{V}}=+10.08$\,mag derived
from parallax measurements was compared to the observed colour, with the star appearing far redder than expected.
Spectroscopy revealed strong molecular carbon features, typical of classical carbon giants, were responsible for the
red colour \citep{Dahn} and established the first known main-sequence star with distinct C$_2$ absorption bands.
Stellar evolution does not predict the synthesis of carbon in single stars until the AGB, resulting in two possible
explanations for the atmospheric chemistry of G77-61. The first hypothesis is that the star was formed in a
carbon-enriched environment, and the second is that mass was transferred from an evolved companion (now unseen;
\citealt{Dahn}). Radial velocity monitoring of G77-61 over a baseline of three years revealed variations consistent with
a circular orbit and 245\,d period \citep{Dearborn}. The mass function indicated the unseen component must have a
mass of at least $0.55M_{\sun}$, consistent with a white dwarf.
Carbon is produced via the triple-$\alpha$ process through helium shell burning on the AGB. This material is then mixed
into the envelope, raising the carbon abundance over time via a series of convection and pulsation episodes, with the
largest being the third dredge-up. This process can produce a C/O ratio well above unity for stars of intermediate mass
\citep{Iben}. If the star is part of a binary, then this carbon-rich material can be transferred to the companion via Roche
lobe overflow or efficient wind capture. However, the mass transfer mechanism is currently unconstrained for dC stars
owing to the lack of information on orbital separations.
Mass transfer of carbon-rich material from an AGB star is widespread amongst binary systems, with other well-known
examples being carbon-enhanced, metal-poor s-type (CEMP-s), Ba, and CH stars. CEMP-s stars are defined by their
relatively low metallicity, high carbon abundance, and high abundance of Ba ($\rm{[Fe/H]} < -2.0$, $\rm{[C/Fe]} > +1.0$,
$\rm{[Ba/Fe]} > +1.0$; \citealt{Aoki}), whereas Ba and CH stars are more loosely defined as containing strong absorption
features of Ba and CH respectively. These stars are typically giants, with Ba and CH stars found in the red clump and on
the main-sequence turn-off respectively \citep{Esc}, while CEMP-s stars populate the first-ascent red giant branch (RGB). All
three populations exhibit radial velocity variations consistent with high binary fractions and orbital periods typically on the
order of hundreds to thousands of days \citep{McClure,Jorissen,Hansen}.
In this paper the first results of a radial velocity monitoring survey are presented for $28$ dC stars. The results to date are
consistent with a binary fraction possibly as high as $100\%$, supporting a post-mass transfer origin. In Section \ref{target}
the target selection and observations are described, with the results given in Section \ref{results}. The results are discussed
in Section \ref{disc}, with the preliminary conclusions presented in Section \ref{conc}.
\section{Target selection and observations} \label{target}
Potential targets were compiled from the literature based on brightness, and selected to have a high likelihood of being
a main-sequence star. The bulk of potential targets were found within the Sloan Digital Sky Survey (SDSS), with the first
few hundred candidates identified via colour cuts \citep{Downes}. The largest sample of dC candidates to date was later
discovered via cross-correlation to template spectra in DR7 and DR8 \citep{Green}. Additional dC stars identified via colour
and proper motion were also included among potential targets \citep{Liebert,Totten,Lowrance,Rossi}, including the prototype
G77-61. From an initial pool of 73 potential targets brighter than $g=19.0$\,AB\,mag, roughly three dozen stars have been
observed at least once, and the 28 targets with two or more observations are listed in Table \ref{tab:RVtab}.
The observations were carried out using the ISIS spectrograph on the William Herschel Telescope at Roque de los Muchachos.
The data reported here were obtained between 2013 February and 2017 August, using the ISIS blue arm and the EEV12
detector with no dichroic. The R1200B grating and a $1 \arcsec$ slit were used to achieve a resolving power $R \approx 6400$
over the range $5000$ -- $6000\,\text{\normalfont\AA}$. This choice was motivated by the presence of several strong absorption features
for robust cross-correlation, with the region including the $\rm{C}_{2}$ Swan bands at $5165$ and $5636\,\text{\normalfont\AA}$, the
$\ion{Mg}{i}$ triplet at $5183\,\text{\normalfont\AA}$, and the $\ion{Na}{i}$ doublet at $5889\,\text{\normalfont\AA}$. The spectral coverage achieved
with ISIS and the above settings is plotted in Figure \ref{2m1622_spec}. Observations were taken at airmasses below 1.5, and
using individual exposure times between 300 and 1200\,s, with a goal signal-to-noise (S/N) ratio $> 10$. Three or more exposures
were taken for each target per observation, to increase S/N, and to minimise the effect of cosmic rays and detector artifacts.
To obtain reliable wavelength solutions, arc lamps were observed for $60$\,s immediately before and after each set of
science exposures thus correcting any flexure in the optics between pointings. The CuNe$+$CuAr lamps were chosen for
calibration owing to the high number of features in the designated spectral range. Individual arcs were then cross-identified
against a master arc frame that consisted of several exposures taken at the start of the night at zenith.
\begin{figure*}
\centering
\includegraphics[width=\textwidth, height=8cm]{2m1622_spec.pdf}
\caption{The combined ISIS spectrum of the target LP\,225-12 showing the full spectral range for the adopted instrumental setup,
including vignetting towards the end points. The red lines correspond to the \chem{C_2} Swan band heads, green corresponds to
the $\ion{Mg}{i}$ triplet, and blue corresponds to the $\ion{Na}{i}$ D doublet.}
\label{2m1622_spec}
\end{figure*}
\section{Reduction and Analysis} \label{results}
\subsection{Data reduction and binary fraction}
The spectral images were trimmed, bias subtracted, and flat fielded using standard routines in \textsc{iraf}\footnote{IRAF is
distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research
in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.}. Individual spectra were extracted
using the {\sc apall} package and then combined as a weighted mean to facilitate the removal of cosmic rays and increase S/N.
The wavelength calibration for each combined spectrum was obtained by taking the average wavelength solution for the arc
frames taken directly before and after the science exposure. Subsequently each target was flux calibrated using a suitable
standard star with few absorption lines over the observed spectral range.
Radial velocity variations were evaluated using the package {\sc fxcor} that performs a Fourier cross-correlation between an
input spectrum and a given template \citep{Tonry}. For each target, the highest S/N spectrum was chosen as a template
against which all other observations were cross-correlated \citep{Marsh}. The velocity residuals (in km\,s$^{-1}$) of the cross-correlation
were added in quadrature to the velocity uncertainty from the wavelength calibration, yielding a total error for each pair of spectra.
This total error depends on the S/N of the target, with higher S/N targets possessing errors $<1$\,km\,s$^{-1}$ and lower S/N targets
exhibiting errors up to $6$\,km\,s$^{-1}$, but typically less than a few km\,s$^{-1}$. Only relative radial velocities were determined in the present
study, as individual molecular transitions depend on gas temperature and pressure. Due to the presence of only a few atomic
lines in the spectra, absolute radial velocities will be derived and presented in a forthcoming paper.
A weighted $\chi^2$ test was used to determine whether the observed velocity variations could be due to the measurement errors alone \citep{Lucatello,
Stark}. The results of the weighted $\chi^2$ test are given in Table \ref{tab:RVtab}, where $p(\chi^2|\nu)$ is the probability of obtaining
the observed radial velocity variations by chance given the errors, and $\nu$ is the number of degrees of freedom (here
the number of observations). A small $p$ value thus indicates a low probability that the null hypothesis of constant radial
velocity can be accepted, implying that the system is likely binary.
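The test described in this section can be sketched in a few lines (a minimal illustration only: the weighting and the choice of $\nu$ follow the description above, while the function and variable names are ours):

```python
import numpy as np
from scipy.stats import chi2

def log10_p_chi2(v, sigma):
    """Weighted chi^2 test of the null hypothesis of constant radial
    velocity.  v are relative radial velocities (km/s) and sigma their
    total errors; returns log10 of the chance probability p(chi^2|nu)."""
    v, sigma = np.asarray(v, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    v_mean = np.sum(w * v) / np.sum(w)      # error-weighted mean velocity
    chisq = np.sum(w * (v - v_mean)**2)     # weighted scatter about the mean
    nu = len(v)                             # here: the number of observations
    p = chi2.sf(chisq, nu)                  # probability of chisq under the null
    return np.log10(max(p, 1e-300))         # floor avoids log10(0)

# A target is flagged as radial velocity variable when log10(p) < -2.
```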
A limit of log$_{10}$($p)<-2$ (less than 1 chance in 100) was used to reject the null hypothesis of constant radial
velocity. By this definition, $21$ of the $28$ targets are consistent with being radial velocity variable. Despite
prior knowledge that G77-61 and PG\,0824$+$288 are binary \citep{Dearborn,Heber}, both were included in the statistics
as the sample was selected for brightness and visibility. No radial velocity variations were detected in PG\,0824$+$288,
but this is unsurprising given that it was spatially resolved at $\approx 17$\,au (corresponding to an orbital period of
$\ga 60$\,yr; \citealt{Far}). The binary detection rate is therefore $75\%$.
\subsection{Simulations} \label{simulations}
To numerically constrain the survey sensitivity, a set of Monte Carlo simulations was run for a model population of dC stars.
The simulation was run for $10$\,$000$ stars with an initial binary fraction of $60\%$, consistent with field stars brighter than
$M_{ \rm {v}}\approx +8$\,mag within $25$\,pc \citep{Jahreiss}, while varying the orbital parameters. The number of radial
velocity measurements for each simulated star was chosen randomly between $2$ and $8$ to be consistent with the
sampling of the survey. Each set of modelled radial velocities was then analysed using the same method adopted for the
empirical data to derive a $p$ value, where errors were randomly assigned to each modelled radial velocity measurement
with magnitudes comparable to the total errors in the survey. To measure the sensitivity, the same log$_{10}$($p)<-2$
detection criterion was applied.
Orbital parameters for each simulated binary system were assigned randomly, with inclination angle $i$, argument of
pericenter $\omega$, and pericenter phase $\phi$, all being assigned from uniform distributions. The mass of each
simulated dC star was sampled from a Salpeter initial mass distribution \citep{Salpeter} with upper and lower bounds
set at $0.8 M_{\sun}$ and $0.2M_{\sun}$ respectively, based on masses of K and M dwarfs. Each white dwarf mass was
sampled from a normal distribution with $\langle M_{*} \rangle = 0.63 M_{\sun} \pm 0.14$ \citep{Tremblay}.
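The random assignment of parameters described above can be sketched as follows (an illustrative sketch: the distributions are those quoted in the text, while the inverse-CDF sampling of the Salpeter law and all names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_salpeter(n, m_lo=0.2, m_hi=0.8, alpha=2.35):
    """Draw dC star masses (Msun) from a Salpeter power law dN/dm ~ m^-alpha
    between m_lo and m_hi via inverse-CDF sampling."""
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (m_lo**a + u * (m_hi**a - m_lo**a))**(1.0 / a)

def sample_orbit_parameters(n):
    """Assign angles and component masses for n simulated binaries."""
    inc = rng.uniform(0.0, np.pi / 2, n)      # inclination angle i
    omega = rng.uniform(0.0, 2 * np.pi, n)    # argument of pericenter
    phi = rng.uniform(0.0, 1.0, n)            # pericenter phase
    m_dc = sample_salpeter(n)                 # dC star mass (Salpeter IMF)
    m_wd = rng.normal(0.63, 0.14, n)          # white dwarf mass (Tremblay)
    return inc, omega, phi, m_dc, m_wd
```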
Initially, the simulated orbital period distribution was set to $\textrm{log}_{10}{T\,\textrm{(d)}} = 4.8 \pm 2.3$ (based on
radial velocity studies of G dwarfs; \citealt{Duq}). The orbital eccentricity was assigned depending on the period, with
$e = 0$ for $T < 1000$\,d and eccentricities for longer periods drawn from a normal distribution with $\langle e \rangle = 0.4 \pm 0.2$. A
maximum orbital period of $10$\,$000$\,yr ($\textrm{log}_{10}{T\,\textrm{(d)}} = 6.6$) was adopted as systems with
excessively long periods would not be detected given the baseline of the survey. Sampling the aforementioned orbital
period distribution with a $60\%$ binary fraction resulted in a simulated detection rate of just $16\%$. Increasing
the binary fraction to $100\%$ raised the simulated detection rate to $27\%$, still far below the observational detection
rate of $75\%$. Thus the orbital period distribution of nearby G dwarfs appears to be highly discrepant with the dC star
population.
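The period and eccentricity assignment above can be sketched as follows (our own illustration; whether over-long periods are redrawn or discarded is not specified in the text, so redrawing is an assumption here):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_periods(n, mu=4.8, sigma=2.3, logT_max=6.6):
    """Draw orbital periods (days) from log10 T ~ N(mu, sigma),
    redrawing any value above the 10^4 yr (logT_max = 6.6) cutoff."""
    logT = rng.normal(mu, sigma, n)
    while np.any(logT > logT_max):
        bad = logT > logT_max
        logT[bad] = rng.normal(mu, sigma, bad.sum())
    return 10.0**logT

def sample_eccentricities(period_d):
    """e = 0 for T < 1000 d; longer periods get e ~ N(0.4, 0.2),
    clipped to the physically allowed range [0, 1)."""
    e = np.where(period_d < 1000.0, 0.0,
                 rng.normal(0.4, 0.2, len(period_d)))
    return np.clip(e, 0.0, 0.99)
```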
The Monte Carlo simulations were rerun using shorter orbital period distributions to better replicate the observed detection
rate. In order to increase the simulated detection rate, circular orbits were adopted as these yielded more detections than
eccentric orbits (by $\approx 2\%$), and only populations with $100\%$ binary fractions were simulated. A distribution of
$\textrm{log}_{10}{T\,\textrm{(d)}} = 3.6 \pm 1.8$ was tried as this represents a reasonable approximation of the periods
observed for Ba, CH, and CEMP-s stars \citep{Jorissen,Jorissen1,Hansen}. This yielded a simulated detection rate of
$37\%$ and still falls significantly short of the detected rate.
Shorter orbital period distributions were then explored by decreasing the mean of the log-normal distribution in steps of
$0.2$\,dex. The standard deviation of the distribution was initially set at one half the value of the mean, and then decreased
in steps of $0.1$\,dex. This method was carried out until the simulated detection rate converged closest to that observed.
While there is no unique period distribution that best approximates the detected binary fraction, a good representative
distribution is $\textrm{log}_{10}{T\,\textrm{(d)}} = 2.0 \pm 0.8$, and yields a simulated detection rate of $72\%$. Most
targets show $\Delta v_{\textrm{rad,max}}$ on the order of $20$ -- $40$\,km\,s$^{-1}$, thus consistent with orbital periods of
hundreds to thousands of days. Therefore, the simulations and the observed changes in radial velocities
provide consistent estimates of the orbital period range.
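The consistency quoted above can be checked with the standard spectroscopic binary semi-amplitude formula (a textbook relation, not taken from the paper; the masses chosen below are illustrative):

```python
import numpy as np

G_MSUN = 1.327e20   # G * Msun in m^3 s^-2
DAY = 86400.0       # seconds per day

def semi_amplitude(period_d, m1, m2, inc=np.pi / 2, e=0.0):
    """Radial velocity semi-amplitude K (km/s) of star 1 orbited by a
    companion of mass m2; masses in Msun, period in days."""
    k = (2.0 * np.pi * G_MSUN / (period_d * DAY))**(1.0 / 3.0)
    return k * m2 * np.sin(inc) / ((m1 + m2)**(2.0 / 3.0)
                                   * np.sqrt(1.0 - e**2)) / 1e3

# An edge-on circular 1000 d orbit of a 0.4 Msun dC star with a
# 0.63 Msun white dwarf gives K ~ 13 km/s, i.e. a peak-to-peak
# variation of up to 2K ~ 26 km/s, within the observed 20-40 km/s range.
K = semi_amplitude(1000.0, 0.4, 0.63)
```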
If the dC star population is $100\%$ binary, then the targets that do not show significant radial velocity variations are
probably either long-period binaries or low-inclination systems that would yield small velocity semi-amplitudes.
Because two stars (SDSS\,J112633 and SDSS\,J120024) were only observed twice each, and with a baseline of less
than five days each, there is a wide range of orbital periods that could not be detected for these targets. The remaining
four null-detections (SDSS\,J074257, SDSS\,J110458, SDSS\,J130744, and SDSS\,J145725) have a baseline of over
three years, possibly suggesting inclinations that are close to pole-on. Additional data are required to determine the
exact orbital parameters, because the bulk of targets have an insufficient number of observations.
\begin{table*}
\centering
\caption{Dwarf carbon stars observed during the radial velocity monitoring survey with at least
two observations. Column six shows the $p$ value of each target expressed logarithmically,
with an arbitrary lower limit placed at log$_{10}$($p) = -6.0$, corresponding to a $10^{-6}$ chance
that the observed variations are spurious.
The seventh column gives the maximum radial velocity difference between
any two sets of observations, and the eighth column divides this by the largest error in
relative radial velocity, giving a measure of the statistical significance.
}
\label{tab:RVtab}
\begin{tabular}{lccccrrr}
\hline
Name & RA & Dec & Epochs & Baseline & log$_{10}$($p$) & $\Delta v_{\textrm{rad,max}}$ & Significance \\ & & & &(d)& &(km\,s$^{-1}$)& ($\upsigma$)\\
\hline
LHS\,1075 & 00 26 00.48 & $-$19 18 52.0 & 6 & 1469 & <$-6.0$ & 26.0 & 5.3 \\
SDSS\,J012028 & 01 20 28.56 & $-$08 36 30.9 & 5 & 1469 & $-3.0$ & 22.4 & 5.1 \\
SDSS\,J012150 & 01 21 50.42 & +01 13 01.4 & 5 & 1469 & <$-6.0$ & 71.2 & 15.3 \\
SDSS\,J013007 & 01 30 07.13 & +00 26 35.3 & 5 & 1469 & $-3.6$ & 28.3 & 3.8 \\
SDSS\,J022304 & 02 23 04.43 & +00 45 01.3 & 5 & 1652 & $-2.5$ & 20.2 & 4.8 \\
G77-61$^{\rm a}$ & 03 32 38.08 & +01 58 00.0 & 8 & 1651 & <$-6.0$ & 28.9 & 4.4 \\
SDSS\,J074257$^{\rm a}$ & 07 42 57.17 & +46 59 17.9 & 5 & 1037 & $-1.0$ & 7.2 & 2.9 \\
SDSS\,J081157 & 08 11 57.14 & +14 35 33.0 & 6 & 1059 & $-4.6$ & 23.5 & 3.9 \\
PG\,0824+288 & 08 27 05.09 & +28 44 02.4 & 5 & 1059 & $-0.1$ & 8.3 & 1.1 \\
C\,0930-00 & 09 33 24.64 & $-$00 31 44.5 & 6 & 1061 & $-3.9$ & 22.4 & 5.5 \\
SDSS\,J093334 & 09 33 34.14 & +06 48 12.6 & 4 & 677 & $-2.6$ & 16.9 & 5.0 \\
SDSS\,J095545 & 09 55 45.84 & +44 36 40.4 & 4 & 1061 & $-5.3$ & 32.6 & 5.6 \\
SDSS\,J101548 & 10 15 48.90 & +09 46 49.7 & 2 & 388 & $-3.3$ & 26.2 & 6.8\\
SDSS\,J110458 & 11 04 58.97 & +27 43 11.8 & 3 & 1059 & $-0.0$ & 0.9 & 0.1 \\
KA\,2 & 11 19 03.90 & $-$16 44 49.3 & 2 & 5 & <$-6.0$ & 70.5 & 20.5\\
SDSS\,J112633 & 11 26 33.94 & +04 41 37.7 & 2 & 5 & $-0.4$ & 11.0 & 1.4 \\
SDSS\,J120024 & 12 00 24.09 & +38 17 20.3 & 2 & 1 & $-0.3$ & 4.1 & 1.1 \\
CLS\,50 & 12 20 00.77 & +36 48 01.7 & 4 & 763 & <$-6.0$ & 40.3 & 7.9\\
SDSS\,J130744 & 13 07 44.53 & +60 09 03.7 & 3 & 1651 & $-0.7$ & 9.8 & 1.8 \\
SBS\,1310+561 & 13 12 42.51 & +55 55 54.6 & 6 & 1651 & <$-6.0$ & 32.3 & 7.2\\
SDSS\,J145725 & 14 57 25.86 & +23 41 25.4 & 5 & 1469 & $-1.3$ & 20.1 & 4.9\\
CBS\,311 & 15 19 05.99 & +50 07 02.8 & 4 & 579 & <$-6.0$ & 46.8 & 4.2 \\
CLS\,96 & 15 52 37.35 & +29 27 59.1 & 5 & 1469 & $-4.7$ & 11.0 & 5.9 \\
LP\,225-12$^{\rm a}$ & 16 22 32.86 & +42 37 54.2 & 6 & 1469 & <$-6.0$ & 30.8 & 3.9 \\
SDSS\,J184735 & 18 47 35.67 & +40 59 44.1 & 4 & 1469 & <$-6.0$ & 25.2 & 6.7 \\
LSR\,J2105+2514 & 21 05 16.54 & +25 14 48.6 & 6 & 1469 & <$-6.0$ & 122.4 & 27.9\\
LP\,758-43$^{\rm a}$ & 21 49 37.84 & $-$11 38 28.5 & 6 & 1469 & <$-6.0$ & 29.2 & 3.3 \\
SDSS\,J235443 & 23 54 43.13 & +36 29 07.1 & 5 & 1030 & $-4.7$ & 34.2 & 6.7\\
\hline
\end{tabular}
\begin{tablenotes}
\item $^{\rm a}$ These four targets possess orbital periods constrained in the literature.
\end{tablenotes}
\end{table*}
\section{Discussion} \label{disc}
\subsection{Binary fraction and orbital evolution}
The observed variations in the relative radial velocities of dC stars are consistent with the hypothesis that carbon-enhanced
material was transferred from an evolved companion. Within the observed sample, $75\%$ show clear variations and
are consistent with a $100\%$ binary population. The simulated period distributions that best match the observational
detection rate suggest that either Roche Lobe overflow or efficient wind capture may be responsible for the observed
pollution in dC stars.
To date, five bona fide dC stars have measured orbital periods. SDSS\,J125017 was shown to have $2.9$\,d periodic
variability in its lightcurve, and this was confirmed to correspond to its orbital period via radial velocity measurements
\citep{Margon}. Interestingly, in their search of $\approx 1000$ dC star lightcurves using Palomar Transient Factory
data, \citet{Margon} found that only SDSS\,J125017 exhibited variations consistent with a short period binary. This may
indicate that the majority of dC stars do not have short orbital periods and are hence unlikely to be post-common
envelope systems. However, the sensitivity to binary-induced photometric variability has yet to be established for
dC stars.
Three further dC binary orbital periods have recently been determined astrometrically at the U.S. Naval Observatory
(USNO) and lie in the range $\approx400$--$4000$\,d \citep{Harris}. This period range is broadly consistent with the
distribution adopted in Section \ref{simulations} based on binary simulations that are well-matched to the detected
fraction of stars with radial velocity changes. Importantly, all three of these targets have at least five radial velocity
observations in this study, and their $\Delta v_{\textrm{rad,max}}$ values in Table \ref{tab:RVtab} are consistent
with Keplerian orbits at their determined inclinations. Thus, the USNO astrometry strengthens the argument
that dC stars typically have orbital periods on the order of hundreds to thousands of days.
Interestingly, together with G77-61, which possesses a period of $245$\,d \citep{Dearborn}, these four dC stars
appear to lie in a ``no man's land'' for low-mass, unevolved companions to white dwarfs, as theory predicts
that any secondary should spiral in or out of this region owing to the effects of mass loss during the asymptotic
giant branch (AGB) phase \citep{Willems}. An expanding AGB star can overfill its Roche lobe and create a common envelope that causes
an initially close companion to inspiral due to friction \citep{Ivanova}. In contrast, if the initial binary separation
is sufficiently large, then as mass is lost the orbital separation will increase to conserve angular momentum.
These theoretical predictions are strongly confirmed among commonly occurring white dwarf-M dwarf binary
systems, where there is a clear dearth of pairs with orbital separations in the region $\sim 1$ -- $10$\,au as
established via space-based imaging in the optical \citep{Far}. In contrast, there are myriad short-period ($\la
10$\,d; \citealt{Reb}), post-common envelope systems, and long-period ($\ga 50$\,yr) widely separated white
dwarf-M dwarf systems detected by common proper motion \citep{far05}. Comparing the spectral types of M
dwarfs in post-common envelope systems to those in widely separated binaries reveals no obvious differences
\citep{Schreiber}, therefore suggesting that neither process is capable of efficient mass transfer. This is further
supported by the detection of just one dC star among a sample of 1600 white dwarf-red dwarf binaries identified
from the SDSS via template matching and identifying excess red fluxes via optical and near-infrared survey data
\citep{Reb1}. Though dC stars would not be found via template matching to a white dwarf-red dwarf composite
spectrum, it would be expected that binaries identified via a red excess to the white dwarf spectrum could
include dC stars. Their rare nature as companions to known white dwarfs is consistent with the fact that
only 9 of 1211 SDSS dC stars possess composite spectra.
The results to date from this study indicate that these intermediate orbital periods may be typical for dC stars,
and thus are likely a key characteristic tied to their origin. If indeed most dC stars possess periods of hundreds
to thousands of days, then they may be similar to those found for Ba, CH, and CEMP-s stars \citep{Pols,Izzard}.
Furthermore, the similarities between these polluted systems extend to metallicity, with all three populations
being metal-deficient with respect to solar, most notably the CEMP-s stars. High-resolution spectroscopy has
revealed that G77-61 is one of the most metal-poor stars known ($\rm{[Fe/H]} = -4.0$; \citealt{Plez}), and preliminary
kinematical results based on {\em Gaia} DR1 suggest that the dC population as a whole is old and metal-poor,
with roughly $30\%$ -- $60\%$ halo members \citep{Far1}. Thus it appears that low metallicity is important for
C/O enhancement in dC stars.
\subsection{Carbon chemistry exoplanets}
There has been considerable interest in the existence of exoplanets that exhibit carbon dominated chemistry
\citep{Madhusudhan}. The existence of such planets requires that the protoplanetary material be intrinsically
enriched in carbon such that $\chem{C}/\chem{O} > 0.8$ \citep{Bond}. In this scenario, major planet-building
materials could be predominantly carbide minerals, allowing for a SiC, TiC, graphite mantle with an Fe--Si--Ni
core. Such planets would be chemically distinct from the rocky bodies found within the solar system. Although
unrelated to the present study, it is noteworthy that the minor bodies that pollute the surfaces of white dwarf stars
exhibit Earth-like or chondritic C/O, with no evidence for carbon-dominated materials \citep{Wilson}.
The potential frequency of carbon-rich exoplanets depends on the space density of viable host stars \citep{Fort}.
While dC stars are the most numerous carbon stars in the Galaxy, they are still far less abundant than their
oxygen-rich counterparts, with approximately 1:650\,000 dC stars relative to low-mass K and M dwarfs
($0.1\,{M}_{\odot}< {M} <0.8\,{M}_{\odot}$; \citealt{Bochanski,deKool}). With drastically fewer potential hosts with
$\rm{C/O} > 1$, the expected relative abundance of carbon-rich planets could be vanishingly small.
Assuming carbon-rich planets can and do form around host stars with C/O $>0.8$, the results presented here,
namely that all low-mass, main-sequence stars with C/O $\ge 1.0$ are consistent with $100\%$ duplicity,
diminish the possibility of single stars with C/O $\ge 1.0$, and thus their ability to host planets. The
available real estate for carbon planets may therefore be dismal. However, one subgroup of CEMP stars appears to commonly
possess both binary and single members (the CEMP-no stars; \citealt{Stark}) and therefore may contain primordial
carbon-enhancement. If these stars are formed from carbon-enhanced nebulae, then presumably they are possible
sites for carbon-rich planets \citep{Loeb}, notwithstanding the potentially unfavourable planet hosting frequency of
metal-poor stars \citep{Fisch}.
\section{Conclusions} \label{conc}
This radial velocity monitoring survey shows that 21 of 28 ($75\%$) dC stars exhibit radial velocity variations consistent
with duplicity. Using Monte Carlo simulations for a $100\%$ binary population with an orbital period distribution
$\textrm{log}_{10}{T\,\textrm{(d)}} = 2.0 \pm 0.8$, the empirical ($75\%$) and predicted (72\%) detection rates are
well matched. Thus the dC stars appear consistent with a 100\% binary population, supporting the post-mass transfer
nature of these stars. When compared to white dwarf-M dwarf binaries, which exhibit a bimodal period distribution, the
dC population appears to lie between the peaks, indicating that efficient mass transfer circumvents migration to short
or long periods.
The high binary fraction of dC stars constrains the potential real estate for carbon-rich exoplanets, owing to the extrinsic
nature of their high carbon abundance. As dC stars are the product of efficient mass transfer, the chemistry of the system
during the planet formation phase would not reflect the chemistry of the dC star observed today. This may also be true
for all main-sequence stars that exhibit C/O significantly above solar; if they exist (which is uncertain; \citealt{Fort,Teske})
such stars could be the result of binary mass transfer. It is clear from the dC stars that carbon enhancement in a
main-sequence star is possible via binary evolution, and thus more subtle C/O enhancements may be more common
(e.g.\ in FGK stars).
Continued radial velocity measurements for the stars in this study are necessary to determine actual orbits. Physical
models of mass transfer -- for example Roche Lobe overflow or wind capture \citep{Paczy,AbatePols} -- can only be
tested with tightly constrained binary periods. State-of-the-art mass transfer models currently face challenges in
producing carbon-enhanced stars in general \citep{Izzard,Matrozis}, and the newly uncovered dC binary population
can provide an additional and distinct set of empirical constraints.
\section*{Acknowledgements}
The authors would like to thank H. C. Harris, B. T. G\"ansicke, I. D. Howarth, and an anonymous reviewer for feedback
that improved the quality of the manuscript. The data obtained in this paper was done so using the William Herschel Telescope
that is operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de
los Muchachos of the Instituto de Astrof\'isica de Canarias.
\section{Introduction}
Thanks to an increasing number of wide-field and multi-wavelength imaging datasets, we can study scaling relations of galaxies over a wide redshift range. Investigations of the properties of galaxies have found a linear correlation between the integrated star formation rate (SFR) and stellar mass ($M_{*}$) of star-forming galaxies, namely the star formation main sequence (SFMS) \citep[e.g.][]{brinchmann2004, daddi2007, elbaz2007, noeske2007, salim2007, whitaker2012}. The integrated SFMS relation suggests that SFR increases with $M_{*}$ as a power law (SFR $\propto M_{*}^{\alpha}$ with $\alpha \sim 1$) over at least two orders of magnitude in stellar mass ($\sim10^{9}-10^{11}M_{\odot}$) up to $z\sim 6$. The normalization of the integrated SFMS relation decreases by $\sim 2$ dex from $z\sim 6$ to $z\sim 0$ \citep{speagle2014}. The tightness of the integrated SFMS relation, with a $1 \sigma$ scatter of only $\sim 0.3$ dex in the redshift range of $0\lesssim z \lesssim 3$ \citep[e.g.][]{whitaker2012, speagle2014, kurczynski2016}, implies the importance of a continuous internal secular process, rather than stochastic merger processes, in driving the star formation activity of the majority of galaxies \citep{noeske2007}.
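The power-law form of the integrated SFMS can be written down directly (a schematic only: the slope $\alpha \sim 1$ is from the text, while the normalization used here is an arbitrary placeholder rather than a published fit):

```python
import numpy as np

def sfms_sfr(mstar, alpha=1.0, sfr_at_1e10=1.0):
    """Schematic integrated SFMS, SFR ~ Mstar^alpha, normalized so a
    1e10 Msun galaxy forms sfr_at_1e10 Msun/yr (placeholder value)."""
    return sfr_at_1e10 * (np.asarray(mstar) / 1e10)**alpha

# With alpha = 1 the SFR scales linearly with stellar mass; the quoted
# 1-sigma scatter of ~0.3 dex corresponds to a factor of ~2 in SFR.
```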
Star-forming galaxies evolve with cosmic time while maintaining their position within $\pm 0.3$ dex around the global SFMS. Once the star formation activity in a galaxy is quenched, the galaxy moves away from the global SFMS relation until it reaches the red sequence, which is populated by quiescent galaxies. The mechanisms responsible for the quenching process are still unclear, and several have been proposed. Rapid gas consumption by a starburst event can make galaxies run out of gas and, in combination with the outflow driven by stellar feedback, can quench star formation in the galaxies \citep[e.g.][]{murray2005}. Furthermore, feedback from the growth of a central supermassive black hole [i.e. active galactic nucleus (AGN) feedback] can suppress the cold gas supply to galaxies through the quasar or radio feedback mode \citep[e.g.][]{sanders1988, silk1998, springel2005, hopkins2006, hopkins2008, schawinski2006, fabian2012}. On the other hand, the morphological quenching scenario proposes that once a central spheroidal component (i.e. a bulge) is formed, the deeper gravitational potential of the bulge can stabilize gas in the disc, and this stabilization prevents gas collapse and stops star formation in the disc \citep[e.g.][]{martig2009, genzel2014}. The suppression of cold gas accretion into a galaxy will also occur once the host dark matter halo grows beyond a certain critical mass ($\sim 10^{12}M_{\odot}$), above which newly accreted gas is shock heated \citep[e.g.][]{birnboim2003, dekel2006}.
Investigations of the morphology and structural properties of star-forming and quiescent galaxies revealed that quiescent galaxies tend to have higher S\'ersic index ($n$) and concentration index, i.e. higher bulge fraction (\textit{B/T}, bulge-to-total mass ratio), than star-forming galaxies \citep[e.g.][]{kauffmann2003a, wuyts2011}. It is still unclear how galaxies change their morphology from a disc-dominated system (low concentration index and S\'ersic index, $n \sim 1$) to a bulge-dominated system (high concentration index and S\'ersic index, $n\gtrsim 3$). Investigations of the radial stellar mass surface density profiles of massive galaxies at $0\lesssim z \lesssim 3$ revealed that massive galaxies establish their structures and stellar masses in an 'inside-to-outside' manner, where a bulge is already formed at $z\sim 2$ and a disc component is built subsequently \citep[e.g.][]{vandokkum2010, schreiber2011, nelson2012, nelson2016, patel2013, morishita2015, tacchella2015, tadaki2017}. Although it is suggested that galaxies change their morphologies to a bulge-dominated system during the quenching process, another investigation suggests that quiescent galaxies were born as bulge-dominated systems \citep{abramson2016}.
As the stellar mass buildup progresses inside-out, the quenching process also happens in a similar manner. This 'inside-out quenching' process is imprinted in the positive gradient of the specific SFR (sSFR) radial profiles of massive galaxies at $0\lesssim z \lesssim 2$ \citep[e.g.][]{tacchella2015, tacchella2018, gonzalez2016, abdurrouf2017, belfiore2018}. It is still unclear what physical mechanism is responsible for the inside-out quenching. Some simulation works have studied the physical mechanism behind the inside-out quenching. Cosmological zoom-in simulations by \citet{zolotov2015} and \citet{tacchella2016a, tacchella2016b} suggest that a galaxy may experience central gas compaction followed by a central starburst which rapidly consumes gas in the central region. If further cold gas supply into the central region is stopped due to radiative stellar feedback and/or AGN feedback, the inside-out quenching begins.
To understand how a galaxy's internal star formation builds up its stellar mass and structure, and how an internal quenching process shuts down star formation in the galaxy, an analysis of the spatially resolved distributions of $M_{*}$ and SFR for a large number of galaxies over a wide redshift range is essential. Recently, investigations of the sub-galactic ($\sim1$ kpc-scale) surface densities of stellar mass ($\Sigma_{*}$) and SFR ($\Sigma_{\rm SFR}$) of $z\sim 0$ and $z\sim 1$ galaxies revealed a nearly linear relation between $\Sigma_{*}$ and $\Sigma_{\rm SFR}$ of a similar form to the integrated scaling relation, namely the spatially resolved SFMS relation (for $z\sim 1$: \citet{wuyts2013} and \citet{magdis2016}; for $z\sim 0$: e.g. \citet{canodiaz2016}, \citet{maragkoudakis2017}, \citet{abdurrouf2017}, \citet{hsieh2017}, \citet{medling2018}, and \citet{liu2018}). Previous studies reported spatially resolved SFMS relations with various slopes ($\sim 0.7-1.0$) and zero points. This discrepancy is possibly caused by the different methods used in each study, especially the SFR indicator, i.e. the method used to derive SFR \citep{speagle2014}.
Understanding the spatially resolved SFMS relation and its evolution with cosmic time is very important for studying the origin of the global SFMS relation, because the sub-galactic relation can be a more fundamental relation from which the global relation originates. \citet{abdurrouf2017} studied the spatially resolved SFMS relation in local ($0.01<z<0.02$) massive ($M_{*}>10^{10.5}M_{\odot}$) disc galaxies using seven bands (FUV, NUV, $u$, $g$, $r$, $i$ and $z$) of imaging data from the \textit{Galaxy Evolution Explorer} (\textit{GALEX}) and the Sloan Digital Sky Survey (SDSS). In that study, we derived the spatially resolved SFR and stellar mass of a galaxy by using a so-called pixel-to-pixel spectral energy distribution (SED) fitting method, which fits the spatially resolved SED of a galaxy with a set of model photometric SEDs using a Bayesian statistics approach. The reason for choosing this method is that it is applicable to a large number of galaxies even at high redshifts, thanks to the high spatial resolution of the near-infrared (NIR) images taken by the \textit{Hubble Space Telescope} (\textit{HST}).
\citet{abdurrouf2017} found that the spatially resolved SFMS in local massive disc galaxies shows that $\Sigma_{\rm SFR}$ increases linearly with $\Sigma_{*}$ in the low-$\Sigma_{*}$ ($\lesssim 10^{7.5}M_{\odot}\text{kpc}^{-2}$) range, while it flattens in the high-$\Sigma_{*}$ ($\gtrsim 10^{7.5}M_{\odot}\text{kpc}^{-2}$) range. An investigation of the spatially resolved SFMS relation in the galaxies above $+0.3$ dex (hereafter, z0-$\Delta$MS1), between $-0.3$ and $+0.3$ dex (hereafter, z0-$\Delta$MS2) and below $-0.3$ dex (hereafter z0-$\Delta$MS3) of the global SFMS relation found a tight spatially resolved SFMS relation in the z0-$\Delta$MS1 and z0-$\Delta$MS2 galaxies, while the relation seems to be broken in the z0-$\Delta$MS3 galaxies. The normalization of the global SFMS in each group is preserved in the spatially resolved SFMS, in the sense that the spatially resolved SFMS of z0-$\Delta$MS1 galaxies has a higher normalization than that of z0-$\Delta$MS2 galaxies.
In the current work, we extend our previous study of the spatially resolved SFMS to massive disc galaxies at $0.8<z<1.8$ using the same pixel-to-pixel SED fitting method applied to the 8 bands (F435W, F606W, F775W, F814W, F850LP, F125W, F140W and F160W) of imaging data from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS;][]{grogin2011, koekemoer2011} and 3D-HST \citep{brammer2012} projects. The similar rest-frame wavelength coverage (FUV-NIR) and spatial resolution ($\sim 1$ kpc) of the imaging data used in this work to those in the previous work allow a consistent comparison and can resolve the problem caused by using different methods to study the spatially resolved SFMS at different redshifts. Furthermore, we also discuss the evolution of the $\Sigma_{*}$, $\Sigma_{\rm SFR}$ and sSFR radial profiles.
The structure of this paper is as follows. In Section~\ref{sec:data_sample}, we explain the sample. Section~\ref{sec:method} presents our methodology, the pixel-to-pixel SED fitting. Results and discussions are presented in Sections~\ref{sec:results} and \ref{sec:discussion}, respectively. The cosmological parameters $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$ and $H_{0}=70\,\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$ are used throughout this paper. We use $M_{*}$ to represent the total stellar mass of a galaxy, while $m_{*}$ represents the stellar mass within a sub-galactic region. The term 'global' is used to indicate a galaxy-scale quantity, while the term 'sub-galactic' is used to represent a $\sim 1$ kpc scale quantity within a galaxy.
\section{Data sample}
\label{sec:data_sample}
To examine the relation between $\Sigma_{*}$ and $\Sigma_{\rm SFR}$ of galaxies at $z\sim 1$ with the same resolution of 1-2 kpc as that of the local galaxies in the companion paper \citep{abdurrouf2017}, we use eight bands (F435W, F606W, F775W, F814W, F850LP, F125W, F140W and F160W) of imaging data from CANDELS \citep[]{grogin2011, koekemoer2011} and 3D-HST \citep{brammer2012}, which cover $\sim 4000$--$16000$\,{\AA}. The eight bands at $z\sim 1$ have similar rest-frame wavelength coverage to the seven bands (FUV, NUV, $u$, $g$, $r$, $i$ and $z$) of the \textit{GALEX} and SDSS imaging data for galaxies at $z\sim 0$. Thanks to the wide wavelength coverage, the degeneracy between age and dust extinction (inherent in stellar population synthesis models) can be broken. The dust extinction can be constrained by the rest-frame FUV$-$NUV colour (the observed F435W$-$F606W colour at $z\sim 1$), while the age can be constrained by the rest-frame $u-g$ colour (the observed F775W$-$F850LP colour at $z\sim 1$).
We select sample galaxies located in the GOODS-S field from the 3D-HST catalog \citep[]{skelton2014, brammer2012} based on $M_{*}$ and redshift. In the catalog, $M_{*}$ is calculated through $0.3\,\mu \text{m}$ to $8\,\mu \text{m}$ SED modeling using the FAST code \citep{kriek2009}, and the redshift is determined using three methods: (1) photometric redshifts from the $0.3\,\mu \text{m}$ to $8\,\mu \text{m}$ SED fitting with the EAZY code \citep{brammer2008}, (2) two-dimensional grism spectroscopy by 3D-HST and (3) ground-based spectroscopy. For SFR, we do not use the SFR derived by the FAST code; instead we use the SFR calculated following \citet{whitaker2014}, which uses the combination of the rest-frame UV and IR luminosities. We applied the following criteria to select the sample galaxies: (1) redshift range of $0.8<z<1.8$, (2) stellar mass higher than $10^{10.5}M_{\odot}$, (3) observed in all eight bands, (4) face-on configuration with ellipticity less than $0.6$, i.e. $b/a > 0.4$, and (5) late-type (disc) morphology with S\'ersic index ($n$) less than $2.6$.
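A minimal sketch of the catalogue cut defined by the five criteria above (the column names used here are hypothetical placeholders, not the actual 3D-HST catalog fields):

```python
import numpy as np

def select_sample(cat):
    """Apply the five selection criteria from the text to a catalogue
    given as a dict of numpy arrays.  The keys (z, logmstar, n_bands,
    b_over_a, sersic_n) are illustrative names only."""
    return (
        (cat["z"] > 0.8) & (cat["z"] < 1.8)      # (1) redshift range
        & (cat["logmstar"] > 10.5)               # (2) M* > 10^10.5 Msun
        & (cat["n_bands"] == 8)                  # (3) observed in all 8 bands
        & (cat["b_over_a"] > 0.4)                # (4) face-on (ellipticity < 0.6)
        & (cat["sersic_n"] < 2.6)                # (5) disc morphology
    )
```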
The redshift range is determined to achieve a resolution of $\sim 1$ kpc with the F160W image, which has the largest full width at half-maximum (FWHM) of $0.19$ arcsec among the eight bands. In this redshift range, the eight-band coverage samples the rest-frame SED from the FUV to near-infrared (NIR) wavelengths. We apply the same mass limit of $10^{10.5}M_{\odot}$ as in \citet{abdurrouf2017}. We select face-on galaxies to minimize the effect of dust extinction. The ellipticities of the galaxies are calculated by averaging the F125W-band elliptical isophotes outside of an effective radius, as described in the construction of the radial profiles (see Section~\ref{sec:radial_profiles}). The S\'ersic index is calculated by fitting a S\'ersic profile to the one-dimensional stellar mass surface density radial profile ($\Sigma_{*}(r)$) using the maximum likelihood method. The calculations of $\Sigma_{*}(r)$ and the S\'ersic index are explained in Section~\ref{sec:radial_profiles}. In addition to the five selection criteria described above, we only select galaxies which have more than four bins of pixels, where each bin has a signal-to-noise (S/N) ratio of more than $10$ in all of the eight bands (see Section 3.2 of \citet{abdurrouf2017} for a description of the binning method).
For galaxies with a photometric redshift, we check the reliability of the redshift estimation by fitting the integrated SEDs of the galaxies in the F435W to F160W bands with model SEDs calculated at the redshifts of the galaxies, using the maximum likelihood method to obtain the best-fitting model SED. We find eight galaxies with a peculiar SED that results in a very large $\chi^{2}$ and leads to an unreliable redshift estimate, while the other galaxies have small $\chi^{2}$, which indicates that their photometric redshifts are reliable. We therefore exclude these eight galaxies from the sample. Next, we cross-match the remaining 163 galaxies with the \textit{Chandra} $7$ Ms source catalog \citep[]{luo2017, yang2017} to remove galaxies with luminous AGN activity. The \textit{Chandra} catalog contains X-ray sources from the $\sim 7$ Ms exposure in the \textit{Chandra} Deep Field-South (CDF-S), which covers the GOODS-S field. We find 11 galaxies with luminous X-ray AGN activity ($L_{2-10\text{keV}}>10^{43}\text{erg }\text{s}^{-1}$) in the sample and exclude them to avoid contamination of the broad-band photometry by the non-stellar AGN component. Finally, $152$ galaxies are selected for further analysis. The top panel of Fig.~\ref{fig:galaxies_sample} shows the $M_{*}$ and SFR of the $152$ sample galaxies (blue circles) along with the distribution of all galaxies more massive than $10^{9.5}M_{\odot}$ at $0.8<z<1.8$ in the GOODS-S field (small black circles). The black line indicates the global SFMS relation of \citet{speagle2014} calculated at the median redshift of the sample, $z=1.217$, while the gray shaded area represents the $\pm 0.3$ dex scatter around the global SFMS relation. The purple stars represent AGN-host galaxies. The bottom panel of Fig.~\ref{fig:galaxies_sample} shows redshift versus $M_{*}$ of the sample galaxies.
The figure shows that the redshifts of the sample galaxies are distributed uniformly within the redshift range.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{galaxies_sample.png}
\includegraphics[width=0.5\textwidth]{plot_logSM_vs_redshift.png}
\caption{Top panel: SFR versus $M_{*}$ of galaxies more massive than $\log(M_{*}/M_{\odot})=9.5$ at $0.8<z<1.8$ located in the GOODS-S region. Blue circles represent $152$ sample galaxies used in this work, while the purple star symbols represent AGN-host galaxies. The black line represents the global SFMS relation of \citet{speagle2014} calculated at the median redshift of the sample galaxies, $z=1.217$, and the gray shaded area around it represents the $\pm 0.3$ dex scatter. Bottom panel: redshifts versus $M_{*}$ of the sample galaxies. \label{fig:galaxies_sample}}
\end{figure}
The eight-band mosaic images from the 3D-HST\ \footnote{\url{http://3dhst.research.yale.edu/Data.php}} are registered to the same sampling of $0.06$ $\text{arcsec }\text{pixel}^{-1}$ and PSF-matched to the F160W image, and the background of the mosaic images has been subtracted. The FWHM corresponds to a physical scale of $\sim 1.4-1.6$ kpc at $0.8<z<1.8$. The $5\sigma$ limiting magnitudes of the F435W, F606W, F775W, F814W, F850LP, F125W, F140W and F160W bands within a $0.7$ arcsec diameter aperture are $27.3$, $27.4$, $26.9$, $27.2$, $26.5$, $26.1$, $25.6$ and $26.4$ mag, respectively \citep{skelton2014}.
\section{Methodology} \label{sec:method}
In order to derive spatially resolved stellar population properties, especially the SFR and $M_{*}$, we use the pixel-to-pixel SED fitting method, the same method as used in \citet{abdurrouf2017}. We fit the spatially resolved SED of each bin with a set of model SEDs using a Bayesian statistics approach. The method can be divided into three main steps: (1) image registration, PSF matching, and pixel binning to get the photometric SED of each bin of a galaxy, (2) construction of a library of model photometric SEDs, and (3) fitting the SED of each bin with the set of model SEDs, as described in detail in \citet{abdurrouf2017}.
We do not need image registration and PSF matching because the eight-band imaging data provided by the 3D-HST have already been registered and PSF-matched as described previously, so the first step is to define the area of a galaxy. To do so, we first generate a segmentation map for the mosaic image in each band using \texttt{SExtractor} \citep{bertin1996} with a detection threshold of 1.5 times the rms scatter outside of the galaxy; then, using the position of a specific galaxy from the 3D-HST catalog, we find the segmentation map around the galaxy. In the \texttt{SExtractor} segmentation map, each object is indicated with a different value, which corresponds to the id number of the object in the generated catalog. By reading the value of the galaxy's central pixel and looking for other pixels with the same value, we obtain the pixels associated with the galaxy. Outlier pixels that are not connected with the main area of the galaxy are sometimes included; in such cases, we exclude the pixels that have no connection to the central pixel of the galaxy. The segmentation maps of the galaxy in the eight bands are then merged to define the area of the galaxy.
The second step is to convert the pixel values in units of $\text{count }\text{s}^{-1}$ to fluxes in $\text{erg }\text{s}^{-1} \text{cm}^{-2}${\AA}$^{-1}$ and then to bin the pixels to increase the S/N ratio. The conversion is done by multiplying the pixel value by the conversion factor given in the PHOTFLAM header keyword. The pixel binning considers not only an S/N threshold to be reached by combining pixels, but also the similarity of the SED shapes (tested through a $\chi^{2}$ calculation) among the pixels to be binned. A bin is grown by first looking for the brightest pixel in the F125W band, then checking each neighbouring pixel located within a circular annulus centred at the brightest pixel for the similarity of its SED shape to that of the brightest pixel (with $\chi^{2}$ below a certain limit), and including the pixel in the bin if its SED shape is similar. The radius of the circular annulus is then increased by $2$ pixels and the same procedure is repeated to add more pixels until the total S/N of the bin reaches the S/N threshold. The next bin is made by looking for the brightest F125W-band pixel that was not included in the previous binning and repeating the steps above. This procedure is applied until no further bin can be made with the remaining pixels; finally, all the remaining pixels are combined into one bin. As in \citet{abdurrouf2017}, we set an S/N threshold of 10 in all eight bands and a $\chi^{2}$ limit of 30. Fig.~\ref{fig:binningmap} shows the F125W-band image (top panel) and the binning result (bottom panel) of the galaxy \texttt{GS\_19186}, which is located at RA$=53^{\circ}.120750$, DEC$=-27^{\circ}.818984$ and $z=1.0940$.
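The growth of a single bin (expanding annulus, SED-shape test, S/N stopping rule) can be sketched as below. This is a simplified illustration: the $\chi^{2}$ similarity statistic and the stopping check are stand-ins for the exact definitions of \citet{abdurrouf2017}.

```python
import numpy as np

def sed_chi2(f1, f2):
    # Simplified SED-shape similarity: compare normalized band fluxes.
    a, b = f1 / f1.sum(), f2 / f2.sum()
    return np.sum((a - b) ** 2 / (np.abs(b) + 1e-12))

def grow_bin(flux, err, seed, unbinned, sn_thresh=10.0, chi2_lim=30.0):
    """Grow one bin around `seed` by expanding an annulus in 2-pixel steps,
    adding unbinned pixels whose SED shape matches the seed, until the
    combined S/N in every band reaches sn_thresh."""
    ny, nx, _ = flux.shape
    yy, xx = np.indices((ny, nx))
    dist = np.hypot(yy - seed[0], xx - seed[1])
    members = [seed]
    radius = 2.0
    while True:
        ring = (dist <= radius) & unbinned
        for y, x in zip(*np.nonzero(ring)):
            if (y, x) not in members and sed_chi2(flux[y, x], flux[seed]) < chi2_lim:
                members.append((y, x))
        tot = np.array([flux[y, x] for y, x in members]).sum(axis=0)
        noise = np.sqrt(np.array([err[y, x] ** 2 for y, x in members]).sum(axis=0))
        if np.all(tot / noise >= sn_thresh) or radius > max(ny, nx):
            return members
        radius += 2.0
```

A caller would mark the returned members as binned and repeat from the next brightest unbinned pixel.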
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Gal138_F125Wbandflux.png}
\includegraphics[width=0.5\textwidth]{Gal138_binningmap.png}
\caption{Top panel: F125W-band image of the galaxy \texttt{GS\_19186} in the sample, which is located at RA$=53^{\circ}.120750$, DEC$=-27^{\circ}.818984$ and $z=1.0940$. Bottom panel: pixel binning result of the galaxy. The colour coding represents the bin index. North is to the top and east is to the left. \label{fig:binningmap}}
\end{figure}
The next step is constructing a library of model SEDs. A library of $300,000$ model photometric SEDs with random sets of parameters [$\tau$, $t$, $E(B-V)$, and $Z$] is generated by interpolating the $286,000$ parent model SEDs in a grid of those parameters. We use the GALAXEV stellar population synthesis model \citep{bruzual2003} with the \citet{chabrier2003} initial mass function (IMF) and an exponentially declining star formation history of $\text{SFR}(t)\propto e^{-t/\tau}$. $\tau$, $t$, $E(B-V)$, and $Z$ represent the SFR decay timescale, the age of the stellar population, the colour excess of dust attenuation, and the metallicity of the stellar population, respectively. We multiply the parent model spectra by the eight filter transmission curves of CANDELS and 3D-HST and then integrate them to get model fluxes in the $8$ bands. To apply the effect of dust extinction, we use the \citet{calzetti2000} dust extinction law. The random parameters are drawn from the following ranges: $\tau[0.1:10\text{ Gyr}]$, $t[0.25:6.6\text{ Gyr}]$, $E(B-V)[0:0.6\text{ mag}]$ and $Z[0.004:0.05]$. These parameter ranges are the same as those used in \citet{abdurrouf2017}, except for the age range, for which the age of the universe at $z=0.8$ is used as an upper limit. As in the previous work, the interpolation to estimate the $8$ band fluxes and stellar masses for a random parameter set is done in two steps: first, interpolation in the three-dimensional space [$E(B-V)$, $t$ and $\tau$] using tricubic interpolation for each metallicity ($Z$); then, interpolation in the one-dimensional space of $Z$ with a cubic spline.
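The two-step interpolation can be sketched as follows. For brevity this toy version uses trilinear interpolation in [$E(B-V)$, $t$, $\tau$] (the paper uses tricubic) followed by a cubic spline in $Z$, and a made-up flux function stands in for the parent model library:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator, CubicSpline

# Hypothetical coarse parent grid (illustrative only).
ebv = np.linspace(0.0, 0.6, 4)
age = np.linspace(0.25, 6.6, 5)
tau = np.linspace(0.1, 10.0, 5)
Z = np.array([0.004, 0.008, 0.02, 0.05])

def toy_flux(e, t, tl, z):
    # Stand-in for a model band flux; a real library stores 8-band fluxes.
    return np.exp(-t / tl) * 10 ** (-0.4 * 3.1 * e) * (1.0 + 10.0 * z)

E, T, TAU = np.meshgrid(ebv, age, tau, indexing="ij")
grid = np.stack([toy_flux(E, T, TAU, z) for z in Z], axis=-1)

def interp_flux(e, t, tl, z):
    # Step 1: interpolate in [E(B-V), t, tau] at each grid metallicity
    # (trilinear here; the paper uses tricubic).
    per_z = [RegularGridInterpolator((ebv, age, tau), grid[..., k])([[e, t, tl]])[0]
             for k in range(len(Z))]
    # Step 2: cubic spline across metallicity Z.
    return float(CubicSpline(Z, per_z)(z))
```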
After constructing the spatially resolved SED of a galaxy and generating the library of model SEDs, the final step is fitting the observed SED of each bin with the library of model SEDs to get the $m_{*}$ and SFR of the bin. The fitting is done using a Bayesian statistics approach: probability distribution functions (PDFs) of the $m_{*}$ and SFR are constructed by compounding the probabilities of the model SEDs, and then the posterior means of the $m_{*}$ and SFR are calculated. We evaluate the probability of a model based on its $\chi^{2}$ in the form of a Student's t distribution with $\nu=3$ degrees of freedom, instead of a Gaussian form. It has been verified that this new model weighting scheme gives estimates of the SFR and sSFR consistent with those estimated from the $24\mu$m flux (see appendix A of \citet{abdurrouf2017}). Uncertainties of the SFR and $m_{*}$ are estimated by calculating the standard deviations of their PDFs. Once the $m_{*}$ and SFR of a bin are obtained, those values are distributed among the pixels belonging to the bin by assuming that the $m_{*}$ and SFR of a pixel are proportional to the pixel's fluxes in the F160W and F435W bands, respectively.
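The posterior-mean step can be sketched as below; the specific weighting kernel $(1+\chi^{2}/\nu)^{-(\nu+1)/2}$ is the Student's t form assumed here for illustration, as a heavier-tailed alternative to the Gaussian weight $e^{-\chi^{2}/2}$:

```python
import numpy as np

def posterior_mean(chi2, quantity, nu=3):
    """Posterior mean and standard deviation of `quantity` over the model
    library, weighting each model by a Student's t form of its chi-square
    (nu = 3) rather than the usual Gaussian exp(-chi2/2)."""
    w = (1.0 + chi2 / nu) ** (-0.5 * (nu + 1))
    w /= w.sum()                               # normalize to a PDF
    mean = np.sum(w * quantity)                # posterior mean
    std = np.sqrt(np.sum(w * (quantity - mean) ** 2))  # posterior std
    return mean, std
```

In practice `quantity` would be the stellar mass (or SFR) of each model in the library and `chi2` the fit statistic of that model against the bin's eight-band SED.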
Fig.~\ref{fig:fitting_results} shows an example of the pixel-to-pixel SED fitting result for the galaxy \texttt{GS\_19186} in the sample (whose pixel binning result is shown in Fig.~\ref{fig:binningmap}). The $\Sigma_{\rm SFR}$ map roughly traces the spiral arms, which are associated with high star formation activity, while the $\Sigma_{*}$ map shows a smoother distribution. Pixels with a negative $\Sigma_{\rm SFR}$ ($\Sigma_{*}$), due to a negative F435W (F160W) flux caused by noise fluctuations, are not shown in the plot with the logarithmic scale. These pixels are, however, included in the later analysis, e.g. the calculation of the integrated SFR and $M_{*}$ and the radial profiles of $\Sigma_{\rm SFR}(r)$ and $\Sigma_{*}(r)$.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Gal138_SFRSMmapfit.png}
\caption{An example of pixel-to-pixel SED fitting result for a galaxy \texttt{GS\_19186} (RA$=53^{\circ}.120750$, DEC$=-27^{\circ}.818984$ and $z=1.0940$). Top left panel: stellar mass surface density ($\Sigma_{*}$) map. Top right panel: SFR surface density ($\Sigma_{\rm SFR}$) map. Bottom left panel: $\Sigma_{*}$ uncertainty map. Bottom right panel: $\Sigma_{\rm SFR}$ uncertainty map. Up and left directions correspond to the north and east, respectively. \label{fig:fitting_results}}
\end{figure*}
Fig.~\ref{fig:integrated_SFMS} shows the integrated SFR versus $M_{*}$ of the sample galaxies. The integrated SFR and $M_{*}$ of a galaxy are derived by summing up the SFR and $m_{*}$ of all pixels that belong to the galaxy. The distribution of the sample galaxies on the SFR versus $M_{*}$ plane derived with our method (Fig.~\ref{fig:integrated_SFMS}) is considerably different from that based on the 3D-HST catalog (Fig.~\ref{fig:galaxies_sample}). This discrepancy is caused by discrepancies in the estimates of both the SFR and $M_{*}$. Fig.~\ref{fig:compare_SFR_ptpSEDfit_3DHST} compares the integrated SFR derived using our pixel-to-pixel SED fitting method ($\text{SFR}_{\rm ptpSEDfit}$, the sum of the SFRs of a galaxy's pixels) with that from the 3D-HST catalog ($\text{SFR}_{\rm UV+IR}$). The figure shows that $\text{SFR}_{\rm ptpSEDfit}$ is broadly consistent with $\text{SFR}_{\rm UV+IR}$. The histogram shows the distribution of $\log(\text{SFR}_{\rm UV+IR}/\text{SFR}_{\text{ptpSEDfit}})$, which has a mean value ($\mu$) of $0.031$ and a standard deviation ($\sigma$) of $0.48$ dex. The colour coding represents the ratio $\log(\text{SFR}_{\rm UV}/\text{SFR}_{\rm UV+IR})$, which is expected to be inversely proportional to the amount of dust extinction. The figure shows a systematic dependence on the amount of dust extinction: the summed SFR from the pixel-to-pixel SED fitting is systematically smaller for galaxies with large dust extinction. The large offset is observed only among a few galaxies in the sample, so we expect its effect on the analysis of the statistical sample to be minor. The discrepancies in the SFR and $M_{*}$ are further discussed in appendix A.
The $M_{*}$ estimated using our method is systematically higher than the $M_{*}$ taken from the 3D-HST catalog (see the lower panel in Fig.~\ref{SFR_SM_3DHST_vs_ptpSEDfit}).
In the later analysis, we will discuss the difference between the spatially resolved SFMS relations of galaxies as a function of their distances from the global SFMS relation in the SFR versus $M_{*}$ plane. As we used the global SFMS relation by \citet{speagle2014} to classify galaxies based on their distances from the global SFMS in \citet{abdurrouf2017}, here we also use the same global SFMS relation. The solid line in Fig.~\ref{fig:integrated_SFMS} represents the global SFMS relation calculated at the median redshift of the sample, $z=1.217$. The grey-shaded area represents $\pm 0.3$ dex around the global SFMS relation. Galaxies that are located within $\pm 0.3$ dex, between $-0.3$ and $-0.8$ dex, and below $-0.8$ dex from the global SFMS are called z1-$\Delta$MS1 (blue circles), z1-$\Delta$MS2 (green squares), and z1-$\Delta$MS3 (red diamonds), respectively. These sSFR groups are selected such that the majority of the z1-$\Delta$MS1 and z1-$\Delta$MS3 galaxies are star-forming and quiescent galaxies, respectively. The upper limit defining the z1-$\Delta$MS3 is chosen such that the majority of galaxies below it are quiescent, as verified with the $UVJ$ diagram. The positions of the sample galaxies on the $UVJ$ diagram and the verification that the majority of the z1-$\Delta$MS1 and z1-$\Delta$MS3 galaxies are star-forming and quiescent, respectively, are described in appendix B. The median values of $\log(\text{sSFR}\,[\text{yr}^{-1}])$ (and numbers of galaxies) of the z1-$\Delta$MS1, z1-$\Delta$MS2, and z1-$\Delta$MS3 sub-samples are $-9.28$ (47), $-9.70$ (72), and $-10.07$ (33), respectively.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{integrated_SFMS_z1.png}
\caption{Integrated SFR versus $M_{*}$ of the sample galaxies. The solid line represents the global SFMS relation of \citet{speagle2014} calculated at the median redshift of the sample ($z=1.217$) and the dashed line represents the SFMS$-0.8$ dex. The grey-shaded region corresponds to $\pm 0.3$ dex from the global SFMS. The blue circles (located within $\pm 0.3$ dex of the global SFMS), green squares (located between $-0.3$ and $-0.8$ dex from the global SFMS) and red diamonds (located below $-0.8$ dex from the global SFMS) represent the z1-$\Delta$MS1, z1-$\Delta$MS2 and z1-$\Delta$MS3 sub-samples, respectively. \label{fig:integrated_SFMS}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{compare_SFR_ptpSEDfit_3DHST_withratioUVtotSFR.png}
\caption{Comparison between the integrated SFR estimated using the pixel-to-pixel SED fitting method ($\text{SFR}_{\rm ptpSEDfit}$) and that from the 3D-HST catalog ($\text{SFR}_{\rm UV+IR}$). The black line shows the one-to-one line. The histogram shows the distribution of $\log(\text{SFR}_{\rm UV+IR}/\text{SFR}_{\text{ptpSEDfit}})$, which has a mean value ($\mu$) of $0.031$ and a standard deviation ($\sigma$) of $0.48$ dex. The colour coding represents the ratio $\log(\text{SFR}_{\rm UV}/\text{SFR}_{\rm UV+IR})$, which is expected to be inversely proportional to the amount of dust extinction. \label{fig:compare_SFR_ptpSEDfit_3DHST}}
\end{figure}
\section{Results} \label{sec:results}
\subsection{Spatially resolved star formation main sequence in massive disc galaxies at \texorpdfstring{$z\sim 1$}{Lg}} \label{sec:spatially_resolved_SFMS}
To examine the relation between $\Sigma_{*}$ and $\Sigma_{\rm SFR}$ at the $\sim 1$ kpc scale in the $z\sim 1$ massive disc galaxies, the $\Sigma_{*}$ and $\Sigma_{\rm SFR}$ of all 597651 pixels of the sample galaxies are plotted in Fig.~\ref{fig:resolved_SFMS}. In the figure, the contours are colour-coded with the number of pixels in each $0.1 \times 0.1$ dex bin. The vertical (horizontal) lines at the bottom (left) axes are the median values of $\Sigma_{*}$ ($\Sigma_{\rm SFR}$) for pixels located in the outskirt (the outermost $8$ kpc elliptical annulus) of the sample galaxies; they represent the limiting values for those quantities given the low S/N of the outskirt pixels (S/N$\sim 0.5$ per pixel). The contours with high number density imply a tight relation between $\Sigma_{*}$ and $\Sigma_{\rm SFR}$. The black circles with error bars over-plotted on the contours show the mode of the $\Sigma_{\rm SFR}$ distribution in each $\Sigma_{*}$ bin of $0.3$ dex width. The error bars represent the standard deviation from the mode, calculated separately above and below the mode value. As shown by the mode values, the relation between $\Sigma_{*}$ and $\Sigma_{\rm SFR}$ is linear at low $\Sigma_{*}$ ($\lesssim 10^{8.5}M_{\odot}\text{kpc}^{-2}$) and flattens at the high $\Sigma_{*}$ end ($\gtrsim 10^{8.5}M_{\odot}\text{kpc}^{-2}$).
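Extracting the mode of the $\Sigma_{\rm SFR}$ distribution in $0.3$ dex wide $\Sigma_{*}$ bins can be sketched as below; taking the mode as the centre of the most populated histogram bin (with an assumed 20-bin internal histogram) is a simplification of this sketch, not necessarily the paper's exact estimator:

```python
import numpy as np

def mode_per_bin(log_sm, log_sfr, bin_width=0.3):
    """Mode of the log Sigma_SFR distribution in each log Sigma_* bin of
    width bin_width; the mode is the centre of the most populated bin of
    a 20-bin internal histogram (an assumption of this sketch)."""
    edges = np.arange(log_sm.min(), log_sm.max() + bin_width, bin_width)
    centres, modes = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = log_sfr[(log_sm >= lo) & (log_sm < hi)]
        if sel.size == 0:
            continue
        h, e = np.histogram(sel, bins=20)
        k = np.argmax(h)
        centres.append(0.5 * (lo + hi))
        modes.append(0.5 * (e[k] + e[k + 1]))
    return np.array(centres), np.array(modes)
```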
Fitting the linear part of the mode values (consisting of five mode values with $\Sigma_{*} \lesssim 10^{8.5}M_{\odot}\text{kpc}^{-2}$, excluding the two lowest $\Sigma_{*}$ points, which are affected by the limiting value of $\Sigma_{*}$) with a linear relation of the form
\begin{equation}
\log \Sigma_{\rm SFR} = \alpha \log \Sigma_{*} + \beta
\end{equation}
using a least-squares fitting method yields a best-fitting slope ($\alpha$) of $0.88$ and zero-point ($\beta$) of $-8.31$, shown by the black line. The red squares show the spatially resolved SFMS relation of massive ($\log(M_{*}/M_{\odot})>10$) star-forming galaxies at $0.7<z<1.5$ reported by \citet{wuyts2013}, which was derived from the median of the $\Sigma_{\rm SFR}$ distribution in each $\Sigma_{*}$ bin. They also reported a flattening tendency of the relation in the high $\Sigma_{*}$ region, although it is not as clear as the flattening trend obtained in this work. The systematically lower spatially resolved SFMS relation found in this work compared to that reported by \citet{wuyts2013} is partly caused by the different sample selection: only massive star-forming galaxies were used in \citet{wuyts2013}, while in this work we include not only massive star-forming galaxies but also green-valley and quiescent galaxies, which have lower $\Sigma_{\rm SFR}$ at fixed $\Sigma_{*}$ in the high $\Sigma_{*}$ region, as will be discussed later.
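The fit of the mode values reduces to an ordinary least-squares line in log space. In the sketch below the points are synthetic values placed exactly on the reported best-fitting relation (they are not the measured modes):

```python
import numpy as np

# Synthetic mode points placed on the reported relation
# log Sigma_SFR = 0.88 * log Sigma_* - 8.31 (illustration only).
log_sm = np.array([7.0, 7.3, 7.6, 7.9, 8.2])
log_sfr = 0.88 * log_sm - 8.31

# Ordinary least-squares fit of a first-degree polynomial.
alpha, beta = np.polyfit(log_sm, log_sfr, 1)
```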
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{resolved_SFMS_allgals_noAGN.png}
\caption{$\Sigma_{\rm SFR}$ versus $\Sigma_{*}$ of all 597651 pixels of the 152 sample galaxies, shown with contours colour-coded by the number of pixels in each $0.1\times 0.1$ dex bin. The vertical (horizontal) lines at the bottom (left) axes are the limiting values of $\Sigma_{*}$ ($\Sigma_{\rm SFR}$), i.e. the median values of $\Sigma_{*}$ ($\Sigma_{\rm SFR}$) for pixels located in the outskirt (the outermost $8$ kpc elliptical annulus) of the sample galaxies. The black circles with error bars show the spatially resolved SFMS relation obtained by taking the mode of the $\Sigma_{\rm SFR}$ distribution in each $\Sigma_{*}$ bin of $0.3$ dex width. The black line represents a linear function fitted to the five mode values with $\Sigma_{*}\lesssim 10^{8.5}M_{\odot}\text{kpc}^{-2}$, excluding the two mode values at the lowest $\Sigma_{*}$. The red line with squares shows the spatially resolved SFMS relation of \citet{wuyts2013}. \label{fig:resolved_SFMS}}
\end{figure}
Next, we investigate the spatially resolved SFMS relation as a function of the distance from the global SFMS. Fig.~\ref{fig:resolved_SFMS_comb} shows the spatially resolved SFMS relation of the z1-$\Delta$MS1 galaxies (top left; 160210 pixels), the z1-$\Delta$MS2 galaxies (top right; 286721 pixels), the z1-$\Delta$MS3 galaxies (bottom left; 150720 pixels) and the compilation of those three relations (bottom right). The spatially resolved SFMS relations of the z1-$\Delta$MS1, z1-$\Delta$MS2, and z1-$\Delta$MS3 are shown with blue circles, green squares, and red triangles, respectively. The spatially resolved SFMS of the z1-$\Delta$MS1 galaxies shows a linearly increasing trend over the entire $\Sigma_{*}$ range, without the flattening at high $\Sigma_{*}$ found in the spatially resolved SFMS of all galaxies (Fig.~\ref{fig:resolved_SFMS}). The flattening at high $\Sigma_{*}$ appears in the spatially resolved SFMS of the z1-$\Delta$MS2 galaxies and is even more pronounced in that of the z1-$\Delta$MS3. The solid line in the top left panel represents a linear function fitted to the eight mode values (excluding the one with the lowest $\Sigma_{*}$, which is affected by the $\Sigma_{*}$ limit), with a slope of $1.01$ and a zero-point of $-9.24$. The dashed lines in the three panels are the same as the solid line in Fig.~\ref{fig:resolved_SFMS}.
A comparison between the three spatially resolved SFMS relations (bottom right panel) shows similar values of $\Sigma_{\rm SFR}$ in the low $\Sigma_{*}$ region, while there is a large difference in $\Sigma_{\rm SFR}$ in the high $\Sigma_{*}$ region. Most of the pixels with high $\Sigma_{*}$ are located in the central regions of the galaxies, while the pixels with low $\Sigma_{*}$ are located in the disc regions. The linearly increasing trend of the spatially resolved SFMS of the z1-$\Delta$MS1 galaxies indicates ongoing star formation activity in the central region as well as in the outskirt, while the flattening in the high $\Sigma_{*}$ region of the spatially resolved SFMS relations of the other groups indicates that a quenching mechanism is at work in the central region.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{resolved_SFMS_comb_SFGVQS_noAGN.png}
\caption{Spatially resolved SFMS relations of the galaxies as a function of their distance from the global SFMS: for galaxies located within $\pm 0.3$ dex from the global SFMS (z1-$\Delta$MS1; top left panel), between $-0.3$ dex and $-0.8$ dex from the global SFMS (z1-$\Delta$MS2; top right panel) and below $-0.8$ dex from the global SFMS (z1-$\Delta$MS3; bottom left panel). The spatially resolved SFMS relations of z1-$\Delta$MS1, z1-$\Delta$MS2, and z1-$\Delta$MS3 are shown with blue circles, green squares, and red triangles, respectively. The solid line in the top left panel represents the linear fit to the eight mode values (excluding the one with the lowest $\Sigma_{*}$) and the dashed lines in the three panels are the same as the solid line in Fig.~\ref{fig:resolved_SFMS}. Bottom right panel: comparison between the three spatially resolved SFMS relations from the previous three panels. For clarity, the blue circles are shifted by 0.05 dex to the left, while the red triangles are shifted by 0.05 dex to the right from their actual positions. \label{fig:resolved_SFMS_comb}}
\end{figure*}
\subsection{Radial profiles of \texorpdfstring{$\Sigma_{*}(r)$}{Lg}, \texorpdfstring{$\Sigma_{\rm SFR}(r)$}{Lg} and sSFR\texorpdfstring{$(r)$}{Lg} at \texorpdfstring{$z\sim 1$}{Lg}}
\label{sec:radial_profiles}
Increasing $\Sigma_{*}$ along the x-axis of the spatially resolved SFMS plots (Fig.~\ref{fig:resolved_SFMS} and Fig.~\ref{fig:resolved_SFMS_comb}) roughly corresponds to a decreasing radius toward the central region of the galaxies, because the radial profile of $\Sigma_{*}(r)$ monotonically decreases from the central region to the outskirt. Therefore, the spatially resolved SFMS might be correlated with the radial profiles of $\Sigma_{*}(r)$, $\Sigma_{\rm SFR}(r)$ and sSFR$(r)$. Here, we derive those radial profiles to study how they correlate with the spatially resolved SFMS and how they change with the distance of the galaxy from the global SFMS.
First, the $\Sigma_{\rm SFR}(r)$ and $\Sigma_{*}(r)$ profiles are constructed by averaging the $\Sigma_{\rm SFR}$ and $\Sigma_{*}$ of the pixels in each elliptical annulus of semi-major axis $r$. The sSFR$(r)$ profile is then obtained by dividing $\Sigma_{\rm SFR}(r)$ by $\Sigma_{*}(r)$. The elliptical annuli are determined as follows. First, elliptical isophotes are fitted to the F125W-band image of a galaxy using the \texttt{ellipse} task in \texttt{IRAF}. Then, the average ellipticity and position angle are derived from the isophotes outside of the half-mass radius of the galaxy, which is defined as the length of the semi-major axis that encloses half of the total $M_{*}$. The half-mass radius itself is calculated based on ellipses whose ellipticity and position angle are determined by averaging those of the isophotes over the entire radial range. The radial profile is sampled with a $2$ kpc step.
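Averaging pixel values within elliptical annuli can be sketched as follows; the coordinate convention (position angle measured from the x-axis) and the pixel scale are illustrative assumptions of this sketch:

```python
import numpy as np

def radial_profile(image, centre, ellip, pa_rad, step_kpc=2.0, kpc_per_pix=0.5):
    """Average pixel values in elliptical annuli of semi-major axis width
    step_kpc. `ellip` = 1 - b/a; `pa_rad` is the position angle of the
    major axis measured from the x-axis (a simplifying convention here)."""
    ny, nx = image.shape
    yy, xx = np.indices((ny, nx))
    dx, dy = xx - centre[1], yy - centre[0]
    # Rotate into the ellipse frame and compute the elliptical radius,
    # i.e. the semi-major axis of the isophote through each pixel.
    xr = dx * np.cos(pa_rad) + dy * np.sin(pa_rad)
    yr = -dx * np.sin(pa_rad) + dy * np.cos(pa_rad)
    sma = np.hypot(xr, yr / (1.0 - ellip)) * kpc_per_pix
    edges = np.arange(0.0, sma.max() + step_kpc, step_kpc)
    prof = [image[(sma >= lo) & (sma < hi)].mean()
            for lo, hi in zip(edges[:-1], edges[1:])]
    return 0.5 * (edges[:-1] + edges[1:]), np.array(prof)
```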
Fig.~\ref{fig:radial_profile_combine} shows the radial profiles of $\Sigma_{\rm SFR}(r)$ (left panel), $\Sigma_{*}(r)$ (middle panel) and sSFR$(r)$ (right panel). Blue squares show the individual radial profiles of the sample galaxies, while the average radial profile is shown by green circles. The radial profiles are considered up to a semi-major axis of $17$ kpc. Before calculating the average radial profile, each individual profile is extrapolated if it does not reach a semi-major axis of $17$ kpc. The extrapolation is done by fitting the radial profile with an exponential function using a least-squares fitting method. The fitting is done over the outer region with semi-major axis larger than $3$ kpc to avoid the effect of a bulge component. The extrapolated parts of the radial profiles are shown with black lines. The error bars of the average radial profiles are calculated using the standard error of the mean.
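The exponential extrapolation of a truncated profile can be sketched as below; performing the least-squares fit in log space is a simplification assumed in this sketch:

```python
import numpy as np

def extrapolate_profile(r, prof, r_fit_min=3.0, r_max=17.0, step=2.0):
    """Extend a radial profile out to r_max by fitting an exponential,
    prof(r) = A * exp(-r / h), to the points with r > r_fit_min
    (fit done by least squares in log space, a simplification here)."""
    sel = r > r_fit_min
    slope, intercept = np.polyfit(r[sel], np.log(prof[sel]), 1)
    r_ext = np.arange(r[-1] + step, r_max + step / 2, step)
    return r_ext, np.exp(intercept + slope * r_ext)
```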
On average, the $\Sigma_{*}(r)$ and $\Sigma_{\rm SFR}(r)$ of $z\sim 1$ massive disc galaxies have a peak at the centre and gradually decline toward the outskirt, while the average sSFR$(r)$ is almost flat over the entire region. The flat average sSFR$(r)$ agrees with the linear form of the spatially resolved SFMS. We do not see a significant central suppression of sSFR in the average sSFR$(r)$, though it is expected from the flattening trend of the spatially resolved SFMS at high $\Sigma_{*}$.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{radial_profile_allgals_SFRSMsSFR_combine_noAGN.png}
\caption{Radial profiles of $\Sigma_{\rm SFR}(r)$ (left panel), $\Sigma_{*}(r)$ (middle panel) and sSFR$(r)$ (right panel) of all 152 sample galaxies, shown by blue squares with lines. The radial profiles are truncated at a semi-major axis of $17$ kpc. The black lines are the extrapolated parts of the radial profiles, which are calculated by fitting an exponential function to the region with semi-major axis larger than $3$ kpc using a least-squares fitting method. The average radial profiles are shown by green circles, with error bars calculated using the standard error of the mean. \label{fig:radial_profile_combine}}
\end{figure*}
Fig.~\ref{fig:radial_profile_functionSFMS} shows the radial profiles of $\Sigma_{\rm SFR}(r)$ (left panel), $\Sigma_{*}(r)$ (middle panel) and sSFR$(r)$ (right panel) as a function of the distance from the global SFMS, i.e. for the z1-$\Delta$MS1, z1-$\Delta$MS2 and z1-$\Delta$MS3 groups. On average, the z1-$\Delta$MS1 galaxies have a higher $\Sigma_{\rm SFR}$ at all radii than the z1-$\Delta$MS2 galaxies, and the z1-$\Delta$MS2 galaxies have a higher $\Sigma_{\rm SFR}$ at all radii than the z1-$\Delta$MS3 galaxies. The z1-$\Delta$MS2 and z1-$\Delta$MS3 galaxies have slightly more concentrated $\Sigma_{*}(r)$ profiles, with a steeper increase toward the central region, than the z1-$\Delta$MS1 galaxies. The sSFR$(r)$ profiles of the three groups show a systematic difference in the central region, while the difference is smaller in the outskirt: the z1-$\Delta$MS1 and z1-$\Delta$MS2 differ in sSFR by 0.61 dex at a semi-major axis of 1 kpc and by 0.10 dex at 17 kpc, while the z1-$\Delta$MS1 and z1-$\Delta$MS3 differ by 1.21 dex and 0.35 dex at the same semi-major axes, respectively. A sharp central suppression of the sSFR$(r)$ is observed for the z1-$\Delta$MS2 and z1-$\Delta$MS3 galaxies, while a flat sSFR$(r)$ profile is observed for the z1-$\Delta$MS1 galaxies. These sSFR$(r)$ profiles correlate with the spatially resolved SFMS of the corresponding groups: the flat sSFR$(r)$ of the z1-$\Delta$MS1 agrees with the linearly increasing spatially resolved SFMS of that group, while the central suppression in the sSFR$(r)$ of the z1-$\Delta$MS2 and z1-$\Delta$MS3 agrees with the flattening in the high $\Sigma_{*}$ region of the spatially resolved SFMS of those groups.
To check whether the central suppression in the sSFR$(r)$ profiles of the z1-$\Delta$MS2 and z1-$\Delta$MS3 is real and not caused by a bias toward lower sSFR due to only a few quiescent galaxies, we plot histograms of the sSFR distribution in the central ($r\leqslant 4$ kpc), middle ($4<r\leqslant 10$ kpc) and outskirt ($r>10$ kpc) regions of the z1-$\Delta$MS1 (blue), z1-$\Delta$MS2 (green) and z1-$\Delta$MS3 (red) in the right panel of Fig.~\ref{fig:radial_profile_functionSFMS}. The histograms show that the sSFRs in the central regions of the z1-$\Delta$MS2 and z1-$\Delta$MS3 are systematically lower than that in the central region of the z1-$\Delta$MS1. They also show that the sSFRs in all three regions of the z1-$\Delta$MS1 peak at almost the same sSFR of $\sim 10^{-9.2}\text{yr}^{-1}$, which agrees with the flat sSFR$(r)$ profile of the z1-$\Delta$MS1. Given that dust extinction increases toward the central region in massive galaxies \citep[see e.g.][]{nelson2016b,tacchella2018}, one may worry that the centrally suppressed sSFR$(r)$ is actually caused by red dusty star-forming regions being mistakenly recognized as old and passive systems. To check whether the central regions of the z1-$\Delta$MS2 and z1-$\Delta$MS3 are indeed passive, we have calculated the $U$, $V$, and $J$ magnitudes of the galaxies' pixels located in the central, middle, and outskirt regions and placed them on the $UVJ$ diagram. We find systematically older and more passive SEDs for the pixels located in the central regions of the z1-$\Delta$MS2 and z1-$\Delta$MS3. We discuss this issue in appendix B.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{radial_profile_functionSFMS_combine_noAGN_new.png}
\caption{Radial profiles of $\Sigma_{\rm SFR}(r)$ (left panel), $\Sigma_{*}(r)$ (middle panel) and sSFR$(r)$ (right panel) as a function of distance from the global SFMS, namely for the z1-$\Delta$MS1, z1-$\Delta$MS2 and z1-$\Delta$MS3 groups. The radial profiles for the z1-$\Delta$MS1, z1-$\Delta$MS2 and z1-$\Delta$MS3 are shown with blue circles, green squares and red diamonds connected with lines, respectively. Histograms in the right panel show the sSFR distribution in the central ($r\leqslant 4$ kpc), middle ($4<r\leqslant 10$ kpc) and outskirts ($r>10$ kpc) regions of the z1-$\Delta$MS1 (blue), z1-$\Delta$MS2 (green) and z1-$\Delta$MS3 (red) groups. \label{fig:radial_profile_functionSFMS}}
\end{figure*}
To examine the morphological difference between the z1-$\Delta$MS1, z1-$\Delta$MS2 and z1-$\Delta$MS3, we calculate the S\'ersic index and concentration index ($R_{90}/R_{50}$) of each galaxy in those groups and check the distributions of those properties for the corresponding groups. Fig.~\ref{fig:hist_serscidx_concidx} shows the histograms of the distributions of the S\'ersic indices ($n$, top panel) and concentration indices ($R_{90}/R_{50}$, bottom panel). $n$ is calculated by fitting $\Sigma_{*}(r)$ with a S\'ersic profile, $\Sigma_{*}(r)=\Sigma_{*}(r_{0})\exp\left(-\left(\frac{r}{h}\right)^{1/n}\right)$. First, an exponential-function ($n=1$) fit is done to obtain the initial guesses for the radial scale length ($h$) and the zero point ($\Sigma_{*}(r_{0})$). Then random sets of $n$, $h$ and $\Sigma_{*}(r_{0})$ are generated within the following parameter ranges: $n\in[0.5,5]$, $\Sigma_{*}(r_{0})\in[0.1\Sigma_{*}(r_{0},n=1),10\Sigma_{*}(r_{0},n=1)]$ and $h\in[1,10h_{n=1}]$. The best-fitting S\'ersic profile is determined based on the lowest $\chi^{2}$ value. The $R_{50}$ and $R_{90}$ in the concentration index are the semi-major axes that enclose 50\% and 90\% of the total $M_{*}$, respectively. In both panels, the histograms with blue solid, green dashed and red dash-dotted lines represent the z1-$\Delta$MS1, z1-$\Delta$MS2 and z1-$\Delta$MS3, respectively. The histograms indicate that the z1-$\Delta$MS3 galaxies typically have higher S\'ersic and concentration indices (and also a higher bulge-to-total stellar mass ratio, B/T) than the z1-$\Delta$MS1 galaxies, while the z1-$\Delta$MS2 galaxies have both quantities intermediate between those two groups.
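The random-search fitting described above can be sketched as follows. This is a minimal illustration, not the actual pipeline: the parameter ranges follow the text, while the synthetic test profile, the number of random trials and all function names are our own assumptions.

```python
import numpy as np

def sersic(r, sigma0, h, n):
    """Sersic profile: Sigma_*(r) = Sigma_*(r_0) * exp(-(r/h)^(1/n))."""
    return sigma0 * np.exp(-((r / h) ** (1.0 / n)))

def fit_sersic(r, sigma_obs, sigma0_n1, h_n1, n_trials=20000, seed=0):
    """Random search over (n, Sigma_*(r_0), h) around the n=1 initial
    guess, keeping the parameter set with the lowest chi^2."""
    rng = np.random.default_rng(seed)
    n = rng.uniform(0.5, 5.0, n_trials)
    sigma0 = rng.uniform(0.1 * sigma0_n1, 10.0 * sigma0_n1, n_trials)
    h = rng.uniform(1.0, 10.0 * h_n1, n_trials)
    chi2 = np.array([np.sum((sigma_obs - sersic(r, s, hh, nn)) ** 2)
                     for s, hh, nn in zip(sigma0, h, n)])
    best = int(np.argmin(chi2))
    return n[best], sigma0[best], h[best]

# Hypothetical synthetic profile with n = 2, h = 2 kpc
r = np.linspace(0.5, 15.0, 30)          # semi-major axis in kpc
truth = sersic(r, 5e9, 2.0, 2.0)        # M_sun kpc^-2
n_fit, sigma0_fit, h_fit = fit_sersic(r, truth, sigma0_n1=5e9, h_n1=2.0)
```

A denser random sampling (or a proper optimizer) would sharpen the recovered parameters; the sketch only shows the grid-free search logic.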
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{histogram_sersicindex_SFGVQS_noAGN.png}
\includegraphics[width=0.5\textwidth]{histogram_concentration_index_SFGVQS_noAGN.png}
\caption{Histograms of the distributions of the S\'ersic indices ($n$, top panel) and concentration indices ($R_{90}/R_{50}$, bottom panel) of the z1-$\Delta$MS1 (blue solid line), z1-$\Delta$MS2 (green dashed line), and z1-$\Delta$MS3 (red dash-dotted line). \label{fig:hist_serscidx_concidx}}
\end{figure}
Those results suggest the existence of a bulge component in the z1-$\Delta$MS2 and z1-$\Delta$MS3 galaxies, while the z1-$\Delta$MS1 galaxies are disc-dominated. The flat average sSFR$(r)$ profile of the z1-$\Delta$MS1 suggests that those galaxies are still building their stellar mass in the outskirts as well as in the central region. In \citet{abdurrouf2017}, we found that the average sSFR$(r)$ radial profile of the entire $z\sim 0$ sample is centrally suppressed. These observational results agree with the picture of inside-out quenching, in which galaxies tend to quench their star formation activity first in the central region, with the quenching then gradually moving toward the outskirts. Evidence for inside-out quenching has also been reported in previous studies, e.g. \citet{tacchella2015}, \citet{gonzalez2016}, \citet{belfiore2018}, \citet{tacchella2018}.
\section{Discussion}
\label{sec:discussion}
\subsection{Spatially resolved SFMS relations of $z\sim 0$ and $z\sim 1$ samples}
\label{sec:compare_resolved_SFMS}
To gain insight into the cosmological evolution of the spatially resolved SFMS, we compare the spatially resolved SFMS of the $z\sim 1$ massive disc galaxies with that of the $z\sim 0$ massive disc galaxies. The $z\sim 0$ sample from \citet{abdurrouf2017} is based on $93$ massive face-on disc galaxies at $0.01 < z < 0.02$. Although our selection criteria for the two samples do not guarantee that the $z\sim 0$ sample is the descendant of the $z\sim 1$ sample, some of the galaxies in the $z\sim 1$ and $z\sim 0$ samples are likely to lie on the same evolutionary path, i.e. to be progenitor and descendant. The comoving volumes covered by the $z\sim 1$ and $z\sim 0$ samples are roughly similar ($4.5\times 10^{5} \text{ Mpc}^{3}$ and $4.3\times 10^{5} \text{ Mpc}^{3}$ for the $z\sim 1$ and $z\sim 0$ samples, respectively). However, the median $M_{*}$ of the $z\sim 1$ sample ($7.8\times 10^{10}M_{\odot}$) is systematically higher than that of the $z\sim 0$ sample ($3.5\times 10^{10}M_{\odot}$), and there are $50$ massive disc galaxies ($\log (M_{*}/M_{\odot})\geqslant 11.0$) in the $z\sim 1$ sample, while there are only $6$ such massive disc galaxies in the $z\sim 0$ sample. Some of the massive disc galaxies at $z\sim 1$ are thought to evolve into elliptical galaxies at $z\sim 0$. The comoving number density ($N$) and stellar mass density ($\rho$) of the $\log (M_{*}/M_{\odot})\geqslant 11.0$ disc galaxies in the $z\sim 1$ sample are comparable to those of the elliptical galaxies at $0.01 < z < 0.02$ (taken from the MPA-JHU catalog). The comoving number density and stellar mass density of the $z\sim 1$ massive disc galaxies are $\log (N[\text{Mpc}^{-3}])=-3.9$ and $\log (\rho[M_{\odot}\text{Mpc}^{-3}])=7.3$, while those of the local elliptical galaxies are $\log (N[\text{Mpc}^{-3}])=-4.4$ and $\log (\rho[M_{\odot}\text{Mpc}^{-3}])=6.7$, respectively.
We compare the spatially resolved SFMS relations of the six groups in the $z\sim 1$ and $z\sim 0$ samples (z1-$\Delta$MS1, z1-$\Delta$MS2, z1-$\Delta$MS3, z0-$\Delta$MS1, z0-$\Delta$MS2 and z0-$\Delta$MS3). The six groups are defined based on the distances from the global SFMS at each redshift. We should emphasize that the classification is based on the ordering of sSFR at each redshift. In Fig.~\ref{fig:evolution_mode_profile}, the six spatially resolved SFMS relations derived from the $z\sim 0$ and $z\sim 1$ samples are compared. The shift toward a higher $\Sigma_{*}$ range for the $z\sim 1$ sample compared to the $z\sim 0$ sample is caused by the fact that the $z\sim 1$ sample is systematically more massive. An obvious feature in Fig.~\ref{fig:evolution_mode_profile} is that the difference in $\Sigma_{\rm SFR}$ at fixed $\Sigma_{*}$ between the two spatially resolved SFMS relations is smaller in the low-$\Sigma_{*}$ region than in the high-$\Sigma_{*}$ region. If we quantitatively compare the spatially resolved SFMS of the galaxies in the highest-sSFR groups, i.e. z1-$\Delta$MS1 and z0-$\Delta$MS1, the sSFR difference is $0.4$ dex at $\log(\Sigma_{*}[M_{\odot} \text{kpc}^{-2}])=7.0$, while it is $1.5$ dex at $\log(\Sigma_{*}[M_{\odot} \text{kpc}^{-2}])=8.5$. This trend suggests that the star formation activity in the disc region (represented by low $\Sigma_{*}$ values) shows less suppression from $z\sim 1$ to $z\sim 0$ than the star formation activity in the central region (represented by high $\Sigma_{*}$ values). The trend agrees with the inside-out quenching scenario \citep[e.g.][]{tacchella2015,gonzalez2016,belfiore2018,tacchella2018}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{mode_profile_SFMS_comball.png}
\caption{Six spatially resolved SFMS relations derived from the $z\sim 1$ sample: z1-$\Delta$MS1 (blue solid triangles), z1-$\Delta$MS2 (green solid pentagons), and z1-$\Delta$MS3 (red solid diamonds) and the $z\sim 0$ sample: z0-$\Delta$MS1 (dark blue open circles), z0-$\Delta$MS2 (dark green open squares), and z0-$\Delta$MS3 (dark red open triangles). The $z\sim 0$ sample from \citet{abdurrouf2017} is based on $93$ massive face-on disc galaxies at $0.01\leqslant z \leqslant 0.02$. For clarity, blue solid triangles and dark blue open circles are shifted by $0.05$ dex to the left, while red solid diamonds and dark red open triangles are shifted by $0.05$ dex to the right from their actual positions. \label{fig:evolution_mode_profile}}
\end{figure}
\subsection{Empirical model for the evolution of \texorpdfstring{$\Sigma_{*}(r)$}{Lg}, \texorpdfstring{$\Sigma_{\rm SFR}(r)$}{Lg} and sSFR\texorpdfstring{$(r)$}{Lg} radial profiles at \texorpdfstring{$0\lesssim z \lesssim1$}{Lg}}
\label{sec:empirical_model}
We try to construct an empirical model for the evolution of the radial profiles of $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$ at $0\lesssim z\lesssim 1$ by defining possible pairs of progenitor and descendant galaxies from the $z\sim 1$ and $z\sim 0$ samples, based on their locations on the global SFMS. We define the pairs as follows: (1) we start from a galaxy at $z=2$ that has sSFR and $M_{*}$ within $\pm 0.3$ dex of the global SFMS relation of \citet{speagle2014} at that epoch, and use that sSFR and $M_{*}$ as the starting point for drawing a galaxy evolutionary track in the $\log (M_{*})$-$\log (\text{sSFR})$ plane; (2) the star formation history (SFH) of the galaxy is assumed to take the exponentially declining form, $\text{SFR}(t)=\text{SFR}(t_{0})e^{-\Delta t/\tau}$, with $t=t_{0}+\Delta t$ and $t_{0}$ the age of the universe at $z=2$; (3) we choose the set of model parameters ($M_{*}(t_{0})$, sSFR$(t_{0})$ and $\tau$) that selects as many galaxies as possible from the $z\sim 1$ and $z\sim 0$ samples, so that the model evolutionary track is a plausible evolutionary path connecting the two samples.
Using the average $\Sigma_{\rm SFR}(r)$ of the progenitor and descendant samples selected under the above assumptions, we can infer the radially resolved SFH, from which we can follow the radial stellar mass buildup during the epoch of $0\lesssim z \lesssim 1$. We consider three different evolutionary paths: two with long and short $\tau$, and one that simply selects galaxies in the same mass range. To make a model evolutionary track, we assume a certain range for each parameter, which produces a broad evolutionary track, instead of assuming a single value for each model parameter, which would only produce a single-line track. The two models with exponentially declining SFH are: (a) a model with parameter ranges of $\log (M_{*}(t_{0}))=[9.7:9.9]$, $\log (\text{sSFR}(t_{0}))=[-8.6:-8.4]$ and $\tau = [4.0:6.0]$, hereafter called model A; and (b) a model with $\log (M_{*}(t_{0}))=[10.2:10.3]$, $\log (\text{sSFR}(t_{0}))=[-8.7:-8.5]$ and $\tau = [1.3:2.5]$, hereafter called model B. The $M_{*}$, sSFR and $\tau$ are in units of $M_{\odot}$, $\text{yr}^{-1}$ and Gyr, respectively. The third model, called model C, is made without any assumption on the SFH and only connects galaxies in the stellar mass range of $10.85\leqslant \log (M_{*}/M_{\odot}) \leqslant 11.2$.
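The evolutionary-track construction above can be sketched numerically. In this minimal sketch we take the mid-range parameters of model A and neglect stellar mass loss/return (a simplifying assumption of ours), so that the new stellar mass formed up to time $t_{0}+\Delta t$ is $\tau\,\mathrm{SFR}(t_{0})(1-e^{-\Delta t/\tau})$; the function name and output points are illustrative only.

```python
import numpy as np

def track(logM0, logsSFR0, tau_gyr, dt_gyr):
    """Position in the log(M_*)-log(sSFR) plane after dt_gyr Gyr,
    for SFR(t) = SFR(t0) exp(-dt/tau); mass return is neglected."""
    M0 = 10.0 ** logM0                         # M_sun at z = 2
    sfr0 = (10.0 ** logsSFR0) * M0             # M_sun / yr
    tau_yr, dt_yr = tau_gyr * 1e9, dt_gyr * 1e9
    # Mass formed by the declining SFH plus the initial mass
    M = M0 + tau_yr * sfr0 * (1.0 - np.exp(-dt_yr / tau_yr))
    sfr = sfr0 * np.exp(-dt_yr / tau_yr)
    return np.log10(M), np.log10(sfr / M)

# Model A mid-range values: log M_*(t0) = 9.8, log sSFR(t0) = -8.5, tau = 5 Gyr
for dt in (0.0, 2.5, 5.0, 7.5):                # Gyr elapsed since z = 2
    logM, logsSFR = track(9.8, -8.5, 5.0, dt)
```

With these mid-range inputs the track passes through $\log(M_{*}/M_{\odot})\sim 10.6$ a few Gyr after $z=2$, consistent with the $10.5$--$10.9$ range of the progenitors selected by model A.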
Fig.~\ref{fig:estimate_evolution_galaxies_localdesc_highzprog} shows the model evolutionary tracks and the selected progenitor and descendant galaxies for models A (left panel), B (middle panel) and C (right panel). The black lines represent the model evolutionary tracks when the model parameters are taken at the middle of their ranges, while the vertical and horizontal gray dashed lines at each redshift represent the ranges of sSFR and $M_{*}$ spanned by the model parameter ranges. The vertical 'error bar' is extended by $0.3$ dex above and below its actual length to make it roughly as wide as the scatter of the global SFMS (which is expected to account for fluctuations of a real galaxy evolutionary path around the simple exponentially declining form), while the horizontal 'error bar' is kept at its original length. The extra scatter in the vertical direction also accounts for the larger uncertainty of the sSFR compared to the $M_{*}$ of the sample galaxies. The progenitors (descendants) are defined as the galaxies from the $z\sim 1$ ($z\sim 0$) sample that are enclosed within the 'box' given by the vertical and horizontal 'error bars', evaluated at the redshifts of the galaxies. Three green boxes show the ranges in sSFR and $M_{*}$ given by the horizontal and vertical 'error bars' of the model evolutionary track calculated at $z=1.8$, $0.8$, and $0$. The numbers of progenitors (descendants) selected using models A, B and C are $20 (14)$, $57 (6)$ and $71 (14)$, respectively. As expected from its larger value of $\tau$, the sSFR of model A declines more slowly than that of model B. The purple dashed line and purple shaded region represent the global SFMS relation at $z=2$ and the $\pm 0.3$ dex scatter around it, respectively. The black dashed lines represent the global SFMS relations at $z=1.2$ and $z=0.015$.
\begin{figure*}
\centering
\includegraphics[width=0.32\textwidth]{model_track_longtau.png}
\includegraphics[width=0.32\textwidth]{model_track_shorttau.png}
\includegraphics[width=0.32\textwidth]{model_track_extreme.png}
\caption{Left panel: the evolutionary track and the selected progenitors and descendants of model A, which has $\log (M_{*}(t_{0}))=[9.7:9.9]$, $\log (\text{sSFR}(t_{0}))=[-8.6:-8.4]$ and $\tau = [4.0:6.0]$. Middle panel: the evolutionary track and the selected progenitors and descendants of model B, which has $\log (M_{*}(t_{0}))=[10.2:10.3]$, $\log (\text{sSFR}(t_{0}))=[-8.7:-8.5]$ and $\tau = [1.3:2.5]$. The $M_{*}$, sSFR and $\tau$ are in units of $M_{\odot}$, $\text{yr}^{-1}$ and Gyr, respectively. The black lines in both panels represent the model evolutionary tracks when the middle value of each model parameter range is used. The gray dashed lines show the range of sSFR and $M_{*}$ when the model parameter ranges are considered. Three green boxes show the ranges of sSFR and $M_{*}$ given by the horizontal and vertical 'error bars' of the model evolutionary track calculated at $z=1.8$, $0.8$, and $0$. Right panel: the selected progenitors and descendants of model C, which is made without any assumption on the SFH, except the mass range of $10.85\leqslant \log (M_{*}/M_{\odot}) \leqslant 11.2$. The purple dashed line and purple shaded area represent the global SFMS relation at $z=2$ and the $\pm 0.3$ dex scatter around it, respectively. The black dashed lines represent the global SFMS relations at $z=1.2$ and $z=0.015$.
\label{fig:estimate_evolution_galaxies_localdesc_highzprog}}
\end{figure*}
Fig.~\ref{fig:radial_profiles_prog_desc_combined} shows the average radial profiles of the progenitor (blue circles with solid line) and descendant (red open squares with dashed line) galaxies selected using the evolutionary tracks of models A (first row), B (second row) and C (third row). The average radial profiles of $\Sigma_{\rm SFR}(r)$ and sSFR$(r)$ show that the star formation activity declines at all radii from $z\sim 1$ to $z\sim 0$, with a larger decline in the central region than in the outskirts. The stellar mass buildup in model A shows a larger stellar mass increase at all radii than that in model B, as expected from the larger $\tau$ of model A. No such radial stellar mass increase is found in model C.
Given the radial decrease of $\Sigma_{\rm SFR}(r)$ from $z\sim 1$ to $z\sim 0$, we derive an empirical model for the evolution of the $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$. Here, we assume exponentially declining SFH at each radius in the form
\begin{equation}
\Sigma_{\rm SFR}(r,t) = \Sigma_{\rm SFR}(r,t_{0}) e^{-\Delta t/\tau(r)}
\label{eq:radial_SFH}
\end{equation}
where $t=t_{0}+\Delta t$ and $t_{0}$ is the age of the universe at the median redshift of the progenitors. The median redshifts of the progenitors (descendants) selected by models A, B and C are $1.064\pm 0.026$ ($0.016\pm 0.001$), $1.133\pm 0.043$ ($0.017\pm 0.002$) and $1.216\pm 0.044$ ($0.017\pm 0.002$), respectively. The uncertainty of the median redshift (calculated using the bootstrap resampling method) is used in the later analysis for calculating the uncertainties of model properties, such as the radial profiles of the SFH, $\Sigma_{*}(r)$ and sSFR$(r)$.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{radial_profile_prog_desc_longtau.png}
\includegraphics[width=0.95\textwidth]{radial_profile_prog_desc_shorttau.png}
\includegraphics[width=0.95\textwidth]{radial_profile_prog_desc_extreme.png}
\caption{Average $\Sigma_{\rm SFR}(r)$ (left panel in each row), $\Sigma_{*}(r)$ (middle panel in each row) and sSFR$(r)$ (right panel in each row) radial profiles of the progenitors (blue circles with solid line) and descendants (red open squares with dashed line) selected by model A (first row), model B (second row) and model C (third row). \label{fig:radial_profiles_prog_desc_combined}}
\end{figure*}
Using Eq.~\ref{eq:radial_SFH} with $\Delta t$ as the time difference between the median redshifts of the progenitors and descendants ($7.74\pm 0.10$, $7.97\pm 0.17$ and $8.25\pm 0.12$ Gyr for models A, B and C, respectively), we calculate $\tau$ at each radius. The results for all three models are shown in the left panel in each row of Fig.~\ref{fig:radial_profile_predicted_empiricalmodel}. $\tau(r)$ increases with radius in all three models. The error bar at each radius is the $1 \sigma$ uncertainty calculated through a Monte Carlo method, in which $\tau(r)$ is recalculated by randomly varying the average $\Sigma_{\rm SFR}(r)$ of the progenitors and descendants and $\Delta t$ within their uncertainties, assuming Gaussian distributions. The uncertainty of $\Delta t$ is itself calculated with a Monte Carlo method, by randomly varying the median redshifts of the progenitors and descendants within their uncertainties, again assuming Gaussian distributions. The red line in the $\tau(r)$ plot shows the result of an exponential-function fit and the red shaded area shows its $1\sigma$ uncertainty; they are calculated using a Bayesian statistical method. The middle and right panels in each row show the $\Sigma_{*}(r)$ and sSFR$(r)$ predicted by the model at the median redshift of the descendants (shown with a black line). The $\Sigma_{*}(r)$ and sSFR$(r)$ predicted by models A and B are consistent with the average radial profiles of the descendants, while those of model C show a large discrepancy from the observed radial profiles at $z\sim 0$. This consistency suggests that models A and B are possible evolutionary models describing the radial stellar mass accumulation in massive disc galaxies: a simple exponentially declining radial SFH can explain the stellar mass buildup by star formation activity in these galaxies.
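Inverting Eq.~\ref{eq:radial_SFH} gives $\tau(r)=\Delta t/\ln[\Sigma_{\rm SFR,prog}(r)/\Sigma_{\rm SFR,desc}(r)]$, and the Monte Carlo error propagation described above can be sketched as follows. The surface-density values and their errors here are hypothetical, chosen only so that the central $\tau$ lands near the fitted value of $\approx 1.66$ Gyr; only $\Delta t = 7.74 \pm 0.10$ Gyr (model A) is taken from the text.

```python
import numpy as np

def tau_radial(sfr_prog, sfr_desc, dt_gyr):
    """tau(r) from inverting Sigma_SFR(r,t) = Sigma_SFR(r,t0) exp(-dt/tau)."""
    return dt_gyr / np.log(sfr_prog / sfr_desc)

def tau_sigma(sfr_prog, err_prog, sfr_desc, err_desc, dt, err_dt,
              n_mc=5000, seed=1):
    """1-sigma uncertainty of tau(r): redraw every input from a Gaussian
    within its quoted error and take the standard deviation of tau."""
    rng = np.random.default_rng(seed)
    draws = tau_radial(rng.normal(sfr_prog, err_prog, n_mc),
                       rng.normal(sfr_desc, err_desc, n_mc),
                       rng.normal(dt, err_dt, n_mc))
    return np.std(draws)

# Hypothetical central values: Sigma_SFR declines by a factor of ~100
# over dt = 7.74 Gyr, the model A time baseline quoted in the text
tau_c = tau_radial(0.20, 0.002, 7.74)          # ~1.7 Gyr
err_c = tau_sigma(0.20, 0.02, 0.002, 0.0002, 7.74, 0.10)
```

The same two functions applied radius by radius would reproduce the $\tau(r)$ points and error bars of Fig.~\ref{fig:radial_profile_predicted_empiricalmodel}.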
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{radial_profile_predicted_empiricalmodel_longtau.png}
\includegraphics[width=0.95\textwidth]{radial_profile_predicted_empiricalmodel_shorttau.png}
\includegraphics[width=0.95\textwidth]{radial_profile_predicted_empiricalmodel_extreme.png}
\caption{Comparison between the average $\Sigma_{*}(r)$ and sSFR$(r)$ radial profiles of the descendant galaxies and the radial profiles predicted by the empirical models at $z\sim 0$. The first, second, and third rows show the radial profile of $\tau(r)$ (left panel in each row), the observed and predicted radial profiles of $\Sigma_{*}(r)$ (middle panel in each row) and the observed and predicted radial profiles of sSFR$(r)$ (right panel in each row) for models A, B and C, respectively. The black diamonds in the left panel in each row show the $\tau(r)$ of each model, while the red line and the red-shaded region around it show the best-fitting exponential function of $\tau(r)$ and its $1\sigma$ uncertainty, respectively. The $\Sigma_{*}(r)$ and sSFR$(r)$ for the progenitors, descendants, and model prediction are shown with blue closed circles with solid line, red open squares with dashed line, and black diamonds with solid line, respectively. \label{fig:radial_profile_predicted_empiricalmodel} }
\end{figure*}
Mathematical descriptions of the evolution of the $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$ radial profiles are constructed based on model A. First, the average $\Sigma_{\rm SFR}(r)$ and $\Sigma_{*}(r)$ radial profiles of the progenitors are fitted with an exponential function and a S\'ersic profile, respectively, and the best-fitting profiles are used as the initial condition from which the radial profiles at subsequent times are calculated. The fitting results are
\begin{equation}
\Sigma_{\rm SFR}(r,t_{0}) = (0.21 \pm 0.03)e^{-r/(4.18 \pm 0.24)},
\label{eq:fit_SFR_profile}
\end{equation}
\begin{equation}
\Sigma_{*}(r,t_{0}) = (8.43\times 10^{9} \pm 4.43\times 10^{8})e^{-\left(\frac{r}{0.35 \pm 0.02}\right)^{\left(\frac{1}{1.96 \pm 0.03}\right)}}.
\label{eq:fit_SM_profile}
\end{equation}
The time scale of star formation at each radius is determined by fitting an exponential function to $\tau(r)$:
\begin{equation}
\tau(r) = (1.66 \pm 0.22)e^{r/\left(9.32 \pm 2.21\right)}.
\label{eq:fit_tau_profile}
\end{equation}
The best-fitting exponential function is shown with a red line in the left panel of the first row of Fig.~\ref{fig:radial_profile_predicted_empiricalmodel}. The mathematical prescriptions for the evolution of the radial profiles are as follows:
\begin{equation}
\Sigma_{\rm SFR}(r,t) = \Sigma_{\rm SFR}(r,t_{0}) e^{-(t-t_{0})/\tau(r)},
\end{equation}
\begin{equation}
\Sigma_{*}(r,t) = \Sigma_{*}(r,t_{0}) + \tau(r) \Sigma_{\rm SFR}(r,t_{0}) \left(1 - e^{-(t-t_{0})/\tau(r)}\right),
\label{eq:SM_empirical_model}
\end{equation}
where $t_{0}$ is the age of the universe at the median redshift of the progenitors and $t$ is the cosmic time within $0\lesssim z \lesssim 1$. The $\Sigma_{*}(r)$ and sSFR$(r)$ at $z=0.8$, $0.6$, $0.4$ and $0.2$ calculated from the above empirical model are shown as gray lines in the middle and right panels of the first row in Fig.~\ref{fig:radial_profile_predicted_empiricalmodel}. The empirical model for the evolution of $\Sigma_{*}(r)$ shows stellar mass buildup in an inside-out manner. Such inside-out stellar mass buildup in galaxies has also been found in previous studies, e.g. \citet{vandokkum2010, nelson2016, morishita2015, tacchella2015, tadaki2017}.
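For completeness, Eq.~\ref{eq:SM_empirical_model} follows from integrating the assumed exponentially declining SFH, under the simplifying assumption (implicit throughout this model) that the mass returned to the interstellar medium is neglected:
\[
\Sigma_{*}(r,t) = \Sigma_{*}(r,t_{0}) + \int_{t_{0}}^{t} \Sigma_{\rm SFR}(r,t_{0})\, e^{-(t'-t_{0})/\tau(r)}\, \mathrm{d}t' = \Sigma_{*}(r,t_{0}) + \tau(r)\, \Sigma_{\rm SFR}(r,t_{0}) \left(1 - e^{-(t-t_{0})/\tau(r)}\right).
\]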
We check the consistency between the empirical model of the $\Sigma_{\rm SFR}(r,t)$ and $\Sigma_{*}(r,t)$ radial profiles and the spatially resolved SFMS at $z\sim 0$ and $z\sim 1$. Fig.~\ref{fig:SFMS_from_radial_profile} shows the spatially resolved SFMS relations at redshift intervals of $0.11$ over $0\leqslant z \leqslant 1.1$ (black circles) constructed from the empirical model of the radial profiles. The red lines represent the best-fitting second-order polynomial functions to the spatially resolved SFMS constructed from the empirical model. The blue triangles and green squares represent the observed spatially resolved SFMS relations of the z1-$\Delta$MS2 and z0-$\Delta$MS2 galaxies, respectively. The observed spatially resolved SFMS of those two groups are used for the comparison because a large fraction of the progenitor and descendant galaxies belong to those groups. The spatially resolved SFMS relations at $z=1.1$ and $z=0$ predicted by the empirical model agree with the observed spatially resolved SFMS of z1-$\Delta$MS2 and z0-$\Delta$MS2, respectively.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{spatially_resolved_SFMS_from_radial_profile_prog_desc.png}
\caption{Evolution of the spatially resolved SFMS relation as a function of redshift inferred by the empirical model of $\Sigma_{\rm SFR}(r)$ and $\Sigma_{*}(r)$ radial profiles. The spatially resolved SFMS relations at $0.11$ redshift steps between $z=0$ and $z=1.1$ constructed using the empirical model are shown with black circles. The red line on each spatially resolved SFMS represents best-fitting second order polynomial function. The blue triangles with solid line and green squares with dashed line represent observed spatially resolved SFMS of z1-$\Delta$MS2 and z0-$\Delta$MS2, respectively. \label{fig:SFMS_from_radial_profile}}
\end{figure}
\subsection{The radial quenching timescale derived from the empirical model}
In this section, we estimate the quenching timescale at each radius to quantitatively examine the inside-out quenching process of the sample galaxies. Using the empirical model derived in the previous section, we derive the radial profile of the quenching timescale ($t_{\rm quench}(r)$). The quenching timescale is defined as the time needed for the sSFR at each radius ($\Sigma_{\rm SFR}(r,t)/\Sigma_{*}(r,t)$) to reach a critical value of $10^{-10}\,\mathrm{yr}^{-1}$, which corresponds to a mass doubling time of $10$ Gyr, i.e. longer than the Hubble time at $z\gtrsim 0.5$; the same threshold is used to separate star-forming and quiescent galaxies by \citet{peng2010} and star-forming and quiescent sub-galactic regions by \citet{gonzalez2016}. The black line in Fig.~\ref{fig:radial_profile_quenching_time_longtau} shows $t_{\rm quench}(r)$ measured from $z=1.1$. The gray shaded area around the line represents the $1\sigma$ uncertainty calculated using the Monte Carlo method, randomly varying all the parameters involved in the calculation ($\Sigma_{\rm SFR}(r,t_{0})$, $\Sigma_{*}(r,t_{0})$ and $\tau(r)$) within their uncertainties assuming Gaussian distributions, and then calculating the standard deviation of $t_{\rm quench}$ at each radius.
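Under the empirical model, $t_{\rm quench}(r)$ solves $\Sigma_{\rm SFR}(r,t)/\Sigma_{*}(r,t)=10^{-10}\,\mathrm{yr}^{-1}$, which can be done by simple bisection. The sketch below uses only the central best-fitting values of Eqs.~\ref{eq:fit_SFR_profile}--\ref{eq:fit_tau_profile}, ignoring their uncertainties, and assumes (our reading of the fits) $\Sigma_{\rm SFR}$ in $M_{\odot}\,\mathrm{yr}^{-1}\,\mathrm{kpc}^{-2}$, $\Sigma_{*}$ in $M_{\odot}\,\mathrm{kpc}^{-2}$, $r$ in kpc and $\tau$ in Gyr:

```python
import numpy as np

def sigma_sfr0(r):                 # Eq. for Sigma_SFR(r, t0), central values
    return 0.21 * np.exp(-r / 4.18)

def sigma_m0(r):                   # Eq. for Sigma_*(r, t0), central values
    return 8.43e9 * np.exp(-((r / 0.35) ** (1.0 / 1.96)))

def tau(r):                        # Eq. for tau(r), in Gyr
    return 1.66 * np.exp(r / 9.32)

def ssfr(r, dt_gyr):
    """Model sSFR(r, t0 + dt) in yr^-1 (mass return neglected)."""
    tau_yr, t = tau(r) * 1e9, dt_gyr * 1e9
    sfr = sigma_sfr0(r) * np.exp(-t / tau_yr)
    m = sigma_m0(r) + tau_yr * sigma_sfr0(r) * (1.0 - np.exp(-t / tau_yr))
    return sfr / m

def t_quench(r, threshold=1e-10, t_max=30.0):
    """Bisect for the time (Gyr after t0) at which sSFR(r) = threshold;
    the model sSFR decreases monotonically, so [0, t_max] brackets it."""
    lo, hi = 0.0, t_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ssfr(r, mid) > threshold else (lo, mid)
    return 0.5 * (lo + hi)
```

With these central values, $t_{\rm quench}$ comes out of order $0.1$--$0.2$ Gyr near $r=1$ kpc and $\sim 5$ Gyr near $r=15$ kpc, the same order as the values quoted in the text (which are derived with the full uncertainty treatment).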
The inside-out quenching process is clearly shown by the $t_{\rm quench}(r)$ profile. The $t_{\rm quench}(r)$ shows that the central regions ($r\sim 1$ kpc) will be quenched within $\sim 200$ Myr after $z=1.1$, while the outskirts ($r \sim 15$ kpc) will be quenched within $\sim 5.2$ Gyr after $z=1.1$. Model A, from which the empirical model is derived, has an initial mass at $z=2$ of $9.7\leqslant \log (M_{*}/M_{\odot}) \leqslant 9.9$, and the progenitor galaxies selected using this model have $10.5<\log (M_{*}/M_{\odot})<10.9$ at $z\sim 1.1$. The blue profile in Fig.~\ref{fig:radial_profile_quenching_time_longtau} represents the $t_{\rm quench}(r)$ reported by \citet{tacchella2015} for very massive galaxies, with a stellar mass range of $10.8\leqslant \log (M_{*}/M_{\odot})<11.7$ at $z\sim 2$, which has been shifted by the cosmic time interval between $z=1.1$ and $z=2.2$. The $t_{\rm quench}(r)$ profile of \citet{tacchella2015} is derived from the average $\Sigma_{*}(r)$ and $\Sigma_{\rm SFR}(r)$ of massive galaxies at $z\sim 2$ and the average $\Sigma_{*}(r)$ of similarly massive early-type galaxies at $z\sim 0$. By assuming that the $z\sim 2$ galaxies keep forming stars with their observed $\Sigma_{\rm SFR}(r)$, they estimated the time at which each radius has to stop its star formation in order not to overshoot the $\Sigma_{*}(r)$ of the $z\sim 0$ galaxies. With this calculation, they showed that the integrated SFR at any given time follows that of typical main-sequence galaxies.
The blue $t_{\rm quench}(r)$ shows the inside-out quenching process of the $z\sim 2$ massive galaxies, whose central regions have been quenched since $z\sim 2$ and whose star formation is fully quenched over the entire region by $z\sim 1$. The $t_{\rm quench}(r)$ of the lower-mass galaxies (this work) and of the very massive galaxies \citep{tacchella2015} differ in the starting time of the quenching in the central region, while their slopes are similar. These $t_{\rm quench}(r)$ trends agree with the ``downsizing'' scenario \citep[e.g.][]{cowie1996, Juneau2005} and furthermore suggest that the ``downsizing'' phenomenon appears even in spatially resolved properties: massive galaxies tend to quench faster at all radii than low-mass galaxies. \citet{perez2013} also found an indication that the ``downsizing'' phenomenon is spatially preserved, by analyzing the spatially resolved stellar mass assembly history of local galaxies using integral field spectroscopy observations. They found that massive galaxies assemble their stellar mass faster than low-mass galaxies in both the inner and outer regions.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{radial_profile_quenching_time_longtau.png}
\caption{Radial profile of the quenching timescale ($t_{\rm quench}(r)$) from the age of the universe at $z=1.1$. Negative time corresponds to the cosmic time at $z>1.1$. The black line represents $t_{\rm quench}(r)$ obtained from this work for $10.5<\log (M_{*}/M_{\odot})<10.9$ galaxies at $z=1.1$, which corresponds to the low-mass galaxies ($9.7\lesssim \log (M_{*}/M_{\odot})\lesssim 9.9$) at $z=2$ (according to model A, see Section~\ref{sec:empirical_model}). The blue line represents $t_{\rm quench}(r)$ profile reported by \citet{tacchella2015} for massive galaxies, with stellar mass range of $10.8\leqslant \log (M_{*}/M_{\odot})<11.7$ at $z\sim 2$. \label{fig:radial_profile_quenching_time_longtau}}
\end{figure}
\section{Summary}
We investigate the relation between the local surface densities (at the $\sim 1$ kpc scale) of SFR ($\Sigma_{\rm SFR}$) and stellar mass ($\Sigma_{*}$), the so-called spatially resolved SFMS, in massive ($\log(M_{*}/M_{\odot})>10.5$) face-on disc galaxies at $0.8 < z < 1.8$ located in the GOODS-S region. We also study the radial profiles of $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$. The effect of the integrated sSFR on the spatially resolved SFMS and on the radial profiles of $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$ is discussed. By employing our previous results for $z\sim 0$ massive ($\log(M_{*}/M_{\odot})>10.5$) face-on disc galaxies \citep{abdurrouf2017}, we discuss the evolution of the spatially resolved SFMS and of the radial profiles of $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$ during the epoch of $0\lesssim z \lesssim 1$.
To derive the spatially resolved SFR and stellar mass of a galaxy at $z\sim 1$, we use the so-called pixel-to-pixel SED fitting method, which fits the spatially resolved photometric SED in each bin of a galaxy to a library of model photometric SEDs using a Bayesian statistics approach. The spatially resolved SED of a galaxy, with rest-frame FUV--NIR coverage, is constructed using eight-band imaging data from CANDELS and 3D-HST.
Our results can be summarized as follows.
\begin{enumerate}[leftmargin=*]
\item[1.] We find a relation between $\Sigma_{\rm SFR}$ and $\Sigma_{*}$, the so-called spatially resolved SFMS, in the $z\sim 1$ sample. This relation has a linear form with a slope of $1.01$ for the galaxies that lie within $\pm 0.3$ dex of the global SFMS (i.e. z1-$\Delta$MS1), while a flattening trend at the high-$\Sigma_{*}$ end is observed in the spatially resolved SFMS of the galaxies that lie between $-0.3$ and $-0.8$ dex (i.e. z1-$\Delta$MS2) and below $-0.8$ dex (i.e. z1-$\Delta$MS3) from the global SFMS.
\item[2.] The sSFR$(r)$ radial profiles of the z1-$\Delta$MS2 and z1-$\Delta$MS3 galaxies show a decline in the central region, while the sSFR$(r)$ radial profile of the z1-$\Delta$MS1 is flat at all radii. The central suppression in the sSFR$(r)$ radial profiles of the z1-$\Delta$MS2 and z1-$\Delta$MS3 corresponds to the flattening at the high-$\Sigma_{*}$ end of the spatially resolved SFMS of the corresponding groups. The z1-$\Delta$MS3 galaxies show higher S\'ersic and concentration indices ($R_{90}/R_{50}$) than the z1-$\Delta$MS1 galaxies, while the S\'ersic and concentration indices of the z1-$\Delta$MS2 galaxies are intermediate between those two groups. This trend suggests the existence of central bulge components in the z1-$\Delta$MS2 and z1-$\Delta$MS3 galaxies, while the z1-$\Delta$MS1 galaxies are disc-dominated systems that are still building their stellar mass in both the central region and the outskirts.
\item[3.] The spatially resolved SFMS shows a smaller decline (i.e. a smaller decrease of sSFR$=\Sigma_{\rm SFR}/\Sigma_{*}$) in the low-$\Sigma_{*}$ region than in the high-$\Sigma_{*}$ region from $z\sim 1$ to $z\sim 0$. This trend suggests that the star formation rate in the disc region experienced less suppression than that in the central region during this epoch, in agreement with the inside-out quenching scenario.
\item[4.] By selecting pairs of possible progenitors and descendants from the $z\sim 1$ and $z\sim 0$ samples using a model evolutionary track with an exponentially declining SFH, and then using the average $\Sigma_{\rm SFR}(r)$ of the progenitor and descendant galaxies to obtain the radially resolved SFH in exponentially declining form, we derive an empirical model for the evolution of the $\Sigma_{\rm SFR}(r)$, $\Sigma_{*}(r)$ and sSFR$(r)$ radial profiles. The empirical model successfully reproduces the observed $\Sigma_{*}(r)$ and sSFR$(r)$ radial profiles at $z\sim 0$ and is also consistent with the spatially resolved SFMS at $z\sim 1$ and $z\sim 0$.
\item[5.] Using the empirical model for the evolution of $\Sigma_{\rm SFR}(r)$ and $\Sigma_{*}(r)$, we estimate the radial profile of the quenching timescale. $t_{\rm quench}(r)$ increases with radius, which indicates an inside-out progression of the quenching process in the sample galaxies. The quenching timescale at each radius is later than that reported by \citet{tacchella2015} for more massive galaxies. This result suggests that the ``downsizing'' signal is spatially preserved, i.e. massive galaxies quench faster than low-mass galaxies at all radii.
\end{enumerate}
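As a minimal numerical illustration of the empirical-model step in item 4, the following Python sketch evolves toy $\Sigma_{\rm SFR}(r)$ and $\Sigma_{*}(r)$ profiles under an exponentially declining SFH at each radius. All numerical values (radial bins, e-folding times, initial surface densities) are hypothetical choices of this sketch, not values taken from the paper, and mass return to the ISM is neglected.

```python
import numpy as np

# Exponentially declining SFH at each radius:
#   Sigma_SFR(r, t) = Sigma_SFR(r, t0) * exp(-(t - t0) / tau(r)),
# integrated analytically to evolve the stellar-mass surface density.
def evolve_profiles(sigma_sfr0, sigma_star0, tau, t0, t1):
    """Evolve Sigma_SFR(r) and Sigma_*(r) from t0 to t1 (times in Gyr).
    sigma_sfr0 : Msun/yr/kpc^2 at t0, per radial bin
    sigma_star0: Msun/kpc^2 at t0, per radial bin
    tau        : e-folding time per radial bin (Gyr)
    """
    dt = t1 - t0
    sigma_sfr1 = sigma_sfr0 * np.exp(-dt / tau)
    # integral of the exponential SFH over [t0, t1], converted to years
    formed = sigma_sfr0 * tau * (1.0 - np.exp(-dt / tau)) * 1e9
    return sigma_sfr1, sigma_star0 + formed

# toy radial bins: a shorter tau in the centre mimics inside-out quenching
r = np.linspace(0.5, 8.0, 4)            # kpc (hypothetical)
tau = 0.5 + 0.5 * r                     # Gyr, increasing outward
sfr1, mstar1 = evolve_profiles(np.full(4, 0.05), np.full(4, 1e8), tau, 4.0, 9.0)
```

With the shorter central e-folding time, the evolved $\Sigma_{\rm SFR}$ is most suppressed in the innermost bin, qualitatively reproducing the centrally suppressed sSFR profiles described above.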
\section*{Acknowledgements}
We thank the anonymous referee for comments that improved our paper. We thank Drs. Takahiro Morishita and Sandro Tacchella for their useful comments. We thank Dr. Sandro Tacchella for providing the radial profile of the quenching timescale of massive galaxies at $z\sim 2$. Abdurro'uf acknowledges the support of a Japanese Government (MEXT) scholarship for his studies.
This work is based on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This work is based on observations taken by the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
This work is based on observations made with the NASA \textit{Galaxy Evolution Explorer}. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. This work has made use of SDSS data. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics,
Instituto de Astrof\'isica de Canarias, The Johns Hopkins University,
Kavli Institute for the Physics and Mathematics of the Universe (IPMU) /
University of Tokyo, Lawrence Berkeley National Laboratory,
Leibniz Institut f\"ur Astrophysik Potsdam (AIP),
Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
National Astronomical Observatories of China, New Mexico State University,
New York University, University of Notre Dame,
Observat\'ario Nacional / MCTI, The Ohio State University,
Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Oxford, University of Portsmouth,
University of Utah, University of Virginia, University of Washington, University of Wisconsin,
Vanderbilt University, and Yale University.
\bibliographystyle{mnras}
\section{Global solution}
In this section, we show that the solution of system \eqref{E4} is positive and global. Denote $\mathbb R^n_+=\{(x_1,\cdots,x_n) \in \mathbb R^n\,{:}\, x_i>0 \,\,(i\geq1)\}, \,\, \mathbb R^n_{+0}=\{(x_1,\cdots,x_n) \in \mathbb R^n\,{:}\, x_i\geq 0\,\,(i\geq1)\}\,\, (n=1,2).$ For brevity, we write $g$ instead of $g(t)$ for a function $g(t)$.
If $g$ is a bounded continuous function on $\mathbb R_{+0}$, we denote
$$g^u= \sup_{t \in \mathbb R_{+0}}\ g(t), \ g^l=\inf_{t \in \mathbb R_{+0}} g(t).$$
We have the following theorem.
\begin{theorem} \label{T1}
For any given initial value $(x_0,y_0) \in \mathbb R^2_{+},$ there is a unique solution $(x_t, y_t)$ to \eqref{E4} for $t\geq 0$. Further, with probability one, $\mathbb R^2_+$ is positively invariant for \eqref{E4}, i.e., $(x_t, y_t)\in \mathbb R^2_+$ a.s. for all $t\geq 0,$ if $(x_0, y_0)\in \mathbb R^2_+$.
\end{theorem}
\begin{proof}
Since some coefficients of \eqref{E4} are not locally Lipschitz continuous, we cannot directly conclude that there is a unique local solution $(x_t, y_t)$ to \eqref{E4}. Instead, we consider the following system
\begin{equation} \label{E4.1.1}
\begin{cases}
\begin{aligned}
d\xi_t= &\left[a_1-\frac{1}{2}(\sigma_1+\sigma_2 \exp\{\xi_t\})^2-b_1 \exp\{\xi_t\}-\frac{c_1\exp\{\eta_t\}}{\exp\{\xi_t\}+e \exp\{\eta_t\}}\right] dt\\
&+(\sigma_1+ \sigma_2 \exp\{\xi_t\}) dw_t,\\
d\eta_t=&\left[-a_2-\frac{1}{2}(\rho_1+\rho_2 \exp\{\eta_t\})^2-b_2 \exp\{\eta_t\}+\frac{c_2 \exp\{\xi_t\}}{\exp\{\xi_t\}+e \exp\{\eta_t\}} \right]dt\\
&+(\rho_1+\rho_2 \exp\{\eta_t\}) dw_t,
\end{aligned}
\end{cases}
\end{equation}
with the initial value $(\xi_0, \eta_0)=(\ln x_0, \ln y_0).$ Since the coefficients of \eqref{E4.1.1} are locally Lipschitz continuous, there is a unique local solution $(\xi_t,\eta_t)$ to \eqref{E4.1.1} for $t\in [0,\tau),$ where $\tau$ is the explosion time (Arnold \cite{A} or Friedman \cite{F}). Therefore, by It\^o's formula, $(x_t, y_t)=( \exp\{\xi_t\}, \exp\{\eta_t\})$ is the unique positive local solution to \eqref{E4} for $t\in [0,\tau)$ with the initial value $(x_0, y_0).$ To show that the solution is global, we need to show that $\tau=\infty$ a.s. We use the localization technique of \cite{IS,M}. Let $k_0>0$ be sufficiently large that both $x_0$ and $y_0$ lie within the interval $[\frac{1}{k_0},k_0]$. For each integer $k\geq k_0$, we define a stopping time \cite[Problem 2.7, p.7]{IS} by
$$\tau_k=\inf \left\{t\geq 0 \,{:}\, x_t\notin (\frac{1}{k},k) \text{ or } y_t\notin (\frac{1}{k},k)\right\}$$
(with the convention $\inf\emptyset=\infty$). Since $\tau_k$ is nondecreasing in $k$, the limit $\tau_{\infty}=\lim_{k \rightarrow \infty} \tau_k$ exists, and $\tau_{\infty}\leq \tau$ a.s. Now, we show that $\tau_{\infty}=\infty$ a.s. If this statement is false, then there exist $T>0$ and $\varepsilon \in (0,1)$ such that $\mathbb P\{\tau_{\infty}\leq T\}>\varepsilon.$ Thus, denoting $\Omega_k=\{\tau_k \leq T\},$ there exists $k_1\geq k_0$ such that
\begin{equation} \label{E5}
\mathbb P(\Omega_k)\geq \varepsilon \hspace{1cm} \text{ for all } k\geq k_1.
\end{equation}
Let $\theta_i\in (0,1)\, (i=1,2).$ We consider the following function
$$V(x,y)=x^{\theta_1}-\ln x+ y^{\theta_2}-\ln y-\sum_{i=1}^2\frac{1+\ln \theta_i}{\theta_i}\cdot$$
Because $x^{\theta_1}-\ln x- \frac{1+\ln \theta_1}{\theta_1}\geq 0 $ for all $ x> 0,$ we have $V\in C^2 (\mathbb R^2_+,\mathbb R_{+0}).$ If $(x_t,y_t) \in \mathbb R^2_+,$ by using It\^o's formula, we get
\begin{equation} \label{E6}
dV(x_t,y_t)=f(x_t,y_t,t)dt+g(x_t,y_t,t) dw_t,
\end{equation}
where
\begin{equation} \label{E7}
\begin{aligned}
g(x,y,t)=&(\theta_1 x^{\theta_1}-1)(\sigma_1+\sigma_2 x)+(\theta_2 y^{\theta_2}-1)(\rho_1+\rho_2 y),\\
f(x,y,t)=&(\theta_1 x^{\theta_1}-1)\left[ a_1-b_1x-\frac{c_1 y}{ x+e y}\right]\\
&+(\theta_2 y^{\theta_2}-1)\left[- a_2-b_2y-\frac{c_2 x}{ x+e y}\right]\\
&+\frac{1}{2}[\theta_1(\theta_1-1)x^{\theta_1}+1](\sigma_1+\sigma_2 x)^2\\
&+\frac{1}{2}[\theta_2(\theta_2-1)y^{\theta_2}+1](\rho_1+\rho_2 y)^2.
\end{aligned}
\end{equation}
It is easy to see from $\theta_i\in (0,1)$ and from \eqref{E7} that the function $f(x,y,t)$ is bounded above, say by $M$, in $\mathbb R^2_+ \times \mathbb R_{+0}$. It then follows from $(x_{t\wedge \tau_k},y_{t\wedge\tau_k})\in \mathbb R^2_+$ and from \eqref{E6} that
$$\int_0^{T\wedge \tau_k}dV(x_t,y_t)\leq \int_0^{T\wedge \tau_k} M dt+ \int_0^{T\wedge \tau_k} g(x_t,y_t,t) dw_t.$$
Taking expectations yields
\begin{equation} \label{E8}
\mathbb E V(x_{T\wedge \tau_k},y_{T\wedge \tau_k}) \leq V(x_0, y_0) + M \mathbb E (T\wedge \tau_k) \leq V(x_0, y_0)+ MT.
\end{equation}
On the other hand, for every $\omega\in \Omega_k,$ either $x_{\tau_k}(\omega)$ or $y_{\tau_k}(\omega)$ belongs to the set $\{k, \frac{1}{k}\}$. Then
\begin{align*}
V(x_{T\wedge \tau_k}(\omega),y_{T\wedge \tau_k}(\omega)) \geq &\min\Big\{k^{\theta_i}-\ln k- \frac{1+\ln \theta_i}{\theta_i},\\
& \frac{1}{k^{\theta_i}}-\ln \frac{1}{k}- \frac{1+\ln \theta_i}{\theta_i}\quad (i=1,2)\Big\} \\
=&\min\Big\{k^{\theta_i}-\ln k- \frac{1+\ln \theta_i}{\theta_i},\\
& \frac{1}{k^{\theta_i}}+\ln k- \frac{1+\ln \theta_i}{\theta_i}\quad (i=1,2)\Big\}.
\end{align*}
We therefore get from \eqref{E5} that
\begin{align*}
\mathbb E V(x_{T\wedge \tau_k},y_{T\wedge\tau_k})\geq &\mathbb E [1_{\Omega_k}V(x_{T\wedge \tau_k},y_{T\wedge \tau_k})]\\
\geq &\varepsilon \min\Big\{k^{\theta_i}-\ln k- \frac{1+\ln \theta_i}{\theta_i},\\
& \frac{1}{k^{\theta_i}}+\ln k- \frac{1+\ln \theta_i}{\theta_i}\quad (i=1,2)\Big\}.
\end{align*}
It then follows from \eqref{E8} that
$$V(x_0, y_0)+ MT\geq \varepsilon \min \Big\{k^{\theta_i}-\ln k- \frac{1+\ln \theta_i}{\theta_i}, \frac{1}{k^{\theta_i}}+\ln k- \frac{1+\ln \theta_i}{\theta_i} \quad (i=1,2)\Big\}.$$
Letting $k \rightarrow \infty$ leads to the contradiction $\infty > V(x_0, y_0)+ MT =\infty.$ Therefore
$\tau_{\infty}=\infty$ a.s. Then $\tau=\infty$ a.s., and $(x_t, y_t) \in \mathbb R^2_+$ a.s. The proof is complete.
\end{proof}
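As a complement to the proof, the following Python sketch integrates the transformed system \eqref{E4.1.1} with an Euler--Maruyama scheme in the log coordinates $(\xi_t,\eta_t)$; the constant coefficient values are assumptions made purely for this illustration. Because the scheme updates $(\xi_t,\eta_t)$ and exponentiates only at the end, the simulated $(x_t,y_t)=(\exp\{\xi_t\},\exp\{\eta_t\})$ is positive by construction, mirroring the positive invariance of $\mathbb R^2_+$ established in Theorem \ref{T1}.

```python
import numpy as np

# Euler-Maruyama in log coordinates for the transformed system (E4.1.1).
# Constant coefficients below are hypothetical, chosen only for the sketch.
rng = np.random.default_rng(0)
a1, a2, b1, b2, c1, c2, e = 1.0, 0.5, 0.8, 0.6, 0.4, 0.3, 1.0
s1, s2, r1, r2 = 0.2, 0.05, 0.2, 0.05   # sigma_1, sigma_2, rho_1, rho_2

dt, n = 1e-3, 20000
xi, eta = np.log(0.5), np.log(0.5)
for _ in range(n):
    x, y = np.exp(xi), np.exp(eta)
    dw = np.sqrt(dt) * rng.standard_normal()   # one Brownian motion drives both
    xi += (a1 - 0.5 * (s1 + s2 * x) ** 2 - b1 * x - c1 * y / (x + e * y)) * dt \
          + (s1 + s2 * x) * dw
    eta += (-a2 - 0.5 * (r1 + r2 * y) ** 2 - b2 * y + c2 * x / (x + e * y)) * dt \
           + (r1 + r2 * y) * dw
x_t, y_t = np.exp(xi), np.exp(eta)       # positive by construction
```

Simulating in the original coordinates, by contrast, can produce negative states at finite step size, which is precisely why the log transformation is convenient both for the proof and for numerics.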
\section{Boundedness of moments}
In this section, we show the boundedness of the moments of the population sizes. To this end, we distinguish between the following hypotheses on the random perturbation.
(H1) The random factor affects only the growth rate of the population, i.e., $\sigma_2(t)=\rho_2 (t)=0$ for all $t\geq 0$ and $\sigma_1^l,\rho_1^l>0;$
(H2) The random factor affects both the growth rate of the population and the inhibiting effect of the environment, i.e., $\sigma_i^l, \rho_i^l>0\,\, (i=1,2).$\\
Consider
\begin{equation*}
\begin{aligned}
LV(x,y)=&\frac{1}{2} [\sigma_1 (t)+\sigma_2 (t)x]^2 x^2\frac{\partial^2 V}{\partial x^2}+\frac{1}{2} [\rho_1 (t)+\rho_2 (t)y]^2 y^2\frac{\partial^2 V}{\partial y^2}\\
&+ [\sigma_1 (t)+\sigma_2 (t)x][\rho_1 (t)+\rho_2 (t)y] xy \frac{\partial^2 V}{\partial x \partial y}\\
& +f_1(x,y,t) \frac{\partial V}{\partial x}+f_2(x,y,t) \frac{\partial V}{\partial y},
\end{aligned}
\end{equation*}
the infinitesimal operator of \eqref{E4}, defined on the space $C^2(\mathbb R^2_+,\mathbb R),$ where
$$f_1(x,y,t)=a_1x-\frac{c_1 xy}{x+e y}-b_1 x^2,$$
$$ f_2(x,y,t)=-a_2 y+\frac{c_2 xy}{ x+e y} - b_2 y^2,$$
and for any positive numbers $\theta_1, \theta_2$, we put
\begin{align} \label{E9}
d_2&=\min\{ \theta_ib_i^l\,\,\, (i=1,2)\}, \quad \theta=\frac{1}{\theta_1+\theta_2}, \notag \\
d_1&=\sup_{t\geq 0}\Big\{\frac{1}{2}\sigma^2_1(t)\theta_1(\theta_1-1)+\frac{1}{2}\rho^2_1(t)\theta_2(\theta_2-1)+\sigma_1(t)\rho_1(t)\theta_1\theta_2+\theta_1 a_1(t)\notag\\
&+[c_2(t)- a_2(t)]\theta_2\Big\},\\
\lambda_1&=\frac{d_1}{d_2}-(\theta_1+\theta_2)\{1-\ln (\theta_1+\theta_2)\}, \notag\\
\lambda_2&(x_0,y_0)=(\theta_1 \ln x_0+\theta_2 \ln y_0)+(\theta_1+\theta_2)\{1-\ln(\theta_1+\theta_2)\}-\frac{d_1}{d_2}\cdot \notag
\end{align}
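For constant coefficients, the constants in \eqref{E9} can be evaluated directly; the following Python sketch does so for hypothetical parameter values (all numbers below are illustrative assumptions) and also computes the resulting asymptotic moment bound $\exp\{\lambda_1\}$ appearing in Theorem \ref{T2}.

```python
import numpy as np

# Evaluation of the constants d_1, d_2, lambda_1, lambda_2 of (E9) for
# constant coefficients.  All parameter values are hypothetical.
a1, a2, b1, b2, c2 = 1.0, 0.5, 0.8, 0.6, 0.3
sigma1, rho1 = 0.2, 0.2
th1, th2 = 1.0, 1.0                      # theta_1, theta_2
x0, y0 = 0.5, 0.5

d2 = min(th1 * b1, th2 * b2)
theta = 1.0 / (th1 + th2)
d1 = (0.5 * sigma1**2 * th1 * (th1 - 1) + 0.5 * rho1**2 * th2 * (th2 - 1)
      + sigma1 * rho1 * th1 * th2 + th1 * a1 + (c2 - a2) * th2)
lam1 = d1 / d2 - (th1 + th2) * (1 - np.log(th1 + th2))
lam2 = (th1 * np.log(x0) + th2 * np.log(y0)
        + (th1 + th2) * (1 - np.log(th1 + th2)) - d1 / d2)
moment_bound = np.exp(lam1)   # limsup_t E[x_t^th1 * y_t^th2] <= exp(lam1)
```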
We have the following theorems.
\begin{theorem} \label {T2}
Under condition {\rm(H1)}, for any positive $\theta_1, \theta_2$ and any initial value $(x_0,y_0)\in \mathbb R^2_+,$ the solution of \eqref{E4} satisfies
$$\mathbb E (x^{\theta_1}_ty^{\theta_2}_t) \leq \exp\{\lambda_1+\lambda_2(x_0,y_0) \exp\{-d_2t\}\}\hspace{1cm} \text{ for all } t\geq 0.$$
Consequently, $\limsup_{t\rightarrow \infty} \mathbb E (x^{\theta_1}_ty^{\theta_2}_t)\leq \exp\{\lambda_1\}. $
\end{theorem}
\begin{proof}
Firstly, we prove that
\begin{equation} \label{E10.0}
\mathbb E (x^{\theta_1}_ty^{\theta_2}_t)< \infty \hspace{1cm}\text{ for all } t\geq 0 \text{ and } \theta_i>0.
\end{equation}
Define a function $V\in C^2 (\mathbb R^2_+,\mathbb R_+)$ by $V(x,y)=x^{\theta_1}y^{\theta_2}.$ For any $t\geq 0$, using It\^o's formula gives that
\begin{equation} \label{E10}
dV(x_t,y_t)=LV(x_t,y_t)dt+(\theta_1\sigma_1+\theta_2\rho_1)V(x_t,y_t)dw_t.
\end{equation}
It is easy to see that
\begin{equation} \label {E11}
\begin{aligned}
LV(x,y)=&\Big[\frac{1}{2}\theta_1(\theta_1-1)\sigma_1^2+\frac{1}{2}\theta_2(\theta_2-1)\rho_1^2+\theta_1\theta_2\sigma_1\rho_1+\theta_1(a_1-b_1x) \\
&-\theta_2(a_2+b_2 y)+\frac{\theta_2c_2 x-\theta_1c_1y}{x+e y}\Big]V(x,y)\\
\leq &[d_1-d_2 (x+y)] V(x,y).
\end{aligned}
\end{equation}
For every integer $k\geq 1$, we define the stopping time $\tau_k=\inf\{t\geq 0 \,{:}\, x_t+y_t \geq k\}.$ Then the sequence $\{\tau_k, k\geq 1\}$ is nondecreasing and, by the positive invariance of $\mathbb R^2_+$ for the solution $(x_t,y_t)$, we have $\lim_{k\rightarrow \infty }\tau_k=\infty$ a.s. It then follows from \eqref{E10} that
\begin{align*}
V(x_{t\wedge \tau_k}, y_{t\wedge \tau_k})=&V(x_0,y_0)+\int_0^{t\wedge\tau_k}LV(x_s,y_s)ds\\
&+\int_0^{t\wedge \tau_k}[\theta_1\sigma_1(s)+\theta_2\rho_1(s)]V(x_s,y_s)dw_s.
\end{align*}
Taking expectations of both sides and using \eqref{E11}, we have
\begin{align*}
\mathbb E V(x_{t\wedge \tau_k}, y_{t\wedge \tau_k})& \leq V(x_0,y_0)+d_1 \mathbb E \int_0^{t\wedge \tau_k}V(x_s,y_s)ds\\
& \leq V(x_0,y_0)+d_1 \int_0^t\mathbb E V(x_{s\wedge \tau_k},y_{s\wedge \tau_k} )ds.
\end{align*}
Thus, by using Gronwall's inequality, we obtain $$\mathbb E V(x_{t\wedge\tau_k}, y_{t\wedge \tau_k})\leq V(x_0,y_0) \exp\{d_1t\}.$$
Letting $k\rightarrow \infty$ in the latter inequality, we obtain, for all $t\geq 0,$ $\mathbb E V(x_t,y_t)\leq V(x_0,y_0) \exp\{d_1t\},$ from which \eqref{E10.0} follows.
Next, since $V(x,y)=x^{\theta_1}y^{\theta_2}\leq (x+y)^{\theta_1+\theta_2}, $ we have $x+y \geq V^{\theta}(x,y).$ It then follows from \eqref{E11} that
\begin{equation} \label{E12}
LV(x,y)\leq [d_1-d_2 V^{\theta}(x,y)] V(x,y).
\end{equation}
Applying \eqref{E10.0} to $(\theta_1(1+\theta), \theta_2(1+\theta))$, we have
$$\mathbb E \left[V^{1+\theta}(x_t,y_t)\right]=\mathbb E \left[x^{\theta_1(1+\theta)}_ty^{\theta_2(1+\theta)}_t\right]<\infty \hspace{1cm} \text{ for all } t\geq 0.$$
Then, H\"older's inequality yields
$$\left[\mathbb E V(x_t,y_t)\right]^{1+\theta}\leq \mathbb E\left[V^{1+\theta}(x_t,y_t)\right].$$
It then follows from \eqref{E10} and from \eqref{E12} that, for any $t\geq 0$ and $h>0$,
\begin{align} \label{E13}
\mathbb E V(x_{t+h}, y_{t+h})-\mathbb E V(x_t,y_t)&\leq \int_t^{t+h}\left[ d_1\mathbb E V(x_s,y_s)-d_2 \mathbb E V^{1+\theta}(x_s,y_s)\right]ds \notag\\
&\leq \int_t^{t+h}\left[ d_1\mathbb E V(x_s,y_s)-d_2 \left [\mathbb E V(x_s,y_s)\right]^{1+\theta}\right]ds.
\end{align}
Putting $v(t)=\mathbb E V(x_t,y_t),$ we have $0<v(t)<\infty$ for all $t\geq 0$. Further, the continuity of $v(t)$ in $t$ follows from the continuity of the solution $(x_t, y_t)$ and the dominated convergence theorem. We define the right upper derivative of $v(t)$ by
$$D^+v(t)=\limsup_{h \rightarrow 0} \frac{v(t+h)-v(t)}{h}\cdot$$
From \eqref{E13}, we have
$$\frac{v(t+h)-v(t)}{h}\leq \frac{1}{h}\int_t^{t+h}\left[ d_1v(s)-d_2 v^{1+\theta}(s)\right]ds.$$
Letting $h\rightarrow 0$ gives
$D^+v(t)\leq v(t)[d_1-d_2v^{\theta}(t)] \, \, \text{ for all } t\geq 0.$
Therefore,
\begin{align*}
D^+[\exp\{d_2t\} \ln v(t)]&=d_2 \exp\{d_2t\} \ln v(t)+ \exp\{d_2t\} \frac{D^+v(t)}{v(t)}\\
&\leq d_2 \exp\{d_2t\} \ln v(t)+\exp\{d_2t\} [d_1-d_2 v^{\theta}(t)]\\
&=d_1 \exp\{d_2t\}+ d_2 \exp\{d_2t\} [\ln v(t)-v^{\theta}(t)].
\end{align*}
It is easy to see that $\ln x-x^{\theta}\leq -\frac{1}{\theta}(1+\ln \theta) $ for all $x>0.$ Then
$$D^+[\exp\{d_2t\} \ln v(t)] \leq \left [d_1-\frac{1}{\theta}(1+\ln \theta) d_2\right]\exp\{d_2t\}.$$
Taking integrations of both sides yields
\begin{align*}
\exp\{d_2t\} \ln v(t)\leq &\ln v(0)+\left [\frac{d_1}{d_2}-\frac{1}{\theta}(1+\ln \theta) \right]\left [\exp\{d_2t\}-1\right]\\
=&\lambda_2 +\lambda_1 \exp\{d_2t\}.
\end{align*}
Consequently, we have $\ln v(t)\leq \lambda_1+\lambda_2 \exp\{-d_2t\},$ from which the first statement of the theorem follows. Letting $t\to \infty$ in the latter inequality, we get the second one. The proof is complete.
\end{proof}
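The elementary bound $\ln x-x^{\theta}\leq -\frac{1}{\theta}(1+\ln \theta)$ used in the proof comes from maximising $x\mapsto \ln x - x^{\theta}$, whose maximiser is $x^{*}=\theta^{-1/\theta}$ with maximum value $-\frac{1}{\theta}(1+\ln\theta)$. The following Python check verifies this numerically; the grid and tolerances are arbitrary choices of the sketch.

```python
import numpy as np

# Numerical check of the bound used in the proof of Theorem 2:
#   ln(x) - x**theta <= -(1 + ln(theta)) / theta   for all x > 0,
# with equality at the maximiser x_star = theta**(-1/theta).
xs = np.linspace(1e-3, 50.0, 200000)
for theta in (0.1, 0.5, 0.9, 1.0):
    bound = -(1.0 + np.log(theta)) / theta
    vals = np.log(xs) - xs ** theta
    assert np.all(vals <= bound + 1e-9)
    x_star = theta ** (-1.0 / theta)
    assert abs((np.log(x_star) - x_star ** theta) - bound) < 1e-9
```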
\begin{theorem} \label{T3}
Under condition {\rm(H2)}, for any $\theta_i \in (0,1], \varrho_i \in [0,3)$ and $ \varsigma_i \in \mathbb R_+,$ there exist positive constants $K_1=K_1(\theta_i, \varsigma_i)$ and $K_2=K_2(\varrho_i, \varsigma_i) \,(i=1,2)$ such that, for any initial value $(x_0, y_0) \in \mathbb R^2_+$,
\begin{itemize}
\item [(i)]
$\limsup_{t\to \infty} \mathbb E \left[\varsigma_1x^{\theta_1}_t+\varsigma_2y^{\theta_2}_t\right] \leq K_1;$
\item [(ii)] $\limsup_{t\to \infty} \frac{1}{t} \int_0^t \mathbb E [\varsigma_1x^{\varrho_1}_s+\varsigma_2y^{\varrho_2}_s]ds \leq K_2.$
\end{itemize}
\end{theorem}
\begin{proof}
Consider a function $V_1\,{:}\, \mathbb R^2_+ \to \mathbb R^2_+$ defined by $V_1(x,y)=\varsigma_1x^{\theta_1}+\varsigma_2y^{\theta_2}.$ For any $t\geq 0$, by using It\^o's formula, we have
\begin{equation} \label{E14}
\begin{aligned}
dV_1(x_t,y_t)=&LV_1(x_t,y_t)dt+\Big\{ \theta_1 \varsigma_1[\sigma_1 (t)+\sigma_2 (t) x_t]x^{\theta_1}_t\\
&+\theta_2 \varsigma_2[\rho_1 (t)+\rho_2 (t) y_t]y^{\theta_2}_t\Big\}dw_t,
\end{aligned}
\end{equation}
where
\begin{equation} \label {E15}
\begin{aligned}
LV_1(x,y)=&\frac{1}{2}\theta_1(\theta_1-1) \varsigma_1 [\sigma_1 (t)+\sigma_2 (t) x]^2x^{\theta_1}\\
&+\frac{1}{2}\theta_2(\theta_2-1) \varsigma_2 [\rho_1 (t)+\rho_2 (t) y]^2y^{\theta_2}\\
&+\theta_1\varsigma_1x^{\theta_1}\Big [a_1(t)-b_1 (t)x-\frac{c_1(t)y}{x+e(t) y}\Big]\\
&+\theta_2\varsigma_2y^{\theta_2}\Big [-a_2(t)+\frac{c_2(t)x}{ x+e(t) y}-b_2 (t)y\Big].\\
\end{aligned}
\end{equation}
Then, since $\theta_i \in (0,1]$ and $ \varsigma_i \in \mathbb R_+ \, (i=1,2),$ there exists $K_1=K_1(\theta_1, \theta_2, \varsigma_1, \varsigma_2)$ such that $LV_1(x,y)+V_1(x,y)\leq K_1$ for all $(x,y,t)\in \mathbb R^2_+ \times \mathbb R_{+0}.$ Applying It\^o's formula yields
\begin{align} \label{E16}
d[e^tV_1(x_t,y_t)]=&e^t[V_1(x_t,y_t)+LV_1(x_t,y_t)]dt \notag\\
&+e^t[\theta_1 \varsigma_1(\sigma_1+\sigma_2 x_t)x^{\theta_1}_t+\theta_2 \varsigma_2(\rho_1+\rho_2 y_t)y^{\theta_2}_t)]dw_t \notag\\
\leq & K_1 e^t dt+e^t[\theta_1 \varsigma_1(\sigma_1+\sigma_2 x_t)x^{\theta_1}_t+\theta_2 \varsigma_2(\rho_1+\rho_2 y_t)y^{\theta_2}_t)]dw_t.
\end{align}
Using the sequence of stopping times $\{\tau_k\}_{k=1}^{\infty} $ defined in the proof of Theorem \ref{T1} and from \eqref{E16}, we have
$$\mathbb E \left[e^{t\wedge \tau_k}V_1(x_{t\wedge \tau_k},y_{t\wedge \tau_k})\right]\leq V_1(x_0,y_0)+ K_1(\mathbb E e^{t\wedge \tau_k}-1) \hspace{1cm}\text{ for all } t\geq 0.$$
Letting $k\rightarrow \infty$ in the latter inequality, using the facts that $V_1(x_{t\wedge\tau_k}, y_{t\wedge \tau_k})>0$ and $0<e^{t\wedge\tau_k}\leq e^{t}$ a.s., together with Fatou's lemma, we obtain
$$e^t\mathbb E V_1(x_t,y_t)\leq V_1(x_0,y_0)+ K_1(e^t-1).$$ Therefore,
$\limsup_{t\to \infty} \mathbb E V_1(x_t,y_t)\leq K_1.$
To prove Part (ii), we consider a function $V_2(x,y)=\varsigma_1x^{\varrho_1}+\varsigma_2y^{\varrho_2}.$ Since $ \varrho_i \in [0,3),$ there exist $\theta_i \in (0,1)\, (i=1, 2)$ such that $0\leq \varrho_i<2+\theta_i.$
Then, from \eqref{E15} there exists $K_2=K_2(\varrho_i, \varsigma_i)$ such that
$LV_1(x,y)+ V_2(x,y)\leq K_2$ for all $ (x,y,t)\in \mathbb R^2_+ \times \mathbb R_{+0}.$
Using \eqref{E14} gives
\begin{align*}
V_1(x_t,y_t)\leq &V_1(x_0,y_0)+ \int_0^t [K_2-V_2(x_s,y_s)] ds\\
& + \int_0^t [\theta_1 \varsigma_1(\sigma_1+\sigma_2 x_s)x^{\theta_1}_s+\theta_2 \varsigma_2(\rho_1+\rho_2 y_s)y^{\theta_2}_s)]dw_s.
\end{align*}
Taking expectations of both sides, we obtain
$$\mathbb E V_1(x_t,y_t)+\int_0^t \mathbb E V_2(x_s,y_s) ds \leq V_1(x_0,y_0)+ K_2 t,$$
and hence $\int_0^t \mathbb E V_2(x_s,y_s) ds \leq V_1(x_0,y_0)+ K_2 t.$ Therefore,
$$\limsup_{t\to \infty} \frac{1}{t} \int_0^t \mathbb E [\varsigma_1x^{\varrho_1}_s+\varsigma_2y^{\varrho_2}_s]ds\leq K_2.$$
\end{proof}
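A Monte Carlo sketch of Theorem \ref{T3}(i): simulating many paths of the system (in log coordinates, with an Euler--Maruyama scheme) and tracking the empirical moment $\frac{1}{N}\sum (x_t^{\theta_1}+y_t^{\theta_2})$, which should remain bounded in $t$ under (H2). The constant coefficients, step size, and the generous numerical bound checked at the end are all assumptions of this illustration, not quantities from the theorem.

```python
import numpy as np

# Monte Carlo check that E[x_t**th1 + y_t**th2] stays bounded (Theorem 3(i)).
# Constant hypothetical coefficients; sigma_2, rho_2 > 0 so that (H2) holds.
rng = np.random.default_rng(1)
a1, a2, b1, b2, c1, c2, e = 1.0, 0.5, 0.8, 0.6, 0.4, 0.3, 1.0
s1, s2, r1, r2 = 0.2, 0.05, 0.2, 0.05
th1, th2 = 0.5, 0.5
N, dt, n = 500, 1e-3, 5000                     # paths, step, steps (t in [0, 5])
xi = np.full(N, np.log(0.5))
eta = np.full(N, np.log(0.5))
moments = []
for _ in range(n):
    x, y = np.exp(xi), np.exp(eta)
    dw = np.sqrt(dt) * rng.standard_normal(N)  # one Brownian motion per path
    xi += (a1 - 0.5*(s1 + s2*x)**2 - b1*x - c1*y/(x + e*y))*dt + (s1 + s2*x)*dw
    eta += (-a2 - 0.5*(r1 + r2*y)**2 - b2*y + c2*x/(x + e*y))*dt + (r1 + r2*y)*dw
    moments.append(np.mean(np.exp(th1*xi) + np.exp(th2*eta)))
```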
\section{Upper growth rate estimation}
In this section, we estimate the upper growth rates of the populations in the case where the random factor affects only the growth rate of the population.
\begin{theorem} \label {T4}
Under condition {\rm(H1)}, for any $ \theta_i\geq 0$ and any initial value $(x_0, y_0) \in \mathbb R^2_+,$
$$\limsup_{t\to \infty} \frac{\ln\left[x_t^{\theta_1}y_t^{\theta_2}\right]}{\ln t}\leq \theta_1+\theta_2 \hspace{1cm} a.s.$$
Furthermore, if $ \theta_i\in [0,1) $ then for any $\varsigma_i >0 \, (i=1,2)$ there exists $K=K( \theta_i, \varsigma_i )$ such that
$$\limsup_{t\to \infty} \frac{1}{t} \int_0^t (\varsigma_1 x^{\theta_1}_s+\varsigma_2 y^{\theta_2}_s)ds \leq K \hspace{1cm} a.s.$$
\end{theorem}
\begin{proof}
Firstly, we prove the first inequality. Putting $x_t=\exp\{\xi_t\}, y_t=\exp\{\eta_t\}, \vartheta_1=a_1-\frac{\sigma_1^2}{2}, \vartheta_2=a_2+\frac{\rho_1^2}{2}$ and substituting this transformation into system \eqref{E4}, we obtain
\begin{equation} \label{E20}
\begin{cases}
\begin{aligned}
d\xi_t= &\left[\vartheta_1-b_1 \exp\{\xi_t\}-\frac{c_1\exp\{\eta_t\}}{\exp\{\xi_t\}+e \exp\{\eta_t\}}\right] dt+\sigma_1 dw_t,\\
d\eta_t=&\left[-\vartheta_2-b_2 \exp\{\eta_t\}+\frac{c_2 \exp\{\xi_t\}}{\exp\{\xi_t\}+e \exp\{\eta_t\}} \right]dt+\rho_1 dw_t,
\end{aligned}
\end{cases}
\end{equation}
or equivalently
\begin{equation*}
\begin{cases}
d\xi_t= \left[ \vartheta_1-b_1 x_t-\frac{c_1y_t}{ x_t+e y_t}\right] dt+\sigma_1 dw_t,\\
d\eta_t=\left[- \vartheta_2 - b_2y_t+\frac{c_2x_t}{ x_t+e y_t}\right]dt+\rho_1dw_t.
\end{cases}
\end{equation*}
Fix $p>0$. Applying It\^o's formula to $\exp\{pt\}\xi_t$ and $\exp\{pt\}\eta_t$, from \eqref{E20}, we have
\begin{align}
\exp\{pt\}\xi_t=&\xi_0+\int_0^t \exp\{ps\}\left[ \vartheta_1-b_1 x_s-\frac{c_1y_s}{x_s+e y_s}\right] ds \notag \\
&+\int_0^t p\exp\{ps\} \xi_s ds+\int_0^t \sigma_1 \exp\{ps\} dw_s, \label{E22} \\
\exp\{pt\}\eta_t=&\eta_0+\int_0^t \exp\{ps\}\left[- \vartheta_2- b_2 y_s+\frac{c_2 x_s}{ x_s+e y_s} \right] ds \notag \\
&+\int_0^t p\exp\{ps\} \eta_s ds+\int_0^t \rho_1 \exp\{ps\} dw_s. \label{E23}
\end{align}
We set
$$M_{1t}=\int_0^t\sigma_1\exp\{ps\} dw_s, M_{2t}=\int_0^t\rho_1 \exp\{ps\} dw_s,$$
then $M_{it} \,(i=1,2)$ are real-valued continuous martingales vanishing at $t=0$, with quadratic variations
$$<M_{1},M_{1}>_t=\int_0^t \sigma_1^2 \exp\{2ps\} ds, \quad <M_{2},M_{2}>_t=\int_0^t \rho_1^2 \exp\{2ps\} ds.$$
Let $\varepsilon\in (0,1)$ and $\theta>1$. Using the exponential martingale inequality \cite [Theorem 1.7.4]{M}, for every $k\geq 1$ and $ i=1,2$, we have
$$\mathbb P\left\{\sup_{0\leq t\leq k}\left[M_{it}-\frac{\varepsilon}{2}\exp\{-pk\} <M_{i},M_{i}>_t\right]\geq \frac{\theta \exp\{pk\}}{\varepsilon}\ln k\right\}\leq \frac{1}{k^\theta}\cdot$$
It then follows from the Borel--Cantelli lemma that there exists $\Omega_i\subset \Omega$ with $\mathbb P(\Omega_i)=1$ having the following property: for any $\omega\in \Omega_i,$ there exists $k_i=k_i(\omega)$ such that, for all $k\geq k_i$ and $t\in [0,k]$,
\begin{align*}
M_{1t}&\leq \frac{\varepsilon}{2}\exp\{-pk\} <M_1,M_1>_t +\frac{\theta \exp\{pk\}}{\varepsilon}\ln k\\
&= \frac{\varepsilon }{2}\exp\{-pk\} \int_0^t \sigma_1^2\exp\{2ps\} ds +\frac{\theta \exp\{pk\}}{\varepsilon}\ln k,\\
M_{2t}&\leq \frac{\varepsilon}{2}\exp\{-pk\} <M_2,M_2>_t +\frac{\theta \exp\{pk\}}{\varepsilon}\ln k\\
&= \frac{\varepsilon }{2}\exp\{-pk\} \int_0^t \rho_1^2\exp\{2ps\} ds +\frac{\theta \exp\{pk\}}{\varepsilon}\ln k.
\end{align*}
We therefore have from \eqref{E22} and \eqref{E23} that for any $\omega\in \Omega_1\cap \Omega_2$ and $ t\in [0,k], k\geq k_0(\omega),$ where $k_0(\omega)=k_1(\omega)\wedge k_2(\omega),$
\begin{align}
\exp\{pt\}\xi_t\leq&\xi_0+\int_0^t p\exp\{ps\} \xi_s ds \notag \\
&+\int_0^t \exp\{ps\}\left[\vartheta_1-b_1 x_s-\frac{c_1y_s}{x_s+e y_s}\right] ds \notag\\
& +\frac{\varepsilon }{2}\exp\{-pk\} \int_0^t \sigma_1^2\exp\{2ps\} ds+\frac{\theta \exp\{pk\}}{\varepsilon}\ln k \notag \\
=& \xi_0+p\int_0^t \exp\{ps\} \xi_s ds +\frac{\theta \exp\{pk\}}{\varepsilon}\ln k +\int_0^t \exp\{ps\}\Big[\vartheta_1\notag \\
&-b_1 x_s+\frac{\varepsilon \sigma_1^2}{2}\exp\{-p(k-s)\}-\frac{c_1y_s}{x_s+e y_s}\Big] ds, \notag\\
\label{E24} \\
\exp\{pt\}\eta_t\leq& \eta_0+\int_0^t p\exp\{ps\} \eta_s ds\notag\\
&+\int_0^t \exp\{ps\}\left[-\vartheta_2-b_2 y_s+\frac{c_2x_s}{x_s+e y_s}\right] ds \notag \\
&+\frac{\varepsilon }{2}\exp\{-pk\} \int_0^t \rho_1^2\exp\{2ps\} ds+\frac{\theta \exp\{pk\}}{\varepsilon}\ln k\notag\\
=& \eta_0+p\int_0^t \exp\{ps\} \eta_s ds +\frac{\theta \exp\{pk\}}{\varepsilon}\ln k +\int_0^t \exp\{ps\}\Big[-\vartheta_2\notag \\
&-b_2 y_s+\frac{\varepsilon\rho_1^2}{2}\exp\{-p(k-s)\}+\frac{c_2x_s}{x_s+e y_s}\Big] ds. \notag\\
\label{E25}
\end{align}
From \eqref{E24} and \eqref{E25}, for any $\omega\in \Omega_1\cap \Omega_2$ and $ t\in [0,k], k\geq k_0(\omega),$ we have
\begin{align} \label{E26}
\exp\{pt\}(\theta_1 \xi_t+\theta_2 \eta_t) \leq &(\theta_1 \xi_0+\theta_2 \eta_0)+\frac{\theta(\theta_1+\theta_2) \exp\{pk\}}{\varepsilon}\ln k \notag\\
&+\int_0^t \exp\{ps\}\Big[p(\theta_1 \xi_s+\theta_2 \eta_s)+\theta_1\vartheta_1-\theta_2\vartheta_2\notag \\
&+\frac{\varepsilon \left[\theta_1\sigma_1^2+\theta_2\rho_1^2\right] }{2}\exp\{-p(k-s)\} \notag\\
&-b_1\theta_1 x_s - b_2 \theta_2 y_s+\frac{\theta_2c_2x_s-\theta_1c_1y_s}{x_s+e y_s}\Big] ds.
\end{align}
Since $\theta_1, \theta_2 \in \mathbb R_+,$ there exists $H=H(p,\theta_1,\theta_2)>0$ such that for any $ (x,y,t)\in \mathbb R^2_+ \times \mathbb R_{+0},$
$$\Big[p(\theta_1 \ln x+\theta_2 \ln y)+\theta_1\vartheta_1-\theta_2\vartheta_2
-b_1\theta_1 x- b_2 \theta_2 y+\frac{\theta_2c_2x-\theta_1c_1y}{ x+e y}\Big]\leq H.$$
It then follows from \eqref{E26} that for any $\omega\in \Omega_1\cap \Omega_2$ and $ t\in [0,k], k\geq k_0(\omega),$
\begin{align*}
\exp\{pt\}(\theta_1 \xi_t+\theta_2 \eta_t) \leq &(\theta_1 \xi_0+\theta_2 \eta_0)+\frac{\theta(\theta_1+\theta_2) \exp\{pk\}}{\varepsilon}\ln k \notag\\
&+\int_0^t \exp\{ps\}\Big[H+\frac{\varepsilon (\theta_1\sigma_1^2+\theta_2\rho_1^2) }{2}\Big] ds\notag\\
\leq&(\theta_1 \xi_0+\theta_2 \eta_0)+\frac{\theta(\theta_1+\theta_2) \exp\{pk\}}{\varepsilon}\ln k \notag\\
&+\frac{1}{p} \left[H+\frac{\varepsilon (\theta_1{\sigma_1^u}^2+\theta_2{\rho_1^u}^2) }{2}\right](\exp\{pt\}-1).
\end{align*}
Thus,
\begin{align} \label{E27}
\theta_1 \xi_t+\theta_2 \eta_t \leq &(\theta_1 \xi_0+\theta_2 \eta_0)\exp\{-pt\}+\frac{\theta(\theta_1+\theta_2) \exp\{p(k-t)\}}{\varepsilon}\ln k \notag\\
&+\frac{1}{p} \left[H+\frac{\varepsilon (\theta_1{\sigma_1^u}^2+\theta_2{\rho_1^u}^2) }{2}\right](1-\exp\{-pt\})\notag\\
\leq &(\theta_1 \xi_0+\theta_2 \eta_0)+\frac{\theta(\theta_1+\theta_2) \exp\{p(k-t)\}}{\varepsilon}\ln k \notag\\
&+\frac{1}{p} \left[H+\frac{\varepsilon (\theta_1{\sigma_1^u}^2+\theta_2{\rho_1^u}^2) }{2}\right].\notag
\end{align}
For any $\omega\in \Omega_1\cap \Omega_2$, $k\geq k_0(\omega)$ and $t\in [k-1,k]$, from \eqref{E27} we have
\begin{align*}
\frac{\theta_1 \xi_t+\theta_2 \eta_t}{\ln t}\leq &\frac{1}{\ln (k-1)}\Big[(\theta_1 \xi_0+\theta_2 \eta_0)+\frac{\theta(\theta_1+\theta_2) \exp\{p\}}{\varepsilon}\ln k \notag\\
&+\frac{1}{p} \Big\{H+\frac{\varepsilon (\theta_1{\sigma_1^u}^2+\theta_2{\rho_1^u}^2) }{2}\Big\}\Big],\notag
\end{align*}
which implies
$$\limsup_{t\to\infty} \frac{\theta_1 \xi_t+\theta_2 \eta_t}{\ln t}\leq \frac{\theta(\theta_1+\theta_2) \exp\{p\}}{\varepsilon}\cdot$$
Letting $\varepsilon\to 1^-, \theta \to 1^+, p\to 0^+$ and noting $\mathbb P(\Omega_1\cap \Omega_2)=1$ yields
$$\limsup_{t\to\infty} \frac{\theta_1 \xi_t+\theta_2 \eta_t}{\ln t}\leq \theta_1+\theta_2 \hspace{1cm} \text { a.s.,}$$
i.e.,
$$\limsup_{t\to \infty} \frac{\ln\left[x_t^{\theta_1}y_t^{\theta_2}\right]}{\ln t}\leq \theta_1+\theta_2 \hspace{1cm} \text { a.s. }$$
Now, we prove the remaining inequality. Putting $V(x,y)=\ln (\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})$
and using It\^o's formula, we get easily that
\begin{align} \label{E17}
dV(x_t,y_t)=& \Big[ \frac{\varsigma_1 \theta_1 x^{\theta_1} }{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}}(a_1-b_1 x-\frac{c_1 y}{x+e y}) \notag\\
& + \frac{\varsigma_2 \theta_2 y^{\theta_2} }{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}}(-a_2-b_2 y+\frac{c_2x}{x+e y}) \notag \\
&+ \frac{[\varsigma_1 \theta_1 (\theta_1-1)x^{\theta_1}(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})-\varsigma_1^2 \theta_1^2 x^{2\theta_1} ] \sigma_1^2}{2(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})^2}\notag \\
&+ \frac{[\varsigma_2 \theta_2 (\theta_2-1)y^{\theta_2}(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})-\varsigma_2^2 \theta_2^2 y^{2\theta_2} ] \rho_1^2}{2(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})^2} \notag \\
&-\frac{\varsigma_1\varsigma_2 \theta_1\theta_2\sigma_1 \rho_1x^{\theta_1}y^{\theta_2} }{(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})^2} \Big]dt\notag\\
&+\frac{\varsigma_1\theta_1 \sigma_1x^{\theta_1}+\varsigma_2\theta_2 \rho_1 y^{\theta_2}}{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}} dw_t\notag\\
=&P(x,y,t)dt+\frac{\varsigma_1\theta_1 \sigma_1 x^{\theta_1}+\varsigma_2\theta_2 \rho_1 y^{\theta_2}}{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}} dw_t,
\end{align}
where
\begin{align*}
P(x,y,t)=&\frac{\varsigma_1 \theta_1 x^{\theta_1} }{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}}(a_1-b_1 x-\frac{c_1 y}{x+e y})\\
& + \frac{\varsigma_2 \theta_2 y^{\theta_2} }{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}}(-a_2-b_2 y+\frac{c_2x}{x+e y}) \\
&+ \frac{\varsigma_1 \theta_1 \sigma_1^2[\varsigma_2(\theta_1-1) y^{\theta_2}-\varsigma_1 x^{\theta_1} ] x^{\theta_1}}{2(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})^2} \\
&+ \frac{\varsigma_2 \theta_2 \rho_1^2[\varsigma_1(\theta_2-1) x^{\theta_1}-\varsigma_2 y^{\theta_2} ] y^{\theta_2}}{2(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})^2}\\
&-\frac{\varsigma_1\varsigma_2 \theta_1\theta_2 \sigma_1 \rho_1x^{\theta_1}y^{\theta_2} }{(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})^2} \cdot
\end{align*}
Putting
$$K=\sup_{(x,y,t)\in R^2_+\times \mathbb R_{+0}} \left [P(x,y,t)+(\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2})\right]$$
and
$$M_t=\int_0^t \frac{\varsigma_1\theta_1 \sigma_1x^{\theta_1}+\varsigma_2\theta_2 \rho_1y^{\theta_2}}{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}} dw_s,$$
then $\{M_t, \mathcal F_t, t\geq 0\}$ is a martingale, and since $ \theta_i\in [0,1), $ we see that $K<\infty.$ The quadratic variation of $M_t$ can be computed by \cite[Theorem 5.14, p.25]{M} as follows.
$$<M,M>_t=\int_0^t \left[\frac{\varsigma_1\theta_1\sigma_1 x^{\theta_1}+\varsigma_2\theta_2 \rho_1y^{\theta_2}}{\varsigma_1 x^{\theta_1}+\varsigma_2 y^{\theta_2}} \right]^2ds.$$
It is easy to see that
$$\limsup_{t\to \infty}\frac{<M,M>_t}{t}\leq \left[\max\{\theta_1 \sigma_1^u,\theta_2 \rho_1^u\}\right]^2.$$
So, using the strong law of large numbers for martingale \cite[Theorem 1.3.4]{M}, we have
\begin{equation} \label{E19}
\lim_{t\to \infty} \frac{M_t}{t}=0 \hspace{1cm} \text{ a.s.}
\end{equation}
On the other hand, from \eqref{E17}, we have
$$0<V(x_t,y_t)\leq \int_0^t [K -(\varsigma_1 x^{\theta_1}_s+\varsigma_2 y^{\theta_2}_s)]ds+M_t,$$
from which follows
$$ \frac{1}{t} \int_0^t (\varsigma_1 x^{\theta_1}_s+\varsigma_2 y^{\theta_2}_s)ds \leq K+ \frac{M_t}{t}\cdot$$
Therefore, by \eqref{E19},
$$\limsup_{t\to \infty} \frac{1}{t} \int_0^t (\varsigma_1 x^{\theta_1}_s+\varsigma_2 y^{\theta_2}_s)ds\leq K \hspace{1cm} \text{ a.s.}$$
\end{proof}
\begin{remark}
For the deterministic version of model \eqref{E4}, i.e., $\sigma_i=\rho_i=0$ and the other coefficients constant, it is easy to see that $\lim_{t\to \infty} y_t=0 $ holds under some special conditions, i.e., the predator dies out, but we can never have $\lim_{t\to \infty} x_t=0$ (if $\lim_{t\to \infty} y_t=0 $ then $\liminf_{t\to \infty} x_t\geq \frac{a_1}{b_1}>0$). However, in the above theorem, if $K=0$ then both prey and predator die out. This means that a relatively large stochastic perturbation can cause the extinction of the populations. Moreover, the prey population dies out even when there is no predator, and the extinction is rapid (at an exponential rate). We see this in the following two theorems.
\end{remark}
\begin{theorem} \label{T6}
Under condition {\rm(H1)}, if the prey is absent, i.e., $x_t=0$ a.s. for all $t\geq 0,$ then the predator dies out with probability one. Furthermore, the predator dies out at an exponential rate, i.e.,
$$\limsup_{t\to \infty} \frac{\ln y_t}{t}\leq -\inf_{t\geq 0}\left [a_2(t)+\frac{\rho_1^2(t)}{2}\right] \hspace{1cm} \text{ a.s.}$$
\end{theorem}
\begin{proof}
Writing the predator quantity at time $t$ as $y_t=\exp\{\eta_t\}$, the process $\eta_t$ satisfies the equation
$$d\eta_t=\left[-a_2-\frac{\rho_1^2}{2}-b_2 \exp\{\eta_t\} \right]dt+\rho_1dw_t.$$
Thus,
\begin{align} \label{E28}
\eta_t=&\eta_0+ \int_0^t \left[-a_2-\frac{\rho_1^2}{2}-b_2 \exp\{\eta_s\} \right]ds+ M_t \notag\\
\leq & \eta_0 - \inf_{t\geq 0}\left [a_2(t)+\frac{\rho_1^2(t)}{2}\right] t +M_t,
\end{align}
where
$M_t=\int_0^t \rho_1(s) dw_s$
is a martingale. The quadratic variation of $M_t$,
$$<M,M>_t=\int_0^t \rho_1^2(s)ds,$$
satisfies
$$\limsup_{t\to \infty}\frac{<M,M>_t}{t}\leq {\rho_1^u}^2.$$
Using the strong law of large numbers for martingales gives
$\lim_{t\to \infty} \frac{M_t}{t}=0 \,\, a.s.$
It then follows from \eqref{E28} that
$$\limsup_{t\to \infty} \frac{\ln y_t}{t}\leq -\inf_{t\geq 0}\left [a_2(t)+\frac{\rho_1^2(t)}{2}\right] \hspace{1cm} \text{ a.s.}$$
and $\lim_{t\to \infty}y_t=0$ a.s.
\end{proof}
\begin{theorem} \label{T7}
Under condition {\rm(H1)}, if the predator is absent, i.e., $y_t=0$ a.s. for all $t\geq 0,$ then the quantity of prey satisfies the following
\begin{itemize}
\item [(i)]
If $\sup_{t\geq 0}\left\{a_1(t)-\frac{\sigma_1^2(t)}{2}\right\}<0$ then $\lim_{t\to \infty} x_t=0 \text{ a.s.}$ and the prey dies out at an exponential rate;
\item[(ii)] If $\sup_{t\geq 0}\left\{a_1(t)-\frac{\sigma_1^2(t)}{2}\right\}=0$ then $\lim_{t\to \infty} \mathbb E x_t=0;$
\item [(iii)]
$$\limsup_{t\to \infty} \frac{\ln x_t}{\ln t}\leq 1 \hspace{1cm} \text{ a.s.}$$
\end{itemize}
\end{theorem}
\begin{proof}
As in Theorem \ref{T6}, writing the prey quantity at time $t$ as $x_t=\exp\{\xi_t\}$, the process $\xi_t$ satisfies the equation
\begin{equation*}
d\xi_t=\left[a_1-\frac{\sigma_1^2}{2}-b_1 \exp\{\xi_t\} \right]dt+\sigma_1dw_t .
\end{equation*}
For Case (i), we have
$$d\xi_t\leq \sup_{t\geq 0}\left\{a_1(t)-\frac{\sigma_1^2(t)}{2}\right\} dt+\sigma_1 dw_t.$$
Using the same arguments as in Theorem \ref{T6} yields
$$\limsup_{t\to \infty} \frac{\ln x_t}{t}\leq \sup_{t\geq 0}\left\{a_1(t)-\frac{\sigma_1^2(t)}{2}\right\}<0,$$
from which it follows that $\lim_{t\rightarrow \infty} x_t=0$ a.s. and the prey dies out at an exponential rate. Consider Case (ii).
It follows from
\begin{equation*}
\xi_t=\xi_0+\int_0^t \left[a_1-\frac{\sigma_1^2}{2}-b_1 \exp\{\xi_s\} \right]ds+\int_0^t \sigma_1(s) dw_s,
\end{equation*}
and Jensen's inequality that
$$\mathbb E\xi_t\leq \xi_0- b_1^l \int_0^t \mathbb E \exp\{\xi_s\}ds\leq \xi_0- b_1^l \int_0^t \exp\{\mathbb E\xi_s\}ds.$$
Therefore, $\mathbb E\xi_t \leq Z_t$ where $Z_t$ is the solution of the following differential equation
$$Z'_t=- b_1^l \exp\{Z_t\}, \quad Z_0=\xi_0.$$
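This ODE integrates in closed form by separating variables:

```latex
% From Z'_t = -b_1^l e^{Z_t} and Z_0 = \xi_0:
\frac{d}{dt}\, e^{-Z_t} = -Z'_t\, e^{-Z_t} = b_1^l,
\qquad\text{so}\qquad
e^{-Z_t} = b_1^l\, t + e^{-\xi_0}.
```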
It is easy to see that
$Z_t=-\log [b_1^l t+\exp\{-\xi_0\}] \to -\infty \text{ as } t\to \infty,$
then $\lim_{t\to \infty} \mathbb E\xi_t=-\infty.$ Using Jensen's inequality again gives $\lim_{t\to \infty}\mathbb E x_t=0.$
The proof of Case (iii) is similar to that of Theorem \ref{T4}; we therefore omit it here.
\end{proof}
{\bf Acknowledgement.}
The authors would like to thank the anonymous referee(s) for their very helpful suggestions, which greatly improved the manuscript. |
2108.09591 | \section{Introduction}
Breast cancer was responsible for over 42,000 deaths among women in the United States and 685,000 deaths globally in 2020 \cite{Siegel2020}. Mammogram screening for early detection of breast lesions is important for decreasing the mortality rate \cite{broeders2012impact}. A major challenge in the screening procedure is the considerable inter-radiologist variation in diagnostic performance \cite{mckinney2020international}. Both missing a cancer case and predicting a lesion to be malignant due to over-diagnosis have severe consequences. False negatives can delay treatment and decrease the patient's chance of survival \cite{houssami2017epidemiology}. False positives, in contrast, can lead to unnecessary biopsies, which often cause stress, bleeding, bruising, and financial burden.
The diagnostic performance of computer-aided software has increased significantly thanks to recent advances in deep learning. Such systems hold great potential for helping radiologists make more accurate diagnoses and reduce performance variation \cite{freer2001screening}. Previous work focuses mostly on the general problem of classifying whether a lesion is benign or malignant \cite{shen2019deep, ribli2018detecting}. Recently, several papers on breast cancer have started to tackle more specific classification problems, including lesion-type classification (mass or calcification) and BI-RADS density classification (levels 1 to 4) \cite{shen2019deep, matthews2020multi, chougrad2020multi}. In \cite{shen2019deep}, a lesion image is categorized into five classes: benign mass, malignant mass, benign calcification, malignant calcification, or background. Masses are defined as three-dimensional space-occupying lesions in the breast. Calcifications are deposits of calcium salts in the breast. Because different types of lesions (mass, calcification, etc.) have different properties, classifying the lesion type first can help the cancer diagnostic process. Breast density is another important factor for pathology evaluation \cite{kerlikowske2010breast}. Matthews et al. \cite{matthews2020multi} focused on this sub-problem and showed promising results. Chougrad et al. \cite{chougrad2020multi} combined the information of lesion type, breast density, and pathology in a multi-label setting to exploit the correlation that may exist between those labels.
To the best of our knowledge, most approaches use only mammograms as input for pathology classification. In this work, using the clinical-feature labels that accompany each mammogram in the CBIS-DDSM database \cite{lee2017curated}, we propose a multimodal model that improves pathology classification performance by leveraging multimodal data. Further, to address missing clinical features, which are common in real-world settings, we evaluate our framework when it is combined with either co-attention or cross-attention.
Specifically, the contributions of this paper include:
\begin{enumerate}
\item \textbf{Multimodal feature combination for improving breast cancer pathology diagnosis}: we combine imaging features and clinical features for breast cancer diagnosis. The imaging features are an embedding extracted from a pretrained deep learning model. For the clinical features, we use a total of five: breast density, mass shape, mass margins, calcification type, and calcification distribution. These features are represented as a vector concatenated from five one-hot vectors (with a zero vector for each missing feature), one per feature.
\item \textbf{Evaluation of co-attention and cross-attention in the presence of missing data}: in real-world settings, clinical features are sometimes not provided or are incomplete due to variations in clinical practice. To allow our model to adapt to such clinical situations so that it can be easily plugged into any clinical workflow, we incorporate either a co-attention or a cross-attention module into our proposed framework. We further train these modules to cope with missing clinical features.
\end{enumerate}
\section{Methodology}
\subsection{Multimodal Fusion Network Architecture}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/architectures/concat.png}
\caption{Overall multimodal fusion network architecture. Mammogram embedding is obtained by feeding a mammogram through a pretrained deep network. Both mammogram and clinical embedding are first projected into a fixed k-dimensional embedding. Then, two new projected embeddings are concatenated before feeding into a fully-connected network for classification.}
\label{fig:framework}
\end{figure}
\subsubsection{Mammogram Embedding}
We use a ResNet-50 pretrained on ImageNet to extract feature maps from mammograms. For each mammogram $x_i$, we obtain a 2048-d feature map $e_i$ from the global average pooling layer:
\begin{equation}
e_i=ResNet50(x_i)
\end{equation}
In this work, we use ResNet-50 as our model backbone, but it can be replaced by any other architecture. The obtained feature map $e_i$ is further projected into a 100-dimensional embedding vector $\Tilde{e_i}$:
\begin{equation}
\Tilde{e_i}=F(W_x^T\cdot{e_i}+b_x),
\end{equation}
where $F$ is a nonlinear activation function; here, ReLU is used.
\subsubsection{Clinical Embedding}
We use a total of five clinical features: breast density, mass shape, mass margins, calcification type, and calcification distribution. Naturally, mass shape and mass margins are present only for mass lesions; similarly, calcification type and calcification distribution are present only for calcification lesions.
Each of the five features is described by a one-hot vector.
\begin{itemize}
\item Breast Density - BI-RADS classifies breast density into four groups: entirely fatty, scattered fibroglandular densities, heterogeneously dense, and extremely dense. Thus, our one-hot vector will have four dimensions.
\item Mass Shape - the shape can take values such as round, oval, irregular, and lobulated. Mass shape is categorized into eight classes, so the representation is an 8-dimensional one-hot vector.
\item Mass Margins - the margin can take values such as ill defined, circumscribed, and spiculated. Mass margin is categorized into five classes, so the representation is a 5-dimensional one-hot vector.
\item Calcification Type - the type can take values such as amorphous, punctate, and vascular. Calcification type is categorized into fourteen classes, so the representation is a 14-dimensional one-hot vector.
\item Calcification Distribution - the distribution can take values such as clustered, linear, and regional. Calcification distribution is categorized into five classes, so the representation is a 5-dimensional one-hot vector.
\end{itemize}
In the case of missing features, the corresponding vectors are set to zero vectors. For instance, mass lesions have neither a calcification type nor a calcification distribution, so those two representations are zero vectors; the calcification case is analogous. In real-world applications, any missing feature likewise results in one zero vector. In summary, our clinical embedding has 36 dimensions, the sum of the dimensions of the five feature vectors.
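As a concrete illustration, the 36-dimensional clinical vector can be assembled as follows. This is a minimal sketch: the category counts (4, 8, 5, 14, 5) follow the text, while the helper name, feature ordering, and example category indices are our own placeholders.

```python
import numpy as np

# Category counts per clinical feature, as described in the text:
# density (4), mass shape (8), mass margins (5),
# calcification type (14), calcification distribution (5).
FEATURE_DIMS = {
    "density": 4,
    "mass_shape": 8,
    "mass_margins": 5,
    "calc_type": 14,
    "calc_dist": 5,
}

def clinical_embedding(features):
    """Build the 36-d clinical vector from per-feature category indices.

    `features` maps a feature name to a category index, or to None
    when the feature is missing (encoded as a zero vector).
    """
    parts = []
    for name, dim in FEATURE_DIMS.items():
        one_hot = np.zeros(dim)
        idx = features.get(name)
        if idx is not None:
            one_hot[idx] = 1.0
        parts.append(one_hot)
    return np.concatenate(parts)

# A mass lesion: the two calcification features are missing (zero vectors).
c = clinical_embedding({"density": 2, "mass_shape": 0, "mass_margins": 4,
                        "calc_type": None, "calc_dist": None})
```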
Each clinical embedding $c_i$ is projected into a 100-dimensional embedding $\Tilde{c_i}$, which is the same as for mammogram embedding:
\begin{equation}
\Tilde{c_i}=F(W_t^T\cdot{c_i}+b_t)
\end{equation}
\subsubsection{Multimodal fusion}
After the projected mammogram embedding $\Tilde{e_i}$ and projected clinical embedding $\Tilde{c_i}$ are obtained, they are concatenated before being fed into two fully-connected layers for pathology classification:
\begin{equation}
k_i=Concat(\Tilde{e_i}, \Tilde{c_i}),
\end{equation}
where $k_i$ is the final concatenated embedding. There are multiple ways to fuse information, e.g., early fusion and late fusion. For simplicity, in this work we simply concatenate the two projected embedding vectors, but other methods could be tried to see whether they further improve performance. Figure \ref{fig:framework} shows the overall framework of our proposed multimodal fusion model.
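A minimal numerical sketch of the projection-and-concatenation step above (the dimensions follow the text; the random weights and inputs are placeholders, not trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Projection weights: the 2048-d image feature map and the 36-d clinical
# vector are both mapped to 100-d embeddings.
W_x, b_x = rng.normal(size=(2048, 100)) * 0.01, np.zeros(100)
W_t, b_t = rng.normal(size=(36, 100)) * 0.01, np.zeros(100)

e_i = rng.normal(size=2048)   # stand-in for the ResNet-50 feature map
c_i = rng.normal(size=36)     # stand-in for the clinical one-hot vector

e_proj = relu(W_x.T @ e_i + b_x)        # projected mammogram embedding
c_proj = relu(W_t.T @ c_i + b_t)        # projected clinical embedding
k_i = np.concatenate([e_proj, c_proj])  # fused 200-d embedding
```

In the full model, $k_i$ would then pass through the two fully-connected classification layers.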
\subsection{Co-attention and Cross-attention Modules}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/architectures/coatt_crossatt.png}
\caption{Network architecture with co-attention/cross-attention. We incorporate either co-attention or cross-attention module to our proposed framework. For cross-attention module, two dashed arrows in the figure are removed.}
\label{fig:att_framework}
\end{figure}
In clinical settings, mammograms are often not provided together with clinical features, or only some of the features are provided. This leads to the problem of missing values. In this work, we evaluate our model when it is combined with either a co-attention or a cross-attention module. Here, co-attention/cross-attention is used with the intention of teaching the model the relevant information shared between a mammogram and its clinical features. Figure \ref{fig:att_framework} shows our proposed framework combined with a co-attention/cross-attention module.
\subsubsection{Co-attention Module}
Co-attention was proposed in \cite{lu2019vilbert}. Inspired by this idea, we combine the projected mammogram embedding $\Tilde{e_i}$ and the projected clinical embedding $\Tilde{c_i}$ as follows:
\begin{equation}
\begin{split}
\alpha_{e_i}=\sigma(W_x^{'T}\cdot{Concat(e_i, c_i)} + b_v^{'}),\\
\alpha_{c_i}=\sigma(W_t^{'T}\cdot{Concat(e_i, c_i)} + b_t^{'}),\\
e_i^{aug}=\alpha_{e_i}\odot\Tilde{e_i}, \quad c_i^{aug}=\alpha_{c_i}\odot\Tilde{c_i},\\
k_i=Concat(e_i^{aug}, c_i^{aug}),
\end{split}
\end{equation}
where $\sigma$ is the sigmoid activation function, $\alpha_{e_i}$ and $\alpha_{c_i}$ are the augmented coefficients, $k_i$ is the final concatenated embedding.
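The co-attention gating can be sketched numerically as follows; per the equations above, both coefficients are computed from the concatenated raw embeddings and applied element-wise to the projected ones (all weights and inputs are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d_e, d_c, k = 2048, 36, 100           # raw and projected dimensions
e_i, c_i = rng.normal(size=d_e), rng.normal(size=d_c)
e_proj, c_proj = rng.normal(size=k), rng.normal(size=k)  # projected embeddings

# Co-attention: both coefficients see the concatenated raw embeddings.
W_xp = rng.normal(size=(d_e + d_c, k)) * 0.01
W_tp = rng.normal(size=(d_e + d_c, k)) * 0.01
joint = np.concatenate([e_i, c_i])
alpha_e = sigmoid(W_xp.T @ joint)     # gate for the image embedding
alpha_c = sigmoid(W_tp.T @ joint)     # gate for the clinical embedding

# Element-wise gating, then concatenation into the fused embedding.
k_i = np.concatenate([alpha_e * e_proj, alpha_c * c_proj])
```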
\subsubsection{Cross-attention Module}
Cross-attention was proposed in \cite{abavisani2020multimodal}. The only difference between co-attention and cross-attention lies in how the augmented coefficients are calculated. While co-attention uses information from both modalities, cross-attention uses only the information in the other modality to compute the coefficient for the current modality, as follows:
\begin{equation}
\begin{split}
\alpha_{e_i}=\sigma(W_x^{'T}\cdot{c_i} + b_v^{'}), \quad
\alpha_{c_i}=\sigma(W_t^{'T}\cdot{e_i} + b_t^{'})
\end{split}
\end{equation}
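Under the same placeholder setup as before (random stand-in weights and inputs), the cross-attention variant changes only the input of each coefficient: each modality's gate is computed from the other modality alone.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d_e, d_c, k = 2048, 36, 100
e_i, c_i = rng.normal(size=d_e), rng.normal(size=d_c)

# Cross-attention: each coefficient depends only on the *other* modality.
W_xp = rng.normal(size=(d_c, k)) * 0.01   # clinical features -> image gate
W_tp = rng.normal(size=(d_e, k)) * 0.01   # image features -> clinical gate
alpha_e = sigmoid(W_xp.T @ c_i)
alpha_c = sigmoid(W_tp.T @ e_i)
```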
\subsubsection{Missing Clinical Variables}
Hospitals have different practices and therefore might not use the same set of clinical variables. Even within the same hospital, patient records often have missing values. It is therefore important to study the effect of missing variables on our model's performance. We use bait-and-switch training, in which clinical features are removed from randomly selected mammograms during both the training and testing phases.
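The bait-and-switch scheme can be sketched as follows: with probability $p$ (0.3 in our experiments), a sample's clinical vector is zeroed out, at both training and test time. The function name and RNG handling here are our own.

```python
import numpy as np

def bait_and_switch(clinical_batch, p=0.3, rng=None):
    """Zero out the clinical vector of each sample with probability p."""
    if rng is None:
        rng = np.random.default_rng()
    keep = rng.random(clinical_batch.shape[0]) >= p   # True -> keep features
    return clinical_batch * keep[:, None]

rng = np.random.default_rng(0)
batch = np.ones((1000, 36))                 # toy batch of clinical vectors
masked = bait_and_switch(batch, p=0.3, rng=rng)
dropped = (masked.sum(axis=1) == 0).mean()  # fraction of zeroed samples, ~0.3
```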
\begin{figure*}
\centering
\begin{subfigure}[c]{0.37\textwidth}
\resizebox{\linewidth}{!}{
\setlength{\tabcolsep}{5pt}
\begin{tabular}{cc}
\shortstack{\textbf{{\scriptsize Mammogram Only}} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/roc_curves/without_clinical_feats.png}}&
\shortstack{\textbf{{\scriptsize With Clinical Features}} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/roc_curves/with_clinical_feats.png}} \\
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/pr_curves/without_clinical_feats.png}}&
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/pr_curves/with_clinical_feats.png}} \\
\end{tabular}
}
\caption{Results comparison when using additional clinical features.}
\label{fig:origin_vs_with_clinical_feats}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.52\textwidth}
\centering
\resizebox{\linewidth}{!}{
\setlength{\tabcolsep}{7pt}
\begin{tabular}{ccc}
\shortstack{\textbf{Concat} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/roc_curves/concat.png}}&
\shortstack{\textbf{Co-attention} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/roc_curves/coatt.png}}&
\shortstack{\textbf{Cross-attention} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/roc_curves/crossatt.png}} \\
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/pr_curves/concat.png}}&
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/pr_curves/coatt.png}}&
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/pr_curves/crossatt.png}} \\
\end{tabular}
}
\caption{Bait-and-Switch Training: Here, the probability of mammograms
whose clinical data are removed is set to be 0.3 for both training and testing.}
\label{fig:bait_and_switch}
\end{subfigure}
\caption{ROC curves and PR curves evaluated on the official CBIS-DDSM test set.}
\label{fig:models_evaluation}
\vspace{-3mm}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/visualization/compare_cases.png}
\caption{Visualization of four cases when the original model fails to classify but using additional clinical features can help. For each case, on the left is the prediction of the original model while on the right is for the model that uses additional clinical features.}
\label{fig:visualize_fail_cases}
\end{figure}
\section{Experimental Results}
\subsection{Breast Lesion Dataset}
Our experiments use the CBIS-DDSM dataset, which consists of 1592 mass mammograms and 1511 calcification mammograms. The official train/test split is 1231/361 for the mass cases and 1227/284 for the calcification cases. We further split the official training set into 75\% for training and 25\% for validation. We consider only clinical features with a single label because the number of multi-label lesions is very small. We use the bounding box inferred from the ground-truth mask to crop the lesion from the mammogram, without any further pre-processing steps.
\subsection{Hyperparameter Settings}
We trained our network using the settings in \cite{shen2019deep}. We used ResNet-50 as our backbone and finetuned the network in three stages, with more layers gradually unfrozen in each stage. We used the Adam optimizer with learning rates of 1e-3, 1e-4, and 1e-5 for the 1$^{st}$, 2$^{nd}$, and 3$^{rd}$ stages, respectively. We used the same settings for the multimodal networks; for these networks, both the projected mammogram embedding and the projected clinical embedding have 100 dimensions, and each of the two fully-connected layers has 200 neurons. Our training used the same data augmentation procedure as \cite{shen2019deep}.
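The three-stage finetuning schedule above can be summarized as a configuration sketch; the learning rates follow the text, while the layer-group names are illustrative placeholders, not actual ResNet-50 module names.

```python
# Three-stage finetuning: each stage lowers the learning rate and
# unfreezes more of the backbone (group names are placeholders).
STAGES = [
    {"stage": 1, "lr": 1e-3, "unfrozen": ["classifier_head"]},
    {"stage": 2, "lr": 1e-4, "unfrozen": ["classifier_head", "top_blocks"]},
    {"stage": 3, "lr": 1e-5, "unfrozen": ["classifier_head", "top_blocks",
                                          "all_layers"]},
]
```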
\subsection{Multimodal Classification}
The results when using additional clinical features are shown in Fig.~\ref{fig:origin_vs_with_clinical_feats}. Both the area under the ROC curve (AUC-ROC) and the area under the Precision-Recall curve (AUC-PR) increase significantly when clinical features are used: the average AUC-ROC rises from 0.892 to 0.945, and the average AUC-PR from 0.715 to 0.82. Our results indicate that additional clinical features can boost pathology classification performance by a large margin. Figure \ref{fig:visualize_fail_cases} shows four example cases whose prediction is incorrect when only the mammogram is used but correct when clinical features are added; some cases previously misclassified as benign are now recognized as malignant by leveraging the clinical information. All three fusion approaches perform competitively, with co-attention and cross-attention having a slight edge over plain feature concatenation.
\subsection{Effects of Missing Variables}
In our experiments, we set the probability that a mammogram's clinical data are removed to 0.3 for both training and testing, and evaluated three fusion approaches: plain concatenation, co-attention, and cross-attention. The results are shown in Figure \ref{fig:bait_and_switch}. The ROC curves show no significant differences among the three settings. The PR curves, however, show promise for using attention to handle missing data: the AUC-PR for malignant mass and malignant calcification, 0.72 and 0.69 under plain concatenation, increase to 0.73 and 0.72 with co-attention, and further to 0.74 and 0.75 with cross-attention. It is worth mentioning that the AUC-PR for benign mass decreases slightly when attention modules are used (from 0.86 to 0.83).
\section{Conclusion}
This paper studies multimodal deep networks based on simple concatenation, cross-attention, and co-attention for combining mammography and clinical features. We demonstrate that our multimodal approach significantly increases the breast lesion classifier's areas under the curves and thus its diagnostic precision. We also examine the model's performance under the scenario of missing variables due to variations in clinical practice. Our future work will explore the feasibility of extracting the selected clinical features directly from mammograms. \vspace{5mm}
\textbf{Acknowledgement:} This research is funded by NIH R01CA251710, the T.T. and W.F. Chao Foundation, and the John S. Dunn Research Foundation.
\bibliographystyle{IEEEtran}
\section{Introduction}
Breast cancer is responsible for over 42,000 women deaths in the United States and 685,000 deaths globally in 2020 \cite{Siegel2020}. Mammogram screening for early detection of breast lesions is important for decreasing the mortality rate \cite{broeders2012impact}. A major challenge in the screening procedure is the considerable inter-radiologist diagnostic performance variation \cite{mckinney2020international}. Both missing a cancer case or predicting a lesion to be malignant due to over-diagnosis would create severe consequences. False-negative cases could delay treatment and decrease the patient survival chance \cite{houssami2017epidemiology}. In contrast, false-positive diagnoses can lead to unnecessary biopsies which often create stress, bleeding, bruising, and financial burden.
The diagnostic performance of computer-aided software has significantly increased thanks to recent advances in deep learning. They hold great potential in helping radiologists to make more accurate diagnoses and reduce the performance variation \cite{freer2001screening}. Previous work focuses mostly on the general problem of classifying whether the lesion is benign or malignant \cite{shen2019deep, ribli2018detecting}. Recently, several papers on breast cancer have started to tackle more specific classification problems, including lesion types classification (mass or calcification) and BI-RADS density classification (level 1 to 4) \cite{shen2019deep, matthews2020multi, chougrad2020multi}. In \cite{shen2019deep}, given a lesion image, it will be categorized into five different classes: benign mass, malignant mass, benign calcification, malignant calcification or background. Masses are defined as three-dimensional space-occupying lesions in the breasts. Calcifications are deposits of calcium salts in the breast. Because different types of lesions (mass, calcification, etc.) have different properties, classifying the lesion type first can help the cancer diagnostic process. Breast density is another important factor for pathology evaluation \cite{kerlikowske2010breast}. In \cite{matthews2020multi}, they focused on this sub-problem and showed promising results. Chougrad et al. \cite{chougrad2020multi} combined the information of lesion types, breast density and pathology in a multi-label setting to exploit the correlation that may exist between those labels.
To the best of our knowledge, most approaches only use mammograms as input information for pathology classification. In this work, by using the available labels of clinical features that go along with their corresponding mammogram in the CBIS-DDSM database \cite{lee2017curated}, we propose a multimodal model that help improve the pathology classification performance by leveraging multimodal data. Further, for the problem of missing clinical features data which is often the case in real-world setting, we carry out evaluations for our proposed framework when it is combined with either co-attention or cross-attention.
Specifically, the contributions of this paper include:
\begin{enumerate}
\item \textbf{Multimodal features combination for improving breast cancer pathology diagnostic}: we combine imaging features and clinical features for breast cancer diagnostic. Here, imaging features will be an embedding extracted from a pretrained deep learning model. For clinical features, we use a total of five features: breast density, mass shape, mass margins, calcification type, and calcification distribution. These features will be represented as a vector which is concatenated from 5 one-hot vectors (or a zero vector for each of missing feature), each vector is for one of the 5 features.
\item \textbf{Evaluation of using co-attention and cross-attention in the presence of missing data}: in real-world setting, clinical features sometimes are either not provided or incomplete due to variations in clinical practice. To allow our model to be wisely adaptive to clinical situations so that it can be easily plugged to any clinical workflows, we incorporate either co-attention or cross-attention module to our proposed framework. We further train them to cope with missing clinical features.
\end{enumerate}
\section{Methodology}
\subsection{Multimodal Fusion Network Architecture}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/architectures/concat.png}
\caption{Overall multimodal fusion network architecture. Mammogram embedding is obtained by feeding a mammogram through a pretrained deep network. Both mammogram and clinical embedding are first projected into a fixed k-dimensional embedding. Then, two new projected embeddings are concatenated before feeding into a fully-connected network for classification.}
\label{fig:framework}
\end{figure}
\subsubsection{Mammogram Embedding}
We use a ResNet-50 which has been pretrained on ImageNet to extract feature maps from mammograms. For each mammogram $x_i$, we obtain a 2048-d feature map $e_i$ from the global avarage pooling layer:
\begin{equation}
e_i=ResNet50(x_i)
\end{equation}
In this work, we use ResNet50 as our model backbone but it can be replaced by any other architectures. The obtained feature map $e_i$ is furthed projected into a 100-dimensional embedding vector $\Tilde{e_i}$:
\begin{equation}
\Tilde{e_i}=F(W_x^T\cdot{e_i}+b_x),
\end{equation}
where F is a nonlinear activation function, here ReLU is used.
\subsubsection{Clinical Embedding}
We use a total of five features for our clinical features: breast density, mass shape, mass margins, calcification type, and calcification distribution. Basically, mass shape and mass margins are only present for mass lesions. Similarly, calcification type and calcification distribution are only present for calcification lesions.
Each of the 5 features will be described by an one-hot vector.
\begin{itemize}
\item Breast Density - BI-RADS classifies breast density into four groups: entirely fatty, scattered fibroglandular densities, heterogeneously dense, and extremely dense. Thus, our one-hot vector will have four dimensions.
\item Mass Shape - shape can receive value such as round, oval, irregular, and lobulated. Mass shape is basically categorized into eight classes so the representation will be an 8-dimensional one-hot vector.
\item Mass Margins - margin can receive value such as ill defined, circumscribed, and spiculated. Mass margin is basically categorized into five classes so the representation will be a 5-dimensional one-hot vector.
\item Calcification Type - type can receive value such as amorphous, punctate, and vascular. Calcification type is basically categorized into fourteen classes so the representation will be a 14-dimensional one-hot vector.
\item Calcification Distribution - distribution can receive value such as clustered, linear, and regional. Calcification distribution is basically categorized into five classes so the representation will be a 5-dimensional one-hot vector.
\end{itemize}
In the cases of missing features, the corresponding vectors are set as zero vectors. For instance, mass lesions will neither have calcification type nor calcification distribution so their representations will be two zero vectors. This is similar for the case of calcification lesions. In real-world applications, any missing feature will result in one zero vector. In summary, our clinical embedding will have 36 dimensions, which is the sum of dimensions of 5 feature vectors.
Each clinical embedding $c_i$ is projected into a 100-dimensional embedding $\Tilde{c_i}$, which is the same as for mammogram embedding:
\begin{equation}
\Tilde{c_i}=F(W_t^T\cdot{c_i}+b_t)
\end{equation}
\subsubsection{Multimodal fusion}
After projected mammogram embedding $\Tilde{e_i}$ and projected clinical embedding $\Tilde{c_i}$ are obtained, they will be concatenated before feeding into 2 fully connected layers for pathology classification:
\begin{equation}
k_i=Concat(\Tilde{e_i}, \Tilde{c_i}),
\end{equation}
where $k_i$ is the final concatenated embedding. There are multiple ways to fuse information, e.g., early fusion and late fusion. For simplicity, in this work, we simply concatenate two projected embedding vectors but other methods could be experimented to see whether they further improve the model performance. Figure \ref{fig:framework} shows the overall framework of our proposed multimodal fusion model.
\subsection{Co-attention and Cross-attention Modules}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/architectures/coatt_crossatt.png}
\caption{Network architecture with co-attention/cross-attention. We incorporate either co-attention or cross-attention module to our proposed framework. For cross-attention module, two dashed arrows in the figure are removed.}
\label{fig:att_framework}
\end{figure}
In clinical settings, mammograms are not often provided together with clinical features or only part of the features are provided. This leads to the problems of missing values. In this work, we make evaluations of our model when it is combined with either co-attention or cross-attention module. Here, co-attention/cross-attention is used with the intention of teaching the model to learn the relevant information between a mammogram and its clinical features. Figure \ref{fig:att_framework} shows our proposed framework when combined with co-attention/cross-attention module.
\subsubsection{Co-attention Module}
Co-attention was proposed in \cite{lu2019vilbert}. In this paper, being inspired by the idea of co-attention, we try to combine the projected mammogram embedding $\Tilde{e_i}$ and the projected clinical embedding $\Tilde{c_i}$ as follows:
\begin{equation}
\begin{split}
\alpha_{e_i}=\sigma(W_x^{'T}\cdot{Concat(e_i, c_i)} + b_v^{'}),\\
\alpha_{c_i}=\sigma(W_t^{'T}\cdot{Concat(e_i, c_i)} + b_t^{'}),\\
e_i^{aug}=\alpha_{e_i}\odot\Tilde{e_i}, \quad c_i^{aug}=\alpha_{c_i}\odot\Tilde{c_i},\\
k_i=Concat(e_i^{aug}, c_i^{aug}),
\end{split}
\end{equation}
where $\sigma$ is the sigmoid activation function, $\alpha_{e_i}$ and $\alpha_{c_i}$ are the augmented coefficients, and $k_i$ is the final concatenated embedding.
\subsubsection{Cross-attention Module}
Cross-attention was proposed in \cite{abavisani2020multimodal}. Basically, the only difference between co-attention and cross-attention lies in the way the augmented coefficients are calculated. While co-attention uses information from both modalities, cross-attention only uses the information in the other modality to compute the coefficient for the current modality, as follows:
\begin{equation}
\begin{split}
\alpha_{e_i}=\sigma(W_x^{'T}\cdot{c_i} + b_v^{'}), \quad
\alpha_{c_i}=\sigma(W_t^{'T}\cdot{e_i} + b_t^{'})
\end{split}
\end{equation}
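The three fusion variants above (plain concatenation, co-attention gating, and cross-attention gating) can be sketched with NumPy. This is an illustrative sketch only: the dimensions and the randomly drawn weight matrices stand in for the learned projections and are not the trained model; the 100-dimensional projections match the hyperparameter settings reported later.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_clin, d_proj = 16, 8, 100   # arbitrary input sizes; 100-d projections as in the text

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Raw embeddings e_i (mammogram) and c_i (clinical) and random stand-in projections.
e = rng.normal(size=d_img)
c = rng.normal(size=d_clin)
P_e = rng.normal(size=(d_img, d_proj))
P_c = rng.normal(size=(d_clin, d_proj))
e_t, c_t = e @ P_e, c @ P_c          # \tilde{e}_i and \tilde{c}_i

# Plain fusion: concatenate the two projected embeddings.
k_concat = np.concatenate([e_t, c_t])

# Co-attention: both gates are computed from Concat(e_i, c_i).
W_x = rng.normal(size=(d_img + d_clin, d_proj)); b_v = np.zeros(d_proj)
W_t = rng.normal(size=(d_img + d_clin, d_proj)); b_t = np.zeros(d_proj)
ec = np.concatenate([e, c])
alpha_e = sigmoid(ec @ W_x + b_v)
alpha_c = sigmoid(ec @ W_t + b_t)
k_coatt = np.concatenate([alpha_e * e_t, alpha_c * c_t])

# Cross-attention: each gate uses only the OTHER modality.
V_x = rng.normal(size=(d_clin, d_proj))
V_t = rng.normal(size=(d_img, d_proj))
alpha_e_x = sigmoid(c @ V_x + b_v)
alpha_c_x = sigmoid(e @ V_t + b_t)
k_crossatt = np.concatenate([alpha_e_x * e_t, alpha_c_x * c_t])
```

In all three variants the final embedding has the same size, so the downstream classifier is unchanged; only how much of each modality passes through differs.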
\subsubsection{Missing Clinical Variables}
Hospitals have different practices and therefore might not use the same set of clinical variables. Even within the same hospital, patient information often has missing values. It is therefore important to study the effect of missing variables on our model's performance. We use bait-and-switch training, in which clinical features are removed from randomly selected mammograms during both the training and testing phases.
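A minimal sketch of the masking step, assuming the clinical vector of a dropped sample is simply zeroed (the exact mechanism is not spelled out in the text); the drop probability 0.3 matches the experiments:

```python
import numpy as np

def bait_and_switch(clinical_batch, p_drop=0.3, rng=None):
    """Zero the clinical-feature vectors of randomly chosen samples.

    Applied during both training and testing so the model learns to cope
    with mammograms that arrive without clinical data.  Zeroing the whole
    vector is an assumption of this sketch.
    """
    rng = rng if rng is not None else np.random.default_rng()
    keep = rng.random(len(clinical_batch)) >= p_drop   # keep with prob 1 - p_drop
    return clinical_batch * keep[:, None]

batch = np.ones((10000, 8))                            # dummy clinical features
masked = bait_and_switch(batch, p_drop=0.3, rng=np.random.default_rng(0))
drop_rate = 1.0 - masked.any(axis=1).mean()            # empirically close to 0.3
```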
\begin{figure*}
\centering
\begin{subfigure}[c]{0.37\textwidth}
\resizebox{\linewidth}{!}{
\setlength{\tabcolsep}{5pt}
\begin{tabular}{cc}
\shortstack{\textbf{{\scriptsize Mammogram Only}} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/roc_curves/without_clinical_feats.png}}&
\shortstack{\textbf{{\scriptsize With Clinical Features}} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/roc_curves/with_clinical_feats.png}} \\
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/pr_curves/without_clinical_feats.png}}&
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/with_and_without_using_clincal_feats/pr_curves/with_clinical_feats.png}} \\
\end{tabular}
}
\caption{Results comparison when using additional clinical features.}
\label{fig:origin_vs_with_clinical_feats}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.52\textwidth}
\centering
\resizebox{\linewidth}{!}{
\setlength{\tabcolsep}{7pt}
\begin{tabular}{ccc}
\shortstack{\textbf{Concat} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/roc_curves/concat.png}}&
\shortstack{\textbf{Co-attention} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/roc_curves/coatt.png}}&
\shortstack{\textbf{Cross-attention} \\ \includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/roc_curves/crossatt.png}} \\
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/pr_curves/concat.png}}&
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/pr_curves/coatt.png}}&
\shortstack{
\includegraphics[width=0.7\linewidth]{figures/new_experiments/bait_and_switch/pr_curves/crossatt.png}} \\
\end{tabular}
}
\caption{Bait-and-Switch Training: Here, the probability of mammograms
whose clinical data are removed is set to be 0.3 for both training and testing.}
\label{fig:bait_and_switch}
\end{subfigure}
\caption{ROC curves and PR curves evaluated on the official CBIS-DDSM test set.}
\label{fig:models_evaluation}
\vspace{-3mm}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/visualization/compare_cases.png}
\caption{Visualization of four cases when the original model fails to classify but using additional clinical features can help. For each case, on the left is the prediction of the original model while on the right is for the model that uses additional clinical features.}
\label{fig:visualize_fail_cases}
\end{figure}
\section{Experimental Results}
\subsection{Breast Lesion Dataset}
Our experiments use the CBIS-DDSM dataset, which consists of 1592 mass mammograms and 1511 calcification mammograms. The official train/test split is 1231/361 for the mass cases and 1227/284 for the calcification cases. We further split the official training set into 75\% for training and 25\% for validation. We only consider clinical features with a single label because the number of multi-label lesions is very small. We use the bounding box inferred from the ground-truth mask to crop the lesion from the mammogram without any further pre-processing steps.
\subsection{Hyperparameter Settings}
We trained our network using the settings in \cite{shen2019deep}. We used ResNet50 as our backbone and finetuned the network in three stages, with more layers gradually unfrozen at each stage. We used the Adam optimizer with learning rates of 1e-3, 1e-4, and 1e-5 for the 1$^{st}$, 2$^{nd}$, and 3$^{rd}$ stages, respectively. We used the same settings for the multimodal networks. For these networks, both the projected mammogram embedding and the projected clinical embedding have 100 dimensions. The two fully-connected layers each have 200 neurons. Our training used the same data augmentation procedure as \cite{shen2019deep}.
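The staged schedule can be summarized compactly. The learning rates are the ones quoted above; which layer groups are unfrozen at each stage is a plausible assumption, not specified in detail here:

```python
# Compact encoding of the three-stage finetuning schedule described above.
# The "unfrozen" descriptions are assumptions for illustration only.
stages = [
    {"stage": 1, "optimizer": "Adam", "lr": 1e-3, "unfrozen": "top layers"},
    {"stage": 2, "optimizer": "Adam", "lr": 1e-4, "unfrozen": "more backbone blocks"},
    {"stage": 3, "optimizer": "Adam", "lr": 1e-5, "unfrozen": "all layers"},
]
lrs = [s["lr"] for s in stages]   # decreases by a factor of 10 per stage
```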
\subsection{Multimodal Classification}
The results when using additional clinical features are shown in Fig.~\ref{fig:origin_vs_with_clinical_feats}. Both the area under the ROC curve (AUC-ROC) and the area under the Precision-Recall curve (AUC-PR) significantly increase when clinical features are used. In particular, the average AUC-ROC increases from 0.892 to 0.945, and the average AUC-PR from 0.715 to 0.82, when using attention deep networks. Our results indicate that using additional clinical features can boost pathology classification performance by a large margin. Figure \ref{fig:visualize_fail_cases} shows four example cases in which the prediction is incorrect when only the mammogram is used but becomes correct when clinical features are added. Some cases that had been misclassified as benign can now be recognized as malignant by leveraging the clinical information. All three fusion approaches perform competitively, with co-attention and cross-attention having a slight edge over the traditional feature concatenation method.
\subsection{Effects of Missing Variables}
In our experiments, we set the probability that a mammogram has its clinical data removed to 0.3 for both training and testing. We evaluated three different fusion approaches: plain concatenation, co-attention, and cross-attention. The results are shown in Figure \ref{fig:bait_and_switch}. For the ROC curves, there appear to be no significant differences among the three fusion settings. Nonetheless, the PR curves show promising results for using attention to deal with missing data. The AUC-PR values for malignant mass and malignant calcification, which are 0.72 and 0.69 in the plain concatenation setting, increase to 0.73 and 0.72 with co-attention. They increase further with cross-attention, reaching 0.74 and 0.75, respectively. It is worth mentioning that for benign mass, the AUC-PR slightly decreases when attention modules are used (from 0.86 to 0.83).
\section{Conclusion}
This paper studies multimodal deep networks based on simple concatenation, cross-attention, and co-attention to combine mammography and clinical features. We demonstrate that our multimodal approach significantly increases the breast lesion classifier's areas under the curves and thus the diagnostic precision. We also examine the model's performance under the scenario of missing variables due to variations in clinical practice. Our future work will explore the feasibility of extracting the selected clinical features directly from mammograms. \vspace{5mm}
\textbf{Acknowledgement:} This research is funded by NIH R01CA251710, T.T. and W.F. Chao Foundation and John S
Dunn Research Foundation.
\bibliographystyle{IEEEtran} |
2108.09526 | \section{Introduction}
Carbon nanotubes (CNTs) are among the best-known carbon allotropes and have been widely studied since their discovery in 1991 \cite{Iijima1991}.
Their unique mechanical, electrical, optical, and thermal properties make them of interest in various fields of biology, chemistry, electronics, and materials science \cite{Rao2018,And02005,Dresselhaus1998}.
Their prospective applications in the development of nanoscale electronic devices, biosensors, and nanocomposite materials require knowledge of the properties of isolated nanotubes as well as of arrays of them \cite{Rao2018}.
Many fields of science and technology are interested in the collective properties of nanotubes when their interaction is essential.
This relates to the design of new nanoscale composite materials, in particular materials containing nanotube ropes and mats.
The development of methods for the fabrication of aligned nanotubes also increases interest in nanotube interactions \cite{Zhang2017,Xiao2009}.
The interaction between nanotubes, as well as their interaction with the substrate, affects the nanotubes' deformations, frequency characteristics, electronic band structure, and charge carrier mobility \cite{Henrard1999,Sauvajol2002,Xiao2008,vanderGeest2011,Perebeinos2009,Flebus2020}.
These effects are important for the development of electronic and electro-mechanical devices that use nanotubes \cite{Park2013}.
The interaction of CNTs placed at some distance from one another results from the van der Waals forces acting between the carbon atoms that form the nanotubes \cite{Girifalco2000,Siber2009}.
Although the van der Waals interaction is not strong, its effective radius is large enough that the nanotube interaction acquires a collective character, with many carbon atoms contributing to the energy.
In order to avoid the infinite summation, an effective cut-off distance is usually introduced in numerical simulations \cite{Harik2018,Savin2020}.
An alternative approach consists in continualizing the atom distribution over the CNT's surface with a density determined by the area of the elementary cell \cite{Harik2018,RafiiTabar2008}.
Such an approach allows one to take into account the mutual arrangement of the CNTs, but the final values of the interaction energy can be obtained only by numerical evaluation of the respective integrals.
This procedure has been used in a number of works \cite{Sun2005,Sun2006,Popescu2008,Zhao2013}, starting from the paper by Girifalco \cite{Girifalco2000}, where the concept of the "universal carbon" potential was introduced.
One should note that the studies mentioned deal with the interaction of nanotubes without any deformations.
However, observation of CNTs in bundles shows that some "polygonization" occurs, i.e., the nanotubes' cross sections deviate from the ideal circle \cite{Tang2000}.
The packing of CNTs in bundles corresponds to a dense hexagonal arrangement, the symmetry of which controls the mode of the nanotube deformation.
If the nanotubes are long enough, the bundle can be considered as a fragment of a two-dimensional regular array, called the CNT crystal, which was studied for the first time in \cite{Tersoff1994}.
The elastic properties of the CNT crystal were considered in \cite{Popov2000, Saether2003a,Saether2003b}.
It is important that the total energy of the CNT crystal includes the energy of nanotube interaction as well as the energy of their deformation \cite{Smirnov2019}.
The nanotube deformation in the CNT crystal should be considered as an internal degree of freedom, whose presence gives rise to an optical-type branch in the dispersion relation.
The description of the CNT crystal dynamics has to include the mutual displacements of the nanotubes' centers of mass as well as their deformation oscillations.
The latter may be studied in the framework of elastic thin-shell theory \cite{Amabili2008B}, where the CNTs are considered as thin elastic shells characterized by elastic moduli, a Poisson ratio, and an effective thickness of the "wall" \cite{Vodenitcharova2003,Huang06,Gupta2010,Chang10}.
One should note that such an approach to the study of nanotube vibrations is often used in problems of bending vibrations with or without an elastic foundation \cite{CYWang04,Liew07}.
Moreover, it was shown that there is good agreement between molecular-dynamics simulation data and the results of describing the nanotubes in the framework of the nonlinear thin-elastic-shell theory of Sanders and Koiter \cite{Silvestre11,Silvestre12,Rafiee2014}.
The latter turned out to be successful in the analysis of low-frequency oscillation localization in single-walled CNTs \cite{Smirnov2014,Smirnov2016PhysD} as well as of the interaction of nonlinear normal modes belonging to different branches of the dispersion relation \cite{Smirnov2018}.
Generally speaking, the problem of thin-shell deformation in the nonlinear formulation is one of the most difficult in contemporary mechanics, and it can be solved analytically only in isolated cases \cite{Amabili2008B}.
However, the mentioned deformation of the CNTs is specific in that the change of the nanotube's cross section, which is normal to the tube's axis, is not accompanied by a variation of its contour length.
The latter leads to a relationship between the radial and circumferential displacements of the shell, which allows us to reduce the complexity of the dynamical problem \cite{Kaplunov2016,Smirnov2016PhysD}.
At a small deformation of the nanotube, the change of the cross section's contour is characterized by the circumferential wave number $l \geq 0$:
\begin{equation}\label{eq:contour}
R(\theta) = R_0 \left(1+\sum_l{w_l \cos{l \theta}} \right),
\end{equation}
where $R_0$ is the radius of non-deformed nanotube, $\theta$ is the azimuthal angle and $w_l$ is the amplitude of the radial displacement of the $l$-th mode.
For the isolated nanotube, the circumferential wave numbers $l=0$ and $l=1$ correspond to the well-known radial breathing mode (RBM) and bending oscillations, respectively, while $l=2$ gives rise to the so-called circumferential flexure mode (CFM), which is the lowest-frequency optical-type vibration of nanotubes \cite{Dresselhaus00,Mahan02}.
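A quick numerical illustration of Eq.~(\ref{eq:contour}): for a single mode, only $l=0$ (the RBM) changes the mean radius of the contour, while the $l \geq 1$ modes (bending, CFM, ...) average out over the circumference. A sketch with an arbitrary amplitude:

```python
import numpy as np

R0 = 1.0                  # radius of the undeformed nanotube (arbitrary units)
# Uniform grid over one full period, endpoint excluded so the mean is exact.
theta = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)

def contour(w, l):
    """Single-mode contour R(theta) = R0*(1 + w*cos(l*theta)), cf. Eq. (contour)."""
    return R0 * (1.0 + w * np.cos(l * theta))

# Mean radius over the circumference for the RBM (l = 0) and the CFM (l = 2):
mean_rbm = contour(0.05, 0).mean()   # shifted by the mode amplitude
mean_cfm = contour(0.05, 2).mean()   # stays equal to R0
```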
In what follows we consider the particular system of a one-dimensional array of single-walled nanotubes, which is interesting as a model system and may be useful in various problems of nanoelectronics and nanomechanics.
\section{The model}
Let us consider a one-dimensional array of single-walled CNTs placed at some distance $d$ from each other.
In equilibrium, the nanotubes' interaction results in deformations of the cross section's contour \cite{Smirnov2019}, described by Equation (\ref{eq:contour}) with some set of circumferential wave numbers $l$.
Figure \ref{fig:MDsim1}(a) shows a snapshot of a molecular dynamics simulation of a (12,0) CNT array on the surface of three-layer graphene at temperature $T=300$ K.
The circumferential flexure deformations of the CNTs' walls result in imperfections of the nanotube cross sections.
Figure \ref{fig:MDsim1}(b) shows a snapshot of a simulation of the interaction of two (20,0) nanotubes at $T=300$ K.
The quasi-elliptical deformation of the right-hand nanotube is clearly visible.
From the viewpoint of nanotube interaction, the energy of the system consists of the energy of elastic deformation of the CNTs and the energy of the van der Waals interaction between carbon atoms belonging to neighbouring nanotubes.
The first is determined by the CNT's circumferential rigidity and may be described as an on-site term as follows:
\begin{eqnarray}
E_c=\frac{\Omega^2}{2} w^2,
\end{eqnarray}
where $w$ is the amplitude of the radial deformation and $\Omega$ is the frequency of the natural oscillations of the nanotube that are accompanied by a variation of the nanotube's cross section (see Supporting Information).
In the one-dimensional array the symmetry dictates the circumferential flexure mode with $l=2$ as the preferential one.
We assume that the energy of the van der Waals interaction between neighbouring nanotubes is determined by the distance between the nanotubes' walls, which depends on the displacement of the centers of mass as well as on the radial deformation amplitude.
Figure \ref{fig:f1} shows a sketch of the interaction of two deformed CNTs.
It is essential that the effect of the circumferential deformation on the inter-wall gap differs for the left and right "edges" of the nanotubes.
On the right-hand edge of the nanotube the radial deformation and the displacement of the center of mass add up, while on the left-hand edge they act in opposite directions.
Under these assumptions we can represent the potential energy of the regular array of the nanotubes in the linear approximation as follows:
\begin{eqnarray}\label{eq:Vpot1}
V=\frac{1}{2} \chi ^2 \left(\left(\left(u_n-u_{n-1}\right)-\left(w_{n-1}+w_n\right)\right){}^2+\left(\left(u_{n+1}-u_n\right)-\left(w_{n+1}+w_n\right)\right){}^2\right)+\frac{\Omega ^2}{2} w_n^2,
\end{eqnarray}
where the constant $\chi$ characterizes the rigidity of the van der Waals interaction, and the frequency $\Omega$ is determined by the intrinsic rigidity of the nanotube's contour.
Here we use dimensionless variables: the displacement $u$ and the radial deformation $w$ are measured in units of the nanotube's radius, and the frequency $\Omega$ and the coupling constant $\chi$ are normalized to the frequency of the radial breathing mode (RBM).
(One should note that the frequency $\Omega$ can take into account the effect of substrate attraction if the array is placed on a solid surface \cite{Henrard1999}.)
Let us note that the displacement of the center of mass $u_n$ and the amplitude of the radial displacement $w_n$ of the $n$-th nanotube enter the first and second terms of Equation (\ref{eq:Vpot1}) in different ways, as mentioned above.
Therefore, we define new variables, which describe the displacements of the left and right "edges" of the nanotube, as follows:
\begin{eqnarray}\label{eq:edges}
\psi_n=\frac{1}{\sqrt{2}} \left( u_n - w_n \right), \quad \varphi_n =\frac{1}{\sqrt{2}} \left( u_n + w_n \right),
\end{eqnarray}
where the factor $\sqrt{2}$ is introduced for convenience.
So, in terms of these variables the energy of the system can be written in the form
\begin{eqnarray}\label{eq:E0}
E=\sum_n{\frac{1}{2} \left( \left(\frac{d \psi_n}{d t}\right)^2+\left(\frac{d \varphi_n}{d t} \right)^2 \right)+\chi^2 \left( \left(\psi_{n+1}-\varphi_n \right)^2+ \left( \psi_n-\varphi_{n-1} \right)^2 \right) +\frac{\Omega^2}{4} \left( \varphi_n-\psi_n \right)^2}.
\end{eqnarray}
The respective equations of motion read
\begin{eqnarray}\label{eq:eqm0}
\frac{d^2 \varphi_n}{d t^2}+\frac{\Omega^2}{2}\left(\varphi_n-\psi_n \right)+2\chi^2 \left(\varphi_n-\psi_{n+1}\right) =0 \\ \nonumber
\frac{d^2 \psi_n}{d t^2}+\frac{\Omega^2}{2} \left(\psi_n-\varphi_n \right)+ 2 \chi^2 \left(\psi_n-\varphi_{n-1} \right) =0.
\end{eqnarray}
The dispersion relations consist of two branches:
\begin{eqnarray}\label{eq:DR0}
\omega^2=\frac{1}{2}\left( 4 \chi ^2+\Omega ^2\pm \sqrt{8 \chi ^2 \Omega ^2 \cos (\kappa )+16 \chi ^4+\Omega ^4} \right)
\end{eqnarray}
Figure \ref{fig:DR} shows the dispersion relations (\ref{eq:DR0}) for the CNT array with parameters $\chi=1.0$, $\Omega = 1.5$.
One should remark that the right edge of the optical branch of dispersion relation (\ref{eq:DR0}) is expected to correspond to the frequency of the natural oscillations of the isolated nanotube, $\Omega$, while the acoustic branch is expected to converge to the value $2\chi$.
However, this occurs only if the frequency $\Omega$ is smaller than $2\chi$.
Otherwise, we observe $\omega \rightarrow 2\chi$ for the optical branch and $\omega \rightarrow \Omega$ for the acoustic one (see Figure \ref{fig:DR}).
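The dispersion relation can be cross-checked numerically against the equations of motion: for a finite periodic ring of nanotubes, the eigenfrequencies of the dynamical matrix built from Eqs.~(\ref{eq:eqm0}) must reproduce Eq.~(\ref{eq:DR0}) at the allowed wave numbers, and the two zone-edge ($\kappa=\pi$) frequencies form the set $\{2\chi, \Omega\}$. A sketch with the parameters of the text ($\chi=1$, $\Omega=1.5$); the ring size $N$ is arbitrary:

```python
import numpy as np

chi, Omega = 1.0, 1.5   # dimensionless parameters used in the text
N = 8                   # number of nanotubes, periodic boundary conditions

# Dynamical matrix of Eqs. (eqm0), ordering x = (phi_0..phi_{N-1}, psi_0..psi_{N-1});
# x'' = -M x, so the eigenvalues of M are the squared frequencies.
M = np.zeros((2 * N, 2 * N))
for n in range(N):
    M[n, n] = M[N + n, N + n] = Omega**2 / 2 + 2 * chi**2
    M[n, N + n] = M[N + n, n] = -Omega**2 / 2
    M[n, N + (n + 1) % N] = M[N + (n + 1) % N, n] = -2 * chi**2
omega_lattice = np.sqrt(np.maximum(np.sort(np.linalg.eigvalsh(M)), 0.0))

# Dispersion relation (DR0) sampled at the wave numbers allowed by the ring:
kappa = 2 * np.pi * np.arange(N) / N
root = np.sqrt(8 * chi**2 * Omega**2 * np.cos(kappa) + 16 * chi**4 + Omega**4)
omega_disp = np.sqrt(np.maximum(np.sort(np.concatenate([
    (4 * chi**2 + Omega**2 + root) / 2,
    (4 * chi**2 + Omega**2 - root) / 2])), 0.0))

# At the zone edge (kappa = pi) the two branches take the values 2*chi and Omega.
edge = np.sqrt((4 * chi**2 + Omega**2
                + np.array([1.0, -1.0]) * abs(4 * chi**2 - Omega**2)) / 2)
```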
In the CNT array, phonon interference arises as a result of Fano resonance \cite{Kosevich2008,Miroshnochenko2010} if an additional nanotube is placed in the groove between two neighbouring nanotubes of the array.
Such a "discrete state" can be formed artificially or be the result of an instability of the array under the action of pressure in the direction normal to the nanotubes' axes.
An example of such an instability is shown in Figure \ref{fig:defectarray}.
The "excess" nanotube in Figure \ref{fig:defectarray}b arises as a result of the instability of the initially stressed array in Figure \ref{fig:defectarray}a.
One of the nanotubes is ejected from the array under the action of thermal fluctuations and settles into the position in the groove between two neighbouring nanotubes.
Such a configuration turns out to be stable, and the upper nanotube does not change its location.
A sketch of the intertube bonds in the fragment of the CNT array with the "discrete state" is represented in Figure \ref{fig:DS01}.
The "discrete state" and the nanotubes of the array interact via van der Waals forces, and the energy of this interaction is controlled by the distances $\Delta_1$ and $\Delta_2$.
One can show that these distances depend on the differences $(\Psi-\varphi_{n+1})$ and $(\Phi-\psi_n)$, and the energy of the "discrete state" can be written in the form:
\begin{eqnarray}\label{eq:DSenergy1}
V_d=\frac{\chi ^2}{4} \left(\left(\Psi -\varphi _{n+1}\right){}^2+\left(\Phi -\psi _n\right){}^2\right)+\frac{\Omega_1 ^2}{4} (\Phi -\Psi )^2
\end{eqnarray}
The "discrete state" has two eigenfrequencies:
\begin{eqnarray}\label{eq:DSfrequency1}
\omega_1 = \frac{\chi}{\sqrt{2}}, \quad \omega_2 = \sqrt{\Omega_1 ^2+\frac{\chi ^2}{2}}
\end{eqnarray}
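These two frequencies can be recovered by diagonalizing the stiffness matrix of Eq.~(\ref{eq:DSenergy1}) in the coordinates $(\Psi,\Phi)$; holding the neighbouring array sites fixed ($\varphi_{n+1}=\psi_n=0$) is an assumption of this sketch, consistent with treating (\ref{eq:DSfrequency1}) as the eigenfrequencies of the isolated "discrete state":

```python
import numpy as np

chi, Omega1 = 1.0, 1.5   # illustrative values

# Stiffness matrix of V_d (DSenergy1) in (Psi, Phi) with the array sites frozen;
# its eigenvalues are the squared eigenfrequencies.
K = np.array([[chi**2 / 2 + Omega1**2 / 2, -Omega1**2 / 2],
              [-Omega1**2 / 2, chi**2 / 2 + Omega1**2 / 2]])
omega = np.sqrt(np.sort(np.linalg.eigvalsh(K)))
# Expected from (DSfrequency1):
#   omega_1 = chi/sqrt(2)            (in-phase mode, Psi = Phi)
#   omega_2 = sqrt(Omega1^2 + chi^2/2)  (anti-phase mode, Psi = -Phi)
```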
In order to study the transmission of a wave through the "discrete state" placed between sites $n$ and $n+1$, we use the transfer matrix method \cite{Tong1999}.
Assuming that the variables $\psi$ and $\varphi$ depend on time as $e^{i \omega t}$, we can represent Equations (\ref{eq:eqm0}) in the vector form:
\begin{eqnarray}\label{eq:regarray1}
\left(
\begin{array}{c}
\psi_{n+1}\\
\varphi_n
\end{array}
\right)
=T_0 \left(
\begin{array}{c}
\psi_n \\
\varphi_{n-1}
\end{array}
\right)
\end{eqnarray}
where the transfer matrix $T_0$ reads as follows (see Supporting Information):
\begin{eqnarray}\label{eq:TMregular}
T_0=\left(
\begin{array}{cc}
\frac{\left(2 \chi ^2-\omega ^2\right) \left(2 \chi ^2-\omega ^2+\Omega ^2\right)}{\chi ^2 \Omega ^2} & -\frac{4 \chi ^4-2 \chi ^2 \omega ^2+\chi ^2 \Omega ^2}{\chi ^2 \Omega ^2} \\
-\frac{-4 \chi ^2+2 \omega ^2-\Omega ^2}{\Omega ^2} & -\frac{4 \chi ^2}{\Omega ^2} \\
\end{array}
\right)
\end{eqnarray}
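Two properties of this matrix can be verified numerically: on the dispersion relation (\ref{eq:DR0}), $T_0$ is unimodular ($\det T_0 = 1$) and its trace equals $2\cos\kappa$, so its eigenvalues are $e^{\pm i\kappa}$, as expected for a wave gaining phase $\kappa$ per lattice step. A sketch:

```python
import numpy as np

chi, Omega = 1.0, 1.5

def T0(omega2):
    """Transfer matrix (TMregular) at squared frequency omega2.
    The common chi^2 factor of the (1,2) entry of the printed matrix cancels."""
    A, B = 2 * chi**2 - omega2, Omega**2
    return np.array([
        [A * (A + B) / (chi**2 * B), -(4 * chi**2 - 2 * omega2 + B) / B],
        [(4 * chi**2 - 2 * omega2 + B) / B, -4 * chi**2 / B]])

kappa = 0.7   # arbitrary wave number inside the band
# Squared frequency on the acoustic branch of (DR0) for this wave number:
omega2 = (4 * chi**2 + Omega**2
          - np.sqrt(8 * chi**2 * Omega**2 * np.cos(kappa)
                    + 16 * chi**4 + Omega**4)) / 2
T = T0(omega2)
det, tr = np.linalg.det(T), np.trace(T)   # expected: 1 and 2*cos(kappa)
```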
The coupling between sites $n+1$ and $n-m$ is described by the relation:
\begin{eqnarray}\label{eq:regarray2}
\left(
\begin{array}{c}
\psi_{n+1}\\
\varphi_n
\end{array}
\right)
=Z \left(
\begin{array}{c}
\psi_{n-m+1} \\
\varphi_{n-m-1}
\end{array}
\right),
\end{eqnarray}
where $Z$ is the transfer matrix of the $m$-step path, and $Z=T_0^m$ for a regular (defect-free) array.
However, if the array's fragment contains irregularities, the one-step matrices $T_{i,j}$ differ from the matrix $T_0$.
Thus, we should calculate the transfer matrix taking into account the interactions between the nanotubes of the regular array and the redundant nanotube, which forms the "discrete state".
In particular, the transition from $\left(\begin{array}{c}\psi_{n+3}\\\varphi_{n+2}\end{array} \right)$ to $\left(\begin{array}{c}\psi_{n-1}\\\varphi_{n-2}\end{array} \right)$ is described by the matrix
\begin{eqnarray}\label{eq:Zmatrix0}
Z=\left(I-T_0 T_{2,1} \tau_{1,2} T_0^{-1} \right)^{-1} T_0 \left(T_{2,1} T_{1,0}+\tau_{2,0} \right)T_0,
\end{eqnarray}
where $I$ is identity matrix and the transfer matrixes $T_{i,j}$ and $\tau_{i,j}$ describe the transition from the bond $\{n+i,n+i-1\}$ to $\{n+j,n+j-1\}$ (see Supporting Information for details).
They are calculated taking into account the oscillations of the "discrete state" nanotube.
In order to analyse the wave transmission in the system under consideration, we assume that the left half-array contains both the incoming and the reflected wave, while only the transmitted wave occurs in the right half-array.
Thus, we should write
\begin{eqnarray}\label{eq:waves}
\left(\begin{array}{c}\psi_{n-j}\\\varphi_{n-j}\end{array} \right) =A_0 e^{i \kappa \left(n-j\right)}+A_r e^{-i \kappa \left(n-j \right)}, \quad j > 1 \\ \nonumber
\left(\begin{array}{c}\psi_{n+j}\\\varphi_{n+j}\end{array} \right)= A_t e^{i \kappa \left( n+j \right)}, \quad\quad\quad\quad\quad\quad\quad j>3,
\end{eqnarray}
where $A_0$ and $A_r$ are the two-component amplitude vectors of the incoming and reflected waves, and $A_t$ is the amplitude vector of the transmitted wave.
The amplitudes $A_0, A_r$ and $A_t$ are vectors because the array under study is complex and the solution of Equations (\ref{eq:eqm0}) contains two components.
This should be taken into account when finding the transmission coefficient, which is defined as the squared modulus of the ratio of the amplitudes of the transmitted and incoming waves.
Under these conditions the transmission coefficient $t=|A_t/A_0|^2$ can be written as follows:
\begin{eqnarray}\label{eq:transmission}
t=\frac{4 \left(\Omega ^4+8 \chi ^2 \Omega ^2 \cos{\kappa}+16 \chi ^4\right) \Omega ^4 \sin ^2{\kappa}}{| \left(\Omega ^2+4 e^{i \kappa } \chi ^2\right) \left(4 \chi ^2-2 \omega ^2+\Omega ^2\right)\left(Z_{12}-Z_{21}\right)-e^{i \kappa } \left(4 \chi ^2-2 \omega ^2+\Omega ^2\right)^2 Z_{22}+e^{-i \kappa } \left(\Omega ^2+4 e^{i \kappa } \chi ^2 \right)^2 Z_{11} |^2 },
\end{eqnarray}
where $Z_{ij}$ are the components of the matrix $Z$.
One should note that transmission coefficient (\ref{eq:transmission}) is similar to that in \cite{Tong1999}, but is not identical to it because of the complexity of the CNT array.
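As a sanity check of Eq.~(\ref{eq:transmission}), a defect-free segment ($Z=T_0^m$) must be perfectly transparent, $t=1$, at any propagating frequency. The following sketch evaluates the formula on the acoustic branch of (\ref{eq:DR0}):

```python
import numpy as np

chi, Omega = 1.0, 1.5
B = Omega**2

def T0(omega2):
    """One-step transfer matrix (TMregular); the common chi^2 factor of the
    (1,2) entry of the printed matrix is cancelled."""
    A = 2 * chi**2 - omega2
    return np.array([
        [A * (A + B) / (chi**2 * B), -(4 * chi**2 - 2 * omega2 + B) / B],
        [(4 * chi**2 - 2 * omega2 + B) / B, -4 * chi**2 / B]])

def transmission(kappa, Z):
    """Transmission coefficient of Eq. (transmission) for a segment described
    by transfer matrix Z, at the acoustic-branch frequency of wave number kappa."""
    S2 = B**2 + 8 * chi**2 * B * np.cos(kappa) + 16 * chi**4
    omega2 = (4 * chi**2 + B - np.sqrt(S2)) / 2   # acoustic branch of (DR0)
    S = 4 * chi**2 - 2 * omega2 + B               # 4*chi^2 - 2*omega^2 + Omega^2
    P = B + 4 * np.exp(1j * kappa) * chi**2       # Omega^2 + 4*e^{i*kappa}*chi^2
    num = 4 * S2 * B**2 * np.sin(kappa)**2
    den = abs(P * S * (Z[0, 1] - Z[1, 0])
              - np.exp(1j * kappa) * S**2 * Z[1, 1]
              + np.exp(-1j * kappa) * P**2 * Z[0, 0])**2
    return num / den

def t_defect_free(kappa, m):
    """t for Z = T0^m, i.e. a regular segment without a 'discrete state'."""
    S2 = B**2 + 8 * chi**2 * B * np.cos(kappa) + 16 * chi**4
    omega2 = (4 * chi**2 + B - np.sqrt(S2)) / 2
    return transmission(kappa, np.linalg.matrix_power(T0(omega2), m))
```

With a defect present, `Z` would instead be built from the modified matrices $T_{i,j}$ and $\tau_{i,j}$ of Eq.~(\ref{eq:Zmatrix0}); only the construction of `Z` changes, not the formula.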
\section{Phonon interference}
The model discussed above has two parameters: the intertube interaction constant $\chi$ and the frequency of natural oscillations $\Omega$.
A theoretical estimate of the constant $\chi$ can be made by direct numerical evaluation of the energy of the van der Waals interaction \cite{Sun2005,Sun2006,Smirnov2019}.
The parameters of the Lennard-Jones potential are well known for carbon nanostructures \cite{Girifalco2000}.
The frequency of the circumferential flexure oscillations $\Omega$ can be evaluated in the framework of thin elastic shell theory \cite{Mahan02,Chico06,Kaplunov2016,Smirnov2016PhysD} using the effective elastic constants of the nanotubes, or it can be obtained from molecular-dynamics simulation data.
In particular, the dispersion curves for the nanotube array in \cite{Savin2021} can be well approximated with the ratio $\Omega/\chi \approx 1.55$ and the dimensional frequency value $\Omega_d \approx 80\ \mathrm{cm^{-1}}$.
The measured frequency of the circumferential flexure mode is $27\ \mathrm{cm^{-1}}$ for an isolated (10,0) nanotube \cite{Dresselhaus00} and $\sim 80\ \mathrm{cm^{-1}}$ for a nanotube in a bundle \cite{Sauvajol2002}.
In what follows we use the dimensionless value $\chi=1$ and the ratio $\Omega/\chi = 1.5$ for the calculation of the transmission coefficient.
As mentioned above, the simplest structure that leads to Fano resonance in the CNT array is a single "redundant" nanotube ejected from the regular array (see Figure \ref{fig:defectarray}b).
However, a similar structure with a redundant nanotube whose parameters differ from those of the array's CNTs can also lead to phonon interference.
Nanotubes of smaller diameter have a higher rigidity and a larger frequency $\Omega_1 > \Omega$.
Vice versa, a larger nanotube has a smaller frequency of circumferential flexure oscillations.
If these frequencies lie in the allowed band, we should observe the effect of phonon interference.
Figure \ref{fig:single} shows the transmission coefficients for three structures with different redundant nanotubes as functions of the reduced frequency $\omega/\omega_{max}$, where $\omega_{max}=\sqrt{4 \chi ^2+\Omega ^2}$ is the frequency of the optical branch at wave number $\kappa=0$.
Solid, dashed and dot-dashed curves correspond to redundant nanotubes with natural frequencies $\Omega_1 = \Omega$, $1.5 \, \Omega$ and $0.5 \, \Omega$, respectively.
One can see that all three structures exhibit destructive interference that leads to full reflection of the incoming phonons at the relative frequency $\omega/\omega_{max} \approx 0.3$.
This frequency belongs to the acoustic branch of the spectrum and corresponds to oscillations of the redundant nanotube as a whole.
The second resonant frequency, associated with the circumferential flexure oscillations of the excess nanotube, lies in the forbidden band if $\Omega_1=\Omega$.
If the redundant nanotube is more rigid than the CNTs of the array, its resonant frequencies lie in the acoustic as well as in the optical branch (see Figure \ref{fig:single}, dashed curves).
Thus, such a nanotube reflects both acoustic and optical phonons.
While the resonance in the acoustic part of the spectrum is similar to that for the nanotube with $\Omega_1=\Omega$, the destructive resonance in the optical branch is essentially sharper.
If the nanotube above the array is larger than the CNTs of the array, its natural frequency is less than $\Omega$.
The dot-dashed line in Figure \ref{fig:single} shows the transmittance for a nanotube with an eigenfrequency $ \Omega_1 = 0.5 \, \Omega $.
In this case, both resonant frequencies are in the acoustic region, and we can observe a certain overlap of the destructive resonances.
Another structure that can lead to phonon interference in the CNT array is a combination of several nanotubes above the array.
For example, two additional nanotubes can be located in adjacent grooves (double CNTs) formed by three consecutive nanotubes of the array, or they can be placed at some distance from each other (separate CNTs).
In the first case, the connections between the additional CNTs and the nanotubes of the array overlap, and the resulting transfer matrix $Z$ has a more complex structure.
(Some details of these configurations are presented in the Supporting Information.)
The second combination should be considered as two non-interacting resonant structures separated by a fragment of a regular lattice.
In this case, we can construct the resulting transfer matrix as a combination of the matrix $Z$ of the Equation (\ref{eq:Zmatrix0}) and the product of matrices $T_0$.
Figure \ref{fig:double} shows the examples of the transmittance for resonant structures with two nanotubes.
The solid line corresponds to the doubled nanotubes above the array, while the dashed curve is associated with two nanotubes located three lattice constants apart.
We can observe the effects of both destructive and constructive resonance in both the acoustic and optical regions.
The main acoustic destructive resonance near the frequency $\omega \approx 0.3\,\omega_{max}$ always occurs, but in the case of separated nanotubes an extremely narrow constructive resonance arises in its vicinity.
The transmittances in Figure \ref{fig:double} have been calculated for redundant nanotubes that are identical to the CNTs in the array.
Nevertheless, the resonances in the optical domain appear for both the doubled and the separated nanotubes; therefore, optical phonons of certain frequencies are reflected from the considered structures.
Thus, we can effectively control the phonon transmission through the CNT array by various combinations of additional nanotubes placed over the array.
\section{Conclusion}
In this work we have constructed a model of a regular array of single-walled carbon nanotubes that is simple enough to allow us to evaluate the phonon interference resulting in Fano resonance in the presence of locally resonant structures.
The latter can be formed by additional nanotubes placed over the CNT array in various locations.
By varying the parameters of the nanotubes (diameter, chirality, and number of walls), we can change the position and the width of the destructive resonance, which results in the full reflection of phonons of a certain frequency in both the acoustic and the optical domain.
Also, in order to change the frequency interval, we can modify the CNTs' surface, which changes the intertube constant $\chi$.
Thus, the model considered here can be useful in investigations of the phonon as well as the electro-mechanical properties of regular CNT structures.
\medskip
\medskip
\bibliographystyle{MSP} |
2002.06415 | \section{Introduction}
Colliders known as Higgs factories (e.g.\ International Linear Collider (ILC)\cite{ILC_TDR_vol1}) are future electron-positron colliders to search for new physics beyond the standard model.
Detectors for Higgs factories, such as the International Large Detector (ILD)\cite{ILC_TDR_vol4} of the ILC, are designed to have an unprecedented jet energy resolution by combining the capabilities of trackers and calorimeters. The Particle Flow Algorithm (PFA) is a method to reconstruct the particles in a jet that takes advantage of the best measurement available for each type of particle, and it can realize the required energy resolution. According to PFA, a $5 \times 5\ \mathrm{mm^2}$ readout size is preferred for the electromagnetic calorimeter (ECAL) of ILD\cite{ILD_LoI}. To achieve this requirement using a scintillator-based calorimeter (ScECAL), the Strip Splitting Algorithm (SSA), which reconstructs scintillator strips as virtual square tiles, is used\cite{KOTERA_SSA}.
ScECAL consists of scintillator strips, wrapped in reflector films, and Silicon Photomultipliers (SiPMs). The scintillator strip size under consideration is $5\ \mathrm{mm}\ \times\ 45\ \mathrm{mm}\ \times\ 2\ \mathrm{mm}$, and a pixel pitch of $10\ \mathrm{\mu m}$ or $15\ \mathrm{\mu m}$ is being considered for the SiPM. With strip-shaped scintillators, the number of readout channels is 1/10 of that for square-shaped scintillators. However, a strip scintillator requires sufficient light yield for a Minimum Ionising Particle (MIP) and a light yield that is uniform over the incident position. To get the best performance from ScECAL, the strip shape has to be optimized.
Conventional readout methods for ScECAL are side readout and bottom readout. In the side readout method, a SiPM is placed at the $5\ \mathrm{mm}\ \times\ 2\ \mathrm{mm}$ end of a scintillator. This method gives a good light yield, but poor uniformity. The bottom readout method, in which a SiPM is placed at the $45\ \mathrm{mm}\ \times\ 5\ \mathrm{mm}$ side, has good uniformity but a lower light yield. Recently, a new readout method, dimple readout, was proposed. In this method there is a dimple at the center of the $45\ \mathrm{mm}\ \times\ 5\ \mathrm{mm}$ side of a strip and a SiPM is implanted into the dimple. To evaluate this new method, we explore the characteristics of the dimple readout method through light yield measurements. The result is also used for optimizing the strip shape.
\section{Measurement}
First, we measured the light yield of the dimple readout scintillator for various incident positions. The measurement was performed using a scintillator strip (BC-408 made by Saint-Gobain) with a dimple at the center of the strip (see \figref{fig:sci_dimple_readout}) and a $15\ \mathrm{\mu m}$ pitch SiPM (S12571-015P made by Hamamatsu Photonics). The SiPM was implanted in the dimple. A $^{90}\mathrm{Sr}$ checking source was used as the beta-ray source; beta rays from the source were made to pass through a collimator with a diameter of $0.5\ \mathrm{mm}$. The collimator and trigger counter positions were fixed, and the strip could be moved along its $45\ \mathrm{mm}$ length by a moving stage.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{sci_dimple_readout.jpg}
\caption{Scintillator strip with a dimple}\label{fig:sci_dimple_readout}
\end{figure}
\figref{fig:meas_result} shows the measured position dependence. The horizontal axis is the incident position of the beta ray, and the vertical axis is the average photon count at that position, estimated as follows. The photon count distributions from the SiPM readout were fit with a Landau distribution convoluted with a Gaussian, and the most probable value of the Landau distribution was extracted as the mean photon count.
The mean light yield is about 21 p.e., which is about twice the photon count for the bottom readout, and the scintillator shows good light yield uniformity. The width of the dimple used in the measurement is about $5\ \mathrm{mm}$, whereas the width of the dip seen around the center in \figref{fig:meas_result} is about $10\ \mathrm{mm}$. This is because the beta rays spread out after passing through the collimator before entering the strip.
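The fit procedure can be sketched as follows. SciPy has no exact Landau pdf, so this sketch uses the Moyal distribution (a common analytic approximation to the Landau) convolved numerically with a Gaussian, fitted to a synthetic photon-count histogram; the sample parameters (MPV of 21 p.e., widths) are illustrative, not the measured values:

```python
import numpy as np
from scipy.stats import moyal, norm
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def landau_gauss(x, mpv, eta, sigma, amp):
    """Moyal (Landau approximation) pdf convolved numerically with a Gaussian."""
    eta, sigma = abs(eta), abs(sigma)   # guard against negative trial values in the fit
    grid = np.linspace(x.min() - 10, x.max() + 10, 2001)
    kern = norm.pdf(grid - grid[grid.size // 2], scale=sigma)  # kernel centered mid-grid
    conv = np.convolve(moyal.pdf(grid, loc=mpv, scale=eta), kern, mode="same")
    conv *= grid[1] - grid[0]           # discrete convolution -> integral
    return amp * np.interp(x, grid, conv)

# toy photon-count sample: Landau-like response smeared by photostatistics
data = moyal.rvs(loc=21, scale=2, size=20000, random_state=rng) \
     + norm.rvs(scale=1.5, size=20000, random_state=rng)
counts, edges = np.histogram(data, bins=80, range=(5, 60), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(landau_gauss, centers, counts, p0=(20.0, 2.0, 2.0, 1.0))
mpv = popt[0]   # most probable value, the per-position photon count in the text
```

Repeating this fit for each stage position yields the points of \figref{fig:meas_result}.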
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{1-dim_scan.pdf}
\caption{Result of light yield measurement for a dimple readout scintillator.}\label{fig:meas_result}
\end{figure}
\section{Simulation}
To optimize the scintillator shape, we developed a photon tracing simulation using GEANT4 with the G4OpticalPhoton class library. First, we performed the simulation under the same conditions as our measurement, aiming to reproduce its result. In the simulation, electrons pass through the scintillator strip and photons are emitted inside the scintillator strip according to the characteristic emission spectrum of the BC-408 scintillator. Emitted scintillation photons propagate to the scintillator or reflector surface, where they are reflected or refracted. The photon tracking finishes when a photon reaches the photosensitive area of the SiPM or is absorbed by the reflector or the scintillator.
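As a rough illustration of such a photon-tracing loop (not the GEANT4 code used in this study), the sketch below propagates photons in a 2D cross section of the $45\ \mathrm{mm} \times 2\ \mathrm{mm}$ strip with specular walls. The default reflectivity (98\%) and attenuation length (380 cm) follow \tbref{tb:sim_parameters}, while the $1\ \mathrm{mm}$ bottom-center detector window and the treatment of reflectivity and attenuation as multiplicative weights are simplifying assumptions:

```python
import math, random

def trace(x0, n_photons=3000, refl=0.98, att=3800.0, seed=1):
    """Fraction of photons emitted at (x0, H/2) inside a 45 x 2 mm strip cross
    section that reach a 1 mm 'SiPM' window at the bottom center (toy geometry)."""
    L, H = 45.0, 2.0                 # strip length and thickness [mm]
    det_lo, det_hi = 22.0, 23.0      # assumed detector window on the bottom face
    rng = random.Random(seed)
    detected = 0.0
    for _ in range(n_photons):
        x, z = x0, H / 2
        th = rng.uniform(0.0, 2.0 * math.pi)      # isotropic emission
        dx, dz = math.cos(th), math.sin(th)
        w = 1.0                                   # photon survival weight
        for _ in range(300):
            # distance to the next wall along (dx, dz)
            tx = ((L if dx > 0 else 0.0) - x) / dx if dx else math.inf
            tz = ((H if dz > 0 else 0.0) - z) / dz if dz else math.inf
            t = tx if tx < tz else tz
            x += t * dx
            z += t * dz
            w *= math.exp(-t / att)               # bulk attenuation
            if tz <= tx:                          # hit top or bottom face
                if dz < 0 and det_lo <= x <= det_hi:
                    detected += w                 # photon enters the SiPM window
                    break
                dz = -dz                          # specular reflection
            else:
                dx = -dx                          # ... off the strip ends
            w *= refl                             # reflector loss per bounce
            if w < 5e-3:
                break
    return detected / n_photons

yield_center = trace(22.5)   # emission directly above the window
yield_edge   = trace(5.0)    # emission near the strip end
```

Scanning the emission point `x0` along the strip maps out the position dependence analogous to \figref{fig:sim_detasheet}.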
\tbref{tb:sim_parameters} shows the parameters used in the simulation. Some parameters are taken from data sheets\cite{BC_datasheet,ESR_datasheet}. \figref{fig:sim_detasheet} shows the simulation result under the conditions of \tbref{tb:sim_parameters}. The mean light yield of the simulation is higher than that of the measurement, and there is a small peak around the center in the simulation result.
\begin{table}[H]
\centering
\caption{Simulation parameters}\label{tb:sim_parameters}
\footnotesize
\begin{tabularx}{0.9\linewidth}{C|C} \hline
Parameter & Value \\ \hline
size of scintillator & $45\ \mathrm{mm} \times 5\ \mathrm{mm} \times 2\ \mathrm{mm}$ \\
depth of dimple & $0.8\ \mathrm{mm}$ \\
refractive index & 1.58 \\
absorption length & $380\ \mathrm{cm}$ \\
reflectivity of ESR film & 98\% \\
photosensitive area & $1\ \mathrm{mm} \times 1\ \mathrm{mm}$ \\
depth of collimator & $3\ \mathrm{mm}$ \\
diameter of collimator & $0.5\ \mathrm{mm}$ \\ \hline
\end{tabularx}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{sim_datasheet.pdf}
\caption{Result of simulation with parameters in \tbref{tb:sim_parameters}.}\label{fig:sim_detasheet}
\end{figure}
In the previous simulation, we input some optical parameters as fixed values. However, optical properties such as the reflectivity of the reflector film and the bulk light attenuation length of the scintillator depend on the wavelength of the scintillation photon, so these values may deviate from the fixed values given in the data sheets. By tuning these parameters, the measurement result can be reproduced, as shown in \figref{fig:sim_diff_parameter} (left). On the other hand, different parameter values also reproduce the result, as in \figref{fig:sim_diff_parameter} (right).
\begin{figure}[H]
\centering
\begin{minipage}[c]{0.49\linewidth}
\includegraphics[width=\linewidth]{sim_ref95_ABSL200.pdf}
\end{minipage}
\begin{minipage}[c]{0.49\linewidth}
\includegraphics[width=\linewidth]{sim_ref96_ABSL100.pdf}
\end{minipage}
\caption{Results of simulation for different parameters. (left) reflectivity: 95\% and absorption length: $200\ \mathrm{cm}$. (right) reflectivity: 96\% and absorption length: $100\ \mathrm{cm}$.}\label{fig:sim_diff_parameter}
\end{figure}
\figref{fig:sim_diff_parameter} suggests that optical properties such as the reflectivity, the absorption length, and the refractive index cannot be determined uniquely by simulation. We therefore investigated the dependence on the optical parameters using the simulation. \figref{fig:sim_param_scan} (left) shows the results for different reflectivities of the reflector film, and \figref{fig:sim_param_scan} (right) shows the results for different light attenuation lengths. These figures show that a small variation of these values has a large effect on the light yield; for example, the light yield is reduced by half when the reflectivity is changed by 1\%. It is therefore necessary to accurately determine the optical properties of the scintillator and the reflector film. As these properties depend on the photon wavelength, their values must be measured at several wavelengths.
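The quoted sensitivity can be understood with a back-of-the-envelope model (the specific numbers below are illustrative assumptions, not values from this study). If a detected photon typically undergoes $N$ wall reflections and travels a total path $s$, the yield scales as $R^{N} e^{-s/\lambda}$, so a factor-of-two drop per 1\% of reflectivity pins down $N$:

```python
import math

# yield(R=0.97) / yield(R=0.98) = (0.97/0.98)**N = 1/2  =>  solve for N
N = math.log(2) / -math.log(0.97 / 0.98)   # implied typical number of bounces (~70)

# likewise, halving the attenuation length from 200 cm to 100 cm suppresses the
# yield by exp(-s/100 + s/200); s ~ 50 cm is an assumed typical path length
supp = math.exp(-50 / 100 + 50 / 200)
```

The exponential dependence on both $N$ and $s/\lambda$ is why percent-level uncertainties on the reflectivity or the attenuation length translate into order-one changes in the detected light yield.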
\begin{figure}[H]
\centering
\begin{minipage}[c]{0.49\linewidth}
\includegraphics[width=\linewidth]{sim_ref_scan.pdf}
\end{minipage}
\begin{minipage}[c]{0.49\linewidth}
\includegraphics[width=\linewidth]{sim_ABSL_scan.pdf}
\end{minipage}
\caption{Results of simulation for different optical properties. (left) Light yield for different reflectivities. (right) Light yield for different light absorption lengths.}\label{fig:sim_param_scan}
\end{figure}
Next, we investigated the effects of the dimple size and the SiPM position on the detected light yield. \figref{fig:sim_geom_scan} (left) shows the results for different depths of the dimple, and \figref{fig:sim_geom_scan} (right) shows the results for different distances from the dimple bottom surface to the top of the SiPM. The light yield around the dimple decreases as the dimple becomes shallower, and the overall light yield decreases as the distance from the dimple surface to the SiPM increases. In the dimple readout method, scintillation photons can also enter the SiPM package from the side. These observations are important for optimizing the scintillator shape.
\begin{figure}[H]
\centering
\begin{minipage}[c]{0.49\linewidth}
\includegraphics[width=\linewidth]{sim_depth_scan.pdf}
\end{minipage}
\begin{minipage}[c]{0.49\linewidth}
\includegraphics[width=\linewidth]{sim_distance_scan.pdf}
\end{minipage}
\caption{Results of simulation for the different geometry. (left) Light yield for different depths of the dimple. (right) Light yield for different distances from the dimple surface to the SiPM.}\label{fig:sim_geom_scan}
\end{figure}
\section{Summary}
We are developing a scintillator-based electromagnetic calorimeter for Higgs factories. We confirmed that the dimple readout scintillator has good light yield and good uniformity, and we are developing simulation code for optimizing the scintillator shape. The simulation software can reproduce the behavior seen in our measurement. Some optical parameters, such as the reflectivity of the reflector film and the bulk light absorption length of the scintillator strip, have a large effect on the light yield detected in the strip. To match the simulation to the measurement, dedicated measurements of these optical properties are needed.
\section*{Acknowledgements}
We thank the CALICE-Asia group members and the researchers of IHEP and USTC for their valuable advice on this study. |
2203.10042 | \section{Introduction}
Quantum mechanics is responsible for a number of physical phenomena that are impossible in classical physics. Characterizing these unique properties of quantum systems is a problem of wide scientific interest. Bell famously provided one such example~\cite{Bell:1964kc} where the measurements of a small number of entangled spins (qubits) cleanly distinguish quantum from local classical physics~\cite{Bohm:1951xw,Bohm:1951xx}. Physical properties of real world systems are far more complex and therefore isolating their uniquely quantum behavior can be more challenging~\cite{Vidal:2002zz,Lu:2018yxh,Lu:2019xwg}. One particularly interesting example is the origin of the initial density fluctuations in the universe. They are believed to have arisen from quantum fluctuations during an inflationary epoch~\cite{Mukhanov:1981xt,Hawking:1982cz,Guth:1982ec,Starobinsky:1982ee,Bardeen:1983qw}, however, this hypothesis has been difficult to test observationally. The universe is classical on cosmological scales and one cannot easily apply Bell's inequality to the density fluctuations directly~\cite{Starobinsky:1986fx,Grishchuk:1990bj} (but see e.g.~\cite{Campo:2005sv,Lim:2014uea,Maldacena:2015bha,Martin:2015qta,Goldstein:2015mha,Nelson:2016kjm,Choudhury:2016cso,Martin:2017zxs,Shandera:2017qkg,dePutter:2019xxv,Martin:2019oqq,Brahma:2021mng,Gomez:2021yhd,Martin:2021qkg,Espinosa-Portales:2022yok} for ongoing work). Instead, one is led to ask if quantum mechanics {\it was} important in establishing the (statistical) initial conditions for our classical cosmological observations.
Quantum effects play a crucial role in the dynamics of the early universe in many inflationary models. While the resulting observational signals can often be traced to quantum mechanics, the challenge is showing no classical mechanism could produce the same signal~\cite{Berera:1995ie,Berera:1998px,Green:2009ds,LopezNacir:2011kk,LopezNacir:2012rm,Turiaci:2013dka}. One proposal of this kind was made in~\cite{Green:2020whw}, where it was shown that the analytic structure of (non-Gaussian) correlation functions is different for classical and quantum theories when the correlations are produced by local evolution. The origin of this difference arises from the non-zero number of particles needed to produce classical density fluctuations, as illustrated in Figure~\ref{fig:Q_vs_C}. Creation of particles from the quantum vacuum violates energy conservation but is allowed because of the uncertainty principle. In contrast, physical particles will scatter and decay in an interacting theory while conserving energy, giving rise to poles at physical momentum for classical fluctuations. These results are consistent with a number of results relating scattering to the analytic structure of cosmological correlators~\cite{Maldacena:2011nz,Raju:2012zr,Arkani-Hamed:2015bza,Lee:2016vti,Arkani-Hamed:2018kmz,Arkani-Hamed:2018bjr,Benincasa:2018ssx,Baumann:2019oyu,Pajer:2020wxk,Baumann:2020dch,Bonifacio:2021azc,Cabass:2021fnw,Baumann:2021fxj,Baumann:2022jpr}. These features of the correlators are also directly tied to the prospects of observing the signal~\cite{Gleyzes:2016tdh,Flauger:2013hra,Baumann:2021ykm}.
The relationship between correlators and scattering is, of course, best understood in flat space. The LSZ reduction formula~\cite{Lehmann:1954rq} gives a rigorous map between in-out correlators and S-matrix elements. While measurements of flat space correlators are not subject to the same limitations as cosmology, we can still ask if the analytic structure of flat space in-in correlators encodes the quantum vacuum in the same way. Furthermore, one could hope to use LSZ to connect the difference between the analytic structure of classical and quantum correlators directly to scattering of particles in the initial state.
On a purely theoretical level, flat space provides a testing ground for our understanding of cosmological correlators. Yet, if in-in correlators are encoding the physics of the quantum vacuum, one would naturally like to understand how they are related to measurable quantities. Cosmological expansion is essential in producing fluctuations from the vacuum during inflation and does not occur in flat space. To make sense of their flat space analogues, we must introduce particle detectors localized in the spacetime (Unruh-de Witt detectors~\cite{Unruh:1976db,DeWitt:1980hx}). The very act of measuring the state of the quantum field at a localized point in spacetime introduces the energy and momentum needed to excite the vacuum (breaking the time and space translations). This observation was essential for making sense of Unruh radiation~\cite{Unruh:1976db} (i.e.~the Rindler temperature), another example of particle production in flat space. In that case, a thermal distribution of particles is seen by a constantly accelerating (Rindler) observer because of the energy and momentum needed to accelerate the detector in the first place~\cite{Unruh:1983ms,Kaplanek:2019dqu}. Similarly, measuring correlators at a fixed time also requires the injection of energy (by the uncertainty principle) and naturally explains the apparent particle production from nothing which is encoded in the in-in correlators.
The non-local correlations of a field in the quantum vacuum also generate entanglement between the various particle detectors used to detect them~\cite{VALENTINI1991321,Reznik:2002fz,Reznik:2003mnx,Lin:2007mu,Martin-Martinez:2012chf,Nambu:2013rta,Salton:2014jaa,Hummer:2015xaa,Pozas-Kerstjens:2015gta,Sachs:2017exo,Henderson:2018lcy}. As such, the detector entanglement represents a probe of the entanglement of the interacting vacuum of the fields themselves. Famously, the entanglement entropy of the fields on a finite region is expected to be proportional to the area for the quantum vacuum~\cite{Srednicki:1993im,Eisert:2008ur} and the volume for a generic excited state (see e.g.~\cite{Casini:2022rlv} for review). While the entanglement entropy is not a quantity we can easily measure (or calculate), naturally one would like to understand if the non-Gaussian signature of the quantum vacuum state is related.
In this paper, we will expose the connection between flat space scattering, entanglement, and cosmological observables through the properties of in-in correlators. While they are less natural observables in flat space than the S-matrix, we show that there is a precise link between the structure of poles in the in-in correlators and the associated scattering processes, as illustrated in Figure~\ref{fig:Q_vs_C}. This provides a robust demonstration that the poles appearing at physical momenta in classical states are directly tied to the decay or scattering of particles in the initial state. The poles are absent in the quantum vacuum because it contains no particles, connecting the analysis of~\cite{Green:2020whw} to flat space amplitudes.
\begin{figure}[h!]
\centering
\includegraphics[width=5in]{flat_quantum_figure.pdf}
\caption{Illustration of the difference between non-Gaussian in-in correlations of $\phi$ for quantum vacuum fluctuations (left) and classical fluctuations (right). Measuring quantum fluctuations of $\phi$ at three spacelike separated points corresponds to the creation of three particles from the vacuum, producing a total energy pole in the correlator. In contrast, classical fluctuations only occur in a state containing particles. Any local classical process that produces a total energy pole will also cause particles in the initial state to decay, producing additional three point correlations with poles at physical momenta.}
\label{fig:Q_vs_C}
\end{figure}
To make physical sense of these results, we show that the in-in correlators can be interpreted as the amplitudes to excite multiple UdW detectors localized at space-like separated points. These detectors then provide a natural connection between cosmological Bell-type tests and more typical characterizations in terms of entanglement. The entanglement of these detectors shares many similarities with the entanglement of the underlying field; yet, we can directly connect these properties to the underlying in-in correlators. We will see that the analytic structure of the correlators is directly related to short or long ranged entanglement of the detectors.
This paper is organized as follows: in Section~\ref{sec:Smatrix}, we will discuss the relationship between the S-matrix and the in-in correlators, demonstrating our main results about the analytic structure of these correlators. In Section~\ref{sec:detector}, we show how to interpret our results in terms of UdW detectors. We then show how these detectors are entangled in Section~\ref{sec:entanglement}, and conclude in Section~\ref{sec:conclusions}. Appendix~\ref{app:nonlocal} contains additional details about the relationship between enforcing causality of our classical theory and the existence of anti-particles.
\section{From In-In Correlators to the S-Matrix} \label{sec:Smatrix}
The process of creating particles from the vacuum is an inherently quantum mechanical phenomenon. It gives rise to structure in inflationary models~\cite{Mukhanov:1981xt,Hawking:1982cz,Guth:1982ec,Starobinsky:1982ee,Bardeen:1983qw} and Hawking radiation from black holes~\cite{Hawking:1975vcx}. In flat space, such a process is forbidden by energy conservation, but the amplitude is still formally well-defined as it is related to physical scattering processes by crossing symmetry. This statement can be made rigorously through the LSZ reduction formula~\cite{Lehmann:1954rq}.
We would like to understand how quantum fluctuations can be distinguished from classical (e.g. thermal) fluctuations. Classical fluctuations may occur in any spacetime and thus we can ask this question in flat space as well. We will show in this section that an isolated total energy pole in an equal-time correlator (in-in or in-out) is precisely a reflection of the amplitude for production of particles from the vacuum. We will then show that for classical fluctuations, additional poles arise from the on-shell scattering processes of particles in the initial state. This difference between quantum vacuum fluctuations and classical fluctuations is illustrated in Figure~\ref{fig:Q_vs_C}.
\subsection{The In-In Formalism} \label{sec:inin}
Cosmological correlators are described by (equal time) in-in correlation functions. In perturbation theory, these are defined as~\cite{Weinberg:2005vy,Weinberg:2006ac}
\begin{equation}\label{eq:inin}
\langle {\rm in}| Q(t) |{\rm in} \rangle=\left\langle\bar{T} \exp \left[i \int_{-\infty(1+i \epsilon)}^{t} H_{\mathrm{int}}(t') d t' \right] \, Q_{\mathrm{int}}(t) \, T \exp \left[-i \int_{-\infty(1-i \epsilon)}^{t} H_{\mathrm{int}}(t') d t' \right]\right\rangle \ ,
\end{equation}
where $H_{\mathrm{int}}(t) = \int d^3 x \sqrt{-g} {\cal H}_{\rm int} ({\vec x},t)$, $g$ is the determinant of the metric, ${\cal H}_{\rm int}({\vec x}, t)$ is the Hamiltonian density of the interaction Hamiltonian, and $Q_{\rm int}(t)$ is the operator $Q(t)$ in terms of the interaction picture fields. Computed in a quasi-de Sitter background for super-horizon modes (i.e.~points separated by super-horizon distances or Fourier modes with super-horizon wavelengths), the in-in correlators give the classical statistical correlations of the initial density fluctuations.
It is occasionally useful to express the in-in correlators in terms of commutators (although, technically, it only applies when $\epsilon = 0$). Expanding the time-ordered exponentials, one finds
\begin{subequations}
\begin{align}
\langle {\rm in}| Q(t) |{\rm in} \rangle &= \sum_{N= 0}^{\infty} i^{N} \int_{-\infty}^{t} d t_{N} \int_{-\infty}^{t_{N}} d t_{N-1} \cdots \int_{-\infty}^{t_{2}} d t_{1} \nonumber \\[4pt]
&\hspace{60pt} \times\Big\langle\big[H_{{\rm int}}\big(t_{1}\big) \big[H_{{\rm int}}\big(t_{2}\big) \cdots\big[H_{{\rm int}}\big(t_{N}\big), Q_{\mathrm{int}}(t) \big] \cdots\big]\big]\Big\rangle \ . \label{eq:commutator}
\end{align}
\end{subequations}
This representation is useful for two reasons. First, this expression makes causality manifest as the commutators must vanish outside the lightcone. Second, the non-zero commutator is the defining characteristic of a quantum theory and thus this representation is useful in isolating the quantum nature of the correlators.
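For example, at first order ($N=1$) this commutator form reduces, using $\langle Q H \rangle = \langle H Q \rangle^*$ for Hermitian operators, to
\begin{equation}
\langle {\rm in}| Q(t) |{\rm in} \rangle^{(1)} = i \int_{-\infty}^{t} dt_1 \, \Big\langle \big[H_{\rm int}(t_1),\, Q_{\mathrm{int}}(t) \big] \Big\rangle = 2\, {\rm Im} \int_{-\infty}^{t} dt_1 \, \big\langle Q_{\mathrm{int}}(t)\, H_{\rm int}(t_1) \big\rangle \ ,
\end{equation}
which is precisely the $2\,{\rm Im}$ form that appears in the explicit bispectrum computation below.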
Our goal in this paper is to understand what aspects of the in-in correlator reflect truly quantum fluctuations and what other aspects could arise purely classically. With this in mind, we will focus on the fluctuations of a single scalar field $\phi$, which may be represented as a quantum mechanical operator or a classical stochastic variable. Following~\cite{Green:2020whw}, we can describe the free classical or quantum theories using the mode expansion
\begin{equation}
\phi({\vec x},t) = \int \frac{d^3 k}{(2\pi)^3} e^{i {\vec k}\cdot {\vec x}} \frac{1}{\sqrt{2k}} [ a^\dagger_{-{\vec k}} \, e^{i k t} +a_{{\vec k}} e^{-i k t} ] \ ,
\end{equation}
where $k\equiv |\vec k|$. The distinction between quantum and classical is how we interpret $a_{\vec k}$ and $a^{\dagger}_{\vec k}$.
In the quantum theory, $a_{\vec k}$ and $a_{\vec k}^{\dagger}$ are operators satisfying
\begin{equation}
[a_{\vec k}, a^\dagger_{{\vec k}'}] = (2\pi)^3 \delta({\vec k} - {\vec k}') \qquad a_{\vec k} |0 \rangle = 0\,,
\end{equation}
where $|0 \rangle$ is the vacuum of the free theory. In contrast, in the classical theory they are only random variables obeying the statistics
\begin{equation}
\langle a^{\dagger}_{\vec k} a_{{\vec k}'}\rangle_c =\frac{1}{2} (2\pi)^3 \delta({\vec k}-{\vec k}') = \langle a_{{\vec k}'} a^{\dagger}_{\vec k} \rangle_c\, \ .
\end{equation}
In the free theory, these choices give the same equal-time correlators, which are (essentially) the only cosmological observables. Of course, in flat space, we would be free to directly measure the commutators of the operators or non-equal time correlators to expose the difference between classical and quantum mechanics, but we will restrict ourselves to equal times to parallel the cosmological correlators.
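The classical statistics above can be checked directly by Monte Carlo. Treating $a_{\vec k}$ as a complex Gaussian random variable for a single mode (dropping the $(2\pi)^3\delta$ normalization), the sketch below verifies that $\langle \phi(t) \phi(t') \rangle_c = \cos k(t-t')/(2k)$, which is manifestly symmetric under $t \leftrightarrow t'$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2.0, 400_000
# random complex amplitudes with <a* a>_c = <a a*>_c = 1/2 and <a a>_c = 0
a = (rng.normal(size=n) + 1j * rng.normal(size=n)) / 2.0

def phi(t):
    # classical mode function: a real stochastic field, single Fourier mode
    return (np.conj(a) * np.exp(1j * k * t) + a * np.exp(-1j * k * t)) / np.sqrt(2 * k)

t1, t2 = 0.3, 1.1
corr = np.mean(phi(t1) * phi(t2)).real
target = np.cos(k * (t1 - t2)) / (2 * k)   # symmetric in t1 <-> t2
```

The sample average `corr` agrees with `target` up to statistical noise, illustrating that the classical ensemble carries no information about operator ordering.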
We can start with a simple example for illustration: given a massless scalar $\phi$ with a cubic self interaction, ${\cal H}_{\rm int} = \frac{1}{3!} \mu \phi^3$, the in-in three-point function in terms of Fourier modes (the bispectrum) is given by
\begin{eqnarray}
\langle \phi(t, {\vec k}_1) \phi(t,{\vec k}_2) \phi(t,{\vec k}_3) \rangle &=& 2 {\rm Im} \int^t_{-\infty} dt'\frac{\mu}{8 k_1 k_2 k_3} e^{- i (k_1+k_2+k_3) (t-t')}\\
&=& - \frac{\mu}{4 k_1 k_2 k_3 (k_1+k_2+k_3) } \label{eq:vac_bi} ,
\end{eqnarray}
where $\langle Q(t) \rangle \equiv \langle \Omega | Q(t) | \Omega \rangle$ is the correlation in the interacting vacuum, $|\Omega \rangle$. We have only Fourier transformed the spatial coordinates by analogy with a typical cosmological correlator. Just like a cosmological correlator, this in-in correlation function exhibits a pole only in the total energy $k_t= (k_1+k_2+k_3)$. The presence of such a pole is a unique signature of the quantum vacuum and therefore we would like to better understand the physical significance of this correlation.
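The time integral leading to Eq.~(\ref{eq:vac_bi}) can be checked symbolically. Writing $u = t - t'$ and regulating with the $i\epsilon$ prescription, a short sympy computation (a sketch, not part of the original derivation) reproduces the $2\,{\rm Im}$ step:

```python
import sympy as sp

kt, u, eps = sp.symbols('k_t u epsilon', positive=True)
# with u = t - t', the time integral becomes int_0^oo e^{-(eps + i k_t) u} du,
# where eps > 0 implements the i*epsilon prescription
Iu = sp.integrate(sp.exp(-(eps + sp.I * kt) * u), (u, 0, sp.oo), conds='none')
pole = sp.limit(2 * sp.im(sp.expand_complex(Iu)), eps, 0, '+')   # the "2 Im" of the text
# pole = -2/k_t; multiplying by mu/(8 k1 k2 k3) gives -mu/(4 k1 k2 k3 k_t)
```

The only singularity is at $k_t \to 0$, confirming that the vacuum bispectrum has an isolated total energy pole.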
\vskip 5pt
For classical fluctuations, the appearance of additional poles can be seen by perturbatively solving the equations of motion with the same cubic interaction, ${\cal H}_{\rm int} = \frac{1}{3!} \mu \phi^3$, such that
\begin{equation}
\phi^{(2)}({\vec k},t) =\frac{\mu}{2} \int^t dt' G(k; t-t') \int \frac{d^3 p}{(2\pi)^3} \phi({\vec p}, t') \phi({\vec k}-{\vec p}, t')
\end{equation}
where $G(k,t) =\sin k t /k$ is the causal Green's function. Using this to calculate the bispectrum we find
\begin{equation}
\langle \phi(t, {\vec k}_1) \phi(t,{\vec k}_2) \phi(t,{\vec k}_3) \rangle_c = \frac{\mu}{16 k_1 k_2 k_3} \left(\frac{3}{ k_t } + \sum_{i=1}^3\frac{1}{k_t-2 k_i} \right) \ ,
\end{equation}
where we assumed the contribution from $t \to -\infty$ vanishes\footnote{For vacuum correlators, this is equivalent to the $i\epsilon$ prescription. For excited states, this will ultimately be tied to how the poles at physical momentum are resolved.}. We see that the classical example has poles both in the total energy and in folded configurations where $k_1 = k_2+k_3$ and permutations thereof. In this respect, we see that flat space in-in correlators exhibit the same structure as their cosmological counterparts~\footnote{In~\cite{Green:2020whw}, it was shown that this conclusion is an inevitable consequence of causality and Lorentz invariance. In Appendix~\ref{app:nonlocal}, we extend this argument to show that it is equivalent to the need for anti-particles in a relativistic quantum theory.}.
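A quick numerical comparison of the two closed-form expressions above (with $\mu = 1$) makes the difference in analytic structure explicit: approaching the folded configuration $k_1 \to k_2 + k_3$, the classical bispectrum diverges while the vacuum result stays finite:

```python
mu = 1.0

def bispec_quantum(k1, k2, k3):
    # vacuum result: a pole only at total energy k_t -> 0
    return -mu / (4 * k1 * k2 * k3 * (k1 + k2 + k3))

def bispec_classical(k1, k2, k3):
    # classical result: additional poles at the folded configurations k_t = 2 k_i
    kt = k1 + k2 + k3
    return mu / (16 * k1 * k2 * k3) * (3 / kt + sum(1 / (kt - 2 * k) for k in (k1, k2, k3)))

# approach the folded configuration k1 -> k2 + k3 = 2:
for d in (1e-2, 1e-4, 1e-6):
    q, c = bispec_quantum(2 + d, 1, 1), bispec_classical(2 + d, 1, 1)
```

As `d` shrinks, `c` grows like $1/d$ while `q` remains of order one, so the folded poles are a sharp diagnostic of the classical state.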
Naturally, we would also like to understand how one interpolates between the quantum mechanical and classical result. After all, we certainly live in a quantum universe and would like to understand how classical fluctuations would arise. We can make this connection by taking the quantum theory in the limit where every momentum state is highly occupied,
\begin{equation} \label{eq:n_state}
|n_{{\vec k}} \rangle =\frac{1}{\sqrt{n!} } \left( a^{\dagger}_{{\vec k}}\right)^n |0\rangle \quad \to \quad |n \rangle \equiv \bigotimes_{{\vec k}_i} |n_{{\vec k}_i} \rangle .
\end{equation}
Suppose a real field $\phi$ is in the $n$-particle state $|n \rangle$; then the operator $\hat \phi$ acts schematically as
\begin{eqnarray}
\phi({\vec k},t) |n \rangle &\to& \sqrt{n_{-{\vec k}}+1} f(k,t) |n_{-{\vec k}}+1\rangle |\hat n; -{\vec k} \rangle + \sqrt{n_{{\vec k}}} f^*(k,t) |n_{{\vec k}}-1\rangle |\hat n; {\vec k} \rangle\,\\
\langle n | \phi({\vec k},t) &\to& \sqrt{n_{{\vec k}}+1} f^*(k,t) \langle n_{{\vec k}}+1 | \langle \hat n; {\vec k} | + \sqrt{n_{-{\vec k}}} f(k,t) \langle n_{-{\vec k}}-1 |\langle \hat n; - {\vec k} | ,
\end{eqnarray}
where $f(k,t) = e^{i k t} /\sqrt{2k}$ is the positive frequency classical solution, and we defined
\begin{equation}
|\hat n; {\vec q} \rangle \equiv \bigotimes_{{\vec k}_i \neq {\vec q}} |n_{{\vec k}_i} \rangle \ .
\end{equation}
The two point function in this state is therefore
\begin{equation}
\langle \phi({\vec k}, t) \phi({\vec k}',t') \rangle = \left( (n+1)f^*(k,t) f(k',t') + n f(k,t) f^*(k',t') \right) (2\pi)^3 \delta({\vec k}+{\vec k}') \ ,
\end{equation}
which reproduces our classical statistics when taking $n \to \infty$ with $f({\vec k},t) \sqrt{n}$ fixed. In this limit, the two point function becomes
\begin{equation}
\langle \phi({\vec k}, t) \phi({\vec k}',t') \rangle \to n \left(f(k,t) f^*(k',t')+f^*(k,t) f(k',t') \right) (2\pi)^3 \delta({\vec k}+{\vec k}') \ .
\end{equation}
This expression is symmetric in $t \leftrightarrow t'$ and thus shows that $\phi({\vec k},t)$ and $\phi({\vec k}',t')$ commute, as we would expect for a classical variable and not for a quantum mechanical operator.
\subsection{Relation to the S-Matrix}
Now we want to understand how the poles in our in-in correlators are related to physical scattering processes.
\subsubsection*{Quantum Vacuum}
The most direct relationship between scattering and correlation functions is the LSZ reduction formula~\cite{Lehmann:1954rq}. Given a time-ordered (quantum) vacuum correlation function in flat space, we can extract the associated S-matrix elements via
\begin{align}
\left\langle \{ p_{j} \}_{\text{out}}\right| \{ q_{i} \}_{\text{in}}\rangle=&\int \prod_{i=1}^{m}\left\{\mathrm{d}^{4} x_{i} \frac{i e^{i q_{i} \cdot x_{i}}\left(-\square_{x_{i}}+m^{2}\right)}{(2 \pi)^{\frac{3}{2}} Z^{\frac{1}{2}}}\right\} \prod_{j=1}^{n}\left\{\mathrm{d}^{4} y_{j} \frac{i e^{-i p_{j} \cdot y_{j}}\left(-\square_{y_{j}}+m^{2}\right)}{(2 \pi)^{\frac{3}{2}} Z^{\frac{1}{2}}}\right\} \nonumber \\
& \times \left\langle \Omega\left|\mathrm{T} \phi\left(x_{1}\right) \ldots \phi\left(x_{m}\right) \phi\left(y_{1}\right) \ldots \phi\left(y_{n}\right)\right| \Omega\right\rangle \ ,
\end{align}
where we are using the metric signature $(-+++)$. The scattering states that appear on the left of this expression are defined by
\begin{equation}
| \{ q_{i} \}_{\text{in}}\rangle = \lim_{t\to -\infty} \prod_i \sqrt{2\omega_{{\vec q}_i}} a^\dagger_{{\vec q}_i} |\Omega\rangle \qquad \left\langle \{ p_{j} \}_{\text{out}}\right| = \lim_{t\to +\infty} \langle \Omega | \prod_j \sqrt{2\omega_{{\vec p}_j}} a_{{\vec p}_j} \ .
\end{equation}
Because the vacuum, $|\Omega\rangle$, is annihilated by $a_{\vec k}$, isolating the positive frequency via the Fourier transform (i.e.~integrating the correlation function with $\int dt e^{-i \omega t}$ for $\omega > 0$) also isolates a particle in the initial state.
In perturbation theory, the time-ordered and in-in correlators are closely related, allowing us to directly related the poles in each to the associated S-matrix elements. To calculate the in-out correlators, we need the time ordered Green's function
\begin{equation}
\langle 0| T \phi(t_1,{\vec k}) \phi(t_2,{\vec k}') |0 \rangle = \frac{1}{2 k} e^{-i k |t_1-t_2|} \, (2\pi)^3 \delta({\vec k}+{\vec k}') \ .
\end{equation}
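This propagator follows directly from the mode functions $f(k,t)=e^{ikt}/\sqrt{2k}$: for $t_1 > t_2$ the time ordering picks out the Wightman function $f^*(k,t_1)f(k,t_2)$. A minimal symbolic sketch of this identity (using sympy; the variable names are ours):

```python
import sympy as sp

k, t1, t2 = sp.symbols('k t1 t2', positive=True)

# positive-frequency mode function f(k,t) = e^{i k t} / sqrt(2k)
f = lambda t: sp.exp(sp.I*k*t)/sp.sqrt(2*k)

# For t1 > t2 the time-ordered product reduces to <0|phi(t1) phi(t2)|0> = f*(t1) f(t2)
wightman = sp.conjugate(f(t1)) * f(t2)
expected = sp.exp(-sp.I*k*(t1 - t2))/(2*k)

assert sp.simplify(wightman - expected) == 0
```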
We can calculate an equal time in-out correlator for our example with a cubic interaction to find the same result as the in-in correlator
\begin{eqnarray}
\langle T \phi(0, {\vec k}_1) \phi(0,{\vec k}_2) \phi(0,{\vec k}_3) \rangle &=& - i \int^{\infty}_{-\infty} dt \frac{\mu}{8 k_1 k_2 k_3} e^{- i (k_1+k_2+k_3)|t| }\\
&=& - \frac{\mu}{4 k_1 k_2 k_3 (k_1+k_2+k_3) } \ .
\end{eqnarray}
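The $|t|$ integral converges once the standard $i\epsilon$ prescription, $k_t \to k_t - i\epsilon$, is applied. A quick sympy sketch verifying the quoted result (our notation, with $k_t = k_1+k_2+k_3$):

```python
import sympy as sp

t, eps = sp.symbols('t epsilon', positive=True)
k1, k2, k3, mu = sp.symbols('k1 k2 k3 mu', positive=True)
kt = k1 + k2 + k3

# e^{-i k_t |t|} is even in t; regulate with k_t -> k_t - i*epsilon so the
# half-line integral converges, then take epsilon -> 0^+
half_line = sp.integrate(sp.exp(-(eps + sp.I*kt)*t), (t, 0, sp.oo), conds='none')
full_line = sp.limit(2*half_line, eps, 0, '+')      # = 2/(i k_t)

correlator = sp.simplify(-sp.I*mu/(8*k1*k2*k3)*full_line)
expected = -mu/(4*k1*k2*k3*kt)

assert sp.simplify(correlator - expected) == 0
```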
To see the connection to the S-matrix elements, we need to consider unequal times such that
\begin{align}
\langle T \phi(t_1, {\vec k}_1) \phi(t_2,{\vec k}_2) \phi(t_3,{\vec k}_3) \rangle =& - i \int^{\infty}_{-\infty} dt \frac{\mu}{8 k_1 k_2 k_3} e^{-i k_1 |t_1-t| } e^{-i k_2 |t_2-t| }e^{-i k_3 |t_3-t| }\\
=& - \frac{\mu}{8 k_1 k_2 k_3} \bigg(\frac{e^{-i k_2 (t_2-t_1) - i k_3(t_3-t_1) } }{k_1+k_2+k_3} \\
& + \frac{e^{i k_1 (t_1-t_2) -i k_3 (t_3-t_2)} - e^{-i k_2 (t_2-t_1) - i k_3(t_3-t_1) } }{-k_1+k_2+k_3} \\
& + \frac{e^{i k_1 (t_1-t_3) + i k_2 (t_2-t_3)} - e^{i k_1 (t_1-t_2) - i k_3(t_3-t_2) } }{-k_1-k_2+k_3} \\
& + \frac{e^{i k_1 (t_1-t_3) + i k_2 (t_2-t_3)} }{k_1+k_2+k_3} \bigg) \ ,
\end{align}
where we have assumed $t_1<t_2<t_3$ without loss of generality.
Since our above expression assumes $t_1 < t_2 < t_3$, applying LSZ while maintaining this order is consistent if $t_1$ is associated with the initial state and $t_2, t_3$ are the final states. We can calculate the S-matrix elements by first taking the Fourier transform,
\begin{equation}
\langle T \phi(\omega_3,\vec{k}_3)\phi(\omega_2,\vec{k}_2)\phi(\omega_1,\vec{k}_1)\rangle^{\prime} = \int dt_1 dt_2 dt_3 e^{i (-\omega_1 t_1+\omega_2 t_2+\omega_3 t_3)}\langle T \phi(t_3,\vec{k}_3)\phi(t_2,\vec{k}_2)\phi(t_1,\vec{k}_1)\rangle \ .
\end{equation}
Performing the integral imposing $t_1 < t_2 < t_3$ gives the term of interest
\begin{equation}\label{eq:inin_fourier}
\begin{aligned}
&\langle T \phi(\omega_3,\vec{k}_3)\phi(\omega_2,\vec{k}_2)\phi(\omega_1,\vec{k}_1)\rangle^{\prime} \\ &\supset \frac{i\mu}{8k_1k_2k_3}2\pi\delta(\omega_2+\omega_3-\omega_1)\Big(-\frac{1}{\omega_3-k_3}\frac{1}{\omega_3-k_3+\omega_2-k_2}\frac{1}{k_1+k_2+k_3}-\\&\frac{1}{\omega_3-k_3}\frac{1}{\omega_3+\omega_2-k_1}\frac{1}{-k_1+k_2+k_3}+\frac{1}{\omega_3-k_3}\frac{1}{\omega_3-k_3+\omega_2-k_2}\frac{1}{-k_1+k_2+k_3}-\\&
\frac{1}{\omega_3-k_1-k_2}\frac{1}{\omega_3-\omega_2-k_1}\frac{1}{-k_1-k_2+k_3}+\frac{1}{\omega_3-k_3}\frac{1}{\omega_2+\omega_3-k_1}\frac{1}{-k_1-k_2+k_3}\\&-\frac{1}{\omega_3-k_1-k_2}\frac{1}{\omega_3+\omega_2-k_1}\frac{1}{k_1+k_2+k_3}\Big) \ .
\end{aligned}
\end{equation}
Now we want to isolate the part of the correlator that encodes the ${\vec k}_1 \to {\vec k}_2+{\vec k}_3$ scattering amplitude. Using LSZ, we see
\begin{equation}
\begin{aligned}
\langle k_2,k_3|k_1\rangle &= \lim_{\omega_i \to k_i}(\omega_1-k_1)(\omega_2-k_2)(\omega_3-k_3)(8 k_1 k_2 k_3) \langle T \phi(\omega_3,\vec{k}_3)\phi(\omega_2,\vec{k}_2)\phi(\omega_1,\vec{k}_1)\rangle
\end{aligned}
\end{equation}
Since each factor of $(\omega_i -k_i)$ will vanish in the limit $\omega_i \to k_i$, it is easy to see that only terms in the in-out correlator with three poles in the on-shell limit will contribute to the amplitude. As a result, only two terms from Equation~(\ref{eq:inin_fourier}) contribute to the amplitude
\begin{equation}\label{eq:LSZ_amp}
\begin{aligned}
\langle k_2,k_3|k_1\rangle
&= i\mu (2\pi) \lim_{\omega_i \to k_i}\delta(\omega_2+\omega_3-\omega_1)
\Big(-\frac{\omega_3-k_3}{\omega_3-k_3}\frac{\omega_1-k_1}{\omega_3+\omega_2-k_1}\frac{\omega_2-k_2}{-k_1+k_2+k_3}\\
&\hskip 5cm +\frac{\omega_3-k_3}{\omega_3-k_3}\frac{\omega_2-k_2}{\omega_3-k_3+\omega_2-k_2}\frac{\omega_1-k_1}{-k_1+k_2+k_3}\Big)\\
&=-i\mu(2\pi)\delta(k_2+k_3-k_1) \ .
\end{aligned}
\end{equation}
We see that the poles in $k_1-k_2-k_3$ are precisely those that give the $k_1 \to k_2 + k_3$ scattering amplitude.
The equal-time in-in and in-out correlation functions are the same, yet in both cases the poles responsible for a non-zero scattering amplitude vanish at equal time. This is a reflection of the fact that no scattering process takes place in the vacuum: producing the scattering process requires one of the operators to be placed at $t\to -\infty$ and the other two at $t \to +\infty$. The total energy pole that survives at equal time reflects only the $0\to 3$ or $3\to 0$ processes that are forbidden by energy conservation.
\subsubsection*{Classical Fluctuations}
Time-ordered (in-out) correlation functions are also calculable for classical statistics. Using the time-ordered Green's function, we can calculate the leading correction to $\varphi$ as
\begin{equation}
\phi^{(2)}({\vec k},t) =\frac{\mu}{2} \int dt' G_{\rm F}(k,t-t')\int \frac{d^3 p}{(2\pi)^3} \phi({\vec p},t') \phi({\vec k}-{\vec p},t') \ ,
\end{equation}
where we are using the Feynman propagator
\begin{equation}
G_{\rm F}(k,t) = \frac{1}{2k} e^{-i k |t|} \ .
\end{equation}
We can then calculate the time-ordered correlator by substituting this expression and applying the classical (Gaussian) statistics,
\begin{align}\label{eq:classical_IO}
\langle T \phi(t, {\vec k}_1) \phi(t,{\vec k}_2) \phi(t,{\vec k}_3) \rangle_c &= - i \int^{\infty}_{-\infty} dt' \frac{\mu}{16 k_1 k_2 k_3} e^{- i k_1|t-t'|-i(k_2+k_3)(t-t') }+ {\rm permutations}\\
&= - \frac{\mu}{16 k_1 k_2 k_3 }\left(\frac{3}{ k_t} - \frac{1}{k_1-k_2-k_3}-\frac{1}{k_2-k_3-k_1}- \frac{1}{k_3-k_1-k_2} \right) \ . \nonumber
\end{align}
As in the quantum case, this is precisely the same as the in-in correlator, and we see the appearance of poles at physical momenta.
In order to understand these new poles, we first notice that the LSZ formula does not apply straightforwardly to our classical correlator. Concretely, we recall that the relation between $\omega>0$ ($\omega <0$) and particles in the in-state (out-state) relied on the fact that the quantum vacuum is annihilated by the negative frequency mode, $a_{\vec k} |\Omega \rangle = 0$. To make sense of what LSZ would imply for our classical correlations, let us interpret the classical correlators as arising from a highly occupied state, $|n\rangle$. If we apply LSZ in this state, we get
\begin{equation}\label{eq:LSZ_class}
\begin{aligned}
\int \mathrm{d}^{4} z \frac{i e^{i q \cdot z}\left(-\square_{z}+m^{2}\right)}{(2 \pi)^{\frac{3}{2}} Z^{\frac{1}{2}}} & \left\langle n_{\text{out}}\left|\mathrm{T} \varphi\left(x_{1}\right) \ldots \varphi\left(x_{m}\right)
\varphi\left(z\right)\varphi\left(y_{1}\right) \ldots \varphi\left(y_{n}\right)\right| n_{\text{in}}\right\rangle \\
=&\sqrt{2\omega_p}\left\langle n_{\text{out}} \left|\mathrm{T} \varphi\left(x_{1}\right) \ldots \varphi\left(x_{m}\right)
\varphi\left(y_{1}\right) \ldots \varphi\left(y_{n}\right) a_p^\dagger \right| n_{\text{in}}\right\rangle\\
&-\sqrt{2\omega_p} \left\langle n_{\text{out}} \left| a_p^\dagger \mathrm{T} \varphi\left(x_{1}\right) \ldots \varphi\left(x_{m}\right)
\varphi\left(y_{1}\right) \ldots \varphi\left(y_{n}\right) \right| n_{\text{in}} \right\rangle \ ,
\end{aligned}
\end{equation}
where we have labeled $|n_{\text{out}}\rangle$ and $|n_{\text{in}}\rangle$ to indicate that they are defined at $t = +\infty$ and $-\infty$ respectively. We see that the LSZ formula does not isolate a particle in an in- or out-state, but instead isolates a particle in the in-state minus a hole in the out-state (or vice versa) for every field.
A related consequence of this analogue of Equation~(\ref{eq:LSZ_class}) is that equal time correlators can now exhibit physical poles. Specifically, we can use LSZ to calculate the S-matrix element,
\begin{equation}
\lim_{t\to +\infty} \left(\langle n_{\text{out}} | a_{k_2} a_{k_3}a^{\dagger}_{-k_1}\right) \, | n_{\text{in}}\rangle = \langle n_{k_2}+1, n_{k_3}+1, n_{k_1}-1 | n_{\text{in}}\rangle \propto {\cal A}_{1\to 2} \delta({\vec k}_1 -{\vec k}_2 -{\vec k}_3) \ .
\end{equation}
This is just the amplitude for the decay of a particle with momentum ${\vec k}_1$ in the initial state to particles with momenta ${\vec k}_2$ and ${\vec k}_3$ in the final state. Since all the operators acting on $\langle n|$ as $t\to +\infty$ do not vanish, there is no reason to expect the equal-time correlator to vanish either.
The most straightforward consequence is that the poles at physical momenta seen in the equal-time in-out (and consequently in-in) correlators, Equation~(\ref{eq:classical_IO}), are the same poles responsible for the $1\to 2$ and $2\to1$ S-matrix elements determined by LSZ. This can be seen by directly applying the (naive) LSZ formula to the unequal-time classical in-out correlator. The result mirrors our quantum calculation. The full expression for the non-equal time correlator is quite long, but the term responsible for the ``S-matrix'' element is now,
\begin{align}\label{eq:class_unequal}
\langle \varphi_{k_1}(t_1)\varphi_{k_2}(t_2)\varphi_{k_3}(t_3)\rangle^{\prime}_c \supset& \frac{\mu}{16k_1k_2k_3}\Bigg(\frac{\cos[k_2\left(t_1-t_2\right)+k_3\left(t_1-t_3\right)]}{k_1-k_2-k_3}
\\
&+\frac{\cos[k_1\left(t_1-t_2\right)+k_3\left(t_2-t_3\right)]}{-k_1+k_2+k_3}
+\frac{\cos\left[k_1\left(t_1-t_3\right)+k_2\left(t_3-t_2\right)\right]}{-k_1+k_2+k_3} \Bigg) \nonumber \\
&\xrightarrow[]{t_1=t_2=t_3} \frac{\mu}{16k_1k_2k_3} \frac{1}{-k_1+k_2+k_3} \ .
\end{align}
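Setting $t_1=t_2=t_3$ sends every cosine above to one, and the three terms collapse to the single quoted pole. A one-line symbolic check of this partial-fraction collapse:

```python
import sympy as sp

k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)

# At t1 = t2 = t3 every cosine equals one, leaving a sum of three poles
equal_time = 1/(k1 - k2 - k3) + 1/(-k1 + k2 + k3) + 1/(-k1 + k2 + k3)

assert sp.simplify(equal_time - 1/(-k1 + k2 + k3)) == 0
```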
In other words, if we were to repeat the LSZ procedure, as in Equation~(\ref{eq:LSZ_amp}), on the first two lines of~(\ref{eq:class_unequal}), we would recover a non-zero result. The final line shows that this term survives the equal-time limit, in contrast to the quantum case. This provides a concrete demonstration that the physical poles in the in-in correlators can be interpreted as the decay of particles in the initial state, as was argued in~\cite{Green:2020whw} for inflationary correlators.
\subsection{Resolving and Interpreting Poles at Physical Momenta}
The connection between in-in correlators and S-matrix elements provides some useful intuition. However, if the in-in correlators represent physical measurements, we would not expect them to have true poles at physical momenta, as the answers to physical questions are rarely infinite\footnote{It is also known that these divergences are not regulated by standard renormalization techniques, as was seen from studying the divergences of perturbation theory in non-Bunch Davies dS vacua~\cite{Banks:2002nv}. }. In the
case of inflationary correlators, it was argued in~\cite{Green:2020whw} that the S-matrix elements that are responsible for the poles also cause the particles to decay, therefore introducing an effective width. As a result, one might expect the physical poles to be replaced by a resonance, both in the inflationary context and in flat space. While this resolution was seen in explicit examples~\cite{LopezNacir:2011kk,Turiaci:2013dka}, it is less clear that this is the only resolution, particularly in cosmology. In an expanding universe, the energy density blue shifts in the past and diverges as $t\to -\infty$. As a result, cosmological correlators are also regulated by the finite duration of inflation~\cite{Holman:2007na,Agullo:2010ws,Ashoorioon:2010xg,Ganc:2011dy,Chialva:2011hc,Agullo:2012cs}.
Flat space provides a very useful testing ground for the regulation of these physical poles. First, there is no analogue of the blueshift and the energy does not diverge at early times. As a result, the $t\to -\infty$ limit is not necessarily unphysical. In addition, in flat space, decays can be forbidden by energy conservation\footnote{As energy is not conserved in cosmology, decays can always occur.} and thus can eliminate the role of a finite width. This is easily achieved by considering massive scalars in place of our massless correlators. Using this approach, we will see that it is not strictly cosmological expansion that is responsible for physical divergences as $t \to -\infty$.
\subsubsection*{Massive Particles}
In flat space, the decays of particles are controlled by energy conservation. The simplest way to test the connection between the physical poles and decay is to consider massive particles such that $E_k = \sqrt{k^2 +m^2}$ and the positive frequency mode functions become
\begin{equation}
\phi({\vec k}, t) = \frac{1}{\sqrt{2 E_k}} e^{i E_k t} \ .
\end{equation}
With the modified mode function, we can calculate the equal time in-in bispectrum as before. For quantum and classical statistics, one finds
\begin{eqnarray}
\langle T \phi(0, {\vec k}_1) \phi(0,{\vec k}_2) \phi(0,{\vec k}_3) \rangle &=& - \frac{\mu}{4 E_1 E_2 E_3 (E_1+E_2+E_3) } ,
\end{eqnarray}
and
\begin{equation}
\begin{aligned}
\langle \phi(t, {\vec k}_1) \phi(t,{\vec k}_2) \phi(t,{\vec k}_3) \rangle_c = \frac{\mu}{16 E_1 E_2 E_3} &\Bigg(\frac{3}{ E_t } -\frac{1}{E_1-E_2-E_3}\\
&-\frac{1}{E_2-E_1-E_3}-\frac{1}{E_3-E_1-E_2} \Bigg) \ ,
\end{aligned}
\end{equation}
respectively. While we see the poles when $E_1+E_2=E_3$, and permutations thereof, these poles cannot be reached at physical energies for the same kinematic reason that the lightest massive particle is stable. As a result, we see the connection between the stability of these particles and the absence of poles at physical momenta.
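The kinematic statement can be checked numerically: with ${\vec k}_1 = {\vec k}_2 + {\vec k}_3$ imposed by momentum conservation, the triangle inequality for relativistic energies forbids $E_1 = E_2 + E_3$ for any $m>0$. A quick numerical sketch (the mass and momentum ranges are illustrative):

```python
import math
import random

m = 0.5          # illustrative mass
random.seed(0)

def energy(k, m=m):
    return math.sqrt(sum(c*c for c in k) + m*m)

# With k1 = k2 + k3 imposed by momentum conservation, E2 + E3 > E1 always
# holds for m > 0, so the pole E1 = E2 + E3 sits outside the physical region.
for _ in range(10000):
    k2 = [random.uniform(-3.0, 3.0) for _ in range(3)]
    k3 = [random.uniform(-3.0, 3.0) for _ in range(3)]
    k1 = [a + b for a, b in zip(k2, k3)]
    assert energy(k2) + energy(k3) > energy(k1)
```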
At first sight, this indeed suggests that the finite width of the particle is sufficient to avoid poles at physical momenta. At least for the three point function, eliminating the width also eliminated the pole. However, if we continue to higher point correlators, even stable particles can lead to poles at physical momenta. We do not find such poles at four points with a $\lambda \phi^4$ interaction. However, the classical five point correlator due to a contact interaction ${\cal H}_{\rm int} = \frac{1}{5!} \frac{\phi^5}{\Lambda}$ takes the form
\begin{equation}
\langle \phi(t,{\vec k}_1) .. \phi(t,{\vec k}_5) \rangle_c = \frac{1}{256\Lambda E_1 E_2 E_3 E_4 E_5} \bigg( \frac{5}{E_{\rm tot}} + \sum_{i} \frac{3}{(E_{\rm tot}-2 E_i)} + \sum_{i\neq j} \frac{1}{E_{\rm tot} - 2 E_i -2 E_j} \bigg) \ ,
\end{equation}
where $E_{\rm tot}= \sum_i E_i$. The final term contains a pole that is consistent with the allowed kinematic region of $2\to 3$ scattering. In this sense, we can see that nothing prevents us from reaching this pole for physical momentum. In addition, since the particles don't decay, there is no finite width that needs to be included in this calculation. Finally, there is no analogue of the blue-shifting of energies at early times that demands that we regulate the early time limit of this calculation. Clearly we need another physical interpretation for how this pole arises.
\subsubsection*{Finite Time of Interactions}\label{sec:finite_time}
The origin of the physical poles in the classical case can be understood from the integral expression for the correlator,
\begin{equation}
\langle \phi(t,{\vec k}_1) .. \phi(t,{\vec k}_5) \rangle_c = \frac{1}{16 \Lambda E_1.. E_5} \sum_i \int_{-\infty}^t dt' \sin(E_i (t-t')) \prod_{j\neq i} \cos(E_j (t-t')) \ .
\end{equation}
When we sit on a pole where $E_{\rm tot} - 2 E_i -2 E_j = 0$, there is a non-oscillatory contribution to the integrand such that the integral diverges as $t' \to -\infty$. This is a reflection of the fact that there is now an on-shell process that changes the classical distribution. Specifically, our Gaussian state $|n\rangle$ is not a stationary configuration in the presence of these interactions. Instead, the particles can now scatter, exchanging energy, momentum, and even particle number. Given an infinite amount of time to interact, we should expect the final distribution of particles to be a stationary configuration, e.g.~a thermal distribution. Stationary states do not normally exhibit the long range correlations of the quantum vacuum fluctuations\footnote{We are not claiming long range correlations are impossible in general, but the consistent appearance of these poles in perturbation theory suggests that it does not arise from a generic local Hamiltonian near a Gaussian fixed point.}.
We should therefore create the initial Gaussian state at a finite time in the past, $t_i < t$, such that the distribution at time $t$ is only weakly non-Gaussian. For simplicity, we return to the bispectrum
\begin{align}
\langle \phi(t, {\vec k}_1) \phi(t,{\vec k}_2) \phi(t,{\vec k}_3) \rangle_c =& \frac{\mu}{16 k_1 k_2 k_3} \Bigg(\frac{3\left(1- \cos(k_t \Delta t)\right)}{ k_t } \\
& +\bigg( \frac{1- \cos\left((k_1-k_2-k_3) \Delta t\right)}{k_1-k_2-k_3}+{\rm permutations} \bigg) \Bigg)
\end{align}
where $\Delta t = t-t_i$. Unlike a mass or width which moves the poles to complex momenta, now we see that there are no poles at all. Instead, when we take $k_1 \to k_2+k_3$
\begin{equation}
\frac{\mu}{16 k_1 k_2 k_3} \frac{1- \cos\left((k_1-k_2-k_3) \Delta t\right)}{k_1-k_2-k_3} \to \frac{\mu}{32 (k_2+ k_3)k_2 k_3} (\Delta t)^2 (k_1-k_2-k_3) \ ,
\end{equation}
which vanishes as $k_1-k_2-k_3 \to 0$. This correlator gets its largest contribution when $(k_1-k_2-k_3) \approx \Delta t^{-1}$ and is enhanced relative to the total energy pole by a factor $\Delta t \, k_t$. As a result, the signal in the folded configurations will still dominate over the equilateral~\cite{Babich:2004gb}, much like in the cosmological setting~\cite{Green:2020whw}. This is consistent with more general expectations about the perturbative structure of cosmological correlators. On general grounds, even the total energy pole is expected to vanish for cosmological correlators in a UV complete theory~\cite{Maldacena:2015iua,Arkani-Hamed:2015bza}. Yet, in perturbation theory the poles accurately capture the observable signals~\cite{Smith:2006ud}.
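The behaviour near the folded configuration can be confirmed symbolically. Writing $x \equiv k_1-k_2-k_3$, a short sympy sketch shows the pole is removed at finite $\Delta t$ and recovers the leading folded behaviour:

```python
import sympy as sp

x, dt = sp.symbols('x Delta_t', positive=True)   # x = k1 - k2 - k3

term = (1 - sp.cos(x*dt))/x

# No pole survives at finite Delta_t ...
assert sp.limit(term, x, 0) == 0

# ... and the leading folded behaviour is (Delta_t)^2 * x / 2
lead = sp.series(term, x, 0, 2).removeO()
assert sp.simplify(lead - dt**2*x/2) == 0
```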
\section{Particle Detectors and the In-In Formalism}\label{sec:detector}
The structure of in-in correlators in flat space is largely the same as in (quasi-) de Sitter space. However, without cosmological particle production, we don't have an obvious interpretation of the correlator in terms of some physical process. One might worry that this is some mathematical device that lacks a physical reality outside of cosmology. In this section, we will show how the in-in correlator arises in physical models of particle detection. This will allow us to give a clear physical meaning to the flat space correlators and their poles.
\subsection{Unruh -- de Witt Detectors}
We will first review the Unruh-de Witt (UdW) model for particle detection. The central idea is that we have a single qubit that registers whether or not there was a particle in some localized region of space-time. To do so, we place it in the zero-state initially ($| 0 \rangle_{\boldsymbol{s}}$). We then couple this qubit to our field, $\phi({\vec x}, t)$, for some finite amount of time and inside some localized region of space. After we turn off the coupling to $\phi$, we should have a non-zero probability of finding our qubit in the one-state, $| 1 \rangle_{\boldsymbol{s}}$, if there was a particle (or were particles) in the detector while it was on.
We implement this model by introducing an interaction Hamiltonian that couples our qubit to $\phi$. Following \cite{Unruh:1983ms}, the detector is described by a Hamiltonian
\begin{equation}
H_{\rm D}=\lambda \, \epsilon(t) \int d^3 x \phi({\vec x},t)\left[\psi({\vec x}) {\hat {\boldsymbol{s}}}+\psi^{*}({\vec x}) {\hat {\boldsymbol{s}}}^{\dagger}\right] \ ,
\end{equation}
where $\lambda$ is the coupling constant, $\epsilon(t)$ is a function that defines how we turn on/off the detector, and $\psi({\vec x})$ is a function that defines the spatial resolution of our detector such that $\psi(x)$ vanishes outside the detector. The detector state is defined by
\begin{equation}
{\hat {\boldsymbol{s}}} | 0 \rangle_{\boldsymbol{s}} ={\hat {\boldsymbol{s}}}^{\dagger}| 1 \rangle_{\boldsymbol{s}} =0 \qquad {\hat {\boldsymbol{s}}}^{\dagger} | 0 \rangle_{\boldsymbol{s}} = | 1 \rangle_{\boldsymbol{s}} \qquad {\hat {\boldsymbol{s}}} | 1 \rangle_{\boldsymbol{s}}= | 0 \rangle_{\boldsymbol{s}} \ ,
\end{equation}
and the free field is again given by
\begin{equation}
\phi({\vec x},t) = \int \frac{d^3 k}{(2\pi)^3} \frac{1}{\sqrt{2 k}} e^{i {\vec k}\cdot {\vec x}} [ a^\dagger_{-{\vec k}} e^{i k t} + a_{{\vec k}} e^{-i k t} ] \ .
\end{equation}
Even though we are giving a quantum description of the detector, the fluctuations of the field $\phi$ may be quantum or classical.
Let us check that, at leading order in $\lambda$, this detector works as promised. We will put the scalar field into a single particle state
\begin{equation}\label{eq:prob_detect}
|\phi_1({\vec y},t_0) \rangle = \phi({\vec y}) |0\rangle = \int \frac{d^3 p}{(2\pi)^3} e^{-i{\vec p}\cdot {\vec y}} f(p, t_0) a^{\dagger}_{\vec p} | 0\rangle \ ,
\end{equation}
where $f(p,t) = e^{i p t} /\sqrt{2p}$ as before. If we now turn on the detector interaction, the probability for finding an excited detector at leading order, ${\cal O}(\lambda^2)$, is
\begin{equation}
P_{1} = |{\cal A}_{1;1\to 0}|^2 +|{\cal A}_{1;1\to 2}|^2
\end{equation}
where
\begin{eqnarray}
{\cal A}_{1;1\to 0} &=& \langle 0| \lambda \int dt \epsilon(t) \int d^3 x \psi^{*}({\vec x}) \int \frac{d^3 p}{(2\pi)^3} e^{i {\vec p} \cdot({\vec x}-{\vec y})} f^*(p,t) f(p,t_0) |0\rangle \\
&=& \lambda \int dt \epsilon(t) \int d^3 x \psi^{*}({\vec x}) G_{\rm F}({\vec x}, t; {\vec y}, t_0)
\end{eqnarray}
and
\begin{eqnarray}
{\cal A}_{1;1\to 2} &=& \langle 2| \lambda \int dt \epsilon(t) \int d^3 x \psi^{*}({\vec x}) \int \frac{d^3 p}{(2\pi)^3} e^{i {\vec p} \cdot({\vec x}-{\vec y})} \phi(-{\vec p},t) \phi_0({\vec p}) a^{\dagger}_p |0\rangle \\
&=& \lambda \int dt \epsilon(t) \int d^3 x \psi^{*}({\vec x})\langle 2 | \phi(x,t) \phi(y,t_0) |0 \rangle \ .
\end{eqnarray}
The first term, ${\cal A}_{1;1\to 0}$, is the amplitude that the particle at ${\vec y}$ and time $t_0$ is absorbed at time $t$ in the detector located at ${\vec x}$. This term captures the physics of interest, namely the detection of the particle that we put in the initial state. The second term, ${\cal A}_{1;1\to 2}$, is the amplitude that the detector creates an anti-particle\footnote{The real scalar $\phi$ is its own anti-particle, but this distinction is helpful for gaining intuition. See Appendix~\ref{app:nonlocal} for more details. } from the vacuum while registering a particle in the detector. This contribution is not the detection of our initial particle, but is the detection of a particle created from the vacuum by the detector.
One important aspect of the UdW detector is that it shows concretely that the act of measuring the field $\phi$ changes the state of the system. In particular, if we try to measure a particle in the vacuum (of the free theory), a non-zero amplitude for exciting the detector (at order $\lambda$) requires the creation of an anti-particle. We can see this, in analogy with Equation~(\ref{eq:prob_detect}), by projecting onto a 1-particle final state,
\begin{equation}
{\cal A}_{1; 0 \to 1} = \lambda \int dt \epsilon(t) \int d^3 x \psi({\vec x}) \langle 1 | \phi(x,t) |0 \rangle \ .
\end{equation}
We can interpret this as follows: the act of performing the measurement absorbs a particle from a particle-anti-particle pair, creating an outgoing anti-particle of equal and opposite momentum in the process. Importantly, it is the act of localizing the measurement in time and space that provides the energy and momentum needed to excite the vacuum. This explanation coincides with our intuition from the uncertainty principle.
To confirm the interpretation, let us consider what happens as we change the function $\epsilon(t)$ to be less localized in time, thus corresponding to smaller energies by the uncertainty principle. If we create the one-particle state at a time $t_0 < 0$ and measure at $t\approx 0$ with
\begin{equation}
\epsilon(t) = \frac{1}{\sqrt{2\pi} \sigma_t} e^{-t^2/(2 \sigma_t^2)} \qquad \psi({\vec x}) = \delta({\vec x}) \ ,
\end{equation}
then the amplitude becomes
\begin{equation}
{\cal A}_{1; 0 \to 1} = \lambda \int dt \epsilon(t) \frac{1}{2 E_{\vec k}} e^{i E_{\vec k} t} = \frac{\lambda}{2 E_k} \, e^{-E_{\vec k}^2 \sigma_t^2/2} \ .
\end{equation}
This is, again, nothing more than the uncertainty principle, as our resolution in energy is inversely proportional to our resolution in time, $\sigma_E \propto \sigma^{-1}_t$. Given that we start in the vacuum (zero energy), we must have a large uncertainty in energy to create particles. For example, if we were to work with massive particles such that $E_{\vec k} \geq m $ for all ${\vec k}$, the probability of finding a particle in the vacuum is exponentially suppressed unless $\sigma_E \gg m$ or, equivalently, $\sigma_t \ll 1/m$.
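The Gaussian time integral quoted above is standard and can be verified symbolically; a short sympy sketch confirming the exponential suppression (the coupling prefactors are dropped):

```python
import sympy as sp

t, E = sp.symbols('t E', real=True)
sigma = sp.symbols('sigma', positive=True)

# normalized Gaussian window epsilon(t) of width sigma_t
eps_t = sp.exp(-t**2/(2*sigma**2))/(sp.sqrt(2*sp.pi)*sigma)

# Fourier transform of the window at frequency E
amp = sp.integrate(eps_t*sp.exp(sp.I*E*t), (t, -sp.oo, sp.oo))

# Detection is exponentially suppressed unless sigma_t << 1/E
assert sp.simplify(amp - sp.exp(-E**2*sigma**2/2)) == 0
```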
\subsection{Particle Detection and Cosmological Correlators}
Now suppose we want to measure correlations of these vacuum fluctuations using our particle detector\footnote{Here we are using particle detectors in flat space, but it is interesting to also consider these detectors as a probe of inflation directly~\cite{Kaplanek:2019vzj}.}. We can imagine placing $N$ UdW detectors at distinct points in space ${\vec x}_i$ such that the detectors do not overlap, $\psi({\vec x}-{\vec x}_i)\psi({\vec x}-{\vec x}_j) = 0$ for $i\neq j$. Furthermore, we will assume that $\epsilon(t)$ is sufficiently localized in time such that the detectors are all space-like separated when they are on. We denote $|\Omega \rangle$ as the interacting vacuum of $\phi$ when $\lambda = 0$, so that the amplitude for all $N$ detectors to be in the $| 1 \rangle_{\boldsymbol{s}}$ state {\it and} for $\phi$ to remain in the vacuum is
\begin{equation}
{\cal A}_{N;\Omega} =\left( \prod_i \int dt'_i d^3 x'_i \lambda \epsilon(t-t'_i) \psi({\vec x}_i -{\vec x}_i') \right) \langle \Omega | \phi(t'_1,{\vec x}'_1) .. \phi(t_N',{\vec x}'_N) |\Omega \rangle + {\rm local} \ .
\end{equation}
We see that this amplitude is proportional to the in-in correlator, convolved with the detector. Because the detectors are spacelike separated, from Equation~(\ref{eq:inin}) we see that any additional terms associated with the commutator of the interaction Hamiltonian, $H_{\rm int}$, with the detector Hamiltonian, $H_{\rm D}$, will give purely local terms (i.e.~these contributions are equivalent to field redefinitions of $\phi$ and do not produce a total energy pole). We can write this in terms of the in-in correlation function in Fourier space as
\begin{equation}
{\cal A}_{N; \Omega} =\left(\lambda^N \prod_i \int dt'_i d^3 k_i e^{-i {\vec k}_i \cdot ({\vec x}_i-{\vec x}_i')} \tilde \epsilon(t-t'_i) \tilde \psi({\vec k}_i) \right) \langle \Omega | \phi(t'_1,{\vec k}_1) .. \phi(t_N',{\vec k}_N) |\Omega \rangle \end{equation}
where $\tilde \psi({\vec k})$ is the Fourier transform of $\psi({\vec x})$. We see that the Fourier modes that contribute to this amplitude are only those that appear in the detector itself via $\tilde \psi(k)$.
In the case where $\epsilon(t) \approx \delta(t)$ and $\psi({\vec x}) \approx \delta({\vec x})$ with $t'_i \to t$, the amplitude becomes proportional to the equal time in-in correlator in position space:
\begin{equation}
{\cal A}_{N;\Omega} \to \lambda^N \langle \Omega | \phi(t,{\vec x}_1) .. \phi(t,{\vec x}_N) |\Omega \rangle \ .
\end{equation}
For quantum vacuum fluctuations, the amplitude for three coincident particles becomes
\begin{eqnarray}
{\cal A}_{3;\Omega} &=& \lambda^3 \langle \phi({\vec x}_1,t) \phi({\vec x}_2,t) \phi({\vec x}_3,t) \rangle \label{eq:3_detectors}\\ &=& -\lambda^3\frac{\mu}{4} \left[ \prod_i \int \frac{d^3 k_i}{(2\pi)^3} e^{i {\vec k}_i \cdot {\vec x}_i} \right] \frac{1}{k_1 k_2 k_3 (k_1+k_2+k_3)} (2\pi)^3 \delta({\vec k}_1+{\vec k}_2 +{\vec k}_3)\\
&=&-\lambda^3\frac{\mu}{4} \int \frac{d^3 k_1}{(2\pi)^3} e^{i {\vec k}_1\cdot {\vec x}_{13} }\int \frac{d^3 k_2} {(2\pi)^3}e^{i {\vec k}_2\cdot {\vec x}_{23} } \frac{1}{(k_1+k_2+k_3)k_1 k_2 k_3 } \ ,
\end{eqnarray}
where we defined ${\vec x}_{ij} \equiv \vec x_i - \vec x_j$. In order to make the ${\vec k}_i $-integrals manageable, let us assume $x_2$ is far from $x_{1}$ and $x_3$ so that $x_{13} \ll x_{23}$. We can then expand in $k_1 \approx k_3 \gg k_2$ to find
\begin{align}
\langle \phi({\vec x}_1,t) \phi({\vec x}_2,t) \phi({\vec x}_3,t) \rangle &\approx -\frac{\mu}{4} \int \frac{d^3 k_1}{(2\pi)^3} e^{i {\vec k}_1\cdot {\vec x}_{13} }\int \frac{d^3 k_2} {(2\pi)^3}e^{i {\vec k}_2\cdot {\vec x}_{23} } \frac{1}{2k_1^3 k_2} \\
&\approx \frac{\mu }{32 \pi^4} \frac{\log x_{13}}{x_{23}^2} \ . \label{eq:quantum_pos}
\end{align}
For comparison, the two point function of the massless field $\phi$ will fall off like $1/x^2$ ($\phi$ has dimension one in 3+1 dimensions). As we separate the detectors, it is important that the contribution to the amplitude decays with the same power of the distance as it would in the free theory.
In contrast, let us see what happens to the detector in the presence of classical fluctuations. We again take the in-in three point function and focus on the contribution from a physical pole (the total energy pole gives the same result as in the quantum theory),
\begin{align}
\langle \phi({\vec x}_1,t) \phi({\vec x}_2,t) \phi({\vec x}_3,t) \rangle_c \supset& \frac{\mu}{16} \left[ \prod_i \int \frac{d^3 k_i}{(2\pi)^3} e^{i {\vec k}_i \cdot {\vec x}_i} \right] \frac{1}{k_1 k_2 k_3 (k_1+k_2-k_3)} (2\pi)^3 \delta({\vec k}_1+{\vec k}_2 + {\vec k}_3) \nonumber \\
\supset&\frac{\mu}{16} \int \frac{d^3 k_1}{(2\pi)^3} e^{i {\vec k}_1\cdot {\vec x}_{13} }\int \frac{d^3 k_2} {(2\pi)^3}e^{i {\vec k}_2\cdot {\vec x}_{23} }\frac{1}{k_1 k_2 k_3} \frac{1}{k_1+k_2-k_3} \ ,
\end{align}
where in the second line $k_3 = |{\vec k}_1+{\vec k}_2|$. Again we will assume the point $x_2$ is far from $x_{1}$ and $x_3$. As such, we consider the limit $x_{13} \ll x_{23}$ by expanding in $k_1 \approx k_3 \gg k_2$:
\begin{eqnarray}
\langle \phi({\vec x}_1,t) \phi({\vec x}_2,t) \phi({\vec x}_3,t) \rangle_c &\approx &\frac{\mu}{16} \int \frac{d^3 k_1}{(2\pi)^3} e^{i {\vec k}_1\cdot {\vec x}_{13} }\int \frac{d^3 k_2} {(2\pi)^3}e^{i {\vec k}_2\cdot {\vec x}_{23} } \frac{1}{k_1^2 k_2^2(1-\cos \theta)} \\
&\approx & \frac{\mu}{64 \pi^4} \frac{1}{x_{13} x_{23}}\left(- \log \left( \frac{\theta_{\rm min}^2}{2} \right) \, f(\hat x_{23} \cdot \hat x_{13}) \right) \label{eq:classical_position} \ ,
\end{eqnarray}
where $\theta_{\rm min}$ is the minimum angle between ${\vec k}_1$ and ${\vec k}_2$ and $f(\hat x_{23} \cdot \hat x_{13})$ is a function of the angle between $\vec x_{13}$ and $\vec x_{23}$ with $f(1) = 1$. This formula is noteworthy for two reasons: first, it has a pole as $x_{13} \to 0$ which enhances the size of the non-Gaussian signal. Second, it falls off more slowly than the Gaussian two-point function and will therefore give the dominant contribution to any long distance correlations.
\subsection{Classical Interpretation}
We have seen how the detector model provides a physical interpretation of the quantum in-in correlators. For quantum fluctuations, we can naturally understand the role of the detector in exciting the vacuum and giving rise to particles. The uncertainty principle tells us that localizing a measurement in time means that we are no longer in an energy eigenstate (in this case, the vacuum). This is an inevitable feature of a quantum measurement involving an operator that doesn't commute with the Hamiltonian.
Stated this way, it is the interpretation of the classical measurement that requires explanation. Classical measurements do not have to disturb the state and therefore the response of our detector should be a fundamental property of the classical system. On the other hand, our derivation of the amplitude for exciting the detector did not assume the in-in correlator was calculated in the vacuum and would be equally applicable in the classical limit using
\begin{equation}\label{eq:classical_inin}
\langle \phi({\vec k}_1) .. \phi({\vec k}_N) \rangle_{c} = \lim_{n\to \infty} 2 {\rm Im} \langle n | \phi({\vec k}_1) .. \phi({\vec k}_N) \int^t dt' H_{\rm int} (t') | n \rangle \ .
\end{equation}
For this to be consistent with something classical, it must be how we interpret the state of the detector that has to change. For states close to the quantum vacuum, we demonstrated that the UdW detector is designed to be excited when a particle is present in the initial state. In the classical case, we have lots of particles in the initial state, even in the absence of fluctuations, and therefore the response of the detector to this state requires more care.
Consider what happens as we evolve the state $|n \rangle$ in the presence of the detector Hamiltonian, $H_D$. Working to linear order in $\lambda$, we get at time $t$,
\begin{align}
| 0 \rangle_{\boldsymbol{s}} \, |n\rangle \to& | 0 \rangle_{\boldsymbol{s}} \, |n\rangle + i \lambda | 1 \rangle_{\boldsymbol{s}} \int d^3 x \psi(x) \hat \phi({\vec x},t) |n \rangle \\
=& | 0 \rangle_{\boldsymbol{s}} \, |n\rangle \\
&+ i \lambda | 1 \rangle_{\boldsymbol{s}} \int \frac{d^3 k}{(2\pi)^3} \frac{\psi(-{\vec k})}{\sqrt{2 E_k}} \left( e^{i k t} \sqrt{n_{-{\vec k}}+1} |n_{-{\vec k}}+1\rangle |\hat n; -{\vec k} \rangle +e^{-i k t} \sqrt{n_{{\vec k}}} |n_{{\vec k}}-1\rangle |\hat n; {\vec k} \rangle \right) \nonumber \ .
\end{align}
The second term shows that the detector will register a ``particle'' for both an increase and decrease in the total number of particles. In addition, we notice that in the limit $n_k \to \infty$, this is approximately
\begin{align}
\lim_{n \to \infty} | 0 \rangle_{\boldsymbol{s}} \, |n\rangle
\to& | 0 \rangle_{\boldsymbol{s}} \, |n\rangle+ i \lambda | 1 \rangle_{\boldsymbol{s}} \int \frac{d^3 k}{(2\pi)^3} \frac{\psi(-{\vec k})}{\sqrt{2 E_k}} \sqrt{n_k} \cos(k t) |n\rangle \ .
\end{align}
Concretely, our state is responding to the classical (real) oscillations of $\phi$ around the mean density.
In the free theory, there was no problem working in this very carefully defined excited state, $|n\rangle$. Including interactions not only produces higher-point correlations, it also takes the state away from the static initial state we took in the Gaussian theory. Since the detectors are essentially registering changes in the state away from the Gaussian initial state, this classical evolution alone should be sufficient to excite the detectors. Evolving the state according to the interacting Hamiltonian, we find
\begin{align}
|n\rangle \to& |n\rangle + \int dt' H_{\rm int} (t') | n \rangle \\
=&|n\rangle + \frac{\mu}{3!} \int dt' \int d^3 x \phi^3({\vec x},t') |n\rangle \\
=& |n\rangle + \frac{\mu}{3!} \int dt' \bigg( \prod_i \int \frac{d^3 k_i}{(2\pi)^3}\frac{1}{\sqrt{2 E_{k_i}}} \Big( e^{i k_i t'} \sqrt{n_{-{\vec k}_i}+1} |n_{-{\vec k}_i}+1\rangle |n_{{\vec k}_i}\rangle
\\
& \qquad +e^{-i k_i t'} \sqrt{n_{{\vec k}_i}} |n_{{\vec k}_i}-1\rangle|n_{-{\vec k}_i}\rangle \Big) \bigg)\delta^{(3)}(\sum_i {\vec k}_i) |\hat n; {\vec k}_1,{\vec k}_2,{\vec k}_3, -{\vec k}_1,-{\vec k}_2,-{\vec k}_3\rangle \ . \label{eq:classical_evolution}
\end{align}
We see that the time evolution of the state changes the number (density) of particles of different momenta. This change to the density is precisely of the form that our UdW detector will register as a ``particle''. Classically, the detector is not responsible for exciting the fluctuations; those were already created by the classical time evolution of $\phi$.
An additional confusion with this description is the meaning of the total energy pole. The poles at physical momenta capture the decay of the particles in the initial state which leads to density fluctuations by changing the number densities of particles with different momenta. The total energy pole does not have such a simple description classically. In the quantum theory, we interpreted this pole as a violation of energy conservation, via the uncertainty principle, which would be forbidden in a classical theory. However, the quantum interpretation was needed because the quantum vacuum is an energy eigenstate of the full interacting Hamiltonian. In contrast, what we see in Equation~(\ref{eq:classical_evolution}) is that the state $|n \rangle$ is not a stationary state of the classical Hamiltonian. In the presence of the interaction, the system wants to evolve towards equilibrium as explained in Section~\ref{sec:finite_time}. The total energy pole is therefore not a violation of energy conservation but a reflection that our classical probabilistic system included states of different energies initially.
\section{Detector Entanglement}\label{sec:entanglement}
Entanglement has become an increasingly valuable probe of fundamental physics. It can reveal the structure of quantum field theories and states of matter in flat space~\cite{Casini:2022rlv}. In curved space times, it is central to our understanding of black holes~\cite{Page:1993wv,Almheiri:2020cfm,Bousso:2022ntt} and particle production~\cite{Maldacena:2012xp}. Entanglement is even thought to encode the causal structure of spacetime itself~\cite{VanRaamsdonk:2010pw,Maldacena:2013xja,Rangamani:2016dms}.
In quantum field theory, the entanglement entropy of the vacuum is expected to follow an area law~\cite{Srednicki:1993im,Eisert:2008ur}, while it should scale with the volume in a generic excited state. This offers a different starting point for understanding the nature of the quantum vacuum than the structure of poles in cosmological correlators. In this section, we will explore to what degree these properties are related.
In order to make the comparison, we will consider $N$ UdW-detectors and some free field $\phi$ such that the state of the detectors is
\begin{equation}\label{eq:N_detectors}
\begin{aligned}
\left|\Psi_{\phi,{\rm UdW}} \right\rangle &= (\mathbf{1}-C)|0_{\{i\}}\rangle -i \sum_j \Phi_{j}|\Omega\rangle |1_j, 0_{\{i, \hat j \}} \rangle-\sum_{j, k}\Phi_{j} \Phi_{k}|\Omega\rangle |1_j 1_k, 0_{\{i, \hat j, \hat k\}} \rangle +O\left(\lambda^{3}\right) \ ,
\end{aligned}
\end{equation}
where $\Phi_{i}=\lambda \int d t \epsilon_{i}(t)\int d^{3} x \psi_{i}(\vec{x}) \phi(\vec{x}, t)$, $C$ is a normalization constant, and the detector states are defined by
\begin{equation}
|1_j, 0_{\{i, \hat j \}} \rangle \equiv {\hat {\boldsymbol{s}}}^{\dagger}_j | 0 \rangle_{\boldsymbol{s}_j} \bigotimes_{i \neq j} | 0 \rangle_{\boldsymbol{s}_i} \qquad |1_j 1_k, 0_{\{i, \hat j, \hat k\}}\rangle \equiv {\hat {\boldsymbol{s}}}^{\dagger}_j | 0 \rangle_{\boldsymbol{s}_j} \otimes {\hat {\boldsymbol{s}}}^{\dagger}_k | 0 \rangle_{\boldsymbol{s}_k} \bigotimes_{i \neq j,k} | 0 \rangle_{\boldsymbol{s}_i} \ .
\end{equation}
This setup is the $N$-particle generalization of the detector configuration described in~\cite{Reznik:2002fz,Reznik:2003mnx}.
We will be interested in understanding the entanglement of the individual detectors as a probe of the state of $\phi$. As the detectors already encode the cosmological correlators, this offers a simple approach to connecting these correlations with the entanglement. Our approach will be qualitatively similar to {\it entanglement harvesting}~\cite{Salton:2014jaa,Hummer:2015xaa}. However, given our cosmological motivations, we will not be interested in whether the detectors themselves are in a uniquely quantum state (cosmological observations are always classical after all). Instead, we want to use them as a probe of the quantum nature of $\phi$ itself.
We will assume that we know that $\phi$ is in the interacting vacuum. This is a useful assumption as we can project the state in Equation~(\ref{eq:N_detectors}) onto the vacuum, so that the state of the detector becomes
\begin{align}
\left|\Psi_{\rm UdW}\right\rangle \equiv \langle \Omega \left|\Psi_{\phi,{\rm UdW}} \right\rangle &= (\mathbf{1}-C)|0_{\{i\}} \rangle-\sum_{j,k} \langle \Omega | \Phi_{j}^{+} \Phi_{k}^{+}|\Omega \rangle |1_j 1_k, 0_{\{i, \hat j, \hat k\}} \rangle +O\left(\lambda^{3}\right) \ .
\end{align}
As a theoretical tool, this projection has the advantage that it leaves the detectors in a pure quantum state. In other words, we have projected onto the product state so that the density matrix takes the form $\rho_{\phi,{\rm UdW}}=\rho_{{\rm UdW}}\bigotimes | \Omega\rangle\langle \Omega|$. As a result, the entanglement between the detector and the field degrees of freedom has been set to zero, $S_{\text{detector}}= S_{\phi}=0$. Note that this procedure does not represent a true measurement, as the purpose of the detectors is to measure the state of the field $\phi$, which we presumably do not know directly.
\subsection{Position Space Entanglement}
First, we will assume that each detector is localized at a point in space-time so that the detector is described by $\psi_i({\vec x}) \approx \delta({\vec x}-{\vec z}_i)$ for some point ${\vec z}_i$ and $\epsilon(t) \approx \delta(t)$. Note that with these choices, $\lambda$ has units of distance. We now want to use the entanglement between the UdW detectors as a proxy for the entanglement in the underlying state of $\phi$. Having localized the detectors in space, it is natural to consider entanglement between the detectors localized in two regions $A$ and $B$.
For simplicity, let us take the region $A$ as a sphere of radius $R$ and $B = \overline{A}$ is the region outside the sphere. Now we want to define the state of the detectors located inside $A$ and $B$, $\vec x_i \in A$ and $\vec y_j \in B$, as follows:
\begin{align}
|0,0\rangle&\equiv |0_{\{ i \}} \rangle \\
|\{ {\vec x}_n \} ,0\rangle &\equiv |1_{{\vec x}_1} ..1_{{\vec x}_n}, 0_{\{i, \hat x_{1},..,\hat x_n \}} \rangle \\
|0,\{ {\vec y}_N \} \rangle &\equiv |1_{{\vec y}_1} ..1_{{\vec y}_N}, 0_{\{i, \hat y_{1},..,\hat y_N\}} \rangle \\
|\{ {\vec x}_n \},\{ {\vec y}_N \}\rangle &\equiv |1_{{\vec x}_1} ..1_{{\vec x}_n},1_{{\vec y}_1} ..1_{{\vec y}_N}, 0_{\{i, \hat x_{1},..,\hat x_{n},\hat y_1,..,\hat y_{N} \}} \rangle
\end{align}
so that
\begin{equation}\label{eq:detector_state_position}
\begin{aligned}
\left|\Psi_{\rm UdW}\right\rangle=&|0,0\rangle+\sum_{\{x_n\} \neq \varnothing} a_{\{{\vec x}_n\}}|\{
{\vec x}_n \}, 0\rangle+\sum_{\{{\vec y}_N\} \neq \varnothing} b_{\{{\vec y}_N\}}|0, \{{\vec y}_N\} \rangle\\
&+\sum_{\{{\vec x}_n\},\{{\vec y}_N\} \neq \varnothing} c_{\{{\vec x}_n\}, \{{\vec y}_N\}}|\{{\vec x}_n\}, \{{\vec y}_N\}\rangle \ .
\end{aligned}
\end{equation}
The entanglement between $A$ and $B$ can be determined from the reduced density matrix of the detectors in $A$ (inside the sphere), obtained by tracing over the detectors in $B$ (outside the sphere). The contributions from $a_{\{{\vec x}_n\}}$ and $ b_{\{{\vec y}_N\}}$ can be removed by a change of basis (as they do not entangle detectors in $A$ with detectors in $B$). The contribution to a non-trivial density matrix comes instead from
\begin{equation}
\rho_{\{{\vec x}_n\},\{{\vec x}_m\}} = \sum_{\{{\vec y}_N\}} c_{\{{\vec x}_n\},\{{\vec y}_N\}} c^{\dagger}_{\{{\vec x}_m\},\{{\vec y}_N\}}|\{{\vec x}_n\} \rangle \langle \{{\vec x}_m\}| \ .
\end{equation}
In the free theory, these coefficients are determined by Wick contractions alone,
\begin{equation}
c_{\{{\vec x}_n\} ,\{ {\vec y}_N \}} = \prod_{i,j} \langle \Phi_{{\vec x}_i} \Phi_{{\vec y}_j} \rangle + {\rm permutations} \ .
\end{equation}
For a massive theory, these two point functions are exponentially suppressed and therefore
\begin{equation}
c_{{\vec x}_i ,{\vec y}_j } \propto e^{-m |{\vec x}_i-{\vec y}_j|} \ .
\end{equation}
Any non-trivial entanglement between the detectors in $A$ and $B$ will therefore be exponentially suppressed as well.
For massless fields, correlations fall off as powers of the distance, and the detectors are therefore more entangled than in the massive case. Expanding in $\lambda$, the leading contribution is from
\begin{equation}\label{eq:free_2pt}
c_{{\vec x}_i,{\vec y}_j} =\langle \Phi_{{\vec x}_i} \Phi_{{\vec y}_j} \rangle = \frac{\lambda^2}{|\vec x_i-\vec y_j|^2} \ ,
\end{equation}
where we assumed we are in $3+1$ dimensions. The key question we want to understand is how $A$ and $B$ are entangled. We are particularly interested in how the reduced density matrix of the detectors within $A$ depends on their proximity to the boundary with $B$. In a massive QFT, the exponential decay of the two-point correlators ensures that entanglement is short ranged, which underlies the famous area law in the quantum vacuum. We now want to understand what happens to the range of entanglement of our detectors.
To understand the range of entanglement of the detectors for massless $\phi$, we will consider a small number of localized detectors in $A$ entangled with detectors in $B$. The simplest such possibility is two detectors, one in $A$ and one in $B$. The state $| \Psi_{\rm UdW} \rangle$ of these detectors gives us a density matrix at ${\cal O}(\lambda^4)$
\begin{equation}
\rho^{(AB)}=\left(\begin{array}{cccc}
1-c_{{\vec x},{\vec y}}^2 & 0 & 0 & -c_{{\vec x},{\vec y}} \\
0 &0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
-c_{{\vec x},{\vec y}} & 0 &0 & c_{{\vec x},{\vec y}}^2
\end{array}\right) \ ,
\end{equation}
where the matrix indices are $|0_{\vec x} 0_{\vec y}\rangle$, $|1_{\vec x} 0_{\vec y}\rangle$, $|0_{\vec x} 1_{\vec y}\rangle$ and $|1_{\vec x} 1_{\vec y}\rangle$. This is, of course, the density matrix of a pure state\footnote{Notice this density matrix differs from those in similar two-detector models as in~\cite{Reznik:2002fz,Reznik:2003mnx}. The difference arises because we have projected onto the ground state of $\phi$ to arrive at a pure state, rather than tracing over $\phi$, which results in a mixed state.}. Tracing over $B$ produces the reduced density matrix for the detector in $A$,
\begin{equation}
\rho^{(A)}= {\rm Tr}_B \rho^{(AB)} =\left(\begin{array}{cc}
1-c_{{\vec x},{\vec y}}^2 & 0 \\
0 &c_{{\vec x},{\vec y}}^2
\end{array}\right) \ ,
\end{equation}
and entanglement entropy
\begin{equation}
S^A_{\rm ent}= - {\rm Tr} \rho^{(A)} \log \rho^{(A)} = - c_{{\vec x},{\vec y}}^2 (\log c_{{\vec x},{\vec y}}^2 - 1) +{\cal O}(\lambda^6) \ .
\end{equation}
We see that in the limit $y \to \infty$, holding ${\vec x}$ fixed, $S^A_{\rm ent} \propto y^{-4} \log y \to 0$.
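As a sanity check on these expressions, one can build the pure two-detector state numerically, trace out $B$, and compare the exact entanglement entropy with the small-$c$ expansion above. The following is an illustrative sketch (not part of the paper's derivation); the value of $c$ is made up.

```python
import numpy as np

# Numerical sketch (illustrative): the pure state
# |psi> = sqrt(1 - c^2)|00> - c|11> reproduces rho^(AB) to O(lambda^4).
c = 1e-2                                  # stands in for c_{x,y}; assumed small
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(1 - c**2), -c    # basis order |00>, |01>, |10>, |11>
rho_AB = np.outer(psi, psi)

# Partial trace over B: rho_A[i,j] = sum_b rho_AB[(i,b),(j,b)]
rho_A = np.einsum('ibjb->ij', rho_AB.reshape(2, 2, 2, 2))

# Exact entropy vs the expansion S = -c^2 (log c^2 - 1)
S_exact = -sum(p * np.log(p) for p in np.linalg.eigvalsh(rho_A) if p > 0)
S_approx = -c**2 * (np.log(c**2) - 1)
print(S_exact, S_approx)
```

The diagonal entries of `rho_A` come out as $1-c^2$ and $c^2$, matching the reduced density matrix above, and the two entropies agree to the stated order.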
Now let us consider the generalization of this case where we take a single detector in $A$ at location ${\vec x}$ with a large number of detectors in $B$, $N_B$, that are uniformly distributed in space. The entanglement between the detector in $A$ and any detector in $B$ is the same as in our example above, such that the reduced density matrix is again diagonal with $\rho_{00} = 1-\rho_{11}$ and
\begin{equation}
\rho_{11} \approx \lambda^4 \sum_{{\vec y}_j} \frac{1}{|{\vec x} - {\vec y}_j|^4} \ .
\end{equation}
First let us place the detector in $A$ near the center of the sphere so that $x \ll R$. Using the large $N_B$ limit to replace the sum by an integral, the leading contribution to the reduced density matrix becomes
\begin{equation}\label{eq:N_B}
\rho_{11} \approx \frac{4\pi \lambda^4 N_B}{V} \int_R^\infty r^2 dr \frac{1}{r^4} = \frac{4\pi \lambda^4}{R} n_B \ ,
\end{equation}
where $n_B = N_B/V$ is the number density of detectors in $B$. In contrast, a point near the boundary receives a much larger contribution. For example, if $x \approx R$ we can write
\begin{equation}
\rho_{11} \approx \frac{2 \pi \lambda^4 N_B}{V} \int^\infty_{1/\Lambda} r^2 dr \frac{1}{r^4} = 2\pi \lambda^4 \Lambda n_B \ ,
\end{equation}
where $\Lambda^{-1}$ is the smallest distance between a detector in $B$ and ${\vec x}$. So far, the entanglement of a few detectors inside $A$, for a free theory of $\phi$, is consistent with the intuition that entanglement mostly arises from points near the boundary of $A$, or that entanglement in the vacuum is short ranged\footnote{We can also repeat this argument for a generalized free field (e.g.~\cite{Dymarsky:2014zja}) by substituting $|{\vec x}-{\vec y}|^{-2} \to |{\vec x}-{\vec y}|^{-2\Delta}$ in Equation~(\ref{eq:free_2pt}), where $\Delta$ is the scaling dimension of the operator. It is interesting to note that the unitarity bound $\Delta\geq 1$ ensures that the entanglement of the detectors is short ranged.}.
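The radial integrals above are elementary, but they can be cross-checked with a quadrature. The sketch below is illustrative only (the parameter values are made up, not from the text):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check of the center-of-sphere result:
# rho_11 ~ (4 pi lambda^4 N_B / V) * int_R^inf dr r^2 / r^4 = 4 pi lambda^4 n_B / R.
lam, n_B, R = 0.1, 1.0, 2.0               # made-up illustrative values
integral, _ = quad(lambda r: r**2 / r**4, R, np.inf)
rho11_numeric = 4 * np.pi * lam**4 * n_B * integral
rho11_closed = 4 * np.pi * lam**4 * n_B / R
print(rho11_numeric, rho11_closed)
```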
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{entangle.pdf}
\caption{Entanglement between detector in region $A$, located at ${\vec x}$, and two detectors in $B$, located at ${\vec y}_1$ and ${\vec y}_2$, produced by the three point interaction ${\cal H}_{\rm int} = \mu \phi^3$. Time runs in the vertical direction such that the correlation between detectors on the equal time surface is due to interactions in the past. }
\label{fig:entangle}
\end{figure}
Now we would like to understand how the range of entanglement changes with interactions. We will again focus on the cases where the underlying QFT has a non-trivial three-point function. The contributions of interest will arise from entanglement induced between a single detector in $\vec x \in A$, now with two detectors in $B$ at positions ${\vec y}_1$ and ${\vec y}_2$. The interaction introduces a non-trivial amplitude for exciting the three detectors simultaneously,
\begin{equation}
c_{{\vec x},\{ {\vec y}_1, {\vec y}_2\}} = \lambda^3 \langle \Omega | \Phi({\vec x}) \Phi({\vec y}_1) \Phi({\vec y}_2) | \Omega \rangle \ ,
\end{equation}
where we have used Equation~(\ref{eq:3_detectors}) to relate the in-in correlator and the entanglement between detectors. This contribution to the state of the detectors is illustrated in Figure~\ref{fig:entangle}. Tracing over the detectors in $B$ we arrive again at a diagonal density matrix for the single detector in $A$ with $\rho^{(A)}_{00}=1-\rho^{(A)}_{11}$, but now with
\begin{equation}
\rho^{(A)}_{11} = c_{{\vec x},{\vec y}_1}^2 + c_{{\vec x},{\vec y}_2}^2 + c_{{\vec x},\{ {\vec y}_1, {\vec y}_2\}}^2 \ .
\end{equation}
Since the three point functions are different for classical and quantum statistics of $\phi$, we must consider the entanglement separately as well. First, we consider the case where $\phi$ is in the interacting (quantum) vacuum with a non-zero three-point function given by Equation~(\ref{eq:quantum_pos}). We are specifically interested in the case where ${\vec x}$ is located near the center of the sphere and the two detectors of $B$ are close together so that $|{\vec y}_1-{\vec y}_2| \ll |{\vec x}-{\vec y}_1| \approx |{\vec x}-{\vec y}_2|$. In this limit, we find
\begin{equation}
c^{\rm (quantum)}_{{\vec x},\{ {\vec y}_1, {\vec y}_2\}} \approx \lambda^3 \frac{\mu }{32\pi^4} \frac{\log |{\vec y}_1-{\vec y}_2|}{|{\vec x}-{\vec y}_1|^2} \ .
\end{equation}
We notice that the contribution from the interaction is suppressed relative to the contribution from the free theory. Concretely, in the limit $y_1, y_2 \to \infty$, we find the scaling behavior
\begin{equation}
c^{\rm (quantum)}_{{\vec x},\{ {\vec y}_1, {\vec y}_2\}}\propto \lambda^3 \mu y_1^{-2} \ll c_{{\vec x},{\vec y}_1} \propto \lambda^2 y_1^{-2} \ ,
\end{equation}
where we used $\mu \lambda \ll 1$, which is required for perturbative control. The entanglement in the region $A$ is therefore dominated by the free theory and is again short ranged. This is consistent with our expectations from QFT.
Now let us consider the classical (excited) state for $\phi$ where the three-point correlator is given by Equation~(\ref{eq:classical_IO}). When we compute the reduced density matrix with the detectors again near the center of the sphere, we can use Equation~(\ref{eq:classical_position}) to find
\begin{equation}
c^{\rm (classical)}_{{\vec x},\{ {\vec y}_1, {\vec y}_2\}} \approx \lambda^3 \frac{\mu}{64 \pi^4} \frac{1}{|{\vec x}-{\vec y}_1||{\vec y}_1-{\vec y}_2|}\left(- \log \left( \frac{\theta_{\rm min}^2}{2} \right) \, f(\hat x\cdot \hat y_{12})\right) \ ,
\end{equation}
where $\vec y_{12} ={\vec y}_1 -{\vec y}_2$, and $\hat x$ and $\hat y_{12}$ are the unit vectors associated with ${\vec x}$ and ${\vec y}_{12}$. We again take the limit $y_1, y_2 \to \infty$ but instead find
\begin{equation}
c^{\rm (classical)}_{{\vec x},\{ {\vec y}_1, {\vec y}_2\}}\propto \lambda^3 \mu y_1^{-1} |{\vec y}_1-{\vec y}_2|^{-1} \gg c_{{\vec x},{\vec y}_1} \propto \lambda^2 y_1^{-2} \ .
\end{equation}
The range of entanglement has clearly been increased by the interaction. Furthermore, if we were to repeat the calculation in Equation~(\ref{eq:N_B}) with a large number of detectors in $B$, the integrals would no longer converge. This long range entanglement between detectors at large distances in the classical case, which is absent in the quantum vacuum, is suggestive of the area versus volume law behavior known to distinguish the two cases.
This difference between classical and quantum correlators also manifests itself in the relative entropy~\cite{Vedral:2002zz}, a measure of distance between two states. Since we are interested in distinguishing the quantum vacuum from the classical (excited) state, consider the relative entropy for detector A, $S(\rho_{A}|\rho^{\prime}_{A}) = \text{tr}\rho_{A}\log\rho_{A}-\text{tr}\rho_{A}\log \rho^{\prime}_{A}$, where $\rho_A$ is the density matrix for detector A in the quantum vacuum and $\rho^{\prime}_A$ is the same density matrix in a classical state. Here we can write $\rho^{\prime}_A=\rho_A+\delta \rho$, where
\begin{equation}
\delta \rho =\left(\begin{array}{cc}
(c_{{\vec x},\{{\vec y}_1,{\vec y}_2\}}^{\rm (classical)})^2 -(c_{{\vec x},\{{\vec y}_1,{\vec y}_2\}}^{\rm (quantum)})^2 & 0 \\
0 &-(c_{{\vec x},\{{\vec y}_1,{\vec y}_2\}}^{\rm (classical)})^2 +(c_{{\vec x},\{{\vec y}_1,{\vec y}_2\}}^{\rm (quantum)})^2
\end{array}\right) \ .
\end{equation}
Since both $\rho_A$ and $\delta \rho$ are diagonal, $[\rho_A,\delta \rho]=0$, we can expand the relative entropy
\begin{align}
S(\rho_{A}|\rho^{\prime}_{A}) &= \text{tr}\rho_{A}\log\rho_{A}-\text{tr}\,\rho_{A}\left(\log \rho_{A}+\rho_{A}^{-1}\delta\rho-\frac{1}{2}\rho_{A}^{-2}\delta\rho^2\right)\\
&= \frac{1}{2}\text{tr}\rho_{A}^{-1}\delta\rho^2 \ .
\end{align}
As a result, using ${\rm tr}\,\delta\rho = 0$, the leading order contribution to the relative entropy is
\begin{equation}
S(\rho_{A}|\rho^{\prime}_{A})=\frac{c^{-2}_{{\vec x},{\vec y}}}{4}((c_{{\vec x},\{{\vec y}_1,{\vec y}_2\}}^{\rm (classical)})^2 -(c_{{\vec x},\{{\vec y}_1,{\vec y}_2\}}^{\rm (quantum)})^2)^2 \ .
\end{equation}
This provides further confirmation that the difference between the correlators manifests itself as a robust difference in the states of the detectors used to observe them.
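The quadratic approximation to the relative entropy can also be checked directly. The following is an illustrative numerical sketch (not from the paper); `p11` and `eps` are made-up stand-ins for $c_{{\vec x},{\vec y}}^2$ and the difference of the squared three-point coefficients.

```python
import numpy as np

# Illustrative check: for diagonal rho_A and rho_A' = rho_A + delta_rho,
# the exact relative entropy should match (1/2) tr rho_A^{-1} delta_rho^2
# to leading order in delta_rho.
p11 = 0.1                        # stand-in for rho_11 = c_{x,y}^2
eps = 1e-4                       # stand-in for the (classical - quantum) difference
rho = np.diag([1 - p11, p11])
drho = np.diag([eps, -eps])      # traceless, diagonal perturbation
rho_p = rho + drho

S_exact = sum(p * (np.log(p) - np.log(q))
              for p, q in zip(np.diag(rho), np.diag(rho_p)))
S_approx = 0.5 * np.trace(np.linalg.inv(rho) @ drho @ drho)
print(S_exact, S_approx)
```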
\subsection{Momentum Space Detectors and Entanglement}
Although entanglement is more often discussed in position space, momentum space entanglement~\cite{2010momentum,Balasubramanian:2011wt,Hsu:2012gk,Lello:2013bva,Peschanski:2016hgk,Grignani:2016igg} offers another useful window into the nature of cosmological correlators. Cosmological correlators are easier to calculate and represent in momentum space. Likewise, entanglement in momentum space is easier to calculate because the vacuum of the free theory is a tensor product in the momentum basis. As a result, entanglement between states of different momenta is uniquely a property of the interacting theory and thus is a window into the non-Gaussian nature of the correlations.
A natural approach that combines these benefits is to work with UdW detectors that register particles in specific momentum eigenstates, rather than at specific positions. We can achieve this by modifying the detector response so that
\begin{equation}
\psi_i({\vec x}) = e^{-i {\vec k}_i \cdot {\vec x}} \ ,
\end{equation}
and therefore
\begin{equation}
\Phi_{i} = \lambda \phi({\vec k}_i,t) \ .
\end{equation}
By measuring momentum eigenstates at a fixed time, our detectors are responding to $\phi({\vec k},t)$ which is precisely what appears in our cosmological correlators.
Now we can again split our detectors into two groups, $A$ and $B$, where momenta $\vec k \in A$ and $\vec p \in B$. We will define group $A$ as the detectors with momenta below a fixed cutoff, $k_i \leq \Lambda$, and the $B$ detectors have momenta above the cutoff, $p_j > \Lambda$. The state of the detectors is again represented in analogy with Equation~(\ref{eq:detector_state_position}), now defining
\begin{align}
|0,0\rangle&\equiv |0_{\{ i \}} \rangle \\
|\{ \vec k_n \} ,0\rangle &\equiv |1_{{\vec k}_1} ..1_{{\vec k}_n}, 0_{\{i, \hat k_{1},..,\hat k_n \}} \rangle \\
|0,\{ \vec p_N \} \rangle &\equiv |1_{{\vec p}_1} ..1_{{\vec p}_N}, 0_{\{i, \hat p_{1},..,\hat p_N\}} \rangle \\
|\{\vec k_n \},\{ \vec p_N \}\rangle &\equiv |1_{{\vec k}_1} ..1_{{\vec k}_n},1_{{\vec p}_1} ..1_{{\vec p}_N}, 0_{\{i, \hat k_{1},..,\hat k_{n},\hat p_1,..,\hat p_{N} \}} \rangle \ .
\end{align}
As a result, the state of our momentum detectors is given by
\begin{equation}\label{eq:detector_state_momentum}
\begin{aligned}
\left|\Psi_{\rm UdW}\right\rangle=&|0,0\rangle+\sum_{\{{\vec k}_n\} \neq \varnothing} a_{\{{\vec k}_n\}}|\{ {\vec k}_n \}, 0\rangle+\sum_{\{{\vec p}_N\} \neq \varnothing} b_{\{{\vec p}_N\}}|0, \{{\vec p}_N\} \rangle\\
&+\sum_{\{{\vec k}_n\},\{{\vec p}_N\} \neq \varnothing} c_{\{{\vec k}_n\}, \{{\vec p}_N\}}|\{{\vec k}_n\}, \{{\vec p}_N\}\rangle \ .
\end{aligned}
\end{equation}
In the free theory, the Hilbert space of $\phi$ can be decomposed into a tensor product over momentum eigenmodes via the Fock space, ${\cal H} = \otimes_{\vec p} {\cal H}_{\vec p}$. Since each detector is tied only to a single momentum scale, the combined state of the detectors in $A$ and $B$ with the fields is similarly described by a tensor product in momentum space. More dramatically, when we project onto the vacuum of $\phi$, which is a zero momentum state, we will find the detector is never excited unless we have two detectors with equal and opposite momentum (i.e. ${\vec k}$ and $-{\vec k}$, or ${\vec p}$ and $-{\vec p}$). This reflects the fact that, in the free theory, measuring a particle at momentum ${\vec k}$ or ${\vec p}$ requires the production of an (anti)particle with momentum $-{\vec k}$ or $-{\vec p}$. Without a second detector, this is a real particle and therefore the state is not the vacuum. Furthermore, since ${\vec k}$ and $-{\vec k}$ are both in $A$ and ${\vec p}$ and $-{\vec p}$ are both in $B$, we cannot generate entanglement between $A$ and $B$ in the free theory. We therefore need interactions in order to generate entanglement in momentum space.
We can easily determine the detector state in the interacting theory for a single detector in $A$ with momentum ${\vec k}$ and two detectors in $B$ with momenta ${\vec p}_1$ and ${\vec p}_2$. As in the free theory, the quadratic terms vanish, $c_{{\vec k},{\vec p}_1} = c_{{\vec k}, {\vec p}_2} =c_{{\vec p}_2,{\vec p}_1} = 0$, such that the leading non-trivial contribution to the state is
\begin{equation}\label{eq:c12_momentum}
c_{{\vec k},\{{\vec p}_1,{\vec p}_2\}} = \lambda^3 \langle \phi({\vec k},t) \phi({\vec p}_1,t) \phi({\vec p}_2,t) \rangle \ .
\end{equation}
We can now trace over the detectors in $B$ to arrive at a diagonal reduced density matrix for $A$ with $\rho^{(A)}_{00} = 1-\rho^{(A)}_{11}$ and
\begin{equation}
\rho^{(A)}_{11} = |c_{{\vec k},\{{\vec p}_1,{\vec p}_2\}} |^2 \ .
\end{equation}
Again, the entanglement entropy of $A$ is given in terms of this single coefficient
\begin{equation}
S^A_{\rm ent}= - {\rm Tr} \rho^{(A)} \log \rho^{(A)} = - |c_{{\vec k},\{{\vec p}_1,{\vec p}_2\}} |^2 (\log |c_{{\vec k},\{{\vec p}_1,{\vec p}_2\}} |^2 - 1) +{\cal O}(\lambda^8) \ .
\end{equation}
From Equation~(\ref{eq:c12_momentum}), we see that the entanglement entropy is determined by the in-in correlators.
Now we want to compare the nature of the momentum space entanglement for our two types of statistics. These are just our in-in correlators in momentum space, so in the quantum theory we have
\begin{equation}
c^{\rm (quantum)}_{{\vec k},\{{\vec p}_1,{\vec p}_2\}} =-\lambda^3 \frac{\mu}{4 k p_1 p_2 (k+p_1+p_2) } \ ,
\end{equation}
and in the classical theory
\begin{equation}
c^{\rm (classical)}_{{\vec k},\{{\vec p}_1,{\vec p}_2\}} =-\lambda^3 \frac{\mu}{16 k p_1 p_2} \left(\frac{3}{k+p_1+p_2 } +\frac{1}{k-p_1-p_2}+\frac{1}{p_1-k-p_2}+\frac{1}{p_2-k-p_1} \right) \ .
\end{equation}
The presence of poles at physical momenta means that $S_{\rm ent}^{(A),{\rm classical}} \gg S_{\rm ent}^{(A),{\rm quantum}}$. This again reflects the underlying fact that creating a state with classical fluctuations introduces much stronger correlations between scales than occur in the vacuum.
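The size of this effect is easy to see numerically. The sketch below (illustrative only; the couplings and momenta are made-up values) evaluates the two coefficients above near the partial-energy pole $k \to p_1 + p_2$, where only the classical expression is singular:

```python
# Illustrative comparison (made-up values): near a pole at physical momenta,
# the classical coefficient dwarfs the quantum one.
lam = mu = 1.0
k, p1, p2 = 1.0, 0.5, 0.5001     # k - p1 - p2 = -1e-4, near the pole

c_quantum = -lam**3 * mu / (4 * k * p1 * p2 * (k + p1 + p2))
c_classical = -lam**3 * mu / (16 * k * p1 * p2) * (
    3 / (k + p1 + p2)
    + 1 / (k - p1 - p2)
    + 1 / (p1 - k - p2)
    + 1 / (p2 - k - p1)
)
print(abs(c_classical) / abs(c_quantum))  # pole-enhanced, >> 1
```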
We also note that, in this case, the structure of entanglement of the UdW detectors in momentum space is proportional to the entanglement of $\phi$ in momentum space~\cite{Balasubramanian:2011wt}. The origin of this relationship is that entanglement in momentum space is trivial in the free theory. Therefore, the perturbative construction of the detector states is consistent with the perturbative nature of the entanglement in the vacuum. In position space, the reduced density matrix is non-trivial in the free theory and is less naturally organized as a perturbative expansion in correlators (although see e.g.~\cite{Iso:2021vrk,Iso:2021rop,Iso:2021dlj} for developments).
\section{Conclusions}\label{sec:conclusions}
The nature of the vacuum in quantum field theory is unlike any classical statistical state. The quantum vacuum is the lowest-energy state and therefore dictates that fluctuations only have positive energies. This fact, built into the structure of perturbative QFT, manifests itself in the structure of correlation functions and gives rise to the LSZ reduction formula relating correlators and S-matrix elements~\cite{Lehmann:1954rq}. Classical (e.g.~thermal) fluctuations are always around a positive energy state and thus can both increase and decrease the energy.
In cosmology, these kinds of vacuum fluctuations are thought to be responsible for structure in the universe. Yet, it remains a viable possibility that structure arose from thermal fluctuations~\cite{Berera:1995ie,Berera:1998px,Green:2009ds,LopezNacir:2011kk,LopezNacir:2012rm,Turiaci:2013dka}. One cannot perform experiments on the state of the universe to isolate the quantum nature of these fluctuations and resolve this question~\cite{Maldacena:2015bha}. Instead, we must rely on statistical properties of the initial conditions to infer how structure was created. Concretely, it was proposed in~\cite{Green:2020whw} that the difference in the analytic structure of correlators for quantum and classical fluctuations is both completely general and observable.
In this paper, we demonstrated that this proposed cosmological Bell-like test has a flat space analogue. The same analytic structure seen in cosmological correlators appears in both in-in and in-out correlators in flat space, and is responsible for the LSZ reduction for quantum vacuum correlators. For classical correlators, the additional poles seen in inflationary correlators are a direct consequence of scattering processes involving particles that are necessarily present in the initial state.
The meaning of the in-in correlator is less clear in flat space than it is in cosmology. In cosmology, we interpret these correlators as a signal of cosmological particle production. In flat space, the interacting (quantum) vacuum is a well-defined energy eigenstate containing no particles and, yet, has non-zero in-in correlations. Instead, we show the flat space in-in correlator contributes to the amplitude for exciting a localized (Unruh-de Witt) particle detector. The particle production arises in flat space because of the uncertainty principle: a particle detector that is sufficiently localized in space and time to make such a measurement breaks translations and excites particles from the vacuum. We additionally show that one can use the entanglement of these detectors as a probe of the entanglement of the underlying field.
Much is known about the unique properties of quantum mechanical systems in flat space. Naturally one would hope these insights could be applied to cosmology, particularly in light of some analogous structure of the correlators. The central challenge is that cosmological observables are classical, for all practical purposes; one cannot simply expose the quantum nature of cosmology through the direct measurement of non-commuting observables. Yet, the question of what kinds of initial conditions can only be prepared in a quantum universe is closely related to the problem of quantum state preparation on a quantum computer. One might hope that intuition from quantum computing could shed further light on cosmology, or vice versa~\cite{Swingle:2014qpa,Swingle:2016foj}.
\paragraph{Acknowledgements}
We are grateful to Daniel Baumann, Jonathan Braden, Dick Bond, Tim Cohen, John McGreevy, Mehrdad Mirbabayi, Eva Silverstein, and Rafael Porto for helpful discussions. The authors were supported by the US~Department of Energy under grant no.~\mbox{DE-SC0009919}. |
2009.10465 | \section{Introduction}
Multi-class classification is one of the fundamental problems in the machine learning and data mining communities, where one trains a model with labeled data of different classes for classification purposes. Multi-class classification has diverse real-world applications, from computer vision tasks such as object recognition~\cite{felzenszwalb2005pictorial,lowe1999object,riesenhuber1999hierarchical} and face verification~\cite{kittler2003face}, to natural language processing tasks like sentiment classification~\cite{glorot2011domain,pang2002thumbs}.
To handle multi-class classification problems, existing approaches can be broadly divided into two groups. One group focuses on solving the multi-class problem directly by extending a corresponding binary classification algorithm. These approaches include decision tree-based methods~\cite{deng2011fast}, multi-class linear discriminant analysis~\cite{torkkola2001linear}, multi-layer perceptrons~\cite{freund1999large}, multi-class support vector machines (SVMs)~\cite{chang2011libsvm}, etc. Another research direction focuses on decomposing a multi-class problem into multiple binary sub-problems, so that one can reuse well-studied binary classification algorithms for their simplicity and efficiency. Most of these methods can be reinterpreted in the framework of error correcting output codes (ECOC)~\cite{dietterich1991error,dietterich1994solving}. For example, Allwein \emph{et al.}\xspace~\cite{allwein2000reducing} show that one-versus-one (OVO) and one-versus-all (OVA) can be incorporated into the framework of ECOC, where all the classes are reassigned either binary codes $\{-1,1\}$ or ternary codes $\{-1,0,1\}$ for each base learner ($1/-1$ represents the positive/negative class, $0$ represents a non-considered class). Zhou \emph{et al.}\xspace~\cite{zhou2019n} further extend traditional ECOCs into $N$-ary ECOC by introducing $N$ meta-classes rather than binary classification for each base learner. The final result is determined by the ensemble of a series of base learners. The biggest advantage of ECOC methods is their ease of implementation and parallelization.
Most traditional ECOC methods are based on pre-defined hand-crafted features and focus on how to ensemble the results of base learners on these features. Recently, deep learning methods have significantly advanced multi-class classification performance by learning features in an end-to-end fashion. For example, a single AlexNet~\cite{alex2012imagenet} outperforms the second place at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) by more than $10\%$. To further improve performance, Goodfellow \emph{et al.}\xspace~\cite{goodfellow2016deep} demonstrate that a simple ensemble of seven AlexNet models with different random initializations can significantly reduce the error rate from $18.2\%$ to $15.3\%$. In most high-profile competitions, e.g. ImageNet\footnote{ImageNet: \url{http://www.image-net.org/}}~\cite{deng2009imagenet} or Kaggle\footnote{Kaggle: \url{https://www.kaggle.com/}}, ensemble techniques often appear in the winning solutions. Traditional ensemble methods usually assume that the base learners for binary classification are inexpensive to train, such as SVMs and decision trees. Unfortunately, this assumption no longer holds for deep learning algorithms. For example, AlexNet, consisting of more than 60 million parameters~\cite{alex2012imagenet}, takes between five and six days to train on two GTX 580 3GB GPUs. The expensive training procedure therefore hinders the large-scale use of ensembles of deep neural networks.
In this paper, we focus on addressing the ensemble of deep neural networks in the framework of ECOC. The biggest reason to choose ECOC rather than other ensemble techniques such as Boosting~\cite{schapire1990strength} is that ECOC is easy to parallelize due to the independence of its base learners. In contrast, boosting trains a number of models sequentially, with each model continuously compensating for the mistakes made by earlier models, so that the base models in boosting are highly dependent on each other. ECOC thus exhibits a large advantage in large-scale real-world applications, since all the base learners can be trained independently and simultaneously.
Specifically, we choose $N$-ary ECOC, an extension of ECOC, which shows significant improvement over OVA, OVO, and traditional ECOCs~\cite{zhou2019n}. Few existing works have investigated the influence of deep learning on ECOCs or $N$-ary ECOC. In this paper, we marry $N$-ary ECOC with deep learning to investigate this influence. In the sequel, we term this problem Deep $N$-ary ECOC. The main contributions of this paper are as follows:
\begin{itemize}
\item We investigate a new problem named \textit{Deep $N$-ary ECOC}, where we mainly discuss how to effectively and efficiently leverage the advantages of deep learning models in the framework of ECOC.
\item To facilitate the training procedure, we further propose three different parameter sharing strategies for the Deep $N$-ary ECOC framework, \emph{i.e.,}\xspace full parameter sharing, partial parameter sharing, and no parameter sharing. Specifically, the full-share model shares all the feature learning parameters except for the top classifier; the partial-share model shares part of the feature learning parameters; and in the no-share model, all the base learners are learned from scratch.
\item We explore the influence of two crucial hyper-parameters of $N$-ary ECOC, \emph{i.e.,}\xspace $N_L$ and $N$, with deep neural networks for improving the accuracy. We also give specific suggestions for choosing those two hyper-parameters.
\item We conduct extensive experiments and compare with several ensemble strategies, \emph{i.e.,}\xspace an ensemble of random initialization (ERI), ECOC and $N$-ary ECOC, on both image and text classification tasks to analyze the advantages and disadvantages of each ensemble strategy.
\end{itemize}
The rest of this paper is organized as follows. Section~\ref{sec:related_work} reviews related work. Section~\ref{sec:deep_nary_ecoc} presents Deep $N$-ary ECOC. Finally, Section~\ref{sec:experiment} discusses our empirical studies and Section~\ref{sec:conclude} concludes this work.
\section{Related Work}\label{sec:related_work}
Our proposed deep $N$-ary ECOC is highly related to the following topics, including
ECOCs, ensemble learning, and deep neural networks.
\subsection{ECOCs}
Many ECOC approaches~\cite{allwein2000reducing,bagheri2013subspace,escalera2008decoding,pujol2006discriminant} have been proposed in recent years to design a good coding matrix. Most of them fall into the following two categories. The first is data-independent coding, such as OVO, OVA, and ECOCs~\cite{escalera2008decoding}. Their coding matrix design is optimized for neither the training dataset nor the instance labels, so all the base learners can be learned independently. For example, the sparse ECOC coding approach constructs the coding matrix $M\in\{-1, 0, 1\}^{N_C\times N_L}$, where $N_C$ is the number of classes, $N_L$ is the code length, and each element is randomly chosen as $-1$, $1$, or $0$~\cite{escalera2008decoding}. In ECOCs, the classes corresponding to $1$ and $-1$ are considered positive and negative classes, respectively, while classes assigned $0$ are not considered in the learning process. More recently, Zhou \emph{et al.}\xspace~\cite{zhou2019n} extend the existing ECOCs into $N$-ary ECOC to enable the construction of $N$ meta-classes. Both theoretical and empirical findings validate the superiority of $N$-ary ECOC over traditional ECOCs.
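The random sparse coding described above admits a compact sketch. The following is a minimal illustration in plain Python; the function name and the column-validity check (requiring each column to contain at least one $1$ and one $-1$, so that no base learner is degenerate) are our own choices, not taken from the cited implementations:

```python
import random

def sparse_ecoc_matrix(n_classes, code_length, seed=0):
    """Randomly sample a sparse ternary ECOC matrix M in {-1,0,1}^{N_C x N_L}.

    Each column is resampled until it contains at least one +1 and one -1,
    so every base learner sees both a positive and a negative meta-class.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(code_length):
        while True:
            col = [rng.choice((-1, 0, 1)) for _ in range(n_classes)]
            if 1 in col and -1 in col:
                break
        columns.append(col)
    # Transpose so that rows index classes and columns index base learners.
    return [list(row) for row in zip(*columns)]
```

Because each column is drawn independently of the data, the $N_L$ base learners can be trained in parallel, which is the key practical advantage of data-independent coding.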
Another direction is data-dependent ECOCs, where the data are taken into account when learning the coding matrix, such as discriminant ECOC (D-ECOC)~\cite{pujol2006discriminant}, ECOC-ONE~\cite{radeva2006ecoc}, subspace ECOC~\cite{bagheri2013subspace}, Adaptive ECOC~\cite{zhong2013adaptive}, etc. In this way, different base learners interact with each other during the training phase, which is similar to Boosting~\cite{schapire1990strength} methods such as AdaBoost~\cite{freund1997decision}. In Boosting methods, a series of models are trained sequentially, with later models correcting the mistakes committed by previous models. Compared to data-independent ECOCs, these methods require sophisticated algorithm design and are difficult to parallelize.
To the best of our knowledge, there is little research investigating the combination of ECOCs and deep learning. In this paper, we take a step further and analyze the performance of combining our previous work, $N$-ary ECOC, with deep learning.
\subsection{Deep Ensemble Learning}
Many studies show that deep neural network models are nonlinear and have a high variance, which can be frustrating when preparing a final model for making predictions~\cite{goodfellow2016deep}. Deep ensemble learning appears to be one of the solutions: it combines the predictions from multiple neural network models to reduce the variance of predictions and reduce generalization error. Recent studies integrate base learners of deep neural networks with ensemble learning in three major ways. The first is to vary the training data, including re-sampling~\cite{efron1982jackknife} and bootstrap aggregation~\cite{breiman1996bagging}, where the choice of data varies across the base models in the ensemble. The second is to vary the models themselves, including different random initializations, random selections of mini-batches, differences in hyper-parameters, etc.~\cite{goodfellow2016deep}. The third is to vary the combination rule, where one varies how the outcomes of the ensemble members are combined; the most common methods are the model averaging ensemble and the weighted average ensemble. Different from the aforementioned deep ensemble learning methods, deep $N$-ary ECOC serves as a complementary piece to existing methods.
\subsection{Deep Neural Networks}
In recent years, many different deep neural networks have been proposed for different applications. For computer vision tasks, the dominant models are Convolutional Neural Networks (CNNs) and their follow-up works such as AlexNet~\cite{alex2012imagenet}, VGGs~\cite{simonyan2014very}, ResNet~\cite{he2016deep}, and DenseNet~\cite{huang2017densely}. For natural language processing tasks, the most popular networks are Recurrent Neural Networks (RNNs) and their many variants, such as Long Short-Term Memory (LSTM)~\cite{hochreiter1997long} and the Gated Recurrent Unit (GRU)~\cite{cho2014learning}. In the experiments, we validate deep $N$-ary ECOC with both CNN and LSTM architectures, for the vision and text datasets, respectively.
\section{Deep $N$-ary ECOC}\label{sec:deep_nary_ecoc}
In this section, we first introduce the concept of $N$-ary ECOCs. To facilitate the training procedure, we further propose three different parameter sharing architectures, namely full, partial and no sharing.
\begin{figure}[t]
\centering
\subfigure[\small ECOC]
{\label{ecoc_example} \includegraphics[width=0.45\textwidth]{figures/ecoc_table}}
\subfigure[\small $N$-ary ECOC with $N=4$]
{\label{nary_ecoc_example} \includegraphics[width=0.45\textwidth]{figures/nary_ecoc_table}}
\caption{\small Example of ECOC and $N$-ary coding matrix.}
\label{fig:ecoc_nary_ecoc_example}
\end{figure}
\subsection{$N$-ary Ensemble for Multi-class Classification}
Error correcting output codes (ECOC) constructs an ensemble of binary base classifiers by randomly and independently assigning positive/negative pseudo labels (\emph{i.e.,}\xspace $1/-1$ in the coding matrix) for each base task. The results of all the base learners are combined to make a prediction. ECOC consists of two main steps: 1) encoding and 2) decoding. In encoding, we create a coding matrix that encodes each class into a unique code that is as different as possible from the codes of the remaining classes. One example of the coding matrix is illustrated in Fig.~\ref{ecoc_example}. A row of the coding matrix represents the code of a class, while a column represents the binary classes to be considered when learning a base classifier. In decoding, ECOC first computes the prediction vector, which is the concatenation of the results of all the base tasks. The final label is the class whose row is “closest” to this prediction vector in the coding matrix $M\in\{1,2,\dots,N\}^{N_C\times N_L}$, where $N_C$ denotes the number of classes and $N_L$ denotes the number of base learners. As proved in many research works~\cite{dietterich1994solving,zhou2019n}, the capability of error correction relies on the minimum distance, $\Delta_{\min}(M)$, between any distinct pair of rows of the coding matrix $M$. In this way, the trained base classifiers can be sufficiently differentiated from each other.
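The decoding step above can be sketched as follows. This is an illustrative snippet using integer meta-class codes as in $M\in\{1,\dots,N\}^{N_C\times N_L}$; the function name is ours:

```python
def hamming_decode(prediction, coding_matrix):
    """Assign the class whose code row is closest (minimum Hamming
    distance) to the concatenated base-learner predictions."""
    def hamming(row):
        return sum(int(a != b) for a, b in zip(prediction, row))
    distances = [hamming(row) for row in coding_matrix]
    return distances.index(min(distances))
```

A larger minimum row distance $\Delta_{\min}(M)$ means more base-learner mistakes can be absorbed before the closest row (and hence the predicted class) changes, which is exactly the error-correcting capability discussed above.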
To achieve this goal, ECOC has been extended to a new framework named $N$-ary ECOC~\cite{zhou2019n}, where the original classes are decomposed into $N$ meta-classes ($3\leq N \leq N_C$). Fig.~\ref{nary_ecoc_example} shows an example of an $N$-ary ECOC coding matrix. Zhou \emph{et al.}\xspace~\cite{zhou2019n} showed both empirically and theoretically that $N$-ary ECOC is able to achieve larger row separation and lower column correlation. It is interesting to note that $N$-ary ECOC is a more general framework than ECOC, since traditional coding schemes can be treated as special cases of $N$-ary ECOC. For example, when $N=2$, $N$-ary ECOC corresponds to the binary coding scheme; when $N=3$, it corresponds to the ternary coding scheme. Furthermore, recent works~\cite{goodfellow2016deep} showed that an Ensemble of models with different Random Initializations (ERI) is able to improve multi-class classification performance. This deep ensemble learning strategy can also be viewed as a special case of the $N$-ary ECOC framework if we keep the original label assignment, namely $N=N_C$.
On the other hand, most existing work on ECOCs, including our previous work on $N$-ary ECOC, is constrained to classifier training with pre-defined features. With the significant advances of deep learning, the performance of various machine learning tasks has been greatly improved, yet few works discuss how to extend ECOC to the deep learning scenario. In this work, we specifically study this open problem in the framework of $N$-ary ECOC, termed \textit{Deep $N$-ary ECOC}, and propose several approaches to address it. In this paper, we mainly investigate the following three questions:
\begin{enumerate}
\item Must we train all the deep base learners independently from scratch in every situation?
\item Does the $N$-ary ECOC framework still retain its advantages over other data-independent ensemble approaches when used with deep neural networks?
\item What new suggestions can we offer on the choice of the meta-class number $N$ and the number of base learners $N_L$?
\end{enumerate}
For the first question, we propose three different parameter sharing architectures, described in more detail in the next section. For the remaining two questions, we defer the investigation to the experiment section.
\subsection{Efficient Implementation for Deep $N$-ary ECOCs}
Different from traditional ECOC with pre-defined features, deep ECOCs must handle deep feature learning as well as classifier construction during training. This increases the difficulty of deploying ECOCs in real-world scenarios, since even training a single deep neural network is expensive. Fortunately, thanks to the nature of ECOCs, all the base deep neural networks can be trained simultaneously. Furthermore, in this paper, we investigate a more efficient realization and propose three different parameter sharing strategies, namely no share, partial share, and full share, as depicted in Fig.~\ref{fig:param_sharing_architecture}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/parameter_sharing_architecture}
\caption{\small An example of three different parameters sharing strategies.}
\label{fig:param_sharing_architecture}
\end{figure}
Typically, we take the model for the CIFAR datasets, termed CIFAR-CNNs (explained in detail later), as an example to illustrate the three strategies. In the no parameter sharing strategy, as shown in Fig.~\ref{fig:param_sharing_architecture}(a), we train the $N_L$ base learners independently: the feature encoder layers of each base learner are trained directly on the inputs and do not interact with other learners. The partial parameter sharing strategy contains shared and task-specific layers; as in Fig.~\ref{fig:param_sharing_architecture}(b), the first three feature encoder layers are shared by all the base learners, while the top encoder layer is task-specific and is optimized only by the corresponding meta-class objective. The full parameter sharing strategy simply sets all the feature encoder layers to be shared by all the base learners, except the top classifiers (see Fig.~\ref{fig:param_sharing_architecture}(c)). In all sharing strategies, the top-layer classifiers are trained independently with their meta-class objectives. Note that all the base learners of the no-sharing strategy are trained from scratch, while the shared layers of the partial and full sharing strategies are initialized from a pre-trained single model and fine-tuned during training to accelerate convergence. Obviously, the no parameter sharing strategy contains the most parameters ($N_n$), followed by the partial sharing strategy ($N_p$), and the full sharing strategy ($N_f$) contains the fewest, \emph{i.e.,}\xspace $N_n>N_p>N_f$.
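The parameter-count ordering $N_n>N_p>N_f$ can be illustrated with a toy count. The per-layer sizes below are hypothetical, chosen only to make the arithmetic concrete:

```python
def count_parameters(n_learners, layer_sizes, n_shared_layers, head_size):
    """Total trainable parameters for an ensemble where the first
    `n_shared_layers` feature layers are shared across all base learners.

    layer_sizes: per-layer parameter counts of one feature encoder.
    head_size:   parameters of one task-specific top classifier.
    """
    shared = sum(layer_sizes[:n_shared_layers])
    per_learner = sum(layer_sizes[n_shared_layers:]) + head_size
    return shared + n_learners * per_learner

# Toy encoder with four feature layers and a classifier head
# (hypothetical sizes, for illustration only).
layers, head, n_l = [100, 200, 400, 800], 50, 10
n_none    = count_parameters(n_l, layers, 0, head)  # no sharing
n_partial = count_parameters(n_l, layers, 3, head)  # share first 3 layers
n_full    = count_parameters(n_l, layers, 4, head)  # share all 4 layers
assert n_none > n_partial > n_full
```

With these toy sizes the counts are 15500, 9200, and 2000, respectively: sharing more of the encoder shrinks the ensemble roughly by a factor of the number of base learners.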
\section{Experiments}\label{sec:experiment}
\subsection{Datasets}
We conduct experiments on 4 image datasets and 2 text datasets. The image datasets are MNIST~\cite{lecun1998gradient}, CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, and FLOWER-102~\cite{nilsback2008automated}, which are widely used image classification datasets in the computer vision community. The text datasets are the Text REtrieval Conference (TREC)~\cite{li2002learning} dataset and the Stanford Sentiment Treebank (SST)~\cite{socher2013recursive} dataset. TREC is a question-answering dataset that involves classifying question sentences into 6 question types, e.g., whether the question is about a person, a location, numeric information, etc. SST is a sentence-level sentiment analysis dataset with $5$ classes that range from most negative to most positive. The statistics of these datasets are given in Table~\ref{tab:stat_datasets}. Note that we do not use $K$-fold cross-validation, but simply use a train/validation/test split. If a dataset does not contain a development part, we randomly hold out $10\%$ of the training samples as the development set.
\begin{table}[t]
\scriptsize
\caption{\small Statistics of Image and Text Datasets.}
\label{tab:stat_datasets}
\centering
\begin{tabular}{l c c c c c}
\toprule
\multicolumn{6}{c}{Image Dataset} \\
\midrule
Dataset & Image Size & \# Train Samples & \# Dev Samples & \# Test Samples & \# Classes ($N_C$) \\
\midrule
MNIST & $28\times 28$ & $60,000$ & N/A & $10,000$ & $10$ \\
CIFAR-10 & $32\times 32$ & $50,000$ & N/A & $10,000$ & $10$ \\
CIFAR-100 & $32\times 32$ & $50,000$ & N/A & $10,000$ & $100$ \\
FLOWER-102 & $256\times 256$ & $6,552$ & $818$ & $819$ & $102$ \\
\midrule
\multicolumn{6}{c}{Text Dataset} \\
\midrule
Dataset & Avg. Sent. Len. & \# Train & \# Dev & \# Test & \# Classes ($N_C$) \\
\midrule
TREC & $10$ & $5,500$ & N/A & $500$ & $6$ \\
SST & $18$ & $11,855$ & N/A & $2,210$ & $5$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Experimental Setup}
\subsubsection{Deep Neural Networks.} We employ different neural network-based models for different datasets. Specifically, we use LeNet~\cite{lecun1998gradient} for the MNIST dataset, and the FLOWER-102 dataset is trained with AlexNet~\cite{alex2012imagenet}. Note that, since it is difficult for the AlexNet model to learn representative features directly from the small training set of FLOWER-102, the AlexNet is not trained from scratch but obtained by fine-tuning a pre-trained AlexNet model\footnote{Pre-trained AlexNet: \url{http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/}}, which was trained on the ILSVRC dataset. For the CIFAR-10/100 datasets, we build a model with eight convolutional layers and two fully-connected layers, named \textit{CIFAR-CNNs}, as shown in Fig.~\ref{fig:general_architecture}(a). The eight convolutional layers are divided into four groups that share the same structure but differ in the number of filters and kernel widths. Each group is structured as follows: a convolutional layer followed by batch normalization~\cite{ioffe2015batch} and dropout~\cite{srivastava2014dropout}, then another convolutional layer with batch normalization and max-pooling. The ELU~\cite{clevert2016fast} activation function is used for each convolutional layer.
To train on the TREC and SST text datasets, we construct a three-layer bidirectional LSTM model with a character-level CNN~\cite{kim2016character} and a self-attention~\cite{bahdanau2015neural} mechanism, termed \textit{Bi-LSTMs}, as shown in Fig.~\ref{fig:general_architecture}(b). The character-level CNN learns character features that represent a word from its character sequence, which enriches the word features, especially for rare and out-of-vocabulary words, and boosts performance by capturing morphological and semantic information. The self-attention mechanism encodes the contextual word-level feature sequence produced by the bidirectional LSTM into a single vector by weighting the importance of each word feature.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/general_architecture}
\caption{\small The general architecture of \textit{CIFAR-CNNs} and \textit{Bi-LSTMs} models.}
\label{fig:general_architecture}
\end{figure}
\subsubsection{Parameters Setup.} For the LeNet on the MNIST dataset, we follow the same settings as LeCun \emph{et al.}\xspace~\cite{lecun1998gradient}; RMSProp~\cite{tieleman2012lecture} is chosen as the optimizer with a learning rate of $0.001$ and a decay rate of $0.9$, and we apply the Dropout~\cite{srivastava2014dropout} strategy with a drop rate of $0.5$ at each convolutional layer and the first fully-connected layer to prevent over-fitting. For the AlexNet on the FLOWER-102 dataset, we utilize exactly the same structure, parameter settings, and optimization method as Krizhevsky \emph{et al.}\xspace~\cite{alex2012imagenet}. For the \textit{CIFAR-CNNs} model on the CIFAR-10/100 datasets, we set the number of filters of the four convolutional blocks to $32$, $64$, $128$, and $256$, respectively, with kernel sizes of $(3,3)$ and a pool size of $(2,2)$ for all blocks. The hidden size of the first fully-connected layer is $512$, while that of the second depends on the class size. To avoid over-fitting, we apply $l_2$ regularization with a weight decay rate of $0.0005$ on all weight parameters, together with the Dropout strategy: the drop rate is $0.3$, $0.4$, $0.4$, and $0.4$ for the four convolutional blocks, respectively, and $0.5$ for the first fully-connected layer. Parameter optimization is performed by the Adam optimizer~\cite{kingma2015adam} with gradient clipping at $5.0$ and a learning rate decay strategy. We set the initial learning rate to $\beta_0=0.002$ and keep it fixed for the first $5000$ training iterations; the learning rate $\beta_t$ is then updated by $\beta_t=\beta_0 / \big(1 + \rho \times \frac{t-5000}{T}\big)$, where $T$ is the decay step of $500$ and $\rho$ is the decay rate of $0.05$. Meanwhile, in order to improve performance, data augmentation is also utilized.
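The iteration-wise schedule above can be written as a small helper, using the stated constants; the function name is ours:

```python
def cifar_lr(t, beta0=0.002, rho=0.05, T=500, warm=5000):
    """Learning rate at iteration t: fixed at beta0 for the first `warm`
    iterations, then beta0 / (1 + rho * (t - warm) / T)."""
    if t <= warm:
        return beta0
    return beta0 / (1.0 + rho * (t - warm) / T)
```

For instance, at iteration $t=5500$ the rate has decayed to $0.002/1.05$, and it continues to shrink hyperbolically as training proceeds.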
For the \textit{Bi-LSTMs} model on the TREC and SST text datasets, we use the $300$-dimensional publicly available pre-trained word embeddings as the word-level feature representation, trained by the fastText\footnote{fastText: \url{https://github.com/facebookresearch/fastText}} package on \textit{Common Crawl} and \textit{Wikipedia}~\cite{bojanowski2017enriching,grave2018learning}, together with $50$-dimensional randomly initialized task-specific character embeddings. The word embeddings are fixed and the character embeddings are learned during training. For the character-level CNN encoder, we use three convolutional layers with widths $2$, $3$, and $4$, respectively, and set the filter number of each layer to $20$; the learned character features of each layer are concatenated and then transformed by a two-layer highway network~\cite{srivastava2015highway} before being concatenated with the corresponding word embeddings. The dimension of the hidden states of the LSTM layers is set to $200$. Parameter optimization is performed by the Adam optimizer~\cite{kingma2015adam} with gradient clipping at $5.0$ and a learning rate decay strategy. We set the initial learning rate to $\beta_0=0.001$; at each epoch $t$, the learning rate $\beta_t$ is updated by $\beta_t=\beta_0/(1+\rho\times t)$, where $\rho$ is the decay rate of $0.05$. To reduce overfitting, we also apply Dropout~\cite{srivastava2014dropout} at the embedding layer and at the output of each LSTM layer, with drop rates of $0.2$ and $0.3$, respectively.
\subsubsection{$N$-ary ECOC Coding Matrix Setup.} For $N$-ary ECOCs, including ECOCs (a special case of $N$-ary ECOC with $N=2$), we train the $N_L$ base learners based on the coding matrix, and use the predicted code sequence together with the generated coding matrix to make a prediction based on a distance measurement. Zhou \emph{et al.}\xspace~\cite{zhou2019n} introduced several coding matrix construction methods and distance measurements designed for general or task-specific applications. For simplicity, we utilize the random dense encoding method, which randomly splits the original $N_C$ classes into $N$ subsets while ensuring that the number of classes in each subset is approximately balanced. For decoding, we adopt the minimum Hamming distance due to its simplicity and effectiveness. In our experiments, we test different numbers of meta-classes $N$ and base learners $N_L$ for different datasets, as described in Table~\ref{tab:summ_n_n_l}. Because of limited computing resources, we do not experiment with all possible meta-class numbers for each dataset, and we trained $60$ base learners for the MNIST, FLOWER-102, TREC, and SST datasets and $100$ base learners for the CIFAR-10/100 datasets, respectively. Additionally, in order to evaluate the effect of the number of base learners on ensemble performance, we trained another $300$ classifiers for the FLOWER-102 and TREC datasets, respectively.
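The random dense encoding can be sketched as follows. In this illustrative snippet we deal shuffled classes into meta-classes round-robin, which is one simple way to satisfy the balance requirement, not necessarily the exact procedure of~\cite{zhou2019n}:

```python
import random

def nary_coding_matrix(n_classes, n_learners, n_meta, seed=0):
    """Random dense N-ary encoding: for each base learner, shuffle the
    original classes and deal them round-robin into `n_meta`
    approximately balanced meta-classes (labels 1..N)."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_learners):
        classes = list(range(n_classes))
        rng.shuffle(classes)
        column = [0] * n_classes
        for i, c in enumerate(classes):
            column[c] = i % n_meta + 1
        columns.append(column)
    # Transpose so that rows index classes and columns index base learners.
    return [list(row) for row in zip(*columns)]
```

By construction, the meta-class sizes within each column differ by at most one, matching the balance condition stated above; decoding then compares a test instance's predicted code row against the matrix rows by Hamming distance.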
\begin{table}[t]
\scriptsize
\caption{\small Summarization of tested $N$ and $N_L$ for experiments.}
\label{tab:summ_n_n_l}
\centering
\begin{threeparttable}
\begin{tabular}{l c c c}
\toprule
Dataset & \# Classes ($N_C$) & Tested \# Meta-Class ($N$) & Tested \# Base Learners* ($N_L$) \\
\midrule
MNIST & $10$ & $2,4,5,8,10$ & $60$ \\
CIFAR-10 & $10$ & $2,4,5,8,10$ & $100$ \\
CIFAR-100 & $100$ & $2,5,10,30,50,75,95,100$ & $100$ \\
FLOWER-102 & $102$ & $2,3,5,10,20,40,60,80,90,95,102$ & $60$ \\
\midrule
TREC & $6$ & $2,3,4,5,6$ & $60$ \\
SST & $5$ & $2,3,4,5$ & $60$ \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item[] \scriptsize{*It indicates the maximal number of classifiers is used for training.}
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Experimental Results}
\subsubsection{Comparison with Different Ensemble Methods.}
In this section, we compare the performance of different ensemble methods on the aforementioned image and text datasets. In the experiment, we trained a single model and ensemble models of three coding schemes for each dataset, \emph{i.e.,}\xspace Ensemble with Random Initializations (ERI), ECOC, and $N$-ary ECOC, and report their (ensemble) accuracies with standard deviations. Note that we only report the highest score over the tested meta-class numbers $N$ for $N$-ary ECOC. For the MNIST, FLOWER-102, TREC, and SST datasets, we use $60$ base learners for each scheme, and $100$ base learners for the CIFAR-10/100 datasets. The results are summarized in Table~\ref{tab:main_results}. Generally, we observe that most ensemble models show significant improvements over the single model on the given datasets with different deep neural networks.
We observe two interesting results in Table~\ref{tab:main_results}. First, comparing the single model with $N$-ary ECOC, we find that the improvement ratio of $N$-ary ECOC is inversely related to the single-model performance, \emph{i.e.,}\xspace the improvement of the $N$-ary ECOC scheme is more prominent when the performance of the single model is lower. For example, the baseline accuracies are higher on MNIST, CIFAR-10, TREC, and FLOWER-102 ($>80\%$) than on CIFAR-100 and SST (roughly $60\%$ or below); correspondingly, the improvement ratios from the single model to $N$-ary ECOC are $0.59\%$, $5.54\%$, $5.64\%$, and $5.80\%$ on the MNIST, CIFAR-10, TREC, and FLOWER-102 datasets, respectively, while they reach $13.72\%$ on CIFAR-100 and $15.15\%$ on SST.
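The quoted improvement ratios can be reproduced directly from the single-model and $N$-ary ECOC accuracies in Table~\ref{tab:main_results} (a small check; the helper name is ours):

```python
def improvement_ratio(single_acc, ensemble_acc):
    """Relative improvement (in %) of the ensemble over the single model."""
    return 100.0 * (ensemble_acc - single_acc) / single_acc

# Single-model vs. N-ary ECOC accuracies (in %) from the results table:
assert round(improvement_ratio(87.12, 91.95), 2) == 5.54   # CIFAR-10
assert round(improvement_ratio(61.50, 69.94), 2) == 13.72  # CIFAR-100
assert round(improvement_ratio(44.17, 50.86), 2) == 15.15  # SST
```

The lower-baseline datasets (CIFAR-100 and SST) indeed show improvement ratios more than twice those of the higher-baseline datasets.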
\begin{table}[t]
\scriptsize
\caption{\small Ensemble accuracies with their standard deviations.}
\label{tab:main_results}
\setlength{\tabcolsep}{5.5 pt}
\centering
\begin{threeparttable}
\begin{tabular}{l l c c c c}
\toprule
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multirow{2}{*}{Single Model} & \multicolumn{3}{c}{Ensemble Model*} \\
& & & ERI & ECOC & $N$-ary ECOC \\
\midrule
MNIST & LeNet~\cite{lecun1998gradient} & 98.98$\pm$0.07\% & 99.11$\pm$0.11\% & 99.23$\pm$0.08\% & \textbf{99.57}$\pm$0.09\% \\
CIFAR-10 & CIFAR-CNNs & 87.12$\pm$0.43\% & 90.54$\pm$0.31\% & 89.37$\pm$0.54\% & \textbf{91.95}$\pm$0.24\% \\
CIFAR-100 & CIFAR-CNNs & 61.50$\pm$0.57\% & 69.57$\pm$0.29\% & 34.26$\pm$2.42\% & \textbf{69.94}$\pm$0.32\% \\
FLOWER-102 & AlexNet~\cite{alex2012imagenet} & 83.12$\pm$0.29\% & 86.32$\pm$0.60\% & 77.05$\pm$0.73\% & \textbf{87.94}$\pm$0.28\% \\
\midrule
TREC & Bi-LSTMs & 90.50$\pm$0.12\% & 94.80$\pm$0.09\% & \textbf{95.80}$\pm$0.08\% & 95.60$\pm$0.10\% \\
SST & Bi-LSTMs & 44.17$\pm$0.92\% & 48.69$\pm$0.18\% & 48.91$\pm$0.26\% & \textbf{50.86}$\pm$0.13\% \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item[] \scriptsize{*Here $N_L$ are $60$, $100$, $100$, $60$, $60$ and $60$, respectively, for the ensemble models from top to bottom row. While $N$ are $3$, $4$, $95$, $95$, $3$, $4$, respectively, for the $N$-ary ECOC.}
\end{tablenotes}
\end{threeparttable}
\end{table}
Second, the $N$-ary ECOC scheme outperforms the ECOC and ERI ensemble methods on most image and text datasets, except for the TREC text dataset. Specifically, $N$-ary ECOC always performs better than ERI. This is because $N$-ary ECOC varies the predicted classes for each base learner and makes the learners more diverse than ERI; the diverse errors made by the base learners of $N$-ary ECOC are more beneficial to ensemble learning than the similar errors of ERI's base learners. Meanwhile, compared with ECOC, $N$-ary ECOC also shows its superiority in most cases, especially when the number of classes is large (\emph{i.e.,}\xspace $N_C\geq 100$ in our experiments). This is primarily due to, as mentioned by Zhou \emph{et al.}\xspace~\cite{zhou2019n}, the better quality of the coding matrix and the higher discriminative ability (in terms of how many meta-classes a base learner tries to discriminate) of $N$-ary ECOC compared with ECOC.
In fact, we find that the contribution of the class merge degree to the ensemble accuracy of $N$-ary ECOC depends on the dataset: datasets with different numbers of classes require different class merge degree strategies, as discussed in Section~\ref{sssec:meta_class}. Note that the class merge degree, measured by $\frac{N_C-N}{N_C}$, is the fraction by which the number of classes is reduced when the classes are merged into meta-classes.
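As a quick sanity check, the class merge degree is a direct computation and reproduces the values quoted in this section (the class counts 10, 10, 6, 5, 102 and 100 for MNIST, CIFAR-10, TREC, SST, FLOWER-102 and CIFAR-100 are inferred from the quoted degrees):

```python
def merge_degree(n_classes, n_meta):
    """Class merge degree (N_C - N) / N_C: the fraction by which the
    number of classes shrinks when merged into meta-classes."""
    return (n_classes - n_meta) / n_classes

assert merge_degree(10, 3) == 0.7                  # MNIST, N = 3
assert merge_degree(10, 4) == 0.6                  # CIFAR-10, N = 4
assert merge_degree(6, 3) == 0.5                   # TREC, N = 3
assert merge_degree(5, 4) == 0.2                   # SST, N = 4
assert abs(merge_degree(102, 95) - 0.069) < 1e-3   # FLOWER-102, N = 95
assert merge_degree(100, 95) == 0.05               # CIFAR-100, N = 95
```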
\subsubsection{Evaluation on the Effect of Meta-class Number $N$.}\label{sssec:meta_class}
In this section, we investigate the influence of meta-class number $N$, which is one of the crucial hyper-parameters of $N$-ary ECOC. For the datasets with a small value of $N_C$, we experiment on all the possible meta-class numbers, \emph{i.e.,}\xspace from $2$ to $N_C$ ($N=2$ denotes ECOC and $N=N_C$ denotes ERI), while for the datasets with a large value of $N_C$, we select several representative meta-class numbers for the experiment. The ensemble accuracies with respect to $N$ are depicted in Fig.~\ref{fig:ensemble_acc_wrt_n}.
From Fig.~\ref{fig:ensemble_acc_wrt_n}(a), we observe that the performance of the ensemble models is relatively stable across different $N$. The highest ensemble accuracies on MNIST, CIFAR-10, and SST are achieved at $N=3$, $N=4$, and $N=4$, respectively, and the best performance on TREC is obtained at $N=3$ if we do not consider ECOC. After that, the performance on each dataset gradually decreases, with small fluctuations, as the number of meta-classes $N$ increases. Interestingly, $N$-ary ECOC on datasets with a small value of $N_C$ always tends to achieve its best performance with a small value of $N$, \emph{i.e.,}\xspace a large class merge degree. Specifically, the class merge degrees for MNIST, CIFAR-10, TREC and SST are $0.7$, $0.6$, $0.5$ and $0.2$, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/ensemble_acc_wrt_n}
\caption{\small Ensemble accuracies with respect to $N$, where the first point of each line represents ECOC ($N=2$), the last represents ERI ($N=N_C$) and the rest is $N$-ary ECOC with various $N$.}
\label{fig:ensemble_acc_wrt_n}
\end{figure}
However, as shown in Fig.~\ref{fig:ensemble_acc_wrt_n}(b), the performance of the ensemble models with different $N$ fluctuates significantly on the datasets with a large value of $N_C$. The ECOC scheme only achieves $34.26\%$ ensemble accuracy on the CIFAR-100 dataset and $77.05\%$ on the FLOWER-102 dataset. $N$-ary ECOC with $N=3$ obtains $83.52\%$ and $59.75\%$ on the FLOWER-102 and CIFAR-100 datasets, respectively. The ensemble accuracy then improves gradually as the number of meta-classes $N$ increases, peaks at $N=95$ with accuracies of $87.94\%$ and $69.94\%$ on FLOWER-102 and CIFAR-100, respectively, and decreases mildly beyond the optimum. The ensemble accuracies of the ERI scheme ($N=N_C$) on these two datasets are slightly lower than those of $N$-ary ECOC. Clearly, ECOC fails on datasets with a large value of $N_C$, while the higher ensemble performance of $N$-ary ECOC requires a large value of $N$, namely a small class merge degree. On the FLOWER-102 and CIFAR-100 datasets, $N$-ary ECOC obtains good results for $N\geq 75$ and performs best at $N=95$. In particular, the class merge degree is $0.069$ for FLOWER-102 and $0.05$ for CIFAR-100.
In general, we conclude from this experiment that for datasets with a small value of $N_C$ the ensemble performance is relatively stable: it improves slightly until the peak and then decreases a bit, or simply decreases slightly as $N$ increases beyond the peak. For datasets with a large value of $N_C$, the performance boosts significantly at the very beginning, then saturates as $N$ continues to increase, and reaches the optimum when $N$ is close to $N_C$. This can be explained by the fact that base learners with large $N$ have stronger discriminability~\cite{zhou2019n}.
Thus, our suggestions for the choice of $N$ are: 1) For datasets with small $N_C$, a large class merge degree strategy, \emph{i.e.,}\xspace small $N$, is better for achieving good performance, such as $N=3$ or $4$ for datasets with $N_C\leq 10$. 2) Conversely, for datasets with large $N_C$, a small class merge degree strategy should be applied, \emph{e.g.,}\xspace $75\leq N\leq 95$ when $N_C$ is around $100$.
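These rules of thumb can be written down as a small helper. The thresholds are the ones suggested above, while the behavior for mid-range $N_C$ is purely our own interpolation and should be treated as an assumption:

```python
def suggest_meta_class_number(n_classes):
    """Heuristic choice of the meta-class number N following the
    suggestions above: small N_C -> small N (large merge degree);
    large N_C -> N close to N_C (small merge degree). The mid-range
    rule (keep ~90% of the classes) is an assumption, not a result
    from the experiments."""
    if n_classes <= 10:
        return 3 if n_classes <= 6 else 4
    return max(3, round(0.9 * n_classes))

assert suggest_meta_class_number(6) == 3             # TREC-like
assert suggest_meta_class_number(10) == 4            # CIFAR-10-like
assert 75 <= suggest_meta_class_number(100) <= 95    # CIFAR-100-like
```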
\begin{table}[t]
\scriptsize
\caption{\small Ensemble accuracies with their standard deviations.}
\label{tab:esemble_acc_wrt_n_l}
\setlength{\tabcolsep}{3.8 pt}
\centering
\begin{threeparttable}
\begin{tabular}{l l c c c c c c c c}
\toprule
\multirow{2}{*}{Dataset} & \multirow{2}{*}{$N$} & \multicolumn{8}{c}{\# of Base Learners ($N_L$)} \\
& & 10 & 20 & 30 & 45 & 50 & 60 & 80 & 100 \\
\midrule
MNIST & 3 & 99.14\% & 99.20\% & 99.35\% & 99.48\% & \textbf{99.57\%} & \textbf{99.57\%} & - & - \\
CIFAR-10 & 4 & 87.45\% & 89.76\% & 91.78\% & 91.83\% & 91.82\% & 91.92\% & \textbf{91.95\%} & 91.93\% \\
CIFAR-100 & 95 & 67.94\% & 69.12\% & 69.11\% & 69.33\% & 69.34\% & 69.46\% & 69.67\% & \textbf{69.94\%} \\
FLOWER-102 & 95 & 86.06\% & 86.45\% & 86.45\% & 87.06\% & 87.16\% & \textbf{87.94\%} & 87.46\% & 87.59\% \\
\midrule
TREC & 3 & 93.80\% & 94.00\% & 95.20\% & 95.20\% & \textbf{95.60\%} & \textbf{95.60\%} & 95.50\% & \textbf{95.60\%} \\
SST & 4 & 46.74\% & 48.19\% & 49.41\% & 50.18\% & 50.45\% & \textbf{50.86\%} & - & - \\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{table}
\subsubsection{Evaluation on the Effect of Base Learner Number $N_L$.}
In this experiment, we further explore another crucial hyper-parameter of $N$-ary ECOC, namely the number of base learners $N_L$ (equivalently, the code length), and study its influence on the ensemble accuracy. We first report the ensemble accuracies for different $N_L$ on each dataset with the optimal meta-class number $N$, as shown in Table~\ref{tab:esemble_acc_wrt_n_l}. Then, we study the ensemble accuracies of different meta-class numbers $N$ with respect to $N_L$ (see Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l} and~\ref{fig:ensemble_acc_wrt_large_n_l}).
From Table~\ref{tab:esemble_acc_wrt_n_l}, we observe that datasets with small $N_C$ generally require fewer base learners $N_L$ than datasets with large $N_C$ to reach the optimal ensemble accuracies. For example, MNIST and TREC only need $50$ base learners to reach the optimum, while SST obtains its best accuracy with $60$ base learners and CIFAR-10 requires $80$. In comparison, CIFAR-100 reaches its optimal ensemble accuracy with $100$ base learners (note that it first reaches the optimum at $N_L=90$). A special case is FLOWER-102, which has a large $N_C$ ($102$ classes) but only requires $60$ base learners to reach the optimal ensemble accuracy. This is because a model pre-trained on a large-scale dataset (the ILSVRC dataset in our experiment) is utilized, and the pre-trained model already encodes a variety of well-learned, transferable features. Moreover, we also find that the required $N_L$ is related to the single-model performance to some degree: the better the single model performs, the fewer base learners its ensemble requires to achieve the optimal result.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/ensemble_acc_diff_n_wrt_n_l}
\caption{\small Ensemble accuracies of different values of $N$ with respect to $N_L$ on the image and text datasets, where $N=2$ is the ECOC scheme, $N=N_C$ (the largest $N$ in each sub-figure) is the ERI scheme, and the rest are $N$-ary ECOC schemes.}
\label{fig:ensemble_acc_diff_n_wrt_n_l}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/ensemble_acc_wrt_large_N_l}
\caption{\small Ensemble accuracies with respect to a large $N_L$ ($=300$) for the three coding schemes on the FLOWER-102 and TREC datasets.}
\label{fig:ensemble_acc_wrt_large_n_l}
\end{figure}
In addition to the observations from Table~\ref{tab:esemble_acc_wrt_n_l}, we also study the impact of $N_L$ on ensemble performance under different meta-class numbers $N$. As shown in Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l} and~\ref{fig:ensemble_acc_wrt_large_n_l}, the optimal number of base learners $N_L$ for achieving the best accuracy is clearly related to the meta-class number $N$. Normally, an ensemble model with a small meta-class number $N$ needs more base learners than one with a large meta-class number to achieve the same result, because the discriminative ability of the codes is worse for small $N$ than for large $N$.
Considering the definitions of the ECOC, $N$-ary ECOC and ERI schemes, the discriminative ability of ECOC is the worst due to its small meta-class number ($N=2$), which means ECOC needs the most base learners to reach optimal performance, while ERI has the best discriminative ability. Thus, we conclude that the required $N_L$ for ECOC is greater than or equal to that for $N$-ary ECOC, which in turn is greater than or equal to that for ERI. Note that we write ``greater than or equal to'' since there is no guarantee that the optimal $N_L$ for a small $N$ must be larger than that for a large $N$, especially in extreme situations such as $N=2$ (ECOC) versus $N=3$ ($N$-ary ECOC), or $N=99$ ($N$-ary ECOC) versus $N=100$ (ERI) for a dataset with $N_C=100$ classes.
In Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l}, the experimental results on all the datasets show a similar trend: the ensemble accuracies of larger $N$ converge faster than those of smaller $N$ as $N_L$ increases, which means larger $N$ requires fewer base learners and vice versa. For example, as shown in Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l}(a), ECOC reaches its optimal ensemble accuracy at $N_L=100$, while $N$-ary ECOC with $N=4$ and $5$ reaches its optima at $N_L=80$, $N=8$ at $N_L=60$, and ERI peaks at $N_L=55$. The patterns on the TREC and SST datasets are consistent with CIFAR-10. Such patterns are even more distinct for datasets with large $N_C$ (cf. Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l}(d) and~\ref{fig:ensemble_acc_diff_n_wrt_n_l}(e)). For instance, in Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l}(d), the ensemble accuracies of $N=10$ are highest at $N_L=100$, $N$-ary ECOC with $N=30,50,75$ peaks at $N_L=95$, $N=95$ requires $90$ base learners, and ERI needs $N_L=85$. Here we do not take ECOC into consideration, since it fails to improve the ensemble accuracy, reaching only $34.26\%$. In Fig.~\ref{fig:ensemble_acc_diff_n_wrt_n_l}(e), we see that the ensemble accuracies of $N=2,3,5,10$ converge at $N_L=60$, $N=20,40$ converge at around $N_L=50$, $N=60,80,90$ at approximately $N_L=45$, and $40$ base learners are needed for $N$-ary ECOC with $N=95$ and for ERI to reach the optimal ensemble accuracy.
Apart from the optimal $N_L$ for each meta-class number $N$, we also observe that using $15\sim 25$ base learners for ERI is good enough for datasets with small $N_C$, while $20\sim 40$ base learners suffice for large $N_C$. ECOC fails with large $N_C$; on datasets with small $N_C$, although ECOC performs comparably to $N$-ary ECOC and ERI, it still needs more base learners to converge, which differs from the conclusion in~\cite{allwein2000reducing} that ECOC requires $N_L=10\log_{2}(N_C)$ on traditional classifiers. For $N$-ary ECOC, the optimal performance is highly related to the choice of $N$. If the choice of $N$ follows the suggestions in Section~\ref{sssec:meta_class}, $40\sim 60$ base learners are enough to achieve good performance for small $N_C$, while large $N_C$ needs around $60\sim 100$ base learners.
We further extend the number of base learners to $300$ and experiment on the FLOWER-102 and TREC datasets to investigate the ensemble performance as $N_L$ keeps increasing, as shown in Fig.~\ref{fig:ensemble_acc_wrt_large_n_l}. From Fig.~\ref{fig:ensemble_acc_wrt_large_n_l}(a), we find that the performance of ECOC improves significantly as $N_L$ increases, then stays relatively stable with a slight increase after $N_L=100$, and reaches its optimal accuracy of $82.17\%$ at around $N_L=270$. However, the best performance of ECOC obtained with this large number of base learners is still lower than that of $N$-ary ECOC with $N=95$ and of ERI using only $5$ base learners, which indicates that ECOC is not suitable for the large-$N_C$ case. $N$-ary ECOC and ERI obtain good scores with only a small number of base learners and improve slightly to the optimal accuracy at around $N_L=40$. After that, the performance remains stable as $N_L$ increases and even drops when $N_L$ continues to grow, which indicates that monotonically increasing $N_L$ does not keep improving performance. Similar observations can be made in Fig.~\ref{fig:ensemble_acc_wrt_large_n_l}(b).
Generally, there is no definitive rule for choosing the number of base learners $N_L$, but some helpful guidelines can be summarized from the experiments: 1) The choice of the meta-class number $N$ is more important than the number of base learners $N_L$ for the performance of $N$-ary ECOC, especially for datasets with large $N_C$, since increasing $N_L$ cannot compensate for the negative effects caused by a badly selected $N$ (\emph{e.g.,}\xspace $N=10$ for CIFAR-100). 2) Although the optimal number of base learners $N_L$ varies with $N_C$, the suggested $N_L$ lies in the range $\big[\lfloor 10\log_{2.2}(N_C)\rfloor, \lceil 10\log_{1.5}(N_C)\rceil\big]$. For example, the optimal $N_L$ ranges in $[30,58]$ for $N_C=10$ and $[59,110]$ for $N_C=100$, which aligns with the observations in our experiments.
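The suggested range in guideline 2) is easy to compute. A small sketch, assuming the floor/ceil convention written in the formula; note that the exact endpoints depend on rounding conventions and may differ by a small amount from the example ranges quoted in the text:

```python
import math

def suggested_num_learners(n_classes):
    """Guideline range for the number of base learners N_L:
    [floor(10*log_{2.2}(N_C)), ceil(10*log_{1.5}(N_C))]."""
    lo = math.floor(10 * math.log(n_classes) / math.log(2.2))
    hi = math.ceil(10 * math.log(n_classes) / math.log(1.5))
    return lo, hi

print(suggested_num_learners(10))
print(suggested_num_learners(100))
```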
\subsubsection{Comparison with Three Parameter Sharing Strategies.}
In this section, we study the effect of three different parameter sharing strategies within the ECOC, $N$-ary ECOC, and ERI frameworks. Note that, for the $N$-ary ECOC framework, we only select the optimal meta-class number $N$ of each dataset for display, except for the CIFAR-100 dataset, for which four different values of $N$ are chosen. We first study the performance of the three parameter sharing strategies on each tested dataset.
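Before the results, the three strategies can be sketched schematically. In the sketch below each base learner is a stack of labeled layers, with equal labels meaning shared weights; the exact split point for partial sharing is an illustrative assumption, not the configuration used in the experiments:

```python
def build_ensemble(n_learners, strategy, n_layers=4):
    """Return one list of layer labels per base learner; identical labels
    denote shared weights. The output head is kept private so that each
    learner can predict its own meta-class assignment."""
    learners = []
    for i in range(n_learners):
        if strategy == "no":          # fully independent networks
            shared_upto = 0
        elif strategy == "partial":   # share the early feature layers
            shared_upto = n_layers // 2
        elif strategy == "full":      # share every layer below the head
            shared_upto = n_layers
        layers = [f"shared_layer_{k}" for k in range(shared_upto)]
        layers += [f"learner{i}_layer_{k}" for k in range(shared_upto, n_layers)]
        layers.append(f"learner{i}_head")
        learners.append(layers)
    return learners

full = build_ensemble(3, "full")
no = build_ensemble(3, "no")
assert full[0][:-1] == full[1][:-1]   # full share: identical backbones
assert set(no[0]).isdisjoint(no[1])   # no share: nothing in common
```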
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/param_sharing_trec}
\caption{\small Parameter sharing strategies in ECOC, $N$-ary ECOC and ERI for TREC dataset.}
\label{fig:param_sharing_trec}
\end{figure}
From the experimental results on the TREC dataset (see Fig.~\ref{fig:param_sharing_trec}), we observe that the no-share strategy performs better than the partial and full sharing strategies for ECOC, $N$-ary ECOC, and ERI. When the number of base learners $N_L$ is small, the performance of no share is not satisfactory; it then improves significantly as $N_L$ increases, while the performances of partial and full share are relatively stable with respect to $N_L$. Moreover, when the meta-class number $N$ is small, partial share outperforms full share, and the performance of no share is much better than both. However, when $N$ is large, full share is better than partial share, and the performance of no share is only slightly higher than that of partial and full share.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/param_sharing_sst}
\caption{\small Parameter sharing strategies in ECOC, $N$-ary ECOC and ERI for SST dataset.}
\label{fig:param_sharing_sst}
\end{figure}
From Fig.~\ref{fig:param_sharing_sst}, we have the following observations. First, when the meta-class number $N$ is small, both the partial and no share models improve significantly as $N_L$ increases; partial share generally outperforms no and full share except when $N_L$ is small. Second, when the meta-class number $N$ is large, as shown in Fig.~\ref{fig:param_sharing_sst}(b) and~\ref{fig:param_sharing_sst}(c), the performances of the three strategies are stable, and the improvement of no share is the most significant as $N_L$ increases. The no-share strategy yields the best performance with $N=4$, while the partial share strategy always performs best in the ERI setting.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/param_sharing_cifar10}
\caption{\small Parameter sharing strategies in ECOC, $N$-ary ECOC and ERI for CIFAR-10 dataset.}
\label{fig:param_sharing_cifar10}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/param_sharing_cifar100}
\caption{\small Parameter sharing strategies in ECOC, $N$-ary ECOC and ERI for CIFAR-100 dataset.}
\label{fig:param_sharing_cifar100}
\end{figure}
In Fig.~\ref{fig:param_sharing_cifar10}, the performances of the no, partial, and full share strategies are more stable. When the number of base learners $N_L$ is small, the performance of no share is the worst for ECOC and $N$-ary ECOC, while partial share performs better for $N$-ary ECOC and ERI. As $N_L$ increases, all strategies improve significantly for ECOC: partial share outperforms the other two strategies at the beginning, then no share comes closer to partial share and eventually reaches slightly higher performance. For $N$-ary ECOC, the partial and full share strategies do not show significant improvement, while no share improves noticeably and outperforms partial and full share despite its lower ensemble accuracy at the very beginning. For ERI, all three strategies perform stably, with no share always performing best and full share staying at the bottom.
In the last experiment, we study the parameter sharing strategies in ECOC, $N$-ary ECOC, and ERI for a dataset with a large number of classes, as shown in Fig.~\ref{fig:param_sharing_cifar100}. For the $N$-ary ECOC setting, we experiment with four different meta-class numbers, $N=10,30,50,95$.
First, we observe that the ECOC model with the no-share strategy fails to achieve satisfactory performance, while the partial and full share strategies improve significantly with ECOC as $N_L$ increases. Moreover, partial share always outperforms full share.
Second, for $N$-ary ECOC with a small meta-class number, we observe that the partial share strategy always outperforms no and full share. No share improves the most significantly, and its performance becomes comparable to that of partial share as $N_L$ increases; the performance of full share remains the worst. As the meta-class number $N$ increases, the partial share strategy outperforms no share at the beginning, but is gradually surpassed by no share as the number of base learners $N_L$ grows. For $N=50$ and $95$, the performance of no share is comparable to that of partial share when $N_L$ is small, and no share outperforms partial share as $N_L$ increases. Moreover, for $N$-ary ECOC, the full share strategy consistently performs worst.
Third, for the ERI model, the observations are similar to those for $N$-ary ECOC with a large meta-class number $N$: the no-share strategy is comparable to partial share when $N_L$ is small and always performs best as $N_L$ increases, while the performance of full share is the worst.
Finally, we conclude that: 1) In general, for datasets with a small number of classes, the performance of the no-share model is better than or equal to that of the partial share model, so the no-share strategy is recommended. 2) For datasets with a small number of classes, when the meta-class number $N$ is large, the three strategies perform stably. 3) For datasets with a large number of classes, when the meta-class number is small, the partial share model performs best. 4) For datasets with a large number of classes, when the meta-class number is large, the no-share model outperforms the partial and full share models in most cases, so the no-share strategy should be preferred in this case. 5) If the meta-class number is large, the performance difference between the three sharing strategies is marginal; in that case, full share can be recommended for its parameter efficiency.
\section{Conclusion}\label{sec:conclude}
In this paper, we mainly investigate how to effectively integrate deep learning with the $N$-ary ECOC framework, also termed Deep $N$-ary ECOC. To achieve this goal, we give three different realizations. We further carry out extensive experiments to show the superiority of deep $N$-ary ECOC over existing data-independent deep ensemble strategies.
\section*{Acknowledgement}
The research work is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045). Ivor W. Tsang was supported by ARC DP180100106 and DP200101328.
\bibliographystyle{splncs04}
\section{Introduction}
Hyperspectral image (HSI), a three-dimensional (3D) spatial-spectral cube rich in spectral information, has been widely used in applications such as face recognition~\cite{HyperfaceTPAMI2003}, remote sensing~\cite{Stein2002} and food surveillance~\cite{Bioucas2012jstars}. Typically, a spectrometer is used, which scans a 1D line or 2D plane to capture a full 3D image~\cite{Shaw2003}. Although it can achieve a high spectral resolution, such as 10 nm or less~\cite{Green1998}, it is time-consuming and unsuitable for dynamic scenes~\cite{Lizhi_2017PAMI}. Therefore, it is necessary to improve the efficiency of hyperspectral imaging for dynamic applications.
Inspired by compressive sensing~\cite{Donoho}, several compressive spectral imaging (CSI) techniques have been well developed~\cite{descour1995computed,CASSI2008,ford2001large} to improve the imaging efficiency.
Specifically, a single or few 2D snapshot compressed measurements are captured by the CSI system to reconstruct the 3D HSI data-cube using inverse algorithms.
Among these CSI systems, the coded aperture snapshot spectral imager (CASSI)~\cite{CASSI2008} has attracted much attention; it utilizes a coded aperture and one or two dispersive elements to modulate the optical field from a scene, and a detector captures a 2D, multiplexed projection of the 3D spatial-spectral data-cube representing the scene~\cite{CASSI2008}. Following this, different variants of CASSI have been developed, such as dual dispersive CASSI (DDCASSI)~\cite{DDCCSI}, single disperser CASSI (SDCASSI)~\cite{CASSI2008}, the spatial-spectral encoded compressive hyperspectral imager (SSCSI)~\cite{SSCSI}, the dual-coded hyperspectral imager (DCSI)~\cite{DCSI}, the colored coded aperture spectral camera imager (CCASSI)~\cite{arguello2014colored} and so on. For more details, we refer readers to the review works~\cite{ImagingRev2011,CASSIreV2016,arce2013compressive}.
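The modulation performed by a single-disperser CASSI can be summarized in a few lines: each spectral band is masked by the coded aperture, sheared by the disperser (here a one-pixel shift per band, an illustrative choice), and integrated on the 2D detector. This is a schematic sketch, not a calibrated model of any specific instrument:

```python
import numpy as np

M, N, B = 8, 8, 4                          # spatial size and number of bands
rng = np.random.default_rng(1)
x = rng.random((M, N, B))                  # spatial-spectral data-cube
mask = rng.integers(0, 2, size=(M, N))     # binary coded aperture

# Mask each band, shift it by its band index, and sum on the detector.
y = np.zeros((M, N + B - 1))
for b in range(B):
    y[:, b:b + N] += mask * x[:, :, b]

print(y.shape)  # (8, 11): one 2D measurement encodes all 4 bands
```

The detector thus records a single multiplexed projection of the cube, which is why the inverse problem is so strongly underdetermined.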
In a nutshell, CSI is a system composed of a hardware setup plus a reconstruction algorithm.
In this paper, we focus on the algorithm. In particular, we develop an efficient HSI recovery method via non-iterative fusion of CASSI and the RGB measurements.
In order to solve the inverse problem in CSI, {\em i.e.}, retrieving the 3D HSI from the coded 2D measurement, various regularized methods, including sparse representation~\cite{DCSI,SSCSI,RGBRecovery,SIAM2013,zhang2016dictionary}, total variation (TV)~\cite{dualCam2015,yuan2016generalized}, and non-local low-rank regularization~\cite{Yuan_PAMI_2019,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten} have been developed.
Recently, deep learning methods based on convolutional neural networks (CNNs) have also been utilized to learn the prior mapping from the coded image to the reconstructed HSI~\cite{Zhang_2019_ICCV,Miao_2019_ICCV}.
However, the quality of the reconstructed HSI is still limited, because CASSI is severely undersampled: an HSI with tens to hundreds of spectral bands is compressed into a single measurement. In addition, CNN-based methods~\cite{Zhang_2019_ICCV,Miao_2019_ICCV} depend on the quality of the training samples and lack flexibility~\cite{Yuan_2020}.
In short, the main criteria for CASSI reconstruction algorithms are \textit{accuracy, speed and flexibility}~\cite{Yuan_2020}.
However, due to the severely ill-posed nature of CASSI, existing algorithms cannot meet all three criteria.
One solution from the hardware side to alleviate this challenge is to capture complementary measurements such as panchromatic or RGB images with CASSI~\cite{side_2015,Zhang_2019_ICCV_Ten,CASSI_RGB}.
The related reconstruction algorithms for these hybrid systems~\cite{dualCam2015,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten,side_2015,CASSI_RGB} usually regard the complementary measurements as additional side information to boost the quality of the reconstructed HSI. In particular, these algorithms compose a larger sensing matrix by stacking the complementary sensing matrix and the CASSI sensing matrix. Consequently, reconstructing HSIs from CASSI plus complementary measurements costs much more time than reconstruction from CASSI alone.
Bearing the above concerns in mind, in this paper we aim to develop an efficient HSI reconstruction algorithm from CASSI and complementary measurements. Different from previous works, which use iterative methods to update the reconstructed HSI step by step while exploiting the spatial-spectral priors of the full HSI~\cite{Yuan_PAMI_2019,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten,CASSI_RGB}, our proposed method is inspired by the fact that HSIs can be assumed to lie in a low-dimensional spectral subspace, an assumption that has been widely used in different applications~\cite{BioucasTGRS2008,he2018non,yokoya2012coupled}.
In detail, by taking advantage of both measurements in the hybrid system, we propose to compute the spatial coefficients from the RGB measurement, optimize the orthogonal spectral basis from the CASSI measurement, and finally reconstruct the HSI as the product of these two components. We depict the proposed method in Fig.~\ref{fig:fuse}. To make the algorithm efficient, we further propose a non-iterative optimization, which is described in Section~\ref{sec:fusion}.
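The low-dimensional spectral subspace assumption behind this two-stage design can be illustrated numerically: the spectral unfolding of an HSI factorizes as $\mat{X}_{(3)}\approx \mat{E}\mat{A}$ with an orthogonal spectral basis $\mat{E}$ and spatial coefficients $\mat{A}$. The toy example below estimates both factors by an SVD of a synthetic cube; it only demonstrates the subspace model, not the actual CASSI/RGB estimation developed in Section III:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, B, k = 16, 16, 31, 4                 # spatial size, bands, subspace dim

# Synthetic HSI whose spectra lie (noisily) in a k-dimensional subspace.
E_true = np.linalg.qr(rng.standard_normal((B, k)))[0]
A_true = rng.standard_normal((k, M * N))
X = E_true @ A_true + 0.01 * rng.standard_normal((B, M * N))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
E = U[:, :k]                               # orthogonal spectral basis, E^T E = I
A = E.T @ X                                # least-squares spatial coefficients
X_hat = E @ A                              # rank-k reconstruction

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")  # close to the noise level
```

In the proposed method, $\mat{E}$ is instead optimized from the CASSI measurement and $\mat{A}$ is computed from the RGB measurement.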
In addition, to reduce the rank of the spatial coefficients, we segment the HSI into spatially overlapping patches, and process each patch separately.
\subsection{Contributions of This Paper}
The main advantages of the proposed method compared with the previous algorithms~\cite{dualCam2015,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten,Zhang_2019_ICCV,Zhao_2019_CVPR,side_2015,CASSI_RGB} can be summarized as follows:
\begin{itemize}[]
\item \textit{Fast:} our proposed two-stage fusion method of CASSI and RGB measurements does not require regularization or iteration. It therefore finishes the HSI reconstruction within seconds, which is $5000\times$ faster than DeSCI~\cite{Yuan_PAMI_2019} and DLTR~\cite{Zhang_2019_ICCV_Ten}.
\item \textit{Flexible:} our model does not need to know the spectral response (sensing matrix) of the RGB detector in advance. What we need is the alignment of CASSI and RGB measurements. It is thus flexible to apply our model to different applications.
\item \textit{High accuracy:} the proposed method is evaluated on extensive simulated and real data experiments. It is reported to achieve the best results compared to other state-of-the-art methods~\cite{Yuan_PAMI_2019,Zhang_2019_ICCV_Ten}.
\end{itemize}
\subsection{Paper Organization and Notations}
The remainder of this paper is organized as follows. Section II introduces the related works of computational imaging reconstruction. Section III introduces the dual-camera compressive hyperspectral imaging system, and presents the proposed fusion model and related solutions. Section IV illustrates the experimental results and Section V presents the detailed discussion of our method, which is followed by the conclusions in Section VI.
In this paper, tensors of order $3$ are denoted by boldface Euler script letters, e.g., $\tensor{X}\in\mathbb{R}^{M\times N \times B}$. Scalars are denoted by normal lowercase or uppercase letters, e.g., $x, X \in\mathbb{R}$. $X(i,j,k)$ denotes the element of tensor $\tensor{X}$ at position $(i,j,k)$. Vectors are denoted by boldface lowercase letters, e.g., $\vect{x}\in\mathbb{R}^{M}$. Matrices are denoted by boldface capital letters, e.g., $\mat{X}\in\mathbb{R}^{M\times N}$. The mode-$n$ unfolding~\cite{kolda2009tensor} of a tensor $\tensor{X}\in\mathbb{R}^{M\times N \times B}$ is denoted by $\mat{X}_{(n)}$; for example, the mode-$3$ unfolding is $\mat{X}_{(3)}\in\mathbb{R}^{B \times MN}$.
The inverse of the mode-$n$ unfolding is denoted by $\text{fold}_n(\cdot)$, {\em i.e.}, for a tensor $\tensor{X}$ we have $\text{fold}_n(\mat{X}_{(n)})=\tensor{X}$.
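The unfolding and folding operators can be made concrete. The sketch below moves mode $n$ to the front and flattens the remaining modes with row-major ordering, which is self-consistent for $\text{fold}_n(\mat{X}_{(n)})=\tensor{X}$ even though it differs from the column ordering used in~\cite{kolda2009tensor}:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: move mode n to the front, flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(Xn, n, shape):
    """Inverse of unfold: restore the tensor with the given shape."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(Xn.reshape([shape[n]] + rest), 0, n)

X = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # a toy M x N x B cube
X3 = unfold(X, 2)                            # shape (4, 6): bands x pixels
assert np.array_equal(fold(X3, 2, X.shape), X)
```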
\begin{figure}[!t]
\centering
\includegraphics[width=1 \linewidth]{Fuse.png}
\caption{Illustration showing the proposed fusion model.}
\label{fig:fuse}
\vspace{-5pt}
\end{figure}
\section{Related Works}
In this section, we briefly introduce the related reconstruction algorithms from CASSI measurements. \par
To reconstruct the HSI from the coded CASSI measurement, a two-step iterative shrinkage/thresholding (TwIST) algorithm was first utilized for the reconstruction task~\cite{dualCam2015,Sep_imging2013}. As we know, HSI reconstruction from the highly compressed CASSI measurement is an underdetermined problem. Furthermore, the sensing matrix in CASSI is specifically structured and poses challenges for HSI reconstruction~\cite{Yuan_PAMI_2019}. Therefore, different kinds of HSI regularization have been introduced into the reconstruction framework to increase the quality of the reconstructed HSI. Initially, sparse regularization with a TI-Haar basis~\cite{DDCCSI}, wavelet bases~\cite{CASSI2008} and learned over-complete dictionaries~\cite{DCSI,SSCSI,RGBRecovery,SIAM2013,zhang2016dictionary} was utilized to exploit the spatial-spectral sparsity of the HSI. To exploit the spatial smoothness of the HSI, total variation (TV) has been incorporated into the reconstruction framework~\cite{dualCam2015,yuan2016generalized}. In addition, low-rank matrix/tensor decomposition has been introduced to exploit the spatial-spectral low-rank property~\cite{golbabaee2012joint,wang2017compressive,jiang2018efficient,YingFu2016} and the non-local spatial low-rank property of the HSI~\cite{Yuan_PAMI_2019,Lizhi_2017PAMI,xue2019nonlocal,Zhang_2019_ICCV_Ten}. With the development of these state-of-the-art HSI regularizers, the quality of the reconstructed HSIs has increased significantly, but new problems have also emerged. As reported in~\cite{Yuan_PAMI_2019}, the non-local regularized method DeSCI costs more than $3$ hours to reconstruct an HSI of size $512\times 512 \times 31$, preventing real-time acquisition of the HSI. Moreover, different regularizers suit different HSI scenarios, which raises another problem: it is very hard to choose the optimal regularizer for HSI reconstruction. \par
Very recently, deep learning has also been applied to CASSI reconstruction~\cite{meng2020end}. \cite{AE_SC_recovery} proposed to use a convolutional auto-encoder (AE) to learn a nonlinear sparse representation for CASSI. \cite{Zhao_2019_CVPR} introduced a CNN to train the reconstruction model and then tested it on the coded image. After that, different CNN architectures have been introduced for the reconstruction of CASSI coded images~\cite{Zhang_2019_ICCV,Miao_2019_ICCV}. CNN-based methods utilize numerous external datasets to train the model and then apply the trained model at the testing stage. The training stage is time-consuming, but the testing stage is fast. From this perspective, CNN-based methods can alleviate the problems of the earlier regularizer-based methods and learn prior knowledge from the training dataset. However, these end-to-end deep learning methods lack flexibility~\cite{Yuan_2020}: since each trained model is tied to one specific sensing matrix, whenever the sensing matrix changes, these methods need to train a new model, which is again time-consuming. Furthermore, CNN-based methods fail when the training samples are insufficient. \par
Although regularization based methods improve the quality of the HSI reconstructed from CASSI measurements, their performance is bounded by the limited measurements, and complex regularizers further impose a heavy computational burden on the reconstruction task. Therefore, some researchers attempt to reconstruct the HSI from additional measurements, \textit{i.e.,} CASSI measurements together with complementary measurements (panchromatic or RGB). Several improved imagers have been proposed, including multi-frame CASSI~\cite{CASSIreV2016}, dual-camera CASSI with a panchromatic measurement~\cite{dualCam2015,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten}, and dual-camera CASSI with an RGB measurement~\cite{side_2015,CASSI_RGB}. However, previous works~\cite{dualCam2015,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten,side_2015,CASSI_RGB} regard the complementary measurements merely as an addition to CASSI and compose a larger sensing matrix by stacking the complementary sensing matrix and the CASSI sensing matrix, which leads to even longer running time.
To reconstruct the HSI from CASSI and the complementary measurements, various regularizers have been investigated to constrain the full HSI, \textit{i.e.,} TV~\cite{dualCam2015,CASSI_RGB}, non-local sparse representation~\cite{Lizhi_2017PAMI}, and non-local low-rank matrix/tensor factorization~\cite{Yuan_PAMI_2019,Zhang_2019_ICCV_Ten}.
Compared to CASSI-only reconstruction, HSI reconstruction from CASSI and the complementary measurements can achieve higher accuracy, owing to the additional complementary measurements. However, during the optimization, the new sensing matrix (obtained by stacking the complementary sensing matrix and the CASSI sensing matrix) is several times larger than that of CASSI alone, resulting in an unacceptable time cost for HSI reconstruction. Furthermore, the optimization requires the complementary sensing matrix to be known in advance, which demands additional cross-sensor calibration in real applications. \par
\begin{figure}[!t]
\centering
\includegraphics[width=1 \linewidth]{illustration_flow.png}
\caption{Illustration showing the DCCHI system.}
\label{fig:ill}
\vspace{-10pt}
\end{figure}
\section{Proposed Fusion Method}
In this section, we first introduce the dual-camera compressive hyperspectral imaging (DCCHI) system. Subsequently, we present the proposed fusion model of CASSI and RGB measurements with a non-iterative optimization strategy. Finally, we analyze the advantages of the proposed model over previous works.
\subsection{Dual-camera Compressive Hyperspectral Imaging}
The dual-camera compressive hyperspectral imaging system was first proposed in~\cite{dualCam2015,side_2015}. As illustrated in Fig. \ref{fig:ill}, the incident light first passes through a beam splitter and is equally partitioned into two beams. The first beam is captured by the CASSI branch, which consists of an objective lens, mask, relay lens, and dispersive prism; the light is finally sensed by a gray-scale camera as a coded image. We adopt $\tensor{X}\in\mathbb{R}^{M\times N \times B}$ to represent the original HSI, with $M,N$ denoting the spatial dimensions and $B$ the number of spectral bands. The compressive measurement at position $(i,j)$ via CASSI can then be represented as
\begin{equation}
\label{eq:CASSI}
Y(i,j) = \sum_{k=1}^{B} X(i,j,k) \odot C(i,j,k),
\end{equation}
where $\mat{Y} \in \mathbb{R}^{M\times N}$ is the coded image, $\odot$ is the Hadamard (element-wise) product, and $\tensor{C}\in\mathbb{R}^{M\times N \times B}$ denotes the mask. The process by which CASSI maps the 3D scene to a 2D coded image is illustrated in Fig.~\ref{fig:ill_cassi}. As a linear transformation, \eqref{eq:CASSI} can be reformulated as follows~\cite{Yuan_PAMI_2019,wang2019hyperspectral}
\begin{equation}
\label{eq:CASSI_linear}
\vect{y} = \mat{\Phi}^{c} \vect{x},
\end{equation}
where $\mat{\Phi}^{c} \in \mathbb{R}^{MN\times MNB}$ is the sensing matrix derived from $\tensor{C}$, and $\vect{y}, \vect{x}$ are the vectorizations of $\mat{Y}$ and $\tensor{X}$, respectively. \par
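As a concrete illustration, the pixel-wise measurement of \eqref{eq:CASSI} and its vectorized form \eqref{eq:CASSI_linear} can be sketched in a few lines of NumPy. This is a toy example with made-up sizes, and the spectral shearing of the dispersive prism is assumed to be folded into the 3-D mask $\tensor{C}$:

```python
import numpy as np

# Toy sketch of the CASSI forward model (hypothetical sizes; the prism's
# spectral shearing is assumed to be folded into the 3-D mask C).
rng = np.random.default_rng(0)
M, N, B = 8, 8, 5
X = rng.random((M, N, B))                        # hyperspectral cube
C = rng.integers(0, 2, (M, N, B)).astype(float)  # binary coded mask
Y = (X * C).sum(axis=2)                          # 2-D coded image, Eq. (eq:CASSI)

# Equivalent linear map y = Phi_c x of Eq. (eq:CASSI_linear):
# row p = i*N + j of Phi_c holds the mask values of pixel (i, j).
x = X.reshape(-1)                    # length M*N*B, C-order index (i*N+j)*B+k
Phi_c = np.zeros((M * N, M * N * B))
for p in range(M * N):
    i, j = divmod(p, N)
    Phi_c[p, p * B:(p + 1) * B] = C[i, j, :]
y = Phi_c @ x                        # equals Y.reshape(-1)
```

The sparse structure of $\mat{\Phi}^{c}$ (one block of $B$ mask values per row) is what makes the CASSI inverse problem so ill-posed.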
Recalling Fig. \ref{fig:ill}, the second beam of the incident light is captured by the RGB camera, and the obtained measurement at each position $(i,j)$ is
\begin{equation}
\label{eq:RGB}
\mat{Z}_{ij:} = \mat{A}^{\top}\mat{X}_{ij:} .
\end{equation}
Here, $\tensor{Z} \in \mathbb{R}^{M\times N \times 3}$ is the measured RGB image, and $\mat{A}\in \mathbb{R}^{B \times 3}$ is the spectral response function of the RGB detector~\cite{yokoya2012coupled}. $\mat{Z}_{ij:}\in \mathbb{R}^{3}$ and $\mat{X}_{ij:}\in \mathbb{R}^{B}$ represent the vectorizations of $\tensor{Z}(i,j,:)$ and $\tensor{X}(i,j,:)$, respectively. Similar to \eqref{eq:CASSI_linear}, \eqref{eq:RGB} can also be reformulated in linear form as
\begin{equation}
\label{eq:RGB_linear}
\vect{z} = \mat{\Phi}^{r} \vect{x},
\end{equation}
where $\mat{\Phi}^{r}\in\mathbb{R}^{3MN\times MNB}$ is the spectral sensing matrix derived from $\mat{A}$, and $\vect{z}$ is the vectorization of $\tensor{Z}$.\par
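The RGB branch of \eqref{eq:RGB} is simply a per-pixel spectral projection, which can be sketched in one line of NumPy (toy sizes; the response matrix here is a random stand-in for a real camera's spectral response $\mat{A}$):

```python
import numpy as np

# Toy sketch of the RGB branch, Eq. (eq:RGB): every pixel's B-band spectrum
# is projected onto 3 channels by a (stand-in) spectral response A.
rng = np.random.default_rng(1)
M, N, B = 8, 8, 5
X = rng.random((M, N, B))
A = rng.random((B, 3))                  # hypothetical spectral response
Z = np.einsum('ijb,bc->ijc', X, A)      # Z_{ij:} = A^T X_{ij:}, shape (M, N, 3)
```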
\begin{figure}[!t]
\centering
\includegraphics[width=1 \linewidth]{CASSI_flow.png}
\caption{Illustration of CASSI from 3D scene to 2D coded image.}
\label{fig:ill_cassi}
\vspace{-10pt}
\end{figure}
\subsection{Proposed Observation Model}
To reconstruct the HSI from the CASSI and RGB measurements, previous works~\cite{dualCam2015,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten,Zhang_2019_ICCV,side_2015,CASSI_RGB} stack $\vect{y}$ and $\vect{z}$, as well as $\mat{\Phi}^{c}$ and $\mat{\Phi}^{r}$ from \eqref{eq:CASSI_linear} and \eqref{eq:RGB_linear}, to build the following framework
\begin{equation}
\label{eq:CASSI_RGB_pre}
\left[\begin{array}{c}\vect{y} \\ \vect{z}\end{array}\right] = \left[\begin{array}{c}\mat{\Phi}^{c}\\
\mat{\Phi}^{r} \end{array} \right] \vect{x}.
\end{equation}
Therefore, \eqref{eq:CASSI_RGB_pre} can be regarded as an extension of the CASSI reconstruction framework \eqref{eq:CASSI_linear}. Optimizing \eqref{eq:CASSI_RGB_pre} with proper regularizers~\cite{dualCam2015,CASSI_RGB,Lizhi_2017PAMI,Yuan_PAMI_2019,Zhang_2019_ICCV_Ten} typically incurs a huge computational cost.
To avoid this heavy computation, and in contrast to \eqref{eq:CASSI_RGB_pre}, we assume that {\em the HSI can be decomposed into an orthogonal spectral basis and the corresponding spatial coefficients}. We compute the coefficients from the RGB measurement and the orthogonal spectral basis from the CASSI measurement separately, and finally fuse the two components to reconstruct the HSI.
In detail, HSIs are assumed to lie in an approximately low-dimensional spectral subspace.
Specifically, the spectral low-rank representation of an HSI can be formulated as
\begin{equation}
\label{eq:low-rank}
\mat{X} = \mat{E} \mat{W},
\end{equation}
where we define $\mat{X}:=\mat{X}_{(3)}$ without loss of generality, $\mat{E}\in \mathbb{R}^{B\times k}$ and $\mat{W}\in \mathbb{R}^{k\times MN}$ are the orthogonal spectral basis and the coefficients, respectively, and $k$ is the rank of $\mat{X}$.
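The decomposition \eqref{eq:low-rank} can be sketched with a truncated SVD on a synthetic cube that lies exactly in a $k$-dimensional spectral subspace (toy sizes; for real HSIs the subspace is only approximate):

```python
import numpy as np

# Sketch of the spectral low-rank model X = E W of Eq. (eq:low-rank), on a
# synthetic mode-3 unfolding built to have exact rank k.
rng = np.random.default_rng(2)
M, N, B, k = 16, 16, 31, 3
E_true = np.linalg.qr(rng.random((B, k)))[0]    # orthogonal spectral basis
W_true = rng.random((k, M * N))
Xmat = E_true @ W_true                          # unfolding, shape (B, M*N)

# A truncated SVD recovers an orthogonal basis E and coefficients W.
U, s, Vt = np.linalg.svd(Xmat, full_matrices=False)
E = U[:, :k]                                    # (B, k), orthonormal columns
W = s[:k, None] * Vt[:k]                        # (k, M*N)
```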
Our major contribution is to embed \eqref{eq:low-rank} into \eqref{eq:RGB} and \eqref{eq:CASSI} to obtain a new observation model from CASSI and RGB measurements
\begin{eqnarray}
\mat{Z} &=& (\mat{A}^{\top}\mat{E}) \mat{W},
\label{eq:fusion1} \\
\vect{y} &=& \mat{\Phi}^{\mat{W}} \vect{e}, \label{eq:fusion2}
\end{eqnarray}
where we again define $\mat{Z}:=\mat{Z}_{(3)}$. $\mat{\Phi}^{\mat{W}}$ is the composition of $\tensor{C}$ and $\mat{W}$, and $\vect{e}$ is the vectorization of $\mat{E}$. We now describe in detail how to obtain $\mat{\Phi}^{\mat{W}}$ from the known $\tensor{C}$ and $\mat{W}$.
From \eqref{eq:CASSI}, we can observe that the sensor mask $\tensor{C}$ of CASSI measures the HSI $\tensor{X}$ pixel by pixel. Therefore, \eqref{eq:CASSI} can be reformulated as~\cite{GolbabaeeTIP}
\begin{align}
\label{eq:CASSI_pix}
Y(i,j) &= \mat{C}_{ij:}^{\top} \mat{E}\mat{W}_{ij:} \\ \notag
&= \{\mat{W}_{ij:}^{\top} \otimes \mat{C}_{ij:}^{\top}\} \vect{e},
\end{align}
where $\mat{C}_{ij:}$ represents the vectorization of $\tensor{C}(i,j,:)$, $\mat{W}_{ij:}=\mat{W}(:,\,i+(j-1)\times M)$, and $\otimes$ is the Kronecker product. By stacking the measurements of all pixels together, we obtain $\mat{\Phi}^{\mat{W}} \in \mathbb{R}^{MN\times kB}$ in \eqref{eq:fusion2} as
\begin{equation}
\label{eq:phi}
\mat{\Phi}^{\mat{W}} = [\mat{W}_{11:}^{\top} \otimes \mat{C}_{11:}^{\top};\ldots;\mat{W}_{MN:}^{\top} \otimes \mat{C}_{MN:}^{\top}].
\end{equation}
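The Kronecker construction of \eqref{eq:CASSI_pix} and \eqref{eq:phi} can be verified numerically with a small sketch (toy sizes; for simplicity we index pixels in C-order, $p=iN+j$):

```python
import numpy as np

# Sketch of composing Phi_W from mask C and coefficients W via Eq. (eq:phi):
# the row for pixel p is kron(W_p^T, C_p^T), so Phi_W @ vec(E) reproduces
# Y(i,j) = C_p^T E W_p.
rng = np.random.default_rng(3)
M, N, B, k = 4, 4, 5, 2
C = rng.random((M, N, B))
E = rng.random((B, k))
W = rng.random((k, M * N))

rows = [np.kron(W[:, p], C[divmod(p, N)]) for p in range(M * N)]
Phi_W = np.vstack(rows)                    # shape (MN, kB)
e = E.reshape(-1, order='F')               # vec(E), columns stacked
y = Phi_W @ e
y_direct = np.array([C[divmod(p, N)] @ E @ W[:, p] for p in range(M * N)])
```

The identity used here is the standard $\vect{c}^{\top}\mat{E}\vect{w} = (\vect{w}^{\top}\otimes\vect{c}^{\top})\,\text{vec}(\mat{E})$ with column-major vectorization.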
The objective of~\eqref{eq:fusion1} and~\eqref{eq:fusion2} is to reconstruct $\tensor{X}$ from the measurements $\mat{Y}$ and $\tensor{Z}$ captured via CASSI and the RGB camera, respectively. Using this new observation model, we can efficiently reconstruct the HSI with a non-iterative fusion method.
\subsection{Proposed Fusion Model}
\label{sec:fusion}
According to the observation model \eqref{eq:fusion1},~\eqref{eq:fusion2}, we estimate $\mat{W}$ and $\mat{E}$ separately and then reconstruct the HSI. As illustrated in Fig. \ref{fig:fuse}, we optimize the coefficients $\mat{W}$ from the RGB measurement $\tensor{Z}$, then obtain the orthogonal spectral basis $\mat{E}$ from the CASSI measurement $\mat{Y}$, and finally fuse $\mat{E}$ and $\mat{W}$ to reconstruct the HSI. The proposed fusion model can be formulated as
\begin{eqnarray}
\mat{W} &=& \underset{\mat{W}}{\arg\min} \Vert \mat{Z}- (\mat{A}^{\top}\mat{E}) \mat{W} \Vert_F^2,\label{eq:fusion11} \\
\vect{e} &=& \underset{\vect{e}}{\arg\min} \Vert \vect{y} - \mat{\Phi}^{\mat{W}} \vect{e} \Vert_2^2. \label{eq:fusion22}
\end{eqnarray}
From \eqref{eq:fusion11}, we can efficiently obtain $\mat{W}$ via the singular value decomposition (SVD)~\cite{he2018non}. Subsequently, we adopt \eqref{eq:phi} to compose $\mat{\Phi}^{\mat{W}}\in \mathbb{R}^{MN \times kB}$. Since $MN \gg kB$, $\mat{\Phi}^{\mat{W}}$ can be regarded as a full-column-rank matrix. Therefore, the closed-form solution for $\vect{e}$ is
\begin{equation}
\label{eq:opt22}
\vect{e} = \left[(\mat{\Phi}^{\mat{W}})^{\top}(\mat{\Phi}^{\mat{W}})\right]^{-1}(\mat{\Phi}^{\mat{W}})^{\top} \vect{y}.
\end{equation}
Then, we reshape $\vect{e}$ into its matrix form $\mat{E}$ and reconstruct the HSI via $\mat{X} = \mat{E} \mat{W}$. The final 3D HSI is obtained via the folding operator $\tensor{X} = \text{fold}_3(\mat{X})$. \par
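The whole non-iterative pipeline, coefficients from the SVD of the RGB measurement followed by the closed-form basis estimate of \eqref{eq:opt22}, can be sketched end to end on toy synthetic data. Note that the SVD recovers $\mat{W}$ only up to an invertible $k\times k$ transform, which is absorbed by the least-squares estimate of $\mat{E}$, so the product $\mat{E}\mat{W}$ still recovers $\mat{X}$ exactly under the rank-$k$ assumption:

```python
import numpy as np

# End-to-end sketch of the non-iterative fusion (toy synthetic data): W from
# the SVD of the RGB measurement, E from least squares on the CASSI one.
rng = np.random.default_rng(4)
M, N, B, k = 8, 8, 16, 2                  # ensures MN > kB (64 > 32)
E_true = np.linalg.qr(rng.random((B, k)))[0]
W_true = rng.random((k, M * N))
Xmat = E_true @ W_true                    # rank-k ground truth, (B, MN)
A = rng.random((B, 3))                    # stand-in RGB spectral response
C = rng.random((M, N, B))                 # stand-in CASSI mask

Zmat = A.T @ Xmat                         # RGB measurement, (3, MN)
y = np.array([C[divmod(p, N)] @ Xmat[:, p] for p in range(M * N)])  # coded image

# Step 1: coefficients from the SVD of Z (top-k right singular vectors).
U, s, Vt = np.linalg.svd(Zmat, full_matrices=False)
W = s[:k, None] * Vt[:k]
# Step 2: basis from the closed-form least squares of Eq. (eq:opt22).
Phi_W = np.vstack([np.kron(W[:, p], C[divmod(p, N)]) for p in range(M * N)])
e, *_ = np.linalg.lstsq(Phi_W, y, rcond=None)
E = e.reshape(B, k, order='F')
X_rec = E @ W                             # exact up to numerical error
```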
Compared to previous works~\cite{Yuan_PAMI_2019,Lizhi_2017PAMI,Zhang_2019_ICCV_Ten,CASSI_RGB}, which attempt to reconstruct the full HSI directly, our proposed observation model \eqref{eq:fusion1}, \eqref{eq:fusion2} decomposes the original $\tensor{X}$ into two very low-dimensional components and reconstructs these components instead of the high-dimensional $\tensor{X}$. The spectral low-rank property is embedded into our observation model, which significantly alleviates the ill-posedness of HSI reconstruction. This advantage ensures the success of non-iterative optimization without regularization, saving substantial computation time. Furthermore, unlike previous works based on~\eqref{eq:CASSI_RGB_pre}, the spectral sensing matrix of the RGB detector is absorbed into the estimation of the orthogonal spectral basis, so we do not need to know it in advance.
\subsection{Analysis and Improvements}
\label{Analysis1}
\noindent
\textbf{Fusion model choice:}
From the proposed observation model \eqref{eq:fusion1},~\eqref{eq:fusion2}, we can formulate a generalized fusion model as follows
\begin{align}
\label{eq:general}
\{\mat{E},\mat{W}\}
\notag
= \underset{\mat{E},\mat{W}}{\arg\min} &\Vert \mat{Z}- (\mat{A}^{\top}\mat{E}) \mat{W} \Vert_F^2 + \Vert \vect{y} - \mat{\Phi}^{\mat{W}} \vect{e} \Vert_F^2 \\
&+ \lambda_1 \Vert \mat{W} \Vert_{reg1} + \lambda_2 \Vert \mat{E} \Vert_{reg2}
\end{align}
to jointly update $\mat{E}$ and $\mat{W}$. Here, $\Vert \mat{W} \Vert_{reg1}$ and $\Vert \mat{E} \Vert_{reg2}$ denote regularizers on $\mat{W}$ and $\mat{E}$, respectively, and $\lambda_1$ and $\lambda_2$ are parameters that balance the contribution of each regularizer. Typically, TV~\cite{dualCam2015,CASSI_RGB}, non-local sparse representation~\cite{Lizhi_2017PAMI}, and non-local low-rank matrix/tensor factorization~\cite{Yuan_PAMI_2019,he2018non} can be utilized to regularize $\mat{W}$, while spectral smoothness can be used to constrain $\mat{E}$. However, we adopt the proposed fusion model \eqref{eq:fusion11}, \eqref{eq:fusion22} rather than \eqref{eq:general} to reconstruct the HSI for the following reasons. $i$) Our model does not require additional regularizers on $\mat{W}$ and $\mat{E}$, which significantly reduces the computation time; meanwhile, we do not need to know the spectral sensing matrix $\mat{A}$ in advance, which increases the applicability in different settings. $ii$) As reported in the experimental section, our fusion model achieves state-of-the-art reconstruction results compared to existing non-training based methods. $iii$) A conditional theoretical guarantee can be established for our fusion model.
\begin{proposition}{(conditioned exact reconstruction)}
\label{pr:cond}
Suppose the following conditions hold: 1) the spectral sensing matrix of the RGB camera is of full column rank; 2) the composed $\mat{\Phi}^{\mat{W}}$ is of full column rank; and 3) $k\leqslant 3$. Then $\tensor{X}$ is exactly reconstructed from the measurements $\mat{Y}$ and $\tensor{Z}$ via $\text{fold}_3(\mat{E}\mat{W})$, with $\mat{W}$ obtained from the SVD and $\mat{E}$ from \eqref{eq:opt22}.
\end{proposition}
The proof is provided in the Appendix. In Proposition~\ref{pr:cond}, condition 1) relates to the design of the RGB camera. As reported in \cite{qu2018unsupervised}, the spectral sensing matrix of the Nikon D700 is of full column rank; that is, condition 1) is easy to satisfy. Furthermore, since $MN \gg kB$, $\mat{\Phi}^{\mat{W}}$ can be regarded as a full-column-rank matrix for a given pattern $\mat{\Phi}$. The only problematic condition is 3), $k\leqslant 3$, which does not hold for most real HSIs. In the next subsection, we propose an approach that helps meet this condition and thus improves the fusion algorithm.
\begin{figure}[htbp!]
\centering
\includegraphics[width=1 \linewidth]{SVD_values.png}
\caption{(a) A 3D patch extracted from the Toy dataset; (b) singular values (in logarithm) of reshaped patches vs. the global image. We select 100 patches and average their singular values in logarithm.}
\label{fig:SVD}
\end{figure}
\noindent
\textbf{Patch based fusion algorithm:}
As analyzed above, the condition $k\leqslant 3$ rarely holds: for a typical HSI scene, the rank is usually larger than $3$~\cite{BioucasTGRS2008,he2018non}. From another perspective, however, HSIs usually exhibit piece-wise smoothness~\cite{Yuan_PAMI_2019,YingFu2016}; that is, pixel signatures within a local spatial patch have a higher probability of being similar. In Fig. \ref{fig:SVD}, we compare the singular values (in logarithm) of extracted patch images with those of the global Toy image. We choose 100 patch images of size $100 \times 100 \times 31$ and plot the average singular values. The average singular values of the patch images drop much faster than those of the global image, indicating that local patch analysis can significantly strengthen the spectral low-rank property~\cite{he2015hyperspectral}. Exploiting this finding, we propose to segment the HSI into overlapping patches and reconstruct each patch separately (and in parallel); the reconstructed patches are then composed into the final reconstructed HSI. The proposed patch based fusion algorithm is presented in Algorithm~\ref{alg:pfusion}. Note that we must ensure $mn > kB$, where $m, n$ are the spatial patch sizes, so that $\mat{\Phi}^{\mat{W}}$ satisfies the full-column-rank condition. \par
Patch based fusion enhances the spectral low-rank property and improves the HSI reconstruction performance, as shown in Table~\ref{tab:patchsize}. However, the condition $k\leqslant 3$ is still not always strictly satisfied, as shown in Fig. \ref{fig:SVD}, and patch based fusion also increases the computational time. This means the proposed patch based fusion algorithm still has room for improvement. Another strategy to relax this condition is to exploit priors via regularization; however, as mentioned above, regularization would significantly increase the computational burden. We therefore leave this issue, \textit{i.e.,} the condition $k\leqslant 3$, for future work.
\begin{algorithm}[tp]
\caption{Patch based fusion}
\label{alg:pfusion}
\begin{algorithmic}[1]
\REQUIRE $\mat{Y}$ measured via CASSI, $\tensor{Z}$ measured via RGB camera, CASSI sensing matrix $\mat{\Phi}$, rank $k$.
\STATE Initialization: segment $\mat{Y}$, $\tensor{Z}$ and $\mat{\Phi}$ into aligned overlapping patches of size $m \times n$, $m \times n \times 3$, and $m \times n \times B$ respectively.
\FOR{each aligned patch}
\STATE {A). Update the coefficients via SVD.}
\STATE {B). Update the orthogonal spectral basis via \eqref{eq:opt22}.}
\STATE {C). Reconstruct each patch of size $m \times n \times B$.}
\ENDFOR
\RETURN Aggregate all the patches into the final $\tensor{X}$;
\end{algorithmic}
\end{algorithm}
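The segment/aggregate logic of Algorithm~\ref{alg:pfusion} can be sketched as follows. This is a toy illustration in which the per-patch fusion step is replaced by an identity stand-in; overlapping estimates are averaged when composing the final cube:

```python
import numpy as np

def patch_corners(size, patch, step):
    """Top-left corners of overlapping patches covering one spatial axis."""
    corners = list(range(0, size - patch + 1, step))
    if corners[-1] != size - patch:       # ensure the border is covered
        corners.append(size - patch)
    return corners

# Toy sketch of segment -> per-patch reconstruction -> aggregation.
rng = np.random.default_rng(5)
M, N, B, m, n, step = 12, 12, 4, 6, 6, 4
X = rng.random((M, N, B))
acc = np.zeros_like(X)
cnt = np.zeros((M, N, 1))
for r in patch_corners(M, m, step):
    for c in patch_corners(N, n, step):
        rec = X[r:r+m, c:c+n]             # <- the per-patch fusion goes here
        acc[r:r+m, c:c+n] += rec
        cnt[r:r+m, c:c+n] += 1
X_agg = acc / cnt                         # average overlapping estimates
```

Averaging in the overlap regions also suppresses blocking artifacts at patch boundaries.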
\noindent
\textbf{Orthogonal spectral basis estimation improvement:} The optimization of $\mat{E}$ via \eqref{eq:opt22} only uses the CASSI measurement (through the mask $\tensor{C}$), ignoring the RGB sensing matrix $\mat{A}$. Similar to \eqref{eq:CASSI_pix}, the RGB measurement can also be formulated in a pixel-by-pixel version
\begin{align}
\label{eq:RGB_pix}
Z(i,j,k) &= (\mat{A}(:,k))^{\top} \mat{E}\mat{W}_{ij:} \\ \notag
&= \{\mat{W}_{ij:}^{\top} \otimes (\mat{A}(:,k))^{\top}\} \vect{e}.
\end{align}
Thus we can obtain $\mat{\Phi}_{RGB}^{\mat{W}}$ in the same way as \eqref{eq:phi}, and update $\vect{e}$ via
\begin{equation}
\label{eq:fusion22_ext}
\vect{e} = \underset{\vect{e}}{\arg\min} \left\| [\vect{y};\vect{z}] - [\mat{\Phi}^{\mat{W}};\mat{\Phi}_{RGB}^{\mat{W}}] \vect{e} \right\|_2^2
\end{equation}
instead of \eqref{eq:fusion22}. In Section \ref{sec:improvel}, we compare the results obtained by \eqref{eq:fusion22} and \eqref{eq:fusion22_ext}. As reported there, optimizing $\mat{E}$ via \eqref{eq:fusion22_ext} yields only a limited accuracy improvement while requiring almost $4\times$ more computation time. We therefore recommend using \eqref{eq:fusion22} to update the orthogonal spectral basis $\mat{E}$.
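The stacked update of \eqref{eq:fusion22_ext} can be sketched on toy data (the RGB response $\mat{A}$ is again a random stand-in); the stacked system has $4MN$ rows instead of $MN$, which is the source of the extra cost:

```python
import numpy as np

# Sketch of the joint basis update of Eq. (eq:fusion22_ext): stack the CASSI
# rows (MN of them) with RGB-derived rows (3*MN), roughly a 4x larger system.
rng = np.random.default_rng(6)
M, N, B, k = 4, 4, 6, 2
C = rng.random((M, N, B))
A = rng.random((B, 3))                    # stand-in RGB spectral response
E_true = rng.random((B, k))
W = rng.random((k, M * N))
Xmat = E_true @ W

y = np.array([C[divmod(p, N)] @ Xmat[:, p] for p in range(M * N)])
z = np.concatenate([A[:, ch] @ Xmat for ch in range(3)])   # channel-major

Phi_W = np.vstack([np.kron(W[:, p], C[divmod(p, N)]) for p in range(M * N)])
Phi_RGB = np.vstack([np.kron(W[:, p], A[:, ch])
                     for ch in range(3) for p in range(M * N)])
e, *_ = np.linalg.lstsq(np.vstack([Phi_W, Phi_RGB]),
                        np.concatenate([y, z]), rcond=None)
E = e.reshape(B, k, order='F')
```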
\begin{table}[ht]
\footnotesize
\centering
\vspace{-5px}
\caption{The size of the dataset used in the experiments.}
\begin{tabular}{c | c | c |c}
\hline
Image name & HSI & CASSI & RGB \\ \hline
CAVE & 512$\times$512$\times$31 & 512$\times$512 & 512$\times$512$\times$3 \\ \hline
ICVL & 512$\times$512$\times$31 & 512$\times$512 & 512$\times$512$\times$3 \\ \hline
RS & 400$\times$200$\times$128 & 400$\times$200 & 400$\times$200$\times$3 \\ \hline
Bird & 1021$\times$703$\times$24 & 1021$\times$703 & 1021$\times$703$\times$3 \\ \hline
\end{tabular}
\label{tab:datasize}
\vspace{-5px}
\end{table}
\begin{table*}[!htbp]
\centering
\caption{Quantitative evaluation of CAVE data experiments for different HSI reconstruction methods from CASSI and RGB measurements.}
\begin{tabular}{cccccccccccc|c}
\toprule
CAVE & method & balloons & beads & cd & toy & clay & cloth & egyptian & face & beers & food & Average \\
\midrule
\multirow{7}[2]{*}{M-PSNR}
& TV & 31.10 & 20.82 & 28.73 & 25.93 & 27.75 & 22.98 & 33.70 & 31.86 & 30.07 & 28.89 & 28.18 \\
& DeSCI & 32.14 & 21.76 & 29.04 & 28.70 & 28.15 & 25.30 & 36.10 & 33.49 & 31.28 & 29.78 & 29.57 \\
& TV-RGB & 36.75 & 26.67 & 31.68 & 29.19 & 36.42 & 25.45 & 37.49 & 35.82 & 35.13 & 34.27 & 32.89 \\
& DeSCI-RGB & 40.35 & 30.54 & 32.94 & 34.36 & 39.93 & 30.27 & 42.23 & 39.78 & 39.28 & 37.67 & 36.74 \\
& DLTR & 41.35 & 28.66 & 32.26 & 33.12 & 39.68 & 29.00 & 42.13 & 43.63 & 41.29 & 37.82 & 36.89 \\
& Fusion & 39.50 & 32.30 & 29.25 & 40.02 & 38.94 & 38.34 & 48.40 & 43.10 & 41.01 & 38.86 & 38.97 \\
& PFusion & \textbf{48.50} & \textbf{32.91} & \textbf{32.95} & \textbf{44.33} & \textbf{49.24} & \textbf{39.28} & \textbf{50.93} & \textbf{46.40} & \textbf{46.49} & \textbf{43.12} & \textbf{43.42} \\
\midrule
\multirow{7}[2]{*}{M-SSIM}
& TV & 0.952 & 0.659 & 0.922 & 0.880 & 0.845 & 0.660 & 0.948 & 0.956 & 0.946 & 0.852 & 0.862 \\
& DeSCI & 0.963 & 0.727 & 0.939 & 0.925 & 0.861 & 0.819 & 0.968 & 0.971 & 0.960 & 0.878 & 0.901 \\
& TV-RGB & 0.964 & 0.878 & 0.908 & 0.944 & 0.911 & 0.783 & 0.980 & 0.971 & 0.946 & 0.913 & 0.920 \\
& DeSCI-RGB & 0.991 & 0.942 & 0.962 & 0.981 & 0.954 & 0.923 & 0.994 & 0.990 & 0.991 & 0.956 & 0.968 \\
& DLTR & 0.985 & 0.903 & \textbf{0.975} & 0.971 & 0.973 & 0.900 & 0.981 & 0.992 & 0.992 & 0.977 & 0.965 \\
& Fusion & 0.985 & 0.933 & 0.916 & 0.968 & 0.934 & 0.975 & 0.983 & 0.981 & 0.995 & 0.952 & 0.962 \\
& PFusion & \textbf{0.995} & \textbf{0.944} & 0.971 & \textbf{0.989} & \textbf{0.994} & \textbf{0.977} & \textbf{0.996} & \textbf{0.994} & \textbf{0.997} & \textbf{0.980} & \textbf{0.984} \\
\midrule
\multirow{7}[2]{*}{MSA}
& TV & 10.21 & 25.35 & 13.01 & 12.49 & 17.15 & 16.30 & 14.21 & 10.29 & 4.80 & 16.12 & 13.99 \\
& DeSCI & 9.79 & 25.28 & 11.08 & 12.06 & 15.94 & 14.48 & 13.43 & 9.49 & 4.72 & 15.09 & 13.14 \\
& TV-RGB & 6.63 & 13.78 & 12.40 & 10.36 & 14.24 & 14.22 & 11.33 & 8.83 & 3.71 & 13.40 & 10.89 \\
& DeSCI-RGB & 4.64 & \textbf{9.96} & 7.44 & 7.23 & 9.40 & 10.16 & \textbf{8.23} & 6.60 & 2.60 & \textbf{9.57} & 7.58 \\
& DLTR & 6.24 & 13.79 & 8.03 & 8.67 & 15.38 & 9.23 & 13.27 & 7.10 & 1.27 & 6.43 & 8.94 \\
& Fusion & 9.91 & 17.65 & 28.94 & 15.90 & 29.15 & 5.84 & 23.38 & 15.58 & 2.19 & 23.99 & 17.25 \\
& PFusion & \textbf{4.54} & 14.59 & \textbf{7.26} & \textbf{6.81} & \textbf{7.99} & \textbf{5.07} & 9.25 & \textbf{6.37} & \textbf{1.21} & 10.77 & \textbf{7.39} \\
\bottomrule
\end{tabular}%
\label{tab:eva1}%
\end{table*}%
\begin{figure*}[!htp]
\centering
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_orig.png}\\(a) Original
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_CASSI.png}\\(b) CASSI
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_RGB.png}\\(c) RGB
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_TV.png}\\(d) TV
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_WNNM.png}\\(e) DeSCI
\end{minipage}\\
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_TVRGB.png}\\(f) TV-RGB
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_WNNMRGB.png}\\(g) DeSCI-RGB
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_DLTR.png}\\(h) DLTR
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_HURGB.png}\\(i) Fusion
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_patch_HURGB.png}\\(j) PFusion
\end{minipage}\\
\caption{Reconstructed results of different methods on CAVE-Toy image. The color images are composed of bands 31, 11, 6 for red, green and blue, respectively.}
\label{fig:CAVE_toy}
\end{figure*}
\section{Experimental Results}
\label{Experiments}
In this section, we conduct simulated and real-data experiments to demonstrate the advantages of the proposed methods for reconstruction from CASSI and RGB measurements, compared to other state-of-the-art methods. We employ the following methods for comparison:
\textit{CASSI reconstruction methods}, \textit{i.e.}
GAP-TV \cite{yuan2016generalized},
DeSCI \cite{Yuan_PAMI_2019}\footnote{\url{https://github.com/liuyang12/DeSCI}}
and \textit{CASSI \& RGB reconstruction methods}, \textit{i.e.}
TV-RGB \cite{CASSI_RGB},
DeSCI-RGB\footnote{We implement the algorithm by using~\eqref{eq:CASSI_RGB_pre} instead of \eqref{eq:CASSI_linear}.}
and DLTR \cite{Zhang_2019_ICCV_Ten}\footnote{We thank Dr. S. Zhang for providing the experimental results.}.
The proposed global reconstruction algorithm is denoted `Fusion', while the proposed patch based fusion algorithm is denoted `PFusion'.
In the simulated experiments, the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and mean spectral angle (MSA)~\cite{Zhang_2019_ICCV,he2018non} between the reconstructed and original images are used to evaluate the results. Since HSIs have multiple spectral bands, we calculate the PSNR and SSIM of each band and then average them~\cite{Zhang_2019_ICCV,he2018non}; the averages are denoted `M-PSNR' and `M-SSIM', respectively. All experiments (except DLTR) are programmed in Matlab R2017b on a laptop with a Core i7-8750H CPU and 16GB memory\footnote{DLTR was run on a laptop with a Core i7-6700 CPU and 64GB memory}. TV-RGB and DeSCI-RGB run out of memory on this laptop when processing the global HSIs; therefore, as with `PFusion', we segment the HSIs into overlapping patches and apply TV-RGB and DeSCI-RGB to the small patches separately.
\subsection{Simulated Experiments On CAVE}
The first test dataset is CAVE\footnote{\url{http://www1.cs.columbia.edu/CAVE/databases/}}, which contains $32$ HSI scenes; we select the first $10$ images for the experiments. We adopt the CASSI sensing matrix from \cite{Yuan_PAMI_2019} to simulate the CASSI measurements, and the spectral sensing matrix of the Nikon D700 camera \cite{qu2018unsupervised} to simulate the RGB measurements. The sizes of the original HSIs, CASSI measurements and RGB measurements are presented in Table \ref{tab:datasize}. For our proposed methods, we choose $k=3$ and $m=n=100$.
\begin{figure}[!htp]
\centering
\begin{minipage}[t]{0.24\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_sig_cloth.png}\\(a) beads
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}\centering
\includegraphics[width=\textwidth]{CAVE_sig_beer.png}\\(b) food
\end{minipage}\\
\caption{The absolute difference errors (smaller is better) between the mean spectral signatures of the original images and those obtained by different reconstruction methods on CAVE images.}
\label{fig:CAVE_sig}
\vspace{-5pt}
\end{figure}
\noindent
\textbf{Quantitative comparison.}
We present the M-PSNR, M-SSIM and MSA values of the images reconstructed by different methods in Table \ref{tab:eva1}. Our proposed PFusion method achieves the best average values over the ten HSI scenes on all evaluation metrics, confirming the advantage of our proposed method. Compared to the CASSI-only reconstruction methods TV and DeSCI, reconstruction from CASSI and RGB measurements significantly improves the quality of the reconstructed image; this benefit comes from the additional RGB measurements. DeSCI-RGB and DLTR achieve satisfactory results, indicating the efficiency of non-local low-rank processing. Our proposed Fusion achieves higher M-PSNR values than DeSCI-RGB, but also higher MSA values. This is mainly because Fusion takes full advantage of the RGB measurements to enhance the spatial quality, but with rank $k=3$ it cannot capture the full global spectral structure, so its spectral distortion is severe. Our proposed PFusion, which processes overlapping patches instead of the global image, significantly improves the reconstructed spectral quality, with smaller MSA values. Furthermore, the M-PSNR and M-SSIM values obtained by PFusion are further improved compared to those of Fusion.
\noindent
\textbf{Visual comparison.}
To further illustrate the performance of the different methods, we present color images (composed of bands 31, 11 and 6~\cite{he2018non}) of the different reconstruction results on the Toy image in Fig.~\ref{fig:CAVE_toy}. It can be observed that the RGB measurements differ from the original images composed of bands 31, 11 and 6. The TV based method produces blurred results; DeSCI produces a clearer image, and DeSCI-RGB and DLTR further improve the results. However, their spectral distortion is obvious, as undesired color artifacts appear in the results of DeSCI-RGB and DLTR. Our proposed Fusion and PFusion produce the results most similar to the original image presented in Fig.~\ref{fig:CAVE_toy}(a). \par
To further examine the reconstructed spectra, we also present the absolute difference errors between the mean spectral signatures of the original images and those obtained by different reconstruction methods. We choose two images, \textit{i.e.,} Beads and Food, for illustration. From Fig.~\ref{fig:CAVE_sig}, we observe that the Fusion method achieves results similar to DeSCI-RGB and DLTR. Our proposed PFusion significantly improves the spectral quality compared to Fusion, again indicating the advantage of processing overlapping patches instead of the global image.
\subsection{Simulated Experiments On ICVL}
\begin{table}[!htbp]
\centering
\caption{Quantitative evaluation of ICVL data experiments for different HSI reconstruction methods from CASSI and RGB measurements.}
\setlength{\tabcolsep}{1.5mm}{
\begin{tabular}{ccccccc|c}
\toprule
ICVL & method & 4cam & BGU & IDS & Ist & prk & Average \\
\midrule
\multirow{7}[2]{*}{M-PSNR}
& TV & 31.11 & 28.93 & 28.39 & 34.85 & 32.42 & 31.14 \\
& DeSCI & 33.17 & 31.38 & 30.46 & 35.44 & 34.24 & 32.94 \\
& TV-RGB & 35.20 & 33.80 & 32.67 & 38.34 & 37.67 & 35.54 \\
& DeSCI-RGB & 40.39 & 40.30 & 37.91 & 39.93 & 43.89 & 40.48 \\
& DLTR &41.28 & 39.12 & 36.33 & 39.98 & 39.01 & 38.04 \\
& Fusion & 43.65 & 44.91 & 43.19 & 37.13 & 46.95 & 43.17 \\
& PFusion & \textbf{48.76} & \textbf{48.94} & \textbf{45.07} & \textbf{41.91} & \textbf{49.87} & \textbf{46.91} \\
\midrule
\multirow{7}[2]{*}{M-SSIM}
& TV & 0.912 & 0.858 & 0.912 & 0.967 & 0.907 & 0.911 \\
& DeSCI & 0.953 & 0.927 & 0.951 & 0.974 & 0.940 & 0.949 \\
& TV-RGB & 0.946 & 0.931 & 0.928 & 0.982 & 0.961 & 0.950 \\
& DeSCI-RGB & 0.992 & 0.990 & 0.992 & 0.990 & 0.994 & 0.991 \\
& DLTR & 0.986 & 0.979 & 0.983 & 0.991 & 0.969 & 0.975 \\
& Fusion & 0.993 & 0.992 & 0.993 & 0.985 & 0.993 & 0.991 \\
& PFusion & \textbf{0.995} & \textbf{0.996} & \textbf{0.995} & \textbf{0.992} & \textbf{0.996} & \textbf{0.995} \\
\midrule
\multirow{7}[2]{*}{MSA}
& TV & 4.36 & 6.50 & 4.34 & 5.08 & 5.84 & 5.23 \\
& DeSCI & 3.87 & 5.57 & 3.94 & 4.59 & 4.76 & 4.55 \\
& TV-RGB & 3.12 & 4.41 & 2.92 & 2.92 & 3.87 & 3.45 \\
& DeSCI-RGB & 1.93 & 2.28 & 1.81 & 1.97 & 1.94 & 1.99 \\
& DLTR &1.23 &2.27 & 0.96 & 1.25 & 2.97 & 1.74 \\
& Fusion & 1.91 & 2.28 & 0.94 & 5.20 & 2.49 & 2.56 \\
& PFusion & \textbf{0.82} & \textbf{1.16} & \textbf{0.64} & \textbf{1.94} & \textbf{1.21} & \textbf{1.15} \\
\bottomrule
\end{tabular}}%
\label{tab:eva2}%
\end{table}%
\begin{figure*}[!htp]
\centering
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_orig.png}\\(a) Original
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_CASSI.png}\\(b) CASSI
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_RGB.png}\\(c) RGB
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_TV.png}\\(d) TV
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_WNNM.png}\\(e) DeSCI
\end{minipage}\\
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_TVRGB.png}\\(f) TV-RGB
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_WNNMRGB.png}\\(g) DeSCI-RGB
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_DLTR.png}\\(h) DLTR
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_HURGB.png}\\(i) Fusion
\end{minipage}
\begin{minipage}[t]{0.18\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_patch_HURGB.png}\\(j) PFusion
\end{minipage}\\
\caption{Reconstructed results of different methods on ICVL-BGU image. The color images are composed of bands 31, 11, 6 for red, green, and blue, respectively.}
\label{fig:ICVL_BGU}
\end{figure*}
\begin{figure}[!htp]
\centering
\begin{minipage}[t]{0.24\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_sig_BGU.png}\\(a) BGU
\end{minipage}
\begin{minipage}[t]{0.24\textwidth}\centering
\includegraphics[width=\textwidth]{ICVL_sig_IDS.png}\\(b) IDS
\end{minipage}\\
\caption{The absolute difference errors between the mean of the whole original signatures and those obtained by different reconstruction methods on ICVL images.}
\label{fig:ICVL_sig}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Quantitative evaluation of EO-1 data experiments for different HSI reconstruction methods from CASSI and RGB measurements.}
\setlength{\tabcolsep}{1.1mm}{
\begin{tabular}{cccccccc}
\toprule
Method & TV & DeSCI & TV-RGB & DeSCI-RGB & DLTR & Fusion & PFusion \\
\midrule
M-PSNR & 26.64 & 28.96 & 29.91 & 37.01 & 35.56 & 41.8 & \textbf{43.34} \\
M-SSIM & 0.839 & 0.948 & 0.931 & 0.977 & 0.963 & 0.983 & \textbf{0.984} \\
MSA & 4.929 & 4.253 & 3.864 & 1.6 & 1.78 & 1.03 & \textbf{0.85} \\
\bottomrule
\end{tabular}%
\label{tab:eva3}
}%
\end{table}%
\begin{figure*}[!htp]
\scriptsize
\centering
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_orig.png}\\(a) Original
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_CASSI.png}\\(b) CASSI
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_RGB.png}\\(c) RGB
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_TV.png}\\(d) TV
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_WNNM.png}\\(e) DeSCI
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_TVRGB.png}\\(f) TV-RGB
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_WNNMRGB.png}\\(g) DeSCI-RGB
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_DLTR.png}\\(h) DLTR
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_HURGB.png}\\(i) Fusion
\end{minipage}
\begin{minipage}[t]{0.09\textwidth}\centering
\includegraphics[width=\textwidth]{RS_patch_HURGB.png}\\(j) PFusion
\end{minipage}\\
\caption{Reconstructed results of different methods on EO-1 image. The color images are composed of bands 100, 60, 10 for red, green, and blue, respectively.}
\label{fig:RS}
\end{figure*}
\begin{figure*}[!htp]
\centering
\begin{minipage}[t]{0.30\textwidth}\centering
\includegraphics[width=\textwidth]{RS_sig1.png}\\(a) point 1
\end{minipage}
\begin{minipage}[t]{0.30\textwidth}\centering
\includegraphics[width=\textwidth]{RS_sig2.png}\\(b) point 2
\end{minipage}
\begin{minipage}[t]{0.30\textwidth}\centering
\includegraphics[width=\textwidth]{RS_sig3.png}\\(c) point 3
\end{minipage}\\
\caption{The absolute difference errors between the original signature and the one obtained by different reconstruction methods on EO-1 image.}
\label{fig:RS_sig}
\end{figure*}
\begin{figure*}[htp]
\footnotesize
\centering
\begin{minipage}[t]{0.16\textwidth}\centering
\includegraphics[width=\textwidth]{bird_CASSI.png}\\(a) CASSI
\end{minipage}
\begin{minipage}[t]{0.16\textwidth}\centering
\includegraphics[width=\textwidth]{bird_RGB.png}\\(b) RGB
\end{minipage}
\begin{minipage}[t]{0.16\textwidth}\centering
\includegraphics[width=\textwidth]{bird_TV.png}\\(c) TV
\end{minipage}
\begin{minipage}[t]{0.16\textwidth}\centering
\includegraphics[width=\textwidth]{bird_WNNM.png}\\(d) DeSCI
\end{minipage}
\begin{minipage}[t]{0.16\textwidth}\centering
\includegraphics[width=\textwidth]{bird_HURGB.png}\\(e) Fusion
\end{minipage}
\begin{minipage}[t]{0.16\textwidth}\centering
\includegraphics[width=\textwidth]{bird_patch_HURGB.png}\\(f) PFusion
\end{minipage}\\
\caption{Reconstructed results of different methods on real Bird image. The color images are composed of bands 20, 10, 6 for red, green, and blue, respectively.}
\label{fig:bird}
\end{figure*}
\begin{figure*}[htp]
\centering
\begin{minipage}[t]{0.30\textwidth}\centering
\includegraphics[width=\textwidth]{bird_sig1.png}\\(a) (100,100)
\end{minipage}
\begin{minipage}[t]{0.30\textwidth}\centering
\includegraphics[width=\textwidth]{bird_sig2.png}\\(b) (350,350)
\end{minipage}
\begin{minipage}[t]{0.30\textwidth}\centering
\includegraphics[width=\textwidth]{bird_sig3.png}\\(c) (700,700)
\end{minipage}\\
\caption{The absolute difference errors between the original signature and the one obtained by different reconstruction methods on real Bird image.}
\label{fig:bird_sig}
\end{figure*}
The second test dataset is ICVL\footnote{\url{http://icvl.cs.bgu.ac.il/hyperspectral/}}. We simulate the CASSI and RGB measurements in the same way as for the CAVE dataset. We select $5$ images from the ICVL dataset for the experiments. The original ICVL images are of size $1392 \times 1304 \times 31$, and we resize them to $512 \times 512 \times 31$. For the proposed methods, we still choose $k=3$ and $m=n=100$.
\noindent
\textbf{Quantitative comparison.}
Table \ref{tab:eva2} presents the quantitative comparison of the reconstructed images obtained by the different methods on the $5$ ICVL images. Again, the methods that reconstruct from only the CASSI measurements, i.e., TV and DeSCI, achieve the worst M-PSNR, M-SSIM, and MSA values. Combining the CASSI and RGB measurements significantly improves the reconstruction, as shown by TV-RGB. DeSCI-RGB and DLTR further improve the results: DeSCI-RGB achieves higher M-PSNR and M-SSIM values, while DLTR obtains better (lower) MSA values. Our proposed PFusion significantly improves the spectral quality compared to Fusion, and again achieves the best quantitative values across the different ICVL images, further indicating the advantage of our proposed strategy for the joint reconstruction from CASSI and RGB measurements.
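For reference, the reported metrics can be sketched in a few lines of numpy. The function names and the per-band averaging convention below are our own assumptions, not the authors' exact implementation:

```python
import numpy as np

def m_psnr(ref, rec, peak=1.0):
    """Mean PSNR: average the per-band PSNR over the spectral dimension.
    ref, rec: arrays of shape (H, W, B) scaled to [0, peak]."""
    mse = np.mean((ref - rec) ** 2, axis=(0, 1))  # per-band MSE
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))

def msa(ref, rec, eps=1e-12):
    """Mean spectral angle (in degrees) between per-pixel spectra."""
    a = ref.reshape(-1, ref.shape[-1])
    b = rec.reshape(-1, rec.shape[-1])
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return float(np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))))
```

Lower MSA means better spectral fidelity, while higher M-PSNR and M-SSIM mean better spatial fidelity.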
\noindent
\textbf{Visual comparison.}
Fig.~\ref{fig:ICVL_BGU} presents the color images (composed of bands 31, 11, and 6) reconstructed by the different methods on the ICVL BGU image. It can be clearly observed that the RGB measurements provide the spatial structure, but entirely different spectral information. The reconstruction of TV from the CASSI measurements is blurred. DeSCI and TV-RGB improve the results, but remain blurred. DeSCI-RGB and DLTR improve the results further; however, as shown in the enlarged rectangle of Fig.~\ref{fig:ICVL_BGU}(h), DLTR loses the color of the board. Overall, the proposed Fusion and PFusion produce the best results, preserving much more of the detail of the original image. Fig.~\ref{fig:ICVL_sig} illustrates the absolute difference errors between the mean of the original signatures and those obtained by the different reconstruction methods on three different ICVL images. Compared to DeSCI-RGB and DLTR, the advantage of the proposed Fusion in preserving the spectral information is limited. However, the proposed PFusion produces the smallest absolute difference errors, indicating the best preservation of the HSI spectrum.
\subsection{Simulated Experiments On Remote Sensing}
We further conduct remote-sensing compressive reconstruction experiments to verify the effectiveness of the proposed methods. The HSI is from the Earth Observing-1 (EO-1) Hyperion sensor\footnote{\url{https://archive.usgs.gov/archive/sites/eo1.usgs.gov/index.html}}.
Similar to \cite{Yuan_PAMI_2019}, we adopt CASSI to compress the HSI, and the spectral sensing matrix to obtain the RGB measurements. The sizes of the HSI, the CASSI measurements, and the RGB measurements are presented in Table \ref{tab:datasize}.
\noindent
\textbf{Quantitative comparison.}
Table \ref{tab:eva3} presents the quantitative comparison of the reconstructed images obtained by the different methods on the EO-1 image. From the table, it can be observed that the proposed PFusion obtains the best accuracy in all three metrics. TV-RGB, DeSCI-RGB, and DLTR achieve better results than TV and DeSCI, indicating that reconstruction from the combined CASSI and RGB measurements is better than from the CASSI measurements alone.
\noindent
\textbf{Visual comparison.}
Fig.~\ref{fig:RS} presents the color images (composed of bands 100, 60, and 10) reconstructed by the different methods on the EO-1 image. We can observe that our proposed Fusion and PFusion outperform the others in preserving the details of the reconstructed image; the other methods produce blurred results. Fig.~\ref{fig:RS_sig} shows the absolute difference errors between the original signature and those obtained by the different reconstruction methods on the EO-1 image. Our proposed PFusion again outperforms the other methods and achieves the smallest absolute difference errors. That is, our proposed Fusion and PFusion preserve the spatial details, while the overlapping-patch processing of PFusion significantly improves the preservation of the spectral information.
\subsection{Real Experiments On Bird}
In this section, we apply the proposed method to HSI reconstruction on real snapshot hyperspectral compressive imaging data. The real Bird data is from the CASSI system~\cite{CASSI2008}, which has been widely used for real-data experiments~\cite{Yuan_PAMI_2019,side_2015}. The sizes of the reconstructed Bird image and of the CASSI and RGB measurements are presented in Table~\ref{tab:datasize}. Since the spectral sensing matrix from the HSI to the RGB measurements is unknown, we cannot implement TV-RGB, DeSCI-RGB, and DLTR for comparison, so we compare our proposed methods only to TV and DeSCI\footnote{The results of TV and DeSCI are provided by Dr. Y. Liu at \url{https://github.com/liuyang12/DeSCI}.}. \par
\noindent
\textbf{Visual comparison.}
Fig.~\ref{fig:bird} illustrates the color images (composed of bands 20, 10, and 6) reconstructed by the different methods on the real Bird data. The measured RGB image contains abundant spatial information, but the images reconstructed by TV and DeSCI lose the spatial details. By contrast, our proposed Fusion and PFusion are capable of preserving the spatial information from the RGB measurements. Fig.~\ref{fig:bird_sig} presents the signatures of three points obtained by the different methods. The signatures reconstructed by our proposed PFusion are closest to the references, indicating that PFusion preserves the spectral information and again demonstrating the advantage of our proposed method.
\section{Ablation Study and Discussion}
\subsection{Spectral Sensing Matrix Analysis}
We first analyze the proposed methods with different spectral sensing matrices. The spectral sensing matrix of the RGB detector is adopted to measure the RGB image from the HSI, and the RGB measurement is adopted to provide the coefficients, as illustrated in Fig.~\ref{fig:fuse}. Fig.~\ref{fig:sensor} presents different designs of the spectral sensing matrix: (a) the response provided by a Nikon D700 camera, (b) an average over groups of bands, and (c) a random selection of three single bands. Table~\ref{tab:sensor} shows the average quantitative evaluation results of the proposed methods with these sensing matrices on 10 CAVE HSIs. The proposed Fusion and PFusion produce similar results with the Nikon D700 matrix and the average design, whereas the random single-band selection produces the worst results. This comparison guides the efficient design of the RGB measurements: the RGB measurement should cover the information of all bands, in which case the coefficients estimated from the measured RGB image provide more useful information for the subsequent HSI reconstruction.
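The two synthetic designs in Fig.~\ref{fig:sensor}(b)--(c) can be sketched as follows. This is a hypothetical numpy construction; the contiguous band grouping and row normalization are our assumptions, not the exact matrices used in the paper:

```python
import numpy as np

def average_design(bands=31):
    """(b) Average design: each of the R/G/B channels integrates a
    contiguous third of the spectrum with uniform weights (rows sum to 1)."""
    A = np.zeros((3, bands))
    edges = np.linspace(0, bands, 4).astype(int)
    for c in range(3):
        A[c, edges[c]:edges[c + 1]] = 1.0 / (edges[c + 1] - edges[c])
    return A

def single_band_design(bands=31, seed=0):
    """(c) Random single-band selection: each channel samples one band."""
    rng = np.random.default_rng(seed)
    A = np.zeros((3, bands))
    A[np.arange(3), rng.choice(bands, size=3, replace=False)] = 1.0
    return A
```

The average design mixes every band into the measurement, while the single-band design discards most of the spectrum, which is consistent with the gap observed in Table~\ref{tab:sensor}.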
\begin{figure}[!htp]
\footnotesize
\centering
\begin{minipage}[t]{0.45\textwidth}\centering
\includegraphics[width=\textwidth]{sen1.png}(a)
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}\centering
\includegraphics[width=\textwidth]{sen2.png}(b)
\end{minipage}
\begin{minipage}[t]{0.45\textwidth}\centering
\includegraphics[width=\textwidth]{sen3.png}(c)
\end{minipage}\\
\caption{Different spectral sensing matrix to measure RGB from HSI. (a) Provided by Nikon D700 camera, (b) average based design, and (c) single band selection.}
\label{fig:sensor}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Quantitative evaluation of CAVE data experiments for proposed methods with different sensor matrices.}
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}[2]{*}{sensing matrix} & \multicolumn{3}{c|}{Fusion} & \multicolumn{3}{c}{PFusion} \\
& (a) & (b) & (c) & (a) & (b) & (c) \\
\midrule
MPSNR & \textbf{38.97} & 38.47 & 35.32 & \textbf{43.42} & 43.39 & 37.76 \\
MSSIM & \textbf{0.962} & 0.955 & 0.906 & \textbf{0.984} & \textbf{0.984} & 0.955 \\
MSA & 17.25 & \textbf{16.06} & 16.86 & 7.39 & \textbf{6.16} & 7.61 \\
\bottomrule
\end{tabular}%
\label{tab:sensor}%
\end{table}%
\subsection{Rank $k$ Analysis}
\begin{figure}[!htp]
\centering
\begin{minipage}[t]{0.15\textwidth}\centering
\includegraphics[width=\textwidth]{dimen_PSNR.png}\\(a) MPSNR
\end{minipage}
\begin{minipage}[t]{0.15\textwidth}\centering
\includegraphics[width=\textwidth]{dimen_SSIM.png}\\(b) MSSIM
\end{minipage}
\begin{minipage}[t]{0.15\textwidth}\centering
\includegraphics[width=\textwidth]{dimen_MSA.png}\\(c) MSA
\end{minipage}\\
\caption{Changes of quantitative evaluation results obtained by the proposed methods with different rank $k$. We test on 10 CAVE HSIs.}
\label{fig:dimen}
\end{figure}
Subsequently, we analyze the influence of the rank $k$. We assume that HSIs lie in an approximately low-dimensional spectral subspace; therefore, a larger rank $k$ ensures a more precise low-rank approximation of the original HSI, as presented in~\eqref{eq:low-rank}. However, in our fusion models~\eqref{eq:fusion1} and~\eqref{eq:fusion11}, the rank $k$ is bounded by the size of the spectral sensing matrix $\mat{A}$. In the proposed methods, we set $k=3$ (3 is the column size of $\mat{A}$) to exploit as much information of the HSI as possible. In this section, we vary the size of the spectral sensing matrix $\mat{A}$, changing the measured image from a 1-band panchromatic image to a 5-band multispectral image, so that the rank $k$ ranges from 1 to 5. Fig.~\ref{fig:dimen} presents the quantitative evaluation results obtained by the proposed methods for different ranks $k$. As illustrated, the quantitative results improve significantly as $k$ increases. This is reasonable, since more measurements allow higher reconstruction accuracy. From the figure, we observe that complementary 3-band RGB measurements produce much better HSI reconstructions than complementary panchromatic measurements. By contrast, the improvement from 3-band RGB to multispectral measurements with more than three bands is limited; moreover, multispectral measurements impose a heavy measurement burden on the hardware. Therefore, to realize our proposed methods in real hardware, we recommend complementary 3-band RGB measurements.
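The low-rank spectral-subspace assumption behind the rank-$k$ analysis can be illustrated with a truncated SVD. This is only a generic Eckart--Young sketch showing how the approximation error shrinks with $k$, not the paper's estimation procedure (which obtains the basis and the coefficients from the CASSI and RGB measurements, respectively):

```python
import numpy as np

def rank_k_approx(X, k):
    """Project the band-by-pixel matrix X (B x N) onto its best
    rank-k spectral subspace via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    E = U[:, :k]                 # orthogonal spectral basis (B x k)
    W = s[:k, None] * Vt[:k]     # spatial coefficients (k x N)
    return E, W

# Approximation error shrinks as k grows; a rank-6 matrix is exact at k=6.
rng = np.random.default_rng(1)
X = rng.standard_normal((31, 6)) @ rng.standard_normal((6, 400))
errs = [np.linalg.norm(X - E @ W)
        for E, W in (rank_k_approx(X, k) for k in (1, 3, 6))]
```

In the fusion setting the achievable $k$ is capped by the number of complementary channels (1 for panchromatic, 3 for RGB), which is why the metrics in Fig.~\ref{fig:dimen} improve with $k$.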
\subsection{Patch Spatial Size Analysis}
\begin{table}[!htbp]
\centering
\caption{Changes of quantitative evaluation results obtained by the proposed PFusion with patch size $m$. We test on 10 CAVE HSIs.}
\setlength{\tabcolsep}{1.7mm}{
\begin{tabular}{cccccccc}
\toprule
patch size & 20 & 40 & 60 & 100 & 160 & 320 & 512 \\
\midrule
MPSNR & 41.72 & 43.17 & 43.41 & 43.42 & 42.31 & 40.03 & 38.97 \\
MSSIM & 0.988 & 0.989 & 0.985 & 0.984 & 0.979 & 0.967 & 0.962 \\
MSA & 6.72 & 6.98 & 7.29 & 7.39 & 9.94 & 15.82 & 17.25 \\
\bottomrule
\end{tabular}%
\label{tab:patchsize}%
}
\end{table}%
From the analysis in Section~\ref{Analysis1} and the experimental results in Section~\ref{Experiments}, we can conclude that the proposed PFusion with patch processing significantly improves the reconstruction results compared to Fusion. For our proposed PFusion on the 10 CAVE images, we change the patch size $m$ (with $m=n$) from $20$ to $512$, and present the quantitative evaluation results in Table~\ref{tab:patchsize}. Our proposed PFusion is robust when the patch size varies from $40$ to $100$. On the other hand, when the patch size exceeds $320$, the performance decreases significantly, mainly because the rank $k=3$ is not sufficient to describe the spectral information of such large patches. Meanwhile, a smaller patch size degrades the estimation of the orthogonal spectral basis, as analyzed in Section~\ref{Analysis1}. Note also that the appropriate patch size is related to the complexity of the image: if the image is complex and contains multiple materials, the patch size should be smaller, and vice versa. In all experiments, we fix the patch size at 100.
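The overlapping-patch processing with averaging over overlaps can be sketched generically as follows. This is a hypothetical single-band 2-D version for illustration only; the paper applies the per-patch reconstruction spectrally, not the identity map used here:

```python
import numpy as np

def patch_average(img, m, stride, fn=lambda p: p):
    """Apply fn to each m x m patch (stride < m gives overlapping
    patches) and average the processed patches back into the image."""
    H, W = img.shape
    acc = np.zeros_like(img, dtype=float)
    cnt = np.zeros_like(img, dtype=float)
    for i in range(0, H - m + 1, stride):
        for j in range(0, W - m + 1, stride):
            acc[i:i + m, j:j + m] += fn(img[i:i + m, j:j + m])
            cnt[i:i + m, j:j + m] += 1.0
    return acc / cnt
```

With the identity `fn` and a stride that covers the whole image, the averaging reproduces the input, so any improvement comes from the per-patch processing itself.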
\subsection{Improved Spectral Basis Estimation}
\label{sec:improvel}
We present an improved orthogonal spectral basis estimation method in \eqref{eq:fusion22_ext}. Compared to the original \eqref{eq:fusion22}, the improved \eqref{eq:fusion22_ext} combines the CASSI and RGB measurements to estimate the orthogonal spectral basis. Table \ref{tab:ext} presents the quantitative evaluation results obtained by PFusion and the improved method \eqref{eq:fusion22_ext} on the different datasets. The improved method slightly enhances the results ($\le$0.16~dB in M-PSNR), but incurs more than three times the computational cost of the original case \eqref{eq:fusion22}. It is thus a trade-off between the accuracy and the speed of the HSI reconstruction.
\begin{table}[htbp]
\centering
\caption{Quantitative evaluation results obtained by PFusion and the improved method on different datasets.}
\setlength{\tabcolsep}{1.1mm}{
\begin{tabular}{c|cc|cc|cc}
\toprule
\multirow{2}[2]{*}{} & \multicolumn{2}{c|}{CAVE} & \multicolumn{2}{c|}{ICVL} & \multicolumn{2}{c}{RS} \\
& PFusion & Improved & PFusion & Improved & PFusion & Improved \\
\midrule
M-PSNR & 43.42 & \textbf{43.46} & 46.91 & \textbf{47.07} & 43.34 & \textbf{43.38} \\
M-SSIM & \textbf{0.984} & \textbf{0.984} & \textbf{0.995} & \textbf{0.995} & 0.984 & \textbf{0.985} \\
MSA & 7.39 & \textbf{7.27} & \textbf{1.15} & 1.16 & 0.85 & \textbf{0.81} \\
Time & \textbf{1.8} & 9.2 & \textbf{1.8} & 9.2 & \textbf{5.1} & 18.2 \\
\bottomrule
\end{tabular}%
\label{tab:ext}%
\vspace{-5px}
}
\end{table}%
\subsection{Computational Efficiency}
\begin{table}[htbp]
\centering
\caption{Running time (in seconds) of different methods on different datasets.}
\setlength{\tabcolsep}{1.1mm}{
\begin{tabular}{cccccccc}
\toprule
Method & TV & DeSCI & TV-RGB & DeSCI-RGB & DLTR & Fusion & PFusion \\
\midrule
CAVE & 251 & 3787 & 523 & 12544 & 52585 & 1.3 & 1.8 \\
ICVL & 250 & 3776 & 511 & 12549 & 48572 & 1.3 & 1.8 \\
RS & 1123 & 8288 & 1478 & 18297 & 79234 & 3.3 & 5.1 \\
Bird & 472 & 18690 & * & * & * & 2.8 & 3.9 \\
\bottomrule
\end{tabular}%
\label{tab:time}%
}
\end{table}%
Finally, we analyze the computational efficiency of the proposed methods. The running times of the TV and DeSCI methods on the real Bird image are provided by \cite{Yuan_PAMI_2019}. Table \ref{tab:time} presents the running time of the different methods on the different datasets. TV-RGB and DeSCI-RGB cost much more time than TV and DeSCI due to the additional measurements \eqref{eq:CASSI_RGB_pre}. The non-local low-rank methods DeSCI-RGB and DLTR\footnote{The running time was provided by Dr. S. Zhang.} obtain satisfactory reconstruction results, but their computational burden is huge. In contrast, our proposed methods reconstruct the HSI within a few seconds, more than 5000 times faster than the non-local methods DeSCI-RGB and DLTR. This demonstrates the major advantage of our proposed model of decomposing the full HSI into an orthogonal spectral basis and spatial coefficients, and then reconstructing the two smaller components separately. Compared to Fusion, PFusion needs additional computational time; however, PFusion significantly improves the reconstruction accuracy, as presented in Table~\ref{tab:patchsize}. To conclude, our proposed model~\eqref{eq:fusion1},~\eqref{eq:fusion2} achieves the best accuracy with far less computational time.
\section{CONCLUSION}
In this study, we have proposed a new model to reconstruct the HSI from CASSI and RGB measurements. We exploit the spectral low-rank property of the HSI and decompose it into an orthogonal spectral basis and spatial coefficients. The RGB measurements provide the estimate of the coefficients, while the CASSI measurements are adopted to estimate the orthogonal spectral basis. Compared to previous works that reconstruct the full HSI with various regularizers, our proposed Fusion and PFusion methods require neither non-local processing nor iteration, which saves an enormous amount of computational time. Furthermore, our proposed methods do not require the spectral sensing matrix in advance. The experiments on three simulated HSI datasets and one real dataset demonstrate that our proposed Fusion and PFusion reconstruct the HSI with the highest accuracy using far less computational time. In summary, our proposed methods reconstruct HSIs in a fast, flexible, and highly accurate manner. As illustrated in \eqref{eq:general}, we did not add any regularizer to our model, in order to alleviate the computational burden. However, as presented in Section~\ref{Analysis1}, our model still has room for improvement under the condition $k\leqslant 3$. We hope that the proposed optimization can serve as a baseline, and we regard the improvement of our proposed model as future work.
\section{Appendix}
\subsection{Proposition 1}
\begin{proof}
The proof consists of two steps. First, the original HSI can be uniquely decomposed as $\mat{X} = \mat{E} \mat{W}$, where $\mat{W}$ is computed via SVD on $\mat{Z}$. Using the SVD, we have $\mat{Z} = \mat{A}^{\top}\mat{X} = \mat{F} \mat{W}$, where $\mat{F} \in \mathbb{R}^{3 \times k}$. Since $k\leqslant 3$ and $\mat{A}$ has full column rank, we can conclude that $\mat{F}$ has full column rank and $\mat{W}$ has full row rank. Suppose there is another $\mat{E}^{'}$ such that $\mat{X} = \mat{E}^{'} \mat{W}$; then $(\mat{E}-\mat{E}^{'}) \mat{W}=\mat{0}$, and since $\mat{W}$ has full row rank, $\mat{E}=\mat{E}^{'}$. Therefore, the decomposition $\mat{X} = \mat{E} \mat{W}$ is unique. \par
Second, $\mat{E}$ can be uniquely obtained via \eqref{eq:opt22}. We have obtained the full-row-rank $\mat{W}$ via SVD on $\mat{Z}$. Owing to the full-row-rank property of $\mat{W}$ and the specific design of the CASSI mask $\tensor{C}$ in \eqref{eq:phi}, $\mat{\Phi}^{\mat{W}}$ has full column rank. Therefore, \eqref{eq:fusion22} is an overdetermined system, and the $\mat{E}$ obtained from \eqref{eq:fusion22} is unique.
Thus, we obtain the proposition.
\end{proof}
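As a numerical sanity check (not part of the proof), the uniqueness of $\mat{E}$ given a full-row-rank $\mat{W}$ can be verified with synthetic data: ordinary least squares recovers $\mat{E}$ up to numerical precision. The dimensions below are illustrative assumptions:

```python
import numpy as np

# With W of full row rank, X = E W determines E uniquely;
# least squares on W^T E^T = X^T recovers it.
rng = np.random.default_rng(0)
B, N, k = 31, 200, 3
W = rng.standard_normal((k, N))      # full row rank almost surely
E = rng.standard_normal((B, k))      # "spectral basis" to recover
X = E @ W
E_hat = np.linalg.lstsq(W.T, X.T, rcond=None)[0].T
```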
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran} |
1503.04667 | \section{Introduction}\label{sec:introduction}
Onsager's celebrated paper \cite{onsager:effects} on the effect of shape on the interaction between hard particles has perhaps been the most influential contribution to colloidal sciences of the last century \cite{frenkel:perspective}. There, entropic forces alone were first recognized as capable of inducing a structural ordering transition with no involvement of whatever cohesion force may be present. The typical prototype of such an ordering transition remains indeed the isotropic-to-nematic transition predicted in \cite{onsager:effects} for an assembly of slender hard rods as their number density is increased beyond a critical value (falling within a narrow gap of phase coexistence). As paradoxical as it may appear at a superficial glance, such an ordering transition is duly accompanied by an increase in entropy, since the loss in orientational disorder attached to the rods' alignment is outbalanced by the gain in translational disorder made possible by the increase in the volume available for the particles' centers of mass \cite{frenkel:perspective,frenkel:entropy}. The conjugated counterpart of this volume is the \emph{excluded volume}.
The excluded volume of two rigid bodies is the volume in space that any one point in one body cannot access by the very presence of the other body. This definition is deceptively simple as it conceals a formidable mathematical task which can seldom be accomplished in an exact analytic form.\footnote{We learn from \cite{palffy:distance_2D} that Vieillard-Baron, who took an early interest in this problem \cite{vieillard-baron:phase}, ``was reportedly greatly disturbed by the difficulties he encountered.''} Of course, there are exceptions to this general statement, but they are very few. Notable among these are the excluded volumes of circular cylinders \cite{onsager:effects}, sphero-cylinders \cite{vroege:phase}, sphero-platelets \cite{mulder:solution}, and sphero-zonotopes \cite{mulder:excluded}.\footnote{Isihara~\cite{isihara:theory} is often credited with having provided an explicit formula for the excluded volume of ellipsoids of revolution. In Sec.~\ref{sec:spheroids} below, we shall discuss this case in some detail.}
Despite its technical difficulties, the excluded volume remains a key ingredient of both Onsager's original theory and its most recent extensions. In all of these, the per-particle free energy $F$ of an assembly of hard bodies (appropriately made dimensionless) is a functional of the single-body local density $\dens$. A number of papers have interpreted Onsager's original theory in the light of the modern density functional theories; here we refer the reader to the most recent review on the subject \cite{mederos:hard-body}, which is mostly concerned with hard-body systems that exhibit liquid crystalline phases.\footnote{A general reference for simple liquids is still the classical book \cite{hansen:theory}, now enriched by an addition on complex fluids.} $F[\dens]$ differs from the free-energy functional for an ideal gas by the addition of an \emph{excess} free energy $F_\mathrm{ex}[\dens]$, which characterizes the interactions of anisometric particles. In general, $F_\mathrm{ex}[\dens]$ is not known explicitly, but it can always be expressed as a power series in the total number density $\ndens$, which is often called the \emph{virial} expansion. The first non-trivial term of such an expansion is $\ndens\vir_2[\dens]$, where the functional $\vir_2$ is the \emph{second} virial coefficient, which is nothing but the ensemble average of the excluded volume,
\begin{equation}\label{eq:second_virial_coefficient}
\vir_2[\dens]:=\frac12\int_{\Omega^2}\evou(\omega,\omega')\dens(\omega)\dens(\omega')d\omega d\omega'.
\end{equation}
In \eqref{eq:second_virial_coefficient}, $\Omega$ is the \emph{orientational manifold}, which describes all possible orientations of a particle in the system and $\evou(\omega,\omega')$ is the excluded volume of two particles with orientations $\omega$ and $\omega'$, respectively. Higher powers of $\ndens$ bear higher virial coefficients $\vir_n$, which however are even more difficult to compute than $\vir_2$.
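As a concrete instance of evaluating $\vir_2$, for an isotropic orientational distribution the double average in \eqref{eq:second_virial_coefficient} reduces, for cylindrically symmetric particles, to a single quadrature over $\cos\vt$. The sketch below (a numerical illustration only, with arbitrary rod dimensions) uses Onsager's slender-rod excluded volume $\evou = 2L^2D\,|\sin\vt|$ and recovers the classical isotropic value $\vir_2 = \pi L^2 D/4$:

```python
import numpy as np

L, D = 1.0, 0.01                              # rod length and diameter
u, w = np.polynomial.legendre.leggauss(64)    # Gauss-Legendre nodes/weights on [-1, 1]
v_excl = 2.0 * L**2 * D * np.sqrt(1.0 - u**2)  # Onsager: V_e = 2 L^2 D |sin(theta)|
# B2 = (1/2) * isotropic average of V_e, the average being (1/2) ∫ V_e(u) du
B2 = 0.5 * 0.5 * np.sum(w * v_excl)
# Closed form in the slender-rod limit: B2 = (pi/4) L^2 D
```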
Onsager \cite{onsager:effects} remarkably estimated that for rods sufficiently slender $\vir_2$ actually prevails over all other $\vir_n$'s. This makes Onsager's theory virtually exact, as was also subsequently confirmed directly by numerical computations \cite{frenkel:onsager,frenkel:onsager_erratum}. Nevertheless, even when the second virial coefficient $\vir_2[\dens]$ cannot be proved to be dominant, it remains a viable approximation to $F_\mathrm{ex}[\dens]$ in establishing, at least qualitatively, the variety of possible equilibrium phases in a hard-body system and the entropy-driven transitions between them. To this end, explicit formulas for the excluded volume of rigid bodies are to be especially treasured.
This is the motivation for our study. Our objective is to express $\evop$, the excluded volume for two rigid bodies, $\bodyu$ and $\bodyt$, in terms of \emph{shape} functionals depending solely on the individual bodies $\bodyu$ and $\bodyt$. We shall accomplish this task for bodies both convex and cylindrically symmetric, for which $\evop$ can be given with no loss in generality as the sum of a series of Legendre polynomials $P_n$,
\begin{equation}\label{eq:excluded_volume_series}
\evop=\sum_{n=0}^\infty B_nP_n({\mdm}),
\end{equation}
where $\m_1$ and $\m_2$ are unit vectors along the symmetry axes of $\bodyu$ and $\bodyt$, respectively.\footnote{Following Isihara~\cite{isihara:theory}, we denote by $B_n$ the Legendre coefficients of $\evou$, though often in more recent literature this symbol is used to designate the virial coefficients, here denoted as $\vir_n$.} The shape functionals involved in our explicit representation will be natural extensions of the classical functionals on which the celebrated Brunn-Minkowski theory of convex bodies was largely based.\footnote{Besides the original sources \cite{brunn:thesis,minkowski:volumen}, the general books \cite{bonnesen:theory,schneider:convex} are highly recommended. We also collected a number of relevant results phrased in the same mathematical language employed here in Appendix~A to our earlier study on this subject \cite{piastra:octupolar}. Finally, a different but equivalent approach is presented in \cite{singh:molecular}.} The major advantage of the method proposed here is the explicit computability of such extended Minkowski functionals, which makes our representation formula directly applicable to bodies $\bodyu$ and $\bodyt$ not necessarily congruent, possibly representing particles of different species.
The paper is organized as follows. In Sec.~\ref{sec:volume_averages}, we set the scene for our development by showing that the Legendre coefficients $B_n$ of the representation formula \eqref{eq:excluded_volume_series} can be expressed as appropriate anisotropic volume averages. Section~\ref{sec:no_dipole} is devoted to the coefficient $B_1$ of the first Legendre polynomial $P_1(\mdm)=\mdm$ in \eqref{eq:excluded_volume_series}. We attach a special meaning to this, as it represents the \emph{dipolar} contribution to $\evop$ which would possibly arise from tapered, cylindrically symmetric, convex bodies, if only one could unambiguously assign a \emph{shape dipole} to them. The somewhat surprising conclusion will be that $B_1$ \emph{vanishes} identically on this class of bodies, making the very notion of shape dipole void, despite its intuitive appeal. Section~\ref{sec:Extended_Minkowski_functionals} is concerned with the extended Minkowski functionals, in terms of which, once evaluated on the bodies $\bodyu$ and $\bodyt$, we can write in closed form all coefficients $B_n$ in \eqref{eq:excluded_volume_series}. An explicit application of our method is illustrated in Sec.~\ref{sec:cones}, where we evaluate the extended Minkowski functionals for a generic circular cone and validate our evaluations through a direct computation of the coefficients $B_n$ made possible by an independent shape-reconstruction algorithm, appropriately modified to tackle efficiently the cone's sharp ridge. Likewise, in Sec.~\ref{sec:spheroids}, we determine the extended Minkowski functionals for a spheroid, that is, an ellipsoid of revolution, either prolate or oblate. In Sec.~\ref{sec:conclusions}, we collect the main conclusions of our work, looking back afresh to some of them, also in the light of possible future developments that they may suggest.
We shall endeavor to make our presentation as free as possible from unwanted technical details that might obscure both the outcomes of our study and the strategy adopted to obtain them. To provide, however, the interested reader with enough information to appreciate the mathematical infrastructure underlying this paper, we collect in two closing appendices the details of both the mathematical theory and the shape-reconstruction algorithm.
\section{Anisotropic volume averages}\label{sec:volume_averages}
It was proved by Mulder~\cite{mulder:excluded} that the excluded volume $\evop$ of two bodies, $\bodyu$ and $\bodyt$, be they convex or not, can be expressed as
\begin{equation}\label{eq:Mulder_equality}
\evop=V[\bodyu+\bodyt^\ast],
\end{equation}
where $V$ is the volume functional, $\bodyt^\ast$ is the central inverse (relative to a specified origin $o$) of the body $\bodyt$, and $+$ denotes the Minkowski addition (whose definition also involves the origin $o$).\footnote{We shall often call \eqref{eq:Mulder_equality} Mulder's identity. The reader is referred to the primer on the Brunn-Minkowski theory of convex bodies in Appendix~A of \cite{piastra:octupolar}. A short recapitulation of this theory is also given in Appendix~\ref{sec:essentials} below to make our paper self-contained.} Letting both $\bodyu$ and $\bodyt$ be cylindrically symmetric bodies with axes $\m_1$ and $\m_2$, respectively, since $\evop$ is an isotropic scalar-valued function, by a theorem of Cauchy,\footnote{See, for example, Sec.~113.1 of \cite{gurtin:mechanics}.} we can say that $\evop$ is a function (still denoted as) $\evou$ of the inner product $\mdm$. Setting $\mdm=\cos\vt$, the function $\evou(\cos\vt)$ can be expanded as the sum of a series of Legendre polynomials (see, for example, Secs.~18.2 and 18.3 of \cite{NIST:DLMF}):
\begin{equation}\label{eq:V_e_expansion}
\evou(\cos\vt)=\sum_{n=0}^\infty B_nP_n(\cos\vt),
\end{equation}
where
\begin{equation}\label{eq:B_n_definitions}
B_n:=\frac{2n+1}{2}\int_0^\pi\evou(\cos\vt)P_n(\cos\vt)\sin\vt d\vt
\end{equation}
are the \emph{Legendre coefficients} of $\evou$. We record for future use a few basic properties of the orthogonal polynomials $P_n$ (see, in particular, Secs.~18.6.1 of \cite{NIST:DLMF} and 8.917.1 of \cite{gradshteyn:table}):
\begin{equation}\label{eq:P_n_properties}
P_n(-x)=(-1)^nP_n(x),\quad P_n(1)=1,\quad|P_n(x)|\leqq1.
\end{equation}
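These properties are immediate to verify numerically. The following sketch (assuming Python with numpy; the helper \texttt{P} is ours, not part of the paper) checks the parity relation, the normalization $P_n(1)=1$, and the uniform bound $|P_n(x)|\leqq1$ on a fine grid:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def P(n, x):
    """Evaluate the Legendre polynomial P_n at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legval(x, c)

x = np.linspace(-1.0, 1.0, 1001)
for n in range(8):
    # parity: P_n(-x) = (-1)^n P_n(x)
    assert np.allclose(P(n, -x), (-1) ** n * P(n, x))
    # normalization at the endpoint: P_n(1) = 1
    assert np.isclose(P(n, 1.0), 1.0)
    # uniform bound on [-1, 1]: |P_n(x)| <= 1
    assert np.max(np.abs(P(n, x))) <= 1.0 + 1e-12
```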
There is another way of expressing the coefficients $B_n$, which we find illuminating. Consider the average
\begin{equation}\label{eq:V_e_anistropic_average}
\ave{P_n\evou}\bodypair:=\ave{P_n(\mdm)\evou(\mdm)}_\replica
\end{equation}
computed for fixed $\bodyu$ over all possible replicas of $\bodyt$ obtained by rotating arbitrarily $\bodyt$ in space. By the cylindrical symmetry of $\bodyt$, the average \eqref{eq:V_e_anistropic_average} also acquires the equivalent form
\begin{equation}\label{eq:V_e_anistropic_average_equivalent}
\ave{P_n\evou}\bodypair=\ave{P_n(\mdm)\evou(\mdm)}_{\m_2},
\end{equation}
where, for any function $f(\e)$ defined on the unit sphere $\sphere$,
\begin{equation}\label{eq:average_over_Sphere_definition}
\ave{f}_{\e}:=\frac{1}{4\pi}\int_{\sphere}f(\e)da(\e)
\end{equation}
and $da(\e)$ denotes the area element with unit normal $\e$. Representing $\m_2$ in polar spherical coordinates with polar axis $\m_1$ and combining \eqref{eq:V_e_anistropic_average_equivalent} and \eqref{eq:B_n_definitions}, we readily arrive at
\begin{equation}\label{eq:V_e_anistropic_average_and_B_n}
\ave{P_n\evou}\bodypair=\frac12\int_0^\pi P_n(\cos\vt)\evou(\cos\vt)\sin\vt d\vt=\frac{1}{2n+1}B_n.
\end{equation}
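The last equality in \eqref{eq:V_e_anistropic_average_and_B_n} combines \eqref{eq:B_n_definitions} with the orthogonality of the Legendre polynomials. As a numerical sanity check (a Python sketch assuming numpy), the substitution $x=\cos\vt$ turns the integral into $\frac12\int_{-1}^{1}P_n(x)P_m(x)\,dx=\delta_{nm}/(2n+1)$, which Gauss-Legendre quadrature reproduces exactly for polynomial integrands:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def P(n, x):
    """Evaluate the Legendre polynomial P_n at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legval(x, c)

# 64-point Gauss-Legendre rule: exact for polynomials of degree < 128
x, w = leggauss(64)

for n in range(6):
    for m in range(6):
        val = 0.5 * np.sum(w * P(n, x) * P(m, x))
        expected = 1.0 / (2 * n + 1) if n == m else 0.0
        assert np.isclose(val, expected)
```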
Since both functions $\evou$ and $P_n$ are symmetric under the exchange of $\m_1$ and $\m_2$, the average $\ave{P_n\evou}\bodypair$ is also symmetric under the exchange of bodies $\bodyu$ and $\bodyt$:
\begin{equation}\label{eq:V_e_anistropic_average_symmetry}
\ave{P_n\evou}\bodypair=\ave{P_n\evou}[\bodyt,\bodyu].
\end{equation}
Equation \eqref{eq:Mulder_equality} allows us to express the Legendre coefficients $B_n$ of the excluded volume of two cylindrically symmetric bodies in a way directly related to the anisotropic averages of the volume of a Minkowski sum. Combining \eqref{eq:V_e_anistropic_average_and_B_n}, \eqref{eq:V_e_anistropic_average}, and \eqref{eq:Mulder_equality}, we readily see that
\begin{equation}\label{eq:B_n_averages}
\begin{split}
B_n&=(2n+1)\ave{P_n(\mdm)V[\bodyu+\bodyt^\ast]}_{\replica}
=(2n+1)(-1)^n\ave{P_n(\m_1\cdot\m_2^\ast)V[\bodyu+\bodyt^\ast]}_{\replica}\\
&=(2n+1)(-1)^n\ave{P_n(\m_1\cdot\m_2^\ast)V[\bodyu+\bodyt^\ast]}_{\replicaa},
\end{split}
\end{equation}
where $\m_2^\ast=-\m_2$ is the symmetry axis of the central inverse $\bodyt^\ast$ of $\bodyt$ and use has been made of \eqref{eq:P_n_properties} and the fact that averaging over $\replica$ is just the same as averaging over $\replicaa$. Thus, to obtain all coefficients $B_n$ in \eqref{eq:V_e_expansion}, we need to learn how to compute the \emph{anisotropic volume averages}
\begin{equation}\label{eq:anisotropic_volume_average_definition}
\ave{P_nV}\bodypair:=\ave{P_nV[\bodysum]}_{\replica},
\end{equation}
as then \eqref{eq:B_n_averages} would simply reduce to
\begin{equation}\label{eq:B_n_averages_central}
B_n=(2n+1)(-1)^n\ave{P_nV}[\bodyu,\bodyt^\ast],
\end{equation}
which obeys the same symmetry relation as in \eqref{eq:V_e_anistropic_average_symmetry}. Equation \eqref{eq:B_n_averages_central} is the basic building block of our development.
Although \eqref{eq:B_n_averages_central} is as general as \eqref{eq:Mulder_equality} for cylindrically symmetric bodies, this paper will solely be concerned with the excluded volume of \emph{convex} cylindrically symmetric bodies. For $n=0$, the average in \eqref{eq:anisotropic_volume_average_definition} becomes isotropic as $P_0\equiv1$ and its expression has long been known for generic convex bodies:\footnote{A derivation of \eqref{eq:isotropic_volume_average} can be found in \cite{singh:molecular}. Moreover, Kihara~\cite{kihara:coefficients,kihara:isihara} credits Isihara~\cite{isihara:determination} and Isihara and Hayashida~\cite{isihara:theory_I,isihara:theory_II} for having proved \eqref{eq:isotropic_volume_average}, although he also seems aware that a proof was already contained in the classical work of Minkowski~\cite{minkowski:volumen}.}
\begin{equation}\label{eq:isotropic_volume_average}
\ave{V}\bodypair=V[\bodyu]+V[\bodyt]+\frac{1}{4\pi}\left(M[\bodyu]S[\bodyt]+M[\bodyt]S[\bodyu]\right),
\end{equation}
where $M$ is the \emph{total mean curvature} functional in \eqref{eq:M_functional} and $S$ is the \emph{surface area} functional in \eqref{eq:S_functional}. Since both $M[\body]$ and $S[\body]$ are invariant under central inversion of $\body$, it follows from \eqref{eq:B_n_averages_central} and \eqref{eq:isotropic_volume_average} that
\begin{equation}\label{eq:B_0}
B_0=\ave{V}\bodypair.
\end{equation}
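A classical special case makes \eqref{eq:isotropic_volume_average} and \eqref{eq:B_0} concrete: for two spheres of radii $r_1$ and $r_2$, where $M=4\pi r$ and $S=4\pi r^2$, the formula must return the volume of a ball of radius $r_1+r_2$, since that ball is precisely the excluded body of two spheres. The following Python sketch (our own illustration, not part of the paper's development) verifies this:

```python
import math

def B0_spheres(r1, r2):
    """Right-hand side of the classical isotropic-average formula
    for two spheres: V1 + V2 + (M1*S2 + M2*S1)/(4*pi),
    with V = 4*pi*r^3/3, M = 4*pi*r, S = 4*pi*r^2."""
    V = lambda r: 4.0 * math.pi * r ** 3 / 3.0
    M = lambda r: 4.0 * math.pi * r
    S = lambda r: 4.0 * math.pi * r ** 2
    return V(r1) + V(r2) + (M(r1) * S(r2) + M(r2) * S(r1)) / (4.0 * math.pi)

# The excluded body of two spheres is a ball of radius r1 + r2.
for r1, r2 in [(1.0, 1.0), (0.5, 2.0), (3.0, 0.1)]:
    exact = 4.0 * math.pi * (r1 + r2) ** 3 / 3.0
    assert math.isclose(B0_spheres(r1, r2), exact)
```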
Here our challenge is to extend the neat classical formula \eqref{eq:isotropic_volume_average} for the isotropic average of the volume of the Minkowski sum of convex bodies to the anisotropic averages needed in \eqref{eq:B_n_averages_central}. This will be achieved in the two following sections with the aid of appropriate extensions of the classical Minkowski functionals $M$ and $S$. We anticipate that they are invariant under central body inversion like the classical Minkowski functionals, so that, in complete analogy with \eqref{eq:isotropic_volume_average} and \eqref{eq:B_0}, we shall be able to express the excluded volume $\evop$ of cylindrically symmetric bodies $\bodyu$ and $\bodyt$ in terms of functionals evaluated separately on $\bodyu$ and $\bodyt$.
As recalled in Appendix~\ref{sec:mathematical_details}, there is no loss in generality in limiting attention to the class $\convp$ of convex bodies with smooth boundaries and strictly \emph{positive} principal curvatures, as $\convp$ is dense in the whole class $\conv$ of convex bodies (see Appendix~\ref{sec:essentials}). Thus, our strategy will be to compute first the anisotropic volume averages in $\convp$ and then extend them by continuity to the whole of $\conv$. In the following section, we shall first accomplish our task for $\ave{P_1V}\bodypair$; this will lead us to conclude that $B_1\equiv0$, a general result of some import. In Sec.~\ref{sec:Extended_Minkowski_functionals}, we shall compute $\ave{P_nV}\bodypair$ for all $n\geqq2$ and arrive at the expected general explicit formula for all $B_n$'s.
\section{No shape dipoles}\label{sec:no_dipole}
Here our task is to compute $B_1$. To this end we remark that
\begin{subequations}\label{eq:B_1_preliminaries}
\begin{align}
\ave{P_n(\mdm)}_{\replica}&=\ave{P_n(\mdm)}_{\m_2}=0,\label{eq:B_1_preliminaries_one}\\
\ave{P_n(\mdm)V[\bodyt]}_{\replica}&=V[\bodyt]\ave{P_n(\mdm)}_{\replica}=0,\quad n\geqq1,\label{eq:B_1_preliminaries_two}
\end{align}
\end{subequations}
the former following from \eqref{eq:average_over_Sphere_definition} and the orthogonality of Legendre polynomials, and the latter also from the invariance of the volume functional under rotations. Then we represent $\m_2$ in a Cartesian frame $\Frame$ fixed in $\bodyu$. Letting $\ez=\normal$, where $\normal$ is the outer unit normal to $\bodyu$ at a selected point on $\partial\bodyu$, and choosing $\ey$ orthogonal to the plane $(\m_1,\normal)$, we have that
\begin{subequations}\label{eq:m_representations}
\begin{align}
\m_1&=\sin\vt_1\ex+\cos\vt_1\normal,\label{eq:m_1_representation}\\
\m_2&=\cos\Angle\sin\vt_2\ex+\sin\Angle\sin\vt_2\ey+\cos\vt_2\normal,\label{eq:m_2_representation}
\end{align}
\end{subequations}
the latter of which represents all possible orientations of $\m_2$, for given $\vt_1$ and $\vt_2$, the angles that $\m_1$ and $\m_2$ make with $\normal$ (see Fig.~\ref{fig:conical}).
\begin{figure}[h]
\centering
\includegraphics[width=0.14\linewidth]{avgStrategy.eps}
\caption{Sketch representing the unit vectors $\normal$, $\m_1$, and $\m_2$. With $\normal$ and $\m_1$ fixed, $\m_2$ as represented by \eqref{eq:m_2_representation} describes a cone around $\normal$ in the first step of the averaging process described in the text.}
\label{fig:conical}
\end{figure}
An easy, but important consequence of \eqref{eq:m_representations} is that
\begin{equation}\label{eq:mdm}
\begin{split}
\mdm&=\sin\vt_1\sin\vt_2\cos\Angle+\cos\vt_1\cos\vt_2\\
&=\sin\vt_1\sin\vt_2\cos\Angle+(\m_1\cdot\normal)(\m_2\cdot\normal).
\end{split}
\end{equation}
Now, using also \eqref{eq:B_1_preliminaries}, we can derive from \eqref{eq:V_bodysum} the following expression\footnote{Unlike Mulder's identity \eqref{eq:Mulder_equality}, which is valid for general bodies, equation \eqref{eq:V_bodysum}, which is indeed one basic ingredient of our theory, has only been established for convex bodies.}
\begin{equation}\label{eq:P_1_V_bodypair}
\begin{split}
\ave{P_1V}\bodypair&=\frac13\left(\ave{\frac{\m_2\cdot\normal}{\Kt}}_\normal \int_\sphere(\normal\cdot\ru)(\normal\cdot\m_1)d\areanu+
\ave{(\normal\cdot\rt)(\normal\cdot\m_2)}_\normal\int_\sphere\frac{\m_1\cdot\normal}{\Ku}d\areanu\right)
\\
&+\frac16\ave{(\m_2\cdot\normal)\left(\rhout+\rhott\right)}_\normal \int_{\sphere}(\normal\cdot\ru)(\m_1\cdot\normal)\left(\rhouu+\rhotu\right)d\areanu\\
&+\frac16\ave{(\normal\cdot\rt)(\m_2\cdot\normal)\left(\rhout+\rhott\right)}_\normal \int_{\sphere}(\m_1\cdot\normal)\left(\rhouu+\rhotu\right)d\areanu,
\end{split}
\end{equation}
which results from computing the average over $\bodyt$ in two separate steps: first averaging over the angle $\Angle$ in \eqref{eq:mdm}, which ranges in $[0,2\pi]$, and then averaging formally over $\normal$, meant as the outward unit normal to $\bodyt$, which ranges over $\sphere$. While the former average is taken over the process in which, with $\normal$ and $\m_1$ fixed, $\m_2$ is seen to describe a cone around $\normal$ (see Fig.~\ref{fig:conical}), the latter is nothing but the average over the independent process in which all different points of $\partial\bodyt$ come to be associated with one and the same fixed normal $\normal$. As in \eqref{eq:V_bodysum}, also in \eqref{eq:P_1_V_bodypair} $\rhouu$ and $\rhotu$ denote the principal radii of curvature of $\partial\bodyu$ and $\rhout$ and $\rhott$ denote the principal radii of curvature of $\partial\bodyt$; correspondingly, $\Ku=(\rhouu\rhotu)^{-1}$ and $\Kt=(\rhout\rhott)^{-1}$ are the Gaussian curvatures of $\partial\bodyu$ and $\partial\bodyt$ and $\ru$ and $\rt$ are the \emph{radial mappings} of $\bodyu$ and $\bodyt$ (see Appendix~\ref{sec:essentials} for more details).
Now, with the aid of the theory recalled in Appendix~\ref{sec:essentials}, we compute the new shape functionals featuring in \eqref{eq:P_1_V_bodypair}. It readily follows from \eqref{eq:surface_dilation_ratio} that for any body $\body\in\convp$
\begin{equation}\label{eq:B_1_shape_functional_one}
\int_\sphere\frac{\m\cdot\normal}{K}d\areanu=\int_\boundary\mdn\, d\arean=\int_\body\diver\m\, dv=0,
\end{equation}
where use has also been made of the classical divergence theorem (and the fact that $\m$ can be extended to the whole space as a uniform field). Likewise, \eqref{eq:surface_divergence_radial} and \eqref{eq:surface_dilation_ratio} imply that
\begin{equation}\label{eq:B_1_shape_functional_two}
\begin{split}
\int_\sphere(\m\cdot\normal)(\rho_1+\rho_2)d\areanu&=\int_\sphere(\m\cdot\normal)\frac1K\divs\n\, d\areanu\\
=\int_\boundary(\m\cdot\n)\divs\n\, d\arean&=\int_\boundary\divs\m\,d\arean=0,
\end{split}
\end{equation}
where use has also been made of the surface divergence theorem recalled in \eqref{eq:surface_divergence_theorem}. Combining \eqref{eq:B_1_shape_functional_one} and \eqref{eq:B_1_shape_functional_two}, we obtain from \eqref{eq:P_1_V_bodypair} that $\ave{P_1 V}\bodypair$ vanishes identically for all $\bodyu$ and $\bodyt$, and so, by \eqref{eq:B_n_averages_central},
\begin{equation}\label{eq:B_1_vanishes}
B_1=-3\ave{P_1V}[\bodyu,\bodyt^\ast]\equiv0.
\end{equation}
Equation \eqref{eq:B_1_vanishes} says that for cylindrically symmetric bodies, $\bodyu$ and $\bodyt$, the excluded volume $\evou$ in \eqref{eq:V_e_expansion} does not contain any dipolar contribution, no matter how tapered $\bodyu$ and $\bodyt$ may be, suggesting that \emph{no} shape dipole can be associated with them. It was already argued in \cite{piastra:octupolar} that a shape dipole cannot be unambiguously assigned to a body $\body$. Equation \eqref{eq:B_1_vanishes} shows that no matter how we endeavor to assign a shape dipole to $\body$, it plays no role in the hard-particle interactions governed by the excluded volume. Of course, polarity effects are also expected to be seen in these interactions. For example, it was proved in \cite{palffy:minimum} that the excluded volume of two congruent cylindrically symmetric convex bodies is minimized when the bodies are in the \emph{antiparallel} configuration, where $\m_2=-\m_1$. Such polar effects, however, cannot involve shape dipoles: as shown in \cite{piastra:octupolar}, they start being manifested through the shape \emph{octupole} that features in \eqref{eq:V_e_expansion} through the coefficient $B_3$. This and all higher order Legendre coefficients will be computed in the following section.
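The cancellations behind \eqref{eq:B_1_vanishes} ultimately rest on the divergence theorem, as in \eqref{eq:B_1_shape_functional_one}: for a uniform unit vector $\m$, the flux $\int_\boundary\mdn\,d\arean$ vanishes on any closed boundary. A minimal numerical illustration (Python with numpy; we use a unit cube, whose six flat faces reduce the flux to a finite sum, rather than a smooth body in $\convp$):

```python
import numpy as np

# Flux of a uniform unit vector m through a closed surface is zero:
# int_{boundary} m.n dA = int_B div m dV = 0.
# For the unit cube the integral reduces to a sum over six faces,
# each of unit area, with outward normals +/- e_i.
m = np.array([0.3, -0.5, 0.2])
m /= np.linalg.norm(m)
normals = np.concatenate([np.eye(3), -np.eye(3)])  # six outward normals
flux = np.sum(normals @ m)  # each face has unit area
assert abs(flux) < 1e-15
```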
\section{Extended Minkowski functionals}\label{sec:Extended_Minkowski_functionals}
Computing the anisotropic volume averages $\ave{P_nV}\bodypair$ for $n\geqq2$ is technically more complicated than computing $\ave{P_1V}\bodypair$, although conceptually this task is not much different from that just accomplished in the preceding section. As shown in Appendix~\ref{sec:appendix_volume_averages}, this computation led quite naturally to the introduction of a number of shape functionals that extend the classical Minkowski functionals $M$ and $S$. They are defined for all $n\geqq2$ as follows:
\begin{subequations}\label{eq:M_and_S_functionals_definitions}
\begin{align}
M_n[\body]&:=\int_{\boundary}P_n(\mdn)Hd\arean,\label{eq:M_definition}\\
M_n'[\body]&:=\int_{\boundary}(\n\cdot\x) P_n(\mdn)Kd\arean, \label{eq:M_prime_definition}\\
M_n''[\body]&:=\int_{\boundary}[1-(\mdn)^2]\frac12(\sigma_1-\sigma_2)P_{n-2}^{(2,2)}(\mdn)d\arean, \label{eq:M_double_prime_definition}\\
S_n[\body]&:=\int_{\boundary}P_n(\mdn)d\arean, \label{eq:S_definition}\\
S_n'[\body]&:=\int_{\boundary}(\n\cdot\x) P_n(\mdn)Hd\arean, \label{eq:S_prime_definition}\\
S_n''[\body]&:=\int_{\boundary}(\n\cdot\x)[1-(\mdn)^2]\frac12(\sigma_1-\sigma_2)P_{n-2}^{(2,2)}(\mdn)d\arean. \label{eq:S_double_prime}
\end{align}
\end{subequations}
We shall often refer to them as the \emph{extended} Minkowski functionals.\footnote{More briefly, also as the extended $M$ and $S$ functionals.} They give $\ave{P_nV}\bodypair$ the following concise, explicit representation:
\begin{equation}\label{eq:anisotropic_volume_average_representation}
\begin{split}
\ave{P_nV}\bodypair&=
\frac{1}{12\pi}\left(M_n'[\body_1]S_n[\body_2]+M_n'[\body_2]S_n[\body_1]\right)
+\frac{1}{6\pi}\left(M_n[\body_1]S_n'[\body_2]+M_n[\body_2]S_n'[\body_1]\right)\\
&-\frac{1}{6\pi}\frac{(n-2)!(n+2)!}{(4n!)^2} \left(M_n''[\body_1]S_n''[\body_2]+M_n''[\body_2]S_n''[\body_1]\right).
\end{split}
\end{equation}
Strictly speaking, in Appendix~\ref{sec:appendix_volume_averages} we arrived at \eqref{eq:M_and_S_functionals_definitions} through the representation via radial mapping of the convex bodies in the special class $\convp$. However, they can also be extended by continuity to the whole of $\conv$. Moreover, as clearly shown by \eqref{eq:M_and_S_functionals_definitions}, their definition actually applies to any cylindrically symmetric body, be it convex or not. The extended $M$ and $S$ functionals are invariant under rotations. Their behavior under translations is further discussed in Appendix~\ref{sec:invariance_translations}.
Since the extended Minkowski functionals for a body $\body$ are invariant under central inversion of $\body$ (see Appendix~\ref{sec:essentials}), it follows from \eqref{eq:anisotropic_volume_average_representation} that $\ave{P_nV}[\bodyu,\bodyt^\ast]=\ave{P_nV}\bodypair$, and so equation \eqref{eq:B_n_averages_central} becomes \begin{equation}\label{eq:B_n_averages_final}
B_n=(2n+1)(-1)^n\ave{P_nV}\bodypair,
\end{equation}
which by \eqref{eq:anisotropic_volume_average_representation} expresses the Legendre coefficients of $\evou$ in \eqref{eq:V_e_expansion} in terms of shape functionals evaluated on the individual bodies $\bodyu$ and $\bodyt$. Formula \eqref{eq:B_n_averages_final} will be applied in the two following sections to special classes of bodies, namely, circular cones and ellipsoids of revolution.
As shown in Appendix~\ref{sec:reduction_formulae}, the functionals $M''_n$, $M'_n$, and $M_n$ are not independent of one another. While $M''_n$ is certainly related to $M_n$ through
\begin{equation}\label{eq:M_double_prime_M}
M''_n[\body]=\frac{4n}{n+2}M_n[\body]\quad\forall\ n\geqq2,
\end{equation}
for all cylindrically symmetric convex bodies $\body$, we expect the relation
\begin{equation}\label{eq:M_prime_M}
M'_n[\body]=-\frac{2}{(n-1)(n+2)}M_n[\body]
\end{equation}
to be valid at least for both classes of bodies studied in detail in this paper, having checked it by direct inspection for a large number of indices.\footnote{Of course, we are aware that this can by no means be considered as a proof of \eqref{eq:M_prime_M}, which remains for us a conjecture, though with a high likelihood of being true.} Whenever \eqref{eq:M_prime_M} applies, the anisotropic volume averages $\ave{P_nV}\bodypair$ in \eqref{eq:anisotropic_volume_average_representation} take on a much simpler form,
\begin{equation}\label{eq:eq:anisotropic_volume_average_representation_simplified}
\ave{P_nV}\bodypair=\frac{1}{6\pi}\left(M_n[\body_1]A_n[\body_2]+M_n[\body_2]A_n[\body_1]\right),
\end{equation}
where
\begin{equation}\label{eq:A_functional}
A_n[\body]:=S'_n[\body]-\frac{1}{(n-1)(n+2)}S_n[\body]-\frac{n+1}{4(n-1)}S''_n[\body],\quad\forall\ n\geqq2.
\end{equation}
In particular, for two congruent bodies, $\body_1\thicksim\body_2\thicksim\body$,\footnote{Meaning that $\body_1$ and $\body_2$ are images of $\body$ under the action of the full orthogonal group $O(3)$.} by \eqref{eq:B_n_averages_final}, $B_n$ can be given the following \emph{factorized} expression,
\begin{equation}\label{eq:B_n_multiplicative}
B_n=\frac{(2n+1)(-1)^n}{3\pi}M_n[\body]A_n[\body],
\end{equation}
which we shall assume to be valid in what follows (and which will prove very convenient in our development).\footnote{In the language of \cite{rosenfeld:desnity} and \cite{hansen-goos:fundamental}, once combined with \eqref{eq:excluded_volume_series}, \eqref{eq:B_n_multiplicative} would be called a \emph{convolution decomposition} (or simply a \emph{deconvolution}) of the excluded volume.}
\section{Circular cones}\label{sec:cones}
We denote by $\conea$ a circular cone with semi-amplitude $\alpha\in[0,\frac\pi2]$, radius $R$, and height $h$, both related through \eqref{eq:cone_R_and_h} to the slant height $L$ (see Fig.~\ref{fig:cone_text}).
\begin{figure}[h]
\centering
\includegraphics[width=0.25\linewidth]{figure2.eps}
\caption{(Color online) A circular cone with vertex in the origin $o$, semi-amplitude $\alpha$, radius $R$, height $h$, and slant height $L$.}
\label{fig:cone_text}
\end{figure}
It is a simple matter to show that the classical Minkowski functionals for $\conea$ take the explicit forms (see also (A61) and (A62) of \cite{piastra:octupolar}),
\begin{subequations}\label{eq:cone_classical_Minkowski_functionals}
\begin{align}
M[\conea]&=\pi L\left[\cos\alpha+\left(\frac\pi2+\alpha\right)\sin\alpha\right],\label{eq:cone_M}
\\
S[\conea]&=\pi L^2\sin\alpha(1+\sin\alpha),\label{eq:cone_S}
\\
V[\conea]&=\frac13\pi L^3\cos\alpha\sin^2\alpha.\label{eq:cone_V}
\end{align}
\end{subequations}
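Formulae \eqref{eq:cone_classical_Minkowski_functionals} can be checked against the degenerate limits of the cone. For $\alpha\to\frac\pi2$, $\conea$ flattens into a two-sided disk of radius $L$, for which $M=\pi^2L$, $S=2\pi L^2$, and $V=0$; for $\alpha\to0$, it degenerates into a segment of length $L$, whose total mean curvature is $\pi L$. A quick Python check of these limits (our own sanity test, not part of the paper):

```python
import math

def M_cone(L, a):
    return math.pi * L * (math.cos(a) + (math.pi / 2 + a) * math.sin(a))

def S_cone(L, a):
    return math.pi * L ** 2 * math.sin(a) * (1 + math.sin(a))

def V_cone(L, a):
    return math.pi * L ** 3 * math.cos(a) * math.sin(a) ** 2 / 3.0

L = 2.0
# alpha -> pi/2: a two-sided flat disk of radius L
a = math.pi / 2
assert math.isclose(M_cone(L, a), math.pi ** 2 * L)
assert math.isclose(S_cone(L, a), 2 * math.pi * L ** 2)
assert abs(V_cone(L, a)) < 1e-12
# alpha -> 0: a segment of length L (zero area and volume)
assert math.isclose(M_cone(L, 0.0), math.pi * L)
assert S_cone(L, 0.0) == 0.0 and V_cone(L, 0.0) == 0.0
```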
As follows easily from \eqref{eq:cone_curvatures}, the Gaussian curvature $K$ vanishes identically on all smooth components of $\partial\conea$. Moreover, the contribution of the vertex $o$ to all the integrals in \eqref{eq:M_and_S_functionals_definitions} vanishes, as can be seen by replacing $o$ with a fitting spherical cap of radius $\ve$ (whose surface area scales like $\ve^2$) and then taking the limit as $\ve\to0^+$, in complete analogy to the method used in Appendix~\ref{sec:ridge_M_and_S} to compute the extended Minkowski functionals on a circular ridge $\ridge$. The formulae \eqref{eq:ridge_M_and_S_functionals} obtained there for $\ridge$ can be directly applied here to the rim of the cone's base by simply setting $\theta_1=\frac\pi2-\alpha$ and $\theta_2=\pi$. Use of \eqref{eq:cone_R_and_h} finally leads us to
\begin{subequations}\label{eq:cone_M_S_functionals}
\begin{align}
M_n[\conea]&=\pi L\left(P_n(\sin\alpha)\cos\alpha +\sin\alpha\iap P_n(\cos\vt)d\vt\right), \label{eq:cone_M_n}
\\
M_n'[\conea]&=-2\pi L\iap\cos(\vt+\alpha)P_n(\cos\vt)\sin\vt d\vt, \label{eq:cone_M_n_p}
\\
M_n''[\conea]&=-\pi L\left(\Ja(\sin\alpha)\cos^3\alpha-\sin\alpha\iap\Ja(\cos\vt)\sin^2\vt d\vt\right), \label{eq:cone_M_n_p_p}
\\
S_n[\conea]&=\pi L^2\sin\alpha\left[P_n(\sin\alpha)+(-1)^n\sin\alpha\right], \label{eq:cone_S_n}
\\
S_n'[\conea]&=-\pi L^2\sin\alpha\iap\cos(\vt+\alpha)P_n(\cos\vt)d\vt, \label{eq:cone_S_n_p}
\\
S_n''[\conea]&=-\pi L^2\sin\alpha\iap\cos(\vt+\alpha)\Ja(\cos\vt)\sin^2\vt d\vt, \label{eq:cone_S_n_p_p}
\end{align}
\end{subequations}
for all $n\geqq2$. Inserting \eqref{eq:cone_M_S_functionals} in \eqref{eq:B_n_averages_final}, we obtain explicit, analytic formulae for the Legendre coefficients $B_n$ of the excluded volume of two congruent circular cones, $\coneu$ and $\conet$, which for completeness are recorded in \eqref{eq:cone_B_n} for the first seven indices $n\geqq1$. They are plotted in Fig.~\ref{fig:cone_B_n}
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[width=.33\linewidth]{B_Even_Plots_Lecture.eps}}
\hspace{.02\linewidth}
\subfigure[]{\includegraphics[width=.33\linewidth]{B_Odd_Plots_Lecture.eps}}
\caption{(Color online) (a) For two congruent circular cones, $\coneu$ and $\conet$, with slant height $L$ and semi-amplitude $\alpha$, the graphs of $B_n$ scaled to $L^3$ are plotted against $0\leqq\alpha\leqq\frac\pi2$ for $n=2$ (solid line), $n=4$ (dashed line), and $n=6$ (dotted line), according to \eqref{eq:cone_B_n}.
(b) For the same cones, $\coneu$ and $\conet$, the graphs of $B_n$ scaled to $L^3$ are plotted against $0\leqq\alpha\leqq\frac\pi2$ for $n=1$ (thin solid line), $n=3$ (solid line), $n=5$ (dashed line), and $n=7$ (dotted line).
In both panels, crosses represent the values computed numerically on the shape of the excluded body $\exc{\coneu}{\conet}$ reconstructed with the algorithm recalled in Appendix~\ref{sec:algorithm}.}
\label{fig:cone_B_n}
\end{figure}
as functions of $\alpha$. Inserting \eqref{eq:cone_classical_Minkowski_functionals} in \eqref{eq:isotropic_volume_average}, we also obtain the isotropic average $B_0$ in \eqref{eq:B_0}, which is plotted in Fig.~\ref{fig:B_Zero} with two possible normalizations, relative to the volume $\Vc$ of each cone delivered by \eqref{eq:cone_V} in Fig.~\ref{fig:B_Zero}(a), and relative to $L^3$ in Fig.~\ref{fig:B_Zero}(b).
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[width=.33\linewidth]{B_Zero.eps}}
\hspace{.02\linewidth}
\subfigure[]{\includegraphics[width=.33\linewidth]{BB_Zero.eps}}
\caption{(Color online) (a) The isotropic average $B_0$ as in \eqref{eq:B_0} and \eqref{eq:isotropic_volume_average} normalized to the cone's volume $\Vc$ delivered by \eqref{eq:cone_V}; it attains its minimum at $\alpha\doteq0.14\pi$.
(b) $B_0$ normalized to $L^3$ like all other coefficients $B_n$'s shown in Fig.~\ref{fig:cone_B_n}; it attains its maximum at $\alpha\doteq0.47\pi$.
In both panels, crosses represent the volumes computed numerically to benchmark the shape-reconstruction algorithm described in Appendix~\ref{sec:algorithm}.}
\label{fig:B_Zero}
\end{figure}
The even-indexed coefficients $B_n$'s are mostly negative, indicating by \eqref{eq:P_n_properties} a tendency for the corresponding terms in the sum \eqref{eq:V_e_expansion} to minimize $\evou$ at either $\vt=0$ or $\vt=\pi$, indifferently. On the contrary, the odd-indexed coefficients are mostly positive ($B_3$, in particular, is never negative), indicating a tendency for the corresponding terms in \eqref{eq:V_e_expansion} to minimize $\evou$ for $\vt=\pi$, that is, when the cones $\coneu$ and $\conet$ are in the \emph{antiparallel} configuration, with $\m_2=-\m_1$. This suggests that the excluded volume of two congruent circular cones is minimized in the antiparallel configuration, as shown by direct computation in \cite{piastra:octupolar} in accord with the general minimum property established more recently in \cite{palffy:minimum}.
The crosses superimposed on the graphs in Fig.~\ref{fig:cone_B_n} represent the values of $B_n$ extracted numerically from the volume of the excluded body $\exc{\coneu}{\conet}$, the region in space that cone $\conet$ cannot access owing to the presence of cone $\coneu$. Determining $\exc{\coneu}{\conet}$ is indeed necessary for a direct determination of $\evo{\coneu}{\conet}$, as the general proper geometric definition of the excluded volume of bodies $\bodyu$ and $\bodyt$ is precisely the volume of the excluded body $\excp$, $\evop:=V[\excp]$ (see also \cite{piastra:octupolar}). Here $\exc{\coneu}{\conet}$ was obtained from the shape-reconstruction algorithm outlined in Appendix~\ref{sec:algorithm}. Our strategy was completely different from that adopted so far in this paper. For a given $\alpha$, we reconstructed $\exc{\coneu}{\conet}$ for a number of values of the angle $\vt$ made by the cones' axes $\m_1$ and $\m_2$; we computed the excluded volume $\evou$ numerically as a function of $\vt$ by applying \eqref{eq:V_functional} to a triangulation of $\partial\exc{\coneu}{\conet}$; and we extracted from this function the coefficients $B_n$ through \eqref{eq:B_n_definitions}. To what extent the two methods agree, thus lending support to each other, is left to the reader to judge from Fig.~\ref{fig:cone_B_n}. Quantitative details about both the shape-reconstruction algorithm employed here (including its adaptation to the specific case of cones, which with their sharp edge and pointed vertex required special attention) and the way the coefficients $B_n$ were computed can be found in Appendix~\ref{sec:algorithm} below.
Figure~\ref{fig:excluded_volume_cone} shows three graphs representing the excluded volume $\evou$ of
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{V_Plots.eps}
\caption{Excluded volume $\evou$ of two congruent circular cones, $\coneu$ and $\conet$, with slant height $L$ and semi-amplitude $\alpha_0\doteq0.14\,\pi$ corresponding to the minimum value of the scaled average $\Va/\Vc$, where $\Vc$ is the volume of each cone. Two graphs, plotted against the angle $0\leqq\vt\leqq\pi$ made by the cones' axes, are delivered by \eqref{eq:V_e_expansion} truncated at $n=3$ (solid line) and $n=9$ (dashed line). The third graph (dotted line) represents the \emph{octupolar approximation} proposed in \cite{piastra:octupolar}, which interpolates the excluded volumes of parallel ($\vt=0$) and antiparallel ($\vt=\pi$) configurations.}
\label{fig:excluded_volume_cone}
\end{figure}
$\coneu$ and $\conet$ scaled to their common volume $\Vc$ (given by \eqref{eq:cone_V}) as a function of the angle $\vt$ between their axes. The semi-amplitude $\alpha$ of both cones is taken here to be $\alpha_0\doteq0.14\,\pi$, for which, as shown in Fig.~\ref{fig:B_Zero}, the isotropic average $\Va$ scaled to $\Vc$ takes on its minimum value. The graphs in Fig.~\ref{fig:excluded_volume_cone} correspond to the function in \eqref{eq:V_e_expansion} truncated at $n=3$ and $n=9$; they are both contrasted against the \emph{octupolar approximation}, which in \cite{piastra:octupolar} was shown to be rather accurate. While, by construction, the latter takes on the exact values of $\evou$ at both $\vt=0$ (parallel cones) and $\vt=\pi$ (antiparallel cones), which are $14\Vc$ and $8\Vc$, respectively, neither truncated expansion does. Actually, as expected,\footnote{Since the expansion in \eqref{eq:V_e_expansion} is an approximation in the $L^2$-norm, and not pointwise.} the convergence of the series in \eqref{eq:V_e_expansion} at these points is rather slow: for example, a computation with $61$ terms was required to obtain
\begin{equation}\label{eq:V_e_computation}
\frac\evou\Vc\doteq14.01\quad\text{and}\quad\frac\evou\Vc\doteq8.153,
\end{equation}
at $\vt=0$ and $\vt=\pi$, respectively. Thus, while for cones the explicit octupolar approximation of the excluded volume may still be a good choice, for other cylindrically symmetric convex bodies the general method proposed in this paper might be an even better choice.
\section{Spheroids}\label{sec:spheroids}
Spheroids are cylindrically symmetric ellipsoids (see Fig.~\ref{fig:spheroid}).
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[width=.25\linewidth]{spheroid-prolate.eps}}
\hspace{.1\linewidth}
\subfigure[]{\includegraphics[width=.25\linewidth]{spheroid-oblate.eps}}
\caption{A spheroid is an ellipsoid of revolution. The symmetry axis is here denoted by $\m$; $a$ and $b$ are the ellipsoid's semi-axes, in the direction of $\m$ and in the direction orthogonal to $\m$, respectively. This spheroid is said to be prolate (a) if the aspect ratio $\eta:=b/a$ is less than unity; it is said to be oblate (b) if $\eta>1$.}
\label{fig:spheroid}
\end{figure}
Letting $a$ be the semi-axis of the spheroid along the symmetry axis $\m$ and $b$ the semi-axis orthogonal to $\m$, we set
\begin{equation}\label{eq:eta_definition}
\eta:=\frac{b}{a}
\end{equation}
and call it the \emph{aspect ratio} of the body. A spheroid with aspect ratio $\eta$ will be denoted by $\sphero$ for short; it is \emph{prolate} along the symmetry axis for $0<\eta<1$ and \emph{oblate} for $\eta>1$. Clearly, for $\eta=1$, $\sphero$ reduces to a sphere of radius $a$. Making use of the explicit representation of $\sphero$ described in Appendix~\ref{sec:appendix_spheroids}, we may write the classical Minkowski functionals as
\begin{subequations}\label{eq:spheroids_calssical_M_functionals}
\begin{align}
M[\sphero]=&\pi a\left(2+\ione\frac{\eta^2}{\denu}\,du\right),\\
S[\sphero]=&2\pi a^2\eta\ione\sqrt{\denu}du,\\
V[\sphero]=&\frac{4\pi}{3}a^3\eta^2=:\Vs,
\end{align}
\end{subequations}
where $\Vs$ has been introduced as a shorthand for the spheroid's volume. It is often useful to describe how far $\sphero$ is from a sphere by defining its \emph{eccentricity} $\ecc$ as
\begin{equation}\label{eq:eccentricity}
\ecc:=
\begin{cases}
\sqrt{1-\eta^2}&\text{for}\quad 0\leqq\eta\leqq1,\\
\sqrt{1-\frac{1}{\eta^2}}&\text{for}\quad \eta\geqq1.
\end{cases}
\end{equation}
A relevant property of $\ecc$ is that the transformation $\eta\mapsto1/\eta$, which represents the \emph{reciprocal inversion} of $\sphero$ relative to its center, changes a prolate spheroid into an oblate spheroid with the same eccentricity. Though neither of the functionals \eqref{eq:spheroids_calssical_M_functionals} is invariant under reciprocal inversion of $\sphero$, all the ratios
\begin{equation}\label{eq:f_n}
f_n:=\frac{B_n}{\Vs}
\end{equation}
are expected to be so, as such a property should indeed be enjoyed by the ratio of the excluded volume $\evo{\sphero_1}{\sphero_2}$ of two congruent spheroids, $\sphero_1$ and $\sphero_2$, to their common volume.\footnote{Tjipto-Margo and Evans~\cite{tjipto-margo:onsager} credit Ho{\l}yst and Poniewierski~\cite{holyst:study} for having proved analytically such an invariance property for uniaxial ellipsoids, but we were unable to retrace a convincing analytic proof in \cite{holyst:study}. Similarly, the extension of this property to biaxial ellipsoids was established numerically in \cite{tjipto-margo:onsager} by a Monte Carlo method. Contrariwise, the explicit analytic formula obtained by Mulder~\cite{mulder:solution,mulder:isotropic} for the excluded volume of spheroplatelets allows one to prove that its ratio to the individual spheroplatelet's volume is invariant under reciprocal transformation of the three unequal lengths that characterize these bodies. In any event, as shown in \cite{rigby:hard}, even for spheroids, this property does \emph{not} apply to higher-order virial coefficients.} As a consequence, all $f_n$'s should be functions of $\ecc$ alone. The expression for $f_0$ was already obtained by Isihara~\cite{isihara:determination},
\begin{equation}\label{eq:f_0}
f_0=2+\frac32\left(1+(1-\ecc^2)\frac{\arc\ecc}{\ecc} \right)\left(1+\frac{\arcsin\ecc}{\ecc\sqrt{1-\ecc^2}}\right),
\end{equation}
which is also known as the Isihara-Ogston-Winzor formula \cite{ogston:treatment,ambrosetti:percolative}.
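As a quick numerical sanity check of \eqref{eq:f_0} (a sketch; we read the macro $\arc$ as the inverse hyperbolic tangent, which is an assumption on the notation): in the spherical limit $\ecc\to0$ both parenthesized factors tend to $2$, so $f_0\to2+\frac32\cdot2\cdot2=8$, recovering the classical result that two congruent spheres exclude eight times their volume.

```python
import math

def f0(eps):
    # eq. (f_0); the macro "arc" is read as atanh -- a notational assumption
    a = 1.0 + (1.0 - eps**2) * math.atanh(eps) / eps
    b = 1.0 + math.asin(eps) / (eps * math.sqrt(1.0 - eps**2))
    return 2.0 + 1.5 * a * b

print(f0(1e-6))   # spherical limit: approaches 8
print(f0(0.9))    # anisotropy increases the scaled average excluded volume
```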
The representation for $B_n$ in \eqref{eq:B_n_multiplicative} can appropriately be used to obtain all even-indexed functions $f_n$.\footnote{Clearly, all odd-indexed $f_n$ vanish identically since spheroids are symmetric under central inversion.} To this end, we first record the form taken on a spheroid $\sphero$ by the extended Minkowski functionals (see Appendix~\ref{sec:spheroids_B_n} for more details):
\begin{subequations}\label{eq:M_and_S_spheroids}
\begin{align}
M_n[\sphero]=&\pi a\ione P_n(\xi)\frac{\eta^2[1+\denx]}{[\denx]^{\frac32}}d\xi,\label{eq:M_n_spheroids}\\
M'_n[\sphero]=&2\pi a\ione P_n(\xi)\sqrt{\denx}d\xi,\label{eq:M'_n_spheroids}\\
M''_n[\sphero]=&\pi a\ione\Ja(\xi)\frac{\eta^2(\eta^2-1)(1-\xi^2)^2}{[\denx]^{\frac32}}d\xi,\\
S_n[\sphero]=&2\pi a^2\ione P_n(\xi)\frac{\eta^4}{[\denx]^2}d\xi,\\
S'_n[\sphero]=&\pi a^2\ione P_n(\xi)\frac{\eta^2}{\denx}d\xi,\\
S''_n[\sphero]=&\pi a^2\ione \Ja(\xi)\frac{\eta^2(\eta^2-1)(1-\xi^2)^2}{\denx}d\xi.
\end{align}
\end{subequations}
For $n=2$, we obtained
\begin{equation}\label{eq:f_2}
f_2=\frac{15}{32}\frac{1}{\ecc^4}\left(\ecc^2-3+(\ecc^2+3)(1-\ecc^2)\frac{\arc\ecc}{\ecc}\right) \left(3-2\ecc^2+\frac{4\ecc^2-3}{\ecc\sqrt{1-\ecc^2}}\arcsin\ecc\right).
\end{equation}
It is worth noting that this formula coincides with that found by Isihara~\cite{isihara:theory} for oblate spheroids ($\eta\geqq1$).\footnote{See equations (48)--(50) of \cite{isihara:theory}.} For prolate spheroids, Isihara~\cite{isihara:theory} records a result which does not comply with the requirement that $f_2$ be invariant under the transformation $\eta\mapsto1/\eta$. For this reason, we deem it to be incorrect. This should indeed not surprise us, as Isihara's method delivers $B_n$ in the form of two separate power series in $\ecc$, one for the prolate case and the other for the oblate case,\footnote{See equations (29) and (47) of \cite{isihara:theory}.} which then need to be resummed.\footnote{A similar discrepancy for $f_4$ is pointed out in Appendix~\ref{sec:spheroids_B_n} below.}
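The spherical limit offers a similar check on \eqref{eq:f_2} (again a sketch, reading the macro $\arc$ as the inverse hyperbolic tangent): both parenthesized factors vanish as $\ecc\to0$, so $f_2\to0$, consistent with $B_2=0$ for spheres.

```python
import math

def f2(eps):
    # eq. (f_2); the macro "arc" is read as atanh -- a notational assumption
    first = eps**2 - 3.0 + (eps**2 + 3.0) * (1.0 - eps**2) * math.atanh(eps) / eps
    second = (3.0 - 2.0 * eps**2
              + (4.0 * eps**2 - 3.0) * math.asin(eps) / (eps * math.sqrt(1.0 - eps**2)))
    return (15.0 / 32.0) * first * second / eps**4

print(f2(0.001), f2(0.1))   # both tiny: f2 vanishes in the spherical limit
```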
Explicit formulae for both $f_4$ and $f_6$ are reproduced in Appendix~\ref{sec:spheroids_B_n}; here we shall be content with showing in Fig.~\ref{fig:B2oB0} $B_6$, $B_4$, and $B_2$ normalized to $B_0$ as functions of $\eta$ for prolate spheroids (the ratios for oblate spheroids follow at once, as they too remain unchanged under the transformation $\eta\mapsto1/\eta$).
\begin{figure}[h]
\centering
\includegraphics[width=0.4\linewidth]{B2oB0.eps}
\caption{The plots of $B_2$ (solid line), $B_4$ (dashed line), and $B_6$ (dotted line) normalized to $B_0$ for $0\leqq\eta\leqq1$. In the limit as $\eta\to0$ (needle-shaped spheroids), the ratios shown here tend to $B_2/B_0=-5/8\doteq-0.63$, $B_4/B_0=-9/64\doteq-0.14$, and $B_6/B_0=-65/1024\doteq-0.06$.}
\label{fig:B2oB0}
\end{figure}
The graphs plotted in Fig.~\ref{fig:B2oB0} may help in deciding how many terms to retain in \eqref{eq:V_e_expansion} for any given value of $\eta$.
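The needle limits quoted in the caption of Fig.~\ref{fig:B2oB0} can also be checked independently. As $\eta\to0$ the excluded volume of two congruent prolate spheroids becomes proportional to $|\sin\gamma|$, with $\gamma$ the angle between the symmetry axes (as for Onsager's thin rods), so the ratios $B_n/B_0$ reduce to the normalized Legendre coefficients of $\sqrt{1-x^2}$, $x=\cos\gamma$. A short numerical sketch of this limit:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Thin-rod (needle) limit: excluded volume proportional to |sin(gamma)|,
# i.e. sqrt(1 - x^2) with x = cos(gamma) = m1 . m2.
x, w = L.leggauss(1000)          # Gauss-Legendre nodes/weights on [-1, 1]
fx = np.sqrt(1.0 - x**2)

def b(n):
    # Legendre coefficient: B_n is proportional to (2n+1)/2 * int f(x) P_n(x) dx
    Pn = L.legval(x, [0.0] * n + [1.0])
    return (2 * n + 1) / 2.0 * np.dot(w, fx * Pn)

ratios = [b(n) / b(0) for n in (2, 4, 6)]
print(ratios)   # recovers -5/8, -9/64, -65/1024, the values quoted in the caption
```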
For completeness, we show in Fig.~\ref{fig:B0B2oV0} the graphs of the coefficients $B_0$ and $B_2$, the former of which is normalized to $8\Vs$, the minimum excluded volume of two congruent spheroids of volume $\Vs$ (attained when they are in the parallel configuration).
\begin{figure}[h]
\centering
\subfigure[]{\includegraphics[width=0.33\linewidth]{B0oV0.eps}}
\hspace{.02\linewidth}
\subfigure[]{\includegraphics[width=0.30\linewidth]{B2oV0.eps}}
\caption{(Color online) (a) The plot of $B_0$ (normalized to $8\Vs$) for a prolate spheroid. It behaves like $3\pi/32\eta$ as $\eta\to0$. (b) The plot of $B_2$ (normalized to $\Vs$) for a prolate spheroid. It behaves like $-15\pi/32\eta$ as $\eta\to0$. Both plots are easily extended to oblate spheroids by preserving their values under the transformation $\eta\mapsto1/\eta$.}
\label{fig:B0B2oV0}
\end{figure}
Over the past few decades, hard ellipsoids have been the object of many studies revisiting the classical Onsager theory of hard cylindrical rods. In all these studies, the excluded volume of ellipsoids plays by necessity a central role (occasionally, along with some higher-order virial coefficients which have also been computed).\footnote{We refer the reader to \cite{masters:virial} for a review. Other relevant information can be gathered from the works \cite{wertheim:third,wertheim:fluids_3,singh:geometry,rigby:virial,rigby:hard,baus:finite,colot:desnity}.} More recently, a formula was also obtained in \cite{ambrosetti:percolative} for the excluded volume of two congruent oblate spheroids, elaborating on the original method of Isihara~\cite{isihara:theory}. That formula\footnote{See equation (B12) of \cite{ambrosetti:percolative}.} is not directly comparable with ours, as it expresses the excluded volume as a power series of trigonometric functions of the angle between the bodies' symmetry axes, which unlike Legendre polynomials is not a system of orthogonal functions.
\section{Conclusions}\label{sec:conclusions}
The major objective of this paper was to express explicitly the excluded volume $\evop$ of two arbitrary cylindrically symmetric, convex bodies $\bodyu$ and $\bodyt$ (with symmetry axes $\m_1$ and $\m_2$), in terms of shape functionals to be evaluated separately for $\bodyu$ and $\bodyt$. We accomplished this task by relating the coefficients $B_n$ that represent $\evop$ in the basis of Legendre polynomials $P_n(\mdm)$ to certain anisotropic volume averages which, in complete analogy with the classical Minkowski formula for the isotropic average of the excluded volume, were expressed in terms of shape functionals that extend Minkowski's. As demonstrated by the examples of cones and spheroids, which we worked out in full detail, the extended Minkowski functionals can be evaluated exactly. A large number of them might be required to obtain $\evop$ at a high degree of accuracy, but the proposed method provides them exactly in any desired number.
As witnessed by the case of cones, one motivation of our study was to explore the role of shape polarity in the excluded volume of tapered bodies. It has already been shown that when such congruent bodies $\bodyu$ and $\bodyt$ are convex and cylindrically symmetric, $\evop$ attains its minimum in the \emph{antiparallel} configuration \cite{palffy:minimum}. Therefore, one could think of assigning a \emph{shape dipole} $\dip$ to these bodies by extracting from $\evop$ the dipolar component, $B_1\mdm$, and rewriting it formally as $\dip_1\cdot\dip_2$.\footnote{Actually, for selected $\m_1$ and $\m_2$ on the symmetry axes of the congruent bodies $\bodyu$ and $\bodyt$, one could either orient the vectors $\dip_1$ and $\dip_2$ along $\m_1$ and $\m_2$, respectively, or in the opposite directions, provided their orientations are reverted in both bodies.} Instead, we proved that $B_1\equiv0$, thus making elusive the definition of any shape dipole for a tapered, cylindrically symmetric, convex body. Clearly, the antipolar property revealed by the minimum of $\evop$ remains valid, but it can in general be read off from the coefficient $B_3$, and so properly speaking it is an \emph{octupolar} effect.
Cones indeed interested us because they are tapered, but they are not the easiest cylindrically symmetric, convex bodies for which one would compute the excluded volume. Perhaps ellipsoids of revolution might come first on anyone's list. For this reason, we also applied our method to ellipsoids of revolution. Other methods have already been devised to compute the excluded volume of these bodies, such as the overlap criteria used in computer simulations \cite{vieillard-baron:phase,perram:statistical}, or the approximations stipulated in the Gaussian overlap model originally introduced in \cite{berne:gaussian},\footnote{An Onsager theory for hard-ellipsoids based on this approximation can be found in \cite{lee:onsager}, a paper well aware of the possible inaccuracies stemming from the hard-body modification of the simple Gaussian overlap model \cite{bhethanabotla:comparison}. See also \cite{singh:structure} for a recent review of the Gaussian overlap model for hard-ellipsoids.} but with the admirable exception of the classical factorized formulae of Isihara~\cite{isihara:theory} for the first Legendre coefficients $B_n$ and the closed form expression for the distance of closest approach for two ellipses in two space dimensions \cite{palffy:distance_2D},\footnote{Unfortunately, the extension to ellipsoids in three space dimensions of the method that was successful in two dimensions can only be performed numerically \cite{palffy:distance_3D}.} no explicit analytic representation was known for the excluded volume of ellipsoids of revolution. We hope that we have provided one, rooting on geometric grounds the multiplicative structure of Isihara's formulae and emending some of them.
Several other applications could be foreseen for our representation formula. In tune again with Onsager's paper \cite{onsager:effects}, we mention just one: the role of shape in steric interactions of filamentous viruses. This was indeed the original motivation of Onsager's work, which intended to provide a theoretical explanation for the liquid crystalline behavior of tobacco mosaic viruses, which were the first to be isolated and purified \cite{bawden:liquid}. An up-to-date review of the recent applications of Onsager's theory to viruses of various elongated shapes can be found in \cite{dogic:ordered}. We trust that our representation formula for the excluded volume could help make the role of viruses' shape more explicit.
\begin{acknowledgements}
One of us (EGV) is grateful to Peter Palffy-Muhoray for having raised the question about which would be the most appropriate definition of shape dipole for a cylindrically symmetric rigid body, which prompted the study presented here.
We are indebted to an anonymous Referee for a number of learned and constructive critiques of an earlier version of our paper, answering which has noticeably improved our work.
\end{acknowledgements}
\section{Introduction}
\label{sec:1}
Query complexity considers a model of computation in which a function is evaluated by an algorithm,
either with certainty or with some probability, on a classical or a quantum
computer, such that the inputs to the function can be accessed only by making queries to an oracle.
In this paradigm, the complexity of an algorithm is ``the maximum number of queries it
makes to an oracle to calculate the output for any input". Studying the classical-quantum
separation in this model is of theoretical interest.
In this paper our basic objects are Boolean functions $f: \mathbb{F}^n_2 \rightarrow \mathbb{F}_2$,
defined at all the points of $\mathbb{F}^n_2$. Since they are defined at all points, such functions are
called total Boolean functions; in this text, we will sometimes simply use the word ``function" as well.
Boolean functions are the most widely studied class of functions in the query model.
At the same time, they are also of great importance in the fields of cryptography and coding theory~\cite{pbook}.
In fact, our motivation here is to explore well-studied functions from coding and cryptography to identify
separations in terms of query complexity.
In the query complexity model, different aspects of Boolean functions are analyzed to design efficient quantum query algorithms;
examples are the symmetry and the Walsh spectrum (Fourier spectrum) of the function under consideration. In this paper we concentrate on
the $\mathbb{F}_2$ polynomial of a Boolean function, also called its algebraic normal form (ANF). Notably, the ANF is a
very important property of Boolean functions from a cryptographic standpoint~\cite{pbook}, in particular the algebraic degree.
However, ANFs are seldom used in complexity-theoretic studies. Against this backdrop, a central theme of this paper is to use the
ANF of the functions in the $\sf pdsp$ class to tie together our positive and negative results. Our exact quantum query algorithm $\Q()$
is designed by analyzing the ANF of these functions; in this regard we have created a novel untangling protocol that gives us
optimal complexity. On the other hand, the functions in the $\sf pdsp$ class have high granularity. Granularity is a property of
the Fourier spectrum of a function, and a higher value implies higher parity decision tree complexity. The high granularity
of the $\sf pdsp$ class is a direct consequence of its ANF structure. This result allows us to separate ${ QC_{\textrm{algo}}}(f)$ and
$D_{\oplus}(f)$, which, alongside the novel untangling protocol, is the main result of our paper.
The query complexity model we discuss here is technically different from querying the universal quantum gate
$U_f$ for a given Boolean function $f$. Many important quantum techniques, such as the Deutsch-Jozsa algorithm~\cite{deutsch},
Grover's search~\cite{grover}, Simon's period-finding algorithm~\cite{simon} and the hidden subgroup problem
underlying Shor's algorithm~\cite{shor}, are understood in a different setting. For a detailed discussion of the
quantum paradigm, the reader is referred to~\cite{nc}.
In the query complexity model discussed in this paper, the value of any variable can only be obtained
by querying an oracle. An oracle may be viewed as a black box that can perform a particular computation.
In the classical model, an oracle accepts an input $i \ (1 \leq i \leq n)$ and outputs the value of
the variable $x_i$. In the quantum model, the oracle is reversible and can be represented as a unitary
$O_x$ which acts as
$O_x\ket{i}\ket{\phi}=\ket{i}\ket{\phi \oplus x_i},~1 \leq i \leq n$.
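Concretely, on the computational basis states $\ket{i}\ket{\phi}$ the oracle is just a permutation matrix, as the following minimal simulation sketch illustrates ($0$-based indices are used here for convenience):

```python
import numpy as np

def oracle(xs):
    """Unitary O_x on basis |i>|phi>, i in {0,...,n-1}, phi in {0,1}:
    O_x |i>|phi> = |i>|phi XOR x_i>."""
    n = len(xs)
    O = np.zeros((2 * n, 2 * n))
    for i in range(n):
        for phi in (0, 1):
            # basis state |i>|phi> is the column index 2*i + phi
            O[2 * i + (phi ^ xs[i]), 2 * i + phi] = 1.0
    return O

O = oracle([1, 0, 1])
assert np.allclose(O @ O.T, np.eye(6))      # O_x is unitary (a permutation)
# querying i = 2 with phi = 0 returns |2>|x_2> = |2>|1>
state = np.zeros(6); state[2 * 2 + 0] = 1.0
out = O @ state
assert out[2 * 2 + 1] == 1.0
```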
The query complexity of a function is the maximum number of times this oracle
needs to be used to obtain the output value of the function $f$, over all values of the variables
$x_1, x_2, \ldots, x_n$. Naturally, the query complexity in the quantum paradigm is less than or equal
to that in the classical domain. It is easy to see that $n$ queries always suffice for a function of
$n$ variables; the question is how far this number can be reduced below the classical value using
the advantages of quantum computation.
Query complexity can be defined for both the deterministic and the probabilistic classical
computational settings, as well as in the bounded-error quantum and exact quantum
models. Of these models, the exact quantum query complexity model is perhaps
the least explored. Algorithms showing separations between the classical deterministic
and the exact quantum query models have been formulated for very few classes of functions.
In the exact quantum query model, a Boolean function $f$ needs to be evaluated correctly
for all possible inputs. The classes of functions for which a classical-quantum separation is known,
and, more importantly, for which we have exact quantum algorithms that outperform
the classical ones, are few and far between.
Most existing exact quantum query algorithms use the same method
of calculating the parity of two input bits in one query, as noted
in the work by Ambainis et al.~\cite{amb2}:
\begin{quote}
``However, the techniques for designing exact quantum algorithms are rudimentary compared to the
bounded error setting. Other than the well known `XOR' trick — constructing a quantum algorithm from
a classical decision tree that is allowed to `query' the XOR of any two bits — there are few alternate
approaches."
\end{quote}
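The trick referred to in the quote can be simulated in a few lines of linear algebra: with the index register in the superposition $(\ket{i}+\ket{j})/\sqrt2$, a single phase query (the phase form of $O_x$, obtained from the bit-flip oracle by phase kickback on a $\ket{-}$ ancilla) followed by interference reveals $x_i \oplus x_j$ deterministically. Pairing up the input bits in this way evaluates, e.g., the $n$-bit parity with $\lceil n/2 \rceil$ queries. A minimal sketch:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def parity_one_query(x0, x1):
    """Deutsch's trick: one phase query reveals x0 XOR x1 with certainty."""
    state = H @ np.array([1.0, 0.0])                      # (|0> + |1>)/sqrt(2)
    state = np.array([(-1.0) ** x0, (-1.0) ** x1]) * state  # one oracle query
    state = H @ state                                     # interfere the branches
    return int(np.argmax(np.abs(state)))                  # deterministic outcome

for x0 in (0, 1):
    for x1 in (0, 1):
        assert parity_one_query(x0, x1) == (x0 ^ x1)
```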
Even in this case, there is no generalized method for constructing parity decision trees
that exhibit deterministic-exact quantum advantage for a given class of functions.
The most striking result in this area is the example of super-linear separation
between the deterministic classical and exact quantum models, shown in \cite{amb1}.
The work by Barnum et al.~\cite{sdp} is equally important: it gives a semidefinite programming
formulation for determining the exact quantum query complexity of a given function, together with
an algorithm achieving it. Finding such separations remains the most interesting problem in this area.
In terms of the gap between $D(f)$ and $Q_E(f)$, the separations can be distinguished into two different kinds.
\begin{enumerate}
\item The first is identifying functions $f$
so that $Q_E(f) < \frac{D(f)}{2}$. As explained in~\cite{exact} this leads to
super-linear separation between $Q_E(f^k)$ and $D(f^k)$ where $f^k$ is
obtained by recursively expanding the function $f$.
\item The second is where $Q_E(f) \geq \frac{D(f)}{2}$. There is no known method of converting such a result
into a super-linear separation. However, studying separations of this kind is still of considerable importance:
there are very few results in the exact quantum query model, and it is always of interest to find new
approaches to evaluating functions in this model beyond the well known parity method, as highlighted in the quote
above~\cite{amb2}.
\end{enumerate}
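To spell out the amplification in the first item (a sketch, assuming, as in the constructions of~\cite{exact}, that $D$ is multiplicative under the composition $f^k$): since exact quantum algorithms compose,
\begin{equation*}
Q_E(f^k)\leq Q_E(f)^k=q^k, \qquad D(f^k)=D(f)^k=d^k, \qquad\text{so}\qquad \frac{D(f^k)}{Q_E(f^k)}\geq\left(\frac{d}{q}\right)^{k}>2^k
\end{equation*}
whenever $q=Q_E(f)<\frac{D(f)}{2}=\frac{d}{2}$. The ratio thus grows without bound in $k$, which is the super-linear separation, whereas $q\geq d/2$ offers no amplification by this argument beyond the factor of two already provided by the parity trick.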
Before proceeding further, let us first review the main results in this area in chronological order to
show where exactly our work is placed in the state-of-the-art literature.
\begin{itemize}
\item[]
{\bf 2012:} \cite{amb1} A superlinear separation between exact quantum query complexity ($Q_E(f)$) and deterministic query complexity
($D(f)$) was obtained. This was achieved by first obtaining a function $f$ with $Q_E(f) < \frac{D(f)}{2}$
and then recursively expanding it.
\item[]
{\bf 2013:} \cite{amb3} The exact quantum query complexities of the symmetric function classes $\sf Threshold^n_k$ and
$\sf Exact^n_k$ were obtained. In both cases $Q_E(f) >\frac{D(f)}{2}$, and thus these results provided only a linear separation.
\item[]
{\bf 2015:} \cite{amb4} Near quadratic separation between $Q_E(f)$ and $D(f)$ was obtained using the concept of pointer functions.
\item[]
{\bf 2016:} \cite{amb2} The exact quantum query complexity of the symmetric function class $\sf Exact^n_{k,l}$ was obtained.
For all these functions $Q_E(f) > \frac{D(f)}{2}$, and thus the separation was linear.
\end{itemize}
As observed from the chronology above, determining exact quantum query complexities with only a linear separation between the classical and
quantum models, that is, ${Q_E(f)} > \frac{D(f)}{2}$, remained a relevant topic even after the results
on near-quadratic separation were discovered.
Against this backdrop, we study the exact quantum query model for Boolean functions by a combined analysis of the $\mathbb{F}_2$ polynomial and
the Fourier spectrum, obtain a linear separation between $Q_E(f)$ and $D(f)$, and show that our algorithms are more efficient than any
parity decision tree method. In fact, the algorithms we design outperform generalized parity decision tree methods,
where one can obtain the parity of any $i \leq n$ variables using a single query. Another interesting characteristic of
the classes of functions we obtain is that their sizes are considerably larger than those of the symmetric function classes
for which linear separations were previously obtained. The comparison of the existing results with our findings is
summarized in Table~\ref{tab:1}.
\begin{table}[ht!]
\centering
\normalsize
\begin{tabular}{| c | c | c | c | c |}
\hline
Function & Ref. &\makecell{ Complexity of\\ Exact Quantum \\ Query Algorithm} & \makecell{Total \\ functions \\ covered for $n$} & \makecell{ Provably \\ Optimal?} \\ \hline
$\sf Exact^n_k$ & \cite{amb3} & $\max\{k,n-k\}$ &\makecell{ $n+1$ \\ (one for\\ each value of $k$ )} & yes \\ \hline
$\sf Threshold^n_k$ & \cite{amb3} & $\max\{k,n-k+1\}$ & \makecell{ $n+1$ \\ (one for \\ each value of $k$ )} & yes \\ \hline
$\sf Exact^n_{k,l}$ & \cite{amb2} & $\max\{n-k,l\}+1$ &\makecell{ $n \choose 2$ \\ (one for \\ each $\{k,l\}$ pair)} &
\makecell{For most \\ cases} \\ \hline
\makecell{ The class -- \\ $\sf pdsp$} & \makecell{our \\ work} & $\lfloor \frac{3n}{4} \rfloor$ &$\Omega(\sqrt{2^{\sqrt{n}}})$
& yes \\ \hline
\makecell{ A subclass of \\ MM type \\ Bent functions} & \makecell{our \\ work} & $\lceil \frac{5n}{8} \rceil$ & $\Omega((2^{\lfloor \frac{n}{4} \rfloor}!)^22^{2^{\lfloor \frac{n}{4} \rfloor}})$ & No \\
\hline
\end{tabular}
\caption{Advantage achieved by Query Algorithms}
\label{tab:1}
\end{table}
Before proceeding further, we first define the following notations that we use in the paper.
\begin{definition}
\label{def:1} \
\begin{enumerate}
\item {$\bm{D(f)}$:} The deterministic (classical) query complexity $D(f)$ of a Boolean function $f$
is the minimum number of queries any classical algorithm must make
to evaluate the function correctly for every possible input.
\item {$\bm{Q_E(f)}$:} The exact quantum query complexity $Q_E(f)$ of a Boolean function $f$ is the minimum
number of queries any quantum algorithm must make to evaluate the function correctly for
any possible input.
\item {$\bm{D_{\oplus}(f)}$ and $\bm{{D_{\oplus}^{(2)}}(f)}$:}
We define the generalized parity decision tree complexity $D_{\oplus}(f)$ of a function $f$ as
the minimum number of queries any algorithm must make where the algorithm can
obtain any parity $\oplus_{i \in S} x_i$ in a single query where $S$ is any subset of
$[n]=\{1, \ldots, n\}$.
If we restrict $\abs{S}=2$ then the algorithm is a parity decision tree, the
most well known kind of exact quantum query algorithm, and we denote by ${D_{\oplus}^{(2)}}(f)$
the minimum number of queries any parity decision tree needs to make to evaluate $f$.
Consequently $D_{\oplus}(f) \leq {D_{\oplus}^{(2)}}(f)$.
\item {$\bm{\Q(f)}$ and $\bm{{ QC_{\textrm{algo}}}(f)}$:}
For any Boolean function $f$, we denote by $\Q(f)$ the exact quantum query algorithm
designed in this paper to evaluate the function.
The number of queries required by $\Q(f)$ is denoted by
${ QC_{\textrm{algo}}}(f)$, which we call the query complexity of the
exact quantum query algorithm $\Q(f)$.
We call an algorithm $\Q(f)$ optimal if we have ${ QC_{\textrm{algo}}}(f)=Q_E(f)$.
It should be clearly noted that $\Q(f)$ is an exact quantum query algorithm we design,
whereas ${ QC_{\textrm{algo}}}(f)$ is a number, namely the query complexity of $\Q(f)$.
\end{enumerate}
\end{definition}
We have the following relations between the aforementioned
quantities.
\begin{fact}
For any Boolean function $f$ we have
\begin{multicols}{2}
\begin{enumerate}
\item $Q_E(f) \leq {D_{\oplus}^{(2)}}(f) \leq D(f)$.
\columnbreak
\item $ D_{\oplus}(f) \leq {D_{\oplus}^{(2)}}(f) \leq D(f)$.
\end{enumerate}
\end{multicols}
\end{fact}
One should note here that, unlike the situation of $Q_E(f) \leq {D_{\oplus}^{(2)}}(f)$, there is no strict relationship between $D_{\oplus}(f)$ and $Q_E(f)$.
In fact, for the simple parity function on $n$ variables we have $Q_E(f)=\lceil \frac{n}{2} \rceil$, whereas by definition a single
query suffices in the generalized parity decision tree model, making $D_{\oplus}(f)=1$. Let us now lay out the organization and
contribution of this paper.
\subsection{Organization \& Contribution}
In this paper we discuss the exact quantum query complexity of two classes of non-symmetric functions
which we analyze based on their $\mathbb{F}_2$ polynomial structure. We attempt to obtain $Q_E(f)$, $D_{\oplus}(f)$ and $D(f)$ of the functions
and identify the situations with ${ QC_{\textrm{algo}}}(f)=Q_E(f) < D_{\oplus}(f) \leq D(f)$.
The motivation for this is threefold.
\begin{enumerate}
\item To design non-trivial optimal exact quantum algorithms $\Q(f)$ for non-symmetric functions.
\item To identify situations where ${ QC_{\textrm{algo}}}(f) < {D_{\oplus}^{(2)}}(f)$. In fact, we discover classes for which ${ QC_{\textrm{algo}}}(f) < D_{\oplus}(f)$.
\item To design a class of algorithmic techniques that outperforms the parity decision tree method for a large number
of functions for any $n$.
\end{enumerate}
We summarize the list of our results in Table~\ref{tab:2}.
\begin{table}[H]
\centering
\normalsize
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
\makecell{ Functions} & Size & $Q_E(f)$ & ${ QC_{\textrm{algo}}}(f)$ & $D_{\oplus}(f)$ &
${D_{\oplus}^{(2)}}(f)$ & $D(f)$
\\ \hline
\makecell{ The classes \\ $\sf{pdsp}(n,\lceil \frac{3n}{4} \rceil,t+1)$, \\ $1 \leq t \leq \lfloor \frac{n}{4} \rfloor$} &
$\Omega \left( \sqrt{2^{\sqrt{n}}} \right)$ &
$\lfloor \frac{3n}{4} \rfloor$ & $\lfloor \frac{3n}{4} \rfloor$ & $n-t$ & $ n-\lfloor \frac{t}{2} \rfloor$ & $n$
\\ \hline
\makecell{ A subclass of \\ MM type \\ Bent functions} & $\Omega((2^{\lfloor \frac{n}{4} \rfloor}!)^22^{2^{\lfloor \frac{n}{4} \rfloor}})$
& $\geq \lceil \frac{n}{2} \rceil$ & $\lceil \frac{5n}{8} \rceil$ & $\lceil \frac{n}{2} \rceil +1$
& $\leq \lceil \frac{3n}{4} \rceil$ & $n$ \\ \hline
\end{tabular}
\caption{Advantage achieved by Query Algorithms}
\label{tab:2}
\end{table}
Let us now discuss these results in more detail. We have worked with two Boolean function classes,
the $\sf pdsp$ class and the Maiorana-McFarland (MM) type bent functions.
Section~\ref{sec:prelim0} explains these two important classes,
and Section~\ref{sec:prelim} describes the different unitary operations needed to build the quantum algorithms.
Section~\ref{sec:2} is on the $\sf pdsp$ class of functions. We build various algorithmic techniques needed
towards proving Theorem~\ref{th:main}, which is the main contribution of this paper. First we explain the methodologies to obtain
$D_{\oplus}(f)$ and $D(f)$ in Section~\ref{sec:2-1}. Then, in Section~\ref{sec:2-2}, we describe the oracle in the quantum query model
and the registers on which quantum query algorithms are designed. Section~\ref{sec:2-3} presents the algorithmic techniques
to design the family of exact quantum algorithms and how these techniques are modified when dealing with different functions in question
leading to the $\sf pdsp$ class. Finally, for the function $f$ in $\sf{pdsp}(n,\lceil \frac{3n}{4} \rceil,t+1)$ class, we have
obtained the following result in Theorem~\ref{th:main}.
\begin{enumerate}
\item
We first observe $D(f)=n$ using the real polynomial representation of $f$.
\item
Then we obtain that the generalized parity decision tree complexity $D_{\oplus}(f)$
is $n-t$, which we derive using the concept of Granularity in the work~\cite{PAR}
and results we obtain in Lemma~\ref{th:par}.
In this direction it is easy to see that ${D_{\oplus}^{(2)}}(f)$ is at most $n-\lfloor \frac{t}{2} \rfloor$.
Thus we have $n-t \leq {D_{\oplus}^{(2)}}(f) \leq n- \lfloor \frac{t}{2} \rfloor$ as we know $D_{\oplus}(f) \leq {D_{\oplus}^{(2)}}(f)$.
\item
Next we observe that $Q_E(f) \geq \lfloor \frac{3n}{4} \rfloor$ by reducing $f$ to the $\textrm{AND}_{\lfloor \frac{3n}{4} \rfloor}$ function.
Finally we design an algorithm that reaches this query complexity using the untangling protocol described in Theorem~\ref{lemma:1}.
Thus, for $0 \leq t \leq \lfloor \frac{n}{4} \rfloor$ this is more efficient than any generalized parity decision tree method.
\end{enumerate}
We conclude with Corollary~\ref{cor:3} showing that there are $\Omega\left( \sqrt{2^{\sqrt{n}}} \right)$ such functions,
for which this separation is achieved.
Next we discuss a subclass of the MM type bent functions.
Section~\ref{sec:4} describes the results related to the MM bent functions borrowing certain ideas
from Section~\ref{sec:2}. We first study the relation between $D(f), {D_{\oplus}^{(2)}}(f), D_{\oplus}(f)$ and $Q_E(f)$ on a small number of variables,
and then move to the general discussion. In fact, the study for $n = 4, 6$ was our initial investigation, which directed us to explore the
area in this manner. The effort starts with this subclass, which we estimate to contain a number of functions
that is doubly exponential in $n$, as explained in Section~\ref{mm:num}.
However, while we design an algorithm in Theorem~\ref{th:4} that evaluates any function in this class by making a total
of $\lceil \frac{5n}{8} \rceil$ queries, our results are incomplete in two directions.
\begin{itemize}
\item Firstly, we cannot obtain $Q_E(f)$ of the functions in this class. All we know is
$\frac{n}{2} \leq Q_E(f) \leq \lceil \frac{5n}{8} \rceil$.
\item On the other hand, as described in~\cite{parity} we have ${D_{\oplus}^{(2)}}(f) \leq \lfloor \frac{3n}{4} \rfloor$, but we do not have any relevant lower bound
on this measure. Furthermore, it is easy to see that $D_{\oplus}(f)$ is $\lceil \frac{n}{2} \rceil +1$ and thus even linear
separation between $Q_E(f)$ and $D_{\oplus}(f)$ is not possible. Coming up with tight bounds for $Q_E(f)$ and ${D_{\oplus}^{(2)}}(f)$ remains
the main open problem related to this MM class.
\end{itemize}
We conclude the paper in Section~\ref{sec:5}.
Having discussed the function-related results, we now turn to another important point concerning our contribution: let us
explain our novel algorithmic idea based on the untangling of qubits, which allows us to design algorithms whose complexity
matches $Q_E(f)$ for the $\sf pdsp$ class.
\subsubsection*{Novel Algorithmic Techniques}
The exact quantum query algorithms for the symmetric functions such as Threshold or Exact as described in Table~\ref{tab:1} are designed
by creating an equal superposition of all possible input states. Then some properties of the symmetric functions are exploited to obtain the
desired result. In this paper we design a general class of algorithms that works in a completely different way. Informally, our algorithms
are based on treating the function $f$ in question as the direct sum of two functions $g$ and $h$, i.e.,
$f(\mathbf{x})=g({\mathbf{\hat{x}}}) \oplus h({\mathbf{\tilde{x}}})$ where the variables in $\mathbf{x}$ are divided (partitioned) into two disjoint subsets ${\mathbf{\hat{x}}}$ and ${\mathbf{\tilde{x}}}$.
Then $(-1)^{g({\mathbf{\hat{x}}})}$ and $(-1)^{h({\mathbf{\tilde{x}}})}$ are evaluated as relative phases in parallel using the superposition property of quantum
computation, which in the process entangles the system. Next we design a novel way of un-entangling the system, described in
Theorem~\ref{lemma:1}. We effectively negate the entanglement due to $4$ variables using a single query in the functions we choose,
which is driven by the fact that ${\mathbf{\hat{x}}}$ and ${\mathbf{\tilde{x}}}$ do not share any variable. Repeated use of this technique is central to the
separations we achieve.
\subsection{Boolean Function Classes}
\label{sec:prelim0}
\begin{definition}[The pdsp class]
\label{def:2}
First we define a perfect direct sum function to be a function on $n$ variables
such that all the variables $x_i, 1 \leq i \leq n$ appear only once in the function's
unique algebraic normal form ($\mathbb{F}_2$ polynomial).
Then a function $f$ is said to belong to the class ${\sf pdsp}(n,l,q)$
if the variable space $\mathbf{x}=(x_1,x_2, \ldots, x_n)$ consisting of $n$ variables
can be divided into the two subspaces ${\mathbf{\hat{x}}}=(x_{r_1},x_{r_2},\ldots, x_{r_l})$ and
${\mathbf{\tilde{x}}}=(x_{r_{l+1}},x_{r_{l+2}}, \ldots ,x_{r_n})$ containing $l$ and $n-l$ variables
respectively so that
\begin{enumerate}
\item $f(\mathbf{x})= f_1({\mathbf{\hat{x}}})f_2({\mathbf{\tilde{x}}})$.
\item $f_1$ is a perfect direct sum function defined on the $l$ variables
${\mathbf{\hat{x}}}$, which consists of $q$ monomials such that each monomial consists
of at least $q$ variables.
\item $f_2$ is the product function of the $n-l$ variables in ${\mathbf{\tilde{x}}}$.
That is, $f_2({\mathbf{\tilde{x}}})=\prod\limits_{i=l+1}^n x_{r_i}$.
If $l=n$ then the function $f_2$ is not defined.
\end{enumerate}
\end{definition}
We observe that these functions have high granularity and that their $\mathbb{F}_2$ polynomial structure fits our exact quantum query algorithm, and
this leads us to Theorem~\ref{th:main}, which provides the provable separations noted in Table~\ref{tab:2}.
To see that $f$ (as in Theorem~\ref{th:main}) is indeed a function in ${\sf pdsp}(n,\lceil \frac{3n}{4} \rceil,t+1)$,
it suffices to define
${\mathbf{\hat{x}}}= \left(x_1, \ldots ,x_{\frac{n}{2}}, x_{\lfloor \frac{3n}{4} \rfloor+1}, \ldots x_n \right)$ and
${\mathbf{\tilde{x}}}=\left( x_{\frac{n}{2}+1}, \ldots x_{\lfloor \frac{3n}{4} \rfloor} \right)$. Then we have
$f_1({\mathbf{\hat{x}}})=\prod_{i=1}^{\frac{n}{2}} x_i \bigoplus g(\mathbf{x}')$
and $f_2({\mathbf{\tilde{x}}})=\prod_{j=\frac{n}{2}+1}^{\lfloor \frac{3n}{4} \rfloor} x_j$, and thus $f_1$
is a perfect direct sum function containing $t+1$ monomials so that the degree of each monomial is at least $t+1$.
Considering $g(\mathbf{x}')=\prod_{k=\lfloor \frac{3n}{4} \rfloor+1}^n x_k$, we get the function
$f(\mathbf{x})= \prod_{i=1}^{\lfloor \frac{3n}{4} \rfloor} x_i \oplus \prod_{j=\frac{n}{2}+1}^n x_j$ for which we obtain the desirable separations.
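The decomposition above can be verified by brute force on a small instance. The following sketch is our own illustration (not from the paper), taking $n=8$ so that $f(\mathbf{x})=\prod_{i=1}^{6} x_i \oplus \prod_{j=5}^{8} x_j$, ${\mathbf{\hat{x}}}=(x_1,\ldots,x_4,x_7,x_8)$ and ${\mathbf{\tilde{x}}}=(x_5,x_6)$; the function names are hypothetical.

```python
# Sanity check (our own, hypothetical names): verify that the n = 8 instance
# f(x) = x1...x6 XOR x5...x8 equals the pdsp product f1(x_hat) * f2(x_tilde).
from itertools import product

n = 8

def f(x):   # x is a tuple of n bits, 0-indexed
    m1 = all(x[i] for i in range(0, 6))     # x1 ... x6
    m2 = all(x[i] for i in range(4, 8))     # x5 ... x8
    return int(m1) ^ int(m2)

def f1(x):  # perfect direct sum part on x_hat = (x1..x4, x7, x8)
    return int(all(x[i] for i in range(0, 4))) ^ int(x[6] and x[7])

def f2(x):  # product part on x_tilde = (x5, x6)
    return int(x[4] and x[5])

assert all(f(x) == f1(x) * f2(x) for x in product((0, 1), repeat=n))
```

The check passes on all $2^8$ inputs since $(x_1x_2x_3x_4 \oplus x_7x_8)\,x_5x_6 = x_1\cdots x_6 \oplus x_5x_6x_7x_8$.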
\begin{definition}[MM type functions]
For any two positive integers $n_1$ and $n_2$ with $n_1+n_2=n$,
an MM Boolean function on $\mathbb{F}_2^n$ is defined as
$$f({\mathbf{\hat{x}}},{\mathbf{\tilde{x}}})= \phi({\mathbf{\hat{x}}}) \cdot {\mathbf{\tilde{x}}} \oplus g({\mathbf{\hat{x}}})$$
where the subspaces are
${\mathbf{\hat{x}}} \in \mathbb{F}_2^{n_1}$, ${\mathbf{\tilde{x}}} \in \mathbb{F}_2^{n_2}$,
$g$ is any Boolean function defined on $\mathbb{F}_2^{n_1}$
and $\phi$ is a map of the form $\phi: \mathbb{F}_2^{n_1} \rightarrow \mathbb{F}_2^{n_2}$.
\end{definition}
Here $a \cdot b$ is the dot product of two $n_2$ dimensional vectors, defined as
$a \cdot b=a_1b_1 \oplus a_2b_2 \ldots a_{n_2}b_{n_2}$.
If we set $n_1=n_2$ and restrict $\phi$ to be a bijective map, then all resultant
Boolean functions are bent. These are the functions with highest possible nonlinearity for any even $n$ \cite{bent1}.
Now consider the quadratic bent functions on $4$ variables, $f_{id}^4(x) = x_1x_3 \oplus x_2x_4$
and on $6$ variables, $f_{id}^6(x)=x_1x_4 \oplus x_2x_5 \oplus x_3x_6$.
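To make the bentness claim concrete, the following small script (our own numerical check, not part of the paper) computes the Walsh spectrum of $f_{id}^4$ and verifies that it is flat, i.e.\ $|W_f(a)|=2^{n/2}$ for every $a$, which is equivalent to bentness and gives the maximal nonlinearity $2^{n-1}-2^{n/2-1}=6$ for $n=4$.

```python
# Our own check: the Walsh spectrum of f(x) = x1 x3 XOR x2 x4 is flat (bent).
from itertools import product

n = 4

def f(x):
    return (x[0] & x[2]) ^ (x[1] & x[3])

def walsh(a):
    # W_f(a) = sum over x of (-1)^(f(x) XOR <a, x>)
    return sum((-1) ** (f(x) ^ (sum(ai & xi for ai, xi in zip(a, x)) % 2))
               for x in product((0, 1), repeat=n))

spectrum = [abs(walsh(a)) for a in product((0, 1), repeat=n)]
assert set(spectrum) == {2 ** (n // 2)}           # flat spectrum => bent
nonlinearity = 2 ** (n - 1) - max(spectrum) // 2  # = 6, the maximum for n = 4
```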
We observed that for any MM type bent function there is a parity decision tree of query complexity $\lceil \frac{3n}{4} \rceil$.
For the case of $f_{id}^4$, this strategy proved to be optimal, as verified through semi-definite programming
formulation of \cite{sdp}.
However, for $f_{id}^6$ we note that $Q_E(f_{id}^6)=4$, whereas the parity decision technique has a query
complexity of $5$. This motivated us to study how to design an exact quantum query algorithm $\Q$ based on the $\mathbb{F}_2$ polynomial
structure of these functions, so that it matches the $Q_E$ value for $f_{id}^6$, and then to generalize it into the novel untangling-based algorithmic technique discussed in
Theorem~\ref{lemma:1}, which has given us the efficient quantum query algorithms for the said $\sf pdsp$ classes.
\subsection{Some Unitary Matrices}
\label{sec:prelim}
\subsubsection*{$\sg{n}{i}{j}$}
This operation is also defined on an $n+1$ dimensional register.
It only performs the transformation
$~~ \ket{\mb{0}} \xrightarrow{\sg{n}{i}{j}} \frac{1}{\sqrt{2}} ( \ket{\mb{i}} +\ket{\mb{j}} )$.
The corresponding matrix can fairly simply be defined as
$\sg{n}{i}{j}=\big( \parg{n}{i}{j} \big)^*$.
Below are examples of the matrices corresponding to operations of type $\pg{n}{i}$, $\parg{n}{i}{j}$ and
$\sg{n}{i}{j}$.
\begin{multicols}{3}
$$\pg{4}{3}=
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
$$
\vfill\null
\columnbreak
$$\parg{4}{1}{3}=
\begin{bmatrix}
0 & \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} & 0 \\
0 & \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
$$
\vfill\null
\columnbreak
$$\sg{4}{0}{1}=
\begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 & 0 \\
\frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
$$
\vfill\null
\end{multicols}
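The example matrices above can be checked numerically. The sketch below (our own illustration, with variable names of our choosing) reconstructs $\pg{4}{3}$, $\parg{4}{1}{3}$ and $\sg{4}{0}{1}$, and verifies that each is unitary and that $\sg{4}{0}{1}$ indeed maps $\ket{\mb{0}}$ to $\frac{1}{\sqrt{2}}(\ket{\mb{0}}+\ket{\mb{1}})$.

```python
# Our own numerical check of the three 5-dimensional example matrices above.
import numpy as np

s = 1 / np.sqrt(2)
P4_3 = np.array([[1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 0],
                 [0, 0, 1, 0, 0],
                 [0, 1, 0, 0, 0],
                 [0, 0, 0, 0, 1]], float)
P4_13 = np.array([[0, s, 0,  s, 0],
                  [0, s, 0, -s, 0],
                  [1, 0, 0,  0, 0],
                  [0, 0, 1,  0, 0],
                  [0, 0, 0,  0, 1]])
S4_01 = np.array([[s,  s, 0, 0, 0],
                  [s, -s, 0, 0, 0],
                  [0,  0, 1, 0, 0],
                  [0,  0, 0, 1, 0],
                  [0,  0, 0, 0, 1]])

for U in (P4_3, P4_13, S4_01):
    assert np.allclose(U @ U.conj().T, np.eye(5))   # all three are unitary

e0, e1 = np.eye(5)[:, 0], np.eye(5)[:, 1]
assert np.allclose(S4_01 @ e0, s * (e0 + e1))       # |0> -> (|0> + |1>)/sqrt(2)
```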
To avoid confusion, it should be noted that in a quantum
computer built with qubits, a unitary operator working
on $z$ qubits has dimension $2^z$.
In the following remark we state how a general $n$ dimensional
register can be implemented in such a setup.
\begin{remark}
To implement a unitary operator $U$ on an $n+1$ dimensional register, it would require $\lceil \log(n+1) \rceil$ qubits and
thus the corresponding unitary matrix
$U'$ being applied on these qubits
would actually be $2^{\lceil \log(n+1) \rceil}$ dimensional.
The matrix $U'$ can be formed by adding
$2^{\lceil \log(n+1) \rceil}-(n+1)$ rows and columns
to $U$, such that entries in the
rows and columns corresponding to the basis states $\ket{i}, i \in\{ n+1,
\ldots , 2^{\lceil \log(n+1) \rceil}-1 \}$
would simply be $U'(i,i)=1$.
\end{remark}
For example, given the matrix $\pg{2}{2}$,
the matrix $\pg{2}{2}'$ that would be implemented in a $2^{\lceil \log(2+1) \rceil}=4$ dimensional system built of $2$ qubits is as follows:
\begin{multicols}{3}
$$\pg{2}{2}=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0
\end{bmatrix}
$$
\columnbreak
$$\pg{2}{2}'=
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
\columnbreak
\end{multicols}
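The embedding described in the remark can be sketched as follows; this is a minimal illustration with a helper name (\texttt{embed}) of our own. It pads $\pg{2}{2}$ to the $4$-dimensional $\pg{2}{2}'$ shown above and checks that unitarity is preserved.

```python
# Our own sketch of the remark: pad a (n+1)-dimensional unitary with identity
# rows/columns up to dimension 2^ceil(log2(n+1)), so it can act on qubits.
import numpy as np

def embed(U):
    d = U.shape[0]
    D = 2 ** int(np.ceil(np.log2(d)))
    Up = np.eye(D, dtype=U.dtype)   # U'(i, i) = 1 on the extra basis states
    Up[:d, :d] = U
    return Up

P2_2 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], float)
P2_2p = embed(P2_2)
assert P2_2p.shape == (4, 4)
assert np.allclose(P2_2p @ P2_2p.conj().T, np.eye(4))   # still unitary
```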
\subsubsection*{${\sf C_0} U$ and ${\sf C_1} U$}
The algorithm we design uses unitary matrices that are controlled
on the state of a work qubit $w_1$.
At each step we apply a set of unitaries controlled on $w_1=\ket{0}$
and another set controlled on $w_1=\ket{1}$.
Given any unitary $U$, we denote by ${\sf C_0} U$ the operation that is
$U$ controlled on $w_1=\ket{0}$.
We use the notation ${\sf C_1} U$ to denote the operation that is
$U$ controlled on $w_1=\ket{1}$.
It is easy to see that if $U$ is a unitary operation, then so are
${\sf C_0} U$ and ${\sf C_1} U$.
\subsubsection*{$\cn{a}{b}$}
This operation is the Controlled-NOT operation from register $a$ to register $b$,
with register $a$ as control and register $b$ as target.
Here either one of the registers is the query
register, or else both the registers are work qubits.
If register $a$ is the query register,
then the transformation is given by
\begin{itemize}
\item $ \ket{\mb{1}}_a\ket{0}_b \rightarrow \ket{\mb{1}}_a\ket{1}_b$
\item $ \ket{\mb{1}}_a\ket{1}_b \rightarrow \ket{\mb{1}}_a\ket{0}_b$
\end{itemize}
If both the registers are qubits, then it works as the conventional C-NOT operation
and is a $4$ dimensional unitary; otherwise it is a $2(n+1)$ dimensional
operation.
\subsubsection*{$swap(a,b)$}
This operation is simply defined as
$swap(a,b)=\cn{a}{b}~\cn{b}{a}~\cn{a}{b}$, and swaps the value of two registers $a$ and $b$,
with dimensions $d_1$ and $d_2$, so that it is defined on the
computational basis states $\ket{i} \otimes \ket{j}: i,j \leq \min (d_1,d_2)$.
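For the two-qubit case, the identity $swap(a,b)=\cn{a}{b}~\cn{b}{a}~\cn{a}{b}$ can be verified directly; the check below is our own illustration.

```python
# Our own check: CN(a,b) CN(b,a) CN(a,b) acts as SWAP on two qubits.
# Basis ordering |q_a q_b> with index 2*q_a + q_b.
import numpy as np

CN_ab = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0]], float)   # control = a, target = b
CN_ba = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0]], float)   # control = b, target = a
SWAP  = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]], float)

assert np.allclose(CN_ab @ CN_ba @ CN_ab, SWAP)
```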
\section{Results on the $\sf pdsp$ class of functions}
\label{sec:2}
In this section we build up to Theorem~\ref{th:main},
by obtaining different query complexity measures, describing the properties of
Boolean functions and the algorithmic techniques designed
for the $\sf pdsp$ class of functions.
First we obtain the deterministic query complexity $D(f)$ and
generalized parity query complexity $D_{\oplus}(f)$ of this class of functions.
\subsection{$D_{\oplus}(f)=n-t$ and $D(f)=n$ for the $\sf pdsp$ class}
\label{sec:2-1}
We first obtain the deterministic query complexity of the functions
defined in Definition~\ref{def:2} by analyzing the polynomial degree of the function.
Let ${\sf pdeg}(f)$ be the degree of the unique real multi-linear
polynomial $p: \mathbb{F}_2^n \rightarrow \mathbb{R}$ such that $f(\mathbf{x})=p(\mathbf{x})~ \forall \mathbf{x} \in \mathbb{F}_2^n$.
From~\cite{polypoly}, we know $D(f) \geq {\sf pdeg}(f)$. Then we have the following result.
\begin{theorem}
\label{th:df}
For any function $f \in {\sf pdsp}(n,l,q)$ we have ${\sf pdeg}(f)=n$.
\end{theorem}
Now let us discuss the generalized parity decision tree complexity $D_{\oplus}(f)$
for these functions. Naturally this also works as a lower bound on the parity decision
tree complexity ${D_{\oplus}^{(2)}}(f)$. We first obtain lower bounds on $D_{\oplus}(f)$
by analyzing the granularity of the functions as described in~\cite{PAR}
and then show that these bounds are in fact tight for $\sf pdsp$ class.
Let us first define the notion of granularity and the result.
\begin{definition}\cite{PAR}
\label{def:gran}
\begin{enumerate}
\item For a set $S \subseteq [n]$ the Fourier Character
at $S$ is defined as
$\chi_S(\mathbf{x})=\displaystyle \prod_{i \in S} (-1)^{x_i}$.
\item Given a function $f$ on $n$ variables its Fourier coefficient
on the $S \subseteq [n]$ is denoted as
$\hat{f}(S)= \frac{ \displaystyle \sum_{\mathbf{x} \in \{0,1\}^n} \left( (-1)^{f(\mathbf{x})}\chi_S(\mathbf{x}) \right) }{2^n}$.~~
\item The granularity of a rational number $t$ is defined as $gran(t)=k$
where $k$ is the smallest non-negative integer such that $t \times 2^k$ is an integer.
\end{enumerate}
\end{definition}
Having noted the notations used in \cite{PAR}, we now refer to the following result.
\begin{theorem}
\cite{PAR}
\label{th:par2}
The generalized parity decision tree complexity of a Boolean function $f$ on $n$ variables
satisfies $D_{\oplus}(f) \geq gran_m(f)+1$
where $gran_m(f)= \displaystyle \max_{S \subseteq [n]} gran(\hat{f}(S))$.
\end{theorem}
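As an illustration of this bound (our own small instance, not from the paper), the following sketch computes the Fourier coefficient at the empty set for the pdsp function $f(\mathbf{x})=(x_1x_2 \oplus x_3x_4)\,x_5x_6$ with $n=6$, $q=2$, and checks that its granularity is $n-q=4$, yielding $D_{\oplus}(f) \geq 5$.

```python
# Our own brute-force illustration of the granularity lower bound.
from fractions import Fraction
from itertools import product

n, q = 6, 2

def f(x):  # pdsp instance: (x1 x2 XOR x3 x4) * x5 x6
    return ((x[0] & x[1]) ^ (x[2] & x[3])) & (x[4] & x[5])

# Fourier coefficient at the empty set: sum of (-1)^f(x), divided by 2^n
fhat_empty = Fraction(sum((-1) ** f(x) for x in product((0, 1), repeat=n)),
                      2 ** n)

def gran(t):  # smallest k such that t * 2^k is an integer
    k = 0
    while (t * 2 ** k).denominator != 1:
        k += 1
    return k

assert gran(fhat_empty) == n - q   # granularity 4, so D_plus(f) >= 5
```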
We observe that the bound due to Theorem~\ref{th:par2}
is tight for any perfect direct sum function, and then also
obtain that it is tight for the $\sf pdsp$ class of functions
defined in Definition~\ref{def:2}.
Interestingly we also identify that the result for this class is even more specific,
in the sense that the granularity is maximum for $\hat{f}(\phi)$. That is,
the lower bound on $D_{\oplus}(f)$ is solely based on the number of
ones in the truth tables of the functions.
In this regard we have the following result.
\begin{lemma}
\label{th:par}
Let $f$ be a function defined on $n$ variables such that
$f \in {\sf pdsp}(n,l,q)$.
Then we have $D_{\oplus}(f)=gran_m(f)+1= gran(\hat{f}(\{ \phi \}))+1 =n-q+1$.
\end{lemma}
\begin{proof}
We prove this by first showing $gran(\hat{f}(\{ \phi \})) = n-q$, which implies $D_{\oplus}(f) \geq n-q+1$,
and then describe a simple generalized parity decision tree with complexity $n-q+1$.
For simplicity let us assume $r_i=i$.
\paragraph*{\bf $gran_m(f) \geq n-q$}
The ANF of $f_1$ can be represented as a partition of ${\mathbf{\hat{x}}}=\{x_{r_1},x_{r_2},\ldots x_{r_l}\}$
into $q$ disjoint sets. We denote these sets as $m_i, 1 \leq i \leq q$,
where $m_i$ consists of $q_i$ variables.
Then $f$ can be written as
$$f(\mathbf{x})=\left(\bigoplus_{i=1}^q \left( \prod\limits_{x_j \in m_i} x_j \right) \right) \prod\limits_{p=l+1}^n x_p.$$
\
We know that $\hat{f}(\{ \phi \})= \frac{1}{2^n}\sum\limits_{\bm{a} \in \{0,1\}^n}(-1)^{f(\bm{a})}= \frac{2^n- 2wt(f)}{2^n}$ where $wt(f)$ is the number of input points
for which the function outputs $1$.
The output of $f$ for some input is $1$ if an odd number of these
$q$ monomials evaluate to $1$ and $x_{r_p}=1,~ l+1 \leq p \leq n$.
Let us denote by $x^j$ the set of all inputs from $\{0,1\}^l$ for which $j$ of the said monomials evaluate to $1$.
If $j$ is odd then for each input in $x^j$, $f_1$ evaluates to $1$.
For each such $a \in x^j$, there is only one input $a' \in \{0,1\}^n$
for which $f$ evaluates to $1$, where $a'=a||1_{n-l}$.
We can represent the number of ones in the truth table of $f$ as
$\sum\limits_{i=0}^{\lfloor \frac{q-1}{2} \rfloor} \abs{x^{2i+1}}.$
Then we have $\abs{x^1}= \sum\limits_{i=1}^q \left( \prod\limits_{j \neq i} (2^{q_j}-1) \right)$ as
it consists of inputs for which exactly one monomial has all variables set to one, and because
of the monomial disjoint nature of the function there is no repetition in the counting.
We can express $\abs{x^1}$ as
\begin{itemize}
\item[] $\abs{x^1}=\alpha_1 2^q +q$ such that $\alpha_1$ is an integer, if $q$ is odd.
\item[] $\abs{x^1}=\alpha_1 2^q -q$ such that $\alpha_1$ is an integer, if $q$ is even.
\end{itemize}
This is because in the expansion of $\abs{x^1}$ each product term
contains a factor $(-1)^{q-1}$ ($-1$ if $q$ is even, $+1$ otherwise)
and all other terms are of the form $\pm 2^{q_{i_1} + q_{i_2} + \ldots + q_{i_j}}$.
Since $q_i \geq q~\forall i$, all these terms are integer multiples of $2^q$,
and thus their sum is also an integer multiple of $2^q$, or zero.
Now since each product term has a $(-1)^{q-1}$ and
there are $q$ terms, the expansion can be written as some $\alpha_1 2^q + (-1)^{q-1} q$.
Similarly, $\abs{x^i}$ can be expressed as $\alpha_i2^q + (-1)^{q-1} {q \choose i}$
and therefore the support set of $f$ is of the size
\
$\displaystyle \sum_{i=0}^{\lfloor \frac{q-1}{2} \rfloor} \left( \alpha_{2i+1}2^q +(-1)^{q-1}{q \choose {2i+1}} \right)
=\alpha2^q + (-1)^{q-1} 2^{q-1}$.
\
Therefore the Fourier coefficient of the function at $S=\{\phi \}$ is
\
\begin{align*}
\widehat{f}(\{\phi \})=\frac{2^n-2\left(\alpha2^q + (-1)^{q-1} 2^{q-1} \right)}{2^n}
= \frac{2^n - \alpha2^{q+1} + (-1)^q 2^q}{2^n}.
\end{align*}
Thus the granularity of the Fourier coefficient at this point is $gran(\widehat{f}(\{\phi \}))=n-q$,
and therefore $gran_m(f) \geq n-q$, which gives $D_{\oplus}(f) \geq n-q+1$.
\paragraph*{$D_{\oplus}(f) \leq n-q+1$}
We now exhibit a simple generalized parity decision tree with $n-q+1$ queries
that evaluates $f$, showing $D_{\oplus}(f) \leq n-q+1$.
Given an input $\mathbf{a}= \{a_1, a_2 ,\ldots , a_n \}$,
it first queries all but one variable from each monomial of $f_1$.
This takes $l-q$ queries.
For the monomial $m_i$ the product of these variables evaluates to
$\tilde{m_i}=\prod_{j=1}^{q_i-1} a_{i_j}$.
Then the output of $f_1$ depends on $x_{i_{q_i}}$ only if $\tilde{m_i}=1$.
Therefore the final query to evaluate $f_1$ is the linear
function $\bigoplus_{i=1}^q \tilde{m_i} x_{i_{q_i}}$ as the value of
$\tilde{m_i}$ are already calculated. Thus evaluating $f_1$ needs $l-q+1$ queries.
Now we can simply evaluate $f_2$,
which is defined on $n-l$ variables, by querying each of these variables
individually, which enables us to output the function $f(\mathbf{x})=f_1({\mathbf{\hat{x}}})f_2({\mathbf{\tilde{x}}})$.
Therefore this method requires a total of $l-q+1+n-l=n-q+1$ queries, which shows $D_{\oplus}(f) \leq n-q+1$.
Since $D_{\oplus}(f) \geq gran_m(f)+1$, $gran_m(f) \geq n-q$ and $D_{\oplus}(f) \leq n-q+1$,
this implies $D_{\oplus}(f)=gran_m(f)+1=n-q+1$.
\end{proof}
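The decision tree from the proof can be sketched on a small instance; this is our own code (with a hypothetical \texttt{query} helper that counts generalized parity queries), for the pdsp function $f(\mathbf{x})=(x_1x_2 \oplus x_3x_4)\,x_5x_6$, where $n-q+1=5$.

```python
# Our own sketch of the generalized parity decision tree from the proof above.
from itertools import product

def f(x):  # pdsp instance: (x1 x2 XOR x3 x4) * x5 x6, n = 6, l = 4, q = 2
    return ((x[0] & x[1]) ^ (x[2] & x[3])) & (x[4] & x[5])

def evaluate(x):
    queries = 0
    def query(idxs):                 # one query: parity of any subset of variables
        nonlocal queries
        queries += 1
        return sum(x[i] for i in idxs) % 2
    m1 = query([0])                  # all but the last variable of monomial x1 x2
    m2 = query([2])                  # all but the last variable of monomial x3 x4
    # one linear query XOR-ing the last variable of every "active" monomial
    idxs = ([1] if m1 else []) + ([3] if m2 else [])
    f1 = query(idxs) if idxs else 0  # f1 = 0 when no monomial is active
    f2 = query([4]) & query([5])     # f2 = x5 x6, queried variable by variable
    return f1 & f2, queries

for x in product((0, 1), repeat=6):
    value, count = evaluate(x)
    assert value == f(x) and count <= 5   # at most n - q + 1 = 5 queries
```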
Having determined $D(f)$, $D_{\oplus}(f)$ and ${D_{\oplus}^{(2)}}(f)$, we now describe
the family of exact quantum query algorithms $\Q$ that we design,
and their query complexity ${ QC_{\textrm{algo}}}(f)$.
Let us now proceed with the functionality of the oracle and the different
registers that are used by $\Q$.
\subsection{Quantum Query Algorithms}
\label{sec:2-2}
The set-up for a Quantum Query algorithm in relation to Boolean functions is as follows.
Given a Boolean function on $n$ influencing variables,
a Quantum Query Algorithm for evaluating the function is defined
on the Hilbert space $H=H^a \otimes H^q \otimes H^w$.
\begin{itemize}
\item Here $H^a$ represents an $n$ qubit register that contains the input to the function.
The inputs stored in the input register can only be accessed using the oracle $O_x$,
which operates on $H^a \otimes H^q$.
\item The Hilbert space $H^q$ is $n+1$ dimensional,
and can be indexed with the basis states $\ket{\pmb{0}}, \ket{\pmb{1}}, \ket{\pmb{2}}, \ldots, \ket{\pmb{n}}$.
This space is used to make queries to the oracle and we call this the query register $Q_n$.
\item The Hilbert space $H^w$ is used as the working memory and has no restrictions.
We define $H^w$ to be formed of some $w$ qubits,
where the basis states of a qubit is $\ket{0}$ and $\ket{1}$
\footnote{
Therefore a Quantum Query Algorithm corresponding to a function $f$ with $n$ influencing variables
is essentially a circuit defined on the Hilbert space $H$ of $n+ \lceil \log (n+1) \rceil +w $ qubits
with the restriction that the $n$ qubits corresponding to the input register
can only be accessed through an oracle.}.
\end{itemize}
The oracle $O_x$ works on the space $H^a \otimes H^q$ in the following way.
\begin{itemize}
\item $1 \leq i \leq n : O_x \ket{\mathbf{x}}\ket{\pmb{i}}\ket{w}=(-1)^{x_i} \ket{\mathbf{x}}\ket{\pmb{i}}\ket{w}$.
\item $i=0 : O_x \ket{\mathbf{x}}\ket{\pmb{0}}\ket{w}=\ket{\mathbf{x}}\ket{\pmb{0}}\ket{w}$.
\end{itemize}
Since the input register remains unchanged throughout the algorithm,
we describe our algorithm on $H^q \otimes H^w$,
and describe the working of the oracle as $O_x\ket{\pmb{i}}=(-1)^{x_i}\ket{\pmb{i}}, ~ 1 \leq i \leq n$
and $O_x\ket{\pmb{0}}=\ket{\pmb{0}}$.
An algorithm that uses the oracle $k$ times can be expressed as a series of unitaries
$U_0, U_1, \ldots U_k$ applied on $H^q \otimes H^w$ with an oracle access between
each $U_i$ and $U_{i+1},~ 0 \leq i \leq k-1$.
The algorithm starts with the state $\ket{\psi}=\ket{\pmb{0}}\ket{0}\ldots \ket{0}$
and finally reaches the state $U_k O_x U_{k-1} \ldots U_1 O_x U_0 \ket{\psi}$,
on which some measurement is performed and the output is decided
depending on some predefined condition on the result.
An exact quantum query algorithm is one which evaluates a function correctly for
any input. The exact quantum query complexity ($Q_E$) of a function is the minimum
number of queries an exact quantum query algorithm needs to make in the worst case
to evaluate the function on every input.
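As a minimal concrete instance of this model (our own simulation, reusing the operators $\sg{2}{1}{2}$ and $\parg{2}{1}{2}$ introduced earlier), the sketch below evaluates $f(x_1,x_2)=x_1 \oplus x_2$ exactly with a single query: prepare $\frac{1}{\sqrt{2}}(\ket{\mb{1}}+\ket{\mb{2}})$, apply the phase oracle $O_x$, rotate back, and measure.

```python
# Our own one-query exact algorithm for x1 XOR x2 on the 3-dimensional register.
import numpy as np

s = 1 / np.sqrt(2)
# P^2_{1,2}: (|1> + |2>)/sqrt(2) -> |0>,  (|1> - |2>)/sqrt(2) -> |1>,  |0> -> |2>
P = np.array([[0, s,  s],
              [0, s, -s],
              [1, 0,  0]])
S = P.T          # S^2_{1,2} = (P^2_{1,2})^*, real-valued so just the transpose
e0 = np.eye(3)[:, 0]

for x1 in (0, 1):
    for x2 in (0, 1):
        O = np.diag([1.0, (-1.0) ** x1, (-1.0) ** x2])  # phase oracle O_x
        state = P @ O @ S @ e0                          # one query in total
        outcome = int(np.argmax(np.abs(state)))         # measurement is deterministic
        assert outcome == x1 ^ x2
```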
\subsubsection{The workspace of $\Q$}
The workspace of the algorithms that we design consists of
$Q_n$ and $l+1$ qubits for some $l$.
In this paper we denote the basis states of the query register with
$\ket{\pmb{0}}_0,\ket{\pmb{1}}_0,\ldots, \ket{\pmb{n}}_0$.
The qubits are denoted by $w_1$ through $w_{l+1}$.
We denote the computational basis states of the
$i$-th work qubit by $\ket{0}_i$ and $\ket{1}_i$.
Thus we describe this system with the basis states
$$\ket{\pmb{a_0}}_0 \bigotimes_{i=1}^{l+1}\ket{a_i}_i,
~a_0 \in \{0,1,\ldots, n\},
~ a_i \in \{0,1\}~ \forall i \in \{1,\ldots ,l+1 \}.$$
\subsection{Constructing $\Q$ leading to the $\sf pdsp$ class}
\label{sec:2-3}
\begin{remark}
\label{rem:n}
From here on we assume $n \equiv 2 \bmod 4$. This is simply to reduce the tediousness
of the proofs. For the other cases the algorithms and the bounds develop in an almost identical manner, conforming to the same generalized query complexity value.
One can refer to the extended version of this work~\cite{self} (uploaded as an earlier arXiv version) to view the simple modifications needed to incorporate the other cases.
\end{remark}
The general flow of the algorithms is as follows.
\begin{itemize}
\item The function is expressed as $f(\mathbf{x})=g({\mathbf{\hat{x}}}) \oplus h({\mathbf{\tilde{x}}})$
where ${\mathbf{\hat{x}}}$ and ${\mathbf{\tilde{x}}}$ are two subspaces that form a disjoint partition of $\mathbf{x}$.
\item We then start with the state $\ket{\pmb{0}}_0 \otimes _{i=1}^{s+1} \ket{0}_i$
where $s$ is dependent on the structure of the function.
\item We apply a Hadamard gate on the first work qubit $w_1$ to obtain the state
\begin{equation*}
\ket{\psi_0}=
\frac{1}{\sqrt{2}} \left( \ket{\pmb{0}}_0\ket{0}_1 \otimes _{i=1}^s \ket{0}_{i+1}+
\ket{\pmb{0}}_0 \ket{1}_1 \otimes _{i=1}^s \ket{0}_{i+1} \right).
\end{equation*}
\item Then we apply certain transformations to obtain a state of the form
\begin{equation}
\label{eq:01}
\ket{\psi_f}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\pmb{0}}_0\ket{0}_1 \otimes _{i=1}^s \ket{x_{r(i)}}_{i+1}+
(-1)^{h({\mathbf{\tilde{x}}})} \ket{\pmb{0}}_0 \ket{1}_1 \otimes _{i=1}^s \ket{x_{r(l+i)}}_{i+1} \right)
\end{equation}
using some $k \leq s$ queries.
Here $r(i), 1 \leq i \leq 2s$, are values of an injective map $r: [n] \rightarrow [n]$.
\end{itemize}
At this stage if we had $x_{r(l+i)}=x_{r(i)}=m_i \text{ (say) } \forall i$ we could write the
state as
$
\ket{\pmb{0}}_0 \frac{1}{\sqrt{2}}
\left( (-1)^{g({\mathbf{\hat{x}}})} \ket{0}_1 + (-1)^{h({\mathbf{\tilde{x}}})} \ket{1}_1 \right)
\bigotimes _{i=1}^s \ket{m_i}_{i+1}
$
and we could simply apply a Hadamard gate on $w_1$ to obtain
$
\ket{\pmb{0}}_0 \frac{1}{\sqrt{2}}
\kett*{{g({\mathbf{\hat{x}}})} \oplus {h({\mathbf{\tilde{x}}})}} \bigotimes _{i=1}^s \ket{m_i}_{i+1}
$
(ignoring global phase)
and measuring $w_1$ would suffice.
However we do not have any way of ensuring $x_{r(i)}=x_{r(l+i)}$ which leaves the state $\ket{\psi_f}$
in an entangled form. Here we design a new un-entangling protocol that finally gives us the separations.
\subsubsection{The un-entangling protocol}
Our algorithm is currently in the state
$$
\ket{\beta_0}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\pmb{0}}_0\ket{0}_1 \otimes _{i=1}^s \ket{x_{r(i)}}_{i+1}+
(-1)^{h({\mathbf{\tilde{x}}})} \ket{\pmb{0}}_0 \ket{1}_1 \otimes _{i=1}^s \ket{x_{r(l+i)}}_{i+1} \right).
$$
Here it is important to note that $x_{r(i)} \in {\mathbf{\hat{x}}}$ and $x_{r(l+i)} \in {\mathbf{\tilde{x}}}$, that is, the possible
values of the qubits in the superposed states are also decided by our partition of $\mathbf{x}$ into
${\mathbf{\hat{x}}}$ and ${\mathbf{\tilde{x}}}$.
If the system was in a product state at this stage, we could have simply obtained the parity
of the phases $(-1)^{g({\mathbf{\hat{x}}})}$ and $(-1)^{h({\mathbf{\tilde{x}}})}$, which would have given us the desired outcome.
However, the system is entangled as the value of $x_{r(i)}$ may differ depending on the input
on which we have no control. At this stage we design a technique of untangling two qubits
deterministically using a single query. This can be summarized as follows.
\begin{theorem}
\label{lemma:1}
Let a quantum query algorithm be in the state
$$\ket{\gamma} =\frac{1}{\sqrt{2}} \big(\ket{\mb{x_a}}_0\ket{0}_1\ket{x_b}_2 \ket{W_1} + \ket{\mb{x_c}}_0\ket{1}_1\ket{x_d}_2\ket{W_2} \big)$$
Here $x_a$, $x_b$, $x_c$ and $x_d$ are inputs to a function corresponding to an oracle.
Then this state can be transformed to
$$\ket{\gamma'}=(-1)^{x_b} \frac{1}{\sqrt{2}} \big(\ket{\mb{x_b}}_0\ket{0}_1\ket{x_d}_2\ket{W_1} + \ket{\mb{x_b}}_0\ket{1}_1\ket{x_d}_2\ket{W_2} \big)$$
using a single query to the oracle.
Here $\ket{W_1}$ and $\ket{W_2}$ represent any two arbitrary $m$-qubit states.
\end{theorem}
\begin{proof}
We define a protocol, $\sf untangle$, which enables the claimed
transformation by making a single query to the oracle.
We first define the unitaries $U_1$ and $U_2$ that act on $Q_n$.
The structure of $\sf untangle$ is as follows.
First ${\sf C_0} U_1$ and ${\sf C_1} U_2$ are applied,
followed by the oracle $O_x$ and then
${\sf C_0} \parg{n}{a}{d}$ and ${\sf C_1} \parg{n}{b}{c}$.
That is, we define
$${\sf untangle}= \left( {\sf C_0} \parg{n}{a}{d}~{\sf C_1} \parg{n}{b}{c}~O_x~{\sf C_0} U_1~{\sf C_1} U_2 \right),$$
and show that $\ket{\gamma} \xrightarrow{\sf untangle} \ket{\gamma'}$.
We denote $\ket{\mb{x_a}}_0\ket{0}_1\ket{x_b}_2 \ket{W_1}= \ket{\gamma_1}$ and
$\ket{\mb{x_c}}_0\ket{1}_1\ket{x_d}_2 \ket{W_2}= \ket{\gamma_2}$. Then
$\ket{\gamma}=\frac{1}{\sqrt{2}}\left( \ket{\gamma_1} +\ket{\gamma_2} \right)$.
Let us now observe the evolution of the two states $\ket{\mb{x_a}}_0\ket{0}_1\ket{x_b}_2$ and
$\ket{\mb{x_c}}_0\ket{1}_1\ket{x_d}_2$ individually,
depending on the state of the $w_1$.
We start with the case when $w_1=\ket{0}$.
$U_1$ can be viewed as the composition of two operations $U_{10}$ and $U_{11}$.
$U_{10}$ and $U_{11}$ act on the register $Q_n$
depending on whether $w_2=\ket{0}$ or $\ket{1}$, i.e., $x_b=0$ or $x_b=1$,
respectively.
That is, $U_{10}$ and $U_{11}$ are operators acting on $H^q \otimes H_2$.
Therefore at any point, only one of the unitaries actually performs
its transformation, depending on the value of $x_b$. These transformations are
defined as follows.
\begin{multicols}{2}
\paragraph{$U_{10}$}
\begin{enumerate}
\item $\ket{\mb{0}}_0 \rightarrow \frac{1}{\sqrt{2}} (\ket{\mb{a}}_0+\ket{\mb{d}}_0)$
\item $\ket{\mb{1}}_0 \rightarrow \frac{1}{\sqrt{2}} (-\ket{\mb{a}}_0+\ket{\mb{d}}_0)$
\end{enumerate}
\columnbreak
\paragraph{\bf $U_{11}$}
\begin{enumerate}
\item $\ket{\mb{0}}_0 \rightarrow \frac{1}{\sqrt{2}} (-\ket{\mb{a}}_0+\ket{\mb{d}}_0)$
\item $\ket{\mb{1}}_0 \rightarrow \frac{1}{\sqrt{2}} (\ket{\mb{a}}_0+\ket{\mb{d}}_0)$
\end{enumerate}
\end{multicols}
That is,
\begin{itemize}
\item $\ket{\mb{x_a}}_0 \xrightarrow{U_{10}} \frac{1}{\sqrt{2}} ( (-1)^{x_a} \ket{\mb{a}}_0+\ket{\mb{d}}_0)$
\item $\ket{\mb{x_a}}_0 \xrightarrow{U_{11}} \frac{1}{\sqrt{2}} ((-1)^{x_a+1} \ket{\mb{a}}_0+\ket{\mb{d}}_0)$
\end{itemize}
The oracle is then applied on ${\sf C_0} U_{10} {\sf C_0} U_{11} \ket{\gamma_1}$,
followed by the unitary operation ${\sf C_0} \parg{n}{a}{d}$.
We now observe the state $\big( {\sf C_0} \parg{n}{a}{d} O_x {\sf C_0} U_{10} {\sf C_0} U_{11} \big) \ket{\gamma_1}$,
depending on the values of $x_a$ and $x_b$, and compare the resultant
state with $$(-1)^{x_b} \ket{\mb{x_b \oplus x_d}}_0\ket{0}_1\ket{x_b}_2.$$
We tabulate the comparisons for both $x_b=0$ and $x_b=1$ in
Table~\ref{tab:3}.
The transformations ${\sf C_0} U_{10}, {\sf C_0} U_{11}, O_x$ and ${\sf C_0} \parg{n}{a}{d}$ only act on
the query register, depending on the values of the qubits $w_1$ and $w_2$,
which remain unaltered throughout. Therefore we only show the evolution
of the query register.
\begin{table}[H]
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\multicolumn{1}{l}{$\bf x_b=0:$}\\
\hline
$x_a$ & ${\sf C_0} U_{10}\ket{\gamma_1}$ & $O_x {\sf C_0} U_{10}\ket{\gamma_1}$ & ${\sf C_0} \parg{n}{a}{d} O_x {\sf C_0} U_{10} \ket{\gamma_1}$ & $\ket{\mb \beta}$ \\ \hline
$0$ & \makecell{$\frac{1}{\sqrt{2}}\ket{\mb{a}}_0$ \\+ $ \frac{1}{\sqrt{2}} \ket{\mb{d}}_0$}
& \makecell{$\frac{1}{\sqrt{2}} (-1)^{x_a}\ket{\mb{a}}_0$ \\ + $\frac{1}{\sqrt{2}} (-1)^{x_d}\ket{\mb{d}}_0$}
& \makecell{$(-1)^{x_a}\ket{\mb{x_a \oplus x_d}}_0$\\= $\ket{\mb{x_d}}_0$}
& $\ket{\mb{x_d}}_0$ \\ \hline
$1$ & \makecell{$-\frac{1}{\sqrt{2}} \ketm{a}$\\+$\frac{1}{\sqrt{2}} \ketm{d}$}
& \makecell{$\frac{1}{\sqrt{2}} (-1)^{x_a+1}\ketm{a}$ \\+ $\frac{1}{\sqrt{2}} (-1)^{x_d}\ketm{d}$}
& \makecell{$(-1)^{x_a+1}\ketm{x_a \oplus x_d \oplus 1}$\\= $\ketm{x_d}$}
& $\ketm{x_d}$ \\ \hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\multicolumn{1}{l}{$\bf x_b=1:$}\\
\hline
$ x_a $ & ${\sf C_0} U_{11}\ket{\gamma_1}$ & $O_x {\sf C_0} U_{11}\ket{\gamma_1}$ & ${\sf C_0} \parg{n}{a}{d} O_x {\sf C_0} U_{11} \ket{\gamma_1}$ & $\ket{\mb \beta}$ \\ \hline
$0$ & \makecell{$-\frac{1}{\sqrt{2}}\ketm{a}$ \\+ $ \frac{1}{\sqrt{2}} \ketm{d}$}
& \makecell{$\frac{1}{\sqrt{2}} (-1)^{x_a+1}\ketm{a}$ \\ + $\frac{1}{\sqrt{2}} (-1)^{x_d}\ketm{d}$}
& \makecell{$(-1)^{x_a+1}\ketm{x_a \oplus x_d \oplus 1}$\\= $-\ketm{x_d \oplus 1}$}
& $-\ketm{x_d \oplus 1}$ \\ \hline
\ $1$ & \makecell{$\frac{1}{\sqrt{2}} \ketm{a}$\\+$\frac{1}{\sqrt{2}} \ketm{d}$}
& \makecell{$\frac{1}{\sqrt{2}} (-1)^{x_a}\ketm{a}$ \\+ $\frac{1}{\sqrt{2}} (-1)^{x_d}\ketm{d}$}
& \makecell{$(-1)^{x_a}\ketm{x_a \oplus x_d }$\\= $-\ketm{x_d \oplus 1}$}
& $-\ketm{x_d \oplus 1}$ \\ \hline
\end{tabular}
\end{center}
\caption{Evolution of $\ket{\gamma_1}$ and comparison
with $\ket{\mb \beta}=(-1)^{x_b}
\ketm{x_b \oplus x_d}$}
\label{tab:3}
\end{table}
Therefore in all the cases the state post these transformations is
$$(-1)^{x_b} \ketm{x_b \oplus x_d} \ket{0}_1\ket{x_b}_2.$$
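This case analysis can be replayed numerically. The sketch below is our own simulation (with the assumed choices $n=4$, $a=1$, $d=3$, reusing the matrix $\parg{4}{1}{3}$ shown earlier, and a hypothetical helper \texttt{U1} for $U_{10}$/$U_{11}$); it checks all eight $(x_a,x_b,x_d)$ combinations of the $w_1=\ket{0}$ branch against the claimed state $(-1)^{x_b}\ketm{x_b \oplus x_d}$.

```python
# Our own replay of Table 3: the w1 = |0> branch of the untangle protocol.
import numpy as np

s = 1 / np.sqrt(2)
n, a, d = 4, 1, 3
e = np.eye(n + 1)

# P^4_{1,3}: maps (|1> + |3>)/sqrt(2) to |0> and (|1> - |3>)/sqrt(2) to |1>
P_ad = np.array([[0, s, 0,  s, 0],
                 [0, s, 0, -s, 0],
                 [1, 0, 0,  0, 0],
                 [0, 0, 1,  0, 0],
                 [0, 0, 0,  0, 1]])

def U1(xb, v):
    # U_{10} (xb = 0) or U_{11} (xb = 1): both send |0>, |1> into span{|a>, |d>}
    sign = (-1) ** xb
    col0 = s * (sign * e[:, a] + e[:, d])    # image of |0>
    col1 = s * (-sign * e[:, a] + e[:, d])   # image of |1>
    return v[0] * col0 + v[1] * col1

for xa, xb, xd in np.ndindex(2, 2, 2):
    # phase oracle with x_a at position a, x_d at position d, other bits 0
    O = np.diag([1.0, (-1.0) ** xa, 1.0, (-1.0) ** xd, 1.0])
    out = P_ad @ (O @ U1(xb, e[:, xa]))
    assert np.allclose(out, (-1) ** xb * e[:, xb ^ xd])
```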
Now we describe the evolution of the state $\ketm{x_c}\ket{1}_1\ket{x_d}_2$.
As in the previous case, we apply a unitary ${\sf C_1} U_2$, then the oracle
is queried, which is followed by ${\sf C_1} \parg{n}{b}{c}$.
We define $U_2$ as the composition of two unitary operators $U_{20}$ and $U_{21}$,
defined on $H^q \otimes H_2$.
Similar to $U_{10}$ and $U_{11}$, these are operators
that transform the query register depending on $w_2=\ket{0}$
and $\ket{1}$, respectively.
The transformations due to $U_{20}$ and $U_{21}$ are as follows.
\begin{multicols}{2}
\paragraph{$U_{20}$}
\begin{enumerate}
\item $\ketm{0} \rightarrow \frac{1}{\sqrt{2}} (\ketm{b}+\ketm{c})$
\item $\ketm{1} \rightarrow \frac{1}{\sqrt{2}} (\ketm{b}-\ketm{c})$
\end{enumerate}
\columnbreak
\paragraph{\bf $U_{21}$}
\begin{enumerate}
\item $\ketm{0} \rightarrow \frac{1}{\sqrt{2}} (\ketm{b}-\ketm{c})$
\item $\ketm{1} \rightarrow \frac{1}{\sqrt{2}} (\ketm{b}+\ketm{c})$
\end{enumerate}
\end{multicols}
That is
\begin{itemize}
\item $\ketm{x_c} \xrightarrow{U_{20}} \frac{1}{\sqrt{2}} (\ketm{b}+(-1)^{x_c}\ketm{c}) $
\item $\ketm{x_c} \xrightarrow{U_{21}} \frac{1}{\sqrt{2}} (\ketm{b}+(-1)^{x_c+1}\ketm{c}) $
\end{itemize}
The oracle is applied on ${\sf C_1} U_{21} {\sf C_1} U_{20} \ket{\gamma_2}$
and on the resultant state, \\ $O_x {\sf C_1} U_{21} {\sf C_1} U_{20} \ket{\gamma_2}$ we apply ${\sf C_1} \parg{n}{b}{c}$.
We observe the evolution for all possible $\{x_b,x_c,x_d \}$ tuples and compare
the final state with $(-1)^{x_b} \ketm{x_b \oplus x_d}\ket{1}_1\ket{x_d}_2$.
We again list solely the evolution of the query register
in Table~\ref{tab:4}, as the other registers
remain unchanged.
\begin{table}[H]
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$x_d$ & $ {\sf C_1} U_{21} {\sf C_1} U_{20}\ket{\gamma_2}$ & $O_x {\sf C_1} U_{21} {\sf C_1} U_{20}\ket{\gamma_2}$
& \makecell{${\sf C_1} \parg{n}{b}{c} O_x$ \\ $ {\sf C_1} U_{21} {\sf C_1} U_{20} \ket{\gamma_2}$} & $\ket{\mb \beta}$ \\ \hline
$0$ & \makecell{$\frac{1}{\sqrt{2}}\ketm{b}$ \\ $+(-1)^{x_c} \frac{1}{\sqrt{2}} \ketm{c}$}
& \makecell{$\frac{1}{\sqrt{2}} (-1)^{x_b}\ketm{b}$ \\ $+\frac{1}{\sqrt{2}} (-1)^{2x_c}\ketm{c}$}
& \makecell{$(-1)^{x_b}\ketm{x_b}$}
& $(-1)^{x_b}\ketm{x_b}$ \\ \hline
$1$ & \makecell{$\frac{1}{\sqrt{2}}\ketm{b}$ \\ $+ (-1)^{x_c+1} \frac{1}{\sqrt{2}} \ketm{c}$}
& \makecell{$\frac{1}{\sqrt{2}} (-1)^{x_b}\ketm{b}$ \\ $+\frac{1}{\sqrt{2}} (-1)^{2x_c+1}\ketm{c}$}
& \makecell{$(-1)^{x_b}\ketm{x_b \oplus 1}$}
& $(-1)^{x_b}\ketm{x_b \oplus 1}$ \\ \hline
\end{tabular}
\end{center}
\caption{Evolution of $\ket{\gamma_2}$ and comparison
with $\ket{\mb \beta}=(-1)^{x_b}
\ketm{x_b \oplus x_d}$}
\label{tab:4}
\end{table}
Therefore, in all cases the state after these transformations is
$$(-1)^{x_b} \ketm{x_b \oplus x_d} \ket{1}_1\ket{x_d}_2.$$
We now look at the collective effect of the transformations
${\sf C_0} U_1$, ${\sf C_1} U_2$, $O_x$, ${\sf C_0} \parg{n}{a}{d}$ and ${\sf C_1} \parg{n}{b}{c}$.
The state at start was
$$\frac{1}{\sqrt{2}} \big(\ketm{x_a}\ket{0}_1\ket{x_b}_2\ket{W_1} + \ketm{x_c}\ket{1}_1\ket{x_d}_2\ket{W_2} \big).$$
The state after these operations are applied is
$$ \frac{1}{\sqrt{2}} \big( (-1)^{x_b}\ketm{x_b \oplus x_d}\ket{0}_1\ket{x_b}_2\ket{W_1} + (-1)^{x_b}\ketm{x_b \oplus x_d}\ket{1}_1\ket{x_d}_2\ket{W_2} \big) .$$
We now apply the operations ${\sf C_0} \cn{Q_n}{w_2}$
followed by $\cn{w_2}{Q_n}$,
evolving the system to
$
(-1)^{x_b} \frac{1}{\sqrt{2}} \big( \ketm{x_b}\ket{0}_1\ket{x_d}_2 \ket{W_1}+ \ketm{x_b}\ket{1}_1\ket{x_d}_2 \ket{W_2} \big).
$
and this completes the step.
This also shows that for this method the qubit $w_2$ can be swapped with any other work qubit;
the method is indifferent to this choice.
\end{proof}
Observe that this subroutine does not depend on the function
we are dealing with. However, the advantage is most prominent
for the classes of functions that we discuss.
Given the general framework of this technique, it is an interesting problem to check if this technique can have applications in other black box problems in the quantum paradigm as well as if this methodology can be further optimized in the bounded error quantum model.
Let us now state the generalized untangling routine that forms part of the exact quantum query algorithm.
This is simply obtained by applying the untangling protocol many times, each time
untangling two new qubits. We omit this proof for brevity.
\begin{lemma}
\label{lemma:3}
Consider a quantum query algorithm defined on the variables $\mathbf{x}=(x_1,x_2, \ldots ,x_n)$, and let
${\mathbf{\hat{x}}}$ and ${\mathbf{\tilde{x}}}$ be two subsets that form a disjoint partition of $\mathbf{x}$.
Then, with $s=2t$,
the state
$$ \ket{\beta_0}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{i=1}^{s}\ket{x_{r_i}}_{i+1}
+
(-1)^{h({\mathbf{\tilde{x}}})}\ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=1}^{s}\ket{x_{r_{s+j}}}_{j+1} \right) $$
can be evolved to the state $\ket{\beta_f}$ using the protocol ${\sf untangle^s_n}$,
where,
$$\ket{\beta_f}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\bm{0}}_0 \ket{0}_1
+
(-1)^{h({\mathbf{\tilde{x}}})}\ket{\bm{0}}_0 \ket{1}_1
\right)
\bigotimes_{i=1}^{t} \left(\kett*{x_{r(2i)}}_{2i} \kett*{x_{r(s+2i)}}_{2i+1} \right).
$$
by making $t$ queries to the oracle $O_x$.
\end{lemma}
Using this protocol on the state $\ket{\psi_f}$ described in Equation~\eqref{eq:01}
gives us the state using a further $t$ queries:
$$
\ket{\psi_{end'}}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\pmb{0}}_0 \ket{0}_1 +
(-1)^{h({\mathbf{\tilde{x}}})} \ket{\pmb{0}}_0 \ket{1}_1 \right)
\bigotimes_{i=1}^{t} \left(\kett*{x_{r(2i)}}_{2i} \kett*{x_{r(s+2i)}}_{2i+1} \right).$$
Applying a Hadamard gate and then measuring $w_1$ in the computational basis gives us the output after a total of $k+ \lceil \frac{s}{2} \rceil$ queries.
The efficiency of the algorithm relies on how well we can partition $\mathbf{x}$ into ${\mathbf{\hat{x}}}$ and ${\mathbf{\tilde{x}}}$ and then choose $k$
and $s$ properly.
Let us now go over the rest of the lemmas, subroutines and intermediate results that we obtain en route.
\subsubsection{The complete algorithm}
We start with $\ket{\psi_0}$ where $l=k$ and define the
following transformation.
\begin{lemma}
\label{lemma:0}
Let $f(\mathbf{x})$ be a Boolean function on $n$ variables that is evaluated
using an algorithm $\Q(f)$ with the register $Q_n$ and $k$ qubits of working memory.
Then there exists a transformation $acq(i-1)$ which transforms the state
$\ket{\psi_{i-1}}$ to $\ket{\psi_i}$ by making a single query to the oracle,
where $\ket{\psi_i}$ is defined as follows.
\
\begin{align*}
\ket{\psi_i}=
\frac{1}{\sqrt{2}} \left( \ket{\bm{0}}_0 \ket{0}_1
\bigotimes_{j=1}^{i} \ket{x_j}_{j+1}
\bigotimes_{j=i+2}^{k} \ket{0}_{j}
+
\ket{\bm{0}}_0 \ket{1}_1
\bigotimes_{j=1}^{i} \ket{x_{k+j}}_{j+1}
\bigotimes_{j=i+2}^{k} \ket{0}_{j}
\right).
\end{align*}
\end{lemma}
\begin{proof}
This is very similar to the $\sf acquire(i)$ transformation shown in Lemma~\ref{lemma2}, and one can refer to it for a more
detailed view of a similar process.
Here the difference is that in this case the value of two
variables are stored in the qubits in the entangled system
with each query.
We show that this transformation can be achieved by using the $acq(i-1)$ transformation defined as the
sequential application of the following unitaries and the oracle in the given order.
$${\sf C_0}\sg{n}{0}{i},~ {\sf C_1}\sg{n}{0}{k+i},~ O_x,~ {\sf C_0}\parg{n}{0}{i},~
{\sf C_1}\parg{n}{0}{k+i},~ \cn{Q_n}{w_{i+1}},~ \cn{w_{i+1}}{Q_n}.$$
That is,
$$acq(i-1)=\cn{w_{i+1}}{Q_n}~\cn{Q_n}{w_{i+1}}~{\sf C_1}\parg{n}{0}{k+i}~
{\sf C_0}\parg{n}{0}{i}~O_x~{\sf C_1}\sg{n}{0}{k+i}~{\sf C_0}\sg{n}{0}{i}.$$
The step-wise transformation due to $acq(i-1)$ on $\ket{\psi_{i-1}}$ is as follows.
\
\begingroup
\allowdisplaybreaks
\begin{align*}
\ket{\psi_{i-1}}=& \frac{1}{\sqrt{2}} \left( \ket{\bm{0}}_0 \ket{0}_1 \ket{x_1}_2 \ldots \ket{x_{i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right.
\\
& \left. +\ket{\bm{0}}_0 \ket{1}_1 \ket{x_{k+1}}_2 \ldots \ket{x_{k+i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k
\right)
\\
\xrightarrow{{\sf C_0}\sg{n}{0}{i} ~ {\sf C_1}\sg{n}{0}{k+i}} &
\frac{1}{\sqrt{2}} \left(\left(\frac{\ket{\bm{0}}_0+\ket{\bm{i}}_0}{\sqrt{2}} \right) \ket{0}_1 \ket{x_1}_2 \ldots \ket{x_{i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right.
\\
& \left. + \left(\frac{\ket{\bm{0}}_0 + \ket{\bm{k+i}}_0}{\sqrt{2}} \right) \ket{1}_1 \ket{x_{k+1}}_2 \ldots \ket{x_{k+i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right)
\\
\xrightarrow{O_x} &
\frac{1}{\sqrt{2}} \left(\left(\frac{\ket{\bm{0}}_0+(-1)^{x_i}\ket{\bm{i}}_0}{\sqrt{2}} \right) \ket{0}_1 \ket{x_1}_2 \ldots \ket{x_{i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right.
\\
+& \left. \left(\frac{\ket{\bm{0}}_0 +(-1)^{x_{k+i}} \ket{\bm{k+i}}_0}{\sqrt{2}} \right) \ket{1}_1 \ket{x_{k+1}}_2 \ldots \ket{x_{k+i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right)
\\
\xrightarrow{{\sf C_0}\parg{n}{0}{i} ~ {\sf C_1}\parg{n}{0}{k+i}} &
\frac{1}{\sqrt{2}} \left( \ket{\bm{x_i}}_0 \ket{0}_1 \ket{x_1}_2 \ldots \ket{x_{i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right.
\\
+& \left. \ket{\bm{x_{k+i}}}_0 \ket{1}_1 \ket{x_{k+1}}_2 \ldots \ket{x_{k+i-1}}_{i} \ket{0}_{i+1} \ldots \ket{0}_k \right)
\\
\xrightarrow{\cn{Q_n}{w_{i+1}}} &
\frac{1}{\sqrt{2}} \left( \ket{\bm{x_i}}_0 \ket{0}_1 \ket{x_1}_2 \ldots \ket{x_{i-1}}_{i} \ket{x_i}_{i+1} \ldots \ket{0}_k \right.
\\
+& \left. \ket{\bm{x_{k+i}}}_0 \ket{1}_1 \ket{x_{k+1}}_2 \ldots \ket{x_{k+i-1}}_{i} \ket{x_{k+i}}_{i+1} \ldots \ket{0}_k \right)
\\
\xrightarrow{\cn{w_{i+1}}{Q_n}} &
\frac{1}{\sqrt{2}} \left( \ket{\bm{0}}_0 \ket{0}_1 \ket{x_1}_2 \ldots \ket{x_{i-1}}_{i} \ket{x_i}_{i+1} \ldots \ket{0}_k \right.
\\
+& \left. \ket{\bm{0}}_0 \ket{1}_1 \ket{x_{k+1}}_2 \ldots \ket{x_{k+i-1}}_{i} \ket{x_{k+i}}_{i+1} \ldots \ket{0}_k \right)
\\
&= \ket{\psi_i}.
\end{align*}
\
\endgroup
\
\end{proof}
The first transformation is applied $k-1$ times
(which requires $k-1$ queries to be made to the oracle)
and it transforms
the system to
$$\ket{\psi_{k-1}}=
\frac{1}{\sqrt{2}} \left( \ket{\pmb{0}}_0\ket{0}_1 \otimes _{i=2}^k \ket{x_{i-1}}_i+
\ket{\pmb{0}}_0 \ket{1}_1 \otimes _{i=2}^k \ket{x_{k+i-1}}_i \right).$$
Then we apply the following transformational result.
\begin{lemma}
\label{lemma:onetime}
The state $\ket{\psi_{k-1}}$ can be converted to the state
$$\ket{\beta_0}=\frac{1}{\sqrt{2}} \left( (-1)^{\prod\limits_{i=1}^k x_i} \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+
(-1)^{\prod\limits_{i=k+1}^n x_i}\ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right)$$
by making a single query to the oracle.
\end{lemma}
\begin{proof}
We begin with the system being in the state
$$\ket{\psi_{\frac{n}{2}-1}}=
\frac{1}{\sqrt{2}} \left( \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+ \ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right).$$
\
At this stage we apply a unitary transformation $C^{k-1}$
that changes the state of $Q_n$
from $\ket{\bm{0}}_0$ to $\ket{\bm{1}}_0$, controlled on
$w_i=\ket{1}_i, 2 \leq i \leq k$, similar to a $\sf C^{k-1}-NOT$ operation.
That is, iff
$ \prod_{i=1}^{k-1} x_i=1$, $Q_n$ changes to the state $\ket{\bm{1}}_0$
in the superposition state with $w_1=\ket{0}$ and similarly
iff
$ \prod_{i=k+1}^{n-1} x_i=1$ then $Q_n$ changes to $\ket{\bm{1}}_0$
in the superposition state with $w_1=\ket{1}$, forming $\ket{\psi_{k-1}'}$,
which is
$$
C^{k-1} \ket{\psi_{k-1}}=
\frac{1}{\sqrt{2}} \left( \kett*{\bm{ \prod_{i=1}^{k-1}x_i }}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+ \kett*{\bm{\prod_{i=k+1}^{n-1}x_i}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right).
$$
The next step takes one query, and this is the last query the algorithm makes before
starting the untangling protocol.
We apply ${\sf C_0} \pg{n}{\frac{n}{2}} ~{\sf C_1} \pg{n}{n}$
followed by the oracle $O_x$ and then ${\sf C_0} \pg{n}{\frac{n}{2}} ~{\sf C_1} \pg{n}{n} $ again.
Let $p_{q}^{r}=\prod\limits_{i=q}^{r}x_i,~ q<r$.
Then the transformation due to the operations is as follows.
\begin{align}
\label{eq:2}
& \ket{\psi_{k-1}'}
\xrightarrow{{\sf C_0} \pg{n}{\frac{n}{2}} ~{\sf C_1} \pg{n}{n}} \\ \nonumber
&
\frac{1}{\sqrt{2}} \left( \kett*{\bm{k \times (p_1^{k-1})}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+ \kett*{\bm{n \times (p_{k+1}^{n-1})}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \vphantom{\frac{1}{2}}\right)
\\ \nonumber
& \xrightarrow{O_x}
\frac{1}{\sqrt{2}} \left( (-1)^{x_k(p_{1}^{k-1})} \kett*{\bm{k \times (p_{1}^{k-1})}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j \right.
\\ \nonumber &
+ \left. (-1)^{x_n(p_{k+1}^{n-1})}\kett*{\bm{n \times (p_{k+1}^{n-1})}}_0 \ket{1}_1
\bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right)
\
\xrightarrow{{\sf C_0} \pg{n}{\frac{n}{2}} ~{\sf C_1} \pg{n}{n}} \\ \nonumber &
\frac{1}{\sqrt{2}} \left( (-1)^{x_k(\prod\limits_{j=1}^{k-1}x_j)} \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+ (-1)^{x_n(\prod\limits_{j=1}^{k-1}x_{k+j})}\ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right)
\\ \nonumber
&=
\frac{1}{\sqrt{2}} \left( (-1)^{\prod\limits_{i=1}^k x_i} \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+
(-1)^{\prod\limits_{i=k+1}^n x_i}\ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right).
\end{align}
\
\end{proof}
At this stage we only need to untangle the system to obtain the output.
We now apply Lemma~\ref{lemma:3} to obtain the final output,
which costs a further $\lfloor \frac{n}{4} \rfloor$ queries.
Together with the result of Lemma~\ref{th:par},
this gives us the following Algorithm~\ref{algo}.
\begingroup
\allowdisplaybreaks
\begin{algorithm}
\caption{$\Q(f)$ to evaluate $f(\mathbf{x})=\prod\limits_{i=1}^{\frac{n}{2}}x_i \oplus
\prod\limits_{j=\frac{n}{2}+1}^{n}x_j$ along with query complexity count (${ QC_{\textrm{algo}}}(f)$) :}
\begin{algorithmic}
\label{algo}
\item[1] Begin with the state $\ket{\bm{0}}_0 \otimes_{i=1}^k \ket{0}_i$, consisting of the Query register and $\frac{n}{2}$ work qubits $w_i, 1 \leq i \leq \frac{n}{2}$.
\item[2] We apply a Hadamard to the first work qubit $w_1$ to get
$\ket{\psi_0}=\frac{1}{\sqrt{2}} \big( \ket{\bm{0}}_0\ket{0}_1\otimes_{i=2}^k \ket{0}_i
+\ket{\bm{0}}_0\ket{1}_1 \otimes_{i=2}^k \ket{0}_i \big)$.
\item[3] Then we run the subroutine $acq(i)$ of Lemma~\ref{lemma:0}
$\frac{n}{2}-1$ times, for $0 \leq i \leq \frac{n}{2}-2$,
which evolves the state from $\ket{\psi_0}$ to $\ket{\psi_{\frac{n}{2}-1}}$,
where
\begin{align*}
\ket{\psi_i}= & \frac{1}{\sqrt{2}} \left( \ket{\bm{0}}_0 \ket{0}_1
\bigotimes_{j=2}^{i+1} \ket{x_{j-1}}_j
\bigotimes_{j=i+2}^k \ket{0}_j
+\ket{\bm{0}}_0 \ket{1}_1
\bigotimes_{j=2}^{i+1} \ket{x_{k+j-1}}_j
\bigotimes_{j=i+2}^k \ket{0}_j
\right).
\end{align*}
\item[4]
Here let us define $g({\mathbf{\hat{x}}})=\prod_{i=1}^{\frac{n}{2}} x_i$
and $h({\mathbf{\tilde{x}}})=\prod_{j=1}^{\frac{n}{2}} x_{\frac{n}{2}+j}$.
Then we apply the step described in Lemma~\ref{lemma:onetime}
which makes one query to the oracle.
Then after $\frac{n}{2}$ queries the system is in the state
\begin{equation*}
\ket{\psi_{\frac{n}{2}}}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+
(-1)^{h({\mathbf{\tilde{x}}})}\ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right).
\end{equation*}
\item[5] We then apply the transformation ${\sf untangle^s_n}$
described in Lemma~\ref{lemma:3} where $s=\frac{n}{2}-1$.
This step requires
a further $\lfloor \frac{n}{4} \rfloor$ queries, and finally the system is in the state
$$\ket{\beta_f}=
(-1)^{g'(\mathbf{x})}
\ket{\bm{0}}_0
\kett*{g({\mathbf{\hat{x}}}) \oplus h({\mathbf{\tilde{x}}}) }_1
\bigotimes_{i=1}^{k_1} (\ket{x_{2i}}_{i+1} \ket{x_{k+2i}}_{i+2})
$$
after a total of $\lfloor \frac{3n}{4} \rfloor$ queries.
\item[6] Get the output by then measuring $w_1$ in the computational basis.
\end{algorithmic}
\end{algorithm}
\endgroup
This coupled with the generalized parity decision tree complexity of the function provides the first separation result.
\begin{theorem}
\label{th:5}
For the function
$f_1= \prod\limits_{i=1}^{\frac{n}{2}}x_i \oplus
\prod\limits_{j=\frac{n}{2}+1}^{n}x_j $,
we have ${ QC_{\textrm{algo}}}(f_1)=\lfloor \frac{3n}{4} \rfloor$ and $D_{\oplus}(f_1)=n-1$.
\end{theorem}
\begin{proof}
\ \\~
{\bf $D_{\oplus}(f_1)=n-1:$}
This is a direct implication of Proposition~\ref{th:par}, where
$f_1 \in {\sf pdsp}(n,n,2)$. \\
{\bf ${ QC_{\textrm{algo}}}(f_1)=\lfloor \frac{3n}{4} \rfloor:$}
We run Algorithm~\ref{algo} initializing it in the state
$\ket{\bm{0}}_0 \otimes_{i=1}^k \ket{0}_i$.
This algorithm makes a total of $\frac{n}{2} + \lfloor \frac{n}{4} \rfloor=\lfloor \frac{3n}{4} \rfloor$ queries, which completes the proof.
\end{proof}
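The query-count arithmetic used in the proof is a floor identity that is easy to check mechanically; the sketch below (illustrative only, not part of the algorithm) confirms $\frac{n}{2}+\lfloor \frac{n}{4} \rfloor=\lfloor \frac{3n}{4} \rfloor$ for even $n$, and that this count is strictly below $D_{\oplus}(f_1)=n-1$ once $n \geq 6$:

```python
# Query count of the algorithm: n/2 queries to load and phase the variables,
# plus floor(n/4) queries for the untangling protocol.
for n in range(2, 1001, 2):
    q = n // 2 + n // 4
    assert q == (3 * n) // 4        # total matches floor(3n/4)
    if n >= 6:
        assert q < n - 1            # strictly below the parity bound n - 1
```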
For $f_1$ we are able to separate ${ QC_{\textrm{algo}}}(f)$ and $D_{\oplus}(f)$,
but the algorithm is not provably optimal for this function.
However, we observe that this technique is indeed optimal for the
following function.
\begin{corollary}
\label{cor:2}
For the function $f_2$ on $n=2k$ variables where
$f_2(\mathbf{x})= \prod\limits_{i=1}^{ \lfloor \frac{3n}{4} \rfloor }x_i
\oplus \prod\limits_{j=\frac{n}{2} +1}^{n} x_j $
we have ${ QC_{\textrm{algo}}}(f_2)=Q_E(f_2)=\lfloor \frac{3n}{4} \rfloor$ and $D_{\oplus}(f_2)=n-1$.
\end{corollary}
\begin{proof}
\
\paragraph*{$Q_E(f) \geq \lfloor \frac{3n}{4} \rfloor$}
We can reduce $f_2$ to $\textrm{AND}_{\lfloor \frac{3n}{4} \rfloor}$ by fixing the variables
$x_i=0, \lfloor \frac{3n}{4} \rfloor +1 \leq i \leq n$, and therefore evaluating
$f_2$ must take at least $\lfloor \frac{3n}{4} \rfloor$ queries, as we know $Q_E(\textrm{AND}_{\lfloor \frac{3n}{4} \rfloor})=\lfloor \frac{3n}{4} \rfloor$.
\paragraph*{${ QC_{\textrm{algo}}}(f_2)=\lfloor \frac{3n}{4} \rfloor$}
This function can in fact be written as
$$f_2(\mathbf{x})=
\left( \prod\limits_{i=1}^{ \frac{n}{2} }x_i
\bigoplus
\prod\limits_{i=\lfloor \frac{3n}{4} \rfloor+1}^{ n }x_i
\right)
\prod\limits_{j=\frac{n}{2}+1}^{ \lfloor \frac{3n}{4} \rfloor }x_j.
$$
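This rewriting is an application of the distributive law $ac \oplus bc=(a \oplus b)c$ over $\mathbb{F}_2$, and can be verified by brute force over all inputs for small $n$; the following sketch (our illustration, 0-based indexing, $n$ divisible by $4$) does exactly that:

```python
from itertools import product
from math import prod

def f2(x):
    """f_2 on n = len(x) variables, as defined in the corollary (0-based)."""
    n = len(x)
    return prod(x[:3 * n // 4]) ^ prod(x[n // 2:])

def f2_factored(x):
    """The rewritten form used in the proof."""
    n = len(x)
    left = prod(x[:n // 2]) ^ prod(x[3 * n // 4:])
    return left * prod(x[n // 2:3 * n // 4])

# The two expressions agree on every input for small n divisible by 4.
for n in (4, 8, 12):
    for x in product((0, 1), repeat=n):
        assert f2(x) == f2_factored(x)
```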
We proceed in the same manner as in the proof of Theorem~\ref{th:5}. After $\frac{n}{2}$ queries
the system is in the state
\begin{equation*}
\ket{\psi_{\frac{n}{2}}}=
\frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{\bm{0}}_0 \ket{0}_1 \bigotimes_{j=2}^k \ket{x_{j-1}}_j
+
(-1)^{h({\mathbf{\tilde{x}}})}\ket{\bm{0}}_0 \ket{1}_1 \bigotimes_{j=2}^k \ket{x_{k+j-1}}_j \right).
\end{equation*}
We now swap the values of the qubits $w_2$ to $w_k$ so that
the value of $x_{\frac{n}{2}+j}$ goes to the qubit $w_{1+2j}$. This ensures that,
after ${\sf untangle^s_n}$ is applied with $s=\frac{\frac{n}{2}-1}{2}$,
the system's state after $\lfloor \frac{3n}{4} \rfloor$ queries is
$$
\ket{\bm{0}}_0 \frac{1}{\sqrt{2}} \left( (-1)^{g({\mathbf{\hat{x}}})} \ket{0}_1
+
(-1)^{h({\mathbf{\tilde{x}}})} \ket{1}_1 \right)
\bigotimes_{j=1}^{s} \ket{x_{j-1}}_{2j} \ket{x_{k+j-1}}_{2j+1}.
$$
We can apply a Hadamard gate on $w_1$ to obtain the state
$
\ket{\bm{0}}_0 \ket{g({\mathbf{\hat{x}}}) \oplus h({\mathbf{\tilde{x}}})}_1
\bigotimes_{j=1}^{s} \ket{x_{j-1}}_{2j} \ket{x_{k+j-1}}_{2j+1}.$
We can now obtain the value of $f_2(\mathbf{x})$, since the qubit $w_1$ holds the value of
$
g({\mathbf{\hat{x}}}) \oplus h({\mathbf{\tilde{x}}})=
\left( \prod\limits_{i=1}^{ \frac{n}{2} }x_i
\bigoplus
\prod\limits_{i=\lfloor \frac{3n}{4} \rfloor+1}^{ n }x_i
\right)
$
and measuring the qubit $w_{2i+2}$ gives the variable $x_{\frac{n}{2}+i}$.
\end{proof}
We now briefly describe the phase kickback method before
finally proving Theorem~\ref{th:main},
which gives us the main separation of this paper
and is a broader extension of Theorem~\ref{th:5}.
\subsubsection*{Phase Kickback}
The only technique we require to evaluate the functions of this kind,
apart from the ones used in Corollary~\ref{cor:2},
is phase kickback, a widely used methodology in black-box algorithms.
Suppose we have a $k+1$ qubit system
$\bigotimes_{i=1}^{k+1}w_i$ in the state
$\left( \bigotimes_{i=1}^k \ket{x_i}_i \right) \ket{-}_{k+1}$.
Let $S \subseteq [k]$ where $[k]= \{1,\ldots,k\}$.
Then the controlled NOT operation $\sf C^{\abs{S}}-NOT$ controlled on
$w_i=\ket{1}_i, i \in S$ (i.e., $x_i=1~ \forall i \in S$),
with $w_{k+1}$ as the target, works as follows.
\begin{align}
\label{eq:3}
\left( \bigotimes_{i=1}^k \ket{x_i}_i \right) \ket{-}_{k+1}
\xrightarrow{\sf C^{\abs{S}}-NOT}
(-1)^{\left( \prod_{i \in S}x_i \right) }
\left( \bigotimes_{i=1}^k \ket{x_i}_i \right) \ket{-}_{k+1}
\end{align}
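Equation~\eqref{eq:3} can be checked directly on amplitudes: the target is in $\ket{-}$, and flipping it multiplies the state by $-1$, so the kicked-back phase is $(-1)^{\prod_{i \in S} x_i}$. A minimal sketch (tracking only the two target amplitudes; our own illustration):

```python
from itertools import product

SQ = 2 ** -0.5
MINUS = (SQ, -SQ)                    # amplitudes of |-> over {|0>, |1>}

def ck_not(controls, target):
    """Multi-controlled NOT: flip the target amplitudes iff all controls are 1."""
    a0, a1 = target
    return (a1, a0) if all(controls) else (a0, a1)

# The target is restored to |-> up to the global phase (-1)^{x_1 ... x_k}.
for k in range(1, 6):
    for x in product((0, 1), repeat=k):
        sign = -1 if all(x) else 1
        assert ck_not(x, MINUS) == (sign * MINUS[0], sign * MINUS[1])
```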
Let us now present Theorem~\ref{th:main}, which is one of our main results.
\begin{theorem}
\label{th:main}
Let $f \in \sf pdsp(n,\lceil \frac{3n}{4} \rceil,t+1)$ be a function on $n=2k$ variables
such that
\begin{equation*}
f(\mathbf{x})= \left( \prod\limits_{i=1}^{\frac{n}{2}} x_i \bigoplus g(\mathbf{x}') \right) \left(
\prod\limits_{j=\frac{n}{2}+1}^{\lfloor \frac{3n}{4} \rfloor} x_j \right)
,~\mathbf{x}'=\left( x_{\lfloor \frac{3n}{4} \rfloor+1},x_{\lfloor \frac{3n}{4} \rfloor+2}, \ldots , x_n \right).
\end{equation*}
where $g$ is a perfect direct sum function defined on
$\left( x_{\lfloor \frac{3n}{4} \rfloor+1},x_{\lfloor \frac{3n}{4} \rfloor+2}, \ldots , x_n \right)$ so that it contains $t$ monomials
such that each monomial consists of at least $t+1$ variables.
Then we have (i) ${ QC_{\textrm{algo}}}(f)=Q_E(f)=\lfloor \frac{3n}{4} \rfloor$, (ii) $D_{\oplus}(f)=n-t$, (iii) $D(f)=n$.
\end{theorem}
\begin{proof}~
\paragraph*{$Q_E(f) \geq \lfloor \frac{3n}{4} \rfloor$}
For any such function, if we fix the variables $x_i, \lfloor \frac{3n}{4} \rfloor+1 \leq i \leq n$ to $0$
then the function is reduced to $\sf \textrm{AND}_{\lfloor \frac{3n}{4} \rfloor}$ which implies $Q_E(f) \geq \lfloor \frac{3n}{4} \rfloor$.
\paragraph*{$D_{\oplus}(f)=n-t$}
This is a direct implication of Proposition~\ref{th:par} where the number of monomials
is $t+1$.
\paragraph*{${ QC_{\textrm{algo}}}(f)=\lfloor \frac{3n}{4} \rfloor$}
We initialize the algorithm in the state $\ket{\bm{0}}_0 \bigotimes_{i=1}^{k+2} \ket{0}_i$.
We first apply a NOT gate and a Hadamard gate to $w_{k+2}$, and then a Hadamard gate to $w_1$,
which evolves the system to
\begin{align*}
\frac{1}{\sqrt{2}}
\left(
\ket{\bm{0}}_0 \ket{0}_1 \left( \bigotimes_{i=2}^{k} \ket{0}_i \right) \ket{0}_{k+1} \ket{-}_{k+2}
+
\ket{\bm{0}}_0 \ket{1}_1 \left( \bigotimes_{i=2}^{k} \ket{0}_i \right) \ket{0}_{k+1} \ket{-}_{k+2}
\right).
\end{align*}
This state can be written as $\ket{\psi}_0 \ket{0}_{k+1} \ket{-}_{k+2}$ where
$\ket{\psi_0}$ is the starting state of Theorem~\ref{th:5} and Corollary~\ref{cor:2}.
We now apply the transformations $acq(i), 0 \leq i \leq \frac{n}{2}-2$
as defined in Lemma~\ref{lemma:0} which makes $\frac{n}{2}-1$ queries to the oracle.
This evolves the system to the state
$\ket{\psi}_{\frac{n}{2}-1} \ket{0}_{k+1} \ket{-}_{k+2}$, that is
$$
\frac{1}{\sqrt{2}} \left(
\ket{\bm{0}}_0 \ket{0}_1 \left( \bigotimes_{i=2}^{k} \ket{x_{i-1}}_i \right)
+
\ket{\bm{0}}_0 \ket{1}_1 \left( \bigotimes_{i=2}^{k} \ket{x_{k+i-1}}_i \right) \right) \ket{0}_{k+1} \ket{-}_{k+2}
.
$$
This transformation is the same as the one described in Theorem~\ref{th:3}, and
since no operation acts on the $(k+1)$-th and $(k+2)$-th qubits,
their states remain unchanged.
We now acquire the phases
$(-1)^{\left(\prod_{i=1}^k x_i\right)}$ and $(-1)^{g(\mathbf{x}')}$.
Since $g$ has a perfect direct sum representation
there is a single monomial (say $m_1$) in $g(\mathbf{x}')$ that contains $x_n$.
Let the other variables of the monomial be $x_{\lfloor \frac{3n}{4} \rfloor+i}, i \in S_1$
where $S_1 \subseteq [\lceil \frac{n}{4} \rceil]$.
Therefore the qubits storing these values in the superposition
state with $w_1=\ket{1}_1$ are $w_{(1+\lfloor \frac{n}{4} \rfloor+i)}, i \in S_1$.
We then apply the following operations.
\begin{itemize}
\item[] Controlled on $w_1=\ket{0}$, we apply a controlled not gate with
$Q_n$ as target and $w_i, 2 \leq i \leq k$ as controls.
\item[] Controlled on $w_1=\ket{1}$, we apply a controlled not gate with
$Q_n$ as target and $w_{\lfloor \frac{n}{4} \rfloor+i+1}, i \in S_1$ as controls.
\end{itemize}
This transforms the system to the state
\begin{align*}
\frac{1}{\sqrt{2}}
\left(
\kett*{\bm{ \prod_{i=1}^{k-1} x_i}}_0 \ket{0}_1 \left( \bigotimes_{i=2}^{k} \ket{x_{i-1}}_i \right)
\right.
+
\left.
\kett*{\bm{\prod_{i \in S_1} x_{\lfloor \frac{3n}{4} \rfloor+i}}}_0 \ket{1}_1 \left( \bigotimes_{i=2}^{k} \ket{x_{k+i-1}}_i \right)
\right) \\
\ket{0}_{k+1} \ket{-}_{k+2}
.
\end{align*}
\
The next operations are ${\sf C_0} \pg{n}{k}$ and ${\sf C_1} \pg{n}{n}$, followed by the oracle
and then ${\sf C_0} \pg{n}{k}$ and ${\sf C_1} \pg{n}{n}$ again, which results in the same transformation
as shown in Equation~\ref{eq:2} with the only difference that the monomial corresponding to
the superposition state with $w_1=\ket{1}$ has changed.
This forms
\begin{align*}
\ket{\psi_k}=
\frac{1}{\sqrt{2}}
\left( (-1)^{ \prod_{i=1}^{k} x_i}
\ket{\bm{0}}_0 \ket{0}_1 \left( \bigotimes_{i=2}^{k} \ket{x_{i-1}}_i \right)
\right.
+
\left. (-1)^{m_1}
\ket{\bm{0}}_0 \ket{1}_1 \left( \bigotimes_{i=2}^{k} \ket{x_{k+i-1}}_i \right)
\right)
\ket{0}_{k+1} \ket{-}_{k+2}
\end{align*}
after $\frac{n}{2}$ queries.
We now obtain the phases corresponding to the other monomials $m_i, 2 \leq i \leq t$
using phase kickback as shown in Equation~\ref{eq:3}.
Let the variables in the $i$-th monomial be $x_{\lfloor \frac{3n}{4} \rfloor+j}, j \in S_i$,$ S_i \subseteq [\lceil \frac{n}{4} \rceil]$.
Controlled on $w_1=\ket{1}$,
corresponding to each monomial $m_i$, we apply
the operation $\sf C^{\abs{S_i}}-NOT$ on $w_{k+2}$,
where the $\abs{S_i}$ controls are $w_{\lfloor \frac{n}{4} \rfloor+j+1}=\ket{1}, j \in S_i$.
After the phases corresponding to the
$t-1$ monomials of $g$ are evaluated this way, the system is in the
state
\begin{align*}
\ket{\psi_k}= \frac{1}{\sqrt{2}} & \left( (-1)^{ \prod_{i=1}^{k} x_i}
\ket{\bm{0}}_0 \ket{0}_1 \left( \otimes_{i=2}^{k} \ket{x_{i-1}}_i \right)
\right.
\\+ &
\left. (-1)^{\oplus_{i=1}^t m_i}
\ket{\bm{0}}_0 \ket{1}_1 \left( \otimes_{i=2}^{k} \ket{x_{k+i-1}}_i \right)
\right)
\ket{0}_{k+1} \ket{-}_{k+2}
\\ =
\frac{1}{\sqrt{2}} & \left( (-1)^{ \prod_{i=1}^{k} x_i}
\ket{\bm{0}}_0 \ket{0}_1 \left( \otimes_{i=2}^{k} \ket{x_{i-1}}_i \right)
\right.
\\+ &
\left. (-1)^{g(\mathbf{x}')}
\ket{\bm{0}}_0 \ket{1}_1 \left( \otimes_{i=2}^{k} \ket{x_{k+i-1}}_i \right)
\right)
\ket{0}_{k+1} \ket{-}_{k+2}.
\end{align*}
From here on the algorithm proceeds identically to Corollary~\ref{cor:2}.
We first swap the values of the qubits in the superposition state with $w_1=\ket{1}$
so that the qubits are in the state $w_{2i+1}=\ket{x_{k+i}}$.
Then the untangling protocol makes $\frac{s}{2}$ queries and the
system is in the following state after an application of Hadamard gate on $w_1$.
$$
(-1)^{g'(\mathbf{x})}
\ket{\bm{0}}_0
\kett*{\prod_{i=1}^k x_i \oplus g(\mathbf{x}') }_1
\bigotimes_{i=1}^{s} (\ket{x_{2i}}_{2i} \ket{x_{k+i}}_{2i+1})
\ket{0}_{k+1}\ket{-}_{k+2}.
$$
From here on we can obtain the value of the monomial
$\prod\limits_{j=\frac{n}{2}+1}^{\lfloor \frac{3n}{4} \rfloor} x_j $, as the value of each variable $x_j$
is available in the qubit $w_{2j+1}$, which is in the state $\ket{x_j}_{2j+1}$.
This completes the proof.
\end{proof}
The number of functions covered by the class referred to in Theorem~\ref{th:main} is as follows.
\begin{corollary}
\label{cor:3}
For any $n$ there are $\Omega \left( 2^{\frac{\sqrt{n}}{2}} \right)$ functions
(without considering permutation of variables)
for which we can obtain $\Q(f)=Q_E(f)<D_{\oplus}(f)$.
\end{corollary}
\begin{proof}
We give a lower bound on the number of functions which satisfy the
constraints of the function described in Theorem~\ref{th:main}.
Without considering the permutation of variables, we can
simply count the number of ways the function $g(\mathbf{x}')$
can be constructed.
The function $g$ is defined on $\lceil \frac{n}{4} \rceil$ variables and
is itself a perfect direct sum function as defined in Definition~\ref{def:2}.
If $g$ contains $t$ monomials then each of the monomials must
have at least $t+1$ variables. This is because
$\prod_{i=1}^k x_i \bigoplus g(\mathbf{x}')$ must satisfy the constraints
of Definition~\ref{def:2}.
Therefore each construction of $g$ is a different way of
partitioning the $\lceil \frac{n}{4} \rceil$ variables into $t$ sets.
If we do not consider which variable is in which monomial,
and rather just the distribution of the variables in the
partitions, then this becomes same as
finding the number of solutions to
$\sum_{i=1}^t v_i=\lceil \frac{n}{4} \rceil$ where $v_i \geq t+1~ \forall i$.
We do this as it is well known that if a function is
derived from some other function just by a permutation of
the variable names, they have the same query complexity
and are called PNP equivalent~\cite{exact}. We aim to count the
functions that are not PNP equivalent with each other.
The number of such partitions is at least $\displaystyle {\lceil \frac{n}{4} \rceil-t^2-t-1 \choose t-1}$.
Here $t$ is at least $1$ and at most $\sqrt{\lceil \frac{n}{4} \rceil-1}$.
Therefore the total possible number of functions is
\begin{align*}
\left(
\displaystyle \sum_{x=1}^{\sqrt{\lceil \frac{n}{4} \rceil-1}} {\lfloor \frac{n}{4} \rfloor-x^2-x -1 \choose x-1}
\right)
>
\left(
\displaystyle \sum_{x=1}^{\sqrt{\lceil \frac{n}{4} \rceil-1}} { \sqrt{\lceil \frac{n}{4} \rceil-1} \choose x}
\right)
=
\Omega \left( 2^{\sqrt{\frac{n}{4}}} \right)
=
\Omega \left( 2^{\frac{\sqrt{n}}{2}} \right)
.
\end{align*}
\
\end{proof}
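The counting step is a stars-and-bars argument: the number of ordered solutions to $\sum_{i=1}^{t} v_i=m$ with every $v_i \geq c$ is $\binom{m-tc+t-1}{t-1}$, obtained by substituting $u_i=v_i-c \geq 0$. The sketch below (our own illustration) checks this closed form against exhaustive enumeration for small parameters:

```python
from itertools import product
from math import comb

def brute_count(m, t, c):
    """Ordered solutions to v_1 + ... + v_t = m with every v_i >= c."""
    return sum(1 for v in product(range(c, m + 1), repeat=t) if sum(v) == m)

def stars_and_bars(m, t, c):
    """Closed form after substituting u_i = v_i - c >= 0."""
    r = m - t * c
    return comb(r + t - 1, t - 1) if r >= 0 else 0

for m in range(0, 13):
    for t in range(1, 4):
        for c in range(0, 5):
            assert brute_count(m, t, c) == stars_and_bars(m, t, c)
```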
Again, note that the advantage comes from being able to deterministically untangle two qubits with a single query,
owing to the result of Theorem~\ref{lemma:1} and the fact that these functions have high granularity.
The next important fact is that in untangling we have a degree
of freedom in terms of which variables we want to carry over
to the end, and their values can then be used to deterministically
obtain other monomials.
In fact in the state $\ket{\beta_0}$, if there are $s$
variables each whose values are stored in the two superposition
state, we can carry over $\lceil \frac{s}{2} \rceil$ values
from each of the superposition states to the final state
that is simply a tensor product of qubits in computational
basis states, meaning the value of all the variables stored
in the working memory can be deterministically obtained.
This is evident from the structure of the state that we
obtain at the end for $f_1(\mathbf{x})$ and $f_2(\mathbf{x})$
(For $f_2$ we have already decided on which values
from $x_i, \frac{n}{2} \leq i \leq n$ we want to carry over
to the final, pre-measurement state):
$$\ket{\beta_f}=
(-1)^{g'(\mathbf{x})}
\ket{\bm{0}}_0
\kett*{\prod\limits_{i=1}^k x_i \oplus \prod\limits_{i=k+1}^n x_i }_1
\bigotimes_{i=1}^{k_1} (\ket{x_{r(2i)}}_{i+1} \ket{x_{k+i}}_{i+2})
.$$
The algorithm for the other
functions progresses in a similar manner.
This, coupled with the fact that the $\sf pdsp$ class has high granularity (which allows us to efficiently
lower bound the generalized parity decision tree complexity), gives us the desired advantage.
\section{The results for MM type Bent functions}
\label{sec:4}
In this section we apply our techniques on Maiorana-McFarland (MM) type functions~\cite{bent1}. As we have pointed out in the
introduction, our investigation started with the study of MM bent functions on a small number of variables.
\subsection{MM Bent Functions on $4$ and $6$ variables}
\label{sec:31}
It has been shown in \cite{parity} that we can construct an exact quantum query algorithm
to evaluate any MM Bent function $f$ on $n$ variables with $\lceil \frac{3n}{4} \rceil$ queries using the parity decision tree technique.
This method utilizes the definition of the MM construction.
However, given that we only know $Q_E(f) \geq \frac{n}{2}$,
this does not rule out the possibility of an algorithm with lower query complexity.
To verify the tightness of the upper bound due to the parity method,
we obtained the exact quantum query complexity of the functions
$f^{id}_4(x)=x_1x_3 \oplus x_2x_4$ and $f^{id}_6= x_1x_4 \oplus x_2x_5 \oplus x_3x_6$
using the semidefinite programming method of~\cite{sdp}, utilizing the CVX package of Matlab~\cite{cvx}.
It is worth mentioning here that the default solvers of CVX could not accurately solve the
semidefinite program for $n=6$, and we had to use the commercial solver `Mosek'.
The parity method requires $3$ and $5$ queries for $f^{id}_4$ and $f^{id}_6$ respectively.
For $f^{id}_4$ the exact quantum query complexity of the function indeed matched that value.
However, we found $Q_E(f^{id}_6)=4$, which is lower than the query complexity of the parity method,
and we could not formulate any other parity based method that achieved this query complexity of $4$. This was the starting point of our attempt to design a new exact quantum query algorithm that could meet this lower bound. Although the $\mathbb{F}_2$-polynomial and untangling based algorithms that we designed are not provably optimal for this class, we were able to use the same philosophy to obtain optimal results for the $\sf pdsp$ class, as described in Section~\ref{sec:2}. In this direction we first develop our algorithm for the function $f^{id}_n$
and then extend it for a larger class of MM type bent functions,
of the size doubly exponential in $\frac{n}{4}$.
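Bentness of $f^{id}_4$ and $f^{id}_6$ is also easy to confirm numerically: a Boolean function on an even number $n$ of variables is bent iff every Walsh coefficient has absolute value $2^{n/2}$. A small brute-force sketch (our own illustration, 0-based indexing):

```python
from itertools import product

def f_id(x):
    """f^id_n(x) = XOR_{i=1}^{n/2} x_i x_{n/2+i}, with 0-based indices in code."""
    k = len(x) // 2
    return sum(x[i] & x[k + i] for i in range(k)) % 2

def walsh(f, n):
    """Walsh spectrum W_f(a) = sum_x (-1)^{f(x) + a.x} over all a."""
    pts = list(product((0, 1), repeat=n))
    return [sum((-1) ** (f(x) ^ (sum(a * b for a, b in zip(w, x)) % 2))
                for x in pts) for w in pts]

# Flat spectrum of absolute value 2^{n/2} certifies bentness.
for n in (4, 6):
    assert all(abs(c) == 2 ** (n // 2) for c in walsh(f_id, n))
```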
\subsection{Extending for general $n$}
\label{sec:41}
We first extend our techniques to build an exact quantum algorithm
for evaluating $f^{id}_n(\mathbf{x})=
\bigoplus_{i=1}^{\frac{n}{2}} \left( x_ix_{\frac{n}{2}+i} \right)$
that requires $\frac{n}{2} + \lceil \frac{n}{8} \rceil = \lceil \frac{5n}{8} \rceil$
queries. We then observe that the same algorithm in fact evaluates a much larger
class of functions in $\mathbb{B}_n$, although the permutation is still the identity
permutation. Finally we show how
this algorithm can be modified
to evaluate functions in $\mathbb{B}_n$ beyond the identity permutation.
For the functions $f^{id}_n$ we need $l+1$ qubits as working memory where $l=\lfloor \frac{n}{4} \rfloor$.
First we describe the phase obtaining method corresponding to the monomials when evaluating $f^{id}_n$. This is very similar to that of the $\sf pdsp$ class.
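The claimed query count is again simple ceiling arithmetic; the following sketch (illustrative only) checks $\frac{n}{2}+\lceil \frac{n}{8} \rceil=\lceil \frac{5n}{8} \rceil$ for even $n$, and that it improves on the parity-method cost $\lceil \frac{3n}{4} \rceil$ for $n \geq 6$:

```python
# Ceiling arithmetic for the query count of the algorithm evaluating f^id_n.
def ceil_div(a, b):
    return -(-a // b)

for n in range(2, 1001, 2):
    assert n // 2 + ceil_div(n, 8) == ceil_div(5 * n, 8)
    if n >= 6:
        # fewer queries than the ceil(3n/4) parity method
        assert ceil_div(5 * n, 8) < ceil_div(3 * n, 4)
```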
\begin{lemma}
\label{lemma2}
Corresponding to a query algorithm for a function on $n=2k$ variables
with $l= \lfloor \frac{n}{4} \rfloor$,
the state $\ket{\psi_i}$ can be transformed to the state $\ket{\psi_{i+1}}$
by making two queries to the oracle for $0 \leq i < l$ where the state
$\ket{\psi_i}$ is defined as
\
\begin{align*}
\ket{\psi_i}=&\frac{1}{\sqrt{2}}(-1)^{\sum_{j=1}^i x_jx_{k+j}} \ket{\mb{0}}_0\ket{0}_1
\bigotimes_{j=1}^i \ket{x_j}_{j+1}
\bigotimes_{j=i+2}^{l+1} \ket{0}_j \\
+&\frac{1}{\sqrt{2}} (-1)^{ \sum_{j=1}^i x_{l+j}x_{l+k+j}} \ket{\mb{0}}_0\ket{1}_1
\bigotimes_{j=1}^i \ket{x_{l+j}}_{j+1}
\bigotimes_{j=i+2}^{l+1} \ket{0}_{j}
\end{align*}
\end{lemma}
\begin{proof}
We define a protocol $\sf acq_1(i)$ which functions as follows.
We first apply the unitaries ${\sf C_0} \sg{n}{0}{i+1}$ and ${\sf C_1} \sg{n}{0}{l+i+1}$ on $\ket{\psi_i}$.
This transforms the system to
\
\begin{align*}
&\frac{1}{\sqrt{2}}\left( (-1)^{\sum_{j=1}^i x_jx_{k+j}} \right) \frac{\ket{\mb{0}}_0 + \ket{\mb{i+1}}_0}{\sqrt{2}}\ket{0}_1\ket{x_1}_2\ldots\ket{x_i}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1}
\\
+&\frac{1}{\sqrt{2}} \left( (-1)^{ \sum_{j=1}^i x_{l+j}x_{l+k+j}} \right) \frac{\ket{\mb{0}}_0 + \ket{\mb{l+i+1}}_0}{\sqrt{2}}\ket{1}_1\ket{x_{l+1}}_2 \\ & \ldots\ket{x_{l+i}}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1}.
\end{align*}
We apply the oracle on this state, forming
\begin{align*}
&\frac{1}{\sqrt{2}} \left( (-1)^{\sum_{j=1}^i x_jx_{k+j}} \right)
\frac{\ket{\mb{0}}_0 + (-1)^{x_{i+1}}\ket{\mb{i+1}}_0}{\sqrt{2}}\ket{0}_1\ket{x_1}_2 \\ &
\ldots\ket{x_i}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1}
\\
+& \frac{1}{\sqrt{2}} \left( (-1)^{ \sum_{j=1}^i x_{l+j}x_{l+k+j}} \right) \frac{\ket{\mb{0}}_0 + (-1)^{x_{l+i+1}}\ket{\mb{l+i+1}}_0}{\sqrt{2}}\ket{1}_1\ket{x_{l+1}}_2 \\ &
\ldots\ket{x_{l+i}}_{i+1} \ket{0}_{i+2}\ldots \ket{0}_{l+1}.
\end{align*}
The next unitaries are ${\sf C_0} \parg{n}{0}{i+1}$ and ${\sf C_1} \parg{n}{0}{l+i+1}$, which forms the state
\begin{align*}
&\frac{1}{\sqrt{2}}(-1)^{\sum_{j=1}^i x_jx_{k+j}} \ket{\mb{x_{i+1}}}_0 \ket{0}_1\ket{x_1}_2\ldots\ket{x_i}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1} \\
+&\frac{1}{\sqrt{2}} (-1)^{ \sum_{j=1}^i x_{l+j}x_{l+k+j}} \ket{\mb{x_{l+i+1}}}_0 \ket{1}_1\ket{x_{l+1}}_2\ldots\ket{x_{l+i}}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1}.
\end{align*}
We then use the permutation matrices ${\sf C_0} \pg{n}{k+i+1}$ and ${\sf C_1} \pg{n}{l+k+i+1}$,
then make a query to the oracle, and use the permutation matrices
with the same controls. The resultant state is then
\begin{align*}
&\frac{1}{\sqrt{2}}(-1)^{\sum_{j=1}^i x_jx_{k+j}} (-1)^{x_{i+1}x_{k+i+1}} \ket{\mb{x_{i+1}}}_0 \ket{0}_1\ket{x_1}_2\ldots\ket{x_i}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1} \\
+&\frac{1}{\sqrt{2}} (-1)^{ \sum_{j=1}^i x_{l+j}x_{l+k+j}} (-1)^{x_{i+l+1}x_{l+k+i+1}} \ket{\mb{x_{l+i+1}}}_0 \ket{1}_1\ket{x_{l+1}}_2\ldots\ket{x_{l+i}}_{i+1} \\
&\ket{0}_{i+2}\ldots \ket{0}_{l+1} \\
=&\frac{1}{\sqrt{2}}(-1)^{\sum_{j=1}^{i+1} x_jx_{k+j}} \ket{\mb{x_{i+1}}}_0 \ket{0}_1\ket{x_1}_2\ldots\ket{x_i}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1} \\
+&\frac{1}{\sqrt{2}} (-1)^{ \sum_{j=1}^{i+1} x_{l+j}x_{l+k+j}} \ket{\mb{x_{l+i+1}}}_0 \ket{1}_1\ket{x_{l+1}}_2\ldots\ket{x_{l+i}}_{i+1}\ket{0}_{i+2}\ldots \ket{0}_{l+1}. \\
\end{align*}
Finally we swap the values of the query register and the $(i+2)$-th work qubit, which is in the $\ket{0}$ state
in both superposition states. This results in the state
\begin{align*}
&
\frac{1}{\sqrt{2}}(-1)^{\sum_{j=1}^{i+1} x_jx_{k+j}} \ket{\mb{0}}_0\ket{0}_1
\bigotimes_{j=1}^{i+1} \ket{x_j}_{j+1}
\bigotimes_{j=i+2}^{l} \ket{0}_{j+1} \\
+&\frac{1}{\sqrt{2}} (-1)^{ \sum_{j=1}^{i+1} x_{l+j}x_{l+k+j}} \ket{\mb{0}}_0\ket{1}_1
\bigotimes_{j=1}^{i+1} \ket{x_{l+j}}_{j+1}
\bigotimes_{j=i+2}^{l} \ket{0}_{j+1} \\
&=\ket{\psi_{i+1}}
\end{align*}
Therefore we get $\ket{\psi_i} \xrightarrow{\sf acq_1(i)} \ket{\psi_{i+1}}$
and this completes the proof.
\end{proof}
We now start describing the algorithm for evaluating $f^{id}_n$
assuming $n \equiv 0 \mod 4$ (then $l=\lfloor \frac{n}{4} \rfloor=\frac{n}{4}$).
We start with the state
$$\ket{\psi}_0= \frac{1}{\sqrt{2}} \big( \ket{\mb{0}}_0\ket{0}_1 \bigotimes_{j=1}^l \ket{0}_{j+1}
+
\ket{\mb{0}}_0\ket{1}_1 \bigotimes_{j=1}^l \ket{0}_{j+1} \big)$$
We then apply $\sf acq_1(i)$ for ${0 \leq i < l}$ as described in Lemma~\ref{lemma2},
transforming the system to the state
\begin{align*}
\ket{\psi}_{l}=&\frac{1}{\sqrt{2}}(-1)^{\sum_{j=1}^{l} x_jx_{k+j}} \ket{\mb{0}}_0\ket{0}_1
\bigotimes_{j=1}^l \ket{x_j}_{j+1} \\
+&\frac{1}{\sqrt{2}} (-1)^{ \sum_{j=1}^{l} x_{l+j}x_{l+k+j}} \ket{\mb{0}}_0\ket{1}_1
\bigotimes_{j=1}^l \ket{x_{l+j}}_{j+1}
\end{align*}
At this point, we have used $\frac{n}{2}$ queries and have obtained all the phases
needed to evaluate the function's value for the corresponding input.
However, the system at this point is entangled if any of the qubits $w_i$ is in a different state
in the two branches of the superposition.
We now construct the next building block of the algorithm, which brings two qubits
in the two superposition states (one with $w_1=\ket{0}$ and the other with $w_1=\ket{1}$)
to the same state using a single query.
This method of untangling two qubits with one query is the foundational step of the algorithm.
\
Since there are $\frac{n}{4}$ qubits containing values of different variables
in $\ket{\psi}_{\frac{n}{4}}$, this process needs to be applied
$\lceil \frac{n}{8} \rceil$ times to get the desired result and un-entangle the system.
We now describe this methodology.
Recall that after
$\frac{n}{2}$ queries the system is in the state
\begin{align*}
\ket{\psi}_{\frac{n}{4}}=&\frac{1}{\sqrt{2}}(-1)^{ \left( \sum_{j=1}^{\frac{n}{4}} x_jx_{\frac{n}{2}+j} \right) }
\ket{\mb{0}}_0\ket{0}_1\ket{x_1}_2\ket{x_2}_3\ldots \ket{x_{\frac{n}{4}}}_{\frac{n}{4}+1}
\\
+&\frac{1}{\sqrt{2}} (-1)^{ \left( \sum_{j=1}^{\frac{n}{4}} x_{\frac{n}{4}+j}x_{\frac{3n}{4}+j} \right) }
\ket{\mb{0}}_0\ket{1}_1\ket{x_{\frac{n}{4}+1}}_2\ket{x_{\frac{n}{4}+2}}_3\ldots \ket{x_{\frac{n}{2}}}_{\frac{n}{4}+1}
\end{align*}
We define
$f_1(\mathbf{x})=\left( \bigoplus_{j=1}^{\lfloor \frac{n}{4} \rfloor} x_jx_{\frac{n}{2}+j} \right)$ and
$f_2(\mathbf{x})= \left( \bigoplus_{j=1}^{\lfloor \frac{n}{4} \rfloor} x_{\lfloor \frac{n}{4} \rfloor+j}x_{\lfloor \frac{3n}{4} \rfloor+j} \right)$, and thus
$f^{id}_n(\mathbf{x})= f_1(\mathbf{x}) \oplus f_2(\mathbf{x})$ when $n \equiv 0 \mod 4$.
Therefore,
\begin{align}
\label{eq:psin4}
\ket{\psi}_{\frac{n}{4}}=\frac{1}{\sqrt{2}} \left( (-1)^{ f_1(\mathbf{x}) }
\ket{\mb{0}}_0\ket{0}_1
\bigotimes_{j=1}^{\lfloor \frac{n}{4} \rfloor} \ket{x_j}_{j+1}
+ (-1)^{ f_2(\mathbf{x}) }
\ket{\mb{0}}_0\ket{1}_1
\bigotimes_{j=1}^l \kett*{x_{\lfloor \frac{n}{4} \rfloor+j}}_{j+1}
\right).
\end{align}
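The decomposition $f^{id}_n = f_1 \oplus f_2$ can be verified by brute force for a small even $n$; the following Python sketch (helper names are ours) checks it exhaustively for $n=8$:

```python
from itertools import product

n, l = 8, 2  # n ≡ 0 (mod 4), l = n/4

def f_id(x):
    return sum(x[i] & x[n // 2 + i] for i in range(n // 2)) % 2

def f1(x):  # monomials carried by the w1 = |0> branch
    return sum(x[j] & x[n // 2 + j] for j in range(l)) % 2

def f2(x):  # monomials carried by the w1 = |1> branch
    return sum(x[l + j] & x[n // 2 + l + j] for j in range(l)) % 2

# f^id_n = f1 XOR f2 on every input
for x in product((0, 1), repeat=n):
    assert f_id(x) == f1(x) ^ f2(x)
```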
We have acquired the values of $f_1$ and $f_2$ as local phases.
At this stage we use the protocol $\sf untangle$ defined in
Theorem~\ref{lemma:1} iteratively to un-entangle the state.
We define this protocol in a generic manner so that it can be used for
other states as well.
We now directly apply the $\sf untangle^s_n$ protocol to the state
$\ket{\psi}_{\frac{n}{4}}$ as described in Equation~\ref{eq:psin4}.
Here $\ket{\psi_{\frac{n}{4}}}$ can be identified with the state $\ket{\beta_0}$
described in Lemma~\ref{lemma:3}
by setting $s=\frac{n}{4}$ and $r_i=i, 1 \leq i \leq \frac{n}{4}$.
Thus $\sf untangle^{s}_n$ takes $\lceil \frac{\frac{n}{4}}{2} \rceil = \lceil \frac{n}{8} \rceil$ queries and the system is in the state
\
\begin{align*}
(-1)^{g'(\mathbf{x})}
\ketm{0} \kett*{x_1x_{\frac{n}{2}+1} \oplus x_2x_{\frac{n}{2}+2} \oplus \ldots \oplus x_{\frac{n}{2}}x_n }_1
\ket{x_2}_2 \ket{x_{l+2}}_3 \ldots \ket{x_{\frac{n}{2}}}_{\frac{n}{4}+1}
\end{align*}
and measuring $w_1$ in the computational basis gives us the output.
\ \\
In case of $n \equiv 2 \mod 4$ the number of monomials is odd,
and we have $f^{id}_n(\mathbf{x})=f_1(\mathbf{x}) \oplus f_2(\mathbf{x}) \oplus x_{\frac{n}{2}}x_n$.
Thus we acquire the
phase related to $\lfloor \frac{n}{4} \rfloor$ monomials in the state with $w_1=\ket{0}$ and to $\lceil \frac{n}{4} \rceil$ monomials
in the state with $w_1=\ket{1}$.
We apply $\sf acq_1(i), 0 \leq i < \lfloor \frac{n}{4} \rfloor$,
bringing the system to the state
$$
\frac{1}{\sqrt{2}} \left(
(-1)^{f_1(\mathbf{x})}
\ketm{0}\ket{0}_1
\bigotimes_{j=1}^{\lfloor \frac{n}{4} \rfloor} \ket{x_j}_{j+1}
+(-1)^{f_2(\mathbf{x})}
\ketm{0}\ket{1}_1
\bigotimes_{j=1}^{\lfloor \frac{n}{4} \rfloor} \ket{x_{\lfloor \frac{n}{4} \rfloor+j}}_{j+1}
\right).
$$
\
Thus, the monomial $x_{\frac{n}{2}}x_n$ still needs to be evaluated.
At this point we obtain the last monomial with the state containing $\ket{1}_1$ using two queries.
For the superposition state with $\ket{0}_1$,
the value of qubit $\ket{x_1}_2$ is transformed to
$\ket{x_{\lfloor \frac{n}{4} \rfloor+1}}_2$ and the query register holds the value of $x_{\frac{n}{2}}$.
Thus after $2\lfloor \frac{n}{4} \rfloor+2$ queries the system is in the state
\begin{align*}
(-1)^{g'(\mathbf{x})}&
\left(
\frac{1}{\sqrt{2}}(-1)^{f_1(\mathbf{x})}
\ketm{x_{\frac{n}{2}}}\ket{0}_1\ket{x_{\lfloor \frac{n}{4} \rfloor+1}}_2
\bigotimes_{j=2}^{\lfloor \frac{n}{4} \rfloor} \ket{x_j}_{j+1} \right.
\\+&
\left.
\frac{1}{\sqrt{2}} (-1)^{ f_2(\mathbf{x}) \oplus x_{\frac{n}{2}}x_n }
\ketm{x_{\frac{n}{2}}}\ket{1}_1\ket{x_{\lfloor \frac{n}{4} \rfloor+1}}_2
\bigotimes_{j=2}^{\lfloor \frac{n}{4} \rfloor} \ket{x_{\lfloor \frac{n}{4} \rfloor+j}}_{j+1} \right)
\end{align*}
Thus at this stage, apart from $w_1$, $\lceil \frac{n}{4} \rceil-2$ qubits hold different variables in the two superposition states.
We swap the values of $Q_n$ and $w_2$ with those of $w_{\lfloor \frac{n}{4} \rfloor}$
and $w_{\lfloor \frac{n}{4} \rfloor+1}$ and apply the $\sf untangle^s_n$ protocol
(with $s=\lceil \frac{n}{4} \rceil-2$) and reverse the swap operations.
This protocol makes $\lceil \frac{\lceil \frac{n}{4} \rceil-2}{2} \rceil$ queries
and the system is in the state
$$
(-1)^{g'(\mathbf{x})}
\ketm{x_{\frac{n}{2}}} \ket{x_1x_{\frac{n}{2}+1} \oplus x_2x_{\frac{n}{2}+2} \oplus \ldots \oplus x_{\frac{n}{2}}x_n}_1
\ket{x_{\lfloor \frac{n}{4} \rfloor+1}}_2
\ket{x_3}_3 \ldots \ket{x_{\frac{n}{2}-1}}_{\lfloor \frac{n}{4} \rfloor+1}.
$$
\
This gives us the answer after making a total of $2\lfloor \frac{n}{4} \rfloor +2 +\lceil \frac{\lceil \frac{n}{4} \rceil}{2} \rceil -1 = \lceil \frac{5n}{8} \rceil$
queries to the oracle.
Thus both in the case of $n \equiv 0 \mod 4$ and $n \equiv 2 \mod 4$ we require $\lceil \frac{5n}{8} \rceil$ queries
to evaluate the function $f^{id}_n$ exactly.
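The query-count bookkeeping for $n \equiv 2 \pmod 4$ can be checked mechanically; the short Python verification below (our own arithmetic check, not part of the algorithm) confirms that the acquisition and untangling costs sum to $\lceil \frac{5n}{8} \rceil$:

```python
from math import ceil

# Query budget for n ≡ 2 (mod 4):
#   2*floor(n/4) acquisition queries for the paired monomials,
#   +2 queries for the leftover monomial x_{n/2} x_n,
#   +ceil((ceil(n/4) - 2)/2) untangling queries.
for n in range(6, 203, 4):
    acquire = 2 * (n // 4) + 2
    untangle = ceil((ceil(n / 4) - 2) / 2)
    assert acquire + untangle == ceil(5 * n / 8)
```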
The above discussion, combined with Lemma~\ref{lemma2},
Theorem~\ref{lemma:1}, Lemma~\ref{lemma:3}
and the description of Algorithm~\ref{algo} can be summarized as the following
theorem.
\begin{theorem}
\label{th:3}
The function $f^{id}_n$ can be evaluated by an exact quantum algorithm
that makes $\lceil \frac{5n}{8} \rceil$ queries to the oracle
and uses $\lfloor \frac{n}{4} \rfloor+1$ qubits as working memory.
\end{theorem}
This completes the description of the exact quantum algorithm that evaluates
$f^{id}_n$ using $\lceil \frac{5n}{8} \rceil$ queries.
As we can observe, in case of $n \equiv 0 \mod 4$
the qubits $w_2, w_3, \dots, w_{\lfloor \frac{n}{4} \rfloor +1}$ are in the states
$x_2, x_{\lfloor \frac{n}{4} \rfloor +2},x_4,x_{\lfloor \frac{n}{4} \rfloor +4}, \ldots, x_{\frac{n}{2}}$ respectively.
If $n \equiv 2 \mod 4$ then the query register contains the variable $x_{\frac{n}{2}}$
and the qubits contain the variables
$x_{\lfloor \frac{n}{4} \rfloor+1}, x_2, x_{\lfloor \frac{n}{4} \rfloor+2}$ and so on.
In both cases the values of $\lceil \frac{n}{4} \rceil$ input variables are obtained via these qubits.
Therefore we can also evaluate any function $g$ depending on these variables
without making any more queries to the oracle, which we summarize in the following corollary.
\begin{corollary}
This algorithm can also be used to evaluate any MM type Bent function with the identity permutation
whose function $g$ has at most $\lceil \frac{n}{4} \rceil$ influencing variables.
\end{corollary}
\subsection{Beyond the Identity Permutation}
We have shown that our algorithm can evaluate the MM Bent functions
of type $f^{id}_n \oplus g(x')$ where $x'$ is a subset of $\hat{x}$ consisting of
at most $\lceil \frac{n}{4} \rceil$ variables.
However, the techniques we have used do not restrict the permutation to be the identity
permutation. The algorithm works by dividing the variables of $\hat{x}$ into two (close to)
equal disjoint sets and then calculating the values of the corresponding points of $\tilde{x}$,
depending on the permutation.
In case of the identity permutation, since the variable $x_{\frac{n}{2}+i} \in \tilde{x}$
depends solely on the value of $x_i \in \hat{x}$, we could realize this procedure in a
sequential manner.
Therefore, as long as we have a permutation that can be expressed as the concatenation
of two permutations on $\frac{n}{4}$ variables each,
or more precisely the concatenation of permutations on $\lfloor \frac{n}{4} \rfloor$ and $\lceil \frac{n}{4} \rceil$ variables,
we should be able to calculate the influencing variables in $\tilde{x}$ corresponding
to the values of the variables in $\hat{x}$ in parallel,
and thus be able to evaluate the function with the same query complexity of $\lceil \frac{5n}{8} \rceil$.
We now concretize this relaxed constraint and the corresponding modifications needed
in the algorithm.
\begin{theorem}
\label{th:4}
Let $f$ be an MM Bent function on $n$ variables such that $f= \phi(\hat{x}) \cdot \tilde{x} \oplus g(x')$,
with the following constraints:
\begin{enumerate}
\item[1] $\phi_1$ and $\phi_2$ are two permutations such that
$\phi(\hat{x}) \cdot \tilde{x} = \phi_1(\hat{y}) \cdot \tilde{y} \oplus \phi_2(\hat{z}) \cdot \tilde{z}$
\item[2] The sets of variables $\hat{y},\hat{z},\tilde{y},\tilde{z}$ are all disjoint,
$\abs{\hat{y}}=\abs{\tilde{y}}= \lfloor \frac{n}{4} \rfloor$, $\abs{\hat{z}}=\abs{\tilde{z}}= \lceil \frac{n}{4} \rceil$
\item[3] $\hat{y} \cup \hat{z} =\hat{x}$ and $\tilde{y} \cup \tilde{z} = \tilde{x}$
\item[4] $x' \subset{\hat{x}}, \abs{x' \cap \hat{y}} \leq \lceil \frac{n}{8} \rceil,
\abs{x' \cap \hat{z}} \leq \lceil \frac{n}{8} \rceil$
\end{enumerate}
Then the function can be evaluated by an exact quantum query algorithm
that makes $\lceil \frac{5n}{8} \rceil$ queries to the oracle and uses $\frac{n}{2}+1$ qubits as working memory.
\end{theorem}
\begin{proof}
Let the variables of $\hat{y}$ be $x_{i_1},x_{i_2}, \ldots x_{i_{\lfloor \frac{n}{4} \rfloor}}$ and
$x_{i_{\lfloor \frac{n}{4} \rfloor+1}},x_{i_{\lfloor \frac{n}{4} \rfloor+2}}, \ldots x_{i_{\frac{n}{2}}}$ be the variables of $\hat{z}$.
We start the system in the all-zero state and apply a Hadamard gate on the qubit $w_1$
to get the state
$$\frac{1}{\sqrt{2}} \ketm{0}\ket{0}_1\ket{0}_2\ldots\ket{0}_{\frac{n}{2}+1} +
\frac{1}{\sqrt{2}} \ketm{0}\ket{1}_1\ket{0}_2\ldots\ket{0}_{\frac{n}{2}+1}.$$
Corresponding to the state with $w_1=\ket{0}$,
the algorithm progresses as follows:
we obtain the values of the $\lfloor \frac{n}{4} \rfloor$ variables in
$\hat{y}$ using the first $\lfloor \frac{n}{4} \rfloor$ queries and
store them in the qubits $w_2,w_3, \ldots w_{\lfloor \frac{n}{4} \rfloor +1}$.
Before the $t$-th query, where $1 \leq t \leq \lfloor \frac{n}{4} \rfloor$, the gate ${\sf C_0} \parg{n}{0}{i_t}$
is applied, followed by the oracle, and then the value of the
query register is swapped with the $(t+1)$-th work qubit,
which is in the state $\ket{0}$ at this point.
The next $\lfloor \frac{n}{4} \rfloor$ queries are used to obtain the corresponding
linear function in $\tilde{y}$ as follows.
The linear function in $\tilde{y}$ can be encoded using $2^{\lfloor \frac{n}{4} \rfloor}$ unitary operations.
Each operation corresponds to a point in $\hat{y}$.
For example, if $\phi_1(e_1,e_2,\ldots e_{\lfloor \frac{n}{4} \rfloor})=(h_1,h_2,\ldots h_{\lfloor \frac{n}{4} \rfloor}),
e_t,h_t \in \{0,1\}$
then we apply a multiple target $\sf C^{\lfloor \frac{n}{4} \rfloor}-NOT$ operation controlled on $w_1=\ket{0}$,$w_2=\ket{e_1},\ldots, w_{\lfloor \frac{n}{4} \rfloor+1}=\ket{e_{\lfloor \frac{n}{4} \rfloor}}$,
with the targets being the qubits $w_{\lceil \frac{n}{4} \rceil+1+t}$ where $h_t=1$.
We apply these kinds of operations for all $2^{\lfloor \frac{n}{4} \rfloor}$ points in $\hat{y}$.
Note that for any input only one of these operations will have all controls satisfied.
Once this operation is applied,
we have obtained the indices of the variables in $\tilde{y}$ that are influential at that input point.
We can obtain the corresponding phase one after another in a multiplied form
by putting a C-NOT from the qubit $w_{\lceil \frac{n}{4} \rceil+1+t}$ to the query register and
then apply the appropriate ${\sf C_0} \pg{n}{v}$
gate where $v$ depends on the encoding used.
This is followed by a query to the oracle and then the
C-NOT operation is applied again to un-compute the
query register.
Thus, after $2 \times \lfloor \frac{n}{4} \rfloor$ queries this superposition state has acquired the phase $(-1)^{\phi_1(\hat{y}) \cdot \tilde{y}}$.
At this point we apply the $\sf C^{\lfloor \frac{n}{4} \rfloor}-NOT$ operations to un-compute
the garbage in the qubits $w_{\lceil \frac{n}{4} \rceil+1}$ to $w_{\frac{n}{2}+1}$,
leading the system to
$$ (-1)^{\phi_1(\hat{y}) \cdot \tilde{y}} \ketm{0}\ket{0}_1 \ket{x_{i_1}}_2\ldots \ket{x_{i_{\lfloor \frac{n}{4} \rfloor}}}_{\lfloor \frac{n}{4} \rfloor+1} \ket{0}_{\lfloor \frac{n}{4} \rfloor+2} \ldots \ket{0}_{\frac{n}{2}+1}$$
Similarly, in case of the state with $w_1=\ket{1}$,
this set of operations takes $2 \times \lceil \frac{n}{4} \rceil$ queries to get the
phase $(-1)^{\phi_2(\hat{z}) \cdot \tilde{z}}$
and at this stage the superposition state is
$$ (-1)^{\phi_2(\hat{z}) \cdot \tilde{z}} \ketm{0}\ket{1}_1 \ket{x_{i_{\lfloor \frac{n}{4} \rfloor+1}}}_2\ldots \ket{x_{i_{\lfloor \frac{n}{4} \rfloor+\lceil \frac{n}{4} \rceil}}}_{\lceil \frac{n}{4} \rceil+1} \ket{0}_{\lceil \frac{n}{4} \rceil+2} \ldots \ket{0}_{\frac{n}{2}+1}$$
Now, if $n \equiv 0 \mod 4$, then $\lfloor \frac{n}{4} \rfloor=\lceil \frac{n}{4} \rceil$, and we
can apply the method of Lemma~\ref{lemma:3} to un-compute
the qubits $w_2,w_3, \ldots, w_{\lfloor \frac{n}{4} \rfloor+1}$ using $\lceil \frac{n}{8} \rceil$ queries.
The step of Lemma~\ref{lemma:3} is applied so that the
variables in $x'$ are the ones that remain stored
in the work qubits as the final states of those qubits.
If $n \equiv 2 \mod 4$ then $\lfloor \frac{n}{4} \rfloor=\lceil \frac{n}{4} \rceil-1$ and the state with
$w_1=\ket{0}$ requires two fewer queries to obtain the related phase. It uses these two queries to transform the two qubits $w_{\lceil \frac{n}{4} \rceil}$ and $w_{\lceil \frac{n}{4} \rceil+1}$ to the same state as in the other
superposition state, in the same manner as shown for $f^{id}_6$,
and thus after $2\lfloor \frac{n}{4} \rfloor+2$ queries there are $\lceil \frac{n}{4} \rceil-2$ qubits that
need to be brought to the same state to un-entangle the system.
This takes a further $\lceil \frac{\lceil \frac{n}{4} \rceil-2}{2} \rceil$ queries
using the methodology of Lemma~\ref{lemma:3}.
Thus in both cases, the algorithm requires $\lceil \frac{5n}{8}\rceil$ queries, and the system is in the state
\begin{align*}
(-1)^{x_{i_1} + \ldots + x_{i_{\frac{n}{2}}} }& \Big( (-1)^{\phi_1(\hat{y}) \cdot \tilde{y}} \ketm{0}\ket{0}_1 \ket{x_{i_1}}_2\ldots \ket{x_{i_{\frac{n}{2}}}}_{\lceil \frac{n}{4} \rceil+1} \ldots \ket{0}_{\frac{n}{2}+1}
\\
+& (-1)^{\phi_2(\hat{z}) \cdot \tilde{z}} \ketm{0}\ket{1}_1 \ket{x_{i_1}}_2\ldots \ket{x_{i_{\frac{n}{2}}}}_{\lceil \frac{n}{4} \rceil+1} \ket{0}_{\lceil \frac{n}{4} \rceil+2} \ldots \ket{0}_{\frac{n}{2}+1} \Big)
\end{align*}
Finally, in both cases, the Hadamard gate is applied on the
qubit $w_1$, which now contains the value of
$\phi(\hat{x}) \cdot \tilde{x}$ corresponding to the input given to the oracle.
At this point the work qubits $w_2$ through $w_{\lceil \frac{n}{4} \rceil+1}$ store the variables in $x'$, which are then used to calculate the
value of the function $g$; XOR-ing the output of $g$
with $w_1$ and then measuring $w_1$ in the computational basis
gives us the final output.
\end{proof}
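Constraint 1 of Theorem~\ref{th:4} can be illustrated classically: a permutation $\phi$ assembled from independent permutations $\phi_1,\phi_2$ on disjoint halves automatically satisfies $\phi(\hat{x}) \cdot \tilde{x} = \phi_1(\hat{y}) \cdot \tilde{y} \oplus \phi_2(\hat{z}) \cdot \tilde{z}$. The Python sketch below (helper names are ours; the random permutations are arbitrary illustrations) checks this exhaustively for $n=8$:

```python
from itertools import product
from random import Random

rng = Random(7)  # fixed seed: the permutations are arbitrary illustrations
n = 8
m = n // 4       # size of each half (n ≡ 0 mod 4 here)

points = list(product((0, 1), repeat=m))
phi1 = dict(zip(points, rng.sample(points, len(points))))  # permutation of {0,1}^m
phi2 = dict(zip(points, rng.sample(points, len(points))))

def dot(a, b):
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

def f(x):
    # x = (x_hat, x_tilde); each half splits into its y- and z-parts.
    x_hat, x_tilde = x[:n // 2], x[n // 2:]
    y_hat, z_hat = x_hat[:m], x_hat[m:]
    y_til, z_til = x_tilde[:m], x_tilde[m:]
    return dot(phi1[y_hat], y_til) ^ dot(phi2[z_hat], z_til)

def phi(x_hat):  # the combined permutation on n/2 bits
    return phi1[x_hat[:m]] + phi2[x_hat[m:]]

# phi(x_hat)·x_tilde decomposes into the two half inner products.
for x in product((0, 1), repeat=n):
    assert f(x) == dot(phi(x[:n // 2]), x[n // 2:])
```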
We call the set of MM Bent functions satisfying
the constraints of Theorem~\ref{th:4} $\Gamma_n$.
\subsection*{The case of odd $n$}
So far, we have concentrated on the class of MM Bent functions, which are defined for all even $n$, and have obtained a
large class of functions with
deterministic query complexity of $n$ which our exact quantum algorithm
evaluates using $\lceil \frac{5n}{8}\rceil$ queries.
However this technique can be extended to all odd values
of $n$ as well. This can be done as follows.
\begin{enumerate}
\item Take any function $f= \phi(\hat{x}) \cdot \tilde{x} \oplus g(x')$ on $n=2k$ variables such that $\phi$ and $g$
follow the constraints of Theorem~\ref{th:4}.
\item Form the function $f'=f(x) \oplus x_{n+1}$
\end{enumerate}
Since $f$ has a polynomial degree of $n$, as shown in \cite{parity}, this implies $f'$ has a polynomial degree of $n+1$.
This function can be evaluated in the exact quantum model by first evaluating $f$ using
$\lceil \frac{5n}{8} \rceil$ queries and using one more
query to obtain the value of $x_{n+1}$.
Thus this takes $\lceil \frac{5n}{8} \rceil +1 \leq \lceil \frac{5(n+1)}{8} \rceil +1$ queries.
The number of functions that can be evaluated in this case
is the same as that for $n$.
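The degree claim can be checked by brute force for small $n$: over the reals, the multilinear polynomial degree of a function equals the largest $\abs{S}$ with a nonzero Fourier coefficient, and a bent function has all coefficients nonzero. The Python sketch below (helper names are ours) verifies that $f^{id}_4$ has real degree $4$ and that $f' = f^{id}_4 \oplus x_5$ has degree $5$:

```python
from itertools import product

n = 4

def f(x):  # f^id_4
    return (x[0] & x[2]) ^ (x[1] & x[3])

def fprime(x):  # f' = f XOR x_{n+1}
    return f(x[:n]) ^ x[n]

def real_degree(g, nv):
    # Degree of the real multilinear polynomial of g: the largest |S| such
    # that sum_x (-1)^{g(x) XOR S·x} is nonzero.
    deg = 0
    for S in product((0, 1), repeat=nv):
        total = sum((-1) ** (g(x) ^ (sum(s & xi for s, xi in zip(S, x)) % 2))
                    for x in product((0, 1), repeat=nv))
        if total != 0:
            deg = max(deg, sum(S))
    return deg

assert real_degree(f, n) == n          # bent => all coefficients nonzero => degree n
assert real_degree(fprime, n + 1) == n + 1
```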
\subsection{The number of functions evaluated}
\label{mm:num}
We finally calculate the number of functions covered by the constraints of Theorem~\ref{th:4} for even $n$ ($\abs{\Gamma_n}$); the number of functions
for any odd $n$ is the same as the number of functions for
$n-1$.
We essentially give a lower bound on the number of functions, as our calculation is based on a single partition of $\hat{x}$ and $\tilde{x}$ into these four sets, and any choice of $x'$.
There are $2^{\lfloor \frac{n}{4} \rfloor}$ inputs to the first permutation and $2^{\lceil \frac{n}{4} \rceil}$ inputs to the second permutation,
and $g$ can be any function on the $\lceil \frac{n}{4} \rceil$ variables of $x'$.
Therefore the total number of functions is $\left(2^{\lfloor \frac{n}{4} \rfloor}!\right)\left(2^{\lceil \frac{n}{4} \rceil}!\right)\left(2^{2^{\lceil \frac{n}{4} \rceil}}\right)$.
We now recall the definition of PNP-equivalence from~\cite{exact}.
\begin{definition}
Two functions $f$ and $g$ are called PNP-equivalent
if $f$ can be obtained from $g$ by permuting the names of
the variables in $g$, replacing some variables $x_i$ with $x_i \oplus 1$ in $g$,
and finally complementing the newly formed function with $1$.
\end{definition}
If two functions are PNP-equivalent then they have the same
deterministic and exact quantum query complexity, and often
an algorithm to evaluate one of them can very easily be modified
to evaluate the other using the same number of queries.
Corresponding to a function on $n$ variables, there can be at most $n! 2^{n+1}$ functions that are PNP-equivalent to it.
This is because there can be $n!$ permutations
of variables and each variable $x_i$ can be replaced with $x_i \oplus 1$, and finally each function $f(x)$ can be replaced with
$f(x) \oplus 1$.
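The orbit bound $n! \, 2^{n+1}$ can be illustrated by enumerating all PNP transforms of a small function; the Python sketch below (our own enumeration, with an arbitrary sample function) confirms that the resulting orbit never exceeds the bound for $n=3$:

```python
from itertools import permutations, product
from math import factorial

n = 3
f = lambda x: (x[0] & x[1]) ^ x[2]  # an arbitrary sample function on 3 variables

def truth_table(g):
    return tuple(g(x) for x in product((0, 1), repeat=n))

# Enumerate every PNP transform: permute variables, flip inputs, flip output.
orbit = set()
for perm in permutations(range(n)):
    for flips in product((0, 1), repeat=n):
        for c in (0, 1):
            g = lambda x, p=perm, fl=flips, o=c: f(tuple(x[p[i]] ^ fl[i] for i in range(n))) ^ o
            orbit.add(truth_table(g))

# The PNP class of f has at most n! * 2^(n+1) members.
assert 1 <= len(orbit) <= factorial(n) * 2 ** (n + 1)
```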
Also, the PNP-equivalence relation
is reflexive, symmetric and transitive in nature.
Therefore if there is a set of cardinality $\sf S$ consisting of functions on $n$ variables, then it contains at least
$\frac{\sf S}{n!2^{n+1}}$ functions that are pairwise not PNP-equivalent.
Therefore in this case the class $\Gamma_n$
(exactly evaluated by our algorithm using $\lceil \frac{5n}{8} \rceil $ or $\lceil \frac{5n}{8} \rceil +1$
queries) must consist of at least
$$\frac{\left(2^{\lfloor \frac{n}{4} \rfloor}!\right)\left(2^{\lceil \frac{n}{4} \rceil}!\right)\left(2^{2^{\lceil \frac{n}{4} \rceil}}\right)}{n!2^{n+1}} = \Omega
\left(2^{\left(\lfloor \frac{n}{4} \rfloor 2^{\left( \lfloor \frac{n}{4} \rfloor \right)} \right)} \right) $$
functions, which is doubly exponential in $\lfloor \frac{n}{4} \rfloor$.
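Since the bound is asymptotic, it fails for very small $n$; for moderate $n$ it can be confirmed with exact big-integer arithmetic, as in the Python sketch below (the helper name is ours):

```python
from math import factorial

def gamma_count(n):
    # |Gamma_n| from the construction: two permutations (of {0,1}^{floor(n/4)}
    # and {0,1}^{ceil(n/4)}) and an arbitrary g on the ceil(n/4) variables of x'.
    lo, hi = n // 4, -(-n // 4)
    return factorial(2 ** lo) * factorial(2 ** hi) * 2 ** (2 ** hi)

# By n = 20 the count divided by the PNP orbit bound n! 2^(n+1)
# already exceeds 2^(floor(n/4) * 2^floor(n/4)) exactly.
for n in (20, 24, 28):
    lo = n // 4
    ratio = gamma_count(n) // (factorial(n) * 2 ** (n + 1))
    assert ratio >= 2 ** (lo * 2 ** lo)
```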
In conclusion, the fact that this algorithm cannot evaluate all MM Bent functions, and thus all functions derived using
the Bent concatenation method for odd values of $n$,
is a limitation compared to the parity decision method, which we note in the following remark.
\begin{remark}
\label{r:2}
The parity decision tree method in \cite{parity} evaluates all MM Bent functions on $n$ variables using $\lceil \frac{3n}{4} \rceil$ queries, whereas the algorithm described in this paper requires $\lceil \frac{5n}{8} \rceil$ queries but is able to evaluate only the MM Bent functions that meet the constraints described in Theorem~\ref{th:4}.
\end{remark}
While the family of algorithms designed by us evaluates a class of functions super-exponential in $\lfloor \frac{n}{4} \rfloor$,
with a query complexity lower than any known parity decision tree technique, it lacks in two areas.
The first is that we are unable to show that ${ QC_{\textrm{algo}}}(f)=Q_E(f)$ for these functions.
The second is that we are unable to show ${ QC_{\textrm{algo}}}(f) < D_{\oplus}^{(2)}(f)$ for any of these functions.
That is, we do not know if there exists a parity decision tree technique that can have the same
query complexity as the family of algorithms we have presented.
We have noted in Theorem~\ref{th:par2} that $D_{\oplus}(f)$ is lower bounded by the
granularity. It is known that MM type Bent functions have a flat Fourier spectrum,
with $\abs{\hat{f}(S)}=\frac{1}{2^{\frac{n}{2}}} ~\forall~ S \subseteq [n]$.
Therefore the granularity of any MM type Bent function is $\frac{n}{2}$,
which gives us a lower bound that we can show to be tight.
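The flatness of the spectrum can be verified directly for a small instance; the Python sketch below (helper names are ours) computes all Fourier coefficients of $f^{id}_4$ and checks that each has magnitude exactly $2^{-n/2} = \frac{1}{4}$:

```python
from itertools import product

n = 4

def f(x):  # f^id_4(x) = x1 x3 XOR x2 x4 (0-indexed below)
    return (x[0] & x[2]) ^ (x[1] & x[3])

def fourier(S):
    # hat{f}(S) = 2^{-n} * sum_x (-1)^{f(x) XOR S·x}
    total = sum((-1) ** (f(x) ^ (sum(s & xi for s, xi in zip(S, x)) % 2))
                for x in product((0, 1), repeat=n))
    return total / 2 ** n

# Bent: every Fourier coefficient has magnitude exactly 2^{-n/2} = 1/4.
for S in product((0, 1), repeat=n):
    assert abs(fourier(S)) == 0.25
```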
\section{Conclusion and Future Directions}
\label{sec:5}
In this paper we have designed a new family of exact quantum algorithms ($\Q$) for certain classes
of non-symmetric functions $f$ with query complexity ${ QC_{\textrm{algo}}}(f)$.
First we have described the class ${\sf pdsp}(n,\lceil \frac{3n}{4} \rceil,q)$ using perfect direct sum constructions
with products, and shown that for a set of $\Omega(2^{\frac{\sqrt{n}}{2}})$ functions in this class
we get $Q_E(f) = { QC_{\textrm{algo}}}(f) = \lfloor \frac{3n}{4} \rfloor$ with $D_{\oplus}(f) > \lfloor \frac{3n}{4} \rfloor$. For this set of functions we have
$\lfloor \frac{3n}{4} \rfloor+1 \leq D_{\oplus}(f) \leq n-1$, depending on the value of $q$ in ${\sf pdsp}(n, \lceil \frac{3n}{4} \rceil, q)$.
We have obtained this result by designing exact quantum query algorithms based on $\mathbb{F}_2$ polynomial
structure and then proving separation from the generalized parity complexity technique
by exploiting the high granularity of these functions.
In this regard we design a subroutine as described in Theorem~\ref{lemma:0} which un-entangles two qubits in
an entangled system with a single query, which allows us to obtain the said separations and is central to
our algorithms. It would be interesting to study if this subroutine can be modified to be more efficient in the
bounded error quantum query model.
In fact, we not only obtain advantage over the parity decision tree model in which the parity of two bits is
calculated in a single query, but also the stronger generalized parity decision tree model in which parity of
any number of bits can be calculated in a single query.
Using similar $\mathbb{F}_2$ polynomial based techniques we have also designed algorithms for a subclass of MM type Bent functions
(a variable XOR-ed with an MM Bent function when $n$ is odd) consisting of at least $\Omega(2^{2^{\lfloor \frac{n}{4} \rfloor}})$ functions that
are pairwise not PNP-equivalent, for any value of $n$. This family of algorithms has a query complexity of $\lceil \frac{5n}{8} \rceil$,
whereas the lowest query complexity of any known parity decision tree technique is $\lceil \frac{3n}{4} \rceil$.
While $\Q(f)$ is optimal for $f= x_1x_4 \oplus x_2x_5 \oplus x_3x_6$,
we could neither show ${ QC_{\textrm{algo}}}(f)=Q_E(f)$ nor that ${ QC_{\textrm{algo}}}(f)< {D_{\oplus}^{(2)}}(f)$
for these classes of functions, which we note down here as open problems.
\begin{enumerate}
\item Does there exist any parity based method that can evaluate functions from this subclass using
less than $\lceil \frac{3n}{4} \rceil$ queries?
\item What is the exact quantum query complexity of the functions in this class?
\end{enumerate}
Thus we design a family of algorithms that is more powerful than both the parity decision tree technique
and the generalized parity decision tree technique, and can be applied to a large class of non-symmetric
functions. In comparison, almost all the existing exact quantum query algorithms can only be applied to $poly(n)$
(mostly) symmetric functions.
It remains of interest to understand the extent to which these techniques can be applied and how they can be modified
to get optimal query complexity for other classes of Boolean functions, towards a better understanding of this domain.
\section{Introduction}
\IEEEPARstart{U}{rban} road systems contain a large number of intersections. At intersections, the trajectories of vehicles traveling from multiple directions conflict with each other, posing a risk of collisions. The introduction of traffic signals has contributed significantly to intersection collision avoidance, but it has also made intersections the bottleneck of the road traffic network. Delays will inevitably occur when vehicles encounter red lights, and the startup loss as well as the phase transition loss also lower the intersection capacity. Many previous studies have focused on minimizing delay by adapting signal timing according to the estimated traffic demand~\cite{Mirchandani2001,Lin2004}, but the implementation of these schemes is difficult and the potential improvement provided by these methods may be limited in a human-driven environment. \par
With the rapid development of artificial intelligence and wireless communications, connected automated vehicle (CAV) technology is considered to be one of the most promising fields in future transportation. CAVs are able to interact with other vehicles on the road as well as roadside facilities, leading to improved driving trajectories to minimize travel delay, fuel consumption and network throughput~\cite{Chen2016,Tajalli2018}. Moreover, benefiting from sensors installed onboard and inter-vehicle communication, vehicles can measure the headway in a more accurate and timely manner and thus make decisions much more effectively, allowing CAVs to maintain a shorter headway~\cite{Arem2006,Wang2018}. Due to all of the desirable characteristics of CAVs, the form of traffic organization, particularly at intersections, may experience revolutionary changes in the coming years. Studies performed to date have explored a variety of isolated intersection control methodologies, which can be roughly categorized into vehicle-based and phase-based schemes. Vehicle-based control determines the passing orders and trajectories for specific vehicles, while phase-based strategies are adaptive for typical traffic demands. Below, we provide a brief review of these control methods. \par
\subsection{Vehicle-based traffic control} \label{intro_SigFree_subsec}
Leveraging the connectivity and controllability of CAVs, intersection traffic signals can be totally abandoned, and centralized or decentralized controllers can be placed on intersections to detect the approaching CAV and arrange movements for it. Following this approach, various control concepts have been proposed. According to the study of Meng \textit{et al.}~\cite{Meng2018}, the signal-free vehicle-based intersection control can be categorized into two kinds: ``ad-hoc negotiation based''~\cite{Dresner2004,Dresner2008,Li2013} and ``planning based''~\cite{Zhu2015}. With the ad-hoc negotiation based methods, intersections are mainly organized under the rule of ``first come first served'' (FCFS)~\cite{Meng2018}, while planning based methods usually utilize optimization or searching approaches to determine the passing trajectories of multiple vehicles.\par
One of the earliest concepts of ad-hoc negotiation based intersection controls is to allow CAVs to make passing reservations while approaching an intersection, with the centralized intersection manager ensuring conflict-free outcomes. In this approach, Dresner and Stone~\cite{Dresner2004} proposed a multiagent FCFS intersection control policy, and it has been widely studied in subsequent work~\cite{Dresner2008,Fajardo2011}. The control divides the intersection into multiple tiles, and by applying the restriction that a given tile can only be occupied by one CAV at any given time, collision avoidance is realized. These studies claim to achieve a lower delay for the FCFS policy in a single intersection, and the simulation conducted in VISSIM by Li \textit{et al.}~\cite{Li2013} has also verified that the FCFS policy outperforms traditional signal control. According to the experiments in~\cite{Carlino2013}, the intersection efficiency could be further slightly improved by introducing an auction-based system. \par
In a different approach, planning based intersection control can handle multiple vehicles at the same time instead of arranging vehicle trajectories exactly in the arrival order. In this case, the controller would perceive vehicles that arrive within a certain time period and determine their passing orders and trajectories by solving an optimization problem. Some existing studies achieve collision avoidance by preventing vehicles with intersecting trajectories from being in the intersection concurrently~\cite{Zhu2015,Meng2018}. In~\cite{Zhu2015}, a discrete time model (linear programming formulation for autonomous intersection control, or LPAIC) is formulated to achieve minimum traffic delay in a 4-leg, 4-lane intersection, with the outputs from each direction satisfying the demands. Meng \textit{et al.}~\cite{Meng2018} studied a simpler intersection scenario with only one lane in each leg. Both planning based and ad-hoc negotiation based frameworks are adopted to organize the vehicle passing order, and the simulations show that the planning-based approach is superior to the ad-hoc negotiation based approach with respect to both average value and standard deviation of vehicle delay, particularly when the traffic demand is high. In a more elaborate method, Lee and Park~\cite{Lee2012} proposed a cooperative vehicle intersection control (CVIC) system to adjust the acceleration behavior of passing vehicles by solving nonlinear constrained programming problems to minimize the overlapping length of the intersected trajectories. Moreover, to achieve a higher traffic efficiency, some studies allow vehicles with intersecting trajectories to enter the intersection simultaneously, as long as they do not pass the intersected point at the same time~\cite{Levin2017Confilct,Levin2017On}. 
In the conflict point intersection control (CPIC) model~\cite{Levin2017Confilct}, the spatial trajectories of vehicles passing the intersection are predesigned, and the conflict points are accordingly defined as the nodes where two trajectories intercept. The model then introduces a mixed integer program to optimize the entering time and passing speed of each vehicle. \par
\subsection{Phase-based traffic control} \label{intro_Sig_subsec}
Phase-based traffic control alternates passing permissions among conflicting traffic movements to provide a safe as well as efficient intersection organization. In a control cycle, conflicting movements are instructed to pass the intersection in different phases, and the instructions are delivered to drivers using a set of signal lights. The assignment of phase lengths, in general, depends on the overall traffic demand and the demand distribution pattern among conflicting movements. One representative phase-based traffic control that has been implemented at current intersections is pretimed signalized control. Unlike actuated or semi-actuated signalized control, which uses detectors to adjust phase settings based on vehicles and pedestrians detected in real time, in pretimed signalized control the phase lengths are fixed based on typical traffic characteristics (e.g., traffic demands or the average headway). In certain cases, a fluctuation of the arrival rate in some directions might cause temporary queuing, but the queue eventually dissipates as long as the traffic flow characteristics remain stable. \par
In the era of autonomous driving, phase-based intersection control may develop in other forms with higher traffic efficiency. Although we still borrow the terminology of traditional signal control, e.g., pretimed phases and green/red lights, it should be noted that the actual traffic lights are not required; rather, phase-based traffic control under the CAV environment is essentially a “collective” approach of traffic organization that assigns passing allowances to a group of non-conflicting vehicle movements in the same period of time. From the vehicle’s perspective, dynamic speed advice ensuring that vehicles pass through the signalized intersection at the maximum allowable speed during the green duration has been verified to considerably reduce fuel~\cite{Trayford1984,Sanchez2006} or electricity~\cite{Wu2015} consumption. Moreover, in a pure CAV environment, the centralized traffic manager can ensure the nonstop passing of all vehicles, reducing the total phase transition losses of the intersection; as a result, the cycle length can be strongly decreased. \cite{Zhou2017} and~\cite{Ma2017} proposed a parsimonious shooting heuristic to optimize the detailed trajectories of multiple CAVs approaching an intersection simultaneously, and following this research direction, Li \textit{et al.}~\cite{Li2018} simplified the trajectory optimization approach and lowered the computational complexity while preserving most of the desirable features of the former model. These studies reveal that the pretimed phase-based strategy can also be a practical intersection control method in the coming autonomous driving era; compared to its vehicle-based counterpart, phase-based traffic control is easier to implement, and the computational burden placed on the controller is strongly alleviated since it is not necessary to solve complicated mathematical programming problems. \par
\vspace{3 ex}
Only limited work comparing the performance characteristics of the above two control philosophies (i.e., the vehicle-based control and the phase-based control) has been reported. Most previous studies have failed to consider the use of CAV technologies to improve the performance of the signalized control~\cite{Dresner2008,Li2013}, leading to an underestimation of its potential to some extent. Recently, some researchers have observed that the vehicle-based control cannot always outperform the conventional signal control. Levin \textit{et al.}~\cite{Levin2016} identified traffic scenarios in which signalized intersection controls outperform vehicle-based controls, and Patel \textit{et al.}~\cite{Patel2019} investigated the optimal placement of vehicle-based and signalized intersections in urban networks. Nevertheless, the existing comparisons have been performed mostly under limited traffic scenarios, which cannot produce convincing and comprehensive conclusions. \par
Intuitively, compared to phase-based controls such as the signalized control, vehicle-based controls induce more “crossing-type interactions”, i.e., two vehicles on two conflicting lanes pass through the conflict point consecutively, and the crossing-type interaction generally requires a large time gap to ensure safety \cite{Yu2019}. Based on this intuition, the use of vehicle-based controls may lead to lower vehicle delays for light traffic, but one can naturally question their capabilities in a high traffic environment. For a fair comparison of different intersection control schemes in the era of CAVs, this study conducts numerical analyses on three intersection control protocols, i.e., pretimed phase-based control (PPC), vehicle-based control with the FCFS strategy and vehicle-based control with the CPIC strategy \cite{Levin2017Confilct}, under heterogeneous traffic demand patterns and intersection layouts. Specifically, in the PPC strategy, the approaching vehicles’ trajectories are adjusted to coordinate the green phases; the CPIC strategy requires the solution of mixed-integer programming models in real-time fashion, and to simulate its practical usage, we only allow a limited computational time. The testing scenarios include a variety of demand patterns from light to heavy, from balanced to imbalanced (in terms of arrival rates from different legs), and from stable to fluctuating. The tests are conducted with different intersection layouts, including four-leg and three-leg intersections. Additionally, we also incorporate scenarios with different levels of technological maturity to validate the performance characteristics of the two control philosophies. \par
The remainder of this paper is organized as follows. Section \ref{method_sec} describes the control models compared in this paper. In Section \ref{simulation_sec}, we simulate the control models under various traffic demand scenarios and intersection layouts and describe the simulation results. Finally, we conclude by presenting our findings in Section \ref{concluding_sec}.\par
\section{Control strategies} \label{method_sec}
This study focuses on control strategy comparisons for an isolated intersection under the CAV environment. As shown in Fig. \ref{Fig1}, the investigated intersection area includes the intersection core (the conflict area) and the coordinating area with a length of several hundred meters for each branch. Within this segment, the managing center can obtain the current state and travel intention of all vehicles through roadside sensors or V2I communication and then organize the movements of the vehicles. The center calculates a desired trajectory for each vehicle using a specific control model and sends the trajectory to the CAV. We assume that all involved vehicles are capable of understanding and following trajectories within a limited error. Under this setting, we present a thorough comparison among three control models for isolated intersections: a pretimed phase-based traffic control model and two vehicle-based models, namely, the first-come-first-serve (FCFS) control~\cite{Dresner2008} and the conflict point intersection control (CPIC)~\cite{Levin2017Confilct}. FCFS is relatively simple and intuitive, while the latter control is expected to obtain better solutions at a much higher computational cost (since it requires the solution of a mixed-integer program). In this section, a brief description of the comparison framework and the three intersection control models is provided. \par
\begin{figure}[!t]
\centerline{\includegraphics[width=0.45\textwidth]{Figures/Fig1.png}}
\caption{Layout of the control segment\label{Fig1}}
\end{figure}
\subsection{Comparison frameworks} \label{method_Lot_subsec}
To fairly compare the three intersection control models, we must guarantee the equality of the parameters involved in the models. Vehicles in the three models share the same performance characteristics, for example, the maximum acceleration/deceleration rate and the reaction time. Vehicles also enter the area at the same cruising velocity, and the speed limits are set equally. In addition, the intersection layouts, including the length of the coordinating area and the size of the conflict area, are set identically. \par
We also guarantee a similar level of safety. The compared models use the same safe-gap value in the conflict area; specifically, we adopt different methods to enforce the safe distance. In the phase-based model, we limit the minimum spatial headway between a following vehicle and its preceding vehicle; the time gap between phases is also considered to avoid collisions among vehicles from different directions. In the vehicle-based models, we expand the longitudinal size of vehicles through spatial buffers, and by that means we preserve the safe gap between vehicles. In the coordinating area, vehicles should also maintain a minimum safe gap. We introduce Gipps' safe distance rule~\cite{Gipps1981} to avoid rear-end collisions. For simplification, we assume that all vehicles enter the coordinating area on the lane corresponding to their desired travel direction, so that no lane changes are conducted. \par
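As an illustrative sketch only, the rear-end safety condition underlying such a rule can be expressed as a stopping-distance comparison. The function name and the simplifying assumption that both vehicles brake at the same maximum rate are ours, not part of Gipps' original formulation:

```python
def is_gap_safe(gap, v_follow, v_lead, decel, tau):
    """Simplified Gipps-style safe-distance check: the follower, after a
    reaction time tau, must be able to brake at rate `decel` and stop
    behind the leader, who may also brake at `decel` at any moment.
    All quantities in SI units (m, m/s, m/s^2, s)."""
    stop_follow = v_follow * tau + v_follow**2 / (2 * decel)  # follower stopping distance
    stop_lead = v_lead**2 / (2 * decel)                       # leader stopping distance
    return gap >= stop_follow - stop_lead
```

With the paper's parameters (18 m/s cruise speed, 1.5 m/s\(^2\) deceleration, 0.2 s reaction time), two vehicles at equal speed only need a gap covering the reaction-time travel, while a fast follower behind a slow leader needs a much larger gap.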
A detailed introduction of these control models is presented in the following subsections. \par
\subsection{Pretimed phase-based control} \label{method_Sig_subsec}
The first control discussed is the pretimed phase-based control. Under the CAV environment, we can rely on virtual signal lights to specify which vehicles receive permission to pass the intersection. Similar to conventional signalized intersection control, the compared control has pretimed phase settings (4 phases for 4-leg intersections and 3 phases for 3-leg intersections), and a vehicle is allowed to pass through the intersection only when its ``signal'' is green. The green time in each phase is allocated within the cycle and is adapted to the traffic demand and average headway. The optimal cycle length is determined through a grid search by conducting a series of simulations. For each arrival pattern, we generated 10 arrival sequences and simulated them with different cycle lengths: from 16 s to 120 s for 4-leg intersections and from 12 s to 90 s for T-type junctions. The cycle length with the lowest average delay is then selected as the optimal cycle length for this traffic pattern. \par
Under the CAV environment, the phase settings can be sent to CAVs in advance, allowing them to adjust their speed and match their entrance to the intersection with green lights. A desirable trajectory provides smooth acceleration and deceleration for the vehicle, lowering fuel consumption and enhancing comfort; additionally, the vehicle should arrive at the intersection when the light is green and pass through it at a high speed to avoid startup loss and improve the intersection efficiency. To meet these requirements, the trajectory optimization model proposed in~\cite{Zhou2017} and~\cite{Li2018} is introduced to optimize the vehicle trajectories in the coordinating area. \par
Given the length $L$ of the coordinating area and the entry time $t_i$, we first determine $t_o$, which is the time when the vehicle leaves the coordinating area and enters the intersection. Clearly, $t_o$ cannot be smaller than $t_i + d_{min}$, where $d_{min}$ is the minimum passing time of the area constrained by the entry velocity $\overline{v}$, the maximum allowable velocity in the conflict area $v_o$, and the acceleration boundaries $\underline{a}$ and $\overline{a}$, where $\underline{a} < 0 < \overline{a}$. In addition, some factors may further limit the feasible interval of $t_o$. The safety concern, which forces two adjacent vehicles to maintain a spatial gap, is the major limitation in this case. Meanwhile, vehicles cannot enter the intersection during red phases, so $t_o$ must be the earliest feasible time during a green phase.
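The requirement that $t_o$ fall in a green window of the pretimed cycle can be sketched as a small helper. This is our own illustrative code, assuming a single green window per cycle described by its start and end offsets; the paper's actual phase bookkeeping may differ:

```python
def earliest_green_entry(t_earliest, green_start, green_end, cycle):
    """Return the earliest time >= t_earliest that falls inside the
    vehicle's green window [green_start, green_end) of a pretimed cycle.
    All arguments are in seconds; green_start/green_end are offsets
    within one cycle of length `cycle`."""
    phase_time = t_earliest % cycle
    if green_start <= phase_time < green_end:
        return t_earliest                                   # already in green
    if phase_time < green_start:
        return t_earliest + (green_start - phase_time)      # wait within this cycle
    # Green has already ended in this cycle; wait for the next cycle's green.
    return t_earliest + (cycle - phase_time) + green_start
```

For example, with a 40 s cycle whose green window is $[0, 10)$, a vehicle ready at $t=12$ s must wait until $t=40$ s, the start of the next green.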
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig2.png}
\caption{Five quadratic segments of a vehicle trajectory}
\label{Fig2}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.4\textwidth]{Figures/Fig3a.png}}\\
\subfloat[][]{\includegraphics[width=0.4\textwidth]{Figures/Fig3b.png}}
\caption[]{Some possible trajectories with the same $t_i$ and $t_o$}
\label{Fig3}
\end{figure}
Given $t_i$ and $t_o$, the determination of the entire vehicle trajectory is still difficult because this determination is an infinite-dimension problem. To simplify the planning process, all of the vehicles are arranged for a trajectory with five quadratic segments according to the method proposed in~\cite{Ma2017}. As illustrated in Fig. \ref{Fig2}, $t_1 \leq t_2 \leq t_3 \leq t_4 \in {[t_i,t_o]}$ denote the joint moments between segments. Vehicles first cruise at the entrance speed $\overline{v}$ in time interval $[t_i,t_1]$ and then decelerate at a constant deceleration rate $\underline{a}$ during $(t_1,t_2]$. In some cases, vehicles have to stop completely at $t_2$, and the length of $(t_2,t_3]$ denotes the duration that vehicles must remain stationary. Otherwise, $t_2$ is equal to $t_3$, and the third segment does not exist. Then, during the next segment starting at $t_3$, vehicles accelerate at $\overline{a}$ until their velocities reach the leaving speed $v_o$ at $t_4$. In the fifth segment during $(t_4,t_o]$, vehicles cruise at $v_o$ and enter the intersection at $t_o$. \par
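For the special case $t_2=t_3=t_4=t_o$ described below (cruise at $\overline{v}$, then a single deceleration segment reaching $v_o$ exactly at the exit), the minimum passing time $d_{min}$ follows from elementary kinematics. The following is a minimal sketch under the assumption $v_o \leq \overline{v}$, with a function name of our choosing:

```python
def min_passing_time(L, v_in, v_out, decel):
    """Minimum time to traverse the coordinating area of length L,
    entering at v_in and leaving at v_out <= v_in, using one
    constant-deceleration segment of magnitude `decel` placed at the end
    (the t2 = t3 = t4 = t_o case of the five-segment trajectory)."""
    t_dec = (v_in - v_out) / decel                 # duration of deceleration
    d_dec = (v_in**2 - v_out**2) / (2.0 * decel)   # distance covered while decelerating
    assert d_dec <= L, "coordinating area too short to slow down to v_out"
    t_cruise = (L - d_dec) / v_in                  # cruise at entry speed first
    return t_cruise + t_dec
```

With the paper's parameters ($L=600$ m, $\overline{v}=18$ m/s, $v_o=15$ m/s, $|\underline{a}|=1.5$ m/s$^2$), this gives $d_{min}=33.5$ s for a through vehicle.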
It is straightforward that when the trajectory satisfies $t_2=t_3=t_4=t_o$, as shown in Fig. \ref{Fig3}(a), the travel time $d$ reaches its minimum and equals $d_{min}$. When $d = d_{min}$, the driving trajectory has a unique solution. However, when $d>d_{min}$, the trajectory cannot be uniquely determined. Fig. \ref{Fig3}(b) illustrates some feasible trajectories with the same $t_i$ and $t_o$. \par
\begin{figure*}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.395\textwidth]{Figures/Fig4a.png}}
\subfloat[][]{\includegraphics[width=0.515\textwidth]{Figures/Fig4b.png}}
\caption[]{Feasible region of the vehicle trajectory under the constraints of the preceding vehicle and signal lights}
\label{Fig4}
\end{figure*}
Considering the safety constraint of the preceding vehicle in the same lane, the feasible region of the trajectory is further reduced. As illustrated in Fig. \ref{Fig4}, the upper boundary of the feasible trajectories is shown by the solid line, in which the distance from the preceding vehicle (for which the trajectory is shown in the dot dash line) at any time is equal to the minimum gap. Intuitively, in the selected trajectory, the deceleration time $t_1$ should be pushed back as much as possible to leave more viable space for the following vehicles. Thus, the optimal trajectory will be the one that is tangent to or coincident with the upper boundary at some point. In Fig. \ref{Fig4}(a), the selected trajectory is shown as the dashed line. Fig. \ref{Fig4}(b) adds the constraint of signal phases, delaying the entry of the vehicle to the intersection.
The calculation of the optimal trajectory exploits the properties of the quadratic function, which is cumbersome but not difficult. The optimal trajectory may be tangent to the upper boundary in the third segment or coincident with the boundary in the fourth and/or the fifth segment. The detailed calculation procedure for generating the trajectory is omitted in this paper.
\subsection{First-come-first-serve control} \label{method_FCFS_subsec}
For the ad-hoc negotiation based intersection control strategy, the \textit{first-come-first-serve} (FCFS) control~\cite{Dresner2008} is one of the most seminal works in the literature. The core idea of this control is to divide the intersection area into multiple tiles, so that the occupation of road space can be modeled as the occupation of tiles, reducing the variables used to describe the state of the intersection. In this control model, the safety gap is represented as an expanded area around vehicles, and the determination of occupied tiles is based on the expanded vehicle size. \par
Whenever a CAV enters the coordinating area, the roadside traffic manager tries to reserve a feasible trajectory for it in which the occupied tiles do not coincide with the tiles reserved for previous vehicles at any moment. To avoid excessive calculation, the original model in~\cite{Dresner2008} only tested two trajectories for each reservation. The first trajectory allows the vehicle to pass through the control section at the highest speed, which is $\overline{v}$ in the coordinating area and $v_o$ at the intersection. This trajectory causes no delay for the vehicle. The other trajectory guides the vehicle to pass through the intersection at its current speed. However, this trajectory may be inefficient because, in some cases when vehicles move at a low speed, the intersection is occupied for a longer time. To address this issue, we adjust the setting of the second trajectory to guarantee a high speed while passing through the intersection. When the first trajectory is revealed to be infeasible, we test the feasibility of a trajectory in which the vehicle passes through the coordinating area at a slightly lower speed, for example, $2\overline{v}/3$. The deceleration pushes back the time of entry to the intersection, enabling the vehicle to pass through the intersection at the maximum allowable speed $v_o$ without conflicts. If both trajectories are infeasible, the vehicle decelerates at the maximum rate $\underline{a}$. We allow vehicles that do not have confirmed reservations to drive through the coordinating area at a safe speed $\underline{v}$ until they are stopped by queuing vehicles in front of the intersection or finally have their reservations confirmed. All vehicles without confirmed reservations send new requests at a certain frequency, which is higher when vehicles are closer to the intersection and lower when the requests are less urgent. A possible speed-time curve of a vehicle is illustrated in Fig. \ref{Fig5}. \par
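The tile-reservation core of the FCFS policy can be sketched as an all-or-nothing booking over (tile, timestep) cells. This is an illustrative reduction of the mechanism, with data structures and names of our own choosing rather than the original implementation:

```python
def try_reserve(reservations, vehicle_id, occupancy):
    """FCFS-style tile reservation. `reservations` maps (tile, timestep)
    cells to the vehicle holding them; `occupancy` lists the cells a
    candidate trajectory would occupy. All cells are booked only if none
    is already taken; otherwise nothing changes and failure is reported."""
    if any(cell in reservations for cell in occupancy):
        return False                      # conflicts with an earlier reservation
    for cell in occupancy:
        reservations[cell] = vehicle_id   # book every tile-time cell atomically
    return True
```

The manager would call `try_reserve` once per candidate trajectory (full speed first, then the reduced-speed variant), falling back to deceleration when both calls fail.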
\begin{figure*}[!t]
\centering
\includegraphics[width=0.7\textwidth]{Figures/Fig5.png}
\caption{Possible speed-time curve of a vehicle under the FCFS strategy}
\label{Fig5}
\end{figure*}
\subsection{Conflict point intersection control} \label{method_CPIC_subsec}
Different from the FCFS control, planning-based intersection control considers multiple vehicles simultaneously in a dynamic fashion. In this category of control, an optimization problem is usually formulated to determine the intersection passing strategies for vehicles, with an objective function that minimizes delay or fuel consumption and constraints such as speed limits and collision avoidance. In addition, a rolling horizon model is generally adopted in dynamic traffic scenarios for real-time implementation~\cite{Meng2018}. Theoretically, planning-based control is considered to outperform ad-hoc negotiation based control because the strategies adopted in ad-hoc negotiation based control are always feasible solutions of the planning-based control; however, constrained by the heavy computational burden of the optimization procedure, the performance of planning-based control can only be suboptimal in reality. \par
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig6.jpg}
\caption{Determination of the safety buffer size}
\label{Fig6}
\end{figure}
Our realization of planning-based control is mainly based on the conflict point intersection control (CPIC) model proposed in~\cite{Levin2017Confilct}. The conflict-point based collision-avoidance method was previously developed for aircraft management at airports~\cite{Liang2018} and in the open air~\cite{Rey2016}; in the field of intersection control, Levin and Rey~\cite{Levin2017Confilct} define conflict points as locations where the trajectories of vehicles traveling from different directions intersect. The CPIC model then formulates an MILP problem to optimize the trajectories of passing vehicles. Under the assumption that vehicles maintain a constant speed at the intersection, we only need to optimize two parameters for each vehicle, namely, $t_i(r_i^-)$ and $t_i(r_i^+)$, which denote the intersection entry and exit times of vehicle $i$, respectively. \par
To maintain the consistency of the comparison settings, the dynamic buffer size that varies with the vehicle speed in the original model~\cite{Levin2017Confilct} is replaced by a static safety gap in this paper. To guarantee the minimum gap, an extra longitudinal buffer $s'$ is added to each vehicle. As illustrated in Fig. \ref{Fig6}, given the safety gap $s$ between vehicles and the angle $\theta$ of the trajectories of vehicle $i,j$ on conflict point $c$, the buffer size $s'$ can be derived as
\begin{equation}
s' = \frac{s/2}{\cos{(\theta/2)}}+\frac{W_0/2}{\tan{[(\pi-\theta)/2]}}
\label{eq1}
\end{equation}
\noindent where $W_0$ is the width of vehicle $i$. Note that Eq.(\ref{eq1}) also holds when vehicles $i,j$ share the same trajectory, in which case $\theta$ equals zero. The formulation of the problem is as follows.\footnote{For conciseness, the problem formulation is only briefly presented here. Readers may refer to~\cite{Levin2017Confilct} for a more detailed description. } \par
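As a quick numerical check of Eq.(\ref{eq1}), the following sketch evaluates the buffer for the paper's vehicle width ($W_0 = 1.8$ m) and minimum gap ($s = 1.0$ m); the function name is ours:

```python
import math

def buffer_size(s, theta, W0):
    """Longitudinal safety buffer s' of Eq.(1): safety gap s between
    vehicles, crossing angle theta (radians) of the two trajectories at
    the conflict point, and vehicle width W0."""
    return (s / 2) / math.cos(theta / 2) + (W0 / 2) / math.tan((math.pi - theta) / 2)
```

For $\theta = 0$ (same trajectory) the second term vanishes and each vehicle contributes $s/2 = 0.5$ m, so two adjacent buffers recover the full gap $s$; for a perpendicular crossing ($\theta = \pi/2$) the buffer grows to roughly $1.61$ m.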
To begin, the objective function is:
\begin{equation}
\min \sum_{i}t_i(r_i^+)
\label{eq2}
\end{equation}
\noindent which minimizes the sum of the exit times of all vehicles. Since the arrival time is fixed for a given vehicle, the objective function also minimizes the total delay. Eqs.(\ref{eq3})-(\ref{eq9}) are the constraints, including the speed limits, the first-in-first-out (FIFO) conditions and the collision-avoidance conditions. First, we have \par
\begin{equation}
t_i(r_i^-) \geq e_i
\label{eq3}
\end{equation}
This constraint ensures that vehicle $i$ cannot enter the intersection before its arrival time $e_i$. \par
\begin{equation}
t_i(r_i^-) + \tau_i(r_i^-) \leq t_j(r_j^-)
\label{eq4}
\end{equation}
\begin{equation}
t_i(r_i^+) + \tau_i(r_i^+) \leq t_j(r_j^+)
\label{eq5}
\end{equation}
Eqs.(\ref{eq4}) and (\ref{eq5}) guarantee the FIFO constraint for all vehicles $i$,$j$ that have the same spatial trajectories, \noindent where
\begin{equation}
\tau_i(c) = \frac{L_i(c) \cdot (t_i(r_i^+) - t_i(r_i^-))}{d(r_i^-,r_i^+)}
\label{eq6}
\end{equation}\par
\noindent denotes the time duration that vehicle $i$ occupies conflict point $c$. In Eq.(\ref{eq6}), $L_i(c)$ denotes the vehicle length with the safety buffer included; that is, $L_i(c)=L_0+2s'$, where $L_0$ is the physical length of the vehicle, and $s'$ is the safety buffer required for conflict point $c$.
\begin{equation}
\frac{d_i(r_i^-,r_i^+)}{\overline{U_i}} \leq t_i(r_i^+) - t_i(r_i^-) \leq \frac{d_i(r_i^-,r_i^+)}{\underline{U_i}}
\label{eq7}
\end{equation}
Eq.(\ref{eq7}) provides the upper ($\overline{U_i}$) and lower ($\underline{U_i}$) speed limits for vehicles. For every pair of vehicles $i$,$j$ that come from different directions and for which their trajectories intersect at the conflict point $c$, we have \par
\begin{equation}
t_i(c) + \tau_i(c) - t_j(c) \leq (1 - \delta_{ij}(c))M_{ij}
\label{eq8}
\end{equation}
\noindent where $M_{ij}$ is a large number, and $\delta_{ij}(c)$ is a binary variable representing the passing order of vehicles $i$,$j$ at point $c$. If $j$ enters $c$ after $i$ has left, then $\delta_{ij}(c)=1$ and $\delta_{ji}(c)=0$, and vice versa. Therefore, we have \par
\begin{equation}
\delta_{ij}(c) + \delta_{ji}(c) = 1
\label{eq9}
\end{equation}
Eqs.(\ref{eq2})-(\ref{eq9}) formulate the MILP problem involved in the CPIC model. Because the number of binary variables $\delta_{ij}$ is twice the number of vehicle pairs whose paths share a conflict point, obtaining a solution of this problem is quite time-consuming. According to the simulation experiment conducted in~\cite{Levin2017Confilct}, the trajectory arrangement for no more than 30 vehicles can be completed in real time. Therefore, the rolling horizon framework is also adopted in the CPIC to limit the number of vehicles, ensuring that the model is feasible for use in a real traffic system. \par
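The combinatorial core of the MILP, choosing the $\delta_{ij}$ passing order at each conflict point, can be illustrated on a tiny instance by brute-force enumeration instead of a solver. This is our own simplified stand-in (one shared conflict point, fixed occupancy duration), not the CPIC implementation, and it is tractable only for a handful of vehicles:

```python
from itertools import permutations

def best_passing_order(arrivals, tau):
    """Enumerate every passing order at a single conflict point (i.e., all
    assignments of the binary delta variables) and keep the schedule that
    minimizes the total exit time, mirroring objective Eq.(2).
    `arrivals[i]` is the earliest time vehicle i can reach the point;
    `tau` is the occupancy duration of Eq.(6), assumed equal for all."""
    best = None
    for order in permutations(range(len(arrivals))):
        t, total, entries = 0.0, 0.0, {}
        for i in order:
            entry = max(arrivals[i], t)   # wait until the point is free
            entries[i] = entry
            t = entry + tau               # point stays blocked for tau
            total += t                    # accumulate vehicle i's exit time
        if best is None or total < best[0]:
            best = (total, order, entries)
    return best
```

For two vehicles arriving at 0.0 s and 0.5 s with $\tau = 1$ s, serving them in arrival order yields a total exit time of 3.0 s, beating the reversed order (4.0 s), which is why the MILP does not always reduce to FCFS only when arrivals interleave more intricately.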
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig7.png}
\caption{Two control lines in the rolling horizon framework}
\label{Fig7}
\end{figure}
The rolling horizon framework is described as follows. As illustrated in Fig. \ref{Fig7}, the framework includes two control lines: the outer line is regarded as the “vision” of the manager, and the inner line marks the minimum safe distance within which a vehicle can no longer adjust its speed to pass through the intersection. The intersection manager receives or detects the location and velocity of all approaching CAVs at every time step, and when a vehicle passes the first control line, it is added to the optimization set. In each time step, the traffic manager updates the members of the optimization set, and then the MILP problem is formulated to find the optimal trajectories for the vehicles. Following the “as late as possible” (ALAP) rule in~\cite{Levin2017Confilct}, the trajectory of each vehicle remains adjustable until it passes the second control line. Passed vehicles remain in the optimization set, constraining the feasible trajectories of other vehicles, until they have traveled so far that they cannot have any impact on the vehicles whose trajectories have not yet been determined. \par
While the position of the inner control line is determined by the speed limits of the coordinating area and the intersection, the position of the outer line should be carefully analyzed. If the gap between the two lines is too large, too many vehicles will be involved in the optimization, and real-time trajectory allocation will be impossible; on the other hand, the quality of the solution will be degraded by a narrow gap. In the simulations of this paper, the outer control line is set as far from the intersection as possible, on the condition that the optimization can be completed quickly enough. \par
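The per-step bookkeeping between the two control lines can be sketched as follows; the function, the distance-based representation, and the `frozen` set are our own illustrative simplification of the ALAP rule:

```python
def update_optimization_set(vehicles, outer_line, inner_line, frozen):
    """Rolling-horizon bookkeeping. `vehicles` maps id -> distance (m) to
    the intersection. Vehicles between the outer and inner control lines
    remain re-optimizable; once a vehicle passes the inner line it is
    frozen (the ALAP rule) but still constrains the others."""
    adjustable, constraints = [], []
    for vid, dist in vehicles.items():
        if dist > outer_line:
            continue                      # not yet visible to the manager
        if dist > inner_line and vid not in frozen:
            adjustable.append(vid)        # may still be re-planned this step
        else:
            frozen.add(vid)
            constraints.append(vid)       # fixed trajectory, acts as a constraint
    return adjustable, constraints
```

Removing fully departed vehicles from `frozen` once they can no longer influence any undecided trajectory would complete the cycle described above.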
\section{Comparisons of numerical simulations} \label{simulation_sec}
In this section, we present the results of numerical simulations of the three control models under a variety of scenarios. By using heterogeneous intersection scenarios, we assess the model performance characteristics under different traffic demand patterns and intersection layouts. We first test the model performance characteristics at a symmetric 4-leg intersection (scenarios \textbf{1-A} and \textbf{1-B}). Then, in scenarios \textbf{2-A} and \textbf{2-B}, two legs in opposite directions are narrowed to secondary roads. Similar to the 4-leg intersection, scenarios \textbf{3-A} and \textbf{3-B} test the performance at a T-type junction. For each intersection layout, we run the simulation under both balanced and imbalanced traffic demand. Moreover, we explore the possible impact of fluctuating vehicle arrival sequences (in scenarios \textbf{4-A} and \textbf{4-B}) and different safety buffers (in scenarios \textbf{5-A} and \textbf{5-B}). The numerical simulations were coded in MATLAB and conducted on a personal computer with an AMD Ryzen 5 2600 CPU and 16 GB of RAM. \par
For an unbiased comparison of the performance of the vehicle-based and phase-based traffic control, we ensure that each control strategy shares exactly the same traffic scenario and environmental variables. The time step as well as the reaction time in the simulations is 0.2 seconds. The length of the coordinating area is 600 m. Vehicles enter the area at the maximum allowable cruising speed ($\overline{v}$), which is 18 m/s in the simulations; the minimum speed $\underline{v}$ along the coordinating area is set to 5 m/s. When passing through the intersection, the speed limit $v_o$ is 15 m/s for through vehicles and 10 m/s for left-turn vehicles, while right turns are omitted from the model due to their negligible influence on the intersection traffic. The maximum acceleration and deceleration rates are 1.5 $m/s^2$. In the phase-based traffic control, the phase transition loss between two consecutive phases is 3 seconds. All of the vehicles are cars with dimensions of 4 m $\times$ 1.8 m. To simplify the model, we adopt a static buffer size that does not vary with vehicle speed. In scenarios \textbf{1}, \textbf{2}, \textbf{3} and \textbf{4}, the minimum spatial gap between any two vehicles is 1.0 m, and we also test the performance of the control models under different safety gap settings of 4.0 and 8.0 m in scenarios \textbf{5-A} and \textbf{5-B}.
\subsection{Traffic generation} \label{simulation_TraffGen_subsec}
In the simulations, we use $\lambda_0$ to describe the traffic demand volume, which denotes the average number of total arrivals per second over all lanes. In each scenario, $\lambda_0$ varies from 0.1 to 4.0, representing total traffic volumes from 360 to 14,400 vehicles per hour (vph). In addition to the total volume, we also specify the distribution pattern of traffic demands by setting the vehicle distribution ratio $r_i$ on lane $i$, where $\sum_ir_i=1$. The vehicle arrival rate of lane $i$ is determined by Eq.(\ref{eq10}). \par
\begin{equation}
\lambda_i=\lambda_0r_i
\label{eq10}
\end{equation}
For each vehicle arrival vector \textbf{$\lambda$}, 10 realizations of vehicle arrivals over 65 minutes are randomly generated. As suggested in~\cite{Korkmaz2010}, we assume that the headway follows a shifted exponential distribution, with the minimum following headway set to 1.0 s. To approximately describe the arrival patterns, we discretize time into steps of 0.2 s; in each time step, the probability that a new arrival is generated, i.e., $p_i$, is determined by Eq.(\ref{eq11}) (as long as the time gap to the previous vehicle is no less than 1.0 s). By Eq.(\ref{eq11}), vehicles are generated following an approximate Poisson process with a minimum gap of 1.0 s.
\begin{equation}
p_i = \frac{0.2\lambda_i}{1-\lambda_i}
\label{eq11}
\end{equation}
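The generation procedure above can be sketched in a few lines of Python. This is a simplified illustration rather than the actual simulator code: the per-step probability follows Eq.~(\ref{eq11}), generalized here to an arbitrary time step and minimum headway, which is our reading of the procedure.

```python
import random

def generate_arrivals(lam, horizon_s, dt=0.2, min_headway=1.0, seed=0):
    """Generate arrival times (s) on one lane.

    In each time step of length dt, a new vehicle is generated with
    probability p = dt*lam / (1 - lam*min_headway), but only if the gap
    to the previous vehicle is at least min_headway.  This approximates
    a shifted exponential headway process with rate lam (requires
    lam * min_headway < 1).
    """
    rng = random.Random(seed)
    p = dt * lam / (1.0 - lam * min_headway)
    min_steps = round(min_headway / dt)     # steps to cover the minimum gap
    arrivals, last_n = [], -min_steps
    for n in range(int(horizon_s / dt)):
        if n - last_n >= min_steps and rng.random() < p:
            arrivals.append(n * dt)
            last_n = n
    return arrivals
```

For example, with $\lambda_i=0.2$ the realized arrival rate stays close to the target and no headway falls below 1.0 s.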
With the generated arrival realizations, the three control models are tested, and the differences between the simulated travel times and the free-flow times of all of the vehicles are aggregated as the total delay. Vehicle delays during the first 5 min are not counted in the simulation. In some extreme cases, the queuing vehicles may spill back to the head of the coordinating area and therefore block new arrivals. If the generation of a vehicle is blocked, its entrance is postponed and the waiting time is also counted in the total delay. \par
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.35\textwidth]{Figures/Fig8a.png}}\\
\subfloat[][]{\includegraphics[width=0.35\textwidth]{Figures/Fig8b.png}}
\caption[]{Intersection layouts and demand patterns under scenarios \textbf{1-A}(a) and \textbf{1-B}(b). The numbers on the lane represent the demand level of the lane.}
\label{Fig8}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph01.png}}\\
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph02.png}}
\caption[]{Simulation results of scenarios \textbf{1-A}(a) and \textbf{1-B}(b)}
\label{Fig9}
\end{figure}
\subsection{Symmetric 4-leg intersection} \label{simulation_Sce1_subsec}
As shown in Fig. \ref{Fig8}, we model a 4-leg intersection with 6 lanes in each leg, among which 4 lanes approach the intersection and 2 lanes depart. The lane width is set to 3 m. As illustrated, the lanes are numbered from 1 to 24. The possible routes between the lanes are fixed; for example, a vehicle that arrives from the south and intends to make a left turn heading west will enter the intersection from lane 12 and leave at lane 23. The right-turn movements are omitted in this study because they have no direct conflict with other movements, indicating that no vehicle is generated from lanes 1, 5, 9 and 13. Two traffic demand patterns are tested under this intersection structure: a balanced demand pattern and an imbalanced demand pattern. \par
In the balanced demand pattern (scenario \textbf{1-A}), the arrival rates of all of the lanes are equally set as $1/12$ of the total arrival rate $\lambda_0$. Therefore, we have $r_i=1/12$ for $i \in \{2,3,4,6,7,8,10,11,12,14,15,16\}$ and $r_i=0$ otherwise, as shown in Fig. \ref{Fig8}(a). For every demand level $\lambda_0$ from 0.1 to 4.0, Fig. \ref{Fig9}(a) presents the average delay over 10 experiments, with the 25th and 75th percentiles indicated by the colored region around the curve. The comparisons across the intersection control models (ad-hoc negotiation based FCFS, planning-based CPIC, and the phase-based PPC strategy) show notable differences in the average delay, particularly when traffic demand is high. \par
The simulation results of the PPC strategy show a remarkable improvement in traffic efficiency. Benefiting from the reduced headway and start-up losses, the cycle length of the improved control can be greatly shortened, decreasing the average delay of passing vehicles. When traffic demand is quite low, a green time proportion of 25\% is verified to be sufficient for clearing queued vehicles, and the average delay is therefore decreased to no more than 8 s. The two vehicle-based traffic control models show even better performance under low traffic volume. Most vehicles can maintain the maximum speed when passing the intersection, experiencing negligible delays. At most traffic demand levels, the planning-based CPIC strategy outperforms the ad-hoc negotiation based FCFS strategy, since the FCFS solution always lies within the feasible region of the CPIC problem. However, it may seem counter-intuitive that CPIC exhibits a slightly larger delay than FCFS under extremely low traffic demand. This is due to the approximation we impose to eliminate nonlinear constraints, which forces the vehicles to slow down slightly while approaching the intersection. \par
As the traffic demand increases, the average delays incurred by the three control models all increase. For the PPC strategy, a higher proportion of green time is required, leading to longer cycle lengths and consequently a higher average delay. However, the increase in the average delay is quite small compared to that of the FCFS and CPIC strategies. When the traffic demand is 4.0 vehicles per second (or 14,400 vph), the numerical simulations show that the average delay is 14.131 s under an optimal cycle length of 30 s. On the other hand, as shown in Fig. \ref{Fig9}(a), the vehicle-based traffic control models show different curves. In the FCFS control, the average delay for $\lambda_0=2.0$ is 8.178 s, while at $\lambda_0=2.1$, the delay increases to 118.890 s. At the end of the simulation, we observed a maximum delay of 262.130 s, indicating that the vehicle queue is constantly growing. Similar phenomena were also observed in the CPIC model when $\lambda_0$ reached 3.4. In these cases, the traffic demand has reached the capacity of the intersection under the corresponding control strategies. \par
For this intersection layout, we can briefly summarize the performance of the different intersection control models under various demand levels. The vehicle-based traffic control performs well under low demand, but the capacity of these models is relatively low, i.e., the models cannot accommodate large demands well. On the contrary, the phase-based traffic control shows higher delay under low traffic volume scenarios, but in high demand cases ($\lambda_0 \geq 3.3$ in this scenario), it becomes the only method that can stabilize the intersection queues. \par
In the imbalanced distribution pattern (scenario \textbf{1-B}), we assumed that the traffic demand from the east and the south is higher than average. Fig. \ref{Fig8}(b) illustrates the distribution: $r_i=1/21$ for $i \in \{10,11,12,14,15,16\}$, $r_i=2/21$ for $i \in \{2,3,4\}$ and $r_i=1/7$ for $i \in \{6,7,8\}$. The simulation results presented in Fig. \ref{Fig9}(b) lead to conclusions similar to those in scenario \textbf{1-A}. Both vehicle-based traffic control models provide high-quality and stable intersection management under low traffic demand ($\lambda_0 \leq 1.9$ for FCFS and $\lambda_0 \leq 3.0$ for the CPIC strategy), but in high-demand scenarios, only the delay of the PPC strategy remains acceptable. \par
\subsection{4-leg intersections with secondary roads} \label{simulation_Sce2_subsec}
We then examine the performance of the control models under a smaller intersection where a 6-lane main road intersects with a 4-lane secondary road. Both the balanced (scenario \textbf{2-A}) and imbalanced (scenario \textbf{2-B}) traffic patterns are included. The intersection layouts and the demand patterns are presented in Figs.\ref{Fig10}(a) and \ref{Fig10}(b), respectively. This type of intersection is prevalent in urban road networks, particularly on arterial roads. \par
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.35\textwidth]{Figures/Fig10a.png}}\\
\subfloat[][]{\includegraphics[width=0.35\textwidth]{Figures/Fig10b.png}}
\caption[]{Intersection layouts and demand distribution under the scenarios \textbf{2-A}(a) and \textbf{2-B}(b)}
\label{Fig10}
\end{figure}
The simulation results under the balanced and imbalanced traffic patterns are presented in Figs.\ref{Fig11}(a) and \ref{Fig11}(b), respectively. The basic trends of the average delay given by the three control models in this intersection layout do not differ much from those observed in scenarios \textbf{1-A} and \textbf{1-B}. It should be noted that in scenario \textbf{2-A}, a relatively high demand of $\lambda_0=1.8$ caused a significant deviation among the 10 simulations under the FCFS strategy. The results indicate that the performance of the FCFS strategy at busy intersections is unreliable, with potential risk of intersection failure. \par
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph03.png}}\\
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph04.png}}
\caption[]{Simulation results of scenarios \textbf{2-A}(a) and \textbf{2-B}(b)}
\label{Fig11}
\end{figure}
\subsection{T-type junctions} \label{simulation_Sce3_subsec}
A series of simulations was also conducted to examine T-type junctions, another important category of intersections. In the junction connecting one main road (from the west and east) and a secondary road (from the south), traffic from the main road is dominant. The intersection layouts and the demand distributions, including a balanced demand pattern and an imbalanced one, are presented in Figs.\ref{Fig12}(a) and \ref{Fig12}(b). \par
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.35\textwidth]{Figures/Fig12a.png}}\\
\subfloat[][]{\includegraphics[width=0.35\textwidth]{Figures/Fig12b.png}}
\caption[]{Layouts and demand distribution of the T-type junction}
\label{Fig12}
\end{figure}
As shown in Figs.\ref{Fig13}(a) and \ref{Fig13}(b), the simulation results under the T-type junction are quite different from those obtained for the 4-leg intersections. The planning-based CPIC strategy shows the best performance at all demand levels. This result may be due to the asymmetrical structure of the junction, which significantly affects the performance of the PPC strategy. In the T-type junction, two out of three phases are dedicated to left-turn movements from one specific direction, providing limited passing permissions for vehicles to go through the junction. Therefore, the capacity of the PPC strategy declines sharply in the simulations. On the contrary, the CPIC strategy benefits from the simplification of the conflict relations. As illustrated in Fig.~\ref{Fig14}, the number of conflict points (shown as circle dots) is 40 in the 4-leg intersection and 9 in the T-type junction. Consider a scenario with 12 vehicles evenly distributed; the ratio of vehicles that arrive from each lane is shown in Fig. \ref{Fig8} for 4-leg intersections and Fig. \ref{Fig12} for T-type junctions. We can then calculate the total number of conflicting vehicle pairs that must be handled simultaneously in the CPIC strategy, which is 40 in the 4-leg intersection and 24 in the T-type junction. Consequently, the number of 0-1 variables in the T-type junction is 48 in this case, which is 32 less than that in the 4-leg intersection. This notably lessens the computational burden of the optimization problem in T-type junctions and makes it possible to adopt a larger optimization horizon and acquire better solutions in scenarios with high traffic density. Hence, we observed the advantageous performance of the CPIC strategy compared to that of the PPC strategy in the T-type junction. \par
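The variable-count argument can be made explicit with a line of arithmetic. Assuming two 0-1 ordering variables per conflicting vehicle pair (our reading of the CPIC formulation, consistent with the counts quoted above), the pair counts of 40 and 24 reproduce the stated numbers:

```python
def num_binary_vars(conflicting_pairs, vars_per_pair=2):
    # Assumption: the CPIC mixed-integer program introduces two 0-1
    # ordering variables for each pair of conflicting vehicles.
    return vars_per_pair * conflicting_pairs

four_leg = num_binary_vars(40)   # 12 evenly distributed vehicles, 4-leg
t_type = num_binary_vars(24)     # same demand, T-type junction
```

This gives 80 versus 48 binary variables, i.e., 32 fewer in the T-type junction.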
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph05.png}}\\
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph06.png}}
\caption[]{Simulation results of scenarios \textbf{3-A}(a) and \textbf{3-B}(b)}
\label{Fig13}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.4\textwidth]{Figures/Fig14a.png}}\\
\subfloat[][]{\includegraphics[width=0.4\textwidth]{Figures/Fig14b.png}}
\caption[]{Conflict points in 4-leg intersections (a) and T-type junctions (b)}
\label{Fig14}
\end{figure}
\subsection{Symmetric 4-leg intersection under fluctuating arrival rates} \label{simulation_Sce4_subsec}
In the following scenarios, we examine the impact of fluctuating arrivals at a symmetric 4-leg intersection. The intersection layouts and the demand distributions are the same as in scenarios \textbf{1-A} and \textbf{1-B}, shown in Figs.\ref{Fig8}(a) and \ref{Fig8}(b), respectively. The difference lies in the traffic generation procedure. In scenarios \textbf{4-A} and \textbf{4-B}, the vehicle generation rate alternates every 2 minutes between $0.5\lambda_0$ and $1.5\lambda_0$. For instance, for the demand volume $\lambda_0=2.0$, the average vehicle arrival rate is set to 1.0 vehicle per second in the first two minutes of the simulation and 3.0 vehicles per second during the next two minutes. The purpose of these experiments is to study the ability of the control models to deal with temporary queues. In traditional traffic scenarios, joining a growing queue generally leads to an additional queuing delay. Considering the notable start-up loss in manual driving, it takes even more time for a queue to dissipate, giving rise to a degradation in intersection efficiency. However, due to the shorter reaction time of CAVs, the fluctuating arrival process is expected to have a weaker impact on autonomous driving intersections. \par
\begin{figure}[!t]
\centering
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph07.png}}\\
\subfloat[][]{\includegraphics[width=0.45\textwidth]{Figures/Graph08.png}}
\caption[]{Simulation results of scenarios \textbf{4-A}(a) and \textbf{4-B}(b)}
\label{Fig15}
\end{figure}
From the simulation results presented in Figs.\ref{Fig15}(a) and \ref{Fig15}(b), it can be observed that the fluctuations in the traffic arrivals do not have much influence on the efficiency and capacity of these control models. As was revealed in scenarios \textbf{1-A} and \textbf{1-B}, the PPC strategy can handle higher traffic demand. On the other hand, FCFS and CPIC strategies are quite effective under low traffic demand, but their delay increases rapidly as the arrival rate $\lambda_0$ approaches their capacities.
\subsection{Intersection performance under different safety gaps} \label{simulation_Sce5_subsec}
Finally, we examine the intersection performance under different safety gap settings to reflect the impact of technological maturity of autonomous driving. In previous simulations, the minimum allowable gap between vehicles is 1.0 m. However, this setting requires a relatively high autonomous driving technology level, which is unlikely to be achieved in the near future. Therefore, to examine the impact of immature autonomous driving technologies, we conducted simulations with different safety gap settings under the same intersection layout and demand pattern of scenario \textbf{1-A}, and then compared the performance characteristics of the three control models. \par
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Graph09.png}
\caption{Simulation results under the safety gap of 4.0 meters}
\label{Fig16}
\end{figure}
Fig. \ref{Fig16} illustrates the simulation results when the minimum allowable safety gap is set to 4.0 m. The change affects the headway of two consecutive vehicles from the same direction; furthermore, the minimum time gap between two conflicting vehicles increases significantly, especially at conflict points with a small angle $\theta$ (see Fig. \ref{Fig6}). As shown in Fig. \ref{Fig16}, the performance comparisons of the three control models follow patterns similar to those in the other scenarios: vehicle-based traffic control is dominant in the low-demand cases, while phase-based traffic control shows better performance at busy intersections. However, compared to the results of scenario \textbf{1-A} (shown in Fig. \ref{Fig9}(a)), the efficiency is reduced for all three control models. The delay increases at all demand levels, and excessive queuing time is observed under lower demand. For the FCFS strategy, the average delay begins to rise rapidly when $\lambda_0$ exceeds 1.4. The same holds for the CPIC strategy (when $\lambda_0$ exceeds 1.8) and the PPC strategy (when $\lambda_0$ exceeds 3.5). Among the three control models, the PPC strategy is affected the least by the increased safety gap. The critical traffic demand level $\lambda_0$ at which the PPC strategy outperforms the CPIC strategy is 1.7, which is much smaller than the value in scenario \textbf{1-A} (3.3). \par
The results of the simulations conducted under a much higher safety gap setting are shown in Fig. \ref{Fig17}. When the minimum allowable safety gap is 8.0 m, the advantages of the PPC strategy are more distinct. The average delay of the vehicle-based traffic control exceeds the delay of the PPC strategy when $\lambda_0=1.0$, which is a quite low traffic demand level, indicating that the PPC may be much more suitable when autonomous driving technologies are not sufficiently advanced.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Graph10.png}
\caption{Simulation results under the safety gap of 8.0 meters}
\label{Fig17}
\end{figure}
\section{Conclusions} \label{concluding_sec}
To supplement the existing studies on isolated intersection control in the CAV era, this paper compared the performance of two intersection control philosophies, i.e., phase-based and vehicle-based traffic control, through a series of numerical simulations. Specifically, we implemented three intersection control strategies: an ad-hoc negotiation based control~\cite{Dresner2008}, a rolling-horizon planning-based control (CPIC)~\cite{Levin2017Confilct} and a pretimed phase-based control. For a fair comparison of the three control strategies, all of the environmental factors, including the safety gap, the coordinating area length and the speed limits, were set to be the same to ensure that all three control strategies benefited equally from autonomous driving technologies. The comparisons were conducted under multiple intersection layouts, including symmetric and asymmetric 4-leg intersections as well as a T-type junction. In each layout, we simulated various traffic demand levels from 360 to 14,400 vehicles per hour and distributed the demand in both a balanced manner and an imbalanced manner. Furthermore, we compared the intersection performance under different settings, such as fluctuating vehicle arrival sequences and larger safety gaps. The simulation results lead to some interesting conclusions. In the 4-leg intersection scenarios, the vehicle-based traffic control strategies (FCFS and CPIC) show negligible delay when the traffic demand is low. As the demand level increases, their delay increases rapidly, making the phase-based traffic control the optimal approach. Nevertheless, in traffic scenarios with fewer conflicting vehicles (e.g., in a T-type junction with most vehicles driving on the main road), the vehicle-based methods show significantly improved performance.
We also observe that when autonomous driving technologies are immature so that the CAVs are forced to maintain a larger headway, the advantages of phase-based traffic control over vehicle-based traffic control are much more distinct. \par
This paper discussed several representative intersection control strategies in the autonomous driving era, and our comparisons cannot cover all possible control strategies; therefore, to acquire more reliable results, deriving the theoretical performance of intersection control under general settings is a topic worth investigating. On the other hand, the required safety gap is assumed to be fixed in this paper, which is not always reasonable, and the effects of a flexible gap should be examined in future studies. Finally, since some existing studies (e.g.,~\cite{Patel2019}) have revealed that the selection of control strategies at different intersections may affect the efficiency of the traffic network, it is necessary in the future to generalize the comparisons to the network level in order to obtain a comprehensive understanding of the performance of network traffic control. \par
\section*{Acknowledgment}
The research is supported in part by Tsinghua-Daimler Joint Research Center for Sustainable Transportation and Tsinghua University-Toyota Research Center.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
At the end of the dark ages, with the formation of the first
generation of stars and/or quasars, the neutral hydrogen in the
universe began to reionize. The redshifted 21 cm signal is one of the
most important signatures for detecting the epoch of reionization
(EoR), which has been a new frontier of astrophysics in recent
years. The 21 cm line emission/absorption from the EoR has been
redshifted to meter wave band. Several facilities that cover the meter
wave band have been or will be built to explore EoR through the
redshifted 21 cm signal, such as the 21 Centimeter Array
(21CMA\footnote{http://21cma.bao.ac.cn}), the Low Frequency Array
(LOFAR\footnote{http://www.lofar.org}), the Murchison Widefield Array
(MWA\footnote{http://www.mwatelescope.org}), and the Square Kilometre Array
(SKA\footnote{http://www.skatelescope.org}). The predicted brightness
temperature of the EoR 21 cm signal is a few $10^{-2}$ K
\citep[e.g.,][]{2008ApJ...676....1B, 2008PhRvD..78j3511P,
2004MNRAS.347..187F}, while that of the foreground emission from the
Milky Way and extragalactic radio sources reaches $10^2$ K and even
higher \citep[e.g.,][]{2008MNRAS.391..383G, 2006ApJ...650..529W},
i.e., brighter than the EoR 21 cm signal by four orders of
magnitude. In order to detect the EoR 21 cm signal, the biggest
challenge that one must overcome is how to subtract the strong
foreground emission.
A dozen foreground subtraction methods have been proposed. A
typical one is the polynomial fitting method \citep[e.g.,
][]{2006ApJ...650..529W}, the principle of which is rather
straightforward. The spectrum of the total signal is fitted with a low
order (e.g., second or third) polynomial in logarithmic space, and the
residual is regarded to be the sum of the 21 cm signal and the
instrumental noise. This kind of method can be applied either in $uv$
space or real space. Derivative methods have also been proposed, for
example, \cite{2013ApJ...763...90W}. Another important class of
foreground subtraction methods is the non-parametric method
\citep[e.g.,][] {2009MNRAS.397.1138H,2013MNRAS.429..165C}. Unlike the
fitting based method, the non-parametric method does not assume the
detailed form (usually a polynomial-like form in logarithmic space) of
the foreground spectrum.
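The polynomial fitting approach described above can be illustrated with a minimal Python sketch. This is a toy example rather than the pipeline of the cited works: the foreground model, band, and fit order here are placeholders.

```python
import numpy as np

def polyfit_subtract(freq_mhz, total_spec, order=3):
    """Fit the total spectrum with a low-order polynomial in
    logarithmic space and return (residual, smooth component).
    The residual approximates the 21 cm signal plus noise."""
    logf = np.log10(freq_mhz)
    logt = np.log10(total_spec)          # total spectrum must be positive
    coeff = np.polyfit(logf, logt, order)
    smooth = 10.0 ** np.polyval(coeff, logf)
    return total_spec - smooth, smooth

# Toy test: power-law foreground plus a small oscillatory "signal".
freq = np.linspace(150.0, 170.0, 500)          # MHz
fg = 300.0 * (freq / 150.0) ** -2.6            # smooth foreground, K
sig = 0.02 * np.sin(2.0 * np.pi * freq)        # ~1 MHz structures, K
residual, smooth = polyfit_subtract(freq, fg + sig)
```

Because the power-law foreground is exactly polynomial in log-log space while the small-scale structure is not, the residual closely recovers the oscillatory component in this toy setup.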
Both kinds of methods assume that the foreground spectrum is smooth in
frequency space, or more generally, the emissions of different
frequencies are strongly correlated, while the EoR 21 cm signal is
full of saw-tooth-like structures. In other words, the characteristic
scales of the foreground and EoR signals are different in their radio
spectrum space. The power spectral density of the foreground signal is
mainly contributed by large scale components in radio frequency space,
while that of the EoR signal is mainly contributed by small scale
components \citep[e.g., $\lesssim1$ MHz at $z=8$, corresponding to
$\lesssim8$ comoving Mpc;][]{2007ApJ...669..663M}. This difference can
be used to distinguish between them. For general one-dimensional signals
like time series and spectra, mature mathematical tools have been
developed, and the wavelet transform is one of them.
In this work, we study the feasibility of using the continuous wavelet
transform \citep[CWT, e.g., ][]{1992tlw..conf.....D} to subtract the
strong foreground emission in radio spectrum space. We describe the
simulation of the foreground, 21 cm, and thermal noise signals in
\S\ref{sec:simu}, give a brief introduction to the CWT that we use in
\S\ref{sec:wt}, test the foreground subtraction with simulated signals
in \S\ref{sec:sub}, discuss our results in \S\ref{sec:discussion}, and
conclude our work in \S\ref{sec:conclusion}. Throughout the paper, we
adopt $H_0=100h$ km s$^{-1}$ Mpc$^{-1}$, where $h=0.71$, $\Omega_{\rm
M}=0.27$, $\Omega_\Lambda=0.73$, and $\Omega_b=0.044$ \citep[e.g.,
][]{2003ApJS..148..175S}.
\section{Simulation of the Low-frequency Radio Spectra}
\label{sec:simu}
\subsection{Redshifted 21 cm Signal from the Epoch of Reionization}
\label{ssec:21cm}
According to \cite{2006ApJ...650..529W} and references therein, we
simulate the redshifted 21 cm signal from the EoR
based on the theoretical three-dimensional power spectrum, which is
represented as
\begin{gather}
P_{\rm 3D, 21 cm}(k,z)=(16{\rm~mK})^2\frac{1}{h^2}\left (\frac{\Omega_{\rm b}h^2}{0.02}\right)^2\frac{1+z}{10}\frac{0.3}{\Omega_{\rm M}}\notag\\
\times \left \{[1-x_{\rm e}^2(z)]^2+b^2(z)e^{-k^2R^2(z)}x_{\rm
e}^2(z)\right\}P_{\rm 3D, matter}(k,z),
\end{gather}
where $P_{\rm 3D, matter}$ is the three-dimensional matter power spectrum
at redshift $z$, $x_{\rm e}(z)$ is the average ionization fraction,
$b(z)$ is the mean halo bias, and $R(z)=100[1-x_{\rm e}(z)]^{-1/3}$
kpc is the mean radius of the ionized patches in H{\sc II} regions. We
use the $b(z)$ and $x_{\rm e}(z)$ values calculated from the fiducial
reionization model of
\cite{2003ApJ...598..756S,2005ApJ...625..575S}. In this work, we
consider a redshift interval $\Delta z\ll 1$, so that the power
spectrum can be regarded as uniform. In the narrow redshift range, we
are able to calculate the one-dimensional power spectrum by the
formula\footnote{The $n$-dimensional Fourier transform convention that
we use here is
\begin{gather}
F_{k}(\mathbf{k})=\left(\frac{1}{L}\right)^n\int F(\mathbf{x})\exp(i\mathbf{k}\cdot\mathbf{x})d^nx\notag\\
F(\mathbf{x})=\left(\frac{2\pi}{L}\right)^n\int F_{k}(\mathbf{k})\exp(-i\mathbf{k}\cdot\mathbf{x})d^nk\notag
\end{gather}}
\begin{gather}
P_{\rm 1D, 21 cm}(k,z)=\frac{1}{2\pi}\int_{k}^{\infty}P_{\rm 3D, 21 cm}(k^\prime,z)k^\prime d k^\prime,
\end{gather}
according to \cite{1999coph.book.....P}. The line-of-sight
distribution of the redshifted 21 cm line brightness temperature,
i.e., the radio spectrum in an interval $(\nu_0-\Delta\nu/2,\nu_0+\Delta\nu/2)$
(corresponding to a redshift interval $(z_0-\Delta z/2,z_0+\Delta z/2)$, where
$z=1420.4~{\rm MHz}/\nu-1$) can be represented as the sum of a series of
Fourier bases as
\begin{gather}
T_{\rm 21 cm}(x_n)=\sum_{q=0}^{N-1}\left[ A_q\cos\left(\frac{2\pi
q}{L}x_n\right)+B_q\sin\left(\frac{2\pi
q}{L}x_n\right)\right],
\end{gather}
where $L$ is the comoving space scale corresponding to $\Delta z$,
$A_q$ and $B_q$ are random variables that independently follow an
identical normal distribution $\mathcal{N}(0,\sqrt{2P_{\rm 1D, 21
cm}(k,z_0)/L})$, $k=2\pi q/L$ is the wave number, $x_n$ ($n=1,
2,\cdots,N$) is the line-of-sight location related to frequency
$\nu_n$ under linear approximation, and $N$ is the number of frequency
channels involved. $N$ is chosen to be large enough so that most power
of the power spectrum $P_{\rm 1D, 21 cm}(k,z)$ is enclosed within the
wave number interval $2\pi/L\leq k\leq 2\pi N/L$. In this work, we
take $z_0=8$ as an example, and let $\Delta\nu=20$ MHz, $N=500$, so
that the frequency resolution is $d\nu=\Delta\nu/N=40$ kHz, which is a
realistic value that can be achieved by most existing and upcoming EoR
probing facilities.
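The line-of-sight synthesis described above can be sketched as a direct transcription of the Fourier-sum prescription. The input power spectrum in this sketch is a flat placeholder rather than the simulated $P_{\rm 1D, 21 cm}$:

```python
import numpy as np

def synthesize_los(p1d, L, N, seed=0):
    """Draw one Gaussian realization T(x_n) from a 1D power spectrum:
    T(x) = sum_q A_q cos(2 pi q x / L) + B_q sin(2 pi q x / L),
    with A_q, B_q ~ N(0, sqrt(2 P_1D(k_q) / L)) and k_q = 2 pi q / L.
    `p1d` is a callable returning the power at wave number k."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, L, N, endpoint=False)
    T = np.zeros(N)
    for q in range(N):
        k = 2.0 * np.pi * q / L
        sigma = np.sqrt(2.0 * p1d(k) / L)
        A, B = rng.normal(0.0, sigma, size=2)
        T += A * np.cos(k * x) + B * np.sin(k * x)
    return x, T
```

As a sanity check, for a flat spectrum $P=1$ with $L=100$ and $N=500$ the expected variance of $T$ is $\sum_q 2P/L = 2NP/L = 10$, which a single realization reproduces to within sample scatter.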
\subsection{Foreground Emission}
\label{ssec:fg}
Compared with relatively high frequency radio observations ($\ge 1.4$
GHz), low frequency radio observations are far from mature, and the
detailed properties of the radio emission from radio sources are still
unclear. It is therefore not appropriate to impose overly strong
assumptions on the foreground emission. Nevertheless, the smoothness of the foreground
radio spectra is widely accepted \citep[e.g., ][]{1999A&A...345..380S,
2011MNRAS.413.2103P}. We also adopt this assumption in the present
work. To study the performance of our proposed method, we test
two kinds of foregrounds (Fig. \ref{fig:fg}). One is based on high
frequency and relatively limited low frequency observations of radio
sources, and is described by a set of analytic formulas deduced from clear
physical mechanisms. The other foreground model is composed
purely mathematically and is used to test the tolerance of the
foreground subtraction method. The two kinds of foregrounds are
described in detail as follows.
For the first kind of foreground, we simulate spectra following our
previous work \citep[][and references therein]{2010ApJ...723..620W},
which includes the radio emission from the Milky Way, galaxy clusters,
and discrete extragalactic radio sources (i.e., star-forming galaxies,
radio-quiet AGNs, and radio-loud AGNs). We call it ``FG I'' in the
following sections.
For the second kind of foreground, we assume the spectrum to possess
two different power law indices ($\alpha_1=1$ and $\alpha_2=-2.7$) in
lower and higher frequency bands, respectively. The spectrum turnover
feature in this foreground model originates from the absorption in
many sources \citep[][]{2008MNRAS.390L..43G, 2004A&A...424...91E,
2003A&A...402..171T, 2000A&A...363..887D,
1999MNRAS.305..492D,2012ApJ...760...77A}. Several absorption
mechanisms have been proposed, and their detailed spectra are
different. Nevertheless, we just compose a pure mathematical model to
represent the general feature when absorption happens. Note that in
the simulation of FG I, absorption mechanisms have been considered for
different kinds of sources; we are only making the feature of
absorption more obvious to make the foreground more complex for the
test purpose. The slope changes smoothly around the turnover frequency
$\nu_t=157$ MHz ($\nu_t$ is randomly chosen here only for the test
purpose), and the sharpness of the turnover point is described by a
width parameter $\nu_w$. The spectrum is described by the following
equations:
\begin{gather}
T_{\rm b}(\nu)=T_{\rm b}(\nu_t)\left
(\frac{\nu}{\nu_t}\right)^{(1-w)\alpha_1+w\alpha_2}\label{equ:fg2}
\end{gather}
where
\begin{gather}
w=\frac{2\arctan[(\nu-k\nu_t)/\nu_w]/\pi+1}{2},\\
k=1-\frac{\nu_w}{\nu_t}\tan\frac{\pi(\alpha_1+\alpha_2)}{2(\alpha_1-\alpha_2)},
\end{gather}
and $T_{\rm b}(\nu_t)$ is the brightness temperature at the turnover
frequency. The above equation ensures that the spectrum reaches a peak
at the turnover frequency. We call it ``FG II'' in the following sections.
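Equation (\ref{equ:fg2}) and the accompanying definitions of $w$ and $k$ can be implemented directly. In this sketch, $\nu_t=157$ MHz and the indices $\alpha_1=1$, $\alpha_2=-2.7$ come from the text, while the peak temperature and the width $\nu_w$ are illustrative values of our own choosing:

```python
import numpy as np

def fg2_spectrum(nu, t_peak=100.0, nu_t=157.0, nu_w=10.0,
                 alpha1=1.0, alpha2=-2.7):
    """FG II: smoothly broken power law, rising as nu^alpha1 below the
    turnover nu_t and falling as nu^alpha2 above it.  The arctan weight
    w switches between the two slopes over a width nu_w; the shift k is
    chosen so that the logarithmic slope vanishes exactly at nu_t."""
    k = 1.0 - (nu_w / nu_t) * np.tan(
        np.pi * (alpha1 + alpha2) / (2.0 * (alpha1 - alpha2)))
    w = (2.0 * np.arctan((nu - k * nu_t) / nu_w) / np.pi + 1.0) / 2.0
    return t_peak * (nu / nu_t) ** ((1.0 - w) * alpha1 + w * alpha2)
```

A quick numerical check confirms the claim above: at $\nu=\nu_t$ the effective exponent $(1-w)\alpha_1+w\alpha_2$ vanishes, so the spectrum peaks at the turnover frequency.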
\subsection{Instrumental Noise}
\label{ssec:noise}
Instrumental noise is crucial for the detection of the EoR 21 cm
signal. When the instrumental noise is significantly lower than the 21
cm signal, the detailed form of the 21 cm signal can be recovered;
otherwise, only statistical information can be obtained. We test the
method with two different noise levels, one higher than the 21 cm
signal and one lower. We use 60 mK as the higher noise
level like we have used in our previous work
\citep{2013ApJ...763...90W}, which is based on the parameters of the
21CMA. Other working facilities such as LOFAR can also reach this
noise level. For example, according to the parameters used in
\cite{2010MNRAS.405.2492H} and \cite{2006ApJ...650..529W}, when the
core area of LOFAR is configured with a channel bandwidth of several
$10^1$ kHz, and the observing time reaches month level, its noise
level will reach several $10^1$ mK. The lower noise level is taken to
be $6$ mK, which is calculated according to the future SKA core region
parameters as follows.
According to \cite{2001isra.book.....T}, the brightness temperature
measurement error of an interferometer is calculated as
\begin{gather}
\Delta T_{\rm b}=\frac{\lambda^2 T_{\rm sys}}{A_{\rm e}}\frac{1}{\Omega_{\rm beam}\sqrt{d\nu\tau n(n-1)}}\notag\\
\approx\frac{\lambda^2T_{\rm sys}}{nA_{\rm e}}\frac{1}{\Omega_{\rm beam}\sqrt{d\nu\tau}},\label{equ:noise}
\end{gather}
where $\lambda$ is the wavelength, $T_{\rm sys}$ is the system
temperature, $A_{\rm e}$ is the effective area of one antenna,
$\Omega_{\rm beam}$ is the solid angle of the synthesized beam, $d\nu$
is the channel bandwidth, $\tau$ is the observing time, and $n$ is the
number of antennas. According to the design of SKA, there will be
$\eta=30\%$ of the total collecting area ($n_{\rm total}A_{\rm e}$)
within the central 1--2 km. Here, we assume the baseline length to be
$L=1.5$ km, so that $\Omega_{\rm
beam}\approx\frac{\pi}{4}\frac{\lambda^2}{L^2}=1.40\times10^{-6}$
sr. We rewrite Equation (\ref{equ:noise}) as
\begin{gather}
\Delta T_{\rm b}\approx\frac{\lambda^2T_{\rm sys}}{\eta n_{\rm
total}A_{\rm e}}\frac{1}{\Omega_{\rm beam}\sqrt{d\nu\tau}}.
\end{gather}
For the SKA, the parameter $(n_{\rm total}A_{\rm e})/T_{\rm sys}=5000$
m$^2$ K$^{-1}$. Finally, we have
\begin{gather}
\Delta T_{\rm b}\approx\frac{4L^2}{\pi}\frac{T_{\rm sys}}{n_{\rm total}A_{\rm e}}\frac{1}{\eta\sqrt{d\nu\tau}}\notag\\
=6~{\rm mK}\left (\frac{40~{\rm
kHz}}{d\nu}\right)^{1/2}\left(\frac{30~{\rm
days}}{\tau}\right)^{1/2}.
\end{gather}
Given a channel bandwidth of $40$ kHz and a total observing time of 1
month, the brightness temperature noise level can reach a few mK. In
the following analysis, we use $6$ mK as the lower noise level.
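The noise-level arithmetic above is easy to verify; the following minimal sketch (with our own variable names) plugs in only the numbers quoted in the text: $L=1.5$ km, $\eta=30\%$, $(n_{\rm total}A_{\rm e})/T_{\rm sys}=5000$ m$^2$ K$^{-1}$, $d\nu=40$ kHz, and $\tau=30$ days.

```python
import math

# Check of the SKA core-region noise estimate; values quoted from the text.
L = 1.5e3             # assumed baseline length [m]
eta = 0.30            # fraction of collecting area within the core
sens = 5000.0         # (n_total * A_e) / T_sys [m^2 / K]
dnu = 40e3            # channel bandwidth [Hz]
tau = 30 * 86400.0    # observing time, 30 days [s]

# Delta T_b ~ (4 L^2 / pi) * T_sys / (n_total A_e) / (eta * sqrt(dnu * tau))
dTb = (4.0 * L**2 / math.pi) / sens / (eta * math.sqrt(dnu * tau))
print(f"{dTb * 1e3:.1f} mK")  # 5.9 mK, i.e., ~6 mK
```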
\section{Continuous Wavelet Transform}
\label{sec:wt}
We briefly introduce the continuous wavelet transform (CWT) in the
context of our problem, following the detailed review by
\cite{1992tlw..conf.....D}. The spectra of different kinds of sources
(i.e., continuous spectra mainly from synchrotron emission, and line
emission/absorption from neutral hydrogen in different epochs of the
universe) have different characteristics: continuous spectra are
smooth in radio frequency space, while line emission/absorption
spectra are full of saw-tooth-like structures. One can quantify this
difference and separate the two components in frequency space.
\subsection{From Short-time Fourier Transform to Continuous Wavelet
Transform}
For stationary signals, one possible way to quantify the above
difference is the Fourier transform. Continuous spectra are dominated
by low frequency components, while saw-tooth-like line
emission/absorption spectra contain more high frequency components,
so in principle the smooth spectra and the saw-tooth-like spectra can
be separated with a pair of low-pass and high-pass filters. However,
the assumption that the spectra are stationary signals does not
generally hold.
To handle non-stationary signals, the real space resolution should be
retained. The short-time Fourier transform (STFT) is a modification of
the traditional Fourier transform that partly retains the real space
resolution. The STFT of a real space signal $h(t)$ can be defined as
\begin{gather}
(\mathcal{F}h)(\tau,\omega)=\int_{-\infty}^{\infty}h(t)w(t-\tau)e^{-i\omega
t}dt,\label{equ:stft}
\end{gather}
where $w(t)$ is the window function, which should meet the
normalization requirement
\begin{gather}
\int_{-\infty}^{\infty}w(\tau)d\tau=1.
\end{gather}
As a commonly used window function, the Gaussian window function is
usually defined as
\begin{gather}
w_{g}(x)=\frac{1}{\sqrt{2\pi}s}e^{-\frac{x^2}{2s^2}},\label{equ:gaussian_window}
\end{gather}
where the parameter $s$ determines the real space resolution. By using
the Gaussian window, the STFT becomes
\begin{gather}
(\mathcal{F}h)(\tau,\omega)=\int_{-\infty}^{\infty}h(t)
\frac{1}{\sqrt{2\pi}s}e^{-\frac{(t-\tau)^2}{2s^2}} e^{-i\omega
t}dt.\label{equ:gaussian_stft}
\end{gather}
Although the STFT partly retains the real space resolution, that
resolution is not adaptively determined by the frequency $\omega$: a
single global parameter $s$, i.e., a fixed window width, is used for
all frequencies.
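A discrete sketch of the Gaussian-window STFT of Equation (\ref{equ:gaussian_stft}) makes the fixed resolution explicit: a single global $s$ is used for every $\omega$. The function name and test signal below are our own.

```python
import numpy as np

# Discrete Gaussian-window STFT with one global window width s.
def gaussian_stft(h, s, omegas, dt=1.0):
    t = np.arange(len(h)) * dt
    out = np.empty((len(h), len(omegas)), dtype=complex)
    for i, tau in enumerate(t):
        # Gaussian window centered on tau, normalized to unit integral
        w = np.exp(-(t - tau)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
        for j, om in enumerate(omegas):
            out[i, j] = np.sum(h * w * np.exp(-1j * om * t)) * dt
    return out

# A pure tone concentrates its power in its own frequency bin:
t = np.arange(256)
h = np.cos(2 * np.pi * t / 16)                 # angular frequency 2*pi/16
omegas = 2 * np.pi * np.arange(1, 9) / 16      # bins at multiples of 2*pi/16
S = np.abs(gaussian_stft(h, s=8.0, omegas=omegas))
print(np.argmax(S[128]))  # 0: the 2*pi/16 bin dominates at mid-signal
```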
The CWT can overcome the above difficulty. According to e.g.,
\cite{1998BAMS...79...61T}, the one-dimensional CWT is defined as
\begin{gather}
W_{x,\psi}(\tau,s)=\int_{-\infty}^{\infty}h(t)\frac{1}{\sqrt{|s|}}\psi^*(\frac{t-\tau}{s})dt,\label{equ:cwt}
\end{gather}
where $h(t)$ is the real space signal to be transformed, $\psi$ is
called the mother wavelet function, and $\tau$ and $s$ represent the
real space and scale indices of the wavelet coefficient
$W_{x,\psi}(\tau,s)$, respectively. The mother wavelet function
$\psi\in L^2(\mathbb{R})$ (i.e., square-integrable) should
meet the requirement
\begin{gather}
0<C_{\psi}\equiv\int_{-\infty}^{\infty}\frac{|\Psi(\omega)|^2}{|\omega|}d\omega<\infty,
\end{gather}
where $\Psi$ is the Fourier transform of $\psi$, and $C_{\psi}$ is
called the admissibility constant. According to Equation
(\ref{equ:cwt}), given a certain scale $s$, the wavelet transform is
actually the cross-correlation between $\psi_{s}(t)=\psi(t/s)/\sqrt{|s|}$ and the
real space signal $h(t)$; by the cross-correlation theorem, it can
therefore be calculated efficiently in Fourier space as
\begin{gather}
W_{x,\psi}(\tau,s)=\mathcal{F}^{-1}\{\Psi_s^*\cdot H\},\label{equ:cwt_fft}
\end{gather}
where $\Psi_s$ and $H$ are the Fourier transforms of $\psi_s$ and
$h$, respectively. In practice, the Fourier transform can be
calculated discretely with any fast Fourier transform package; in
this work, we implement the transform with the {\it FFTW3} package \citep{FFTW05}.
The most commonly used mother wavelet functions include the Morlet
\citep{1982Geop...47..203M}, Paul \citep[e.g.,
][]{1998BAMS...79...61T}, and DOG \citep[e.g., ][]{1998BAMS...79...61T}
wavelets. We prefer the Morlet mother wavelet function, which is defined as
\begin{gather}
\psi(x)=\pi^{-1/4}e^{2\pi if_0 x}e^{-x^2/2},
\end{gather}
where $f_0$ is the frequency parameter, for the following reasons. First,
according to the comparison among a variety of wavelet functions in
\cite{1998BAMS...79...61T}, the Morlet mother wavelet function obtains
better frequency resolution (not to be confused with the radio
frequency; here the term ``frequency'' represents the scale of the
signal component) at the cost of poorer real space resolution (in our
context, radio frequency resolution). In this work, we are more
interested in the separation between the smooth continuum emission and
the saw-tooth-like line emission according to their different
characteristic scales; in other words, the frequency resolution (i.e.,
the scale resolution) is relatively more important to us. Second, the
Morlet mother wavelet function is evidently the product of a Gaussian
window function and the Fourier base, so the Morlet wavelet transform
can be treated as an STFT with a frequency-dependent window width. In
other words, the Morlet wavelet transform can be introduced smoothly
by slightly modifying the Gaussian window STFT.
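The FFT evaluation of Equation (\ref{equ:cwt_fft}) with this Morlet wavelet can be sketched as follows. We use NumPy's FFT rather than FFTW3 for brevity; the closed-form Fourier transform of the scaled Morlet wavelet follows from a Gaussian integral, and all names are our own.

```python
import numpy as np

def morlet_fft(omega, s, f0=1.0):
    # Fourier transform of psi_s(t) = psi(t/s)/sqrt(s) for the Morlet
    # mother wavelet psi(x) = pi^(-1/4) exp(2 pi i f0 x) exp(-x^2/2).
    return (np.pi**-0.25) * np.sqrt(2 * np.pi * s) * \
        np.exp(-0.5 * (s * omega - 2 * np.pi * f0)**2)

def cwt_morlet(h, scales, f0=1.0):
    # W(tau, s) = F^{-1}{ Psi_s^*(omega) H(omega) } (cross-correlation theorem)
    H = np.fft.fft(h)
    omega = 2 * np.pi * np.fft.fftfreq(len(h))
    return np.array([np.fft.ifft(np.conj(morlet_fft(omega, s, f0)) * H)
                     for s in scales])

# A sinusoid of period 16 samples responds most strongly near scale f0*16:
h = np.cos(2 * np.pi * np.arange(512) / 16)
scales = np.arange(4, 33)
W = cwt_morlet(h, scales)
power = np.abs(W[:, 256])            # mid-signal, away from the boundaries
print(scales[np.argmax(power)])      # 16
```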
The $f_0$ parameter is chosen to be 1 channel$^{-1}$ (i.e., $25$
MHz$^{-1}$) in this work to match the radio frequency resolution. We
plot the Morlet mother wavelet function that we use here in Figure
\ref{fig:wavelet}. According to \cite{1992tlw..conf.....D}, the
inverse CWT is defined as
\begin{gather}
h(t)=\frac{2}{C_{\psi}}\int_0^\infty\left
[\int_{-\infty}^{\infty}W_{x,\psi}(\tau,s)\frac{1}{\sqrt{|s|}}\psi(\frac{t-\tau}{s})d\tau\right
]\frac{ds}{s^2}.\label{equ:icwt}
\end{gather}
Given a certain scale $s$, the inner part of the double integral is
actually a convolution between $W_{x,\psi}$ and $\psi_s$, so it can
also be calculated efficiently in Fourier space, just as in Equation
(\ref{equ:cwt_fft}), as
\begin{gather}
h(t)=\frac{2}{C_{\psi}}\int_0^{\infty}\mathcal{F}^{-1}\{\mathcal{F}\{W_{x,\psi}\}\cdot\Psi_{s}\}(t,s)\frac{ds}{s^2}.
\end{gather}
After the signal is filtered in the wavelet coefficient space, we will
use this equation to transform it back to real space.
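A numerical sketch of this reconstruction, with our own names and assumptions: the $ds/s^2$ integral is approximated by a Riemann sum over a log-spaced scale grid, and the constant $2/C_\psi$ is dropped, so the output is only proportional to the input.

```python
import numpy as np

def morlet_fft(omega, s, f0=1.0):
    # Fourier transform of the scaled Morlet wavelet psi_s(t) = psi(t/s)/sqrt(s)
    return (np.pi**-0.25) * np.sqrt(2 * np.pi * s) * \
        np.exp(-0.5 * (s * omega - 2 * np.pi * f0)**2)

def cwt(h, scales, f0=1.0):
    H = np.fft.fft(h)
    omega = 2 * np.pi * np.fft.fftfreq(len(h))
    return np.array([np.fft.ifft(np.conj(morlet_fft(omega, s, f0)) * H)
                     for s in scales])

def icwt(W, scales, f0=1.0):
    # Riemann sum of the ds/s^2 integral on a log-spaced grid; the overall
    # constant 2/C_psi is omitted, so the result is proportional to h.
    n = W.shape[1]
    omega = 2 * np.pi * np.fft.fftfreq(n)
    dlog = np.log(scales[1] / scales[0])       # uniform spacing in log(s)
    rec = np.zeros(n)
    for k, s in enumerate(scales):
        conv = np.fft.ifft(np.fft.fft(W[k]) * morlet_fft(omega, s, f0)).real
        rec += conv * dlog / s                 # ds/s^2 with ds = s * dlog
    return rec

rng = np.random.default_rng(0)
h = rng.standard_normal(1024)
h -= h.mean()                    # wavelets do not carry the DC component
scales = np.exp(np.linspace(np.log(2), np.log(256), 64))
rec = icwt(cwt(h, scales), scales)
print(np.corrcoef(h, rec)[0, 1] > 0.95)  # True: shape recovered up to a constant
```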
\subsection{Boundary Effects}
\label{ssec:boundary}
According to the definition of the CWT (Equation (\ref{equ:cwt})), the
input signal is assumed to be infinite in real space, which does not
hold in practice: when using the CWT to subtract the foreground in
radio frequency space, the radio bandwidth is not infinitely
broad. There are several methods to extend a finite signal to an
infinite one to meet the definition of the CWT, the most common being
zero filling, periodic extension, and symmetric extension. We have
tested all three extension methods and find no significant difference
among them. We provide a further discussion of handling the boundary
effect in \S\ref{ssec:extension}. Although general extension methods
introduce a discontinuity and may contaminate the transformed signal,
this does not significantly affect the final results, for the
following reason. Almost all working and upcoming facilities that aim
to detect the 21 cm signals from the EoR have a much broader bandwidth
than is required in most conditions. For example, when calculating the
one-dimensional H{\sc I} power spectrum of a certain redshift from the
radio spectra, the adopted bandwidth is usually limited to several MHz
to ensure the uniformity of the power spectrum; we can therefore
perform the subtraction over a larger bandwidth than needed, and only
use the bandwidth section that is less affected by the boundary
effects. So, we simply use the periodic extension method to handle the
finite signal bandwidth.
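The three extension methods mentioned above can be sketched as follows (a toy illustration with our own names; each pads the series by one period on both sides).

```python
import numpy as np

# Common extensions of a finite series: zeros, periodic, and symmetric.
def extend(h, mode):
    if mode == "zeros":
        pad = np.zeros_like(h)
        return np.concatenate([pad, h, pad])
    if mode == "periodic":
        return np.concatenate([h, h, h])
    if mode == "symmetric":
        return np.concatenate([h[::-1], h, h[::-1]])
    raise ValueError(mode)

h = np.array([1.0, 2.0, 3.0])
print(extend(h, "periodic"))   # [1. 2. 3. 1. 2. 3. 1. 2. 3.]
print(extend(h, "symmetric"))  # [3. 2. 1. 1. 2. 3. 3. 2. 1.]
```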
\section{Subtraction of Foreground Signal}
\label{sec:sub}
The difference between the distributions of significant wavelet
coefficients of the foreground and 21 cm signals can be used to
distinguish between them. In the following sections, we first study the
characteristics of the wavelet coefficients of the foreground and 21 cm
signals, respectively. After that, we test the wavelet based method of
subtracting a strong foreground.
\subsection{Wavelet Coefficients of Different Kinds of Sources}
\label{ssec:coeff_diff}
We first study the characteristics of the wavelet coefficients of the
foregrounds simulated above (\S\ref{ssec:fg}). We show the
absolute values of the wavelet coefficients of FG I and the FG IIs with
$\nu_w=$5, 10, and 20 MHz in Figure \ref{fig:wt_fg}.\footnote{Note
that for presentation purposes all the wavelet coefficients shown in
figures are multiplied by $10^3$.} The most striking feature of
the coefficients of the four foregrounds is that the most significant
coefficients are contributed by the boundary effect of the data. In
our tests, it is hard to disentangle the boundary effect from the
contribution of the foreground signal itself, because both are more
prominent on large scales. However, this is not a serious problem,
since what we actually want to obtain is not the foreground signal
itself.
Then we study the behavior of the 21 cm signal (\S\ref{ssec:21cm})
under the CWT. We randomly choose one realization of the simulated 21
cm signal and calculate its wavelet coefficients. The result is shown
in Figure \ref{fig:wt_21}. Different from those of the foregrounds,
the coefficients of the 21 cm signal are much more prominent in
small-scale regions and much less affected by the boundary effects.
\subsection{Filtering Out the Foreground Signal}
\label{ssec:filtering}
As noted above, the distributions of the significant coefficients of
the foreground and 21 cm signals are different
(\S\ref{ssec:coeff_diff}), and we can utilize this characteristic to
filter out the foreground signals. Because the significant
coefficients of the smooth foregrounds are mainly contributed by the
discontinuity at the data boundary, the simplest way is to exclude the
regions affected by the boundary effect. To determine the regions to
be excluded, we calculate the wavelet coefficients of the function
\begin{gather}
T_{d}(\nu)=\delta_c(\nu-\nu_{\min}),\label{equ:dd}
\end{gather}
as shown in Figure \ref{fig:coi}a, where $\delta_c$ is the Dirac delta
function and $\nu_{\min}=147.8$ MHz is the lower limit of our test
band. Strictly speaking, the step function is more suitable for
representing the discontinuity near the boundary; however, in our
practical condition, we choose the Dirac delta function for its
localization property. In detail, because we use the periodic signal
extension method, it is impossible to compose a step function with
jumps at $\nu_{\rm min}$ and $\nu_{\rm max}$ only. On the other hand,
the Dirac delta function can be regarded as the derivative of the step
function, and according to the properties of the Fourier transform,
the difference between the Morlet wavelet transforms of the Dirac
delta and step functions is only a slowly varying factor, which has
only a minor impact on our results. Note that since we use a periodic
signal extension method, the discontinuity at the upper limit of the
signal is also reflected by Equation (\ref{equ:dd}). For any given
scale $s$, the absolute value of the wavelet coefficient peaks at
$\nu_{\min}$ and $\nu_{\max}$, i.e., at the boundaries of the data
series. The wavelet coefficients of $T_d(\nu)$ can therefore be used
to recognize the region that is significantly affected by the boundary
effect. We empirically define a threshold for each scale to be
$10^{-2}$ of the peak value; the regions where the absolute value of
the wavelet coefficient is above the threshold are marked to be
excluded. Because the absolute value of the coefficient decreases
exponentially as the distance from the boundary increases, the masked
region is relatively insensitive to the value of the threshold. With
this standard, we generate the mask for filtering out the foreground
(Fig. \ref{fig:coi}b).
\subsection{Results}
\label{ssec:results}
Multiplying the wavelet coefficients of the total signal
(Fig. \ref{fig:wt_total}) by the mask (Fig. \ref{fig:coi}b), we derive
the filtered coefficients shown in Figure
\ref{fig:wt_filtered}. Then, by using Equation (\ref{equ:icwt}), we
reconstruct the filtered 21 cm signal. To further avoid the boundary
effect, we exclude the signal with $\nu<\nu_{\rm min}+5$ MHz and
$\nu>\nu_{\rm max}-5$ MHz. Note that the bandwidth cut here is
chosen empirically, considering the trade-off between the available
bandwidth and the boundary effect. Although it seems that we have
wasted half of the total band, in real observations we can
move the subtraction band, i.e., $(\nu_{\rm min},\nu_{\rm max})$,
continuously within the total instrument band, so that most
of the frequency range can be used. Our result shows that a total
observation bandwidth of $20$ MHz enables us to detect the H{\sc I} 21 cm
line emission distribution over one redshift interval; a broader
bandwidth should enable us to study a wider redshift range.
Given the noise level calculated for the SKA core region (i.e.,
$\Delta T_{\rm b}=6$ mK in \S\ref{ssec:noise}), we test our method on
the simulated foregrounds and 21 cm signals. Provided that the noise
level is significantly lower than the EoR 21 cm signal, we can obtain
the spectrum, i.e., the distribution of H{\sc I} along the line-of-sight
of a certain sky region covered by a single beam. Typical
reconstructed results with the above four foregrounds are shown in Figure
\ref{fig:result}.
If the brightness temperature noise is comparable with the EoR 21 cm
signal, we can only obtain its statistical properties, such as the
power spectrum. The information that can be extracted from the
one-dimensional power spectrum is relatively limited compared with the
three-dimensional power spectrum, but we still test the
one-dimensional power spectrum here, for two reasons. First, in this
work we only simulate the 21 cm spectrum in each pixel, rather than a
three-dimensional data cube, so we cannot calculate the
three-dimensional power spectrum from our simulated data. Future
simulations using codes such as 21CMFAST
\citep{2011MNRAS.411..955M} may enable more complete tests, which will
be a part of our future work. Second, the calculation of the
one-dimensional power spectrum is relatively simple and less dependent
on instrument parameters. To calculate the three-dimensional power
spectrum, one must compose a data cube in real space and transform it
into Fourier space, during which the survey strategy, especially the
shape and area of the sky coverage, must be considered; the
one-dimensional power spectrum only requires measuring the radio
spectrum in each pixel of interest, without considering how these
pixels are distributed, so the results should be more
general. Assuming a noise level of $60$ mK
(\S\ref{ssec:noise}), we calculate the one-dimensional 21 cm power
spectrum by averaging the line-of-sight power spectra from 1000 beams,
and subtract the predicted instrumental noise power spectrum. We show
the results in Figure \ref{fig:ps}. We find that there is some power
leakage, which is especially severe at the small wave number end
($k<0.4h$ Mpc$^{-1}$). This is mainly caused by the filtering
strategy, and can be corrected as described in the following.
There are at least two methods to correct the power leakage, which is
especially severe at the small wave number end. Both methods work by
multiplying the produced power spectrum by a correction factor that is
a function of wave number $k$. In the first method, we feed a standard
signal, whose power spectrum is known in advance, into the foreground
subtraction program, calculate the power spectrum of the output
signal, compare it with that of the input signal, and thus obtain the
correction factor. This may be called the closed loop method. In the
second method, the correction factor is calculated as the ratio of the
total bandwidth of the input signal to the unmasked bandwidth at a
certain scale, i.e., the ratio of the total bandwidth to the width of
the white region in Figure \ref{fig:coi}b at each scale. According to
\cite{Kirby2005846}, the wavelet scale $s$ of the Morlet wavelet
transform has a Fourier wave number counterpart $2\pi f_0/s$, so the
correction factor can be converted to a function of the wave number
and applied to correct the power spectrum. This may be called the open
loop method. In principle, the closed loop method should be more
precise since it avoids converting the wavelet scale $s$ to the
Fourier wave number $k$, for which, according to \cite{Kirby2005846},
there is more than one conversion convention. Nevertheless, we have
tested both methods and find no significant difference between
them. We show the result corrected with the closed loop method in
Figure \ref{fig:ps_corr}; after the correction, the power leakage is
significantly reduced.
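The open loop factor can be sketched as follows, with a toy set of mask widths standing in for the real mask; the scale-to-wave-number conversion $k=2\pi f_0/s$ is the one quoted from \cite{Kirby2005846}, and everything else is our own illustration.

```python
import numpy as np

# Open loop correction sketch: at each scale, the factor is the ratio of the
# total bandwidth to the unmasked bandwidth, mapped to wave number through
# the Fourier counterpart k = 2*pi*f0/s of the Morlet scale s.
f0 = 1.0
n_channels = 512
scales = np.array([4.0, 8.0, 16.0, 32.0])
masked = np.minimum(2 * 3 * scales, n_channels).astype(int)  # toy: ~3s per edge
unmasked = n_channels - masked
correction = n_channels / unmasked           # grows with scale
k = 2 * np.pi * f0 / scales                  # larger scale -> smaller k
for ki, ci in zip(k, correction):
    print(f"k = {ki:.3f} /channel, correction = {ci:.2f}")
```

The leakage is strongest at small $k$ precisely because the correction factor is largest at the largest scales.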
\section{Discussion}
\label{sec:discussion}
\subsection{Comparison with the Polynomial Fitting Based Method}
\label{ssec:other_wt}
\cite{2006ApJ...650..529W} proposed a polynomial fitting based method
for foreground subtraction, which can be regarded as a
representative example of parametric methods. The basic idea is to fit
the total spectrum with an $n{\rm th}$-order polynomial in logarithmic
space,
\begin{gather}
\log T(\nu)=\sum_{m=0}^{n} a_m (\log \nu)^m,
\end{gather}
where $n$ is often chosen to be $2$ or $3$, and the residual is
regarded as the reconstructed EoR 21 cm signal. Although in our
previous work \citep{2013ApJ...763...90W} we tested both $n=2$
and $3$ and found that $n=2$ is sufficient for FG I, we use $n=3$ here
to subtract the more complex foregrounds. We present the reconstructed
EoR 21 cm signal with $\Delta T_{\rm b}=6$ mK in Figure
\ref{fig:result_fit} and the estimated power spectrum of the signal
with $\Delta T_{\rm b}=60$ mK in Figure \ref{fig:ps_fit}.
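A minimal sketch of this fit, on a synthetic power-law foreground plus a small fluctuating term (all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
nu = np.linspace(147.8, 167.8, 500)            # frequency grid [MHz]
fg = 300.0 * (nu / 150.0)**-2.6                # smooth toy foreground [K]
signal = 0.02 * rng.standard_normal(nu.size)   # small fluctuating term [K]
total = fg + signal

# Fit log T with an n-th order polynomial in log(nu); residual = "signal".
n = 3
coef = np.polyfit(np.log(nu), np.log(total), n)
residual = total - np.exp(np.polyval(coef, np.log(nu)))
print(residual.std() < 0.05)  # True: residual ~ signal, far below the foreground
```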
For FG I, we find that both methods work and derive consistent
results. For FG II, when the aim is to reconstruct the EoR 21 cm
signal in a single beam, we find that the performance of the
polynomial fitting based method of \cite{2006ApJ...650..529W} depends
strongly on $\nu_w$, while our method appears more
stable. When the aim is only to estimate the one-dimensional power
spectrum, the method of \cite{2006ApJ...650..529W} works poorly for
$\nu_w=5$ MHz, while for larger $\nu_w$ the estimated power spectrum is
less affected by the complexity of the foreground. For FG II with
$\nu_w=5$ MHz, the power spectral density at small $k$ is
significantly overestimated, which is clearly caused by
contamination from the foreground. On the other hand, as
pointed out in our previous work \citep{2013ApJ...763...90W}, a simple
polynomial fitting over a narrow band \citep[e.g., $2$ MHz
in][]{2006ApJ...650..529W} also leads to leakage of power
spectral density at the small wave number end.
To make a quantitative comparison, we estimate the root mean square
(rms) deviation between the input and reconstructed 21 cm signals,
defined as
\begin{gather}
Q\equiv\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left [T_{\rm 21cm}^\prime(\nu_i)-T_{\rm 21cm}(\nu_i)\right ]^2},
\end{gather}
where $T_{\rm 21cm}$ and $T^{\prime}_{\rm 21cm}$ are the input and
reconstructed 21 cm signals, respectively, and $N$ is the number of
frequency channels. A smaller $Q$ means a better
subtraction. Given the noise $\Delta T_{\rm b}=6$ mK, for different
$\nu_w$, we compare the rms deviations of the results obtained with the
method of \cite{2006ApJ...650..529W} and with ours, as shown in Table
\ref{tbl:comp} and Figure \ref{fig:comparison}. The errors of the
estimates of $Q$ are the standard deviations of 1000 Monte Carlo
simulations. The comparison is performed with
the noise considered (Fig. \ref{fig:comparison}a) and ignored
(Fig. \ref{fig:comparison}b), respectively, and the result is
insensitive to the presence of noise. We find that when $\nu_w>1$
MHz, the $Q$ of our method is rather stable and almost independent of
$\nu_w$, while the performance of the method of \cite{2006ApJ...650..529W}
is rather sensitive to $\nu_w$. When $\nu_w<20$ MHz, our method
works significantly better than that of \cite{2006ApJ...650..529W}, and
when $\nu_w>30$ MHz, the polynomial fitting based method becomes
better.
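The figure of merit $Q$ is straightforward to compute; a short sketch with our own names:

```python
import numpy as np

# rms deviation Q between reconstructed and input signals, as defined above.
def rms_deviation(recovered, true):
    recovered = np.asarray(recovered)
    true = np.asarray(true)
    return np.sqrt(np.mean((recovered - true)**2))

print(rms_deviation([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))  # sqrt(1/3) ~ 0.577
```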
\subsection{Comparison with Wp Smoothing Method}
\cite{2009MNRAS.397.1138H} suggested a method based on the Wp
smoothing algorithm \citep[originally described
by][]{machler1993very,machler1995variational}, which is also a
non-parametric method. We implement their method according to an
implementation note written by the author, i.e., {\it Implementation of
the ``Wp'' smoothing for EoR foreground fitting} (the {\it
Implementation note} hereafter). This method requires solving a
boundary value problem (BVP) with nonlinear terms; when implemented
numerically, it actually solves a multivariate nonlinear system of
equations. Most algorithms for solving such systems are based on
iteration, so they carry a risk of instability and may not reach the
optimal solution. We have tested the Hybrid and Broyden algorithms and
find that the solution is sensitive to the initial guess.
To analyze the behavior of the iteration for solving the above BVP, we
list Equations (8)-(15) of the {\it Implementation note}. The BVP
is described as follows:
\begin{gather}
h^\prime(x)=g(x)\\
g^\prime(x)=p_{\mathbf{w}}(x)e^{h(x)}\left [-\frac{1}{2\lambda}\sum_{i=1}^n(x-x_i)_+\psi(y_i-f(x_i))\right ]\label{equ:bvp2}\\
f^{\prime}(x)=k(x)\\
k^{\prime}(x)=p_{\mathbf{w}}(x) e^{h(x)},\label{equ:bvp4}
\end{gather}
and the boundary conditions
\begin{gather}
g(x_1)=0\\
g(x_n)=0\\
\sum_{i}\psi(y_i-f(x_i))=0 \label{equ:bc3}\\
\sum_{i}x_i\psi(y_i-f(x_i))=0,\label{equ:bc4}
\end{gather}
where $\lambda$ is the Lagrange multiplier, $y_i$ is the measured
brightness temperature at frequency $x_i$, $f(x_i)$ is the solution,
which represents the smooth foreground component, and $\psi(x):=x$ is
the identity function. As described in the {\it Implementation note},
there is no ``natural'' value for $\lambda$; in practice the authors
simply choose a reasonable-looking value, and we set $\lambda=1$ in
our implementation. The recovered 21 cm signal is then obtained as
$y_i-f(x_i)$. We find that if, during the iteration, the function
$h(x)$ takes negative values with relatively large absolute values,
the right hand sides of Equations (\ref{equ:bvp2}) and
(\ref{equ:bvp4}) vanish. Then $g^{\prime}(x)$ and $k^{\prime}(x)$
become zero, and the solution $f(x)$ degenerates to a first order
polynomial while $g(x)$ becomes zero. This $f(x)$ has two free
parameters, i.e., the slope and the intercept, which can be solved
from Equations (\ref{equ:bc3}) and (\ref{equ:bc4}). Obviously the
following functions,
\begin{gather}
h(x)=-C_1\label{equ:wrong_solution1}\\
k(x)=C_2\\
f(x)=C_2x+b\\
g(x)=0\label{equ:wrong_solution4},
\end{gather}
where $C_1$ is a large positive number, and $C_2$ and $b$ are two
constants that can be solved from the linear equation set (Equations
(\ref{equ:bc3}) and (\ref{equ:bc4})), form an approximate solution
to the above BVP, which however takes no information from the observed
spectrum $y_i$.
In order to test the above method, we make a small modification to
prevent the iteration from reaching an obviously wrong solution such
as Equations (\ref{equ:wrong_solution1})-(\ref{equ:wrong_solution4}):
we set a lower limit on the function $h(x)$. We test this method using
both FG I and FG II with $\nu_w=5$ MHz. For FG I, this method
obtains a result as good as that of the polynomial fitting based
method, and for FG II, the $Q$ is about 0.01 K
(Fig. \ref{fig:wpsmooth}), which is close to that of our method. The
comparison results are summarized in Table \ref{tbl:comp}. However, we
should point out that because the solution of the BVP relies on
iteration based methods, and the size of the nonlinear system of
equations is not less than the number of channels, the solution
process is rather time-consuming. As a rough comparison, we implement
this method using the {\it GNU Scientific Library}
\citep{galassi2005gnu} and run the program on a workstation with an
Intel Xeon 1.87 GHz CPU. It takes about 30 minutes to obtain the
result, while our wavelet based method takes about 60 seconds
to run 1000 rounds of subtraction. For this reason, we were not
able to calculate the errors of the estimates of $Q$ with the
Monte Carlo method.
\subsection{Risk of Instrumental Calibration Uncertainty}
All the above tests are based on the assumption that the instrument is
perfectly calibrated. However, calibration uncertainty always exists
to some degree, so it is valuable to consider this effect when testing
the foreground subtraction method. As a simple test, we consider a
relative calibration error of $10^{-3}$ between different frequency
channels. We assume the uncorrected relative gain at frequency $\nu$
to be described as
\begin{gather}
g(\nu)=1+10^{-3}\cos\left (2\pi\frac{\nu-\nu_{\min}}{10~{\rm MHz}} \right)\label{equ:gain},
\end{gather}
as shown in Figure \ref{fig:signal_gain}a. This is a rough model
composed only for test purposes; nevertheless, according to
\cite{2010MNRAS.409.1647J}, a polarization sensitive instrument that
is improperly calibrated may possess a calibration uncertainty that
appears as an oscillation structure along the radio frequency axis,
significant on scales of several MHz, just like our simple model
above. In this test, we only use FG I.
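Applying a toy gain of the form in Equation (\ref{equ:gain}) to a flat spectrum shows why a $10^{-3}$ error matters: on a foreground of a few hundred K it produces a ripple far above the $\sim$mK-level 21 cm signal. The spectrum below is invented for illustration.

```python
import numpy as np

nu_min, nu_max = 147.8, 167.8              # test band from the text [MHz]
nu = np.linspace(nu_min, nu_max, 500)
gain = 1 + 1e-3 * np.cos(2 * np.pi * (nu - nu_min) / 10.0)  # 10 MHz ripple
T_total = 250.0 * np.ones_like(nu)         # flat toy spectrum [K]
T_measured = gain * T_total
# A 1e-3 ripple on a ~250 K foreground is a ~0.25 K systematic:
print(f"{(T_measured - T_total).max() * 1e3:.0f} mK")  # 250 mK
```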
We apply the uncorrected instrumental calibration uncertainty to the
total signal (the sum of FG I, the 21 cm signal, and the instrumental
noise), and use this signal to test the subtraction methods. We show
the subtraction result of the wavelet based method in Figure
\ref{fig:calibration}a. As a comparison, we test the polynomial
fitting based method and show the result in Figure
\ref{fig:calibration}b. The result of the wavelet based method is
clearly only slightly affected by the instrumental calibration
uncertainty, while that of the polynomial fitting based method becomes
much worse. This can be explained by the fact that when the
instrumental calibration uncertainty is involved, the foreground
spectrum is no longer shaped like a low-order polynomial. Although we
could use a higher-order polynomial to approximate the
calibration-affected foreground, it would over-fit the background 21
cm signal. On the other hand, the wavelet based method does not place
such a strong assumption, i.e., a polynomial-like shape, on the
foreground spectrum, so the deviation of the spectrum from a
polynomial shape has no significant influence on the subtraction.
\subsection{Application to an Extremely Sharp Turnover Condition}
From the above discussions we have found that the wavelet based method
appears more tolerant of complex foreground conditions, and its
numerical stability and computing efficiency are much higher than
those of the Wp smoothing based method. In this section we test our
method in an extreme condition, i.e., FG II with $\nu_w=1$ MHz.
Using the filtering method described in \S\ref{ssec:filtering}, we
obtain the result shown in Figure
\ref{fig:result_1MHz}a. From the wavelet coefficients of the total
signal, shown in Figure \ref{fig:man_filtered}a, we find that the
sharp turnover of the foreground spectrum contributes significantly on
small scales, and these contributions are not filtered out by the
above filtering method. We manually draw a mask
(Fig. \ref{fig:man_filtered}b) to check whether the sharp turnover
feature can be filtered out. The filtered wavelet coefficients, shown
in Figure \ref{fig:man_filtered}c, are then transformed back to real
space. The recovered EoR 21 cm signal is shown in Figure
\ref{fig:result_1MHz}b; the result is significantly improved by the
manual filtering.
Although the above procedure is based on a subjective standard, it
shows that the wavelet based method can be further improved to handle
more complex conditions. In our future work, we will try to find a
more objective method to subtract the foreground in such extreme
conditions.
\subsection{Handling Higher and More Complex Noise}
We have tested two noise levels above: 6 mK for the future SKA core
region and 60 mK as a representative of current working facilities,
as in our previous work
\citep{2013ApJ...763...90W}. For the 6 mK noise level, we are able to
reconstruct the actual 21 cm signal in each image pixel, while for the
60 mK noise level, we are only able to obtain the power spectrum as
statistical information. As pointed out in our previous work
\citep{2013ApJ...763...90W}, the 60 mK noise level is calculated based
on the 21CMA instrument, whose field of view is fixed to a $5^\circ$
zone around the north celestial pole. This may not hold for other
instruments, so in this section we test a higher noise level of
120 mK, which is equivalent to reducing the total observation time to
25\% and may be ``more'' realistic. Again with 1000 simulations, we
obtain the one-dimensional power spectrum of the reconstructed 21 cm
signal, as shown in Figure \ref{fig:ps_120}. We find that the results
are similar to those obtained in \S\ref{ssec:results}, but with larger
fluctuations. This can be improved by increasing the number of pixels
used: for most working and upcoming facilities that are able to
produce images, the total number of pixels should be much larger than
1000, so they should be able to handle higher noise levels.
The noise may also be more complex in terms of stationarity. In the
above tests, we assume the noise to be stationary along both the
frequency and time axes, an assumption that may be broken in practical
observations. However, as the data are accumulated before the
foreground subtraction, nonstationarity in the time domain will not
affect our method. What about nonstationarity in the radio frequency
domain? For a noise level significantly lower than the 21 cm signal
(e.g., around several mK), this is not a serious problem, even though
the noise is mixed with the 21 cm signal after the subtraction. For a
noise level significantly higher than the 21 cm signal, the
subtraction algorithm itself still works, but more corrections are
required before producing the final power spectrum: in this condition
the noise is not white, so to exclude the power spectral density
contributed by the noise, one must subtract a more complex noise power
spectrum from the total power spectrum.
\subsection{Testing Other Signal Extension Methods}
\label{ssec:extension}
As described in \S\ref{ssec:boundary}, in the above tests we simply
use the periodic signal extension method. From Figure \ref{fig:wt_fg},
we note that the significant wavelet coefficients are mainly
contributed by the boundary effect. We have also tested other
extension methods, including zero padding and symmetric extension,
and find that they differ little from the periodic extension used
above. Nevertheless, we find that
if we extend the originally measured signal $I(\nu)$ as
\begin{gather}
I_{\rm ext}(\nu)=\left\{
\begin{array}{ll}
I(\nu_{\rm min})+(\nu-\nu_{\rm min})\frac{dI(\nu)}{d\nu}|_{\nu=\nu^{+}_{\rm min}} & \nu\in[2\nu_{\rm min}-\nu_{\rm max},\nu_{\rm min})\\
I(\nu) &\nu\in[\nu_{\rm min},\nu_{\rm max}]\\
I(\nu_{\rm max})+(\nu-\nu_{\rm max})\frac{dI(\nu)}{d\nu}|_{\nu=\nu^{-}_{\rm max}} &\nu\in(\nu_{\rm max},2\nu_{\rm max}-\nu_{\rm min}]
\end{array}
\right . ,
\end{gather}
which we call the linear extension method, the boundary effect
can be significantly suppressed. We present the wavelet coefficients
of the total signal and the filtered signal in band $\nu_{\rm
min}<\nu<\nu_{\rm max}$ in Figure \ref{fig:pad}. Note that the
filtering procedure is exactly the same as described in
\S\ref{ssec:filtering}, but applied to the total band after the
extension.
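A minimal sketch of this linear extension is given below, assuming a uniform frequency grid and estimating the one-sided boundary slopes by simple finite differences; the function name and the choice of extending by one full grid length on each side are our own illustration, not the authors' implementation:

```python
import numpy as np

def linear_extend(I, nu):
    """Extend a sampled spectrum I(nu) beyond [nu_min, nu_max] by
    continuing the boundary values with the one-sided slopes, in the
    spirit of the linear extension method above. Returns the extended
    frequency grid and signal (n extra samples on each side)."""
    # One-sided derivative estimates at the two boundaries.
    slope_lo = (I[1] - I[0]) / (nu[1] - nu[0])
    slope_hi = (I[-1] - I[-2]) / (nu[-1] - nu[-2])
    dnu = nu[1] - nu[0]                            # assumes a uniform grid
    n = len(nu)
    nu_left = nu[0] - dnu * np.arange(n, 0, -1)    # grid below nu_min
    nu_right = nu[-1] + dnu * np.arange(1, n + 1)  # grid above nu_max
    I_left = I[0] + (nu_left - nu[0]) * slope_lo
    I_right = I[-1] + (nu_right - nu[-1]) * slope_hi
    return (np.concatenate([nu_left, nu, nu_right]),
            np.concatenate([I_left, I, I_right]))
```

After the extension, the CWT filtering would be applied to the full extended band, and only the original band $\nu_{\rm min}<\nu<\nu_{\rm max}$ would be kept.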
We roughly test the effect of foreground subtraction with this
extension method using FG I. We find that for the $\Delta T_{\rm b}=6$
mK case, the change of $Q$ is not significant compared with the
periodic extension method, while for the $\Delta T_{\rm b}=60$ mK
case, the power leakage at the small-wave-number end almost
disappears, as shown in Figure \ref{fig:ps5}. This can be explained
by the fact that the boundary effect mainly affects the large-scale
components of the signal.
Although the simple test above shows that the linear extension method
is promising for handling the boundary effect, unlike the periodic
extension method that we use throughout this work, it is not yet
commonly used, and more systematic tests are required, which will be
performed in our future work. For this reason, in this work we still
use the periodic extension method to handle the boundary effect.
\subsection{Which Conditions Are the Different Methods Suited To?}
From the discussion above, we can conclude that if the foreground
spectrum can be well approximated by a low-order polynomial, the
traditional polynomial-fitting-based method works well and obtains
an acceptable estimate of the 21 cm spectrum. When the foreground is
no longer simple, for example when it possesses a turnover with
$\nu_w<20$ MHz, the fitting-based method does not produce an
acceptable result, but the wavelet-based method can still work
well. Furthermore, the actual foreground may be more complex and can
deviate from a power-law-shaped spectrum significantly, in which case
the fitting-based method can be seriously affected. If the instrument
has an uncorrected calibration error, the wavelet-based method also
has significant advantages over the polynomial-fitting-based method.
As for the Wp-smoothing-based method, in all the conditions that we
have tested it works at least as well as the traditional
polynomial-fitting-based method, and when the foreground is no longer
as simple as FG I it obtains about the same effect as our
method. However, solving a nonlinear BVP is a rather time-consuming
task, which may be a problem when a large number of subtractions are
required, for example when estimating power spectra.
\section{Conclusion}
\label{sec:conclusion}
We propose a CWT-based foreground subtraction method for the detection
of redshifted 21 cm signal from the EoR. This method works based on
the assumption that the foreground spectra are smooth, while the 21 cm
signal spectrum is full of saw-tooth-like structures; thus, their
characteristic scales are significantly different. We can distinguish
them in the wavelet coefficient space easily and perform the
foreground subtraction. By testing the wavelet transform based method
with a set of foreground spectra with different complexities, we find
that compared with the traditional spectral fitting based method, our
method is more tolerant to complex foregrounds. Furthermore, we also
find that when the instrument has uncorrected response errors, our
method can also work significantly better than the spectral fitting
based method. Our method obtains results similar to those of the Wp
smoothing method, which is also a non-parametric method, but our
method consumes much less computing time.
\acknowledgments
We thank the referee for his/her constructive and valuable comments,
which helped improve the manuscript. This work was supported by the
Ministry of Science and Technology of China (grant Nos. 2009CB824900
and 2013CB837900), the National Science Foundation of China (grant
Nos. 11203041, 11261140641, and 11125313), the Chinese Academy of
Sciences (grant No. KJZD-EW-T01), and Science and Technology
Commission of Shanghai Municipality (grant No. 12XD1406200).
\section{Summary and Discussions}
By explicit construction we have extended the Petrov-like boundary
condition to a finite cutoff surface and derived the incompressible
Navier-Stokes equation in the long-wavelength limit. In each model,
we have computed the value of the shear viscosity and discussed its
asymptotic behavior as the cutoff position approaches the horizon or
infinity. In general the kinematic viscosity is cutoff dependent, and
this dependence calls for further understanding from the side of the
holographic renormalization group flow. In the special case where the
cutoff surface approaches the horizon, our results reduce to the
previous ones obtained without employing a long-wavelength limit,
implying a deep analogy between the near-horizon limit and the
long-wavelength limit.
This work, as well as previous works imposing the Petrov-like
boundary condition in the near-horizon limit, only involves the
electromagnetic field as the simplest matter field in the bulk
(see, however, \cite{Wu:2013mda} for the perfect fluid case as a
step further). More general matter fields may lead to further
problems, such as the anisotropy caused by the axion field
\cite{MT}, which is rather interesting. It is also challenging to
extend this framework to a finite cutoff surface that may be
spatially curved. When the spatial part of the hypersurface is
compact, the long-wavelength limit does not seem applicable. As
emphasized in \cite{Eling:2009pb}, taking the long-wavelength
limit is essential to reduce the partial differential equation to
an ordinary differential equation. However, if the section of the
cutoff surface is compact, then the wavelength has an upper
bound, so that the long-wavelength limit cannot exist globally.
Nevertheless, for some special non-flat cutoff surfaces the
long-wavelength limit may still exist. We leave these issues for
further investigation in the future.
\begin{acknowledgments}
Wei Zhang is very grateful to Cheng-Yong Zhang for useful
discussion and help. This work is supported by the Natural Science
Foundation of China under Grant Nos.11175245, 11075206, 11275208,
11178002. Y.Ling also acknowledges the support from Jiangxi young
scientists (JingGang Star) program and 555 talent project of
Jiangxi Province.
\end{acknowledgments}
\section*{Appendix:}
\subsection{The Petrov-like boundary condition in the last model}
In this appendix we present the detailed calculation for the last
model of the gravity/fluid duality in a spacetime with matter
fields, following the general framework presented in
\cite{Zhang:2012uy}. We have the embedded hypersurface $\Sigma_c$,
whose metric reads
\begin{eqnarray}
ds^2_{p+1}&=&-f(r_c)dt^2+{r_c}^2{\tilde\delta}_{ij}d{\tilde x}^id{\tilde x}^j
\nonumber\\
&\equiv& -{(dx^0)}^2+\delta_{ij}dx^idx^j \nonumber\\
&=&-\frac{1}{\lambda^2}d\tau^2
+\frac{1}{\lambda}\delta_{IJ}dx^Idx^J. \nonumber
\end{eqnarray}
Just as we fix the induced metric $h_{ab}$ on the cutoff surface,
we also fix ${F_{ab}|}_{\Sigma_c}$, which can be regarded as a
Dirichlet-like boundary condition. Then we have
\begin{equation}
{F_{\tau I}|}_{r_c}=0. \nonumber
\end{equation}
${F_n}^b$, ${F_a}^b$ and $F^{ab}$ can be written in terms of
$F_{\mu\nu}$ on $\Sigma_c$ as
\begin{eqnarray}
&&{{F_n}^{\tau}|}_{r_c}=F_{n\tau}h^{\tau\tau},
\ \ \ \ \ \ \ \ \ \ \
{{F_n}^I|}_{r_c}=F_{nJ}h^{IJ}, \nonumber\\
&&{{F_{\tau}}^I|}_{r_c}=F_{\tau J}h^{IJ}=0, \ \ \ \ \
{{F^{IJ}}|}_{r_c}=F_{KL}h^{KI}h^{LJ}. \nonumber
\end{eqnarray}
Then, the perturbation of electromagnetic field
should take the following form
\begin{eqnarray}
F_{n\tau}=0+\lambda{F_{n\tau}}^{(1)}, \nonumber\\
F_{nI}=0+\lambda{F_{nI}}^{(1)}. \nonumber
\end{eqnarray}
Now we will give the detailed calculation from the Petrov-like
boundary condition to equation (\ref{fff}). Firstly we remark that
in the presence of matter fields, the Weyl tensor can be expressed
in terms of the intrinsic curvature and extrinsic curvature as
well as the energy-momentum tensor through Eqs.(3)-(6) in
\cite{Zhang:2012uy}. Moreover, since the extrinsic curvature is
related to the Brown-York stress tensor, we can finally rewrite
the Petrov-like boundary condition in terms of Brown-York stress
tensor as\footnote{Our calculation is applicable for a general
spacetime with matter fields, thus we keep $p$ as general until we
get back to the last model with a magnetic black brane, in which
$p$ is set to $2$.}
\begin{eqnarray}
&&\lambda{t^{\tau}}_{\tau}{t^I}_J+\frac{2}{\lambda}h^{IK}{t^{\tau}}_K{t^{\tau}}_J
-2\lambda^2{t^I}_{J,\tau}-\lambda{t^I}_K{t^K}_J-2h^{IK}{t^{\tau}}_{(K,J)}
+\lambda{\delta^I}_J[\frac{t}{p}(\frac{t}{p}-{t^{\tau}}_{\tau})+
2\lambda\partial_{\tau}\frac{t}{p}] \nonumber\\
&&+\lambda\frac{1}{p}(T_{\delta\beta}n^{\beta}n^{\delta}+2\Lambda+T+{\lambda}^2T_{\tau \tau}
-2\lambda T_{\delta \tau}n^{\delta}){{\delta}^I}_J-\lambda{T^I}_J=0. \nonumber
\end{eqnarray}
The energy-momentum tensor of electromagnetic field takes the form
\begin{equation}
T_{\mu\nu}=\frac{1}{4}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}-
F_{\mu\rho}{F_\nu}^{\rho}. \nonumber
\end{equation}
We have
\begin{eqnarray}
T_{\delta\beta}n^{\beta}n^{\delta}&=&T_{nn}
=\frac{1}{4}F_{\rho\sigma}F^{\rho\sigma}
-F_{n\rho}{F_n}^{\rho}, \nonumber\\
T&=&\frac{p-2}{4}F_{\rho\sigma}F^{\rho\sigma}, \nonumber\\
{\lambda}^2T_{\tau\tau}
&=&-\frac{1}{4}F_{\rho\sigma}F^{\rho\sigma}
-{\lambda}^2F_{\tau\rho}{F_\tau}^{\rho}, \nonumber\\
-2\lambda T_{\delta \tau}n^{\delta}
&=&-2\lambda T_{n\tau}
=2\lambda F_{n\rho}{F_\tau}^{\rho}, \nonumber\\
-{T^I}_J&=&-\frac{1}{4}{\delta^I}_JF_{\rho\sigma}F^{\rho\sigma}
+F^{I\rho}F_{J\rho}. \nonumber
\end{eqnarray}
The Petrov-like boundary condition further becomes
\begin{eqnarray}
&&\lambda{t^{\tau}}_{\tau}{t^I}_J+\frac{2}{\lambda}h^{IK}{t^{\tau}}_K{t^{\tau}}_J
-2\lambda^2{t^I}_{J,\tau}-\lambda{t^I}_K{t^K}_J-2h^{IK}{t^{\tau}}_{(K,J)}
+\lambda{\delta^I}_J[\frac{t}{p}(\frac{t}{p}-{t^{\tau}}_{\tau})+
2\lambda\partial_{\tau}\frac{t}{p}] \nonumber\\
&&+\lambda\frac{1}{p}(-\frac{1}{2}F_{\rho\sigma}F^{\rho\sigma}-F_{n\rho}{F_n}^{\rho}
-\lambda^2F_{\tau\rho}{F_\tau}^{\rho}+2\lambda F_{n\rho}{F_\tau}^{\rho}+2\Lambda)
{\delta^I}_J+\lambda F^{I\rho}F_{J\rho}=0. \nonumber
\end{eqnarray}
Moreover
\begin{eqnarray}
-\frac{1}{2}F_{\rho\sigma}F^{\rho\sigma}
&=&-F_{n\tau}F_{n\tau}h^{\tau\tau}-F_{nI}F_{nJ}h^{IJ}
-\frac{1}{2}F_{IJ}F_{KL}h^{KI}h^{LJ}, \nonumber\\
-F_{n\rho}{F_n}^{\rho}
&=&-F_{n\tau}F_{n\tau}h^{\tau\tau}
-F_{nI}F_{nJ}h^{IJ},\nonumber\\
-\lambda^2F_{\tau\rho}{F_\tau}^{\rho}
&=&-\lambda^2F_{n\tau}F_{n\tau}, \nonumber\\
2\lambda F_{n\rho}{F_\tau}^{\rho}
&=&2\lambda(F_{n\tau}{F_\tau}^{\tau}+F_{nI}{F_\tau}^I)
=0, \nonumber\\
F^{I\rho}F_{J\rho}
&=&F_{nJ}F_{nL}h^{IL}+F_{JK}F_{LM}h^{LI}h^{MK}. \nonumber
\end{eqnarray}
So, the Petrov-like boundary condition reads as
\begin{eqnarray}
&&\lambda{t^{\tau}}_{\tau}{t^I}_J+\frac{2}{\lambda}h^{IK}{t^{\tau}}_K{t^{\tau}}_J
-2\lambda^2{t^I}_{J,\tau}-\lambda{t^I}_K{t^K}_J-2h^{IK}{t^{\tau}}_{(K,J)}
+\lambda{\delta^I}_J[\frac{t}{p}(\frac{t}{p}-{t^{\tau}}_{\tau})+
2\lambda\partial_{\tau}\frac{t}{p}] \nonumber\\
&&+\lambda\frac{1}{p}[-2F_{n\tau}F_{n\tau}h^{\tau\tau}-\lambda^2F_{n\tau}F_{n\tau}-2F_{nI}F_{nJ}h^{IJ}
-\frac{1}{2}F_{IJ}F_{KL}h^{KI}h^{LJ}+2\Lambda]{\delta^I}_J \nonumber\\
&&+\lambda F_{nJ}F_{nL}h^{IL}+\lambda F_{JK}F_{LM}h^{LI}h^{MK}=0. \nonumber
\end{eqnarray}
After plugging $p=2$ into the above equation, we obtain the
Petrov-like boundary condition
\begin{eqnarray}
&&\lambda{t^{\tau}}_{\tau}{t^I}_J+\frac{2}{\lambda}h^{IK}{t^{\tau}}_K{t^{\tau}}_J
-2\lambda^2{t^I}_{J,\tau}-\lambda{t^I}_K{t^K}_J-2h^{IK}{t^{\tau}}_{(K,J)}
+\lambda{\delta^I}_J[\frac{t}{2}(\frac{t}{2}-{t^{\tau}}_{\tau})+
\lambda\partial_{\tau}t] \nonumber\\
&&+\lambda{\delta^I}_J[-F_{n\tau}F_{n\tau}h^{\tau\tau}-\frac{\lambda^2}{2}F_{n\tau}F_{n\tau}-F_{nI}F_{nJ}h^{IJ}
-\frac{1}{4}F_{IJ}F_{KL}h^{KI}h^{LJ}+\Lambda] \nonumber\\
&&+\lambda F_{nJ}F_{nL}h^{IL}+\lambda F_{JK}F_{LM}h^{LI}h^{MK}=0. \nonumber
\end{eqnarray}
Taking the perturbation expansion of the Brown-York stress tensor and
the electromagnetic field, we find that the leading order of the
expansion is automatically satisfied by the background, while the
sub-leading order at $\lambda^2$ reads
\begin{eqnarray}
{{t^I}_J}^{(1)} &=& \frac{2\sqrt{f}}{\partial_{r_c}f}
\delta^{IK}{{t^{\tau}}_K}^{(1)}{{t^{\tau}}_J}^{(1)}-
\frac{2\sqrt{f}}{\partial_{r_c}f}\delta^{IK}
{{t^{\tau}}_{(K,J)}}^{(1)} \nonumber\\
&&-\frac{f}{r_c\partial_{r_c}f}
{\delta^I}_J{{t^{\tau}}_{\tau}}^{(1)}
+\frac{r_c\partial_{r_c}f+2f}
{2r_c\partial_{r_c}f}{\delta^I}_Jt^{(1)}. \nonumber
\end{eqnarray}
\subsection{The Hamiltonian constraint in the last model}
Here we give the detailed calculation from the
Hamiltonian constraint to equation (\ref{za}).
The Hamiltonian constraint is
\begin{equation}
^{p+1}R+K_{ab}K^{ab}-K^2=2\Lambda+2T_{\mu\nu}n^{\mu}n^{\nu},
\ \ \ a,b=0,\dots p,\ \ \ \mu,\nu=0,\dots p+1. \nonumber
\end{equation}
In terms of $t_{ab}=Kh_{ab}-K_{ab}$ in coordinate
system ($\tau, x^I$), we get
\begin{equation}
{({t^{\tau}}_{\tau})}^2-\frac{2}{\lambda^2}h^{IJ}
{t^{\tau}}_I{t^{\tau}}_J+{t^I}_J{t^J}_I
-\frac{t^2}{p}-2\Lambda-2T_{\mu\nu}n^{\mu}n^{\nu}=0.
\nonumber
\end{equation}
Considering the last term on the left-hand side of the above equation
\begin{eqnarray}
-2T_{\mu\nu}n^{\mu}n^{\nu}=-2T_{nn}
=F_{n\tau}F_{n\tau}h^{\tau\tau}+F_{nI}F_{nJ}h^{IJ}
-\frac{1}{2}F_{IJ}F_{KL}h^{KI}h^{LJ}, \nonumber
\end{eqnarray}
then the Hamiltonian constraint becomes
\begin{equation}
{({t^{\tau}}_{\tau})}^2-\frac{2}{\lambda^2}h^{IJ}
{t^{\tau}}_I{t^{\tau}}_J+{t^I}_J{t^J}_I-\frac{t^2}{p}
-2\Lambda+F_{n\tau}F_{n\tau}h^{\tau\tau}+F_{nI}F_{nJ}h^{IJ}
-\frac{1}{2}F_{IJ}F_{KL}h^{KI}h^{LJ}=0. \nonumber
\end{equation}
Now, considering the perturbation of the electromagnetic field and
meanwhile taking the perturbation expansion of the Brown-York stress
tensor, we find that the leading order of the expansion is
automatically satisfied by the background, while the sub-leading
order at $\lambda^1$ reads
\begin{equation}
{{t^{\tau}}_{\tau}}^{(1)}
=\frac{2\sqrt{f}r_c}{-r_c\partial_{r_c}f+2f}
\delta^{MN}{{t^{\tau}}_M}^{(1)}{{t^{\tau}}_N}^{(1)}
+\frac{2f}{-r_c\partial_{r_c}f+2f}t^{(1)}. \nonumber
\end{equation}
\subsection{The momentum constraint in the last model}
The following discussion concerns the momentum constraint
\begin{equation}
\partial_a{t^a}_b=T_{\mu b}n^{\mu}. \nonumber
\end{equation}
The time component of the equation is
\begin{equation}
\partial_a{t^a}_{\tau}=T_{\mu\tau}n^{\mu}. \nonumber
\end{equation}
Because
\begin{eqnarray}
\partial_a{t^a}_{\tau}
&=&\partial_{\tau}{t^\tau}_{\tau}+\partial_I{t^I}_{\tau}
=\lambda\partial_{\tau}{{t^\tau}_\tau}^{(1)}
-\frac{1}{\lambda}\partial^I{{t^\tau}_I}^{(1)}+\dots,
\nonumber\\
T_{\mu\tau}n^{\mu}
&=&T_{n\tau}=0, \nonumber
\end{eqnarray}
then at leading order it gives rise to
\begin{equation}
\partial^I{{t^\tau}_I}^{(1)}=0. \nonumber
\end{equation}
The space component of the equation is
\begin{equation}
\partial_a{t^a}_I=T_{\mu I}n^{\mu}. \nonumber
\end{equation}
Similarly, since
\begin{eqnarray}
\partial_a{t^a}_I
&=&\partial_{\tau}{t^\tau}_I+\partial_J{t^J}_I \nonumber\\
&=&\lambda\partial_{\tau}{{t^\tau}_I}^{(1)}
+\lambda\partial_J{{t^J}_I}^{(1)}, \nonumber\\
T_{\mu I}n^{\mu}
&=&T_{nI} \nonumber\\
&=&-(0+\lambda{F_{nJ}}^{(1)}){F_I}^J \nonumber\\
&=&-\lambda{F_{nJ}}^{(1)}{F_I}^J, \nonumber
\end{eqnarray}
then at leading order we have
\begin{equation}
\partial_\tau{{t^\tau}_I}^{(1)}
+\partial_J{{t^J}_I}^{(1)}=-{F_{nJ}}^{(1)}{F_I}^J.
\nonumber
\end{equation}
\section{Introduction}\label{sec:1}
Urban areas, with their high density of population and traffic, suffer from safety issues and reduced transport efficiency~\cite{KIM2017159,9143155}. To prevent threats, various sensing and monitoring Internet of Things (IoT) devices have been deployed throughout city-wide areas~\cite{isj2021dao}. Thus, IoT devices make the city smarter and more secure. Closed-circuit television (CCTV) is one of the main supervision tools in smart city surveillance applications and is deployed at points of interest (POIs), that is, geometrically and socially important spots or crime-prone areas. Furthermore, the CCTV-recorded data is real-time video streaming, so the corresponding streaming and scheduling technologies are also actively discussed~\cite{ton201608kim,jsac201806choi,tmc201907koo,twc201912choi,tmc2021yi,mm2017koo}. The real-time monitoring video data from CCTVs enables facility infrastructure management, spatial information acquisition, and adequate responses to emerging problems in smart city applications.
Most monitoring/sensing data have a time-sensitive deadline and are valid only for a specific period of time. In this paper, we propose strategic data collection from sparsely located CCTV devices. We assume a situation in which the intelligent IoT CCTVs cannot transmit or relay sensing data to a base station via multi-hop relays. Instead, we consider the case where we deploy unmanned aerial vehicles (UAVs) to selectively collect data while planning the flight route~\cite{tvt202106jung,tvt2021jung}. UAVs provide increasingly extensive and diverse applications as their utilization grows~\cite{9406452,electronics20jung,10.4108/eai.30-6-2020.165502}. Owing to their natural traits, UAVs can swiftly move and optimize their paths in order to quickly complete their missions~\cite{access19geraldes}. UAVs can also collect raw data and deliver it to the target point. Furthermore, communication between a ground device and a UAV in the air has an advantage in terms of radio signal degradation, unlike wireless communication between two devices on the ground, where signals degrade quickly due to various shadowing and scattering effects~\cite{chandrasekharan2016}.
However, due to the battery limitations of UAVs, it is occasionally unrealistic to collect all data in the POIs. Instead, we take an economic approach in order to selectively collect data under the consideration of distance and data similarity. In this paper, we design a novel Myerson-auction-based algorithm for distributed and autonomous data collection using UAVs. Furthermore, we utilize a deep-learning-based framework to solve the Myerson-auction-based formulation for optimizing the seller's revenue.
Several studies have applied variants of the Myerson auction to resource allocation problems~\cite{shin2019auction, luong2018, dutting2019optimal}.
The contribution of this paper is twofold. First, we collect data from CCTVs efficiently in terms of distance and data redundancy. Second, we maximize the seller's revenue with a deep-learning auction approach.
The rest of the paper is organized as follows.
Sec.~\ref{sec:2} and Sec.~\ref{sec:3} present the system model and our auction model, respectively. Sec.~\ref{sec:4} evaluates the performance and Sec.~\ref{sec:5} concludes the paper.
\section{Data Collection System Model}\label{sec:2}
\subsection{Overall Architecture}\label{sec:2-1}
This section introduces the overall architecture of our data collection scenario and auction process. The proposed system consists of a UAV and sparsely located CCTV devices installed at the POIs throughout the city, as shown in Figure~\ref{fig:overview}. Due to the relatively long distance between the devices, it is sometimes unrealistic for them to transmit or relay sensing data to a base station.
We assume that the UAV can only be assigned to a single device at a time. As a result, the devices compete for the UAV's data processing ability to transmit their collected data.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{auction.png}
\caption{Data collection scenario by UAV.}
\label{fig:overview}
\end{figure}
The UAV acts as the seller and the devices act as buyers.
The devices in need strategically submit their bids based on their valuations.
The UAV then collects all bids in transformed form. The device with the highest allocation probability becomes the winner and pays the determined final payment, as detailed in Sec.~\ref{sec:3}.
Through the sequence of auctions, the trajectory of the UAV is determined.
\subsection{Private Valuation}\label{sec:2-2}
Note that each device $u_i$ has its own private valuation $v_i$, given by Eq.~\eqref{eq:valuation}.
We assume that every bidder device knows the last collected data and the location of the UAV. The closer a device is to the UAV, the higher its winning probability, which means it is more willing to participate in the auction.
Devices are located at the POIs, while the location of the UAV varies. The distance $d_i$ between the UAV and the device can be derived as
\begin{equation}
d_i = \sqrt{{(x_{u}(t) - x_{i})^2}+{(y_{u}(t) - y_{i})^2}},
\label{eq:euclidean}
\end{equation}
where $x_{u}(t)$ and $y_{u}(t)$ denote the location of the UAV at time $t$, and $x_{i}$ and $y_{i}$ denote the location of device $i$. In addition, if the current bidder's data is similar to the data collected in the previous round, the data may be redundant, and a device with high similarity will be less active in the auction.
For all images in the pile, we compare pairs of $m$ $\times$ $n$ sized images one by one for similarity $s_i$, using the mean squared error (MSE) over all pixels.
$K = \{1,2,..,|K|\}$ denotes the image pile collected in the previous round and $I = \{1,2,..,|I|\}$ denotes the current device's image pile; the MSE (denoted as \textsf{MSE} below) can be derived as follows,
\begin{equation}
s_i = \textsf{MSE} = \frac{1}{mn} \sum_{i = 0}^{m-1}\sum_{j = 0}^{n-1}\left[I(i,j) - K(i,j)\right]^2.
\label{eq:similarity}
\end{equation}
In general, when the distance $d_i$ and the data similarity $s_i$ are large, the buyer has less incentive to bid. Therefore, the valuation $v_i$ can be expressed as
\begin{equation}
v_i = s_i \cdot d_i.
\label{eq:valuation}
\end{equation}
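The distance, similarity, and valuation above can be combined as in the following sketch; for simplicity it compares a single pair of images (rather than whole piles) given as nested lists of pixel values, and the helper names are our own illustration:

```python
import math

def mse(img_a, img_b):
    """Mean squared error between two equally sized grayscale images,
    the per-pixel similarity measure s_i of the text."""
    m, n = len(img_a), len(img_a[0])
    return sum((img_a[i][j] - img_b[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def valuation(uav_xy, dev_xy, img_prev, img_cur):
    """Private valuation v_i = s_i * d_i, combining the Euclidean
    UAV-device distance with the image MSE from the previous round."""
    d_i = math.dist(uav_xy, dev_xy)   # Euclidean distance
    s_i = mse(img_prev, img_cur)      # data (dis)similarity
    return s_i * d_i
```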
\section{Algorithm Design Concepts}\label{sec:3}
Myerson presents provable analytical results for single-item auctions optimizing the auctioneer's revenue in truthful settings where each buyer has its own private valuation of the resource \cite{myerson1981optimal}.
For a single-item auction with $N$ bidders, Myerson's mechanism first introduces a function of the bidder's valuation known as the virtual valuation \cite{chawla2007algorithmic},
\begin{equation}
\phi_i(v_i) = v_i - \frac {1 - F_i(v_i)} {f_i(v_i)}.
\end{equation}
Each bidder $i$ has its own individual private valuation $v_i$, drawn from the cumulative distribution function $F_i(v_i)$, whose probability density function is $f_i(v_i)$.
With the concept of virtual valuation, the winner and the final payment are determined. The winner is the bidder with the highest virtual valuation. The final payment $q_i$ is calculated from the second-highest virtual valuation using~\eqref{eq:payment}; that is, the winning bidder pays a price equal to the inverse virtual valuation of the second-highest virtual valuation,
\begin{equation}
q_i = \phi_i^{-1}(\phi_j(v_j)).
\label{eq:payment}
\end{equation}
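As a concrete illustration of the mechanism (our own worked example, not from the paper), consider i.i.d. valuations drawn from $U[0,1]$, for which $F(v)=v$ and $f(v)=1$, so $\phi(v)=2v-1$ and $\phi^{-1}(y)=(y+1)/2$:

```python
def myerson_auction(values):
    """Single-item Myerson auction for i.i.d. valuations on U[0, 1],
    where phi(v) = 2v - 1 and phi_inv(y) = (y + 1) / 2.
    Returns (winner index, payment), or (None, 0.0) if no bidder
    clears the reserve phi(v) >= 0, i.e. v >= 0.5."""
    phi = lambda v: 2.0 * v - 1.0
    phi_inv = lambda y: (y + 1.0) / 2.0
    virt = [phi(v) for v in values]
    winner = max(range(len(values)), key=lambda i: virt[i])
    if virt[winner] < 0.0:          # nobody beats the reserve price
        return None, 0.0
    # Pay the inverse virtual valuation of the strongest competing
    # (virtual) bid, clipped below at the reserve of zero.
    second = max([virt[j] for j in range(len(values)) if j != winner] + [0.0])
    return winner, phi_inv(second)
```

For bids $(0.9, 0.7, 0.3)$ the virtual valuations are $(0.8, 0.4, -0.4)$, so bidder 0 wins and pays $\phi^{-1}(0.4)=0.7$, the familiar second-price outcome for symmetric bidders.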
We now show how the Myerson-variant deep-learning-based auction maximizes the expected revenue of the UAV while guaranteeing truthfulness and revenue optimality. The detailed neural architectures used to solve our proposed auction-based problem are organized in Algorithm~\ref{al:deep}. The monotonic network plays the role of the virtual valuation in the Myerson auction~\cite{luong2018}. The allocation network matches the UAV with the device holding the highest nonzero transformed bid. The payment network determines the final payment of the winner.
The neural architecture parameters $w^{i}_{kj}$ and $\beta^{i}_{kj}$ are trained with the valuation profiles as the training set while minimizing the loss function.
\begin{algorithm}[t]
\caption{Deep Learning-Based Auction Algorithm}
\begin{algorithmic}[0]
\STATE {\bfseries Input:} Candidate bid sets $\textbf{b}=(b_{1}, b_{2},...,b_{N})$. \\
\STATE {\bfseries Output:} Allocation probability set $g_i=(g_{1}, g_{2},...,g_{N})$, payment set $p_i=(p_{1}, p_{2},...,p_{N})$.
\REPEAT
\STATE Compute \ $\phi_i(b_i) = \max_{\forall k \in K}\min_{\forall j \in J} \left(w^{i}_{kj}b_i + \beta^{i}_{kj}\right)$ ;
\STATE Compute \ $g_i(\bar{b}) = \frac{e^{k\bar{b}_i}}{\sum_{j=1}^{N+1}e^{k\bar{b}_j}}$ ;
\STATE Compute $p_i^{0}(\bar{b}) = ReLU(\max_{\forall j \neq i}\bar{b_j})$ ;
\STATE Compute\ $\phi^{-1}_i(y) = \min_{\forall k \in K}\max_{\forall j \in J} (w^{i}_{kj})^{-1}\left(y-\beta^{i}_{kj}\right)$;
\STATE Compute \ $L(w,\beta) =-\sum_{i=1}^{N}g_i^{(w,\beta)}(v^{s})p_i^{(w, \beta)}(v^{s})$ ;
\UNTIL{The loss function $L(w,\beta)$ converges to the minimum.}
\end{algorithmic}
\label{al:deep}
\end{algorithm}
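The two key building blocks of Algorithm 1, the monotone max-min-of-linear transform and the softmax allocation, can be sketched as follows. This is a hand-rolled illustration (function names, fixed weights, and the smoothing constant `kappa` are our own); a real implementation would train $w$ and $\beta$ by minimizing the loss $L(w,\beta)$ with an autodiff framework:

```python
import numpy as np

def monotone_transform(b, w, beta):
    """phi(b) = max_k min_j (w[k, j] * b + beta[k, j]).
    With all w > 0 this is monotonically increasing in b, so its
    inverse phi_inv(y) = min_k max_j (y - beta[k, j]) / w[k, j]
    is well defined, mirroring the virtual valuation and its inverse."""
    return np.max(np.min(w * b + beta, axis=1))

def softmax_allocation(bbar, kappa=10.0):
    """Smooth winner selection over transformed bids; a dummy zero
    bid is appended for the no-sale outcome, as in the (N+1)-way
    softmax of Algorithm 1."""
    z = np.append(bbar, 0.0)
    e = np.exp(kappa * z - kappa * z.max())   # numerically stabilized
    return (e / e.sum())[:-1]
```

The max of mins of increasing linear pieces is itself increasing, which is what makes the payment rule (the inverse transform of the strongest competing bid) well defined.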
\section{Performance Evaluation}\label{sec:4}
In this section, we evaluate the deep-learning-based optimal auction (DLA) algorithm for data collection, with the second-price auction (SPA) as a baseline.
We construct the neural network with the PyTorch library. To evaluate our system, we run the deep-learning auction with five participating devices, with valuations drawn from $f_{V}(v)\sim U[0.5,1]$. We set five groups of three linear functions for the neural network. Overall, 500 training iterations were performed for every round.
Fig. \ref{fig:revenue gap} shows 300 individual deep-learning auction results. The revenue gap between SPA and DLA is obtained for each iteration, and the gaps are sorted in ascending order. That is, the graph shows the range of gaps that can occur over iterations. Overall, we confirm that DLA improves the revenue compared to SPA.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{revenue.png}
\caption{Revenue gap from 300 experiment cases sorted in ascending order.}
\label{fig:revenue gap}
\end{figure}
\section{Concluding Remarks}\label{sec:5}
In this paper, we propose distributed and autonomous aerial data collection for smart city surveillance applications. We collect data from CCTV devices selectively, under the consideration of distance and data redundancy. With the deep-learning auction, our scenario achieves the initial objective, namely maximizing the revenue of the seller UAV while preserving the truthfulness conditions in distributed resource allocation. The evaluation results confirm that the auction-based resource allocation formulation between the CCTV devices and the UAV gives distinct revenue benefits compared to the traditional SPA.
\section*{Acknowledgment}
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2019M3E4A1080391, 2021R1A4A1030775). S. Jung and J. Kim are the corresponding authors of this paper (e-mails: jungsoyi@korea.ac.kr, joongheon@korea.ac.kr).
\bibliographystyle{IEEEtran}
\section{Introduction}
A Markov decision process (MDP) is a stochastic control process in discrete time~\cite{Puterman:2014aa}, and it has played an essential role in studying optimizations investigated by dynamic programming~\cite{Bellman:2015aa} and reinforcement learning~\cite{Sutton:1998aa}.
MDPs are extensively applied in many areas, such as automatic control and robotics.
An MDP with a finite horizon $T$,
which is the interest of this paper,
is a tuple $(S,A,\{P_{a}(s',s)\},R)$, where $S$ is a state space and $A$ is the set of actions, both of which are continuous. Here $\{P_{a}(s',s)\}$ is the state transition probability density to $s'\in S$ from $s\in S$ exerted by the action $a\in A$, and it satisfies the Markov property.
Moreover, $R^t: S\times A\rightarrow\mathbb{R}$ is called the reward function defined at each time step $t$; if the process is in some state $s\in S$ and the action $a$ is chosen, the corresponding $R^t(s,a)$ is immediately accumulated.
The task of optimal control is to select an optimal policy $\pi_t^*$ at time $t\in[0,T]$ from the set of policies $\{\pi:S\rightarrow A\}$ to maximize the expectation of the total accumulated reward $\mathcal{R}^t$ over the unknown future $[t,T]$.
Conventionally, $\mathcal{R}^t$ is simply the summation, denoted by $\mathcal{R}_+^t$, of the rewards obtained:
\begin{eqnarray}
\label{Areward}
\mathcal{R}_+^t(s_t,a_t;\cdots;s_T,a_T)= R^t(s_t,a_t)+R^{t+1}(s_{t+1},a_{t+1})+\cdots+R^{T}(s_T,a_T).
\end{eqnarray}
By the definition of the optimality, the optimal policy, denoted by $\pi^*_{t,+}$ to emphasize the accumulation way as the addition, takes the form as
\begin{eqnarray}
\label{optimization}
\pi^*_{t,+}(s_t)=\text{argmax}_{a_t}\text{max}_{a_{t+1},\cdots,a_{T}}\mathbb{E}^{(t+1)}\left[\mathcal{R}_+^t(s_t,a_t;s_{t+1},a_{t+1};\cdots;s_T,a_T)\right],
\end{eqnarray}
where the expectation $\mathbb{E}^{(t+1)}$ is taken on the distributions $\left\{s_{k}\sim P_{a_{k-1}}(s_k,s_{k-1})|t<k< T\right\}$.
If a linear transition with Gaussian noise is assumed and a linear quadratic reward is chosen,
such an optimization can be exactly solved.
As we will briefly review later,
the solution for the optimal policy $\pi^*_{t,+}$ does not depend on the noise of the linear transition~\cite{Kwakernaak:1972aa} (cf. Eq.~(\ref{aLQR})), which implies that the noise plays a completely trivial role there.
However, noise is important in real systems, and noise dependence in the solution of the optimal policy can indicate whether the model under consideration is reasonable.
Thus, it is interesting to extend this solution or discover other exactly solvable cases where optimal policies are noise dependent; such a control is expected to be advantageous in the presence of significant noise.
Furthermore,
the generality of the additive way of accumulating rewards remains open, and we are interested in a framework more general than additive accumulation, namely multiplicative rewards, which is the other main goal of the current paper, answered in the positive.
Our proposal of such a multiplicative scheme,
by the definition given later,
should be distinguished from the similar terminology of multiplicative MDPs~\cite{Howard:1972aa,Sladky:1976aa,Rothblum:1984aa,Borkar:2002aa,Kallenberg:2011aa,Osogami:2012aa,White:2018aa,Freitas:2018aa,Bertsekas:2019aa}, where the product means the additive rewards being multiplied by one-period transition matrices.
This paper is organized as follows.
In Sec.~\ref{sec_axiom}, we propose a general paradigm of reward accumulations to generalize the concept of the additive accumulation.
We review the exact solution under the additive rewards and derive one of our main results as an exactly solvable noise dependent optimization under a multiplicative reward in Sec.~\ref{sec_add}.
In light of this result,
we show in Sec.~\ref{sec_mul} that our proposed multiplicative scheme of reward accumulation is actually a general, even model-independent, framework, followed by the conclusion in Sec.~\ref{sec_con}.
\section{An axiomatic approach to reward accumulations}
\label{sec_axiom}
In this work, to construct other exact solutions whose optimal policies are noise dependent, we investigate alternative ways in which the rewards $\{R^t\}$ are accumulated,
and first consider the following generalization of the accumulation and optimization:
\begin{eqnarray}
\label{optimization_1}
\pi^*_t(s_t)=\text{argmax}_{a_t}\text{max}_{a_{t+1},\cdots,a_{T}}\mathbb{E}^{(t+1)}\left[\mathcal{R}^t(s_t,a_t;s_{t+1},a_{t+1};\cdots;s_T,a_T)\right], \\
\label{Areward_1}
\mathcal{R}^t(s_t,a_t;\cdots;s_T,a_T)= R^t(s_t,a_t)\oplus R^{t+1}(s_{t+1},a_{t+1})\oplus\cdots\oplus R^{T}(s_T,a_T),
\end{eqnarray}
where the binary operation $\oplus:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ denotes a general accumulating way.
We require that such a general accumulation satisfies the following two conditions:
\begin{eqnarray}
r_1\oplus r_2=r_2\oplus r_1;\,\,r_1\oplus (r_2\oplus r_3)=(r_1\oplus r_2)\oplus r_3.
\end{eqnarray}
These imply, respectively, that the order of the rewards and the order of accumulation are irrelevant~\footnote{However, for an infinite horizon, the existence of an analogous discount factor would invalidate the commutativity.}.
Clearly the conventional accumulation is $\oplus=+:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$, i.e. the traditional summation of two real numbers, which trivially satisfies the conditions above.
Another simple and natural choice is the multiplication $\oplus=\cdot$, which will be our focus:
\begin{eqnarray}
\mathcal{R}_\cdot^t(s_t,a_t;s_{t+1},a_{t+1};\cdots)=R^t(s_t,a_t)\cdot R^{t+1}(s_{t+1},a_{t+1})\cdots,
\end{eqnarray}
where the subscript ``${}_\cdot$'' of $\mathcal{R}_\cdot^t$ denotes the multiplication.
At first glance, the multiplicative accumulation is similar to the conventional additive one since it can be transformed to be a summation form as
\begin{eqnarray}
\label{log}
\ln\mathcal{R}_\cdot^t=\ln R^t(s_t,a_t)+\ln R^{t+1}(s_{t+1},a_{t+1})+\cdots.
\end{eqnarray}
Nevertheless, the essential distinction shows up in the optimal policy (\ref{optimization_1}): by Jensen's inequality~\cite{Rudin:2006aa}, $\mathbb{E}[\ln(\cdot)]\neq \ln\mathbb{E}[\cdot]$ for a general distribution unless the process is completely deterministic.
This makes the multiplicative and the additive ways fundamentally different.
In other words, the optimization of multiplicatively accumulated rewards is equivalent to that of additive ones only when the uncertainty in $P_{a}(s',s)$ is absent.
This observation will serve as a useful consistency check later.
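The Jensen gap above is easy to exhibit numerically. The following minimal sketch (our own illustration, using an arbitrary two-point reward distribution not taken from the text) shows that $\mathbb{E}[\ln R]<\ln\mathbb{E}[R]$ as soon as $R$ is random:

```python
import math

# Two-point reward distribution: R = 1 or R = 9, each with probability 1/2.
rewards = [1.0, 9.0]

mean_of_log = sum(math.log(r) for r in rewards) / len(rewards)   # E[ln R] = ln 3
log_of_mean = math.log(sum(rewards) / len(rewards))              # ln E[R] = ln 5

# Jensen's inequality: E[ln R] <= ln E[R], with equality only for a
# deterministic R; here the gap is ln 5 - ln 3 > 0.
assert mean_of_log < log_of_mean
```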
\section{Additive and multiplicative rewards}
\label{sec_add}
In this section, we will first review an exact solution to MDP for the additive rewards $\oplus=+$~\cite{Kwakernaak:1972aa}.
Then we will derive an analogous exactly solvable MDP with a multiplicative reward $\oplus=\cdot$ as our main result.
\subsection{Linear transition with a Gaussian noise}
In the following discussion,
we assume the following linear transition for $t=1,\cdots,T$
\begin{eqnarray}
\label{LT}
\left\{\begin{array}{l}s_{t}=A_{t-1}s_{t-1}+B_{t-1}a_{t-1}+w_{t-1};\\
w_{t-1}\sim\mathcal{N}(0,\Sigma_{t-1}),\end{array}\right.
\end{eqnarray}
where the Gaussian distribution $\mathcal{N}(\mu,\Sigma)$ takes the form of
\begin{eqnarray}
\label{white_noise}
p(x;\mu,\Sigma)=\frac{1}{\sqrt{|2\pi\Sigma|}}\exp\left[-\frac{1}{2}(x-\mu)^\intercal\Sigma^{-1}(x-\mu)\right].
\end{eqnarray}
\subsection{Additive linear quadratic rewards}
Let us first review the result of $\oplus=+$ with the following quadratic rewards together with the linear transition (\ref{LT}) above called linear quadratic regulator (LQR)~\cite{Kwakernaak:1972aa}:
\begin{eqnarray}
\label{LQR}
R_\text{LQR}^{t}(s_t,a_t)=-s_t^\intercal U_ts_t-a_t^\intercal W_ta_t,
\end{eqnarray}
where $U_t$ and $W_t$ are positive definite matrices.
The choice $\oplus=+$ means the cumulative reward is the following summation:
\begin{eqnarray}
\mathcal{R}_{+,\text{LQR}}^t(s_t,a_t;\cdots;s_T,a_T)= \sum_{k=t}^TR_\text{LQR}^k(s_k,a_k)=\sum_{k=t}^T-s_k^\intercal U_ks_k-a_k^\intercal W_ka_k.
\end{eqnarray}
Then the optimal policy can be obtained by Eq.~(\ref{optimization}) as
\begin{eqnarray}
\label{aLQR}
\pi_{t,+\text{ LQR}}^*(s_t)=\left[(W_t-B^\intercal_t\Phi_{t+1\text{ LQR}}B_t)^{-1}B_t\Phi_{t+1\text{ LQR}}A_t\right]s_t,
\end{eqnarray}
where $\Phi_{t\text{ LQR}}$ is \emph{backward} updated by $\Phi_{t+1\text{ LQR}}$ through the following Riccati equation:
\begin{eqnarray}
\label{riccati}
\Phi_{t\text{ LQR}}=A^\intercal_t\left[\Phi_{t+1\text{ LQR}}+\Phi_{t+1\text{ LQR}}B_t(W_t-B_t^\intercal\Phi_{t+1\text{ LQR}}B_t)^{-1}B_t\Phi_{t+1\text{ LQR}}\right]A_t-U_t,
\end{eqnarray}
with the initialization as
\begin{eqnarray}
\Phi_{T+1\text{ LQR}}=0.
\end{eqnarray}
Although we will not re-derive this exact solution,
it should be noted that the optimal policy (\ref{aLQR}) is completely independent of the noise $w_t$ in the linear transition (\ref{LT}).
This property enables us to apply the LQR even without measuring the covariance matrices $\Sigma$.
It is also reflected in the fact that the updating rule (\ref{riccati}) is the Riccati equation for the deterministic optimal control with LQR.
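The backward recursion defined by Eqs.~(\ref{aLQR}) and (\ref{riccati}) is straightforward to implement. The sketch below (our own illustration, specialized to a scalar system so that transposes are trivial; the parameter values are hypothetical) computes the gains $K_t$ with $\pi_{t,+\text{ LQR}}^*(s)=K_t\, s$, and makes the noise independence concrete: the covariance $\Sigma_t$ never appears.

```python
def lqr_gains(A, B, U, W, T):
    """Backward Riccati recursion for the additive LQR, in the paper's
    reward-maximization sign convention, for a scalar system.
    Returns [K_1, ..., K_T] with pi*_t(s) = K_t * s."""
    Phi = 0.0                      # Phi_{T+1} = 0
    gains = []
    for t in range(T, 0, -1):      # backward in time
        K = B * Phi * A / (W - B * Phi * B)                              # Eq. (aLQR)
        gains.append(K)
        Phi = A * (Phi + Phi * B * B * Phi / (W - B * Phi * B)) * A - U  # Eq. (riccati)
    return gains[::-1]

gains = lqr_gains(A=1.0, B=0.5, U=1.0, W=1.0, T=5)
# gains[-1] = K_T = 0 (terminal action vanishes); gains[-2] = K_{T-1} = -0.4
```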
\subsection{Multiplicative linear exponentiated quadratic rewards}
In this part, we will investigate the case of $\oplus=\cdot$ in Eq.~(\ref{Areward_1}), i.e. the multiplicative rewards:
\begin{eqnarray}
\mathcal{R}_\cdot^t(s_t,a_t;\cdots;s_T,a_T)=\prod_{k=t}^TR^k(s_k,a_k),
\end{eqnarray}
with the following exponentiated quadratic reward
\begin{eqnarray}
R^k(s_k,a_k)=\exp\left(-s_k^\intercal U_ks_k-a_k^\intercal W_ka_k\right),
\end{eqnarray}
which implies
\begin{eqnarray}
\label{reward_m}
\mathcal{R}_\cdot^t(s_t,a_t;\cdots;s_T,a_T)=\exp\left(\sum_{k=t}^T-s_k^\intercal U_ks_k-a_k^\intercal W_ka_k\right).
\end{eqnarray}
We recall from Eq.~(\ref{optimization_1}) that
\begin{eqnarray}
\label{a*}
\pi^*_{t,\cdot}(s_t)=\text{argmax}_{a_t}\text{max}_{a_{t+1},\cdots,a_{T}}\mathbb{E}^{(t+1)}\left[\exp\left(\sum_{k=t}^T-s_k^\intercal U_ks_k-a_k^\intercal W_ka_k\right)\right].
\end{eqnarray}
It is natural to define the value function as the maximization in Eq.~(\ref{a*}):
\begin{eqnarray}
\label{value}
V^*_t(s_t)=\text{max}_{a_t}\text{max}_{a_{t+1},\cdots,a_{T}}\mathbb{E}^{(t+1)}\left[\mathcal{R}_\cdot^t(s_t,a_t;s_{t+1},a_{t+1};\cdots;s_T,a_T)\right].
\end{eqnarray}
Let us first consider $\pi_{T,\cdot}^*(s_T)$ from $V_T^*(s_T)$ in Eq.~(\ref{value}) since the world ends at $T$:
\begin{eqnarray}
\label{ind_0}
V_T^*(s_T)&=&\text{max}_{a_T}[\mathcal{R}_\cdot^T(s_T,a_T)]\nonumber\\
&=&\text{max}_{a_T}[\exp\left(-s_T^\intercal U_Ts_T-a_T^\intercal W_Ta_T\right)]\nonumber\\
&=&\exp\left(-s_T^\intercal U_Ts_T\right),
\end{eqnarray}
with $\pi_T^*=0$ due to the positive-definiteness of the matrix $W_T$.
Therefore, observing Eq.~(\ref{ind_0}), we set the induction hypothesis as
\begin{eqnarray}
\label{ind_ass}
V_{t+1}^*(s_{t+1})\stackrel{?}{=}\frac{1}{D_{t+1}}\exp(s^\intercal_{t+1}\Phi_{t+1}s_{t+1}),
\end{eqnarray}
for some to-be-determined matrix $\Phi_{t+1}$ and number $D_{t+1}$ independent of $s_{t+1}$.
Our main task is to prove that $V_t^*(s_t)$ takes exactly the same form with some $\Phi_t$ and $D_t$ derived from $\Phi_{t+1}$ and $D_{t+1}$.
By definitions in Eqs.~(\ref{reward_m},\ref{value}) and the induction assumption (\ref{ind_ass}),
\begin{eqnarray}
\label{value_1}
V^*_t(s_t)&=&\text{max}_{a_t}\left\{\exp(-s_t^\intercal U_ts_t-a_t^\intercal W_ta_t)\cdot\text{max}_{a_{t+1},\cdots,a_{T}}\mathbb{E}^{(t+1)}\left[\mathcal{R}_\cdot^{t+1}(s_{t+1},a_{t+1};\cdots;s_T,a_T)\right]\right\}\nonumber\\
&=&\exp(-s_t^\intercal U_ts_t)\text{max}_{a_t}\left[\exp(-a_t^\intercal W_ta_t)\mathbb{E}_{s_{t+1}\sim\mathcal{N}(A_ts_t+B_ta_t,\Sigma_t)}V_{t+1}^*(s_{t+1})\right]\nonumber\\
&=&\frac{\exp(-s_t^\intercal U_ts_t)}{D_{t+1}}\text{max}_{a_t}\left\{\exp(-a_t^\intercal W_ta_t)\mathbb{E}_{s_{t+1}\sim\mathcal{N}(A_ts_t+B_ta_t,\Sigma_t)}\left[\exp(s^\intercal_{t+1}\Phi_{t+1}s_{t+1})\right]\right\},
\end{eqnarray}
where $\mathbb{E}_{s_{t+1}\sim\mathcal{N}(A_ts_t+B_ta_t,\Sigma_t)}$ denotes sampling $s_{t+1}$ from the linear transition with Gaussian noise as in Eq.~(\ref{LT}).
We then carry out this Gaussian integral in Eq.~(\ref{value_1}):
\begin{eqnarray}
\label{value_2}
V^*_t(s_t)&=&\exp(-s_t^\intercal U_ts_t)\frac{1}{D_{t+1}}\text{max}_{a_t}\left\{\exp(-a_t^\intercal W_ta_t)\int d\vec{s}_{t+1}\sqrt{|\Sigma^{-1}_t/2\pi|}\right.\nonumber\\
&&\left.\exp\left[-\frac{1}{2}(s_{t+1}-A_ts_t-B_ta_t)^\intercal\Sigma_t^{-1}(s_{t+1}-A_ts_t-B_ta_t)\right]\exp(s^\intercal_{t+1}\Phi_{t+1}s_{t+1})\right\}\nonumber\\
&=&\frac{1}{D_{t+1}}\sqrt{\frac{|\Sigma_t^{-1}|}{|\Sigma_t^{-1}-2\Phi_{t+1}|}}\exp(-s_t^\intercal U_ts_t)\nonumber\\
&&\text{max}_{a_t}\left\{\exp\left[-a^\intercal_tW_ta_t+(A_ts_t+B_ta_t)^\intercal\Omega_{t+1}(A_ts_t+B_ta_t)\right]\right\},
\end{eqnarray}
where
\begin{eqnarray}
\Omega_{t+1}&\equiv&\Sigma_t^{-1}\left(\Sigma_t^{-1}-2\Phi_{t+1}\right)^{-1}\Phi_{t+1}.
\end{eqnarray}
Therefore, we obtain one of the main results as:
\begin{eqnarray}
\pi^*_{t,\cdot}(s_t)&=&\text{argmax}_{a_t}\left\{\exp\left[-a^\intercal_tW_ta_t+(A_ts_t+B_ta_t)^\intercal\Omega_{t+1}(A_ts_t+B_ta_t)\right]\right\}\nonumber\\
&=&\left[(W_t-B^\intercal_t\Omega_{t+1}B_t)^{-1}B_t\Omega_{t+1}A_t\right]s_t,
\end{eqnarray}
which is put into Eq.~(\ref{value_2}) to derive that
\begin{eqnarray}
\label{value_3}
V^*_t(s_t)&=&\frac{1}{D_{t+1}}\sqrt{\frac{|\Sigma_t^{-1}|}{|\Sigma_t^{-1}-2\Phi_{t+1}|}}\exp\left[-s_t^\intercal (U_t-A^\intercal_t\Omega_{t+1}A_t)s_t\right]\nonumber\\
&&\exp\left[s^\intercal_tA_t^\intercal\Omega_{t+1}B_t(W_t-B_t^\intercal\Omega_{t+1}B_t)^{-1}B_t^\intercal\Omega_{t+1}A_ts_t\right]\nonumber\\
&\equiv&\frac{1}{D_{t}}\exp\left(s^\intercal_t\Phi_ts_t\right),
\end{eqnarray}
with
\begin{eqnarray}
\frac{1}{D_t}\equiv \frac{1}{D_{t+1}}\sqrt{\frac{|\Sigma_t^{-1}|}{|\Sigma_t^{-1}-2\Phi_{t+1}|}}
\end{eqnarray}
and the following updating rule:
\begin{eqnarray}
\Phi_t\equiv A^\intercal_t\left[\Omega_{t+1}+\Omega_{t+1}B_t(W_t-B_t^\intercal\Omega_{t+1}B_t)^{-1}B_t\Omega_{t+1}\right]A_t-U_t.
\end{eqnarray}
Indeed, we have proven the induction step: $V_t^*(s_t)$ takes precisely the exponentiated quadratic form with $\Phi_t$ and $D_t$ independent of $s_t$.
From Eq.~(\ref{ind_0}),
we obtain the backward initialization as
\begin{eqnarray}
\left\{\begin{array}{l}\Phi_{T}=-U_T;\\
\frac{1}{D_T}=1,\end{array}\right.
\end{eqnarray}
in addition to $\pi^*_T=0$ from the last line of Eq.~(\ref{ind_0}).
Of course, we can also embed $\pi^*_T=0$ into the updating rules by artificially extending the horizon to $T+1$ via
\begin{eqnarray}
\label{compact_ini}
\left\{\begin{array}{l}\Phi_{T+1}=0;\\
\frac{1}{D_{T+1}}=1.\end{array}\right.
\end{eqnarray}
In a short summary, with the initialization (\ref{compact_ini}),
\begin{eqnarray}
\label{policy_m}
\pi^*_{t,\cdot}(s_t)&=&\left[(W_t-B^\intercal_t\Omega_{t+1}B_t)^{-1}B_t\Omega_{t+1}A_t\right]s_t, \\
\label{omega_m}
\Omega_{t+1}&\equiv&\Sigma_t^{-1}\left(\Sigma_t^{-1}-2\Phi_{t+1}\right)^{-1}\Phi_{t+1};
\end{eqnarray}
where $\Omega_{t+1}$ is determined by the backward update of $\Phi_{t+1}$:
\begin{eqnarray}
\label{update_m}
\Phi_t= A^\intercal_t\left[\Omega_{t+1}+\Omega_{t+1}B_t(W_t-B_t^\intercal\Omega_{t+1}B_t)^{-1}B_t\Omega_{t+1}\right]A_t-U_t.
\end{eqnarray}
\subsection{The deterministic case: $\Sigma_k\rightarrow0$}
\label{noise_free}
Note that our exponentiated reward (\ref{reward_m}) is related to the additive one in Eq.~(\ref{LQR}) through the logarithm (\ref{log}).
Since Jensen's inequality becomes an equality in the deterministic case, a consistency check is that the optimal policy (\ref{policy_m}) should reduce to the noise-independent policy (\ref{aLQR}) in the noise-free limit $\{\Sigma_k\rightarrow 0\}$,
\begin{eqnarray}
\lim_{\{\Sigma_k\rightarrow0\}}\pi_{t,\cdot}^*=\lim_{\{\Sigma_k\rightarrow0\}}\pi_{t,+\text{LQR}}^*=\pi_{t,+\text{LQR}}^*,
\end{eqnarray}
which is indeed the case since $\Omega_{t+1}\rightarrow\Phi_{t+1}$ in Eq.~(\ref{omega_m}) in such a limit and the updating rule~(\ref{update_m}) becomes the standard Riccati equation~(\ref{riccati}).
We will see that this reduction to the additive-reward policy in a special noise limit reflects a general principle: the multiplicative reward scheme is more general than the additive one, independently of the model.
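This consistency check can also be verified numerically. The self-contained sketch below (our own scalar illustration; the parameters $A=1$, $B=0.5$, $U=W=1$, horizon $T=5$ are hypothetical) runs both backward recursions and confirms that the multiplicative gains approach the additive LQR gains as $\Sigma\rightarrow 0$, while a finite $\Sigma$ shifts them, exhibiting the noise dependence of the policy.

```python
def gains_additive(A, B, U, W, T):
    # Scalar Riccati recursion for the additive LQR; Phi_{T+1} = 0.
    Phi, out = 0.0, []
    for _ in range(T):
        out.append(B * Phi * A / (W - B * Phi * B))
        Phi = A * (Phi + Phi * B * B * Phi / (W - B * Phi * B)) * A - U
    return out[::-1]

def gains_multiplicative(A, B, U, W, Sigma, T):
    # Scalar version of Eqs. (policy_m)-(update_m); Omega = Phi / (1 - 2*Sigma*Phi).
    Phi, out = 0.0, []
    for _ in range(T):
        Omega = Phi / (1.0 - 2.0 * Sigma * Phi)
        out.append(B * Omega * A / (W - B * Omega * B))
        Phi = A * (Omega + Omega * B * B * Omega / (W - B * Omega * B)) * A - U
    return out[::-1]

g_add   = gains_additive(1.0, 0.5, 1.0, 1.0, 5)
g_small = gains_multiplicative(1.0, 0.5, 1.0, 1.0, 1e-9, 5)   # Sigma -> 0
g_noisy = gains_multiplicative(1.0, 0.5, 1.0, 1.0, 0.5, 5)    # finite noise

assert max(abs(a - b) for a, b in zip(g_add, g_small)) < 1e-6  # limit recovers LQR
assert abs(g_add[0] - g_noisy[0]) > 1e-3                       # noise-dependent policy
```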
\section{Multiplicative scheme as a general framework}
\label{sec_mul}
In the discussion above,
we proposed a multiplicative way of accumulating rewards as an alternative to the additive one.
At first sight,
whether the multiplicative or the additive scheme works better in practice seems to depend strongly on the real-world system.
However,
we will prove that the multiplicative approach is a general framework, i.e., any optimal policy obtained from an additive reward function can be approximated to arbitrary precision by the policy obtained from a multiplicative reward.
\subsection{Scaling invariance}
To address the issue above,
let us observe the optimal policy in Eqs.~(\ref{policy_m},\ref{omega_m},\ref{update_m}) rewritten below:
\begin{eqnarray}
\label{policy_m_1}
\pi^*_{t,\cdot}(s_t)[\{W_t,U_t,\Sigma_t\}]&=&\left[(W_t-B^\intercal_t\Omega_{t+1}B_t)^{-1}B_t\Omega_{t+1}A_t\right]s_t, \\
\Omega_{t+1}&\equiv&\Sigma_t^{-1}\left(\Sigma_t^{-1}-2\Phi_{t+1}\right)^{-1}\Phi_{t+1};
\end{eqnarray}
where $\Omega_{t+1}$ is determined by the backward update of $\Phi_{t+1}$:
\begin{eqnarray}
\Phi_t= A^\intercal_t\left[\Omega_{t+1}+\Omega_{t+1}B_t(W_t-B_t^\intercal\Omega_{t+1}B_t)^{-1}B_t\Omega_{t+1}\right]A_t-U_t.
\end{eqnarray}
We have made explicit the parameter dependence on $\{W_t\}$, $\{U_t\}$, and $\{\Sigma_t\}$ in Eq.~(\ref{policy_m_1}).
It is straightforward to prove the following scaling invariance:
\begin{eqnarray}
\pi^*_{t,\cdot}(s_t)[\{W_t,U_t,\Sigma_t\}]&=&\pi^*_{t,\cdot}(s_t)[\{\kappa W_t,\kappa U_t,\kappa^{-1}\Sigma_t\}],
\end{eqnarray}
which can also be rearranged into the following form:
\begin{eqnarray}
\label{scaling}
\pi^*_{t,\cdot}(s_t)[\{W_t,U_t,\kappa\Sigma_t\}]&=&\pi^*_{t,\cdot}(s_t)[\{\kappa W_t,\kappa U_t,\Sigma_t\}].
\end{eqnarray}
On the other hand,
the noise-free limit in Sec.~\ref{noise_free} implies that
\begin{eqnarray}
\lim_{\kappa\rightarrow0^+}\pi^*_{t,\cdot}(s_t)[\{W_t,U_t,\kappa\Sigma_t\}]&=&\pi^*_{t,+}(s_t)[\{W_t,U_t,\Sigma_t\}],
\end{eqnarray}
which means, by the scaling invariance~(\ref{scaling}), that
\begin{eqnarray}
\lim_{\kappa\rightarrow0^+}\pi^*_{t,\cdot}(s_t)[\{\kappa W_t,\kappa U_t\}]&=&\pi^*_{t,+}(s_t)[\{W_t,U_t\}],
\end{eqnarray}
where we have suppressed the (identical) noise dependence on both sides.
Therefore,
the scaling invariance~(\ref{scaling}) ensures that the optimal policy under the additive reward~(\ref{LQR}) can be approached by including a sufficiently small scaling coefficient $\kappa$ in the multiplicative reward~(\ref{reward_m}).
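The scaling invariance~(\ref{scaling}) can be checked numerically as well. The following self-contained sketch (our own scalar illustration; the parameter values and $\kappa=0.3$ are hypothetical) verifies that the gains under $\{W_t,U_t,\kappa\Sigma_t\}$ coincide with those under $\{\kappa W_t,\kappa U_t,\Sigma_t\}$.

```python
def gains_multiplicative(A, B, U, W, Sigma, T):
    # Scalar version of the multiplicative updates; Phi_{T+1} = 0.
    Phi, out = 0.0, []
    for _ in range(T):
        Omega = Phi / (1.0 - 2.0 * Sigma * Phi)
        out.append(B * Omega * A / (W - B * Omega * B))
        Phi = A * (Omega + Omega * B * B * Omega / (W - B * Omega * B)) * A - U
    return out[::-1]

A, B, U, W, Sigma, T, kappa = 1.0, 0.5, 1.0, 1.0, 0.7, 6, 0.3
lhs = gains_multiplicative(A, B, U, W, kappa * Sigma, T)          # {W, U, kappa*Sigma}
rhs = gains_multiplicative(A, B, kappa * U, kappa * W, Sigma, T)  # {kappa*W, kappa*U, Sigma}

# Scaling invariance (scaling): the two gain sequences agree (up to round-off).
assert max(abs(a - b) for a, b in zip(lhs, rhs)) < 1e-10
```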
Furthermore,
this phenomenon is neither reward-function dependent nor even model dependent.
For any additive upper-bounded reward function $R^t_+$ at the time slice $t$ with its accumulation $\mathcal{R}^t_+\equiv\sum_{k\geq t}R^k_+$,
we can define the following multiplicative reward with its accumulation:
\begin{eqnarray}
R^t_\cdot\equiv\exp\left(\kappa R^t_+\right)\text{ with }\mathcal{R}^t_\cdot\equiv\prod_{k\geq t}R^k_\cdot,
\end{eqnarray}
whose optimal policy denoted by $\pi^*_{t,\cdot}$ can be reduced to the optimal policy derived by $\{R^t_+\}$ in the limit:
\begin{eqnarray}
\lim_{\kappa\rightarrow0^+}\pi^*_{t,\cdot}[\{R^t_\cdot\}]=\pi^*_{t,+}[\{R^t_+\}].
\end{eqnarray}
This follows from the Taylor expansion
\begin{eqnarray}
R^t_\cdot=1+\kappa R^t_++O(\kappa^2),
\end{eqnarray}
where the constant term does not contribute to the optimal policy and the higher order term $O(\kappa^2)$ is diminished by the limit $\kappa\rightarrow0^+$ above, leaving the dominant $\kappa$-linear term.
This property is model-independent although, in the example before, we have used a model-dependent scaling invariance~(\ref{scaling}) to manifest it.
However,
the converse is generally not true:
given a multiplicative reward,
its optimal policy cannot, in general, be approximated to arbitrary precision
by any additive reward.
Our exactly solvable case in this work shows that the multiplicative reward, viewed from the additive perspective, exhibits long-range correlations: a Taylor expansion of the reward function
$\mathcal{R}_\cdot^t$ in Eq.~(\ref{reward_m}) contains cross terms such as $s_ts_{t+k}$ for arbitrarily large $k<T-t$.
This non-perturbative nature cannot be captured by any additive scheme.
This means precisely that the multiplicative reward is a more general framework than the additive one, owing to the additional free parameter $\kappa$ that adjusts the weight between the reward and other model factors, e.g., the noise.
In short,
if the real system indeed favors the additive reward for producing a better policy,
we can still use the multiplicative reward by tuning $\kappa$ to a small value during the series of experiments and tests.
Conversely,
if the multiplicative way is preferable for the real system,
the additive accumulation of rewards generically cannot give a satisfactory optimal policy.
\section{Conclusion}
\label{sec_con}
In this work, we proposed a new multiplicative way of accumulating rewards and developed a rigorous exact solution for a linear transition model.
In contrast to the conventional additive reward case,
our optimal policy is explicitly dependent on the noise of linear transition models.
Furthermore,
we showed that the multiplicative scheme is a general framework that recovers the additive one to arbitrary precision.
We expect this extension of reward accumulation to find wide application in real systems.
\acknowledgments
The authors are grateful to Professor Koji Tsuda for helpful advice on the manuscript.
Y.~Y. was supported by JSPS fellowship and X.~S. was supported by the China Scholarship Council.
This work was supported in part by MEXT/JSPS KAKENHI Grant No. JP19J13783 (Y.~Y.) and CSC No. 201809120018 (X.~S.).
\section{Introduction}
Recently, a new greedy algorithm has been proposed for obtaining a good $n$-dimensional subspace $V_n$ with which to approximate the elements of a compact set $\mathcal{K}$ in a Banach space $X$. This greedy algorithm was studied initially when $X$ is a Hilbert space, in the context of reduced basis methods for solving families of PDEs; see \cite{May1,May2}. Later, it was studied extensively beyond the setting of Hilbert spaces; let us mention, for instance, Binev et al. \cite{Biet}, Buffa et al. \cite{Buet}, DeVore et al. \cite{DePeVo}, and Wojtaszczyk \cite{Woj}. The greedy algorithm for generating the subspace $V_n$ to approximate elements of $\mathcal{K}$ is implemented as follows. We first select $f_0$ such that
\begin{eqnarray*}
\|f_0\|_X =\max_{f\in \mathcal{K}}\|f\|_X \,.
\end{eqnarray*}
Since $\mathcal{K}$ is compact, such an $f_0$ always exists. At the general step, assuming that $\{f_0,\ldots,f_{n-1} \}$ and $V_{n}=\linspan\{f_0,\ldots,f_{n-1} \}$ have been chosen, we take $f_n$ such that
\begin{eqnarray*}
\dist (f_n,V_{n})_X=\max_{f\in \mathcal{K}}\dist(f,V_{n})_X\,.
\end{eqnarray*}
The error in approximating the elements of $\mathcal{K}$ by $V_n$ is defined as
\begin{eqnarray*}
\sigma_0(\mathcal{K})_X:= \|f_0\|_X\,,\qquad\ \sigma_n(\mathcal{K})_X:=\dist (f_n,V_{n})_X=\max_{f\in \mathcal{K}}\dist(f,V_{n})_X\,
\end{eqnarray*}
for $n\geq 1$. The sequence $\sigma_n(\mathcal{K})_X$ is non-increasing. It is important to note that the sequence $\{f_n\}_{n\geq 0}$ and also $\sigma_n(\mathcal{K})_X$ are not unique.
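For intuition, the greedy selection is easy to implement when $X$ is a finite-dimensional Euclidean space and $\mathcal{K}$ is replaced by a finite sample of points. The sketch below (our own illustration, not taken from the works cited) returns the chosen elements $f_0,f_1,\ldots$ and the errors $\sigma_n$, maintaining an orthonormal basis of $V_n$ by Gram-Schmidt.

```python
import numpy as np

def greedy_widths(K):
    """Greedy selection over a finite sample K (rows = elements of the set)
    in the Euclidean norm. Returns the chosen row indices and the errors
    sigma_n = max_f dist(f, V_n), with sigma_0 = max_f ||f||."""
    K = np.asarray(K, dtype=float)
    indices, sigmas = [], []
    residual = K.copy()              # components of each row orthogonal to V_n
    for _ in range(min(K.shape)):
        dists = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(dists))
        indices.append(i)
        sigmas.append(float(dists[i]))
        if dists[i] < 1e-12:         # the sample is already contained in V_n
            break
        q = residual[i] / dists[i]   # next orthonormal basis vector of V_{n+1}
        residual -= np.outer(residual @ q, q)
    return indices, sigmas

idx, sig = greedy_widths(np.diag([3.0, 2.0, 1.0]))
# idx = [0, 1, 2] and sig = [3.0, 2.0, 1.0]: sigma_n is non-increasing.
```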
Let us mention that the best possible error one can achieve to approximate the elements of $\mathcal{K}$ by $n$-dimensional subspaces is the Kolmogorov width $d_n(\mathcal{K})_X$, which is given by
\begin{eqnarray*}
d_n(\mathcal{K})_X:= \inf_{L}\sup_{f\in \mathcal{K}}\dist(f,L)_X\,, \qquad n\geq 1,
\end{eqnarray*}
where the infimum is taken over all $n$-dimensional subspaces $L$ of $X$. We also put $$d_0(\mathcal{K})_X=\max_{f\in \mathcal{K}}\|f\|_X.$$ We would like to emphasize that in practice, finding subspaces which give this performance is out of reach.
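One case where the Kolmogorov widths are computable is worth keeping in mind: when $X$ is a Hilbert space and $\mathcal{K}=T(B_H)$ is an ellipsoid, $d_n(\mathcal{K})_X$ equals the $(n+1)$-st singular value of $T$ (a classical fact; cf. \cite{Pin}). A quick numerical sketch of this, with a random matrix standing in for $T$ (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5))   # a stand-in for the compact operator T

# For Hilbert spaces, d_n(T(B_H)) = s_{n+1}(T): the singular values, in
# decreasing order, are exactly the widths d_0 >= d_1 >= ...
widths = np.linalg.svd(T, compute_uv=False)

assert all(widths[:-1] >= widths[1:])                  # non-increasing
assert np.isclose(widths[0], np.linalg.norm(T, 2))     # d_0 = ||T||
```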
We are interested in how well the subspaces created by the greedy algorithm approximate the elements of $\mathcal{K}$. For this purpose it is natural to compare $\sigma_n(\mathcal{K})_X$ with the Kolmogorov width $d_n(\mathcal{K})_{X}$. Various comparisons between $\sigma_n(\mathcal{K})_X$ and $d_n(\mathcal{K})_X$ have been made. The first attempt in this direction was given in \cite{Buet} and improved in \cite{Biet}, where the authors considered the case when $X$ is a Hilbert space $H$. Under this assumption, it has been shown that
\begin{eqnarray*}
\sigma_n(\mathcal{K})_H \leq C2^nd_n(\mathcal{K})_H
\end{eqnarray*}
for an absolute constant $C$. Observe that this result is useful only when $d_n(\mathcal{K})_H$ decays faster than $2^{-n}$. A significant improvement of the above result was given in \cite{DePeVo} where the authors prove that if the Kolmogorov width has polynomial decay with rate $n^{-s}$, then the greedy algorithm also yields the same rate, i.e., $
\sigma_n(\mathcal{K})_H\leq Cn^{-s}\,.
$
In the same paper, the estimate of this type for Banach spaces $X$ was also considered, but there is an additional factor $n^{\frac{1}{2}+\varepsilon}$ (for any $\varepsilon>0$), that is,
\begin{equation}\label{in-0}
\sigma_n(\mathcal{K})_X\leq C n^{-s+\frac{1}{2}+\varepsilon},
\end{equation}
where $C$ depends on $s$ and $\varepsilon$.
For a recent result in this direction we refer to \cite{Woj}. Let $\tilde{\gamma}_n(X)$ be the supremum of the Banach--Mazur distances $d(V_n,\ell_2^n)$, where $V_n$ ranges over the $n$-dimensional subspaces of quotient spaces of $X$. If $d_n(\mathcal{K})_X\leq C_0n^{-s}$ and $\tilde{\gamma}_n(X)\leq C_1n^{\mu}$, then Wojtaszczyk \cite{Woj} showed that there is a constant $C$ such that
\begin{equation} \label{in-1}
\sigma_n(\mathcal{K})_X\leq C \bigg(\frac{\log (n+2)}{n}\bigg)^{s} n^{\mu}\,.
\end{equation}
Observe that the estimate given in \eqref{in-1} improves the result \eqref{in-0} since $\tilde{\gamma}_n(X)\leq \sqrt{n}$. It has been shown in \cite{Woj} that the above estimate is optimal in $L_p$ up to a logarithmic factor. However, for a given Banach space $X$, the factor $\tilde{\gamma}_n(X)$ is not easy to compute. Hence, this raises the question of whether we can replace the condition on $\tilde{\gamma}_n(X)$ by one on $ \gamma_n(X)=\sup_{V_n} d(V_n,\ell_2^n)$, where the supremum is taken over $n$-dimensional subspaces $V_n$ of $X$; see Section \ref{sec-2} for the definition.
In the present paper we give a new analysis of the performance of the greedy algorithm, in which we show that the assumption on $\tilde{\gamma}_n(X)$ can be relaxed to one on $\gamma_n(X)$. In addition, the power of the logarithm in \eqref{in-1} can be improved when $s>1/2$. More precisely, we shall prove that there is a constant $C>0$ such that
\begin{eqnarray*}
\sigma_n(\mathcal{K})_X\leq C\sqrt{\log (2n)}\, n^{-s+\mu}\,
\end{eqnarray*}
if $d_n(\mathcal{K})_X\leq C_0n^{-s}$ and $\gamma_n(X)\leq C_1n^{\mu}$\,.
Often, the compact set of interest $\mathcal{K}$ is the image (or subset) of the closed unit ball $B_E$ of a Banach space $E$ under a compact operator $T\in \mathcal{L}(E,X)$. For this reason, we shall compare $\sigma_n(\mathcal{K})_X$ with the Kolmogorov widths $d_n(T(B_E))_X$. In this study, we obtain the estimate
\begin{equation}\label{in-3}
\sigma_{3n-1}(\mathcal{K})_X \leq 3e^2\, \Gamma(E)\Gamma_n(X) \bigg(\prod_{k=0}^{n-1} d_{k}(T(B_E))_X \bigg)^{1/n} ,\qquad n\geq 1,
\end{equation}
where $\Gamma_n(X)$ is the $n$-th Grothendieck number of $X$, which is closely related to $\gamma_n(X)$; see Section \ref{sec-2}. Note that if $E$ is a Hilbert space then $\Gamma(E)=1$. In Section \ref{sec-3} we will give an example showing that the estimate \eqref{in-3} is sharp in some situations.
The rest of our paper is organized as follows. In the next Section \ref{sec-2} we will collect some required tools. The main results are stated and proved in Section \ref{sec-3}.
\section{Some preparations}\label{sec-2}
In this section we collect some tools needed to formulate our results in the next section. The Banach--Mazur distance between two isomorphic Banach spaces $X$ and $Y$ is
defined by
\begin{eqnarray*}
d(X,Y)=\inf\big\{\|T\|\cdot \|T^{-1}\|: \ T: X\to Y \text{ is an isomorphism}\big\} \,.
\end{eqnarray*}
For a Banach space $X$ we introduce a sequence of numbers
\begin{eqnarray*}
\gamma_n(X)=\sup\big\{d(V,\ell_2^n)\,,\ V \ \text{is an } n \text{-dimensional subspace in }X \big\}\,.
\end{eqnarray*}
The sequence $\gamma_n(X)$ is non-decreasing and $\gamma_1(X)=1$. It is obvious that if $X$ is a Hilbert space then we have $\gamma_n(X)=1$, $n=1,2,3,\ldots$. In the case of an arbitrary Banach space $X$, it is known that $\gamma_n(X)\leq n^{1/2}$ and $\gamma_n(L_p)\leq n^{|\frac{1}{2}-\frac{1}{p}|}$ for $1\leq p\leq \infty$.
Let $X$ and $Y$ be isomorphic finite-dimensional Banach spaces. Then there exists an operator $T: X\to Y$ such that $d(X,Y)=\|T\|\cdot \|T^{-1}\|$. We can additionally assume that $\|T^{-1}\|=1$. Hence the new norm on $X$ defined by $\|x\|_e:=\|Tx\|_Y$ satisfies
\begin{equation} \label{dist}
\|x\|_X \leq \|x\|_e\leq d(X,Y)\|x\|_X\,.
\end{equation}
Moreover $T$ is an isometry between $(X,\|\cdot\|_e)$ and $Y$\,.
The quantity $\gamma_n(X)$ is closely related to the so-called Grothendieck numbers. Let $T\in \mathcal{L}(X,Y)$ be a bounded linear operator. The $n$-th Grothendieck number of $T$ is defined as
\begin{eqnarray*}
\Gamma_n(T):=\sup \Big\{ \big|\det\big( \langle Tx_i,b_j\rangle\big)\big|^{1/n},\ x_1,\ldots,x_n\in B_X,\ b_1,\ldots, b_n \in B_{Y'} \Big\}\,.
\end{eqnarray*}
If $T$ is the identity map of $X$ then we write $\Gamma_n(X)$. Let $0\leq \delta \leq 1/2$. A Banach space $X$ is said to be of weak Hilbert type $\delta$ if there exists a constant $C\geq 1$ such that $\Gamma_n(X)\leq C\, n^\delta$ for $n\geq 1.$ We denote the class of these spaces by $\Gamma_\delta$. Note that $
\Gamma_{1/2}$ contains all Banach spaces, i.e.,
\begin{eqnarray*}
\Gamma_n(X) \leq c n^{1/2},\qquad \text{for all } X\,.
\end{eqnarray*}
In particular we have
$\Gamma_n(L_p)\leq n^{|1/p-1/2|}$ for $1\leq p\leq \infty$. The relation between $\gamma_n(X)$ and the Grothendieck numbers is given in the following lemma; see, e.g., \cite{Pie91}.
\begin{lemma} Let $X$ be a Banach space. Then
$$\gamma_n(X)\leq C_1n^{\delta}\qquad \text{ if and only if }\qquad \Gamma_n(X)\leq C_2n^{\delta}$$ for some $C_1, C_2\geq 1$.
\end{lemma}
For later use, let us introduce the notion of Kolmogorov and Gelfand widths of linear continuous operators. In the following we use the definition given in \cite[Chapter 2]{Pin}, but see also \cite[Chapter 11]{Pie}. Note that there is a shift of 1 between definitions by Pinkus \cite[Chapter 2]{Pin} and Pietsch \cite[Chapter 11]{Pie}. Let $B_X$ be the closed unit ball of $X$. The Kolmogorov $n$-width of the operator $T \in \mathcal L(X,Y)$ is defined as
\begin{eqnarray*}
d_n(T):=d_n(T(B_X))_Y= \inf_{L_n}\sup_{\|x\|_X\leq 1}\inf_{y\in L_n}\|Tx-y\|_Y,
\end{eqnarray*}
where the infimum is taken over all subspaces $L_n$ of dimension $n$ in $Y$. The $n$-th Gelfand width of $T \in \mathcal L(X,Y)$ is given by
$$ d^n(T) := d^n(T(B_X))_Y:= \inf_{L^n}\sup_{\|x\|_X\leq 1\,,x\in L^n}\|Tx\|_Y \,,$$
where the infimum is taken over subspaces $L^n$ of $X$ of co-dimension at most $n$. We also put
$
d_0(T)=d^0(T)=\|T\|
$. Note that Kolmogorov and Gelfand widths are closely related, i.e.,
$d^n(T)=d_n(T')$ for every $T\in \mathcal{L}(X,Y)$ and $d_n(T)=d^n(T')$ if $T$ is compact or $Y$ is a reflexive Banach space, see \cite[Chapter 2]{Pin}. Here recall that $T'$ is the dual operator of $T$. For basic properties of these quantities we refer to monographs \cite[Chapters 2]{Pin} and \cite[Chapter 11]{Pie}\,.
The relation between Grothendieck number and Kolmogorov, Gelfand widths is given in the following lemma. For a proof we refer to \cite{Pie91}.
\begin{lemma}\label{lem1} Let $X$ and $Y$ be Banach spaces and $T\in \mathcal{L}(X,Y)$. Then it holds
\begin{eqnarray*}
\bigg(\prod_{k=0}^{n-1} d_k(T)\bigg)^{1/n}\leq \Gamma_{n}(T)\qquad \text{and} \qquad \bigg(\prod_{k=0}^{n-1} d^k(T)\bigg)^{1/n}\leq \Gamma_{n}(T)
\end{eqnarray*}
for all $n\geq 1$.
\end{lemma}
An operator $T\in \mathcal{L}(X,Y)$ is called absolutely $2$-summing if there exists a constant $C$ such that
\begin{equation}\label{B2}
\bigg(\sum_{i=1}^n \Vert Tx_i \Vert^2 \bigg)^{1/2}\leq C\, \sup\bigg\lbrace \bigg( \sum_{i=1}^n |\langle x_i,b\rangle|^2\bigg)^{1/2} : \ b\in X',\ \Vert b \Vert_{X'}\leq 1\bigg\rbrace.
\end{equation}
The set of these operators is denoted by $\mathcal{B}_{2}(X,Y) $ and the norm $ \|T|\mathcal{B}_2\| $ is given by the infimum of all $C > 0$ satisfying \eqref{B2}. The following assertion can be found in \cite{Pie91}.
\begin{lemma}\label{lem2} Let $X$ and $Y$ be Banach spaces. Let $T\in \mathcal{B}_2(X,Y)$. Then we have
\begin{eqnarray*}
\Gamma_n(T) \leq en^{-1/2} \|T|\mathcal{B}_2\| \, \Gamma_n(X)\,,\qquad n\geq 1.
\end{eqnarray*}
\end{lemma}
\section{Main results}\label{sec-3}
Our first result can be formulated as follows.
\begin{theorem}\label{main-1} Let $X$ be a Banach space and $\mathcal{K}$ a compact subset of $X$. Assume that
$$ d_n(\mathcal{K})_X\leq C_0\max(1,n)^{-s}, \ \ (n\geq 0)\qquad\text{and}\qquad \gamma_n(X)\leq C_1 n^{\mu},\ \ (n\geq 1)\,$$
for $0\leq \mu\leq \frac{1}{2}$ and $s> \mu$. Then we have
\begin{equation} \label{k-10}
\sigma_n(\mathcal{K})_X\leq C_0 C_1 2^\mu 16^s\sqrt{\log (2n)}\, n^{-s+\mu}\, \qquad \text{for}\ \ n\geq 2\,.
\end{equation}
\end{theorem}
\begin{proof}
The idea of the proof follows from the proof of Proposition 2.2 in \cite{Woj}.\\
{\it Step 1.} Let $\varepsilon>0$. From the assumption
$d_n(\mathcal{K})_X\leq C_0n^{-s}$, $n\geq 1$,
we infer the existence of a sequence of subspaces $(T_k)_{k\geq 0}$ in $X$ with $\dim (T_k)=2^{k}$ such that
\begin{eqnarray*}
\max_{x\in \mathcal{K}}\min_{g\in T_k}\|x-g\|_X\leq (C_0+\varepsilon)2^{-sk}\,.
\end{eqnarray*}
For $n\in \mathbb{N}$ fixed we put $V_k=T_0+T_1+\ldots+T_{k-1}$ for $k=1,\ldots,n$. Then we have $V_{k}\subset V_{k+1}$ and $\dim(V_k)< 2^{k}$. Observe that
\begin{equation} \label{kol}
\max_{x\in \mathcal{K}}\min_{g\in V_k}\|x-g\|_X\leq \max_{x\in \mathcal{K}}\min_{g\in T_{k-1}}\|x-g\|_X\leq (C_0+\varepsilon) 2^{-s(k-1)}\,.
\end{equation}
We denote $N=2^{n}$. Implementing the greedy algorithm for the set $\mathcal{K}$ we get the sequence $\{f_0,\ldots,f_{N-1}\}$. Then it follows from \eqref{kol} that
\begin{equation} \label{k-01}
\|f_\ell-g_\ell^k\|_X \leq (C_0+\varepsilon) 2^{-s(k-1)}\,, \qquad \ell=0,\ldots, N-1\,;\ k=1,\ldots,n
\end{equation}
for some $g_\ell^k\in V_k$. Let $F=\linspan\{f_0,\ldots,f_{N-1} \}$ and $Y=\linspan\{V_n,F \} $. It is obvious that $2^n\leq \dim(Y)< 2^{n+1}$. From \eqref{dist} we infer the existence of a Euclidean norm $\|\cdot\|_e$ on $Y$ satisfying
\begin{equation} \label{k-02}
\|y\|_X \leq \|y\|_e\leq d\big(Y,\ell_2^{\dim(Y)}\big)\|y\|_X\leq \gamma_{\dim(Y)}(X)\|y\|_X \leq A \|y\|_X,
\end{equation}
where we put $A= \gamma_{2^{n+1}}(X)$.
Let $Q$ be the orthogonal projection from $Y$ onto $\linspan\{f_0,\ldots,f_{N-1} \}$ in the Euclidean norm $\|\cdot\|_e$. We denote $\dim(Q(V_k))=h_k$ for $k=1,\ldots,n$. It is clear that $h_k\leq \dim(V_k)< 2^k$ and $ Q(V_{k-1})\subset Q(V_k)$.\\
From \eqref{k-01} and \eqref{k-02} we get
\begin{equation} \label{k-03}
\begin{split}
\dist(f_\ell,Q(V_k))_{\|\cdot\|_e}&\leq \|f_\ell-Q(g_\ell^k)\|_e \\
& = \|Q(f_\ell - g_\ell^k)\|_e \leq \|f_\ell-g_\ell^k\|_e\leq (C_0 +\varepsilon) A {2^{-s(k-1)}}\,.
\end{split}
\end{equation}
By $\{\phi_j\}_{j=0,\ldots,N-1}$ we denote the orthonormal system obtained from $f_0,\ldots,f_{N-1}$ by Gram-Schmidt orthogonalization in the norm $\|\cdot\|_e$. It follows that the matrix $[\phi_j(f_\ell)]_{j,\ell=0}^{N-1}$ has a triangular form. In particular, on the diagonal we have
\begin{equation} \label{d}
\dist\big(f_\ell,\linspan\{ f_0,\ldots,f_{\ell-1} \}\big)_{\|\cdot\|_e} \geq \dist \big(f_\ell,\linspan\{ f_0,\ldots,f_{\ell-1} \}\big)_{X} =\sigma_\ell(\mathcal{K})_X\,.
\end{equation}
{\it Step 2.} We consider the case
\begin{eqnarray*}
0<h_{m_1}=\ldots=h_{m_{2}-1}<h_{m_2}=\ldots=h_{m_3-1}<\ldots<h_{m_{L}}=\ldots=h_n
\end{eqnarray*}
where $m_1=1$, $m_{L+1}=n+1$. We denote $\{x_j\}_{j=0,\ldots,N-1}$ another orthonormal basis in $X$, such that
\begin{eqnarray*} Q(V_{m_{i-1}})=\ldots=Q(V_{m_{i}-1})=\linspan\{ x_{0},\ldots, x_{h_{m_{i-1}}-1}\}\,
\end{eqnarray*}
for $i=2,\ldots,L$. Considering the vector $[x_j(f_\ell)]_{j=0}^{N-1}$ we observe that
\begin{equation} \label{k-04}
\sum_{j=h_{m_{L}}}^{N-1}|x_j(f_\ell)|^2 =\dist(f_\ell;Q(V_n))_{\|\cdot \|_e}^2
\end{equation}
and
\begin{equation} \label{k-05}
|x_0(f_\ell)|^2\leq \|f_\ell\|_e^2\,,\qquad \quad \sum_{j=h_{m_{i-1}}}^{h_{m_{i}}-1}|x_j(f_\ell)|^2 \leq \dist(f_\ell;Q(V_{{m_{i}}-1}))_{\|\cdot \|_e}^2\,,
\end{equation}
for $i=2,\ldots, L$. Note that
\begin{eqnarray*}
\prod_{j=0}^{N-1}\sigma_j(\mathcal{K})_X\leq \prod_{j=0}^{N-1}|\phi_j(f_j)| =\big|\det[\phi_j(f_\ell)]\big|= \big| \det[x_j(f_\ell)] \big|\,,
\end{eqnarray*}
see \eqref{d}. By $k_j$ we denote the $j$-th column of the matrix $[x_j(f_\ell)]_{j,\ell=0}^{N-1}$. Applying Hadamard's inequality and then the arithmetic--geometric mean inequality we obtain
\begin{equation} \label{com}
\begin{split}
\bigg( \prod_{j=0}^{N-1}\sigma_j(\mathcal{K})_X \bigg)^2&\leq \big(\det[x_j(f_\ell)]\big)^2 \\
& \leq \bigg( \prod_{j=0}^{h_1-1}\|k_j\|_e^2 \bigg)\bigg( \prod_{j=h_{m_L}}^{N-1} \|k_j\|_e^2 \bigg)\bigg( \prod_{i=2}^{L} \prod_{j=h_{m_{i-1}}}^{h_{m_{i}}-1} \|k_j\|_e^2 \bigg) \\
&
\leq \bigg(\frac{1}{h_1}\sum_{j=0}^{h_1-1}\|k_j\|_e^2 \bigg)^{h_1}\bigg(\frac{1}{N-h_{m_L}} \sum_{j=h_{m_L}}^{N-1} \|k_j\|_e^2\bigg)^{N-h_{m_L}}\\
&\ \quad\times\ \ \prod_{i=2}^{L} \bigg(\frac{1}{h_{m_{i}}-h_{m_{i-1}}} \sum_{j=h_{m_{i-1}}}^{h_{m_{i}}-1} \|k_j\|_e^2\bigg)^{h_{m_{i}}-h_{m_{i-1}}} \,.
\end{split}
\end{equation}
From \eqref{k-03}, \eqref{k-04}, and \eqref{k-05} we have
\begin{eqnarray*}
\sum_{j=0}^{h_1-1}\|k_j\|_e^2 = \sum_{j=0}^{h_1-1}\sum_{\ell=0}^{N-1}|x_j(f_\ell)|^2 \leq \sum_{\ell=0}^{N-1} \|f_\ell\|_e^2 \leq N A^2 d_0(\mathcal{K})_X^2 \leq NA^2 (C_0+\varepsilon)^2
\end{eqnarray*}
(since $d_0(\mathcal{K})_X\leq C_0$, by our assumption), and for $i=2,\ldots,L$,
\begin{eqnarray*}
\begin{split}
\sum_{j=h_{m_{i-1}}}^{h_{m_{i}}-1} \|k_j\|_e^2 & = \sum_{j=h_{m_{i-1}}}^{h_{m_{i}}-1}\sum_{\ell=0}^{N-1} |x_j(f_\ell)|^2 \\
&\leq \sum_{\ell=0}^{N-1}\dist(f_\ell;Q(V_{m_{i}-1}))_{\|\cdot \|_e}^2 \leq N (C_0+\varepsilon)^2 A^2 {2^{-2s(m_{i}-2)}}\,.
\end{split}
\end{eqnarray*}
Similarly, we have
\begin{eqnarray*}
\sum_{j=h_{m_L}}^{N-1} \|k_j\|_e^2 \leq \sum_{\ell=0}^{N-1}\dist(f_\ell;Q(V_{n}))_{\|\cdot \|_e}^2 \leq N (C_0+\varepsilon)^2 A^2 {2^{-2s(n-1)}}\,.
\end{eqnarray*}
Inserting this into \eqref{com} we find
\begin{equation} \label{k-31}
\begin{split}
\bigg( \prod_{j=0}^{N-1}\sigma_j(\mathcal{K})_X \bigg)^2
& \leq \bigg(\frac{NA^2(C_0+\varepsilon)^2}{h_1}\bigg)^{h_1} \bigg(\frac{ N (C_0+\varepsilon)^2 A^2 {2^{-2s(n-1)}}}{N-h_{m_L}}\bigg)^{N-h_{m_L}} \\
&\qquad\times\quad\prod_{i=2}^{L} \bigg(\frac{N(C_0+\varepsilon)^2 A^2 {2^{-2s(m_{i}-2)}}}{h_{m_{i}}-h_{m_{i-1}}} \bigg)^{h_{m_{i}}-h_{m_{i-1}}}\\
& = M A^{2N} (C_0+\varepsilon)^{2N}\bigg(2^{-2s(n-1)(N-h_{m_L})}\prod_{i=2}^{L} 2^{-2s(m_{i}-2)(h_{m_{i}}-h_{m_{i-1}})}\bigg)\,,
\end{split}
\end{equation}
where we put
\begin{eqnarray*}
M= \bigg(\frac{N}{h_1}\bigg)^{h_1}\bigg(\frac{N}{N-h_{m_L}}\bigg)^{N-h_{m_L}}\prod_{i=2}^{L}\bigg(\frac{N}{h_{m_{i}}-h_{m_{i-1}}}\bigg)^{h_{m_{i}}-h_{m_{i-1}}}\,.
\end{eqnarray*}
For nonnegative numbers $a_1,\ldots,a_n$ and positive numbers $p_1,\ldots,p_n$ we have
\begin{eqnarray*}
a_1^{p_1}\cdots a_n^{p_n} \leq \bigg( \frac{a_1p_1+\ldots+a_np_n}{p_1+\ldots+p_n}\bigg)^{p_1+\ldots+p_n}\,,
\end{eqnarray*}
see, e.g., \cite[Page 17]{Hardyet}. Applying the above inequality to $M$ we get
\begin{equation}\label{k-32}
M\leq (L+1)^{N}\leq (n+1)^{N}\,.
\end{equation}
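Indeed, each factor of $M$ has the form $(N/p_j)^{p_j}$, with the exponents $p_j$ running over $h_1$, $N-h_{m_L}$, and $h_{m_i}-h_{m_{i-1}}$ for $i=2,\ldots,L$; there are $L+1$ factors, every product $(N/p_j)\cdot p_j$ equals $N$, and the exponents sum to $h_1+(N-h_{m_L})+\sum_{i=2}^{L}(h_{m_i}-h_{m_{i-1}})=N$, so the inequality gives
\begin{eqnarray*}
M\leq \bigg(\frac{(L+1)N}{N}\bigg)^{N}=(L+1)^{N}\,.
\end{eqnarray*}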
Now we deal with the term
\begin{eqnarray*}
\begin{split}
U:&=\bigg(2^{-2s(n-1)(N-h_{m_L})}\prod_{i=2}^{L} 2^{-2s(m_{i}-2)(h_{m_{i}}-h_{m_{i-1}})}\bigg)\\
&=2^{2s[-(n-1)2^n+(n-1)h_{m_L}-(m_{L}-2)(h_{m_L}-h_{m_{L-1}})-\ldots-(m_2-2)(h_{m_2}-h_{m_1})]}\\
&=2^{2s[-(n-1)2^n+h_{m_L}(n+1-m_{L})+h_{m_{L-1}}(m_{L}- m_{L-1})+\ldots + h_{m_2}(m_3-m_2)+h_{m_1}(m_2-2)]} \,.
\end{split}
\end{eqnarray*}
Using $h_{m_i}<2^{m_i}$ for $i=1,\ldots, L$ we can estimate
\begin{equation}\label{k-33}
\begin{split}
U
&\leq 2^{2s[-(n-1)2^n+2^n+\ldots +2^2+ 2^1]} \leq 2^{2s[-(n-3)2^n]}\,.
\end{split}
\end{equation}
Plugging \eqref{k-32} and \eqref{k-33} into \eqref{k-31} we obtain
\begin{eqnarray*}
\begin{split}
\bigg( \prod_{j=0}^{N-1}\sigma_j(\mathcal{K})_X \bigg)^2
& \leq (n+1)^{N} A^{2N}(C_0+\varepsilon)^{2N} 2^{[(-n+3)2^n]2s}\,.
\end{split}
\end{eqnarray*}
Finally, using the monotonicity of the greedy errors and the assumption $A\leq C_1 2^{\mu(n+1)}$ we find
\begin{eqnarray*}
\begin{split}
\sigma_{2^n-1}(\mathcal{K})_X & \leq (C_0+\varepsilon) C_1 \sqrt{n+1}\cdot 2^{(n+1)\mu}\,2^{(-n+3)s}\\
&
= (C_0+\varepsilon) C_1 8^s \sqrt{n+1}\cdot 2^{(n+1)\mu}\,2^{-ns}
\end{split}
\end{eqnarray*}
and hence
\begin{equation} \label{log}
\sigma_{j}(\mathcal{K})_X \leq (C_0+\varepsilon) C_1 2^\mu 16^s\sqrt{\log(2j)}\cdot j^{(\mu-s)}\,
\end{equation}
for $2^{n}\leq j<2^{n+1}$. Since $\varepsilon>0$ is arbitrary, we get \eqref{k-10}.\\
{\it Step 3.} We comment on the case
\begin{eqnarray*}
0=h_{m_1}=\ldots=h_{m_{2}-1}<h_{m_2}=\ldots=h_{m_3-1}<\ldots<h_{m_{L}}=\ldots=h_n\,.
\end{eqnarray*}
In this situation we proceed as in Step 2, but there is no first term in the product on the right-hand side of \eqref{com}. Note that in case $h_1=\ldots=h_n=0$, there is no logarithmic factor on the right-hand side of \eqref{log}. The proof is complete.
\end{proof}
\begin{remark} Comparing with Theorem 2.3 in \cite{Woj}, we find that in the case $s> 1/2$ the estimate given in Theorem \ref{main-1} improves the rate of the logarithmic term. Moreover, the factor $\tilde{\gamma}_n(X)$ is replaced by $\gamma_n(X)$, which is somewhat better since in general $\gamma_n(X)\leq \tilde{\gamma}_n(X)$.
\end{remark}
In the case of Lebesgue spaces we have the following.
\begin{cor} Let $1\leq p\leq \infty$, $s>\big|\frac{1}{2}-\frac{1}{p}\big|$, and $\mathcal{K}$ be a compact set in $L_p$. Assume that
$$ d_n(\mathcal{K})_{L_p}\leq C_0\max(1,n)^{-s}, \ \ n\geq 0\,.$$
Then we have
\begin{eqnarray*}
\sigma_n(\mathcal{K})_{L_p}\leq C_0 16^s2^{|\frac{1}{2}-\frac{1}{p}|} \sqrt{\log (2n)}\, n^{-s+|\frac{1}{2}-\frac{1}{p}|}\, \qquad \text{for}\ \ n\geq 2\,.
\end{eqnarray*}
\end{cor}
Let $E$ and $X$ be Banach spaces and $B_E$ be the closed unit ball of $E$. As a supplement we study the case $\mathcal{K}\subset T(B_E)$ where $T\in \mathcal{L}(E,X)$ is a compact operator. We shall compare the rate of convergence of $\sigma_n(\mathcal{K})$ with the Kolmogorov widths of $T(B_E)$. In this situation we have the following.
\begin{theorem}\label{greedy2} Let $X$ be a Banach space and $\mathcal{K}$ be a compact set in $X$. Assume that there exists a compact operator $T\in \mathcal{L}(E,X)$ where $E$ is a reflexive Banach space such that $\mathcal{K}\subset T(B_E)$. Then we have
\begin{eqnarray*}
\bigg(\prod_{k=0}^{3n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/3n} \leq 3e^2 \Gamma_n(E) \Gamma_n(X) \bigg(\prod_{k=0}^{n-1} d_{k}(T) \bigg)^{1/n} ,\qquad n\geq 1.
\end{eqnarray*}
\end{theorem}
\begin{proof} First, note that $T(B_E)$ is a closed set in $X$ since $T$ is a compact operator and $E$ is reflexive. For $n\in \mathbb{N}$ fixed, running the greedy algorithm for $\mathcal{K}$ we get $\{f_0,\ldots,f_{3n-1}\}$ and $V_{k}=\linspan\{f_0,\ldots,f_{k-1} \}$.
We select $e_k\in B_E$ such that $Te_k=f_k$ for $k=0,\ldots, 3n-1$.
For each $k=0,\ldots,3n-1$, as a consequence of the Hahn--Banach Theorem, see \cite[Corollary 14.13]{HeStr}, we can choose $b_k\in X'$ such that $\|b_k\|_{X'}=1$,
\begin{equation} \label{key}
\langle V_k,b_k\rangle =0\,,\qquad \text{and} \qquad \langle f_k,b_k\rangle=\langle Te_k,b_k\rangle=\dist(f_k,V_k)_X=\sigma_k(\mathcal{K})_X\,.
\end{equation}
We define the operators $A\in \mathcal{L}(\ell_2^{3n},E)$ and $B\in \mathcal{L}(X,\ell_2^{3n})$ by
\begin{equation} \label{b}
A:=\sum_{k=0}^{3n-1} u_k\otimes e_k\qquad \text{and} \qquad B:=\sum_{k=0}^{3n-1}b_k\otimes u_k\,,
\end{equation}
where $ \{u_k\}_{k=0,\ldots,3n-1}$ is the canonical basis of $\ell_2^{3n}$. We calculate the norm $\|\,B\,|\mathcal{B}_2\|$, see the definition \eqref{B2}. Let $x_1,\ldots, x_N \in X$. We have
\begin{eqnarray*}
\bigg(\sum_{i=1}^N \| Bx_i \|_{\ell_2^{3n}}^2 \bigg)^{1/2}= \bigg(\sum_{i=1}^N \Big\| \sum_{k=0}^{3n-1} \langle x_i,b_k \rangle u_k \Big\|_{\ell_2^{3n}}^2 \bigg)^{1/2} = \bigg(\sum_{i=1}^N \sum_{k=0}^{3n-1} |\langle x_i,b_k \rangle |^2 \bigg)^{1/2}\,
\end{eqnarray*}
which implies
\begin{eqnarray*}
\begin{split}
\bigg(\sum_{i=1}^N \| Bx_i \|_{\ell_2}^2 \bigg)^{1/2} & \leq \sqrt{3n}\,\sup_{k=0,\ldots,3n-1} \bigg(\sum_{i=1}^N |\langle x_i,b_k \rangle |^2 \bigg)^{1/2}
\\
& \leq \sqrt{3n}\sup_{b\in B_{X'}}\bigg(\sum_{i=1}^N |\langle x_i,b \rangle |^2 \bigg)^{1/2}\,.
\end{split}
\end{eqnarray*}
Hence $\|\,B\,|\mathcal{B}_2 \|\leq \sqrt{3n} $\,.
We consider the matrix
$
( \langle Te_k,b_j \rangle) = ( \langle BTAu_k,u_j \rangle)
$
which has the lower triangular form. It follows from \eqref{key} that
\begin{eqnarray*}
\bigg(\prod_{k=0}^{3n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/3n}=\bigg(\prod_{k=0}^{3n-1} | \langle Te_k,b_k\rangle|\bigg)^{1/3n} = \big|\det \big( \langle BTAu_i,u_j \rangle\big) \big|^{1/3n}\,.
\end{eqnarray*}
Note that for any operator $S\in \mathcal{L}(\ell_2^n)$ we have
$
|\det S| \leq \prod_{k=0}^{n-1} d_k(S)\,
$
since the Kolmogorov widths of $S$ coincide with its singular values, see \cite{Pie2}. Consequently we obtain
\begin{equation} \label{sim}
\begin{split}
\bigg(\prod_{k=0}^{3n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/3n} & \leq \bigg(\prod_{k=0}^{3n-1} d_k(BTA) \bigg)^{1/3n} \\
& = \bigg(\prod_{k=0}^{n-1} d_{3k}(BTA) \prod_{k=0}^{n-1} d_{3k+1}(BTA)\prod_{k=0}^{n-1} d_{3k+2}(BTA) \bigg)^{1/3n}
\,.
\end{split}
\end{equation}
From the property
\begin{eqnarray*}
d_{m+n+k}(BTA)\leq d_m(B)d_n(T)d_k(A),
\end{eqnarray*}
see \cite[Page 32]{Pin} or \cite[Theorem 11.9.2]{Pie}, and the monotonicity $d_{k+1}\leq d_k$ of Kolmogorov widths we conclude that
\begin{equation} \label{pro}
\begin{split}
\bigg(\prod_{k=0}^{3n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/3n}
&
\leq \bigg(\prod_{k=0}^{n-1} d_{k}(A) \bigg)^{1/n}\bigg(\prod_{k=0}^{n-1} d_{k}(T) \bigg)^{1/n} \bigg(\prod_{k=0}^{n-1} d_{k}(B) \bigg)^{1/n} \,.
\end{split}
\end{equation}
Lemmas \ref{lem1} and \ref{lem2} yield the estimate
\begin{equation} \label{B}
\begin{split}
\bigg(\prod_{k=0}^{n-1} d_{k}(B) \bigg)^{1/n} \leq \Gamma_n(B) & \leq en^{-1/2}\| B|\mathcal{B}_2\|\, \Gamma_n(X)\\
& \leq e n^{-1/2} (3n)^{1/2} \, \Gamma_n(X)=e\sqrt{3}\,\Gamma_n(X) \,.
\end{split}
\end{equation}
Now we deal with the first product on the right-hand side of \eqref{pro}. We have
\begin{eqnarray*}
\bigg(\prod_{k=0}^{n-1} d_{k}(A) \bigg)^{1/n} = \bigg(\prod_{k=0}^{n-1} d^{k}(A') \bigg)^{1/n} \leq \Gamma_n(A')\,,
\end{eqnarray*}
see Section \ref{sec-2}. Here $A'\in \mathcal{L}(E',\ell_2^{3n})$ is the dual operator of $A$ which is of the form
$
A'=\sum_{k=0}^{3n-1} e_k\otimes u_k\,.
$ By a similar argument as for the operator $B$ we also get $\|A'|\mathcal{B}_2\| \leq \sqrt{3n} $.
Hence we obtain
\begin{eqnarray*}
\begin{split}
\Gamma_n(A') & \leq en^{-1/2}\| A'|\mathcal{B}_2\|\, \Gamma_n(E') \leq e n^{-1/2} (3n)^{1/2} \, \Gamma_n(E')=e\sqrt{3}\,\Gamma_n(E')
\end{split}
\end{eqnarray*}
which leads to
\begin{eqnarray*}
\bigg(\prod_{k=0}^{n-1} d_{k}(A) \bigg)^{1/n} \leq e\sqrt{3}\, \Gamma_n(E')= e\sqrt{3}\, \Gamma_n(E)\,.
\end{eqnarray*}
Putting this and \eqref{B} into \eqref{pro} we arrive at
\begin{eqnarray*}
\begin{split}
\bigg(\prod_{k=0}^{3n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/3n}
&
\leq 3e^2 \Gamma_n(E)\Gamma_n(X)\bigg(\prod_{k=0}^{n-1} d_{k}(T) \bigg)^{1/n} \,.
\end{split}
\end{eqnarray*}
The proof is complete.
\end{proof}
\begin{remark} Note that Theorem \ref{greedy2} still holds true if one replaces Kolmogorov widths by Gelfand widths, i.e.,
\begin{eqnarray*}
\bigg(\prod_{k=0}^{3n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/3n} \leq 3e^2 \Gamma_n(E) \Gamma_n(X) \bigg(\prod_{k=0}^{n-1} d^{k}(T) \bigg)^{1/n} ,\qquad n\geq 1.
\end{eqnarray*}
\end{remark}
We have the following consequence.
\begin{cor} \label{cor-5} Let $X$ be a Banach space and $\mathcal{K}$ be a compact set in $X$. Assume that there exists a compact operator $T\in \mathcal{L}(\ell_2,X)$ such that $\mathcal{K}\subset T(B_{\ell_2})$. Then we have
\begin{equation} \label{k-11}
\sigma_{3n-1}(\mathcal{K})_X \leq 3e^2 \Gamma_n(X) \bigg(\prod_{k=0}^{n-1} d_{k}(T) \bigg)^{1/n} ,\qquad n\geq 1.
\end{equation}
In addition, if $X=L_p$ for $1\leq p\leq \infty$ and $d_n(T) \leq C_0 n^{-s}$ for some $s> \big|\frac{1}{2}-\frac{1}{p}\big|$, $(n\geq 1)$, then there exists a constant $C>0$ such that
\begin{eqnarray*}
\sigma_{3n-1}(\mathcal{K})_X \leq 3Ce^2\, n^{|1/p-1/2|-s}\,,\qquad n\geq 1.
\end{eqnarray*}
\end{cor}
We proceed by considering the case $\mathcal{K}=T(B_{\ell_2})$ for some compact operator $T\in \mathcal{L}(\ell_2,X)$. In this situation we can replace $ \sigma_{3n-1}(\mathcal{K})_X$ in \eqref{k-11} by $ \sigma_{2n-1}(\mathcal{K})_X$. We have the following.
\begin{theorem} Let $X$ be a Banach space and $T\in \mathcal{L}(\ell_2,X)$ be a compact operator. Assume that $\mathcal{K}= T(B_{\ell_2})$. Then we have
\begin{eqnarray*}
\sigma_{2n-1}(\mathcal{K})_X \leq e\sqrt{2} \Gamma_n(X) \bigg(\prod_{k=0}^{n-1} d_{k}(\mathcal{K})_X \bigg)^{1/n} ,\qquad n\geq 1.
\end{eqnarray*}
\end{theorem}
\begin{proof} Recall that $T(B_{\ell_2})$ is a closed set in $X$. For $n\in \mathbb{N}$ fixed, running the greedy algorithm for $\mathcal{K}$ we get $\{f_0,\ldots,f_{2n-1}\}$ and $V_{k}=\linspan\{f_0,\ldots,f_{k-1} \}$. First we show that we can select $e_k\in B_{\ell_2}$ such that $Te_k=f_k$ for $k=0,\ldots, 2n-1$ and $\{ e_k\}_{k=0}^{2n-1}$ is an orthonormal system in $\ell_2$. Indeed, if $Te_0=f_0$ with
\begin{eqnarray*}
\|f_0\|_X =\max_{f\in \mathcal{K}}\|f\|_X =\max_{e\in B_{\ell_2}}\|Te\|_X \,,
\end{eqnarray*}
then $\|e_0\|_{\ell_2}=1$. Assume that we have chosen the orthonormal system $\{e_0,\ldots,e_{k-1} \}$ in $\ell_2$ with $Te_i=f_i$ for $i=0,\ldots,k-1$. Let $\{e_j \}_{j\geq 0}$ be an orthonormal basis of $\ell_2$ constructed from the system $\{e_0,\ldots,e_{k-1} \}$ and let $f_k=Te$ where $e=\sum_{j\geq0} c_je_j$ with $\|(c_j)_{j\geq 0}\|_{\ell_2}\leq 1$. We consider
\begin{eqnarray*}
f_k^*=\frac{1}{\|c^*\|_{\ell_2}}\sum_{j\geq k} c_jTe_j,\ \qquad \text{with}\ \ c^*=(0,\ldots,0,c_k,c_{k+1},\ldots)\,.
\end{eqnarray*}
Here we assume that $c^*\neq 0$; otherwise $\sigma_k(\mathcal{K})_X=0$. We have $f_k^*\in T(B_{\ell_2})$ and
\begin{eqnarray*}
\begin{split}
\dist(f_k,V_k)_X&\geq \dist(f_k^*,V_k)_X\\ &=\inf_{a_0,\ldots,a_{k-1}}\bigg\|\frac{1}{\|c^*\|_{\ell_2}}\sum_{j\geq k} c_jTe_j - \sum_{j=0}^{k-1}a_j Te_j\bigg\|_X\\
& = \frac{1}{\|c^*\|_{\ell_2}}\inf_{a_0,\ldots,a_{k-1}}\bigg\| \sum_{j\geq 0} c_jTe_j - \sum_{j=0}^{k-1}\big(a_j\|c^*\|_{\ell_2}+c_j\big) Te_j\bigg\|_X \\
& = \frac{1}{\|c^*\|_{\ell_2}} \dist(f_k,V_k)_X\,.
\end{split}
\end{eqnarray*}
Hence $\|c^*\|_{\ell_2}=1$ and $f_k=f_k^*$ which implies $e$ is orthogonal to $\{e_0,\ldots,e_{k-1} \}$ and $\|e\|_{\ell_2}=1$. Similar to \eqref{b} we define the operators $A\in \mathcal{L}(\ell_2^{2n},\ell_2)$ and $B\in \mathcal{L}(X,\ell_2^{2n})$ by
\begin{eqnarray*}
A:=\sum_{k=0}^{2n-1} u_k\otimes e_k\qquad \text{and} \qquad B:=\sum_{k=0}^{2n-1}b_k\otimes u_k\,.
\end{eqnarray*}
Note that $\|B|\mathcal{B}_2\|\leq \sqrt{2n}$ and $\|A\|\leq 1$\,. We have \begin{eqnarray*}
\begin{split}
\bigg(\prod_{k=0}^{2n-1} \sigma_k(\mathcal{K})_X\bigg)^{1/2n} & \leq \bigg(\prod_{k=0}^{2n-1} d_k(BTA) \bigg)^{1/2n}\\
& \leq \bigg(\prod_{k=0}^{2n-1} \|A\| d_k(BT) \bigg)^{1/2n}\leq \bigg(\prod_{k=0}^{2n-1} d_k(BT) \bigg)^{1/2n}\,,
\end{split}
\end{eqnarray*}
see \eqref{sim}. By the same argument as in the proof of Theorem \ref{greedy2} we obtain the desired estimate.
\end{proof}
In some situations, the estimate given in Corollary \ref{cor-5} is sharp. Let us consider the following example, which is borrowed from \cite{DePeVo}, see also \cite{Woj}. Let $\mathcal{K}=\{n^{-\alpha}u_n \}\subset \ell_q$ with $2<q<\infty$, where $\{u_n\}_{n\geq 1}$ denotes the canonical unit vector basis. It is clear that $\sigma_n(\mathcal{K})_{\ell_q}=\frac{1}{(n+1)^{\alpha}}$. We consider the diagonal operator $D_\alpha:\ell_2\to \ell_q$ defined by $u_n\mapsto n^{-\alpha}u_n$. Then $\mathcal{K}\subset D_\alpha(B_{\ell_2})$. We know that $d_n(D_{\alpha}) \leq Cn^{-\alpha+\frac{1}{q}-\frac{1}{2}}$, see, e.g., \cite[Section 6.2.5.3]{Pie07}, which implies
\begin{eqnarray*}
\sigma_{3n-1}(\mathcal{K})_{\ell_q} \leq C n^{|\frac{1}{2}-\frac{1}{q}|}
n^{-\alpha+\frac{1}{q}-\frac{1}{2}}=Cn^{-\alpha}\,.
\end{eqnarray*}
Hence the operator $D_\alpha$ gives the sharp rate of convergence in this example.
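The greedy errors in this example can also be checked numerically. The following Python sketch implements the pure greedy selection on a finite truncation of $\mathcal{K}$, using the Euclidean norm (for which distances to subspaces are easy to compute via Gram--Schmidt; for the scaled canonical unit vectors of this example the greedy errors are the same as in $\ell_q$). The truncation size and $\alpha$ below are arbitrary choices for illustration.

```python
import numpy as np

def greedy_errors(K):
    """Pure greedy selection: repeatedly pick the element of K farthest
    (in Euclidean norm) from the span of the elements already chosen,
    recording the greedy errors sigma_k = max_{f in K} dist(f, V_k)."""
    K = [np.asarray(f, dtype=float) for f in K]
    basis, sigmas = [], []

    def dist(f):
        r = f.copy()
        for b in basis:          # subtract the orthogonal projection onto V_k
            r -= np.dot(r, b) * b
        return np.linalg.norm(r)

    for _ in range(len(K)):
        f_star = max(K, key=dist)
        sigmas.append(dist(f_star))
        r = f_star - sum(np.dot(f_star, b) * b for b in basis)
        if np.linalg.norm(r) > 1e-12:
            basis.append(r / np.linalg.norm(r))
    return sigmas

# K = {n^{-alpha} u_n : n = 1..N}, realized in R^N
alpha, N = 1.5, 8
K = [np.eye(N)[n] * (n + 1) ** (-alpha) for n in range(N)]
sigmas = greedy_errors(K)        # expected: sigma_k = (k+1)^{-alpha}
```

The greedy picks the vectors in order of decreasing norm, reproducing $\sigma_k(\mathcal{K})=(k+1)^{-\alpha}$.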
\vskip 3mm
\noindent
{\bf Acknowledgements} The author would like to thank Markus Bachmayr for fruitful discussions. Moreover, the author acknowledges the Hausdorff Center of Mathematics, University of Bonn for financial support.
\bibliographystyle{amsplain}
\section{Acknowledgements}
We thank \href{https://shrutij01.github.io}{Shruti Joshi} for helpful comments on the earlier version of the paper.
\section{An Algorithmic Approach to Generating MDPs from CFE Problems}
\label{sec:algo}
\SetKwProg{Fn}{Function}{}{}
\RestyleAlgo{algoruled}
\newcommand\mycommfont[1]{\scriptsize\ttfamily\textcolor{blue}{#1}}
\SetCommentSty{mycommfont}
\newcommand{\xAlCapSty}[1]{\small\sffamily\bfseries\MakeUppercase{#1}}
\SetAlCapSty{xAlCapSty}
\newcommand\mynlfont[1]{\scriptsize\sffamily{#1}}
\SetNlSty{mynlfont}{}{}
\SetSideCommentLeft
\begin{algorithm}[t]
\DontPrintSemicolon
\SetKwInOut{KwIn}{Input}
\SetKwInOut{KwOut}{Output}
\SetKwData{Da}{\textsf{D}}
\SetKwData{La}{L}
\SetKwData{Un}{Un}
\SetKwData{Bin}{Bin}
\SetKwData{f}{f}
\SetKwData{SCM}{\textsf{SCM}}
\SetKwData{K}{K}
\SetKwData{Af}{\textsf{ActF}}
\SetKwData{Cf}{Cat}
\SetKwData{Nf}{\textsf{Num}}
\SetKwData{Hf}{\textsf{One-hot}}
\SetKwData{CFReward}{\textsf{Pos}}
\SetKwData{DF}{\texttt{DistF}}
\SetKwData{DMF}{\texttt{DistD}}
\SetKwData{MDP}{\textsf{MDP}}
\KwIn {Training Dataset (\Da), ML model (\f),
Structural Causal Model (\textsf{SCM}),
Actionable features (\Af),
Data Manifold distance function (\DMF),
Data Manifold adherence ($\lambda$),
Desired Label (\La),
Distance Function (\DF), Discount Factor ($\gamma$)}
\KwOut {\textsf{MDP}}
\caption{\label{alg:mdp} Generate \textsf{MDP} from a Counterfactual Explanation Problem}
\SetKwFunction{TransitionFunction}{Transition}
\SetKwFunction{RewardFunction}{Reward}
\SetKwFunction{Allowed}{Allowed}
\SetKwFunction{InDomain}{InDomain}
\SetKwFunction{argmax}{argmax}
\SetKwFunction{Parent}{Parent}
\SetKwFunction{BinFn}{BinFn}
\SetKwFunction{cost}{Cost}
\SetKw{break}{break}
\SetKwData{PMa}{M'}
\SetKwData{IncNum}{$F_{i}+$}
\SetKwData{DecNum}{$F_{i}-$}
\SetKwData{IncCat}{$F_{j}{+}1$}
\SetKwData{DecCat}{$F_{j}{-}1$}
\SetKwData{Sa}{\textsf{CurrState}}
\SetKwData{NSa}{\textsf{State'}}
\SetKwData{FSa}{\textsf{NextState}}
\SetKwData{Ac}{$A$}
\SetKwData{Cost}{Cost1}
\SetKwData{ManifoldCost}{Cost2}
\SetKwData{Reward}{\textsf{CFReward}}
\SetKwData{Ua}{\textsf{U}}
\SetKwData{Va}{\textsf{V}}
\SetKwData{NVa}{\textsf{V'}}
\SetKwData{Fa}{\textsf{F}}
\tcp{States consist of numerical ($\Nf$) and categorical ($\Cf$) features.}
$\mathit{\text{State space } \mathcal{S} \subseteq \mathbb{R}^{\vert\Nf\vert}\times \mathbb{Z}^{\vert\Cf\vert} }$ \\ \label{line:state}
\tcp{Actions change an actionable feature by some amount.}
$\mathit{\text{Action space } \mathcal{A} \subseteq \mathbb{R}^{\left| \Af \right|}; \text{denote actions } A \in \mathcal{A} }$ \\ \label{line:action}
\Fn{$\mathit{\RewardFunction(\f, \La, \Sa, \Ac, \Da, \lambda, \DMF, \textsf{SCM})}$}{ \label{line:rewardfunc}
$\mathit{\FSa \leftarrow \TransitionFunction(\Sa, \Ac, \textsf{SCM})}$ \\
\eIf{$\mathit{\argmax(\f(\FSa))} = \La$}
{$\mathit{\Reward \leftarrow \CFReward}$ \tcp{High positive reward}}
{$\mathit{\Reward \leftarrow \f(\FSa)[\La]}$ \tcp{Probability of classification in the desired class} \label{line:otherreward}}
\Return $\DF(\Sa, \Ac, \Da)$ \tcp{cost of an action} \label{line:reward1}
$+ \lambda * \DMF(\FSa, \Da)$ \tcp{Manifold distance cost} \label{line:reward2}
$+\Reward$ \tcp{Counterfactual label reward} \label{line:CFreward}
}
\Fn{$\mathit{\TransitionFunction(\Sa, \Ac, \textsf{SCM}})$ }{ \label{line:transitfunc}
\tcp{Action does not violate feature domain and unary constraints}
\eIf{\Allowed(\Ac) \& \InDomain(\Ac) \label{line:updated}}
{$\mathit{\FSa \leftarrow \Sa + \Ac}$ \tcp{Modify features} \label{line:change}}
{\Return \Sa \label{line:illegalaction}}
\tcp{Modify the endogenous features}
\For{$\mathit{\Va \in \textsf{SCM}}$ \label{line:exogenous}} {
\uIf{$\mathit{\Ac \in \Parent(\Va)}$ }
{
$\mathit{\FSa[\Va] \leftarrow \Fa(\Ua)}$
\tcp{Stochastic or deterministic update of endogenous features} \label{line:endoupdate}
}
}
\Return \FSa
}
$\mathit{\textsf{MDP} \leftarrow \{\mathcal{S}, \mathcal{A}, \TransitionFunction, \RewardFunction, \gamma\}}$
\end{algorithm}
We now present a general approach for translating a CFE problem setup into an \textsf{MDP}{}.
\Cref{alg:mdp} generates each component of an \textsf{MDP}{}: state space, action space, transition function, reward function, and additional inputs such as discount factor. We detail each component's generation below.
\textbf{State space.}
We focus first on the features of a particular dataset. Broadly, features can be categorized into numerical (\textsf{Num}) and categorical (\textsf{Cat}). For each categorical variable, we map each value of that variable to a unique integer.
Consequently, the state space $\mathcal{S}$ of our \textsf{MDP} (\cref{line:state}) consists of the product of the continuous domains for numerical features (a subset of $\mathit{\mathbb{R}^{\vert \textsf{Num}\vert}}$) and the product of the integer domains for categorical features (a subset of $\mathit{\mathbb{Z}^{\vert \textsf{Cat}\vert}}$).
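As a concrete illustration, a raw datapoint can be mapped into this state space as follows (a minimal Python sketch; the feature names and the categorical mapping are hypothetical, not taken from the paper's experiments):

```python
def encode_state(row, num_feats, cat_feats, cat_maps):
    """Map a raw datapoint to an MDP state: numerical features are kept
    as reals, and each categorical value is replaced by its unique integer."""
    nums = [float(row[f]) for f in num_feats]
    cats = [cat_maps[f][row[f]] for f in cat_feats]
    return nums + cats

# hypothetical loan-application datapoint
cat_maps = {"job": {"teacher": 0, "nurse": 1, "engineer": 2}}
state = encode_state({"age": 30, "income": 52000.0, "job": "nurse"},
                     num_feats=["age", "income"], cat_feats=["job"],
                     cat_maps=cat_maps)
# state -> [30.0, 52000.0, 1]
```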
\textbf{Action space.}
To facilitate capturing actionability~\citep{Ustun19:Actionable} and causal relationships between variables~\citep{karimi_algorithmic_2020,karimi-imperfect:2020,mahajan_preserving_2020,causality:Pearl}, we further categorize features as follows.
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item \emph{Actionable} features can be directly changed by an agent, e.g., income, education level, age.
\item \emph{Mutable but not actionable} features cannot be modified directly; they change as an effect of changes in other features. E.g., a credit score cannot be directly set by a person; it changes due to changes in other features like income and loan amount. These dynamics are determined by a structural causal model (\textsf{SCM})~\citep{causality:Pearl}, described in detail below.
\item \emph{Immutable} features cannot change, e.g., race, birthplace.
\end{itemize}
The agent is permitted to change only the actionable features (denoted by \textsf{ActF}). Consequently, the action space $\mathcal{A}$ is a subset of $\mathbb{R}^{\vert\textsf{ActF}\vert}$ (\cref{line:action}).
Categorical features can be changed within their discrete domain, while numerical features can be changed within their continuous domain.
\Cref{line:updated} further enforces the infeasibility of out-of-domain actions.
\textbf{Transition function.}
The third component is the transition function (\cref{line:transitfunc}), which finds the modified state when an action is taken.
This function is constructed using the structural causal model (\textsf{SCM}), which is an input to \Cref{alg:mdp}.
An \textsf{SCM} consists of a triplet \textsf{M} = $\langle \textsf{U}, \textsf{V}, \textsf{F}\rangle$.
$\textsf{U}$ is the set of \emph{exogenous} features and $\textsf{V}$ is the set of \emph{endogenous} features.
In terms of a causal graph, the exogenous features $\textsf{U}$ consist of features that have no parents, i.e., they can change independently.
The endogenous features $\textsf{V}$ consist of features that have parents in $\textsf{U}$ and/or in $\textsf{V}$ itself; they change as an effect of changes in their parents.
$\textsf{F}$ is the set of functions that determine the relationship between exogenous and endogenous features.
Since knowing the exact \textsf{SCM} is mostly infeasible, \Cref{alg:mdp} also accepts causal relations in the form of unary (\textsf{Un}) and binary (\textsf{Bin}) constraints.
Unary constraints are derived from the property of one feature, e.g., age and education level cannot decrease.
Binary constraints are derived from the relation between two features, e.g., if education level increases, age increases.
If an action violates neither the domain of the feature it is changing nor the constraints in the \textsf{SCM}, the feature is modified in \textsf{NextState} (\cref{line:change}).
If the modified feature is a parent of endogenous features in the \textsf{SCM}, we update its children using the \textsf{F} functions (\cref{line:exogenous}-\ref{line:endoupdate}).
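This transition logic can be sketched in a few lines of Python (a dictionary-based state and a toy \textsf{SCM}; the feature names and the credit-score function are illustrative assumptions, not the paper's):

```python
def transition(state, action, scm, allowed, in_domain):
    """Sketch of Transition in Algorithm 1: apply the action to the
    actionable features, then propagate the change through the SCM."""
    if not (allowed(action) and in_domain(state, action)):
        return state  # illegal action leaves the state unchanged
    nxt = {f: v + action.get(f, 0.0) for f, v in state.items()}
    # update endogenous features whose parents were changed by the action
    for feat, (parents, fn) in scm.items():
        if any(p in action for p in parents):
            nxt[feat] = fn(nxt)
    return nxt

# toy SCM: credit score is an endogenous child of income
scm = {"credit": (["income"], lambda s: s["income"] / 100.0)}
s0 = {"income": 50000.0, "credit": 500.0}
s1 = transition(s0, {"income": 10000.0}, scm,
                allowed=lambda a: True, in_domain=lambda s, a: True)
# s1 -> {"income": 60000.0, "credit": 600.0}
```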
\textbf{Reward function.}
\Cref{line:rewardfunc} defines a reward function that, given a state and an action, returns a reward based on three components derived from the initial CFE problem:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item Given the current state (\textsf{CurrState}), action (\textsf{$A$}), training dataset (\textsf{D}), and distance function \texttt{DistF}, the first part returns the appropriate cost to take that action (\cref{line:reward1}).
The distance function can either be $\ell_p$ norm of the change produced by the action or a more complex function that takes into account the cumulative distribution function (CDF) of the specific feature that the action is modifying.
The latter function can account for cases where, e.g., changing a feature from its 90th to 95th percentile value costs more than changing it from its 50th to 55th percentile value~\citep{Ustun19:Actionable}.
\item The second part adds a cost if a datapoint is away from the training data manifold (\cref{line:reward2}).
The \texttt{DistD} function finds the distance of the datapoint from the data manifold and returns a real number.
That is multiplied by a factor $\lambda$ to control the strictness of data manifold adherence.
\item The third part rewards the agent with a large positive value if the trained model \textsf{f} produces the desired label for \textsf{NextState} (\textsf{CFReward} in \cref{line:CFreward}).
To avoid sparse rewards, we partially reward the agent with a small positive value otherwise. This reward is equal to the probability of \textsf{NextState} being classified in the desired class (\cref{line:otherreward}). This can only be used if the underlying classifier provides the class label probabilities instead of only the class label.
In conjunction with the discount factor $\gamma$, this encourages the agent to learn the policy that quickly gets the desired label for any datapoint.
\end{itemize}
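The three components above can be sketched as follows (a Python sketch under stated assumptions: the constant \texttt{POS} and all helper functions are stand-ins; the terms are summed exactly as in the pseudocode of Algorithm 1, whereas a concrete implementation would typically enter the two distance terms with a negative sign, as costs):

```python
import numpy as np

POS = 10.0  # hypothetical large positive reward for reaching the desired label

def reward(model, desired, state, action, transition, dist_f, dist_d, lam):
    """Sketch of Reward in Algorithm 1: action cost, manifold-distance
    cost weighted by lambda, and the counterfactual-label reward."""
    nxt = transition(state, action)
    probs = model(nxt)                 # class-probability vector for NextState
    if int(np.argmax(probs)) == desired:
        cf = POS                       # desired label reached
    else:
        cf = probs[desired]            # partial reward: desired-class probability
    return dist_f(state, action) + lam * dist_d(nxt) + cf
```

For example, with a stub classifier returning probabilities $(0.2, 0.8)$ and desired label $1$, the terminal reward \texttt{POS} is granted; with desired label $0$, the partial reward $0.2$ is returned instead.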
\textbf{Other parameters.}
MDPs require additional parameters such as the discount factor $\gamma \in [0,1]$, which is an input to \Cref{alg:mdp}.
At a high level, setting $\gamma < 1$ penalizes longer (in terms of the number of steps) paths; for additional intuition, see~\citet{RLBook}.
We note that $\lambda$, \textsf{DistD}, and \textsf{DistF} are user-specified and domain-specific parameters that directly impact the reward function for the \textsf{MDP}.
\section{Background}
\label{sec:background}
This section provides background about the social implications of ML models and techniques to address concerns, along with a brief introduction to Reinforcement Learning.
\subsection{Fairness, Accountability, and Transparency of AI and ML}
Fairness and explainability of an ML model are two major themes in the broad area of equitable ML research.
Fairness research mostly proposes algorithms that learn a model that does not discriminate against individuals belonging to disadvantaged demographic groups; other possible interventions modify the training data itself.
Demographic groups are determined by values of sensitive attributes prescribed by law, e.g., race, sex, religion, or national origin.
ML models can become biased against certain demographic groups because of bias in their training data, most notably label bias and selection bias.
Label bias occurs due to biased manual labeling of datapoints belonging to a demographic group; e.g., if individuals from the Black community were denied loans in the past irrespective of their ability to pay back, this is captured in the data from which the model learns.
Selection bias occurs when only specific subsets of a demographic group are selected, which can introduce spurious correlations between the prediction target and that group, e.g., selecting only defaulters from a demographic group in the training data.
More than 20 definitions of fairness of an ML model have been proposed in the literature~\citep{verma_fairness}.
\citet{dunkelau_fairness-aware} summarize some of the significant advances in fairness research; their work is a comprehensive introductory text for understanding the categorization and direction of the field.
Explainability research can be broadly divided into model explanation and outcome explanation research problems~\citep{xai-survey4}. The model explanation problem seeks to search for an inherently interpretable and transparent model with high fidelity to the original complex model. Linear models, decision trees, and rule sets are examples of inherently interpretable models.
There exist techniques to explain complex models like neural networks and tree ensembles using interpretable surrogates such as decision trees~\cite{craven_exp1,KRISHNAN_exp2,Chipman_makingsense_exp3,Pedro_exp4} and rule sets~\cite{Deng_exp5,Andrews_exp6}.
There also exist approaches that can be applied to black-box models~\citep{Andreas_exp7,Krishnan_exp8,Zien_exp9}.
The outcome explanation problem seeks to find, for a single datapoint and prediction from a model, an explanation of why the model made its prediction. The explanation is provided either in the form of the importance of each feature in the datapoint or in the form of example datapoints. The first class of methods is called feature attribution methods, grouped into model-specific~\citep{Khosla_cam,grad-cam} and model-agnostic~\citep{Poulin_explaind,ribeiro_why_2016,Turner2016_MES} kinds. Example-based approaches return a few datapoints that either have the same class label as the original datapoint or a different class label.
The motivation for the first is to provide a set of datapoints that are similar to the original datapoint in the input space.
The motivation for the second is to provide a set of datapoints that serves as a target to achieve in case the individual wants to receive the alternative label.
The second set of datapoints can be referred to as \emph{counterfactual explanations}.
Counterfactual explanations are applicable to supervised machine learning where the desired label has not been obtained for a datapoint.
Most research in counterfactual explanations assumes a classification setting.
A supervised ML setup consists of several labeled datapoints, which are inputs to the algorithm; the aim is to learn a function mapping the input datapoints (with, say, $m$ features) to labels.
In classification, the labels are discrete values.
The input space is denoted by $\mathcal{X}^m$ and the output space is denoted by $\mathcal{Y}$.
The learned mapping $f: \mathcal{X}^m \to \mathcal{Y}$ is used to make predictions.
We expound on counterfactual explanations and their desirable properties in \Cref{sec:desiderata}.
Major beneficiaries of explainable machine learning include the healthcare and finance sectors, which have a huge social impact~\citep{tjoa2019survey1}.
We point the readers to surveys in the area of explainable machine learning~\citep{xai-survey2,carvalho2019:survey3,xai-survey4}.
\subsection{Reinforcement Learning}
Reinforcement Learning (RL) is one of the three broad classes of machine learning, along with supervised and unsupervised learning.
In RL, the goal is to explore a given environment and to learn a policy over time that dictates what action should be taken at a given state. The exploration happens with the help of an agent.
Therefore, a policy is a mapping from a state to an action.
When an action is taken at a state, the environment returns with the new state and a reward.
A good policy aims to maximize the reward over time.
The calculation of the new state is facilitated through the transition function, whereas the calculation of the reward is done using the reward function.
Naturally, the agent can either learn policies that are greedy and focus only on immediate reward, or learn policies that focus on long-term reward. This trade-off is controlled by a discount factor $\gamma$, whose value lies between 0 and 1 (inclusive).
States can either be discrete or continuous. Similarly, actions can also be either discrete or continuous.
An RL problem is expressed in terms of a Markov Decision Process (MDP), which has five components. We illustrate each of them using the game of chess.
\begin{itemize}
\item State space $\mathcal{S}$, the set of states an agent might explore. In chess, these are the 64 squares that an agent can move to.
\item Action space $\mathcal{A}$, which are the possible actions an agent can take. These might be restricted based on the current state. In chess, the actions depend on the game pieces like a king, queen or pawns, and the given position on the chessboard. The action space is the union of all possible actions.
\item Transition function \texttt{T}, which, given the current state and action, returns the new state that the agent transitions to, e.g., moving the pawn one unit north puts the agent in the state one unit north of its current state. Transition functions can be deterministic or stochastic (see \Cref{sec:example}).
\item Reward function \texttt{R} passes the reward to the agent given the action, the current state, and the new state. This reward signal is the main factor that the agent uses to learn a good policy, e.g., winning a game would pass a positive reward, and losing the game would send a negative reward to the agent.
\item Discount factor $\gamma$ is associated with the nature of the problem at hand. This is used to decide the trade-off between immediate and long-term rewards.
\end{itemize}
Many algorithms have been developed to efficiently learn a policy given the environment, such as value iteration, policy iteration, policy gradient, and actor-critic methods~\cite{RLBook}.
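To make the components above concrete, here is a minimal value-iteration sketch on a hypothetical two-state MDP (the states, actions, and rewards are illustrative only):

```python
# Value iteration on a tiny hypothetical MDP: states {0, 1}, actions {"stay", "go"}.
# T[s][a] = next state (deterministic), R[s][a] = reward, gamma = discount factor.
T = {0: {"stay": 0, "go": 1}, 1: {"stay": 1, "go": 1}}
R = {0: {"stay": 0.0, "go": -1.0}, 1: {"stay": 10.0, "go": 10.0}}
gamma = 0.9

V = {s: 0.0 for s in T}                       # value estimates, initialized to 0
for _ in range(100):                          # iterate until (near) convergence
    V = {s: max(R[s][a] + gamma * V[T[s][a]] for a in T[s]) for s in T}

# Greedy policy: in each state, pick the action maximizing expected return.
policy = {s: max(T[s], key=lambda a: R[s][a] + gamma * V[T[s][a]]) for s in T}
```

In this toy environment the learned policy moves from the low-reward state 0 to the high-reward state 1, despite the immediate cost of doing so, exactly the long-term trade-off that $\gamma$ controls.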
\section{Conclusions \& Future Research}
\label{sec:conclusions}
CFEs are an effective and actionable way to provide explanations for an ML system.
Among the several desirable properties of a CFE generation approach, two stand out: the approach should work for a \emph{black-box} model, and it should need to be optimized only once, after which it can generate CFEs rapidly, i.e., \emph{amortized CFEs}.
We propose a novel algorithm that generates amortized and sequential CFEs for black-box models.
To the best of our knowledge, we are the first to propose such an approach.
Our approach also incorporates other desirable properties like changing only the actionable features, respecting causal constraints, and adhering to the data manifold.
We evaluated our approach using several datasets.
Our approach successfully generates CFEs for most datapoints, and the CFEs generated by our approach possess desirable properties and perform better than the ones generated by our baselines.
We see many avenues for future research. We enable sequential explanations by way of a generic \textsf{MDP}; in our setting, it may be the case that \emph{any} path that crosses the decision boundary of the ML model $\textsf{f}$ is too costly, thus resulting in the optimal action being not to act at all. \textsf{MDPs} can also model \emph{stochastic shortest path} problems, or SSPs~\citep{Eaton62:Optimal}; here, the goal would be to find the path with the lowest cost that \emph{necessarily} crosses the decision boundary, effectively removing the ``do not act'' option in the event that the optimal path comes with a negative expected reward. We note that our general approach can model SSPs, and we believe a specific SSP-based approach could be valuable for particular domains.
\section{Counterfactual vs. Contrastive explanations}
\label{sec:counter_vs_contra}
There is ongoing discussion on the exact definition of counterfactual explanation, with some researchers advocating to call it contrastive explanations. \citet{cfe_vs_contra} have captured the precise difference in a recent article.
They mention that counterfactual explanations as introduced by~\citet{wachter_counterfactual_2017} are almost the same as contrastive explanations. These explanations seek to find the minimal changes to the input such that the prediction from the ML model changes.
On the other hand, counterfactuals are a function of the datapoint, its prediction, the ML model, and the data generating process that created that datapoint.
\citet{causality:Pearl} describes three steps for generating counterfactuals:
\begin{enumerate}
\item Abduction: This is the process of conditioning on the exogenous variables in the data generation process.
\item Intervention: This is the process of making a sparse change on a specific observable variable.
\item Prediction: This is the process of using the exogenous variables identified in the first step and propagating the intervention to generate the counterfactual.
\end{enumerate}
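The three steps above can be traced on a toy structural causal model (the linear equations and values here are hypothetical, chosen only to make each step explicit):

```python
# Hypothetical linear SCM (illustrative only): x = u1, y = 2*x + u2.
# Trace the three steps of counterfactual generation for an observed (x, y).
x_obs, y_obs = 1.0, 3.0

# 1. Abduction: recover the exogenous variables consistent with the observation.
u1 = x_obs
u2 = y_obs - 2 * x_obs          # u2 = 1.0

# 2. Intervention: make a sparse change to one observable variable, do(x = 2).
x_cf = 2.0

# 3. Prediction: propagate the intervention using the recovered exogenous terms.
y_cf = 2 * x_cf + u2            # counterfactual y = 5.0
```

Note that step 3 reuses the individual's own exogenous variables rather than resampling them, which is what distinguishes a counterfactual from a mere perturbation of the input.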
We agree with this framing. Counterfactual explanations therefore amount to much more than perturbing the input datapoint, as in the case of contrastive explanations, since they are tied to the data generating process. Indeed, it is our belief that our proposed framework captures these concerns, if data regarding causal interactions is available.
We take note of this distinction and therefore include adherence to causal relations as a desideratum of counterfactual explanations (\Cref{sec:desiderata}).
A Structural Causal Model (SCM) consists of the exogenous and endogenous variables involved in the data generation process.
\textsc{FastCFE}\xspace takes as input the SCM (partial SCM is supported) of the dataset and takes it into consideration while generating CFEs.
If the SCM is not provided, the explanations generated by \textsc{FastCFE}\xspace are basically contrastive explanations.
\section{Desiderata of \emph{Practical} Counterfactual Explanations (CFEs)}
\label{sec:desiderata}
The overarching goal of a counterfactual explanation (CFE) is to provide practical guidance to an individual seeking to change their treatment (e.g., class label) by a deployed ML model.
Apart from the necessary property of a CFE having a desired class label, other desiderata have been identified in the literature, enumerated here:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item \emph{Actionability}: CFEs should only recommend changes to the features that are actionable by an individual~\citep[e.g.,][]{Ustun19:Actionable,karimi_model-agnostic_2020,Kanamori2020:DACE,mothilal_explaining_2020,dandl_multi-objective_2020}. Actionable features are dataset dependent.
CFEs should also consider personal preferences; it may be easier for someone to change feature \var{A} than \var{B}, and vice-versa for others.
\item \emph{Sparsity}: Ideally, CFEs should make changes to a smaller set of features~\citep[e.g.,][]{wachter_counterfactual_2017,Ustun19:Actionable,karimi_model-agnostic_2020,guidotti_local_2018,van_looveren_interpretable_2020}.
\citet{Miller-xai:2019} have argued that smaller explanations are more comprehensible to humans.
\item \emph{Adherence to data manifold}: To obey the correlations between features, their input domain, and to be realistic and actionable, CFEs should adhere to the training data manifold~\citep[e.g.,][]{dhurandhar_model_2019,Kanamori2020:DACE,dandl_multi-objective_2020,van_looveren_interpretable_2020}.
\item \emph{Respect for causal relations}: Several common facts cannot be learned from the data itself, but they must be respected when recommending changes, e.g., asking someone to decrease their age to get the desired label is not helpful. Structural causal models (\textsf{SCM})~\citep{causality:Pearl} represent such knowledge and capture the effect of a change in one feature on others.
Causal relations can encode facts like age cannot decrease or age increases if education level increases~\citep[e.g.,][]{mahajan_preserving_2020}.
\item \emph{Model-agnostic}: For applicability across different classes of ML models, a CFE generating approach may need to be model-agnostic~\citep[e.g.,][]{medina_comparison-based_2018,guidotti_local_2018,sharma_certifai_2019}.
\item \emph{Black-box models}: If CFEs are required for a proprietary ML model, the generating approach should work for black-box models, i.e., require access to only the \functionname{predict} function~\citep[e.g.,][]{medina_comparison-based_2018,guidotti_local_2018,sharma_certifai_2019}.
\item \emph{Amortized CFEs}: CFEs are often required for several datapoints belonging to the same distribution.
It would be effective if an approach can generate CFEs without optimizing separately for each of them. An approach that generates CFEs after single optimization produces \emph{amortized CFEs}.
\end{itemize}
\textsc{FastCFE}\xspace satisfies all the above desiderata. As shown in~\Cref{tab:main-table} and to the best of our knowledge, it is also the first approach to do so.
The choice of action space helps produce CFEs that consider actionability among features and are sparse.
We only modify the actionable features.
Our CFEs are realistic and actionable as they adhere to the training data manifold and respect causal relations among features.
\textsc{FastCFE}\xspace works for black-box models and therefore, is model-agnostic.
It learns a policy that can produce CFEs for many input datapoints (individuals) without the need to optimize again; and therefore, generates amortized CFEs.
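Some of these desiderata can be checked mechanically for a generated CFE. A minimal sketch (the feature names, mutability mask, and values are hypothetical, not this paper's implementation) for actionability and sparsity:

```python
# Hypothetical checks for two desiderata, given an original point and its CFE.
def actionable(orig, cfe, mutable):
    """Actionability: only features flagged as mutable may differ."""
    return all(o == c or m for o, c, m in zip(orig, cfe, mutable))

def n_changed(orig, cfe):
    """Sparsity: number of features the CFE changes."""
    return sum(o != c for o, c in zip(orig, cfe))

orig = (30, 2, 1)                 # e.g., (age, education-level, race)
cfe  = (30, 3, 1)                 # only education-level changed
mutable = (True, True, False)     # race is immutable
```

Checks like these underlie the qualitative comparison in \Cref{tab:main-table}: an approach satisfies a desideratum when every CFE it produces passes the corresponding check.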
\belowcaptionskip=10pt
\begin{table}
\caption{Qualitative comparison of various CFE generating approaches on the counterfactual desiderata.
\textsc{FastCFE}\xspace is the \x{first and only one} which satisfies all desiderata. }
\label{tab:main-table}
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
Approach & Action'ty & Sparse & Agnostic & Black-box & Amortize & Manifold & Causality \\
\midrule
CF Expl.~\citep{wachter_counterfactual_2017} & \xmark & \cmark & \xmark & \xmark & \xmark & \xmark & \xmark \\
Recourse~\citep{Ustun19:Actionable} & \cmark & \cmark & \xmark & \xmark & \xmark & \xmark & \xmark \\
CEM~\citep{dhurandhar_model_2019} & \xmark & \cmark & \xmark & \xmark & \xmark & \cmark & \xmark \\
MACE~\citep{karimi_model-agnostic_2020} & \cmark & \cmark & \xmark & \xmark & \xmark & \xmark & \xmark \\
DACE~\citep{Kanamori2020:DACE} & \cmark & \xmark & \xmark & \xmark & \xmark & \cmark & \xmark \\
DICE~\citep{mothilal_explaining_2020} & \cmark & \cmark & \xmark & \xmark & \xmark & \xmark & \xmark\\
VAE CFs~\citep{mahajan_preserving_2020} & \cmark & \xmark & \xmark & \xmark & \cmark & \cmark & \cmark \\
Spheres~\citep{medina_comparison-based_2018} & \xmark & \cmark & \cmark & \cmark & \xmark & \xmark & \xmark \\
LORE~\citep{guidotti_local_2018} & \xmark & \cmark & \cmark & \cmark & \xmark & \xmark & \xmark \\
Weighted~\citep{grath_interpretable_2018} & \xmark & \xmark & \cmark & \cmark & \xmark & \xmark & \xmark \\
CERTIFAI~\citep{sharma_certifai_2019} & \cmark & \xmark & \cmark & \cmark & \xmark & \xmark & \xmark \\
Prototypes~\citep{van_looveren_interpretable_2020} & \xmark & \cmark & \cmark & \cmark & \xmark & \cmark & \xmark \\
MOC~\citep{dandl_multi-objective_2020} & \cmark & \cmark & \cmark & \cmark & \xmark & \cmark & \xmark \\
\textbf{\textsc{FastCFE}\xspace} & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Illustrative examples}
\label{sec:example_remain}
This section gives the remaining examples of translating a CFE problem into an \textsf{MDP}.
\textbf{Example 1:}
Let us now consider the example where one of the two features is age (denoted by feature \var{a}).
This adds a constraint because age cannot decrease.
Therefore, any change which decreases age is not allowed.
This is captured by the transition function.
In~\Cref{fig:example2} we see that the edges which act on feature \var{a} have now become unidirectional implying that the value of feature \var{a} cannot decrease.
Taking the action \var{a-1} at any state leaves the agent in the same state, albeit at a cost of 1.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{example2.pdf}
\caption{Transition function for a dataset with 2 features, out of which one is age (denoted by \var{a}). Circles show the states and edges show possible transitions. The edges which denote action on the feature \var{a} are unidirectional as age cannot decrease. Action \var{a-1} taken at any state would loop back to the same state. Each action has a constant cost of 1.
}
\vspace{1em}
\label{fig:example2}
\end{figure}
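The self-loop on \var{a-1} can be sketched as a transition function (a hypothetical minimal version for the two-feature example, not the paper's code):

```python
# Hypothetical transition function: feature a (age) cannot decrease, so the
# action "a-1" loops back to the same state; every action costs 1.
def transition(state, action):
    a, b = state
    if action == "a-1":
        return (a, b), -1            # disallowed: stay put, still pay the cost
    delta = {"a+1": (1, 0), "b+1": (0, 1), "b-1": (0, -1)}[action]
    return (a + delta[0], b + delta[1]), -1
```

The constraint is thus enforced purely inside the environment dynamics; the agent learns to avoid the wasted action because it pays a cost without making progress.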
\smallskip
\textbf{Example 2:}
Let us now consider a dataset with three features, out of which one is immutable, e.g., race (denoted by feature \var{r}).
Feature \var{a} still represents age and carries its non-decreasing constraint.
An immutable feature cannot be changed using any action; this is encoded in the transition function by returning the same state if such an action is taken.
Each state in this \textsf{MDP} consists of 3 values, one for each feature.
\Cref{fig:example3} shows the transition function for the \textsf{MDP} representing the CFE problem using this dataset.
As we already saw, \var{a} which represents \var{age} is non-decreasing.
Also, none of the actions affect the value of feature \var{r}; it remains constant (shown by the constant `r' in the diagram).
The reward function is similar to the first example: a constant cost for taking any action and a high reward for reaching the terminal state where the first two features are (2,2). This state transitions into a dummy state in which every action loops back to the same dummy state.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{example3.pdf}
\caption{Transition function for a dataset with 3 features, the first being age (denoted by \var{a}) and the third being race (denoted by \var{r}). Circles show all the states and edges show possible transitions. None of the actions can change the value of feature \var{r} as race is immutable.}
\label{fig:example3}
\vspace{1em}
\end{figure}
Let \var{r} take values 0 and 1.
Defined formally, here are the components for this \textsf{MDP}:
\begin{itemize}[itemsep=0pt,topsep=2pt,leftmargin=*]
\item States $\mathcal{S}$ = \{0,0,0\}, \{0,1,0\}, \{0,2,1\}, \{1,0,0\}, \dots.
\item Actions $\mathcal{A}$ = \var{a+1}, \var{a-1}, \var{b+1}, \var{b-1}.
\item Transition function $\mathit{T : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}}$
\item Reward function $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$.
\item Discount factor $\gamma \in [0,1)$.
\end{itemize}
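The immutability of \var{r} can likewise be encoded directly in the transition function; a hypothetical minimal sketch (not the paper's code) for the three-feature example:

```python
# Hypothetical transition function with an immutable third feature r (race):
# no action changes r, and age a is still non-decreasing.
def transition(state, action):
    a, b, r = state
    if action == "a-1":
        return (a, b, r)             # age cannot decrease: loop back
    delta = {"a+1": (1, 0), "b+1": (0, 1), "b-1": (0, -1)}[action]
    return (a + delta[0], b + delta[1], r)   # r is copied through unchanged
```

Because \var{r} is simply carried through every transition, the agent has no way to alter it, matching the absence of \var{r}-edges in \Cref{fig:example3}.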
\smallskip
\textbf{Example 3:}
In all the examples we visited, there was a constant cost to taking any action, and all states but one gave a 0 reward on reaching them.
Consider the previous example where the dataset consisted of 3 features: age, education-level, and race.
Some of the states do not appear in the training dataset used to train the classifier we are trying to generate CFEs for.
Ideally, we would prefer to generate CFEs that are similar to existing data; otherwise, we might generate unrealistic and unactionable explanations.
This is based on the assumption that training data is a good representation of the true distribution of features.
Some of such states are:
\begin{itemize}[itemsep=0pt,topsep=2pt,leftmargin=*]
\item (0,2,0) and (0,2,1): intuitively this shows that it is unrealistic for an individual to be in the lowest age group (0) and have the highest education-level (2). This is true regardless of the person's race.
\item (2,0,1): it is improbable for someone belonging to the race encoded by value 1 to be in the highest age group and have the lowest possible education-level. Yet (2,0,0) is not an improbable state, and this might be due to the differences in education level across different races.
\end{itemize}
We encode this information in the \textsf{MDP} by modifying its reward function.
If we take an action that ends up in an unrealistic state, it attracts a penalty of -5 points.
The dummy state still carries the +10 reward, other states reward 0, and there is a constant cost of 1 to take any action.
The agent learning in this environment would ideally learn to avoid the unrealistic states and take actions that go to the terminal state.
In this situation, the agent can learn not to take a shorter path because it goes through an unrealistic state.
We use a $k$-Nearest Neighbour algorithm to find the appropriate penalty for landing in any state in our experiments. If a state is close to a datapoint in the training dataset or occurs in the training dataset itself, there is a low or no penalty.
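The kNN-based penalty can be sketched as follows (a hypothetical minimal version with $k=1$; the distance threshold and penalty value are illustrative, not the paper's tuned settings):

```python
# Hypothetical data-manifold penalty: states far from every training point
# attract a negative reward, discouraging unrealistic CFEs.
def manifold_penalty(state, train, k=1, threshold=1.0, penalty=-5.0):
    dists = sorted(sum((a - b) ** 2 for a, b in zip(state, x)) ** 0.5
                   for x in train)
    return 0.0 if dists[k - 1] <= threshold else penalty  # k-th nearest distance

train = [(0, 0, 0), (1, 1, 0), (2, 2, 1)]   # toy training states
```

Adding this term to the reward function shapes the learned policy toward paths that stay near the training distribution.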
\smallskip
\textbf{Example 4:}
Reconsider the last example in which there are three features.
The reward function in the last example charged the same cost for changing any feature.
It might be harder to change one feature than another in real life, e.g., it might be easier for someone to wait to increase their age rather than get a higher educational level.
This can be accounted for by assigning higher costs to features that are harder to change and lower costs to features that are easier to change.
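Such feature-dependent costs amount to a small change in the reward function; a sketch with hypothetical cost values:

```python
# Hypothetical per-feature action costs: raising education-level (b) is
# costlier than waiting for age (a) to increase.
action_cost = {"a+1": 1.0, "b+1": 3.0, "b-1": 3.0}

def reward(state, action, reached_goal):
    """Goal bonus minus the cost of the action just taken."""
    return (10.0 if reached_goal else 0.0) - action_cost[action]
```

Everything else in the \textsf{MDP} stays the same; the agent now prefers paths that lean on the cheaper features.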
\section{Illustrative Examples: Translating CFEs to MDPs}
\label{sec:example}
We now give two examples of translating a CFE problem into an \textsf{MDP}{}. Once modeled as an \textsf{MDP}{}, we can use various off-the-shelf algorithms (from planning or RL) to learn a policy to generate CFEs.
\textbf{Example 1:}
Consider inputs consisting of two features $\var{a}, \var{b} \in \{0, 1, 2\}$.
The combinations of possible values for \var{a} and \var{b} form the state space for the \textsf{MDP}{}, and is represented by $\mathcal{S}$.
\Cref{fig:example1} shows how we can move between different states by changing feature values.
The directed edges show that upon taking a specific action, the agent can move from one state to another, e.g., the agent transits from state \var{(0, 1)} to \var{(0, 2)} by taking the action \var{b+1}, which increments the value of feature \var{b} by 1.
Actions \var{a+1} and \var{a-1} respectively increase and decrease the value of feature \var{a} by 1. This is similar for feature \var{b}. These actions constitute the action space for the \textsf{MDP}{}, and are represented by $\mathcal{A}$.
The third component of our \textsf{MDP}{} is the transition function, represented by $\mathit{T: \mathcal{S} \times \mathcal{A} \to \mathcal{S}}$. This denotes that if the agent takes an action $A \in \mathcal{A}$ in state $S \in \mathcal{S}$, it ends up in a state $S' \in \mathcal{S}$.
This transition function is deterministic: taking the action $A$ in state $S$ lands the agent in the state $S' \in \mathcal{S}$ with probability 1, and the probability of ending up in any other state $S'' \in \mathcal{S}$ is 0.
In probabilistic transition functions, there is a probability distribution over destination states.
It is denoted by $\mathit{T : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]}$ with the additional constraint that $\forall S \in \mathcal{S}, \forall A \in \mathcal{A}: \sum_{S' \in \mathcal{S}} T(S, A, S') = 1$ (probability laws).
The final component of the \textsf{MDP}{} is the reward function. Taking an action costs some amount (negative reward), and reaching desirable states gives a positive reward.
For example, in this \textsf{MDP}{} taking any action costs a constant amount of 1.
All states give 0 reward upon being reached, except for a terminal state ($\phi$), which gives a reward of +10.
The terminal state ($\phi$) can only be reached via (2,2) (using any action), the state in green color.
All actions in the terminal state lead to itself with 0 cost.
This represents the situation in which an ML classifier classifies all individuals as 0, except when their features are (2,2). Therefore (2,2) is the desirable state.
The aim is to learn a policy that reaches a terminal state from any state at lowest cost (e.g., taking the fewest number of steps), if a terminal state is reachable. Cost (or reward) can be discounted in the traditional way using a discount factor $\gamma \in [0,1)$, or even $\gamma=1$ when a fixed horizon (e.g., maximum number of steps) is used.
Formally, for this example with a discrete state space and discrete action space, our \textsf{MDP}{} is:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item States = $\{S \in \mathcal{S}: \{0,0\}, \{0,1\}, \{0,2\}, \{1,0\}, \dots \}$.
\item Actions = $\{A \in \mathcal{A}: \var{a+1}, \var{a-1}, \var{b+1}, \var{b-1} \}$.
\item Transition function $\mathit{T : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}}$
\item Reward function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$.
\item Discount factor $\gamma \in [0,1]$, capturing the tradeoff between current and future reward.
\end{itemize}
Our goal is then the traditional goal of finding a policy $\pi : \mathcal{S} \to \mathcal{A}$ that, given a state $S \in \mathcal{S}$ (in our case, an input datapoint), returns an action $A \in \mathcal{A}$ that represents the best first step to take to achieve a goal (in our case, a first feasible change to a portion of an input datapoint's feature vector). In the context of \textsc{FastCFE}\xspace{}, one would then call this precomputed policy repeatedly to achieve an optimal path to a final goal with a desired class label as output.
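Calling the precomputed policy repeatedly yields the sequential CFE; a minimal sketch (the toy policy and transition function below are hypothetical, not \textsc{FastCFE}'s learned policy):

```python
# Hypothetical rollout: repeatedly apply a precomputed policy until the
# desired state (2, 2) is reached, collecting the sequence of steps.
GOAL = (2, 2)

def policy(state):
    """Toy policy for this example: raise a to 2 first, then raise b."""
    return "a+1" if state[0] < 2 else "b+1"

def step(state, action):
    a, b = state
    return {"a+1": (a + 1, b), "a-1": (a - 1, b),
            "b+1": (a, b + 1), "b-1": (a, b - 1)}[action]

def rollout(state, max_steps=10):
    path = [state]
    while state != GOAL and len(path) <= max_steps:
        state = step(state, policy(state))
        path.append(state)
    return path
```

The returned path is exactly the sequential CFE presented to the individual: a list of intermediate feature vectors, one action apart, ending at the counterfactual state.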
\begin{figure}
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/example1.pdf}
\caption{}
\label{fig:example1}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/example4.pdf}
\caption{}
\label{fig:example4}
\end{subfigure}
\caption{
Transition function for the two examples. Circles show all the states, and edges show possible transitions.
1) Left-hand-side shows the transition function for a dataset with two features \var{a} and \var{b}, with no restrictions on the values both of them can take within the input domain. The transition edges are therefore bidirectional.
2) Right-hand-side shows the transition function for a dataset with three features: age (\var{a}), education-level (\var{b}), and race (\var{r}). The transition edges are unidirectional as both age and education cannot decrease. Since race is immutable, there are no actions for \var{r}. Since increase in education stochastically affects age, the dashed edges represent a 50\% probability of transition. }
\label{fig:example14_}
\end{figure}
\smallskip
\textbf{Example 2:}
Now consider a realistic dataset with 3 features: age (denoted by \var{a}), education-level (denoted by \var{b}), and race (denoted by \var{r}).
Features can be causally related.
In this dataset: age cannot decrease, education-level cannot decrease, education-level influences age, and race is immutable.
When we increase the education-level \var{b} by 1, there is a 50\% chance that the age \var{a} will remain the same and a 50\% chance that it will increase (by 1). These interactions between variables can be captured by a structural causal model (\textsf{SCM}{}), as we discuss in Section~\ref{sec:algo}.
Therefore, the transition function for the \textsf{MDP}{} representing the CFE problem for this dataset is stochastic.
Defined formally, here are the components for this \textsf{MDP}{}:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item States = $\{S \in \mathcal{S}: \{0,0,0\}, \{0,1,0\}, \{0,2,1\}, \{1,0,0\}, \dots \}$.
\item Actions = $\{A \in \mathcal{A}: \var{a+1}, \var{a-1}, \var{b+1}, \var{b-1} \}$.
\item Transition function $\mathit{T : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]}$ s.t.\
$\forall S \in \mathcal{S}, \forall A \in \mathcal{A},\ \ \sum_{S' \in \mathcal{S}} T(S, A, S') = 1.$
\item Reward function $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$.
\item Discount factor $\gamma \in [0,1]$.
\end{itemize}
\Cref{fig:example4} shows the transition function for this problem input.
The action that increases the education-level (represented by \var{b}), now has a probabilistic transition to two destination states, represented by dashed unidirectional edges. Each transition edge has a 50\% probability of occurrence.
Unidirectionality comes from the fact that education-level cannot decrease.
The edges which denote action on the feature \var{a} are also unidirectional as age cannot decrease. The reward function is identical to the previous example; optionally, it can be changed to accommodate desired real-world preference such as adherence to the data manifold (\Cref{sec:desiderata}) or having different costs for changing different features, which we describe in \Cref{sec:algo}.
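The stochastic edge for \var{b+1} can be sketched as a sampling transition function (a hypothetical minimal version, not the paper's code):

```python
import random

# Hypothetical stochastic transition: increasing education-level b has a 50%
# chance of also increasing age a; race r never changes; decreases loop back.
def sample_transition(state, action, rng):
    a, b, r = state
    if action == "b+1":
        if rng.random() < 0.5:        # age increases with probability 0.5
            a += 1
        return (a, b + 1, r)
    if action == "a+1":
        return (a + 1, b, r)
    return (a, b, r)                  # disallowed decreases: stay in place

rng = random.Random(0)                # seeded for reproducibility
```

Averaged over many samples, this reproduces the two dashed 50\% edges in \Cref{fig:example4}; a planner or RL agent only ever interacts with the sampled outcomes.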
Additional examples can be found in~\Cref{sec:example_remain}.
\section{Introduction}
\label{sec:intro}
Machine learning (ML) models are increasingly used to make predictions in systems that directly or indirectly impact humans. This includes critical applications like healthcare~\citep{medical-treatment-ml}, finance~\citep{credit-risk-ml}, hiring~\citep{hiring-ml}, and parole~\citep{parole-ml}.
To understand ML models better and to promote the equitable impact of their deployment in society, it is necessary to assess stakeholders'---both expert~\citep{Holstein19:Improving} and layperson~\citep{Saha20:Measuring}---comprehension of and needs for general observability into their systems~\citep{poursabzi2021manipulating,ehsan2021expanding}.
The nascent Fairness, Accountability, Transparency, and Ethics in machine learning (aka ``FATE ML'') community conducts research to develop methods to detect (and counteract) bias in ML models, develop techniques that make complex models explainable, and propose policies to advise and adhere to regulation of algorithmic decision-making. Here, we focus on black box model explainability.
Research in explainable ML is bifurcated. One high-level approach aims to develop inherently interpretable models such as decision trees and linear models~\citep{Rudin19:Stop}. The downside to these models is their inability to achieve high accuracy on complicated tasks in computer vision and natural language processing.
Another high-level approach aims to utilize existing complex classification techniques (such as deep neural networks), but to bolster them with surrogate models that can render their predictions and/or internal processes understandable~\citep{xai-survey2}. This is achieved through explaining models holistically (global explanation) or single predictions from the model (local explanation).
Global explanation generally requires approximating complex models via interpretable surrogates, while local explanation methods largely approximate a local region around a complex decision boundary.
\textbf{Counterfactual explanations (CFEs).}
A technique for providing local explanations, CFEs explain a classification by finding the minimum change in the original datapoint such that the underlying ML model will end up classifying the new datapoint into a desired class. (We provide an in-depth discussion of terminology in \Cref{sec:counter_vs_contra}.) For example, if an individual were denied a loan request, a CFE might tell them that if they were to increase their income by \$2000, their request would be approved.
CFEs do not necessarily approximate the underlying ML model, and hence the changes they recommend are faithful to the model---they indeed yield the desired class label.
CFEs provide a precise recommendation to an individual and are therefore more directly actionable than other forms of local explainability.
Recent research in this area has aimed to ensure CFEs are actionable and useful by incorporating additional constraints and desiderata into the counterfactual generation problem.
As described in~\Cref{sec:desiderata}, these include notions of sparsity, causality, and realism of CFEs, among others.
What is needed~\citep[see, e.g.,][]{verma2020CFsurvey,Chou21:Counterfactuals,karimi2020survey} is a generalized approach that can accommodate such varied constraints and can also be computed efficiently.
\textbf{Operationalizing counterfactual explanations.} We propose a novel approach (\textsc{FastCFE}\xspace) for generating CFEs for ML models by translating a given counterfactual generation problem into a Markov Decision Process (\textsf{MDP}{}), which is solved using standard algorithms for solving Reinforcement Learning (RL) problems (see~\Cref{sec:background}) or, given complete access to the underlying model, planning.
Since \textsc{FastCFE}\xspace aims to learn a policy that can generate CFEs, upon learning that policy once, we can generate CFEs for multiple datapoints from the same distribution without the need to re-optimize (which is required by most previous approaches; see~\Cref{sec:related}).
Thus, \textsc{FastCFE}\xspace \emph{amortizes} the cost of repeatedly computing CFEs for different inputs on the same model.
\textsc{FastCFE}\xspace also allows enforcing desirable properties of CFEs, such as closeness to the training data distribution (data manifold), respect of causal relations between the features, and mutability and actionability of different features.
\textsc{FastCFE}\xspace only requires access to the \functionname{predict} function of the ML model, and therefore works for \emph{black-box} models and is \emph{model agnostic}.
The output of \textsc{FastCFE}\xspace is a learned policy, which can be used to generate CFEs for any input datapoint. Via the learned policy, \textsc{FastCFE}\xspace outputs CFEs as a sequence of steps that lead an individual to the final counterfactual state. To our knowledge, we are the first to leverage techniques from stochastic control to provide such \emph{sequential CFEs} \citep{Ramakrishnan_Lee_Albarghouthi_2020}. Furthermore, if desired, that sequence can adhere to particular \emph{sparsity} constraints (e.g., only one feature changing per step).
Sequential and ``rolled out'' CFEs have several advantages, directly addressing gaps identified by recent survey papers~\citep{verma2020CFsurvey,Chou21:Counterfactuals,karimi2020survey} and workshops~\citep{Ehsan21:Operationalizing}: 1) optional action sparsity allows an individual to focus their effort on changing a small number of features at a time; 2) if an individual is not able to precisely follow prior advice, they can update their features and get new advice to achieve the nearest counterfactual; and 3) CFEs are presented as a set of discrete, sequential steps, which is closer to real-world actions than the one-step continuous change to a counterfactual state produced by all previous approaches.
Our \textbf{main contribution} is a general-purpose algorithm that translates a standard CFE problem into a Markov decision process (\textsf{MDP}{}). To the best of our knowledge, our stochastic-control-based approach is the first to simultaneously address roadblocks to using CFEs in practice that have been identified by the community~\citep[e.g.,][]{verma2020CFsurvey,Chou21:Counterfactuals,karimi2020survey}.
\section{Evaluation}
\label{sec:eval}
\setlength{\belowcaptionskip}{-9pt}
We next provide experimental validation of \textsc{FastCFE}\xspace using several real-world datasets.
We also explore the impact of varying domain-specific parameters on the policies, and thus sequential CFEs, produced by \textsc{FastCFE}\xspace.
Our experiments answer the following research questions:
\begin{description}[noitemsep,topsep=-1pt,parsep=0pt]
\item[\RQ{1}] Does \textsc{FastCFE}\xspace successfully generate CFEs for various input datapoints (validity)?
\item[\RQ{2}] How much change is required to reach a counterfactual state (proximity)?
\item[\RQ{3}] How many features are changed to reach a counterfactual state (sparsity)?
\item[\RQ{4}] Do the generated CFEs adhere to the data manifold (realisticness)?
\item[\RQ{5}] Do the generated CFEs respect causal relations (feasibility)?
\item[\RQ{6}] How much time does \textsc{FastCFE}\xspace take to generate CFEs (amortizability)?
\end{description}
\paragraph{Datasets. }
We use three datasets in our experiments: German Credit, Adult Income, and Credit Default~\citep{UCI-repo}. These datasets have 20, 14, and 23 features respectively. We omitted the feature \var{education-num} in the Adult dataset as it has a one-to-one mapping with another feature, \var{education}, resulting in 13 features.
We split the datasets into 80\%-10\%-10\% for training, validation, and testing, respectively. Each dataset has two labels, `1' and `0', where `1' is the desired label.
Using the training dataset, we trained a simple classifier: a neural network with two hidden layers (5 and 3 neurons) with ReLU activations.
The test accuracy of the classifier was above 80\% for all the datasets; specifically, 83.0\% for German Credit, 83.7\% for Adult Income, and 83.2\% for Credit Default. These rates are comparable to other simple classification models; still, we note that the competitive performance of the trained classifier \textsf{f} is relatively less important for \textsc{FastCFE}\xspace's validation.
\paragraph{Implementation Algorithm.}
Any appropriate method for computing an optimal policy $\pi^* : \mathcal{S} \to \mathcal{A}$, or any approximately optimal policy, to the \textsf{MDP}{} output of Algorithm~\ref{alg:mdp} can be used. The datasets used in our experiments yield an \textsf{MDP}{} with a continuous state and action space, so we train the agent using a policy gradient algorithm. Specifically, we use proximal policy optimization (PPO)~\citep{PPO2017} with generalized advantage estimate (GAE)~\citep{GAE2018} to train the agent.
The features in the dataset are scaled between $-1$ and $1$ before training both the classifier and the agent.
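The scaling step can be sketched as a plain per-feature min-max transform to $[-1, 1]$ (a minimal stand-in; the authors' exact preprocessing may differ). Ranges are fit on the training split and reused for validation and test data.

```python
import numpy as np

def scale_to_unit_interval(X, lo=None, hi=None):
    """Min-max scale each feature column to [-1, 1]; fit the ranges on
    training data and pass (lo, hi) back in to transform other splits."""
    lo = X.min(axis=0) if lo is None else lo
    hi = X.max(axis=0) if hi is None else hi
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return 2.0 * (X - lo) / span - 1.0, lo, hi
```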
\subsection{Baselines} Since, to our knowledge, \textsc{FastCFE}\xspace is the first approach to generate amortized CFEs for black-box models, there exist no previous approaches against which we can directly compare. Nevertheless, we compare \textsc{FastCFE}\xspace to several previous popular CFE generating approaches and two other baselines that we developed.
\noindent\textbf{Baselines we developed.} Since no prior approaches generate CFEs in an amortized manner for black-box models, we developed two baselines for comparison:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item \textbf{Random:} This approach tries to generate CFEs by executing random actions from the action space.
\item \textbf{Greedy:} This approach tries to generate CFEs by changing features greedily.
At each step, it executes all the actions in the action space, and moves forward with the one which gives the highest reward. Naturally, this is expensive.
\end{itemize}
Note that, in order to have a finite number of actions to evaluate and greedily choose the best action, the greedy approach requires the action space to be discretized.
Therefore we compare \textsc{FastCFE}\xspace, the random approach, and the greedy approach using a discretized action space.
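A minimal sketch of the greedy baseline, assuming a discretized action list and access only to a black-box probability function (all names here are illustrative):

```python
import numpy as np

def greedy_cfe(x0, actions, predict_proba, max_steps=200):
    """At each step, try every discrete action and keep the one with the
    highest predicted probability of the desired class; stop at a
    counterfactual state (prediction '1') or after max_steps actions.
    Expensive: |actions| model queries per step."""
    x = np.array(x0, dtype=float)
    for _ in range(max_steps):
        if predict_proba(x) >= 0.5:
            return x, True
        candidates = [x + a for a in actions]
        x = max(candidates, key=predict_proba)
    return x, predict_proba(x) >= 0.5
```

Note that, unlike a learned policy, this search repeats its full cost for every new input datapoint, which is exactly the amortization gap \textsc{FastCFE}\xspace targets.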
\noindent\textbf{Previous CFE generating approaches.}
Based on the level of required model access, previous CFE generating approaches can be categorized as: 1) access to complete model internals, i.e. weights of neurons or nodes of decision trees, 2) access to model gradients (restricted to differentiable models like neural networks), and 3) access to only the prediction function (black-box). We choose popular approaches from all three categories.
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item \textbf{Complete model internal access.}
We chose MACE~\cite{karimi_model-agnostic_2020} from this category.
\item \textbf{Gradients access.} We chose DiCE~\cite{mothilal_explaining_2020} from this category.
\item \textbf{Black-box.} We chose three approaches in this category, which were available in the open-source repository of DiCE~\citep{DiCE-repo}. These are black-box, model-agnostic variants of the original DiCE approach (which requires gradients): DiCE-Genetic, DiCE-KDTree, and DiCE-Random.
\end{itemize}
\subsection{Experimental Methodology}
\begin{table}
\footnotesize
\centering
\caption{Causal relations and immutable features we identified for the datasets used in experiments.
}
\label{tab:causal-rels}
\resizebox{\columnwidth}{!}{
\begin{tabular}{m{0.15\columnwidth}m{0.45\columnwidth}m{0.4\columnwidth}}
\toprule
Dataset & Causal relations & Immutable features \\
\midrule
German Credit & Age and Job cannot decrease & Foreign worker, Number of liable people, Personal status, Purpose \\ \\
Adult \mbox{Income} & Age and Education cannot decrease, increasing Education increases Age & Marital-status, Race, Native-country, Sex \\ \\
Credit \mbox{Default} & Age and Education cannot decrease, increasing Education increases Age & Sex, Marital status \\
\bottomrule
\end{tabular}
}
\end{table}
Here we describe the details of running \textsc{FastCFE}\xspace and other baselines.
\paragraph{\textsc{FastCFE}\xspace specifics.}
Since \textsc{FastCFE}\xspace requires the set of immutable features and causal constraints as input, we infer them for each dataset.
They are shown in \Cref{tab:causal-rels}. As described in Algorithm~\ref{alg:mdp}, this directly impacts the transition function in the MDP.
We use a particular instantiation of~\Cref{alg:mdp}, available in our open source codebase, in the experiments:
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*]
\item Action space: To produce sequential CFEs, actions primarily modify only one feature at a time. However, endogenous features may change due to changes in their parents.
\item Cost of action: For the experiments, we do not penalize the agent for taking any action, thus \textsf{DistF} always returns 0. We describe the rationale for this later.
\item Data manifold distance: Following previous work~\citep[e.g.,][]{dandl_multi-objective_2020,Kanamori2020:DACE}, we train a $k$-Nearest Neighbor (KNN) algorithm on the training dataset and use it to find the $\ell_1$ distance of a given datapoint from its nearest neighbor ($k = 1$) in the dataset (\textsf{DistD}). We use several values of the data manifold adherence factor $\lambda$ in the experiments.
\item Counterfactual state reward (\textsf{CFReward}): As mentioned earlier, when the agent reaches a state, it receives a reward equal to the probability of that state belonging to the desired class. Whenever a counterfactual state is reached, the agent is rewarded with 100 points.
\item Discount Factor: We use a discount factor $\gamma=0.99$. This value encourages the agent to learn a policy that takes a datapoint to its counterfactual state in a small number of steps, thus removing the need for \textsf{DistF}.
\end{itemize}
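The data-manifold term and the shaped reward described above can be sketched as follows; `X_train`, the 0.5 threshold, and the exact way the terms are combined are illustrative assumptions.

```python
import numpy as np

def manifold_distance(x, X_train):
    """DistD: l1 distance from x to its nearest neighbor (k = 1) in the
    training data; larger values mean the state is further off-manifold."""
    return np.abs(X_train - x).sum(axis=1).min()

def shaped_reward(p_desired, x, X_train, lam, cf_bonus=100.0):
    """Per-step reward: class-'1' probability, minus lambda times the
    manifold distance, plus a 100-point bonus at a counterfactual state."""
    r = p_desired - lam * manifold_distance(x, X_train)
    return r + (cf_bonus if p_desired >= 0.5 else 0.0)
```

Setting $\lambda = 0$ recovers a reward that ignores the manifold; larger $\lambda$ penalizes off-manifold intermediate states, matching the hyperparameter study in \Cref{sec:hyperparam}.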
\paragraph{MACE specifics.} MACE requires the user to specify the type of ML model to be used for classification. We choose logistic regression (LR) and random forest (RF) as the underlying models. We could not use neural networks because of the long runtime of MACE (see \Cref{sec:results}).
All approaches are requested to generate CFEs for all test datapoints that are predicted as `0' by the underlying classifier.
Due to the small size of the German Credit dataset, we generate CFEs for datapoints that are predicted as `0' both in the training and test sets.
Thus we generate CFEs for 257 datapoints in the German Credit, 7229 in the Adult Income, and 5363 in the Credit Default datasets.
Since MACE uses a different classifier, the number of datapoints predicted as `0' was slightly different. More details are provided in \Cref{sec:results}.
\textsc{FastCFE}\xspace, the random approach, and the greedy approach stop when they reach a counterfactual state (predicted as `1') or exhaust 200 actions. The other baselines have no timeout.
\subsection{Results}
\label{sec:results}
\input{neurips_table}
\Cref{tab:finalresults} shows the performance of \textsc{FastCFE}\xspace and all the baselines on the CFE desiderata. We report the average validity, average proximity (separately for the numerical and categorical features), average sparsity, average data manifold distance, average adherence to causal constraints of the generated CFEs, and the average time to generate the CFEs.
\noindent \textbf{Answer to \RQ{1}:} As shown in \Cref{tab:finalresults}, \textsc{FastCFE}\xspace has very high validity for all datasets. For Adult Income, \textsc{FastCFE}\xspace achieves the highest validity at 100\%, while for Credit Default and German Credit it achieves the second and third highest validity, respectively.
Random and greedy approaches have a low validity in general.
DiCE-Genetic has a validity in the high range in general, but this comes at the cost of proximity, sparsity, and data manifold distance.
DiCE-KDTree is unable to generate a CFE for even a single datapoint in any of the three datasets. Unsurprisingly, DiCE-KDTree is able to generate some CFEs when all features are deemed mutable.
DiCE-Random achieves 100\% validity for all datasets, and just like DiCE-Genetic this comes at the cost of proximity, sparsity, and data manifold distance.
The conclusion is similar for DiCE-Gradient's validity.
MACE also achieves 100\% validity, but is very expensive to run. Due to this, it was impractical to run MACE on the larger datasets, Adult Income and Credit Default (we report MACE results only for the German Credit dataset). MACE was even more expensive when the underlying classifier was a neural network, and we had to abandon that experiment. For the two underlying classifiers, MACE predicted `0' for 210 datapoints with logistic regression (LR) and 287 datapoints with the random forest (RF), and was then tasked with generating CFEs for these datapoints.
\textbf{Answer to \RQ{2}:} We measure proximity for numerical and categorical features separately (Prox-Num and Prox-Cat, respectively). For numerical features, the distance is the sum over features of the $\ell_1$ distance divided by the median absolute deviation of that feature.
For categorical features, the distance is the number of categorical features changed divided by the total number of categorical features. These metrics were proposed and used in previous works~\cite{mahajan_preserving_2020,mothilal_explaining_2020}.
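A sketch of these two proximity metrics, assuming precomputed per-feature median absolute deviations (`mad`) for the numerical features; index sets and names are illustrative.

```python
import numpy as np

def proximity(x0, xcf, num_idx, cat_idx, mad):
    """Prox-Num: sum over numerical features of |change| / MAD of that
    feature (MAD computed on the training data). Prox-Cat: fraction of
    categorical features whose value changed."""
    d = np.abs(np.asarray(xcf) - np.asarray(x0))
    prox_num = (d[num_idx] / mad).sum()
    prox_cat = (d[cat_idx] > 0).mean() if len(cat_idx) else 0.0
    return prox_num, prox_cat
```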
\textsc{FastCFE}\xspace's CFEs are most proximal for Adult Income and Credit Default datasets, and come second for German Credit.
The random approach and the four variants of DiCE have high proximity values.
The greedy approach performs well on this metric, but its validity is very low.
MACE's performance is about average.
\noindent \textbf{Answer to \RQ{3}:}
\textsc{FastCFE}\xspace achieves the lowest sparsity for all the datasets. Note that, for \textsc{FastCFE}\xspace we measure sparsity at the start and end point of a CFE, and not at every step. This is in accordance with previous works~\cite{mothilal_explaining_2020}.
Random, DiCE-Genetic, and DiCE-Gradient's performance is abysmal.
This is surprising because DiCE-Gradient has a post-hoc step specifically for reducing sparsity.
Greedy, MACE, and DiCE-Random's performance is about average.
\noindent \textbf{Answer to \RQ{4}:}
\textsc{FastCFE}\xspace achieves low average manifold distance.
It performs second best for Adult Income and Credit Default and is in the middle for German Credit.
The greedy approach and MACE also perform well on this metric.
The random approach and all variants of DiCE perform poorly on this metric.
In \Cref{sec:hyperparam}, we explore the impacts of hyperparameters related to data manifold distance.
\noindent \textbf{Answer to \RQ{5}:}
By virtue of construction, \textsc{FastCFE}\xspace always respects causal relations, achieving 100\% adherence in all datasets. The DiCE-based approaches and MACE take the immutable features as input, but not the other causal constraints, and hence do not perform well.
The greedy approach performs well on this metric, even though it does not have a knowledge of the causal constraints.
The random approach performs abysmally on this metric.
\noindent \textbf{Answer to \RQ{6}:}
The final column in \Cref{tab:finalresults} reports the average computation time per CFE. By virtue of amortization, \textsc{FastCFE}\xspace can generate CFEs very quickly. Therefore, on average it takes the lowest time in all datasets. The next best performers are DICE-Random and the random approach. Even then, \textsc{FastCFE}\xspace is \textbf{11$\times$} faster than the random approach on average (up to \textbf{15$\times$} faster), and \textsc{FastCFE}\xspace is \textbf{8$\times$} faster than DiCE-random on average (up to \textbf{15$\times$} faster).
The difference is even more staggering for DiCE-Genetic and the greedy approach.
MACE and Dice-Gradient were the slowest.
\textsc{FastCFE}\xspace is about \textbf{1000$\times$} faster than MACE on average (up to \textbf{1447$\times$} faster) and \textbf{4500$\times$} faster than DiCE-Gradient on average (up to \textbf{9400$\times$} faster).
While amortization allows for rapid generation of new CFEs, the one-time cost of training the agent(s) tended to range from 2 to 12 hours, depending on the dataset.
\section{Effects of different hyperparameters on CFE evaluation metrics}
\label{sec:hyperparam}
\begin{table}[h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l l l l l l l l l l}
\toprule
Dataset & $\lambda$ & \#DataPts. & Validity & Prox-Num & Prox-Cat & Sparsity & Manifold dist. & Causality & Time (s) \\
\midrule
\multirow{5}{*}{German Credit}
& 0 & 257 & 94.6 & 0.09 & 0.060 & 1.17 & 0.70 & 100.0 & 0.12 \\
& 0.1 & 257 & 97.3 & 0.10 & 0.063 & 1.22 & 0.72 & 100.0 & 0.07 \\
& 1 & 257 & 95.3 & 0.11 & 0.059 & 1.20 & 0.71 & 100.0 & 0.07 \\
& 10 & 257 & 40.5 & 0.0 & 0.077 & 1.00 & 0.62 & 100.0 & 0.15 \\
& 100 & 257 & 42.4 & 0.0 & 0.079 & 1.03 & 0.64 & 100.0 & 0.22 \\
\midrule
\multirow{5}{*}{Adult Income}
& 0 & 7229 & 100.0 & 0.04 & 0.0 & 1.00 & 0.18 & 100.0 & 0.028 \\
& 0.1 & 7229 & 100.0 & 0.04 & 0.0 & 1.00 & 0.18 & 100.0 & 0.016 \\
& 1 & 7229 & 100.0 & 0.04 & 0.0 & 1.00 & 0.18 & 100.0 & 0.016 \\
& 10 & 7229 & 100.0 & 0.04 & 0.0 & 1.00 & 0.18 & 100.0 & 0.016 \\
& 100 & 7229 & 100.0 & 0.04 & 0.0 & 1.00 & 0.18 & 100.0 & 0.028 \\
\midrule
\multirow{5}{*}{Credit Default}
& 0 & 5363 & 99.96 & 0.007 & 0.11 & 1.00 & 0.32 & 100.0 & 0.08 \\
& 0.1 & 5363 & 99.96 & 0.015 & 0.11 & 1.00 & 0.32 & 100.0 & 0.05 \\
& 1 & 5363 & 79.38 & 0.001 & 0.14 & 1.28 & 0.37 & 100.0 & 0.08 \\
& 10 & 5363 & 47.29 & 0.44 & 0.0 & 1.00 & 0.24 & 100.0 & 0.18 \\
& 100 & 5363 & 5.74 & 1.22 & 0.0 & 1.00 & 0.60 & 100.0 & 0.32 \\
\bottomrule
\end{tabular}
}
\caption{Comparison of the evaluation metrics of CFEs for different values of the $\lambda$ hyperparameter, which determines the closeness to the training data manifold.
}
\label{tab:hyperparams_effects}
\end{table}
\Cref{tab:hyperparams_effects} shows the effect of increasing the penalty for leaving the training data manifold, which is enforced at each step of a counterfactual path.
With increasing $\lambda$, the manifold distance should become smaller.
Increasing $\lambda$ also makes it harder for the agent to learn an effective policy, and therefore the validity can also go down.
We observe both of these expected trends in \Cref{tab:hyperparams_effects}, especially for the German Credit and Credit Default datasets.
\section{Related Work}
\label{sec:related}
Literature on counterfactual explanations for ML is relatively recent, with the first algorithm proposed in~\citeyear{wachter_counterfactual_2017}.
\citet{wachter_counterfactual_2017} proposed finding counterfactuals as a constrained optimization problem where the goal is to find the minimum change in the features such that the new datapoint has the desired label. This approach was gradient-based, did not consider actionability among features, did not adhere to data manifold or respect causal relations, and the optimization problem needed to be solved for generating a CFE for each input datapoint.
Other desiderata mentioned in~\Cref{sec:desiderata} were proposed by other papers: 1) approaches that generate multiple, diverse counterfactuals for a single input datapoint~\citep{mothilal_explaining_2020,dandl_multi-objective_2020,mahajan_preserving_2020,karimi_model-agnostic_2020,sharma_certifai_2019,russell_efficient_2019}, 2) approaches that generate counterfactual for black-box models and are model-agnostic~\citep{inverse-classification2,medina_comparison-based_2018,guidotti_local_2018,grath_interpretable_2018,sharma_certifai_2019,rathi-generating:2019,white_measurable_2019,poyiadzi_face_2020,keane2020good,dandl_multi-objective_2020}, 3) approaches that generate CFEs adhering to data manifold~\citep{dhurandhar_explanations_2018,dhurandhar_model_2019,joshi_towards_2019,van_looveren_interpretable_2020,mahajan_preserving_2020,pawelczyk_learning_2020,keane2020good,Grace:2019,dandl_multi-objective_2020,Kanamori2020:DACE}, 4) approaches that generate CFEs that respect causal relations~\citep{mahajan_preserving_2020,karimi_algorithmic_2020,karimi-imperfect:2020}, 5) approaches that generate fast counterfactuals~\citep{mahajan_preserving_2020}.
Only \citet{mahajan_preserving_2020} proposed an approach that can generate multiple CFEs for many datapoints after optimizing once, i.e., \emph{fast} counterfactuals, but their approach is gradient-based, works only for differentiable models, and is not black-box.
Our approach overcomes this limitation and generates both \emph{fast} and \emph{model-agnostic} CFEs, which adhere to data manifold and respect causal relations.
Among the previous approaches that respect causal relations, only \citet{mahajan_preserving_2020} works with a partial \textsf{SCM}, while others require a complete causal graph or a complete \textsf{SCM}~\citep{karimi_algorithmic_2020,karimi-imperfect:2020}, which are mostly unavailable in the real world. Our approach also works with a partial \textsf{SCM}.
All the previous works give a single-shot solution for getting to a counterfactual state from an input datapoint. Our approach overcomes this limitation by proposing a novel algorithm that generates \emph{sequential} CFEs.
\citet{verma2020CFsurvey} and \citet{karimi2020survey} have collected and summarized recent works in counterfactual explainability. We point the readers to these surveys for an excellent in-depth review of the research landscape in this area.
2209.12726
\section{Introduction}
With the development of the VLSI industry and the consequent scaling down of devices following Moore's law, transistor sizes have decreased significantly, and supply voltages have been reduced as a result. Voltage regulators are therefore needed, which output a fixed voltage despite varying input voltages; the other blocks of the IC receive this fixed output voltage as their supply. There are two main types of regulators: linear and switching. LDOs are linear regulators that are commonly used in VLSI chips. The reduction in supply voltage due to scaling has made LDOs an important component of power-management ICs, because lower input-output voltage differences, i.e., lower dropout, are required.
An LDO consists of four main blocks~\cite{ref_article10}: an error amplifier, a pass element, a feedback network, and a load. The error amplifier is a differential amplifier based on an OTA (Operational Transconductance Amplifier), whose characteristics are similar to those of an op-amp; OTAs, however, are designed to drive capacitive loads, while op-amps are designed for resistive loads. Either MOSFETs or BJTs can be used as the pass element; MOSFETs are preferable because they are less sensitive to temperature changes~\cite{ref_article11}. A simple resistive voltage-divider network can serve as the feedback network.
The design has been simulated using LTspice developed by Linear Technology and Analog Devices. It is widely used for analog circuit simulations.
\section{Comparative study between performances of PMOS and NMOS LDOs}
The two main architectural types of LDO~\cite{ref_article1} are shown in
Fig.~\ref{fig1}; the difference between them is the type of pass transistor. The first uses a PMOS pass transistor, whereas the second uses an NMOS pass transistor.
\begin{figure}
\includegraphics[width=\textwidth]{fig1.png}
\caption{Block level design of PMOS and NMOS based LDO} \label{fig1}
\end{figure}
A typical LDO circuit comprises three major components: a high-gain error amplifier~\cite{ref_article12}, a pass transistor, and a feedback network. The error amplifier compares the feedback voltage with a reference voltage and drives the pass transistor to control the load current; the resistors act as voltage-voltage feedback, sensing the LDO output voltage and returning a scaled copy to the error amplifier.
\subsection{The reason behind selecting a PMOS-based design}
The dropout voltage of a PMOS pass transistor is lower than that of an NMOS one. The NMOS pass transistor, being connected in a common-drain configuration, offers a small output resistance at high load currents owing to its increased transconductance; however, it makes the IC more cumbersome to fabricate, since an additional charge pump~\cite{ref_article3} is required to support a wide range of load currents. A PMOS LDO also has higher loop gain than an NMOS-based LDO. On the other hand, PMOS is somewhat slower than NMOS: since electron mobility is greater than hole mobility, a PMOS design occupies a larger area and therefore has larger capacitances, making the PMOS LDO~\cite{ref_article5} slower than its NMOS counterpart. Weighing these trade-offs, we selected PMOS for our design.
\section{Error Amplifier Design}
The error amplifier~\cite{ref_article4} is a two-stage OTA. The first stage is a differential amplifier; the second is a gain-enhancing common-source stage. Current-mirror biasing for the first stage is formed by MOSFETs $M_1$ and $M_6$, while $M_2$ and $M_3$ form the inverting and non-inverting terminals of the OTA, respectively. The design targets 180\,nm technology. The current flowing through $M_6$ is copied into $M_1$ and is twice the current through each of $M_4$ and $M_5$, since equal gate-source voltages force the same current through $M_4$ and $M_5$. $C_c$ is the Miller compensation capacitance and $C_L$ is the load capacitance; the Miller capacitance is added to increase the stability of the error amplifier.
\begin{figure}
\includegraphics[width=\textwidth]{fig2.png}
\caption{Circuit diagram of error amplifier} \label{fig2}
\end{figure}
\section{Significance of Miller Compensation Capacitance}
The compensation capacitance stabilizes the system by increasing its phase margin. MOSFET $M_4$ is diode-connected, so it has a very low output impedance ($\sim 1/g_{m_{4}}$); hence the overall port impedance at the drain of $M_4$ is low. However, the port impedances at the drains of $M_5$ and $M_8$ are very high (comparable to $r_o$). This forms two low-frequency poles ($\sim 1/(r_{o} C_{L})$). Each pole contributes a $-90^\circ$ phase shift, for a total phase shift of $-180^\circ$ at low frequencies. This results in a $0^\circ$ phase margin, and the OTA would oscillate in negative feedback. We therefore add a compensation capacitor~\cite{ref_article2} between the drains of $M_5$ and $M_8$. By Miller's theorem, the output node at the drain of $M_5$ now sees a much larger capacitance, so this pole shifts toward lower frequency while the other pole shifts toward higher frequency. The system thus essentially becomes a first-order system with a significant phase margin ($\sim 60^\circ$), and the OTA functions as an amplifier instead of an oscillator.
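A rough numerical illustration of this pole-splitting argument, using assumed small-signal values (not the paper's actual device parameters, apart from the 3\,pF Miller capacitance):

```python
import math

# Illustrative (assumed) small-signal values for a two-stage OTA.
gm2 = 1e-3           # second-stage transconductance (S)
ro1 = ro2 = 200e3    # stage output resistances (ohm)
CL = 10e-12          # load capacitance (F)
Cc = 3e-12           # Miller compensation capacitance (F), as in the design
A2 = gm2 * ro2       # second-stage gain that magnifies Cc (Miller's theorem)

# Dominant pole: the first-stage output sees Cc multiplied by (1 + A2).
f_p1 = 1.0 / (2 * math.pi * ro1 * Cc * (1 + A2))
# Non-dominant pole is pushed out to roughly gm2 / (2*pi*CL).
f_p2 = gm2 / (2 * math.pi * CL)
```

With these values the dominant pole lands in the kHz range while the non-dominant pole moves out to the tens of MHz, giving the near first-order roll-off described above.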
\section{Proposed PMOS LDO}
The architecture of this LDO is similar to a basic LDO regulator. A voltage reference of 1.2V~\cite{ref_article6} is generated by the bandgap which is given to the negative terminal of the error amplifier. The output of the error amplifier block is fed to the gate terminal of the PMOS pass network and at the drain, a resistive divider network is connected. The feedback voltage from the resistive divider is fed back to the positive terminal of the error amplifier to ensure negative feedback.
\begin{figure}
\includegraphics[width=\textwidth]{fig3.png}
\caption{Circuit design of PMOS based LDO} \label{fig3}
\end{figure}
\subsection{Working Principle of Circuit}
From Fig.~\ref{fig2}, the error amplifier is in negative feedback: by the Barkhausen criterion, the total phase shift should be $-180^\circ$ for negative feedback. The gate-drain phase shift is $-180^\circ$ and the voltage is fed back to the positive terminal, so the total phase shift is $-180^\circ$. Assuming a virtual short for an op-amp in negative feedback, $V_+ = V_- = V_{REF}$, which leads to the following working formula for $V_{OUT}$.
\begin{equation}
V_{out} = V_{REF} \cdot \frac{R_{1} + R_{2}}{R_{2}}
\end{equation}
Now, let's understand the LDO regulation principle intuitively. If the load current increases, the current through the pass element cannot increase immediately. It will undergo some transient. Initially, the required additional load current is drawn from the load capacitor. This will lead to a decrease in the output node voltage. This output voltage reduction will lead to a decrease in the feedback voltage. Therefore, the gate voltage of the pass element will also decrease, because the voltage has been fed back to the positive terminal. Thus, the source-gate voltage of the PMOS pass element increases, which finally increases the current through the pass network to the required level. The same is the scenario when the load current drops. In this way, the output voltage and the load current are maintained by the LDO.
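A quick numerical check of Eq.~(1): with the 1.2\,V reference and illustrative (assumed, not from the paper) divider values $R_1 = 20\,\mathrm{k\Omega}$ and $R_2 = 30\,\mathrm{k\Omega}$, the regulated output is 2.0\,V.

```python
V_REF = 1.2           # bandgap reference (V), as in the design
R1, R2 = 20e3, 30e3   # illustrative divider resistances (ohm)
V_out = V_REF * (R1 + R2) / R2   # Eq. (1): 1.2 * 50k/30k = 2.0 V
```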
\subsection{Simulation and Analysis}
The two-stage OTA~\cite{ref_article7} forming the error amplifier was simulated in LTspice. The OTA was designed for a DC differential voltage gain greater than 5000 and a gain-bandwidth product (GBP) of 5\,MHz. The resulting Bode plot of the OTA is shown in Fig.~\ref{fig4}.
\begin{figure}
\includegraphics[width=\textwidth]{fig4.png}
\caption{Bode plot of the OTA} \label{fig4}
\end{figure}
The DC gain is found to be 81\,dB ($\approx 11220$), which exceeds the target of 5000. The 3\,dB cutoff frequency is found to be 440\,Hz; thus the GBP evaluates to about 5\,MHz, matching our design constraint.
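The quoted numbers can be checked directly: converting 81\,dB to a linear voltage gain and multiplying by the 3\,dB frequency recovers the gain-bandwidth product.

```python
dc_gain_db = 81.0                    # measured DC gain (dB)
f_3db = 440.0                        # measured 3 dB cutoff (Hz)
dc_gain = 10 ** (dc_gain_db / 20)    # linear voltage gain: ~11220
gbp = dc_gain * f_3db                # ~4.94e6 Hz, i.e. ~5 MHz as designed
```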
\begin{figure}
\includegraphics[width=\textwidth]{fig5.png}
\caption{The LDO circuit was simulated for a load current~\cite{ref_article9} of 10mA and the input supply was varied from 5.0V to 1.0V. The output was maintained at 2.0V as given by equation (1) until $V_{in}$ drops below 2.6V. Therefore, the dropout voltage is (2.6 - 2.0) V = 0.6V} \label{fig5}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{fig6.png}
\caption{Behaviour of the LDO for varying load currents} \label{fig6}
\end{figure}
We observed the behaviour of the LDO for varying load currents in Fig.~\ref{fig6}. It is found that as the load current increases, the performance of the LDO degrades, since the dropout voltage increases~\cite{ref_article8,ref_article13}. The LDO supplies a maximum current of 23\,mA, beyond which the output voltage cannot be regulated.
\subsection{Result Summary}
\begin{table}
\centering
\caption{Simulation and analysis results of the PMOS based LDO}\label{tab1}
\begin{tabular}{|l|ll|}
\hline
Parameters & \multicolumn{2}{l|}{Values} \\ \hline
Input Voltage & \multicolumn{2}{l|}{1.0V - 5.0V} \\ \hline
Output Voltage & \multicolumn{2}{l|}{2.0V} \\ \hline
Reference Voltage & \multicolumn{2}{l|}{1.2V} \\ \hline
\multirow{3}{*}{Dropout Voltage} & \multicolumn{1}{l|}{$I_{LOAD}$ = 5mA} & 0.3V \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 7.5mA} & 0.45V \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 10mA} & 0.6V \\ \hline
\multirow{3}{*}{Power Consumed} & \multicolumn{1}{l|}{$I_{LOAD}$ = 5mA} & 7.42mW \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 7.5mA} & 10.25mW \\ \cline{2-3}
& \multicolumn{1}{l|}{$I_{LOAD}$ = 10mA} & 12.31mW \\ \hline
Miller Capacitance & \multicolumn{2}{l|}{3pF} \\ \hline
Maximum Tolerable Load Current & \multicolumn{2}{l|}{23mA} \\ \hline
\end{tabular}
\end{table}
\section{Conclusion}
This paper illustrates how a low-dropout (LDO) voltage regulator topology can be applied to voltage regulator design and why PMOS-based designs are preferred. The simulation clearly shows that the proposed design can tolerate up to 23\,mA of load current, which is an excellent result compared to other LDOs. The PMOS-based design was stabilized with an efficient compensation technique, and a Miller capacitance of 3\,pF was used to achieve a $\sim 60^\circ$ phase margin.
\bibliographystyle{splncs04}
2101.10741
\section{Introduction} \label{sec:intro}
Galactic outflows driven by energy and momentum of an active galactic nucleus (AGN) and/or supernovae (SNe) are now understood to be an indispensable component of the galactic ecosystem \citep{2012ARA&A..50..455F, 2014ARA&A..52..589H, 2017hsn..book.2431H, 2018Galax...6..114Z}.
Multi-wavelength observations over the past decades have established an ever-growing inventory of galactic outflows, leading to the recognition that these outflows typically involve multi-scales and multi-phases.
However, our physical understanding of galactic outflows, in particular their mass budget, energetics and life cycle, is still far from complete.
The Galactic center, loosely defined here as the innermost few hundred parsec region of our Galaxy, provides the closest and perhaps the best laboratory for studying the formation and early evolution of a galactic outflow.
Observational evidence has accumulated over recent years for a multi-phase outflow from the Galactic center \citep{2003ApJ...582..246B, 2010ApJ...708..474L, 2019ApJ...875...32N}, collectively known as the Galactic Center Lobe (GCL; \citealp{1984Natur.310..568S}), a loop-like feature extending vertically out to $\gtrsim$ 1 degree (at a presumed distance of 8 kpc, $1^\circ$ corresponds to 140 pc) north of the disk mid-plane.
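As a quick check of the quoted angular-to-physical conversion (at an assumed distance of 8\,kpc, $1^\circ$ corresponds to about 140\,pc):

```python
import math

D = 8000.0        # assumed distance to the Galactic center (pc)
theta_deg = 1.0   # angular extent (degrees)
# projected physical size at distance D; ~140 pc per degree at 8 kpc
size_pc = D * math.tan(math.radians(theta_deg))
```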
Compelling evidence also exists for outflows at still larger (kiloparsec-) scales \citep{2010ApJ...724.1044S, 2013Natur.493...66C, 2018ApJ...855...33D, 2020Natur.584..364D, Predehl2020}, but the physical relation between the outflows on different scales, e.g., whether they were produced by the same mechanism, remains an open question.
More recently, our view of the Galactic center outflow is further sharpened.
Based on high-resolution radio continuum observations afforded by the MeerKAT radio telescope, \citet{2019Natur.573..235H} found evidence for a pair of radio bubbles in the Galactic center,
which are roughly symmetric about the disk mid-plane with a width of 140 pc and a full length of 430 pc.
The northern bubble is spatially coincident with the GCL, but it is more clearly limb-brightened. In particular, the eastern side of the radio bubbles is delineated by the famous Radio Arc \citep{1984Natur.310..557Y} and its northern and southern extension toward higher latitudes; the western side is also bounded by prominent non-thermal filaments (NTFs; \citealp{1984Natur.310..557Y}).
Non-thermal emission is predominant in the radio bubbles at the observed frequency of 1284 MHz, although the GCL is known to show substantial thermal emission at different wavebands \citep{2003ApJ...582..246B, 2010ApJ...708..474L,2019PASJ...71...80N}.
Strikingly, the shells of the radio bubbles delineate the so-called ``X-ray chimneys'' recently discovered by X-ray observations \citep{2019Natur.567..347P}, a pair of diffuse, thermal X-ray features extending above and below the mid-plane.
This strongly suggests a physical relation between the two features, reminiscent of a collimated hot gas outflow with an expanding shell \citep{2021A&A...646A..66P}.
Proposed origins for the Galactic center outflow as well as for the outflows on larger scales (i.e., the Fermi bubbles and the recently discovered eROSITA bubbles; \citealp{2010ApJ...724.1044S, Predehl2020}) fall into two categories \citep{2019Natur.573..235H}: (i) past activity from the central
super-massive black hole (SMBH), commonly known as Sgr A*, which is currently in a quiescent state \citep{Cheng2011, Zubovas2011, 2012MNRAS.424..666Z, Zhang2020, 2020ApJ...904...46K}; or (ii) episodic or continuous nuclear star formation \citep{2010RvMP...82.3121G, 2014MNRAS.444L..39L, Crocker2015}.
In principle, both processes can drive an energetic outflow and produce the bubble-like structures observed at multiple wavelengths and on multiple scales.
Therefore, quantitative modeling and close comparison with the observations are crucial for distinguishing between the two scenarios.
In the literature, there have been a number of numerical simulations of a large-scale outflow from the Galactic center, which focus on the formation of the Fermi bubbles by AGN jets or AGN winds \citep{Guo2012, Mou2014, Mou2015, Cheng2015, Zhang2020}.
In addition, \cite{Sarkar2015} and \cite{2017MNRAS.467.3544S} investigate the formation of the Fermi bubbles by simulating a nuclear starburst-driven wind.
In this work, we use three-dimensional magnetohydrodynamic (MHD) simulations to investigate the specific scenario in which the radio bubbles/X-ray chimneys are the manifestation of an outflow driven by sequential SN explosions concentrated in the Galactic center; to our knowledge, this is the first attempt of its kind.
Recently, \citet{2017ApJ...841..101L} and \citet{Li2020} have performed advanced numerical simulations to study SNe-driven outflows on a similar physical scale, but these simulations were run with physical conditions typical of galactic disks.
The Galactic center, on the other hand, is a unique environment characterized by a strong gravity, a concentration of massive stars, and a strong and ordered magnetic field.
In particular, the presence of the NTFs, which have a strong tendency to be vertically oriented with respect to the disk, points to a vertical magnetic field in the Galactic center (see review by \citealp{Ferriere2009}).
Theoretical studies have demonstrated that a strong external magnetic field can significantly affect the evolution of a supernova remnant (SNR; \citealp{1991MNRAS.252...82I, 1995MNRAS.274.1157R, Wu2019}), as
the magnetic pressure confines the expansion of the SN ejecta in such a way that they preferentially propagate along the direction of the magnetic field.
We are thus motivated to perform numerical simulations to test the scenario of an SN-driven, magnetically-collimated outflow for the radio bubbles/X-ray chimneys.
In Section \ref{sec:sim}, we describe our basic model and settings of the simulation.
In Section \ref{sec:res}, we present the simulation results and confront them with the observations.
In Section \ref{sec:dis}, we discuss the implications as well as limitations of our results.
A summary is given in Section \ref{sec:sum}.
\section{Simulation} \label{sec:sim}
We use the publicly available MHD code \textit{PLUTO}\footnote{http://plutocode.ph.unito.it/} \citep{Mignone2007, Mignone2012} to simulate sequential SN explosions in the Galactic center and the formation of an SN-driven bubble.
The global dynamical evolution and fine structures of the bubble necessarily depend on many physical processes and physical quantities of the Galactic center, some of which are not well constrained.
Rather than pursuing a full degree of realism or a thorough exploration of the parameter space, our main aim here is to test a simplified but well-motivated model for the bubble formation.
\subsection{Basic MHD Equations and Magnetic Field Configuration}
\label{subsec:Bfield}
The simulation is carried out on a three-dimensional (3D) Cartesian grid of $512^3$ cells, corresponding to a physical volume of 200$^3$~pc$^3$ and a linear resolution of 0.39 pc.
We set the $z$-axis to be perpendicular to the Galactic disk (north as positive), the $y$-axis to be parallel to the line-of-sight (the observer at the negative side), and the $x$-axis to run along decreasing Galactic longitude.
Because the radio bubbles are roughly symmetric about the Galactic plane, we only simulate the $z>0$ volume, sufficient to enclose the northern bubble, which exhibits a size of $\sim$120 pc (width) $\times$ 190 pc (height).
We adopt an outflow boundary condition.
The simulation is governed by the ideal MHD conservation equations,
\begin{equation}
\begin{cases}
\dfrac{\partial \rho}{\partial t} + \nabla \cdot (\rho \bm{v}) = 0 ,\\
\\
\dfrac{\partial (\rho\bm{v})}{\partial t}+\nabla \cdot\left[\rho\bm{vv}-\dfrac{\bm{B B}}{4\pi}+\bm{1}\left(p+\dfrac{\bm{B}^{2}}{8\pi}\right)\right]^{T}=-\rho \nabla \Phi, \\
\\
\dfrac{\partial E_{t}}{\partial t}+\nabla \cdot\left[\left(\dfrac{\rho \bm{v}^{2}}{2}+\rho \epsilon+p+\rho \Phi\right)\bm{v}-\dfrac{(\bm{v} \times \bm{B}) \times \bm{B}}{4\pi}\right] \\
= -\dfrac{\partial\left( \rho \Phi\right)}{\partial t}, \\
\\
\dfrac{\partial \bm{B}}{\partial t} - \nabla \times (\bm{v} \times \bm{B}) = 0,
\end{cases}
\end{equation}
where $\rho$ is the mass density, $p$ the thermal pressure, $\bm{v}$ the velocity, $\bm{B}$ the magnetic field, $\bm{1}$ the dyadic tensor, $\Phi$ the gravitational potential, and $E_t$ the total energy density, defined as:
\begin{equation}
E_t = \rho \epsilon + \dfrac{(\rho\bm{v})^2}{2\rho} + \dfrac{\bm{B}^2}{8\pi},
\end{equation}
where $\epsilon$ is the internal energy.
We use an ideal equation of state, i.e., $\epsilon = p/ (\Gamma -1)$, in which the ratio of specific heats $\Gamma$ = 5/3.
As mentioned in Section~\ref{sec:intro}, the orientation of the NTFs indicates that a vertical magnetic field is prevalent in the Galactic center.
We adopt a dipole magnetic field structure generated by a current loop with a diameter of 300 pc, which can be expressed analytically \citep{simpson2001simple}.
With this large diameter, the magnetic field lines remain approximately vertical to the disk within our simulation volume.
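For reference, the field of a circular current loop can be written in closed form with complete elliptic integrals. The following Python sketch is purely illustrative (the quadrature helper, function names, and the normalization to $B_0=80~\mu$G at the origin are our own choices, not part of the \textit{PLUTO} setup); it verifies that, for a 150 pc loop radius, the field within the simulation box is indeed nearly vertical:

```python
import numpy as np

def ellip_KE(m, n=4000):
    """Complete elliptic integrals K(m), E(m) via midpoint quadrature."""
    dth = 0.5 * np.pi / n
    th = (np.arange(n) + 0.5) * dth
    s = m * np.sin(th)**2
    return np.sum(dth / np.sqrt(1.0 - s)), np.sum(dth * np.sqrt(1.0 - s))

def loop_field(r, z, a=150.0, B0=80e-6):
    """(B_r, B_z) of a circular current loop of radius a [pc] lying in the
    z = 0 plane, normalized so that B_z = B0 [Gauss] at the origin
    (cf. the closed-form solution of Simpson et al. 2001)."""
    m = 4.0 * a * r / ((a + r)**2 + z**2)      # elliptic-integral parameter
    K, E = ellip_KE(m)
    pre = 1.0 / np.sqrt((a + r)**2 + z**2)
    denom = (a - r)**2 + z**2
    Bz = pre * (K + (a**2 - r**2 - z**2) / denom * E)
    # B_r vanishes on the axis by symmetry; avoid the 0/0 there
    Br = 0.0 if r == 0 else pre * z / r * (-K + (a**2 + r**2 + z**2) / denom * E)
    norm = B0 * a / np.pi   # unnormalized on-axis field at the origin is pi/a
    return norm * Br, norm * Bz
```

Evaluated at $(r,z)=(50,100)$ pc, for instance, the tilt is $|B_r/B_z|\approx0.25$, i.e., the field lines deviate from vertical by $\lesssim15^{\circ}$ over most of the simulation volume.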
There is ample evidence that the Galactic center has an average magnetic field strength substantially higher than in the disk \citep{Ferriere2009}.
\citet{2010Natur.463...65C} derived a lower limit of 50 $\mu$G for the central 400 pc, based on an upper limit in the detected diffuse $\gamma$-ray flux.
Given the observed radio spectral energy distribution of the Galactic center, a weaker magnetic field would lead to more relativistic electrons and consequently a higher $\gamma$-ray flux due to inverse Compton emission.
In fact, energy equipartition between the magnetic field, X-ray-emitting hot plasma and turbulent gas implies a magnetic field strength of $\sim$100 $\mu$G \citep{2010Natur.463...65C}.
On the other hand, \citet{Thomas2020} suggested a stronger magnetic field strength of 200 $\mu$G in the NTFs.
In our fiducial simulation run, the initial magnetic field strength at the origin ($x=y=z=0$) is set to $B_{\rm 0}=80~\mu$G.
Values of $50~\mu$G and $200~\mu$G are also tested to examine the effect of a weaker/stronger magnetic field (see Section~\ref{subsec:synthetic}).
The simulation neglects viscosity and thermal conduction, but takes into account radiative cooling. We adopt the TABULATED cooling function implemented in {\it PLUTO}, which is generated with \textit{Cloudy} for an optically thin plasma and solar abundances \citep{2017RMxAA..53..385F}.
We neglect the synchrotron cooling of relativistic electrons, which are presumably produced by the SN shocks (see Section~\ref{subsec:synthetic}).
\begin{table*}
\caption{Simulation Parameters for the Radio Bubbles}
\label{table:parameters}
\centering
\begin{tabular}{l l l l l}
\hline\hline
Fiducial Parameters & Value & & \\
\hline
SN Ejecta Mass & 10 M$_{\odot}$ & &\\
SN Kinetic Energy & 1$\times$ 10$^{51}$ erg & & \\
Injection Radius & 4 pc & & \\
Ambient Temperature & 1$\times$ 10$^{6}$ K & & \\
Diameter of Explosion Region & 50 pc & & \\
Height of Explosion Region & 10 pc & & \\
\hline\hline
Simulation Runs & B80I1 & B80I2 & B50I1 & B200I1 \\
\hline
Magnetic Field Strength & 80 $\mu$G & 80 $\mu$G & 50 $\mu$G & 200 $\mu$G \\
Explosion Interval & 1 kyr & 2 kyr & 1 kyr & 1 kyr \\
\hline
\end{tabular}\\
\end{table*}
\subsection{Gravitational Potential and Initial ISM Conditions}
\label{subsec:gravity}
The gravity in the Galactic center mainly arises from two components, namely, the nuclear star cluster (NSC), which dominates the innermost $\sim$20 pc, and the nuclear stellar disk (ND), which occupies the inner few hundred parsecs.
We neglect larger-scale structures such as the bar and the Galactic disk.
The SMBH, with a mass of four million solar masses and a sphere of influence only a few parsecs in radius, can also be ignored given the scales of interest here.
The NSC/ND will not evolve significantly on the timescale involved in our simulations, hence we adopt a fixed gravitational potential, which,
following \citet{2008ApJ...675.1278S}, can be approximated by a logarithmic form,
\begin{equation}
\Phi = 0.5v_0^2\log\left(R_c^2+\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} +\dfrac{z^2}{c^2}\right),
\end{equation}
where $v_0$ is the asymptotic velocity of a flat rotation curve, $R_c$ is the core radius, and $a$, $b$ and $c$ are stretching parameters.
We adopt $v_0 = 98.6\rm~km~s^{-1}$, $R_c = 2\rm~pc$, $a=b=c=1$ for the NSC, and $v_0 = 190\rm~km~s^{-1}$, $R_c = 90\rm~pc$, $a=b=1$, $c=0.71$ for the ND, from Table 1 of \citet{2008ApJ...675.1278S}.
The combined NSC+ND potential has been found to provide a good match to the observed stellar mass distribution in the Galactic center \citep{2002A&A...384..112L}.
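As a sketch of how these parameters translate into a mass model, the following Python snippet evaluates the combined potential and a rough enclosed mass from the per-component in-plane circular speed, $v_c^2 = v_0^2 r^2/(R_c^2+r^2)$ (treating the ND as spherical for this estimate, which is exact only for $a=b=c=1$; function names are ours):

```python
import numpy as np

G    = 6.674e-8       # cm^3 g^-1 s^-2
PC   = 3.086e18       # cm
MSUN = 1.989e33       # g

# (v0 [km/s], R_c [pc], a, b, c) for the two components
COMPONENTS = [(98.6, 2.0, 1.0, 1.0, 1.0),     # nuclear star cluster
              (190.0, 90.0, 1.0, 1.0, 0.71)]  # nuclear stellar disk

def potential(x, y, z):
    """Total NSC+ND potential [cm^2 s^-2] at (x, y, z) in pc
    (additive constant ignored; lengths in pc as in the paper's formula)."""
    phi = 0.0
    for v0, Rc, a, b, c in COMPONENTS:
        phi += 0.5 * (v0 * 1e5)**2 * np.log(Rc**2 + (x/a)**2 + (y/b)**2 + (z/c)**2)
    return phi

def enclosed_mass(r):
    """Rough enclosed mass [g] within spherical radius r [pc],
    via M ~ v_c^2 r / G summed over components."""
    m = 0.0
    for v0, Rc, _a, _b, _c in COMPONENTS:
        vc2 = (v0 * 1e5)**2 * r**2 / (Rc**2 + r**2)
        m += vc2 * r * PC / G
    return m

print(enclosed_mass(100.0) / MSUN)   # ~7e8, of order the quoted 1e9 Msun
```

The estimate gives $\sim7\times10^8\rm~M_\odot$ within 100 pc, consistent to within a factor of $\sim$2 with the $10^9\rm~M_\odot$ adopted below.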
At the beginning of the simulation, the interstellar medium (ISM) is assumed to be isothermal and in hydrostatic equilibrium with the gravitational potential,
\begin{equation}
\dfrac{\nabla P}{\rho} = - \nabla \Phi,
\label{eqn:balance}
\end{equation}
where $P=n_tkT$ is the thermal pressure, and $n_t$ is the total number density of gas particles including protons, electrons and heavy elements.
As usual we define $\rho = \mu m_p n_t$, where $m_p$ is the proton mass and $\mu \approx 0.6$ is the mean molecular weight for solar abundance.
The initial temperature is set to be $10^6$~K, which is roughly the virial temperature given the enclosed gravitational mass of $1 \times10^9\rm~M_\odot$ within 100 pc.
The prevalence of hot gas (with temperatures $\gtrsim10^6$ K) in the Galactic center has been established observationally (e.g., \citealp{2003ApJ...591..891B, Ponti2015}).
While cooler gas (with temperatures $\lesssim 10^4$ K) is also known to exist in the Galactic center, it tends to concentrate in dense filaments and clouds near the midplane and is not expected to play a significant role in the bubble formation. We discuss possible effects of a multi-phase ISM on the observed properties of the bubble in Section~\ref{subsec:caveat}.
The initial density distribution can then be derived by solving Eqn.~\ref{eqn:balance}, as shown in Figure~\ref{fig:density} along with the initial magnetic field distribution.
From the adopted initial conditions, it can be shown that the thermal pressure of the ISM ($n_tkT \sim 10^{-12}-10^{-10}\rm~dyn~cm^{-2}$) is everywhere significantly lower than the magnetic pressure ($B_0^2/8\pi \sim 2.5\times10^{-10}\rm~dyn~cm^{-2}$), perhaps except in the innermost few parsecs. In the meantime, the Alfv{\'e}n speed, $V_{\rm A}=(B_0^2/4\pi\rho)^{\frac{1}{2}} \lesssim 10^{3}\rm~km~s^{-1}$, is much lower than the typical expansion velocity of the SN. Therefore, the present case of the Galactic center satisfies the {\it moderately strong field} condition defined by \citet{1991MNRAS.252...82I}.
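These order-of-magnitude statements are straightforward to verify; a minimal Python check in cgs units (with the density range quoted above treated as an input) is:

```python
import numpy as np

MU, MP, KB = 0.6, 1.673e-24, 1.381e-16  # mean mol. weight, m_p [g], k_B [erg/K]

B0 = 80e-6                      # fiducial field strength at the origin [Gauss]
p_mag = B0**2 / (8 * np.pi)     # magnetic pressure [dyn cm^-2], ~2.5e-10

def p_th(n_t, T=1e6):
    """Thermal pressure [dyn cm^-2] for total particle density n_t [cm^-3]."""
    return n_t * KB * T

def v_alfven(n_t, B=B0):
    """Alfven speed [km/s] for total particle density n_t [cm^-3]."""
    return B / np.sqrt(4 * np.pi * MU * MP * n_t) / 1e5

# Even the densest quoted ISM (n_t k T ~ 1e-10 dyn cm^-2) stays below p_mag,
# and v_A ~ 700 km/s for n_t = 0.1 cm^-3, well below young-SN shock speeds.
```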
\subsection{Supernova Input}
\label{subsec:SN}
In the simulations, SNe are set to explode within a predefined cylindrical volume.
The cylinder has a diameter of 50 pc in the $x-y$ plane and a thickness of 10 pc along the $z$-axis, to mimic the concentration of massive stars near the Galactic plane \citep{2015MNRAS.447.1059K}.
We have tested a wider explosion area in the $x-y$ plane (e.g., 100 pc in diameter, closer to that of the CMZ), finding that the
resultant bubble would become significantly fatter, inconsistent with the observed morphology.
In reality, the CMZ may provide a horizontal confinement to the bubble. However, a self-consistent implementation of the CMZ would necessarily introduce more free parameters, and is beyond the scope of the present work.
The base of the radio bubbles shows a small but appreciable offset to the west of Sgr A* \citep{2019Natur.573..235H}.
Thus we place the center of the cylinder at $x = 5$ pc to mimic this behavior.
Given the otherwise axisymmetric setup of the simulation, this appears to be the most viable way to reproduce the observed offset.
The fiducial SN birth rate is set to be $1\rm~kyr^{-1}$ \citep{2018ApJ...855...33D}, which is estimated by assuming an SFR of 0.1 M$_{\odot}$ yr$^{-1}$, a \citet{Kroupa2001} initial mass function (IMF) and a minimum mass of 8 M$_{\odot}$ for the progenitor star of a core-collapse SN.
\citet{Barnes2017} and \citet{2020MNRAS.497.5024S} estimated a current SFR of 0.1 M$_{\odot}$ yr$^{-1}$ inside the CMZ, while \citet{NoguerasLara2019} found that star formation in the ND (which has a similar radial extent as the CMZ) has been relatively active in the past 30 Myr, with an SFR of $0.2-0.8\rm~M_{\odot}~yr^{-1}$.
Our assumed SFR of 0.1 M$_{\odot}$ yr$^{-1}$ is compatible with the smaller radial extent of our adopted exploding region, which may be the case if SN events have been episodic and clustered on a $\lesssim$ Myr timescale.
We also test the effect of a lower SN birth rate of $0.5\rm~kyr^{-1}$ (see below).
We have neglected Type Ia SNe, which have a birth rate of $\lesssim0.05\rm~kyr^{-1}$ according to the enclosed stellar mass in the ND/NSC \citep{2005A&A...433..807M}, though a recent study by \citet{2021ApJ...908...31Z} found evidence that Sgr A East, one of the few currently known SNRs in the Galactic center, was created by a Type Iax SN.
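The fiducial rate can be recovered with a short numerical integration of the \citet{Kroupa2001} IMF. In the following Python sketch, the lower and upper stellar mass cutoffs (0.08 and 120 $\rm M_\odot$) are conventional assumptions and the function names are ours:

```python
import numpy as np

def kroupa_xi(m):
    """Kroupa (2001) IMF, dN/dm (unnormalized), for m in Msun.
    Slopes -1.3 below and -2.3 above 0.5 Msun, continuous at the break."""
    return np.where(m < 0.5, m**-1.3, 0.5 * m**-2.3)

def sn_rate(sfr=0.1, m_min=0.08, m_max=120.0, m_sn=8.0, n=100000):
    """Core-collapse SN rate [yr^-1] for a given SFR [Msun/yr]:
    (number of stars above m_sn per unit stellar mass formed) x SFR."""
    m = np.logspace(np.log10(m_min), np.log10(m_max), n)
    xi = kroupa_xi(m)
    # trapezoidal integrals of xi*m (mass formed) and xi (number above m_sn)
    mass_formed = np.sum(0.5 * (xi[1:] * m[1:] + xi[:-1] * m[:-1]) * np.diff(m))
    heavy = m >= m_sn
    xi_h, m_h = xi[heavy], m[heavy]
    n_sn = np.sum(0.5 * (xi_h[1:] + xi_h[:-1]) * np.diff(m_h))
    return sfr * n_sn / mass_formed

print(sn_rate() * 1e3)   # ~1 SN per kyr
```

With an SFR of 0.1 M$_{\odot}$ yr$^{-1}$ this yields $\approx1.1\times10^{-3}$ SNe yr$^{-1}$, i.e., about one SN per kyr, as adopted.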
Individual SNe are thus injected at random positions inside the cylindrical volume, one after another with a fixed interval according to the assumed birth rate.
Each SN has an ejecta mass of $M_{\rm ej}=10 \rm~M_{\odot}$ and a kinetic energy of $E_{\rm ej}=1\times$ 10$^{51}$ erg \citep{Poznanski2013}.
This energy is deposited into a sphere with a radius of $R_{\rm SN}=4$ pc, ignoring any intrinsic anisotropy.
The analytic solution within $R_{\rm SN}$ is derived from \citet{Truelove1999}, in which the ejecta of the newly born SN are divided into two parts: an inner uniform-density core and an outer power-law-density envelope.
The radius of the former is 10 times that of the latter, and the power-law index is set to zero.
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{rho_t0_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t0_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t0_density1_E1_yz.eps}
\caption{Initial distribution of gas density, plotted in logarithmic scale and in units of cm$^{-3}$. The white arrows indicate the initial magnetic field distribution.
The three panels are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.
}
\label{fig:density}
\end{figure*}
\subsection{Simulation Runs and Synthetic Emission Maps}
\label{subsec:synthetic}
In this work, we perform four simulation runs, each with a unique combination of magnetic field strength and SN explosion interval.
Our fiducial simulation is represented by run \textit{B80I1}, where \textit{B} and \textit{I} indicate the magnetic field and explosion interval, respectively.
The fiducial run has $B_0=80\rm~{\mu}G$ and $I=1$ kyr.
The other three runs have either one of the two parameters varied.
\textit{B50I1} has $B_0=50\rm~{\mu}G$ and \textit{B200I1} has $B_0=200\rm~{\mu}G$, covering the empirical lower and upper limits inferred for the Galactic center (Section~\ref{subsec:Bfield}).
Finally, \textit{B80I2} has an explosion interval of 2 kyr.
The total elapsed time is set to be 330 kyr for all four runs.
In the fiducial simulation, this is about the time when the top of the bubble approaches the edge of the simulation box.
The time step is adaptive and ranges between 1 and 40 yr.
The simulation parameters are summarized in Table~\ref{table:parameters}.
To facilitate comparison with the observations, we generate synthetic radio and X-ray maps for the final snapshot (i.e., $t$ = 330 kyr) of the simulation.
We include synchrotron radiation and free-free emission in the radio band (default at 1284 MHz, to be consistent with the MeerKAT observation), while for the X-ray band only thermal emission from a collisionally-ionized, optically-thin plasma is considered.
First we need to distinguish regions inside and outside the evolving bubble.
This is realized by adding a tracer parameter, $Q$, evaluated at every pixel in the simulation, which obeys a simple conservation equation:
\begin{equation}
\dfrac{\partial (\rho Q)}{\partial t} + \nabla \cdot (\rho Q \bm{v}) = 0.
\label{eqn:tracer}
\end{equation}
$Q$ has a value of 1 for pure SN ejecta, 0 for the unpolluted ISM, and a value between 0 and 1 for pixels with mixed ejecta and ISM.
We further calculate the Mach number for every pixel.
The synthetic maps only take into account pixels with a non-zero tracer parameter or a Mach number greater than 2.
The latter condition is employed to ensure that pixels with a high Mach number but a zero tracer parameter, such as those at or immediately behind the shock, are included.
Synchrotron emissivity depends on the magnetic field strength and the density of relativistic electrons. However, the latter cannot be directly obtained from our simulation and thus requires some working assumption.
Here, we assume that the relativistic electron density at a given pixel of interest is proportional to the local gas density \citep{Orlando2007, Zhang2017}, normalized to have a mean energy density of $0.1 \rm~eV~cm^{-3}$ across the bubble volume.
This is compatible with the estimated mean cosmic-ray energy density of $10 \rm~eV~cm^{-3}$ in the bubble \citep{2019Natur.573..235H} and the empirical fact that relativistic electrons account for $\sim 1\%$ of the total cosmic-ray energy density in the GeV band \citep{Blasi2013}.
We calculate the synchrotron emissivity in each pixel and integrate along the line-of-sight (i.e., the $y$-axis) to derive the synchrotron intensity map. In this calculation the $y$-component of the magnetic field is neglected, since synchrotron emission depends only on the field component perpendicular to the line-of-sight.
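Schematically, the mask-and-integrate procedure described above can be sketched in Python as follows (a toy version with an arbitrary emissivity normalization and a fixed observing frequency; the electron spectral index $p=2.2$ and the array shapes are illustrative assumptions, not the exact implementation used for our maps):

```python
import numpy as np

def synchrotron_map(rho, Bx, Bz, tracer, mach, p=2.2, dy=0.39):
    """Toy synthetic synchrotron map: emissivity j ~ n_e * B_perp^((p+1)/2)
    at fixed frequency, with the relativistic-electron density n_e tied to
    the local gas density (arbitrary normalization). Pixels contribute only
    if ejecta-polluted (tracer > 0) or strongly shocked (Mach > 2).
    Arrays are (nx, ny, nz); integration is along axis 1 (the y-axis)."""
    mask = (tracer > 0) | (mach > 2.0)
    b_perp = np.sqrt(Bx**2 + Bz**2)            # B_y dropped for sky-plane maps
    j = np.where(mask, rho * b_perp**((p + 1) / 2), 0.0)
    return j.sum(axis=1) * dy                  # line-of-sight integral [pc]

# toy demo: one ejecta-polluted column lights up, the unshocked ISM stays dark
shape = (4, 4, 4)
rho = np.ones(shape); Bx = np.zeros(shape); Bz = np.full(shape, 80e-6)
tracer = np.zeros(shape); mach = np.ones(shape)
tracer[1, :, 2] = 0.5                          # mark a column of ejecta
img = synchrotron_map(rho, Bx, Bz, tracer, mach)
assert img[1, 2] > 0 and img[0, 0] == 0
```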
Radio free-free emission is calculated following the standard formula of \citet{Longair2011}, which, at a given pixel, scales with density squared and is a function of temperature.
A temperature threshold of $10^4$ K is adopted when calculating the free-free emission.
We find that only a tiny fraction of all pixels in any of our simulations has a temperature below $10^5$ K.
The X-ray emissivity of an optically-thin thermal plasma in collisional ionization equilibrium \citep{2001ApJ...556L..91S}, also scaling with density squared, is extracted from \textit{ATOMDB}\footnote{http://www.atomdb.org}, version 3.0.9, for which we adopt a solar abundance.
The free-free and X-ray intensity maps are again derived by integrating along the $y$-axis.
We find that self-absorption is negligible in both the radio and X-ray bands, thanks to the relatively low column density involved.
\section{Results} \label{sec:res}
In this section we present the simulation results. We first describe the formation and subsequent evolution of the bubble in the fiducial run, showing that a good agreement on the overall morphology of the bubble is achieved between the simulation and observation (Section \ref{subsec:formation}).
We then present the other three runs of simulations and examine the effect of varying magnetic field strength or SN birth rate on the bubble formation (Section~\ref{subsec:compset}).
Lastly, we confront the synthetic emission maps with the radio and X-ray observations (Section~\ref{subsec:compobs}).
\subsection{Bubble Formation and Evolution in the Fiducial Simulation}
\label{subsec:formation}
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{rho_t1_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t1_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t1_density1_E1_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{rho_t6_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t6_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t6_density1_E1_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{rho_t11_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_density1_E1_yz.eps}
\caption{Density-velocity distributions after 30 (top row), 180 (middle row) and 330 (bottom row) kyr, for simulation \textit{B80I1}, i.e., with initial magnetic field strength of 80 $\mu$G and an explosion interval of 1 kyr.
The gas density is plotted in logarithmic scale and in units of cm$^{-3}$.
The white arrows indicate the velocity vector.
The left, middle and right columns are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.
}
\label{fig:rho}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{T_t1_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{T_t1_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{T_t1_density1_E1_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{T_t6_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{T_t6_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{T_t6_density1_E1_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{T_t11_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{T_t11_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{T_t11_density1_E1_yz.eps}
\caption{Temperature-magnetic field distributions after 30 (top row), 180 (middle row) and 330 (bottom row) kyr, for simulation \textit{B80I1}.
The gas temperature is plotted in logarithmic scale and in units of Kelvin.
The white arrows indicate the magnetic field vector.
The left, middle and right columns are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.
}
\label{fig:T}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{B_t11_density1_E1_xy.eps}
\includegraphics[width=0.323\textwidth]{B_t11_density1_E1_xz.eps}
\includegraphics[width=0.323\textwidth]{B_t11_density1_E1_yz.eps}
\caption{Magnetic field strength distribution after 330 kyr for simulation \textit{B80I1}.
The left, middle and right columns are slices through the $z=0$, $y=0$ and $x=0$ planes, respectively.
}
\label{fig:B}
\end{figure*}
In Figures~\ref{fig:rho} and \ref{fig:T},
we show the gas density and temperature maps of run \textit{B80I1}.
In each figure, the density or temperature distribution is shown for a slice through the $z=0$ (left columns), $y=0$ (middle columns) and $x=0$ (right columns) plane,
after a simulation time of $t$ = 30 (top rows), 180 (middle rows) and 330 (bottom rows) kyr.
By design, 30 SNe have exploded by the time of 30 kyr.
The forward shock fronts of the several youngest SNe are clearly revealed in the density map, as well as by the overlaid projected velocity vectors.
A high-density region forms and persists around the origin ($x=y=z=0$), because of the steep gravitational potential even in the presumed absence of an SMBH.
As the shocks propagate, they compress and heat the ambient gas and also frequently collide with each other, eventually forming an expanding complex of post-shock gas with temperatures of $10^{7-8}$ K.
By the time of 180 kyr, this hot gas complex has developed into a bubble structure with a common dense shell, most clearly seen in the $x-z$ and $y-z$ planes.
Inside the bubble, the density is low as a result of expansion, while the temperature remains high due to repeated shock heating.
Numerous arc-like features are evident in the temperature map, especially in the $x-z$ and $y-z$ planes, which are the relic of individual SN shocks.
At this stage, the bubble looks fat, with a similar extent ($\sim100$ pc) along the three dimensions.
However, the overall expansion starts to show a preference along the vertical (positive $z$) direction, with the vertical expansion velocity of the shell now being $\sim690\rm~km~s^{-1}$, substantially larger than the average expansion velocity of $\sim120\rm~km~s^{-1}$ in the $x-y$ plane.
This is primarily due to the collimation effect by an ordered magnetic field \citep{1991MNRAS.252...82I,1992ApJ...389..297S,1995MNRAS.274.1157R,Wu2019}.
Specifically, the SN shocks tend to push the semi-vertical magnetic field to the sides, greatly suppressing the magnetic field inside the bubble and in the meantime amplifying the magnetic field near the bubble shell. In turn, the latter decelerates and even halts the horizontal expansion of the bubble. The vertical expansion, on the other hand, feels no such magnetic confinement, thus a high velocity along this direction remains. The relatively strong gravitational potential in the $x-y$ plane also contributes to retarding the horizontal expansion and facilitates the bubble collimation along the $z$-axis.
As a result, by the time of 330 kyr, the bubble becomes much more elongated. The top of the bubble almost reaches the edge of the simulation box ($z=200$ pc), with a vertical expansion velocity still as high as $\sim600\rm~km~s^{-1}$, whereas its horizontal extent has not grown significantly since $t$ = 180 kyr.
The width of the bubble at its base is about 120 pc, with a small but appreciable offset towards the positive $x$-axis, both in agreement with the observed bubble.
Arc-like features tracing the sequential SN shocks remain prominent throughout the bubble interior.
Near some of these arcs, locally enhanced magnetic fields are evident, which is the result of shock compression, as illustrated in Figure~\ref{fig:B}.
The magnetic field strength reaches a maximum value of $\sim$175 $\mu$G across the bubble.
Our simulation ends at this point.
\subsection{Comparison with Other Simulation Runs}
\label{subsec:compset}
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{rho_t11_F_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_F_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_F_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{rho_t11_B_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_B_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_B_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{rho_t11_HB_xy.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_HB_xz.eps}
\includegraphics[width=0.323\textwidth]{rho_t11_HB_yz.eps}
\caption{Simulated density-velocity images after 330 kyr. In the upper, middle and lower rows, we show the results of runs \textit{B80I2}, \textit{B50I1} and \textit{B200I1}, respectively. The $x-z$ and $y-z$ panels are slices through the center of the box along each axis, while the $x-y$ panel shows the slice at $z$ = 0. The background is the density distribution in logarithmic scale, and the white arrows indicate the velocity.}
\label{fig:rhoe}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{T_t11_F_xy.eps}
\includegraphics[width=0.323\textwidth]{T_t11_F_xz.eps}
\includegraphics[width=0.323\textwidth]{T_t11_F_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{T_t11_B_xy.eps}
\includegraphics[width=0.323\textwidth]{T_t11_B_xz.eps}
\includegraphics[width=0.323\textwidth]{T_t11_B_yz.eps}\newline
\includegraphics[width=0.323\textwidth]{T_t11_HB_xy.eps}
\includegraphics[width=0.323\textwidth]{T_t11_HB_xz.eps}
\includegraphics[width=0.323\textwidth]{T_t11_HB_yz.eps}
\caption{Simulated temperature-magnetic field images after 330 kyr. In the upper, middle and lower rows, we show the results of runs \textit{B80I2}, \textit{B50I1} and \textit{B200I1}, respectively. The $x-z$ and $y-z$ panels are slices through the center of the box along each axis, while the $x-y$ panel shows the slice at $z$ = 0. The background is the temperature distribution in logarithmic scale, and the white arrows indicate the magnetic field.}
\label{fig:Te}
\end{figure*}
Similarly, we show the snapshots of density and temperature maps of simulation runs \textit{B80I2}, \textit{B50I1} and \textit{B200I1} in Figures~\ref{fig:rhoe} and \ref{fig:Te}, all at $t$ = 330 kyr.
These three simulations share some common features with the fiducial simulation. In particular, a vertically-collimated, bubble-like structure is formed in all these simulations.
The bubble is delineated by a dense outer shell with compressed magnetic field and has a low-density, high-temperature interior with vertically-oriented velocities and generally weak magnetic field.
The bubble interior is not smooth, rather, it is filled with chaotic small-scale structures, again due to the sequential SN shocks and mutual interactions between them. Below we shall describe the more unique features in the individual simulations.
Simulation \textit{B80I2} (top row in Figures~\ref{fig:rhoe} and \ref{fig:Te}) adopts a lower explosion frequency than the fiducial case. This leads to a smaller energy injection rate, which is nonetheless still sufficient to form a bubble.
The bubble evolves more slowly, reaching a height of only 140 pc by the time of 330 kyr.
The width of the bubble is also somewhat smaller than in the fiducial case (thus also narrower than the observed bubble), which remains the case even if we followed the bubble growth to a height of 200 pc.
This occurs because, given a weaker SN energy injection but the same magnetic confinement, reduction in the horizontal expansion is greater than in the vertical expansion.
We note that at a further reduced explosion frequency, much of the SN ejecta would not be able to escape from the strong gravity near the mid-plane, and a bubble would never form.
In \textit{B50I1} (middle row in Figures~\ref{fig:rhoe} and \ref{fig:Te}), which has a weaker magnetic field compared to the fiducial run, the resultant vertical collimation is less effective and thus the bubble appears fatter.
We note that a thinner bubble could still be achieved, should a lower explosion frequency be adopted in combination with the weaker magnetic field, for the reason explained above.
However, in this case it would take a much longer time for the bubble to grow to the observed height of 190 pc.
In contrast, \textit{B200I1} results in a significantly thinner structure.
The magnetic field in this run is so strong that it can resist the compression of the SN shocks and consequently
there is little sweeping of magnetic field inside the bubble.
With the strong magnetic collimation, some SN ejecta are able to rapidly propagate along the field lines, forming vertical protrusions (several of these are captured in the $x-z$ and $y-z$ slices).
The overall morphology is obviously inconsistent with the observed bubble.
\subsection{Comparison with Observations}
\label{subsec:compobs}
Here we shall provide a more quantitative comparison with the radio and X-ray observations, with a focus on the fiducial simulation, which has the best morphological agreement with the observed bubble.
\begin{figure*}
\centering
\includegraphics[width=0.472\textwidth]{S_syn.eps}
\includegraphics[width=0.472\textwidth]{S_ff.eps}
\caption{1284 MHz radio intensity distribution in simulation \textit{B80I1} at 330 kyr.
The red dotted line outlines the rim of the northern radio bubble. \textit{Left:} Synchrotron emission; \textit{Right:} Free-free emission. In the left panel, values lower than 10$^{-5}$ Jy arcsec$^{-2}$ are suppressed to enhance visualization of the faint features.
}
\label{fig:radioflux}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.472\textwidth]{L_x_05.eps}
\includegraphics[width=0.472\textwidth]{L_x_15.eps}
\caption{Synthetic $0.5-1.5$ ({\it left}) and $1.5-10$ ({\it right}) keV X-ray intensity distribution in simulation \textit{B80I1} at 330 kyr. The red dotted line outlines the rim of the northern radio bubble, while the black circles highlight two young SNRs.
Values lower than 10$^{-9}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ are suppressed to enhance visualization of the faint features.
}
\label{fig:xflux}
\end{figure*}
The synthetic synchrotron and free-free intensity maps of \textit{B80I1}, after an evolution time $t$ = 330 kyr, are shown in the left and right panels of Figure~\ref{fig:radioflux}.
The overall morphology is quite similar between the synchrotron and free-free emission, which is partially owing to our assumption that the density of relativistic electrons scales with the local gas density.
However, the synchrotron intensity is everywhere orders of magnitude higher than the free-free counterpart in the synthetic maps.
This holds true even considering the uncertainties in the energy density of the relativistic electrons and the magnetic strength.
Consequently, synchrotron dominates the total flux density at 1284 MHz, consistent with the MeerKAT observation \citep{2019Natur.573..235H}.
It is noteworthy that both the hydrogen recombination line, H90$\alpha$, at 8309 MHz and the 8.4 GHz continuum are found to trace the GCL \citep{2019PASJ...71...80N}, which exhibits a loop-like structure spatially coincident with the northern radio bubble. This suggests that the thermal component may have an increasingly larger contribution toward higher frequencies, which can be due to a combined effect of substantial synchrotron cooling at higher frequencies and the presence of ambient cooler gas not taken into account in our simulation.
The overall extent of the synthetic synchrotron emission highly resembles that of the northern radio bubble (delineated by the red dotted line in Figure~\ref{fig:radioflux}), which has a width of 120 pc at its base and a height of 190 pc.
Another interesting feature in the simulation is the presence of numerous filaments both at the edge of and inside the bubble,
which closely resemble the NTFs \citep{1984Natur.310..557Y}, although the simulated ones generally appear thicker and fuzzier, possibly owing in part to our moderate resolution.
In the simulation, these filaments originate from the sequential SN shocks and their mutual interactions, and are associated with locally amplified magnetic field (Figure~\ref{fig:B}).
Their possible relation with the NTFs will be further addressed in Section~\ref{sec:dis}.
The 1284 MHz synchrotron flux density of the simulated bubble is found to be 5801 Jy, which is to be contrasted with our rough estimate of the observed flux density in the MeerKAT image, 970 Jy, obtained by assuming a mean flux density of 3 mJy beam$^{-1}$ across the projected area of the bubble.
We caution that the MeerKAT mosaic image presented in \citet{2019Natur.573..235H} was not corrected for the primary beam attenuation and that the extended emission from the bubble suffers from potential flux loss in the interferometric image (I. Heywood, private communication), thus our estimate should be treated as a lower limit of the true flux density.
On the other hand, the simulated flux density depends heavily on the assumed energy density of relativistic electrons.
Therefore, the apparently large discrepancy between the observed and simulated radio flux densities should be taken as a point for future improvement rather than a failure of the simulation.
The synthetic 0.5--1.5 keV and 1.5--10 keV X-ray intensity maps are shown in Figure~\ref{fig:xflux}. Compared to its radio morphology, the simulated bubble appears smoother in the X-rays.
The expanding shell of the bubble (Figure~\ref{fig:rho}) leaves no significant sign of limb-brightening in the 1.5--10 keV map, which is roughly consistent with the X-ray observations. This might be due to the fact that the shell is on average cooler than the bubble interior (Figure~\ref{fig:T}). Indeed, in the 0.5--1.5 keV map, which is more sensitive to gas temperatures below $\sim$1 keV, limb-brightening is more evident, especially at the northwestern side of the bubble, although this energy band is not directly observable due to the large foreground absorption column density (a few $10^{22}\rm~cm^{-2}$; \citealp{2019Natur.567..347P}).
The 1.5--10 keV map also exhibits
far fewer small-scale structures in the bubble interior, except near the $x-y$ plane where the gas density is high and the most recent SNe freshly deposit a fraction of their kinetic energy.
In particular, remnants of two newly exploded SNe are evident near the center (marked in the right panel of Figure~\ref{fig:xflux}), although they are not clearly seen in the synthetic radio map.
An SNR evolving near Sgr A* will be heavily shaped by the strong gravity, with a large part of the ejecta pulled to the mid-plane, resulting in an appearance
resembling the bipolar X-ray lobes detected in the innermost 15 parsecs of the Galactic center \citep{Ponti2015, 2019Natur.567..347P}.
The thermal, kinetic and magnetic energies of the bubble are calculated by summing over all ``bubble pixels'' (Section~\ref{subsec:synthetic}),
and are found to be 1.9, 1.2 and 0.1$\times 10^{52}$ erg, respectively.
The initial thermal and magnetic energies within the bubble volume are 0.7 and 1.1 $\times 10^{52}$ erg; the net decrease of the magnetic energy underscores the sweep-up of the magnetic field.
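This per-cell energy bookkeeping can be sketched as follows; the array names, the uniform cell volume and the ideal monatomic-gas assumption are ours for illustration, not the actual simulation variables.

```python
import numpy as np

K_B = 1.380649e-16  # Boltzmann constant [erg/K]

def bubble_energies(n, T, rho, v, B, dV, mask):
    """Sum thermal, kinetic and magnetic energy [erg] over masked cells.

    n, T : total particle density [cm^-3] and temperature [K]
    rho  : mass density [g cm^-3];  v : speed [cm s^-1]
    B    : magnetic field strength [G];  dV : cell volume [cm^3]
    mask : boolean array selecting the "bubble pixels"
    """
    e_th = np.sum(1.5 * n[mask] * K_B * T[mask] * dV)   # (3/2) n k T per cell
    e_kin = np.sum(0.5 * rho[mask] * v[mask] ** 2 * dV)
    e_mag = np.sum(B[mask] ** 2 / (8.0 * np.pi) * dV)   # u_B = B^2 / (8 pi)
    return e_th, e_kin, e_mag
```

For reference, a uniform 80 $\mu$G field corresponds to a magnetic energy density $u_B = B^2/8\pi \approx 2.5\times10^{-10}\rm~erg~cm^{-3}$.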
\citet{2019Natur.567..347P} estimated a thermal energy of 4$\times 10^{52}$ erg for the X-ray chimneys (sum of the northern and southern halves), which is well matched by the simulated value of 1.9$\times 10^{52}$ erg for the northern chimney.
\citet{2019Natur.567..347P} also measured density and temperature profiles along selected Galactic longitude, $l = 0\degr$, and Galactic latitude, $b = 0\fdg7$.
For a direct comparison, we construct density and temperature profiles at $l = 0\degr$ and $b = 0\fdg7$ from the simulation, as shown in Figure~\ref{fig:profile}.
Strictly speaking, Sgr A* is located at $l=0{\fdg}05579$, $b=-0{\fdg}04608$, but here we neglect this small difference and simply take the $x=0$ plane and $z=98$ pc plane for comparison.
We calculate the density-weighted mean density along the line-of-sight as
\begin{equation}
\langle n \rangle = \sqrt{\dfrac{\int n_{\rm t}^2\,dV}{\int dV}},
\label{eqn:dens}
\end{equation}
where $dV = A dl$, $A$ is the projected area, and the line-of-sight integration ($dl$) is from the farthest side to the nearest side of the bubble.
The projected area varies across the profiles to approximate the rather irregular spectral extraction regions used in \citet{2019Natur.567..347P}. At $l = 0\degr$, the width is 12.5 pc, and the lengths are 20 pc and 70 pc respectively for $b < 0\fdg26$ and $b > 0\fdg26$. At $b = 0\fdg7$, the width is 12.5 pc, and the length is always 70 pc.
The emissivity-weighted mean temperature is calculated as
\begin{equation}
\langle T \rangle = \dfrac{\int T\, n_{\rm t}^2\Lambda(T, Z)\,dV}{\int n_{\rm t}^2\Lambda(T, Z)\,dV},
\label{eqn:temp}
\end{equation}
where $\Lambda$ is the tabulated X-ray emissivity as a function of temperature and metallicity extracted from \textit{ATOMDB}.
By examining the distribution of the SN ejecta through the tracer parameter, we have verified that the assumption of a uniform metallicity is a reasonable approximation.
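The two line-of-sight averages defined above can be computed from gridded data as in the following minimal sketch; the array layout and names are our assumptions.

```python
import numpy as np

def weighted_profiles(n_t, T, lam, dl):
    """Line-of-sight means along axis 0 of the input grids.

    <n> = sqrt( int n_t^2 dV / int dV )              (density-weighted)
    <T> = int T n_t^2 lam dV / int n_t^2 lam dV      (emissivity-weighted)

    n_t, T : density [cm^-3] and temperature [K] on the grid
    lam    : emissivity Lambda(T, Z) per cell [erg cm^3 s^-1]
    dl     : path length per cell [cm]; with a fixed projected area A,
             dV = A dl, and A cancels in both ratios.
    """
    dl = np.broadcast_to(np.asarray(dl, float), n_t.shape)
    w = n_t ** 2 * lam * dl
    n_mean = np.sqrt((n_t ** 2 * dl).sum(axis=0) / dl.sum(axis=0))
    t_mean = (T * w).sum(axis=0) / w.sum(axis=0)
    return n_mean, t_mean
```

For a uniform medium both averages reduce to the cell values, which provides a quick sanity check of the weighting.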
At $l = 0\degr$, the simulated density profile peaks at the mid-plane and decreases until $z \approx$ 40 pc, beyond which it flattens. This general trend is in reasonable agreement with the observed density profile.
Notably, the observed density profile has a significantly higher peak at low $z$. This may be due partly to the smaller line-of-sight depth adopted by \citet{2019Natur.567..347P} for the two inner data points, and partly to contamination from unresolved stellar objects and non-thermal extended features to the apparently diffuse X-ray emission near the mid-plane \citep{2018ApJS..235...26Z}.
The simulated temperature profile appears bumpy around a mean value of $\sim1.0$ keV. The ``bumps'' are most likely due to consecutive SN shocks propagating upward.
Near the top of the expanding shell the temperature quickly drops to $\sim$0.6 keV.
The observed temperature profile, on the other hand, appears flatter and has a lower value of 0.7--0.8 keV between 20--150 pc.
We note that the observed temperature was derived using a single-temperature spectral model to the underlying plasma having a range of temperatures \citep{2019Natur.567..347P}.
The Galactic center hot ISM is expected to have a somewhat lower temperature than the bubble interior. Inclusion of this hot ISM in the observed spectrum could thus have led to a lower observed temperature.
At $b = 0\fdg7$, the simulated density profile peaks at the eastern and western edges of the bubble shell, which is consistent with Figure~\ref{fig:density}.
However, there is no clear sign of limb-brightening in the observed density profile; an enhanced density is only weakly seen near the eastern edge ($x \approx -60$ pc) but is absent near the western edge ($x \approx 70$ pc). One possibility is that the soft X-ray emission from the denser and cooler western shell has largely dropped out of the observation band (but could have been seen in the 0.5--1.5 keV band, as shown in the left panel of Figure~\ref{fig:xflux}).
The simulated temperature profile shows a roughly inverse ``U''-shape, with values peaking at $\sim$1.25 keV at $x = 20$ pc.
It is noteworthy that the outermost few points in the simulated profile are actually outside the bubble volume, whose values only reflect the unperturbed ISM.
The observed temperature profile, again derived from a spectral fit using a single-temperature model, appears flat around a mean value of 0.8 keV.
\begin{figure*}
\centering
\includegraphics[width=0.472\textwidth]{profilez.eps}
\includegraphics[width=0.472\textwidth]{profilex.eps}
\caption{Density and temperature profiles of \textit{B80I1}. The light blue dots and pink pluses indicate the temperature and the total density in the simulation, respectively. The blue dots and red pluses respectively indicate the temperature and the density from the observations. The observed values are manually estimated from \citet{2019Natur.567..347P}. \textit{Left:} The profile at Galactic longitude $l = 0 \degr$. \textit{Right:} The profile at Galactic latitude $b = 0\fdg7$. Note that the observed density/temperature profiles cover a wider range reaching beyond the bubble volume.}
\label{fig:profile}
\end{figure*}
\citet{2019Natur.567..347P} did not provide an explicit total X-ray luminosity of the chimneys. A rough estimate of this value can be made by adopting a cylinder of 150 pc in both diameter and height, as assumed by \citet{2019Natur.567..347P},
a mean density of 0.1 cm$^{-3}$ and a mean temperature of 1$\times 10^{7}$ K (0.86 keV), which are representative of the X-ray chimneys.
This leads to an estimated 1.5--10 keV luminosity of $\sim$2.8$\times 10^{36}$ erg s$^{-1}$ for the northern chimney, again well matched by the simulated value of 2.0$\times 10^{36}$ erg s$^{-1}$.
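The arithmetic behind this estimate can be reproduced in a few lines; the band emissivity $\Lambda \approx 7\times10^{-24}\rm~erg~cm^{3}~s^{-1}$ for $\sim$1 keV gas is our assumed representative value, not a number quoted above.

```python
import math

PC = 3.086e18                    # parsec in cm
r = h = 75.0 * PC                # northern half of a 150 pc cylinder
volume = math.pi * r ** 2 * h    # cm^3
n = 0.1                          # mean density [cm^-3]
lam = 7e-24                      # assumed 1.5-10 keV emissivity [erg cm^3 s^-1]
l_x = n ** 2 * lam * volume      # L_X ~ n^2 Lambda V
print(f"{l_x:.1e} erg/s")        # -> 2.7e+36 erg/s, the order of the quoted value
```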
For completeness, the synthetic radio and X-ray maps of runs \textit{B80I2}, \textit{B50I1} and \textit{B200I1} are shown in Figure~\ref{fig:fluxe}.
While these maps exhibit some interesting features, it is immediately clear that none of them matches the observed bubble morphology (again approximated by the red dotted line).
\begin{figure*}
\centering
\includegraphics[width=0.323\textwidth]{S_syn_F.eps}
\includegraphics[width=0.323\textwidth]{S_syn_B.eps}
\includegraphics[width=0.323\textwidth]{S_syn_HB.eps}\newline
\includegraphics[width=0.323\textwidth]{L_x_F.eps}
\includegraphics[width=0.323\textwidth]{L_x_B.eps}
\includegraphics[width=0.323\textwidth]{L_x_HB.eps}
\caption{\textit{Upper panels:} Synthetic synchrotron intensity distribution at 1284 MHz. Values lower than 10$^{-5}$ Jy arcsec$^{-2}$ are masked for better visualization. \textit{Lower panels:} Synthetic 1.5--10 keV X-ray intensity distribution. Values lower than 10$^{-9}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ are suppressed to enhance visualization of the faint features. The red dotted line outlines the morphology of the northern radio bubble. The left, middle and right columns show the results of runs \textit{B80I2}, \textit{B50I1} and \textit{B200I1}, respectively.}
\label{fig:fluxe}
\end{figure*}
\section{Discussion} \label{sec:dis}
\subsection{The Origin and Fate of the Galactic Center Radio Bubbles/X-ray Chimneys}
\label{subsec:disrb}
The simulations presented in the previous section show that an outflow driven by sequential SN explosions and collimated by a vertical magnetic field can provide a reasonable explanation for the observed radio bubbles/X-ray chimneys in the Galactic center.
In particular, the simulations can well reproduce the overall morphology, X-ray luminosity and thermal energy of the northern bubble.
This scenario relies on two key ingredients: SN explosions clustering in the nuclear disk to provide a semi-continuous energy input, and a vertical, moderately strong magnetic field to provide the collimation.
Both ingredients are very likely available in the Galactic center.
Indeed, direct evidence for contemporary SN explosions in the Galactic center is provided by at least a few SNRs clearly visible in radio or X-ray images (e.g., \citealp{Ponti2015}).
Moreover, about two hundred emission-line objects have been detected in the Galactic center, most of which are likely evolved massive stars \citep{2012MNRAS.425..884D}. These stars may belong to the same population that gave rise to the SNe responsible for launching the bubbles.
As for the magnetic field, it is widely thought that it is predominantly poloidal in the Galactic center, at least in regions outside the giant molecular clouds \citep{Ferriere2009}.
In this regard, an SNe-driven, magnetically-collimated outflow
should naturally develop in the Galactic center, provided our simulations are correct.
As mentioned in Section~\ref{sec:intro}, a competing driver of a large-scale outflow is the kinetic power from the central SMBH, even though Sgr A* is by no means comparable with a classical AGN.
While our simulations cannot automatically rule out an AGN-driven outflow, they offer useful insight into that case.
Compared to the distributed SN explosions, energy input from the SMBH is highly concentrated. Thus an AGN-driven outflow on the hundred-parsec scale may either acquire a highly elongated shape in the case of a canonical jet-driven outflow (e.g., \citealp{Zhang2020}), or inflate a fat bubble in the case of a more isotropic wind symbiotic with the hot accretion flow onto a weakly accreting SMBH \citep{2015ApJ...804..101Y}.
Magnetic collimation may also shape the wind-blown bubble, but one expects that the resultant structure is again a highly elongated one. Thus matching the morphology of the radio bubbles with an AGN wind-driven outflow may require some fine-tuning, which awaits a detailed investigation.
We now turn to consider the fate of the radio bubbles. In the framework of our simulations, the SNe-driven outflow is necessarily an evolving structure. In fact, at the end of our fiducial simulation, the top of the bubble still expands at a speed of $\sim 600\rm~km~s^{-1}$ (Section~\ref{subsec:formation}).
Provided a continuous energy injection from future SNe, which is quite likely given the evolved massive stars near the disk plane \citep{2012MNRAS.425..884D}, the bubbles should continue to grow and gradually evolve into a more ``chimney''-like structure, as long as a moderately strong magnetic field persists to greater heights.
Conversely, if SNe were temporarily shut off, one expects that the bubble/chimney would ultimately disperse and collapse within a time not much greater than the sound-crossing time (a few hundred kyr).
We have run a test simulation to examine such a case. Specifically, we adopt the same setting as the fiducial simulation, except that SN explosions cease after a time of 200 kyr. It is found that the upper edge of the bubble can still climb to a height of $\sim$190 pc with its accumulated momentum. However, the interior of the bubble, especially its lower portion, begins to collapse soon after the shutoff of the SNe, due to the loss of energy injection against the strong central gravity. In addition, the mean gas temperature inside the bubble gradually declines. Such an effect might bring the simulated temperature profile into better agreement with the observed temperature profile (Figure~\ref{fig:profile}), although we have no evidence that the Galactic center is currently experiencing a substantial drop in the SN birth rate.
It is interesting to ask whether the radio bubbles/X-ray chimneys have a causal relation with the Fermi bubbles \citep{2010ApJ...724.1044S} and eROSITA bubbles \citep{Predehl2020} found on much larger scales.
We note that the age of the radio bubbles inferred from our simulations is only a few hundred kyr, much shorter than the dynamical timescale of a few Myr originally suggested by \citet{2019Natur.573..235H}.
However, \citet{2019Natur.573..235H}'s estimate was based on the assumption of a constant expansion velocity of the bubbles, which is implausible, hence a shorter timescale is expected.
The estimated age of the Fermi bubbles, on the other hand, ranges from 1 Myr \citep{2013MNRAS.436.2734Y} to 1 Gyr \citep{2011PhRvL.106j1102C}.
Thus, in the context of our supernova-based model for the origin of the radio bubbles/chimneys, the radio bubbles would be a dynamically younger and independent structure simply evolving in the interior of the Fermi/eROSITA bubbles, which themselves were formed by older activities in the Galactic center.
Alternatively, as suggested by \citet{2019Natur.567..347P}, the X-ray chimney may be a channel that transports energy from the Galactic center to the high-latitude region currently occupied by the Fermi bubbles.
In this case, the channel should have existed for tens of Myr, so that star formation in the Galactic center can be sufficient to supply the total energy content of the Fermi bubbles, $\sim 10^{56}$ erg \citep{2013Natur.493...66C}.
However, such a picture contradicts the capped morphology of the radio bubbles (the southern bubble is not obviously capped in X-rays; \citealp{2021A&A...646A..66P}), which, according to our simulations, is naturally explained as the expanding shell of a newly born outflow.
This picture may be reconciled if star formation in the Galactic center has been episodic on a timescale of $\sim$10 Myrs \citep{2015MNRAS.453..739K}.
In this case, the ``chimney'' is (re)established by consecutive generations of mini-starbursts and collapses in between.
Of course, over such a long interval, the activity of Sgr A* can also play an important role in contributing to the inflation of the chimneys, especially in view of the fact that it was likely much more active in the recent past \citep{2010ApJ...714..732P, Ponti2013, 2018ApJ...856..180C}.
In a hybrid scenario, Sgr A*, with supernovae and even stellar winds, can simultaneously sustain the ``chimney'' and transport energy to larger scales, implying X-ray emission beyond the edge of the radio bubbles, which is also suggested by \citet{2021A&A...646A..66P}.
\subsection{Origin of the Non-thermal Filaments}
\label{subsec:disNTFs}
The origin of the NTFs has been extensively debated since their discovery nearly four decades ago.
Proposed models for the NTFs include expanding magnetic loops \citep{1988ApJ...330..718H}, induced electric fields \citep{1988ApJ...333..735B, 1989ApJ...343..703M}, thermal instability in relativistic gas \citep{1993A&A...270..416R}, cosmic strings \citep{1986PhRvD..34..944C}, magnetic reconnection \citep{1992A&A...264..493L, 1994ApJ...424L..91S, 1996IAUS..169..247M, BandaBarragan2016, BandaBarragan2018}, analogs of cometary plasma tails \citep{1999ApJ...521..587S}, a turbulent magnetic field \citep{2006ApJ...637L.101B}, stellar winds or SNe of the young star cluster \citep{2003ApJ...598..325Y, 2019ApJ...490..L1}, pulsar wind nebulae \citep{2019MNRAS.489L..28B}, and the tidal destruction of gas clouds \citep{2021MNRAS.501.1868C}.
Of course, a multi-SNe hypothesis has also been suggested \citep{2020PASJ...72L...4S}.
In our simulations, filamentary features resembling the observed NTFs form primarily at the interface of colliding shocks of individual SNe (Figure~\ref{fig:B}).
Magnetic fields are compressed and amplified in these filaments, where particle acceleration (e.g., due to diffusive shock acceleration) is expected to take place.
The Radio Arc also finds a possible counterpart in the simulations, arising from the pile-up of consecutive SN shocks at the sides of the bubble (Figure~\ref{fig:rho}).
Comparing Figure~\ref{fig:radioflux} and Figure~\ref{fig:fluxe}, it appears that an SN-driven outflow evolving in a weaker magnetic field produces more filaments.
This is because a strong magnetic field can more easily confine an SN shock and reduce its chance of encountering other shocks. We note that in the simulation many filaments are indeed one-dimensional structures, i.e., they have a distinct long axis roughly oriented vertically, but some others arise from a projection effect, i.e., a two-dimensional surface viewed edge-on. Such a surface is also the result of colliding shock fronts.
We stress that the moderate resolution of our simulation would smear the appearance of the shock fronts, so we anticipate that additional apparent filaments would show up with higher resolution. The viability of this formation mechanism for the NTFs could be assessed by direct comparison of the cross-sectional profiles of the filaments appearing in the simulations with those of observed NTFs, but a higher resolution simulation is needed for such a comparison.
We note that there are NTFs found outside the radio bubbles \citep{2019Natur.573..235H}. These might have been formed in a past generation of clustered SN explosions and may persist longer than the associated outflow.
Of course, we cannot rule out the aforementioned alternative models for all NTFs.
In reality, the NTFs can have a mixed origin, i.e., different processes, including
SN shocks, stellar winds and pulsar winds can produce seeds of NTFs which are further shaped by the compressed magnetic field or other mechanisms.
\subsection{Strength of the Galactic Center Magnetic Field}
\label{subsec:disul}
The magnetic field is a crucial component of the Galactic center environment.
At present, the average field strength is still quite uncertain. The assumption of energy equipartition between the magnetic field and relativistic particles leads to estimates up to $\sim$ 1 mG in the brightest NTFs and as low as 10~${\mu}$G in the more diffuse background.
\citet{2010Natur.463...65C} derived a lower limit of $\sim 50~\mu$G based on the diffuse $\gamma$-ray flux and suggested a typical value of $\sim 100~\mu$G in the central 400 pc region.
In our simulation \textit{B50I1}, which adopts a field strength of 50 $\mu$G, an outflow still develops, although the resultant bubble appears fatter due to the reduced magnetic confinement compared to the fiducial simulation (Section~\ref{subsec:compset}).
This lends some support to the above lower limit.
On the other hand, simulation \textit{B200I1}, which assumes a field strength of $200~\mu$G, is obviously inconsistent with the observation (Figure~\ref{fig:fluxe}).
This conclusion holds even if the other parameter, the SN birth rate, were adjusted within a reasonable range.
Qualitatively, at a lower SN birth rate, the shock and ejecta of individual SNe would be less resistant to the magnetic pressure, thus they are less likely to evolve into a mutual network. The resultant outflow hardly takes a bubble shape, rather it would consist of many barrel-like structures, through which individual SN ejecta propagate.
Only a much higher SN birth rate can counteract the magnetic pressure, but this would be inconsistent with the currently accepted star formation rate in the Galactic center ($\sim 0.1\rm~M_\odot~yr^{-1}$).
Therefore, our simulations provide a meaningful constraint on the average magnetic field on 100 pc scales in the Galactic center, $50\rm~{\mu}G \lesssim B_0 \lesssim 200~{\mu}G$.
Our fiducial run \textit{B80I1} demonstrates localized magnetic field amplification across the bubble, reaching a maximum field strength of 175 $\mu$G.
It is expected that the global magnetic field would gradually restore to the initial configuration
after the termination of clustered SN explosions and the dispersal/collapse of the outflow.
\subsection{Caveats} \label{subsec:caveat}
Despite the satisfactory reproduction of the major observed properties of the radio bubbles/X-ray chimneys, some notable discrepancies exist between our simulation results and the observations, which warrant the following remarks.
The observed edge-brightened radio bubbles have a low-surface-brightness interior, while in our simulation the edge-interior contrast is less significant.
A possible cause is that we have ignored synchrotron cooling.
Using a magnetic field of 20 $\mu$G, \citet{2019Natur.573..235H} derived a synchrotron cooling time of 1--2 Myr by assuming that the electron energy density distribution has a power-law index of 2.
Based on the same method, we estimate a cooling time of 250 kyr for 80 $\mu$G, which is comparable to the evolution time of the bubble in our simulation. Hence the relativistic electrons produced at the early stage and now filling the bubble interior should be subject to radiative cooling, an effect that is not taken into account but otherwise would enhance the edge-interior contrast.
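The field-strength scaling behind this estimate can be made explicit: electrons observed at a fixed frequency have $\gamma \propto (\nu/B)^{1/2}$ and $t_{\rm cool} \propto 1/(\gamma B^2)$, hence $t_{\rm cool} \propto B^{-3/2}$. A minimal check, starting from the upper value of 2 Myr at 20 $\mu$G:

```python
t_20 = 2.0e6                         # yr, synchrotron cooling time at 20 uG
b_20, b_80 = 20.0, 80.0              # field strengths in uG
t_80 = t_20 * (b_80 / b_20) ** -1.5  # fixed observing frequency: t ~ B^(-3/2)
print(t_80)                          # -> 250000.0, i.e. 250 kyr
```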
An alternative and more likely cause is the absence of a cool gas shell in our simulation. The presence of cool gas (with a temperature of $\sim 10^4$ K) in the outer part of the GCL has been known for some time \citep{2010ApJ...708..474L, 2019ApJ...875...32N}.
This cool gas is not found in our simulations, owing to the very moderate radiative cooling even in the dense shell of post-shock gas.
This is also the reason why the free-free emission predicted by our simulation is negligible compared to the synchrotron (Section~\ref{subsec:compobs}).
Hence the detected cool gas probably has an external origin that is missing in the framework of our simulation.
Indeed a substantial amount of both cool and cold gas exist in the NSD/CMZ \citep{ 2007A&A...467..611F}, and part of this gas may be swept into the bubble shell and/or entrained into the bubble interior.
For example, \citet{2021A&A...646A..66P} argued that a gas cloud associated with the bright 25 $\mu$m source AFGL5376 has been accelerated and is now defining part of the wall of the bubble.
An additional source of cool gas is the stellar wind of the massive stars distributed in the nuclear disk.
In principle, the Galactic center outflow may also be driven by stellar winds \citep{1992ApJ...397L..39C}.
Stellar winds as an additional energy and momentum source have not been included in our simulation.
We can give a rough estimate of the collective energy input from the massive stars in the Galactic center.
The stellar winds should be dominated by the Wolf–Rayet stars, which have a typical mass loss rate of $10^{-5}\rm~M_{\odot}~yr^{-1}$ and a wind velocity of $2000\rm~km~s^{-1}$.
Thus the $\sim$200 evolved massive stars found by \citet{2012MNRAS.425..884D} in the nuclear disk have a total kinetic power of $2.5\times10^{39}\rm~erg~s^{-1}$ and would release a kinetic energy of $2.6\times10^{52}\rm~erg$ in 330 kyr.
The massive stars in the central parsec provide an additional kinetic energy of $3\times10^{51}\rm~erg$ in 330 kyr, assuming a collective mass loss rate of $10^{-3}\rm~M_{\odot}~yr^{-1}$ and a wind velocity of $1000\rm~km~s^{-1}$ \citep{1997A&A...325..700N, 2004ApJ...613..322Q}.
Therefore, the energy input from the massive stars is about one order of magnitude smaller than that of the SNe in our simulation.
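The bookkeeping above involves only unit conversions and can be reproduced directly (all inputs are the values quoted in the text):

```python
M_SUN = 1.989e33   # solar mass [g]
YR = 3.156e7       # year [s]
KMS = 1.0e5        # km/s in cm/s

def wind_power(mdot_msun_yr, v_kms):
    """Kinetic luminosity 0.5 * mdot * v^2 [erg/s]."""
    return 0.5 * (mdot_msun_yr * M_SUN / YR) * (v_kms * KMS) ** 2

l_wr = 200 * wind_power(1e-5, 2000)   # ~2.5e39 erg/s from 200 WR-like stars
e_wr = l_wr * 330e3 * YR              # ~2.6e52 erg released in 330 kyr
l_cp = wind_power(1e-3, 1000)         # collective winds in the central parsec
e_cp = l_cp * 330e3 * YR              # ~3e51 erg in 330 kyr
```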
Nevertheless, massive stars may start launching strong winds a few Myr before their core collapse, significantly shaping the ambient gas into which the bubbles expand.
A self-consistent implementation of the stellar winds requires a reliable stellar evolution model and a much higher resolution, and thus awaits future work.
\section{Summary} \label{sec:sum}
The recently discovered radio bubbles and X-ray chimneys in the Galactic center both point to a dynamically young outflow.
In this work we have used three-dimensional MHD simulations, carefully tailored to the physical conditions of the Galactic center, to explore the scenario in which an SN-driven, magnetically-collimated outflow produces the observed bubbles/chimneys.
The main results and implications of our study include:
\begin{enumerate}
\item An SN-driven, magnetically-collimated outflow is naturally formed in almost all simulations performed. The morphology, X-ray luminosity and thermal energy of the radio bubbles/X-ray chimneys can be well reproduced for a reasonable choice of two parameters, namely the SN birth rate and the strength of the vertical magnetic field. Meanwhile, we have examined the effect of varying these two parameters on the formation of the bubble.
\item Dense filamentary features are seen both at the edge and in the interior of the simulated bubble, which are the sites of colliding shocks of individual SNe. This offers a plausible explanation for at least a fraction of the observed NTFs and the Radio Arc.
\item In the framework of our simulations, the magnetic field in the Galactic center is likely to have a strength between 50--200 $\mu$G, consistent with previous estimates based on independent arguments.
\end{enumerate}
In conclusion, we are able to provide a viable formation mechanism for the radio bubbles/X-ray chimneys. This invites future work to explore the possible physical connection between Galactic outflows on various scales.
\acknowledgements
This work is supported by the National Key Research and Development Program of China (grant 2017YFA0402703) and National Natural Science Foundation of China (grant 11873028).
We acknowledge the computing resources of Nanjing University, Purple Mountain Observatory and National Astronomical Observatories of China.
We thank Miao Li and Feng Yuan for their helpful discussions,
and G. Ponti and I. Heywood for their communications on the estimation of the X-ray luminosity and radio flux density, respectively.
\bibliographystyle{aasjournal}
\section{Introduction}
\noindent The Airy function was first introduced as an accelerating, undistorted solution of the time-dependent Schr\"{o}dinger equation in free space \cite{Berry}. The realisation of the Airy function in the optical domain by Siviloglou et al. \cite{Siviloglou} opened up new avenues in the study of accelerating optical beams and pulses. After the experimental observation of the truncated, finite-energy version of the Airy function as a phase-modulated Gaussian beam, many works have been reported exploring the unique properties of the Airy beam, such as self-acceleration, quasi-diffraction-free propagation and self-healing \cite{Siviloglou,Siviloglou_b,Broky}. Exploiting the isomorphism between spatial diffraction and temporal dispersion, the temporal counterpart of the finite-energy Airy beam is realised as a time-truncated finite-energy Airy pulse (FEAP) \cite{Saari}. Airy pulses are waveforms that travel undistorted in linear dispersive media where the effect of higher-order dispersion is negligible, and they follow a parabolic trajectory in time. The trajectory of the pulse depends on the dispersion characteristics of the waveguide through which the pulse propagates. Although the FEAP is not an exact solution of the dispersion equation, it keeps the unique properties of the Airy function intact over a finite distance. After the discovery of the self-healing Airy pulse, several interesting works have been reported in the temporal domain, such as absolute focusing under third-order dispersion (TOD) \cite{driben,Shaarawi}, soliton shedding from a high-power Airy pulse \cite{Fattal}, mimicking an event horizon through Airy--soliton collision \cite{Yang}, generation of new frequency components by Airy--soliton collision \cite{Roy}, and supercontinuum generation \cite{Ament}. The description of Airy functions in the time domain also opens up exciting applications ranging from bioimaging and nano-machining to plasma physics \cite{Courvoisier,Englert,Gotte,javier,sarpe,Thomas}.
The works mentioned above have mostly been carried out for longitudinally static chromatic dispersion parameters, where the possibility of manipulating the pulse shape and its trajectory is limited. In this work, we explore the properties of the FEAP under a longitudinally varying group velocity dispersion (GVD) profile. We consider linear as well as periodic variation of the GVD over space and investigate its consequences for the Airy dynamics in the time frame. Periodic modulation of the dispersion is common in optical fibers whose core diameter varies periodically with fiber length; such fibers are called \textit{dispersion oscillating fibers} (DOF) \cite{Biancalana,Droques,Finot,Mussot}. In a DOF the optical Kerr nonlinearity is also weakly modulated, which leads to additional modulation instability (MI) side-band pairs \cite{Mussot,Trillo}. In the nonlinear domain the soliton dynamics also becomes interesting when the dispersion oscillates periodically over the waveguide length. The longitudinal oscillation of dispersion in a fiber results in controlled soliton fission \cite{Sysoliatin,Sysoliatin_b} and also leads to multiple quasi-phase-matched dispersive waves \cite{Wright,Conforti}, resulting in tailor-made supercontinuum generation \cite{Hickstein}. Very recently, the optical analogue of the dynamical Casimir effect was observed in a varying-dispersion fiber \cite{Vezzoli}. While there is a substantial body of seminal work on optical solitons, the study of Airy-like pulses in longitudinally varying dispersion is to some extent limited. Very few attempts were made previously to understand the behaviour of the FEAP in an oscillating-GVD environment \cite{Bai,Driben_b}. These studies are mainly based on numerical computation, which may obscure key characteristics hidden in the theoretical solution. The dynamics of the FEAP is far more complicated in a realistic setting and requires an extensive investigation.
To capture the behaviour of the FEAP in realistic systems, we design waveguides with longitudinally varying dispersion profiles. Imposing linear and periodic geometry on Si-based waveguides, we obtain a GVD that varies linearly or oscillates around an average value over distance. Exploiting COMSOL simulations, we demonstrate that if the width of the waveguide varies linearly with propagation distance, the GVD becomes a linear function of distance. The usual ballistic temporal trajectory of the Airy pulse is significantly modified by the varying dispersion, and one can even obtain a quasi-linear path. We obtain a complete analytical solution for the moving Airy pulse in a varying-dispersion environment and explain the phenomenon with the support of numerical simulation. The dynamics of the FEAP becomes more complicated when it encounters an oscillating GVD. Waveguides with periodically varying widths offer an oscillating dispersion which radically changes the behaviour of the Airy pulse, especially when TOD is non-vanishing. TOD leads to a singularity in the Airy pulse solution, because of which the temporal distribution of the pulse flips \cite{driben}. Under periodic TOD one can witness multiple flippings of the waveform that take place at periodic intervals. At the flipping points the Airy pulse loses its characteristics and focuses tightly in the neighbourhood of the flipping zone. The entire propagation dynamics of the FEAP under periodic TOD is investigated by solving the linear dispersion equation analytically in different zones. The set of solutions reveals that under oscillating TOD the pulse evolves through periodic focusing, and one can achieve selective absolute focusing of the pulse by choosing a suitable dispersion modulation factor. Absolute focusing is a unique phenomenon where the entire energy of the Airy pulse is confined tightly.
From an application point of view, selective focusing may be interesting, as the entire energy of the time-truncated Airy pulse can be delivered at a specific output.
\section{Dynamics of FEAP under linear GVD variation}
\noindent The wave number $\beta(\omega)$ of an optical wave is in general a function of frequency ($\omega$) and can be expanded in a Taylor series around the carrier frequency ($\omega_0$) as $\beta(\omega)=\beta_0+\beta_1(\omega-\omega_0)+\frac{1}{2}\beta_2(\omega-\omega_0)^2+...$, where $\beta_0=\beta(\omega_0)$ and $\beta_j=\frac{d^j \beta (\omega)}{d \omega^j}|_{\omega=\omega_0}$ $(j=1,2,3,4,...)$. The GVD $\beta_2$ is an intrinsic property of an optical waveguide and can be manipulated by tailoring the waveguide geometry. Si-based planar waveguides are found to be ideal candidates for controlling the dispersion profile in an arbitrary way. For a linear GVD variation over space we can model the dispersion profile as $\beta_2(z)=\beta_{20}+gz$, where $\beta_{20}$ is the GVD parameter at the input, which depends on the launching wavelength of the pulse. The parameter $g$ determines the rate of change of $\beta_2$ with the propagation distance $z$. Under such a dispersion profile the dynamics of a FEAP $u(\xi,\tau)$ can be modelled as \cite{Agarwal},
\begin{equation} \label{q1}
i\frac{\partial u}{\partial \xi}=\frac{{{\delta}_{2}(\xi)}}{2}\frac{{{\partial }^{2}}u}{\partial {{\tau}^{2}}}-i\widetilde{\alpha}u,
\end{equation}
\noindent where the parameters are normalised as $u=U/\sqrt{P_0}$, $\xi=z{L_D}^{-1}$, $\tau=(t-z{v_g}^{-1})/t_0=T/{t_0}$. Here $U$, $P_0$ and $v_g$ represent the optical field, input peak power and group velocity, respectively, in real units. The width of the main lobe of the FEAP is defined by $t_0$, which we take to be of the order of $100$ fs. $z$ and $t$ represent the space and time variables in physical units. The linear loss is normalised as $\widetilde{\alpha}=\alpha L_D$; for Si-based waveguides $\alpha \sim$ 1 dB/cm \cite{zou,Mashanovich}. The dispersion length $L_D$ is defined as $L_D=t_0^2/|\beta_{20}|$. The GVD parameter is also rescaled as $\delta_2=sgn(\beta_{20})+\chi\xi$, where $\delta_2=\beta_2/|\beta_{20}|$ and $\chi=g\frac{L_D}{|\beta_{20}|}=g{t_0^2}/{|\beta_{20}|^2}$.
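As a quick numerical sanity check of this rescaling, the sketch below evaluates $L_D$ and $\chi$ from representative physical values. The value of $\beta_{20}$ is an assumption chosen so that $L_D \approx 2$ mm, as quoted below for the proposed waveguides; $g \approx 270$ ps$^2$/m$^2$ is the quoted GVD slope.

```python
# Normalisation of the GVD profile delta_2(xi) = sgn(beta_20) + chi*xi.
# beta20 is an illustrative assumption (~ -4 ps^2/m) chosen to give
# L_D ~ 2 mm; g is the GVD slope quoted for the tapered waveguides.

t0 = 90e-15          # main-lobe width of the FEAP [s]
beta20 = -4.05e-24   # input GVD [s^2/m] (assumed)
g = 270e-24          # GVD slope [s^2/m^2] (~270 ps^2/m^2)

L_D = t0**2 / abs(beta20)      # dispersion length [m]
chi = g * t0**2 / beta20**2    # normalised GVD rate (dimensionless)

print(f"L_D = {L_D*1e3:.2f} mm, chi = {chi:.3f}")
```

With these values the normalised GVD rate comes out of order $10^{-1}$, i.e.\ a weak but non-negligible linear variation over one dispersion length.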
\subsection{Waveguide Description}
\noindent To investigate the dynamics of the Airy pulse under varying dispersion we consider a Si-based slab waveguide whose GVD profile can be tailored efficiently by manipulating the waveguide geometry. It is well known that the geometry of a slab waveguide is mainly controlled by two parameters: (i) the slab height ($h$) and (ii) the slab width ($w$). One may achieve the desired GVD profile simply by manipulating $h$ and $w$. To obtain a linear spatial variation of $\beta_2(z)$ at a fixed wavelength, we design a waveguide whose width $w$ varies linearly with the propagation length $z$ as $w=w_0+\epsilon z$, where $\epsilon$ denotes the rate of change of the width along the $z$ axis. In Fig.~\ref{Figure1} we show schematic diagrams of two distinct waveguide types: type-1, where the width increases linearly (plot a), and type-2, where the width decreases linearly (plot b) with propagation distance $z$. For the type-1 waveguide the cross-sectional dimensions are $w \times h$ = 620 nm $\times$ 800 nm at the input and $w \times h$ = 2120 nm $\times$ 800 nm at the output. For the type-2 waveguide the dimensions are $w \times h$ = 1800 nm $\times$ 800 nm at the input and $w \times h$ = 300 nm $\times$ 800 nm at the output.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.38in 0.0in 0.7in 0.1in,clip=true, width=92mm]{fig1}
\vspace{-1em}
\caption{ Schematic diagram of a Si-based slab waveguide with varying width ($w$). (a) Width is increasing and (b) decreasing with distance. The field distribution of the fundamental mode at $\lambda=2.25$ $\mu$m is also depicted at three different $z$ coordinate. In plot (c) and (d) we demonstrate the linear variation of $\beta_2$ (which is calculated at $\lambda=2.25$ $\mu$m) with propagation distance $z$ and width $w$ for two waveguides. }
\label{Figure1}
\end{center}
\end{figure}
\noindent We consider $\epsilon = \pm 15\times10^{-5}$, which leads to a linear change in the GVD as shown in Fig.~\ref{Figure1}(c) and (d). For the proposed waveguides the rate of GVD change comes out to be $g \approx \pm 270$ ps$^2$/m$^2$. The height ($h$) of the waveguide remains fixed at $800$ nm in all cases. For the type-1 and type-2 waveguides the dispersion length ($L_D$) becomes $\approx 2$ mm when we consider $t_0 = 90$ fs. We also choose the operating wavelength $\lambda_0=2.25$ $\mu$m to avoid the detrimental two-photon absorption (TPA) effect, which dominates for $\lambda <2.1$ $\mu$m in Si-based waveguides \cite{Bristow}. In order to ensure that there is no nonlinear effect we compare the dispersion length ($L_D$) and the nonlinear length ($L_{NL}=1/(\gamma_r P_0)$) for the waveguides. The nonlinear parameter is defined as $\gamma_r=2\pi n_2/(\lambda_0 A_{eff})$, where $n_2$ is the Kerr coefficient; for silicon $n_2\approx3\times10^{-18}$ m$^2$W$^{-1}$. The effective area of the confined mode is defined as $A_{eff}=(\iint \limits_{-\infty}^{+\infty}|u(x,y)|^2dxdy)^2/\iint \limits_{-\infty}^{+\infty}|u(x,y)|^4dxdy$. For the type-1 waveguide, $A_{eff}$ at the input and output is $\approx 0.25$ $\mu$m$^2$ and $\approx 1.3$ $\mu$m$^2$, respectively, which leads to $L_{NL}$ in the range of $0.30-1.55$ m (for $P_0=100$ mW). Similarly, for the type-2 waveguide the values of $A_{eff}$ calculated at the two ends are $\sim 1$ $\mu$m$^2$ and $0.18$ $\mu$m$^2$, which leads to $L_{NL}\approx 0.20-1.2$ m (for $P_0=100$ mW). For the proposed waveguides, $L_D\approx 2.0$ mm, which leads to the condition $L_{NL}/L_D \gg 1$ throughout the waveguide length. This condition ensures that at the power level $P_0=100$ mW the proposed waveguides behave as a linear medium.
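The linearity criterion can be reproduced directly from the quoted numbers. The sketch below evaluates $\gamma_r$ and $L_{NL}$ at the two end-face effective areas of the type-1 waveguide, which bound $L_{NL}$ along the taper:

```python
import math

# Linear-propagation check L_NL/L_D >> 1 for the type-1 waveguide,
# using the parameters quoted in the text.
lam0 = 2.25e-6   # operating wavelength [m]
n2 = 3e-18       # Kerr coefficient of Si [m^2/W]
P0 = 100e-3      # peak power [W]
L_D = 2.0e-3     # dispersion length [m]

for A_eff in (0.25e-12, 1.3e-12):   # input / output effective areas [m^2]
    gamma_r = 2 * math.pi * n2 / (lam0 * A_eff)  # nonlinear parameter [1/(W m)]
    L_NL = 1 / (gamma_r * P0)                    # nonlinear length [m]
    print(f"A_eff = {A_eff*1e12:.2f} um^2: L_NL = {L_NL:.2f} m, "
          f"L_NL/L_D = {L_NL/L_D:.0f}")
```

Both ends give $L_{NL}$ of the order of a metre, i.e.\ two to three orders of magnitude above $L_D$, consistent with the quoted $0.30$--$1.55$ m range.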
\subsection{Dynamics of FEAP and trajectory manipulation}
We use a FEAP as input of the form $u(0,\tau)=Ai(\tau)\exp(a\tau)$, where $a$ is the truncation parameter that truncates the infinite-energy pulse to a practically realisable finite energy pulse. The general solution of the governing equation (Eq.~\ref{q1}) for a truncated Airy pulse can be given as,
\begin{equation}\label{q2}
u(\xi,\tau)=\exp(a^3/3-\widetilde{\alpha}\xi)Ai(b-n^2)\exp i\left(\frac{2}{3}n^3-nb\right)
\end{equation}
where $b=(\tau-a^2)$ and $n=ia-\frac{\xi}{2}+\chi\frac{\xi^2}{4}$. The dynamics of the FEAP is illustrated in Fig.~\ref{Figure2}, where we demonstrate the density distribution of the propagating pulse for different values of the GVD rate $\chi$. As illustrated in the density plots, the parameter $\chi$ significantly influences the trajectory and final temporal position of the propagating Airy pulse. The Airy pulse does not follow the usual ballistic trajectory when the width of the waveguide decreases or increases with distance, which corresponds to a non-zero $\chi$ (Fig.~\ref{Figure2}(b)-(d)). The temporal position of the main lobe ($\tau_p$) evolves as,
\begin{equation}\label{q3}
\tau_p (\xi)=\tau_{0p}+\frac{ \xi^2}{4}\left(\frac{\chi\xi}{2}-1\right)^2
\end{equation}
where $\tau_{0p}\approx -(3 \pi/8)^{2/3}$ is the initial temporal position of the primary lobe of the pulse. Eq.~(\ref{q3}) provides a theoretical estimate of the trajectory of the main lobe of the FEAP. In Fig.~\ref{Figure2} we demonstrate the overall dynamics of the Airy pulse, which we obtain by solving Eq.~(\ref{q1}) numerically using the split-step Fourier method \cite{Agarwal}. From the figures it is evident that the dynamics of the FEAP is affected significantly by the GVD rate $\chi$. The usual ballistic trajectory of the Airy pulse deforms under varying GVD, and the trajectory of the main lobe can be controlled through $\chi$. Note that $\chi$ can be positive or negative, and by changing its numerical value one can manipulate the trajectory. The Airy pulse decelerates more when the decreasing rate of the waveguide width is large (see plots (b) and (c)). It is obvious from Eq.~\eqref{q3} that the Airy pulse will always decelerate for $\chi<0$. However, the usual parabolic Airy dynamics is almost lost when $\chi>0$, and we observe a quasi-linear trajectory (see plot (d)). It is interesting to note that, for a waveguide of length $L$, the main lobe regains its initial position at the output for $\chi=2/L$. We superimpose the analytically obtained trajectory of the main lobe (black dashed lines), based on Eq.~(\ref{q3}), on the numerical mesh plots and obtain a perfect agreement. In the top panel of Fig.~\ref{Figure2} we depict the shape of the Airy pulse at the output, which we obtain numerically (shaded curve). The analytical solution of the propagating truncated Airy pulse (see Eq.~(\ref{q2})) envelops the shaded curve (dashed line). We also compare the dynamics of a $sech$ pulse under varying dispersion in Fig.~\ref{Figure2}(e) and (f). The variation of the GVD parameter does not affect the trajectory of the $sech$ pulse, which only experiences a temporal broadening.
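The return condition $\chi=2/L$ follows immediately from Eq.~(\ref{q3}), since the factor $(\chi\xi/2-1)$ vanishes at $\xi=L$; a minimal sketch:

```python
import math

# Main-lobe trajectory of Eq. (3): tau_p(xi) = tau_0p + (xi^2/4)(chi*xi/2 - 1)^2.
# For chi = 2/L the bracket vanishes at xi = L, so the lobe returns to its
# initial temporal position at the waveguide output.
tau0p = -(3 * math.pi / 8) ** (2 / 3)

def tau_p(xi, chi):
    return tau0p + (xi**2 / 4) * (chi * xi / 2 - 1) ** 2

L = 10.0
assert abs(tau_p(L, 2 / L) - tau0p) < 1e-12   # lobe back at its input position
assert tau_p(L, 0.0) - tau0p == L**2 / 4      # static GVD: ballistic shift xi^2/4
```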
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.0in 0.9in 0.0in 0.0in,clip=true, width=90mm]{fig2_a}
\includegraphics[trim=0.0in 2.4in 0.1in 1.2in,clip=true, width=90mm]{fig2_b}
\vspace{-2em}
\caption{Temporal density plots of the FEAP for different values of $\chi$: (a) $\chi=0$, (b) $\chi=-0.2$, (c) $\chi=-0.4$ and (d) $\chi=0.5$. In the upper panels of the figures we plot the analytical solution (black dashed lines) enclosing the numerical solution (pink shade). The analytical expression for the trajectory of the main lobe, Eq.~\ref{q3}, is depicted (dashed line) on the mesh plots. We also compare the trajectory of a sech pulse under linearly varying GVD for (e) $\chi=1$ and (f) $\chi=-1$.}
\label{Figure2}
\end{center}
\end{figure}
We know that the energy distribution of a FEAP does not remain intact while propagating inside an optical medium, as it is not a natural solution of the dispersive system. A constant decay of the peak power of a FEAP is incurred through the truncation parameter $a$. The FEAP also experiences a linear material loss ($\alpha$), which is typically $\sim$ 0.6 dB/cm for Si \cite{zou,Mashanovich}. From an application point of view, it is desirable that the Airy pulse retain its shape and power level at the output. We observe that the rate of energy loss of a propagating FEAP can also be manipulated through dispersion engineering.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.0in 1.4in 0.0in 1.2in,clip=true, width=88mm]{fig3}
\vspace{-1em}
\caption{(a) The variation of the peak power of the FEAP ($P_{p}$) with $\xi$ for different values of $\chi$. The solid lines represent the analytical form whereas the dots are numerical data points. It can be seen that for $\chi=0.2$ the attenuation of $P_{p}$ is minimal. (b) The variation of $\chi_c$ for waveguides of different length $L$. The blue solid line represents the analytical result whereas the red solid dots show the corresponding numerical results.}
\label{Figure3}
\end{center}
\end{figure}
In Fig.~\ref{Figure3}(a) we demonstrate the attenuation of the peak power ($P_{p}$) of the main lobe of the FEAP with distance, taking the linear loss into account as well. It is observed that the rate of attenuation differs for different values of $\chi$. For a particular value of $\chi$ the peak power reduces least ($\chi=0.2$ in Fig.~\ref{Figure3}(a)), while in all other cases the power attenuates at a relatively higher rate. The variation of $P_{p}$ with propagation distance $\xi$ can be written in the form $P_{p}(\chi,\xi)=P_{p}^{(0)}e^{-\Sigma(\xi)}$, where $P_{p}^{(0)}$ is the peak power at the input and $\Sigma(\xi)=2\widetilde{\alpha}\xi+a\frac{\xi^2}{2}(\frac{\chi \xi}{2}-1)^2-\frac{2}{3}a^3$. From this expression it is obvious that the peak power attenuates monotonically due to the material loss. However, the overall attenuation can be engineered through $\chi$. Minimizing the decay factor $\Sigma$ with respect to $\chi$, we obtain the optimal relation $\chi_c=\frac{2}{L}$: for a given waveguide length ($L$) we can always find a critical GVD rate $\chi_c$ for which the power decay is minimal. In Fig.~\ref{Figure3}(a) we illustrate the variation of $P_{p}$ for a waveguide of length $L=10$ and obtain the minimal power decay for $\chi_c=0.2$, which is consistent with the theoretical prediction. We extend our simulation to different waveguide lengths and numerically obtain the corresponding $\chi_c$ (dots) for which the maximum power transfer occurs. The numerical results (dots) corroborate well the theoretical expression (solid line) obtained by minimizing $\Sigma$, as shown in Fig.~\ref{Figure3}(b).
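The optimum $\chi_c=2/L$ can also be recovered by minimizing $\Sigma$ numerically; a minimal sketch with illustrative values of $a$, $\widetilde{\alpha}$ and $L$:

```python
# Minimise the decay factor
#   Sigma(L) = 2*alpha*L + (a*L^2/2)*(chi*L/2 - 1)^2 - (2/3)*a^3
# over chi by a coarse grid scan, confirming the analytical optimum
# chi_c = 2/L. The parameter values are illustrative.
a, alpha, L = 0.1, 0.05, 10.0   # truncation, normalised loss, waveguide length

def Sigma(chi):
    return 2 * alpha * L + (a * L**2 / 2) * (chi * L / 2 - 1) ** 2 - (2 / 3) * a**3

chis = [i * 1e-4 for i in range(10000)]   # scan chi in [0, 1)
chi_c = min(chis, key=Sigma)
print(f"chi_c = {chi_c:.4f}, 2/L = {2/L:.4f}")
```

The loss term $2\widetilde{\alpha}\xi$ is independent of $\chi$, so the minimum is set entirely by the quadratic factor, which is why $\chi_c$ does not depend on $a$ or $\widetilde{\alpha}$.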
\section{Dynamics of FEAP under Periodic GVD}
We have demonstrated that the geometry of the waveguide affects the dynamics of the FEAP through dispersion. One can thus think of the waveguide geometry as an effective tool to manipulate the Airy dynamics. We extend this idea in this section, where we investigate the propagation properties of a FEAP inside a waveguide with periodic width variation. A periodically varying waveguide width leads to an oscillating GVD over distance \cite{Bai}. We study the Airy dynamics for oscillating GVD as well as under TOD, where the pulse experiences periodic singularities.
\subsection{Waveguide description}
\noindent A periodic variation of the waveguide width ($w$) leads to an oscillating GVD profile \cite{Bai}. We design the waveguide considering the width variation $w=w_0+\epsilon \cos(z/z_0)$, where $w_0=870$ nm and the modulation strength is $\epsilon=800$ nm. The period of the oscillation is $z_0=500$ $\mu$m. Depending on the sign of $\epsilon$, we can consider two types of waveguide. In Fig.~\ref{Figure4}, for type-1 (plot a), $\epsilon$ is positive and the width first increases and then decreases, while for type-2 (plot b) the opposite occurs with negative $\epsilon$. In the design we keep the height of the waveguide fixed at $800$ nm. For the type-1 and type-2 waveguides we calculate the GVD and TOD profiles of the fundamental mode using the commercial COMSOL software; they exhibit the sinusoidal variation shown in Fig.~\ref{Figure4}.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.8in 0.00in 1.2in 0.2in,clip=true, width=80mm]{fig4_a}
\includegraphics[trim=0.8in 0.00in 0.8in 0.2in,clip=true, width=80mm]{fig4_b}
\vspace{-1em}
\caption{ Schematic diagram of a Si-based slab waveguide with oscillating width ($w$). (a) Width is varying periodically with distance starting from a lower value to a higher value, (b) the width decreases first and then increases with distance. The field distribution of the fundamental mode at $\lambda=2.25$ $\mu$m is also depicted at three different $z$ coordinate. In the plots we demonstrate the periodic variation of $\beta_2$ and $\beta_3$ (which is calculated at $\lambda=2.25$ $\mu$m) with propagation distance $z$. }
\label{Figure4}
\end{center}
\end{figure}
\subsection{Dynamics of FEAP under periodic GVD}
In the previous section we investigated the dynamics of a FEAP for a waveguide whose dispersion varies linearly along the propagation direction $z$. If instead the width of the waveguide varies periodically with its length, the GVD parameter becomes periodic \cite{Bai}. In this case the form of $\beta_2$ can be written as $\beta_2(z)=\bar{\beta}_{20}+f \cos(\bar{\mu}z)$, where the parameters $f$ and $\bar{\mu}$ account for the strength and frequency of the periodicity, respectively, and $\bar{\beta}_{20}$ is the average value of the GVD. The GVD parameter can be rescaled in normalised units as $\delta_2(\xi)=sgn(\bar{\beta}_{20})+\chi \cos(\mu \xi)$, where $\chi=f/|\bar{\beta}_{20}|$, $\mu=\bar{\mu}L_D$ and $L_D=t_0^2/|\bar{\beta}_{20}|$. Solving the governing equation (Eq.~\ref{q1}) with this normalised form of the GVD parameter, the solution comes out in a form similar to Eq.~\ref{q2},
\begin{equation}\label{q4}
u(\xi,\tau)=\exp(a^3/3-\widetilde{\alpha}\xi)Ai(b-m^2)\exp i\left(\frac{2}{3}m^3-mb\right)
\end{equation}
where $m=ia-\frac{\xi}{2}+\frac{\chi}{2\mu}\sin(\mu\xi)$. The propagating pulse now has more degrees of freedom, and its trajectory can be manipulated by changing the amplitude ($\chi$) and period ($\mu$) of the GVD modulation. In Fig.~\ref{Figure5} we demonstrate the density plots of the propagating FEAP for different sets of $\chi$ and $\mu$. The trajectory of the FEAP in the presence of oscillating GVD can be derived as,
\begin{equation}\label{q5}
\tau_p=\tau_{0p}+\frac{\xi^2}{4}+\frac{\chi}{2\mu}\sin(\mu\xi)\left(sgn(\bar{\beta}_{20})\xi+\frac{\chi}{2\mu} \sin(\mu\xi)\right).
\end{equation}
Due to the periodic variation of the GVD, the temporal position of the Airy pulse now oscillates. In Fig.~\ref{Figure5} we superimpose the theoretically obtained trajectory (black dashed lines) using Eq.~\eqref{q5}, which agrees well with the numerical results. The periodic variation of the GVD parameter does not affect the trajectory of a $sech$ pulse, which only experiences temporal broadening as it moves through the waveguide (Fig.~\ref{Figure5}(e)-(f)).
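The oscillatory trajectory of Eq.~\eqref{q5} is straightforward to evaluate; the sketch below also confirms that at $\xi=n\pi/\mu$ the sine term vanishes, so the $\chi=0$ ballistic trajectory is recovered at those points:

```python
import math

# Main-lobe trajectory under periodic GVD, Eq. (5):
#   tau_p = tau_0p + xi^2/4 + s*(sgn(beta20)*xi + s),  s = (chi/2mu) sin(mu*xi).
tau0p = -(3 * math.pi / 8) ** (2 / 3)

def tau_p(xi, chi, mu, sgn=1):
    s = (chi / (2 * mu)) * math.sin(mu * xi)
    return tau0p + xi**2 / 4 + s * (sgn * xi + s)

# At xi = n*pi/mu the modulation term vanishes and the chi = 0
# trajectory is recovered, matching the peak-power revival points.
mu = 3.0
xi = math.pi / mu
assert abs(tau_p(xi, 1.0, mu) - tau_p(xi, 0.0, mu)) < 1e-12
```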
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=2.0in 0.0in 1.6in 0.0in,clip=true, width=82mm]{fig5_a}
\includegraphics[trim=1.8in 2.5in 1.6in 1.2in,clip=true, width=82mm]{fig5_b}
\vspace{-1em}
\caption{Temporal density plots of the propagating FEAP for different values of $\chi$ and $\mu$. For the upper row $\chi=1$, with (a) $\mu=1$ and (b) $\mu=3$. In the lower row $\chi=-1$, with (c) $\mu=1$ and (d) $\mu=3$. In the upper panels we show the analytical solutions (black dashed lines) which enclose the numerical output (pink shaded area). The analytically obtained trajectory of the primary lobe is plotted (black dashed lines) in each density plot. Finally we plot the dynamics of a sech pulse for (e) $\chi=1$ and (f) $\chi=-1$ with $\mu=3$.}
\label{Figure5}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.65in 0.0in 1.0in 0.0in,clip=true, width=88mm]{fig6}
\vspace{-1em}
\caption{(a) The variation of the peak power of the FEAP with propagation distance for different strengths $\chi$ of the oscillating GVD parameter. The lines represent the analytical expression whereas the dots are the corresponding numerical data. (b) The variation of $P_{p}$ for different $\chi$ with fixed $\mu=1$. In the upper panel we plot the functions constituting the transcendental equation whose solution indicates where $P_p$ is identical to its initial value. The solutions are indicated by the dots for different $\chi$. (c) The relationship between the temporal position of the main lobe of the Airy pulse and the oscillating power. It can be seen that the power returns to its initial value at exactly the same $\xi$ at which the pulse returns to its initial temporal location.}
\label{Figure6}
\end{center}
\end{figure}
The numerical solution also reveals that, in the absence of loss, the peak power $P_{p}$ varies periodically when the GVD is oscillating. For a lossless truncated Airy pulse, we derive the expression for $P_{p}$ as,
\begin{equation}\label{q6}
P_{p}(\xi,\mu,\chi)=P_{p}^{(0)} e^{- \Gamma^2},
\end{equation}
where the decay factor $\Gamma$ is given as $\Gamma=\sqrt{\frac{a}{2}}\int\limits_{0}^{\xi}\delta_2(\xi')d\xi'$. Using the explicit form of $\delta_2$ one can quantify the decay factor as $\Gamma=\sqrt{\frac{a}{2}}[sgn(\bar{\beta}_{20})\xi +\frac{\chi}{\mu}\sin(\mu\xi)]$. Since the decay factor is periodic, we can expect an oscillatory evolution of the peak power of the main lobe of the propagating FEAP. The periodic nature of the decay factor $\Gamma$ leads to more interesting features. For example, at $\xi=n \pi/\mu$ $(n=1,2,3,4,...)$ the peak power is identical irrespective of the GVD profile. We illustrate this feature in Fig.~\ref{Figure6}(a). It is interesting to note that the decay factor $\Gamma$ can vanish at a specific propagation length ($\xi_c$) satisfying the transcendental equation $\sin(\mu \xi)/(\mu \xi)=-sgn(\bar{\beta}_{20})/\chi$. At $\xi_c$, $\Gamma$ is zero and the peak power returns to its input value. In Fig.~\ref{Figure6}(b) we plot the variation of $P_{p}$ with distance for different modulation strengths $\chi$. A special value of $\chi$ can be chosen in such a way that the initial power is revived at the output, as shown in Fig.~\ref{Figure6}(b). In the absence of amplitude modulation of the GVD (i.e.\ $\chi=0$), the peak power decays monotonically. For $\chi\neq 0$, the variation of $P_{p}$ becomes oscillatory. It is also illustrated how, for a particular value $\chi=\chi_c$, the peak power carried by the primary lobe of the FEAP revives to its original value at a fixed propagation length. This specific length can be unique or multi-valued depending on whether the transcendental equation has a single or multiple solutions. The oscillatory dispersion profile affects the peak power and the temporal location of the main lobe of the FEAP in a complementary manner. In Fig.~\ref{Figure6}(c), using a density plot, we demonstrate the variation of the temporal location ($\tau_p(\xi)$) of the main lobe with propagation distance $\xi$.
The temporal position should follow the path derived in Eq.~\eqref{q5}, which shows that the usual ballistic propagation of the FEAP is no longer valid under a modulated GVD profile: the FEAP oscillates about its initial position. The modulation strength is kept at $\chi=5$. In the same plot we demonstrate the variation of $P_{p}$, which oscillates over distance. It is interesting to note that the oscillating peak power and the temporal position both revive to their initial values at exactly the same point in space. This is an important piece of information in the context of applications.
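The revival length $\xi_c$ follows from the condition $\Gamma=0$; the sketch below solves the resulting transcendental equation by bisection for the illustrative case $sgn(\bar{\beta}_{20})=-1$, $\chi=5$, $\mu=1$, where the condition reduces to $\sin\xi/\xi=0.2$:

```python
import math

# Zero of the decay factor Gamma: sgn(beta20)*xi + (chi/mu)*sin(mu*xi) = 0,
# i.e. sin(mu*xi)/(mu*xi) = -sgn(beta20)/chi. Illustrative parameters below.
chi, mu, sgn = 5.0, 1.0, -1

def f(xi):
    return math.sin(mu * xi) / (mu * xi) + sgn / chi

# f > 0 near xi = 0 (sinc -> 1) and f < 0 at xi = pi/mu, so the first
# root is bracketed in (0, pi/mu); refine it by bisection.
lo, hi = 1e-6, math.pi / mu
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
xi_c = 0.5 * (lo + hi)
print(f"xi_c = {xi_c:.4f}")
```

For a multi-valued case (larger $\chi$ or smaller $\mu$) the same bracketing can be repeated over successive half-periods of the sine to collect all revival lengths.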
\subsection{Selective focusing under periodic TOD}
In the previous sections we ignored the effects of higher-order dispersion while studying the dynamics of the FEAP under a modulated GVD profile. However, if the pulses are launched near the zero-GVD wavelength, the effect of TOD becomes significant. The dynamics of the FEAP in the presence of moderate and strong TOD has been a topic of investigation lately \cite{driben}. It has been shown that the FEAP exhibits peculiar behaviour in the presence of TOD. For a positive TOD coefficient ($\delta_3>0$) the FEAP focuses to a Gaussian pulse after moving a specific distance and the temporal distribution flips \cite{driben}. The position and the extent of this focusing zone depend mainly on the numerical value of $\delta_3$. Some works have also demonstrated that this flipping phenomenon can be controlled by external parameters (like phase modulation) that are independent of the TOD coefficient \cite{Roy_b}. In our work we show that the geometry of the waveguide plays a pivotal role in the dynamics. The periodic variation of the waveguide geometry leads to a periodic variation of the TOD coefficient $\beta_3$. In real units $\beta_3$ can be expressed as $\beta_3(z)=\bar{\beta}_{30}+q\cos(\bar{\mu}z)$, where $\bar{\beta}_{30}$ and $\bar{\mu}$ represent the average value of the TOD parameter and the period of the oscillation, respectively, and the strength of the modulation is controlled by the factor $q$. Including the TOD term, the governing equation can be written as
\begin{equation} \label{q7}
i\frac{\partial u}{\partial \xi}=\frac{{{\delta}_{2}(\xi)}}{2}\frac{{{\partial }^{2}}u}{\partial {{\tau}^{2}}}+i\delta_{3}(\xi)\frac{{{\partial }^{3}}u}{\partial {{\tau}^{3}}}-i\widetilde{\alpha}u
\end{equation}
where $\delta_3(\xi)=\bar{\delta}_{30}+\chi_3 \cos (\mu\xi)$ is the distance-dependent TOD parameter in normalised units. The strength of the modulation in normalised units is $\chi_3=\frac{q}{6t_0|\bar{\beta}_{20}|}$, with $\bar{\delta}_{30}=\frac{\bar{\beta}_{30}}{6t_0|\bar{\beta}_{20}|}$, and the period is rescaled as $\mu=\bar{\mu}L_D$. The general solution of Eq.~\eqref{q7} is,
\begin{equation}\label{q8}
\begin{aligned}
u_a(\xi,\tau)=\frac{1}{c}\exp\left(a^3/3-\widetilde{\alpha}\xi\right)Ai\left(\frac{b}{c}-\frac{m^2}{c^4}\right)\\
\exp i\left(\frac{2m^3}{3c^6}-\frac{mb}{c^3} \right),
\end{aligned}
\end{equation}
where $c=(1-3\bar{\delta}_{30}\xi-3\frac{\chi_3}{\mu}\sin\mu\xi)^\frac{1}{3}$. From the solution it is evident that a singularity appears at $c=0$, which leads to the transcendental equation $\sin(\mu\xi)=\frac{\mu}{3 \chi_3}(1-3\bar{\delta}_{30} \xi)$. For the specific case $\bar{\delta}_{30}=0$ the singularity condition simplifies to $\sin(\mu\xi)=\frac{\mu}{3\chi_3}$. At a singular point the original FEAP reshapes into a Gaussian pulse and flips temporally. The singularity condition ($c=0$) gives rise to two sets of flipping positions,
\begin{equation}\label{q9}
\begin{aligned}
\xi_{fj}^{(n)} = (-1)^{j-1}\frac{1}{\mu}\sin^{-1}\left(\frac{\mu}{3\chi_3}\right)+\frac{\pi}{\mu} [2n+(j-1)] \ \ (j=1,2 )
\end{aligned}
\end{equation}
with $n=0,1,2,...$, where the FEAP loses its identity. From Eq.~\eqref{q9} it is clear that the FEAP faces multiple flippings while moving in a medium with periodic TOD. In Fig.~\ref{Figure7} we demonstrate the evolution of a FEAP under periodic TOD. It is evident that the pulse experiences multiple temporal flippings at the specific locations estimated theoretically by Eq.~\eqref{q9}. The input FEAP first faces a singularity at $\xi_{f1}^{(0)}$, for which $c=0$ in Eq.~(\ref{q8}). At this point the FEAP turns into a Gaussian pulse, after which it propagates as a FEAP with temporally flipped wings up to the next flipping point $\xi_{f2}^{(0)}$. This phenomenon repeats itself as the pulse moves forward. To gain more insight into this peculiar dynamics of the FEAP under periodic TOD, we find the analytical solution of the pulse in the different zones of propagation. The solution beyond the flipping point for static TOD has already been reported \cite{Roy}. We exploit this concept to find a general solution for periodically varying TOD. Careful investigation reveals that the flipping areas corresponding to different values of $n$ in Fig.~\ref{Figure7} are not of the same size; in fact, the flipping region grows as $n$ increases. We derive the general form of the Gaussian pulse at the flipping points as,
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=1.2in 0.2in 2.0in 0.1in,clip=true, width=86mm]{fig7}
\vspace{-1em}
\caption{Density plots of the FEAP in the presence of a periodically varying TOD parameter with different strengths: (a) $\chi_3=0.5$ and (b) $\chi_3=1$ with $\chi=-0.5$, corresponding to the type-2 oscillating waveguide. For the type-1 waveguide the density plots are shown for (c) $\chi_3=-0.5$ and (d) $\chi_3=-1$ with $\chi=0.5$. The phenomenon of multiple flipping can be seen in the figure; the flipping positions, obtained from Eq.~\eqref{q9}, are indicated by the dashed lines.}
\label{Figure7}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.2in 0.8in 0.5in 1.1in,clip=true, width=80mm]{fig8_a}
\includegraphics[trim=0.1in 0.0in 0.4in 0.0in,clip=true, width=80mm]{fig8_b}
\includegraphics[trim=0.0in 2.0in 0.3in 0.9in,clip=true, width=80mm]{fig8_cd}
\vspace{-1em}
\caption{(a) The propagation of the FEAP in the flipping zone is highlighted: the pulse converges to a Gaussian pulse at the flipping position and then propagates again with inverted temporal wings. (b) The variation of the width $\tau_f$ of the Gaussian pulses obtained at different flipping positions. The shapes of the Gaussian pulses are also given at the bottom of the plot. The width increases as we go to higher-order flipping positions. (c) The variation of $P_p$ with $\xi$ for $\chi_3=1$. The dips of $P_p$ indicate the flipping positions. It can be seen that the length of the flipping area increases for higher-order flipping positions. (d) The variation of the length of the flipping area $\Delta \xi_f$ with the width $\tau_f$ of the Gaussian pulse obtained at the flipping positions. It can be seen that a larger $\tau_{f}$ enhances $\Delta \xi_f$.}
\label{Figure8}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=1.2in 0.2in 2.0in 0.1in,clip=true, width=80mm]{fig9}
\vspace{-1em}
\caption{Selective focusing for $\chi_3=0.5$. The black circles mark the tight-focusing positions. Focusing takes place when the condition of Eq.~\eqref{q12} is satisfied; the position of tight focusing can be selected by suitably adjusting the parameter $\chi$.}
\label{Figure9}
\end{center}
\end{figure}
\begin{equation}\label{q10}
u_b(\xi_f,\tau)=U_0 \exp\left[-\frac{(\tau-a^2)^2}{\tau_f^2}\right]\exp(i\phi),
\end{equation}
where $U_0= \frac{1}{2\sqrt{\pi\gamma}}\exp(a^3/3)$ and $\phi=\frac{1}{2}\tan^{-1}\left(\frac{\Delta^{(n)}}{a}\right)-\frac{\Delta^{(n)}(\tau-a^2)^2}{4\gamma^2}$. The parameter $\Delta^{(n)}$ is defined as $\Delta^{(n)}=\frac{1}{6}\left[\frac{\chi}{\chi_3}-3\xi_{fj}^{(n)}\right]$ and $\gamma=\sqrt{a^2+\Delta^{(n)2}}$. The characteristic width of the Gaussian pulse is $\tau_f=2\gamma/\sqrt{a}$. Evidently, the width of the Gaussian pulse differs at different flipping positions, depending on the values of $\xi_{fj}^{(n)}$, which can be found from Eq.~(\ref{q9}). In Fig.~\ref{Figure8}(a) we plot the dynamics of the FEAP around the first flipping position $\xi_{f1}^{(0)}$, where it can be clearly observed that the pulse converges to a Gaussian pulse before its temporal wings flip. It is interesting to note that the length of the flipping zone depends on the width of the Gaussian pulse generated at the flipping point \cite{Roy}: the greater the width, the longer the flipping region. For the Gaussian pulse the full width at half maximum $\tau_{FWHM}$ is given by
\begin{equation}\label{q11}
\tau_{FWHM}=2\sqrt{2 \ln2}\left(a+\frac{\Delta^{(n)2}}{a}\right)^\frac{1}{2}
\end{equation}
The expression suggests that the width of the pulse depends on the flipping position and increases for higher orders of $n$. In Fig.~\ref{Figure8}(b) we plot the variation of the temporal width $\tau_f$ at the different flipping points where individual Gaussian pulses emerge. It is evident from the illustration that the width of the Gaussian pulse gradually increases at each flipping point $\xi_{fj}^{(n)}$. The evolution of $P_p$ for $\chi_3=1$ is plotted in Fig.~\ref{Figure8}(c), where we can see that the maximum peak power carried by the pulse dips at the flipping positions. The length of the valley shown in Fig.~\ref{Figure8}(c) measures the flipping length $\Delta \xi_{fj}^{(n)}$, which increases at each successive flipping point. The flipping length $\Delta \xi_{fj}^{(n)}$ is proportional to the width of the Gaussian pulse generated at $\xi_{fj}^{(n)}$. In Fig.~\ref{Figure8}(d) we numerically demonstrate the relationship between the Gaussian width $\tau_f$ and the flipping length $\Delta \xi_f^{(n)}$, which is almost linear.
\noindent Note that we can approximate the Gaussian width as $\tau_{FWHM}\approx 2\Delta^{(n)}\sqrt{2\ln2/a}$ for a small truncation parameter $a$. Hence, for $\Delta^{(n)}\to 0$ the width of the Gaussian pulse obtained at the flipping point nearly vanishes, which is the condition for absolute temporal focusing. In the neighbourhood of the absolute focusing point the peak power of the propagating FEAP reaches its maximum, which may be useful from an application point of view. The uniqueness of the oscillating GVD parameters lies in the fact that we can selectively focus the FEAP to a point by varying the amplitude ($\chi$) of the periodic GVD parameter. The parameter $\chi$ for which $\Delta^{(n)}=0$ is
\begin{equation}\label{q12}
\chi=3\chi_3\xi^{(n)}_{fj}
\end{equation}
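As a quick numerical check of these relations, the short Python sketch below evaluates $\Delta^{(n)}$, the FWHM of Eq.~\eqref{q11}, and the focusing amplitude of Eq.~\eqref{q12}. The function names and the example values ($a=0.1$, $\chi_3=0.5$, $\xi_f=2.0$) are ours, chosen only for illustration:

```python
import math

def delta_n(chi, chi3, xi_f):
    # Delta^(n) = (1/6) [chi/chi3 - 3 xi_f]  (definition below Eq. (q10))
    return (chi / chi3 - 3.0 * xi_f) / 6.0

def tau_fwhm(a, delta):
    # Eq. (q11): FWHM of the Gaussian pulse emerging at a flipping point
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(a + delta ** 2 / a)

def chi_focus(chi3, xi_f):
    # Eq. (q12): amplitude for which Delta^(n) = 0 (absolute focusing)
    return 3.0 * chi3 * xi_f

# illustrative values (not from the text)
a, chi3, xi_f = 0.1, 0.5, 2.0
chi = chi_focus(chi3, xi_f)
print(chi, delta_n(chi, chi3, xi_f), tau_fwhm(a, delta_n(chi, chi3, xi_f)))
```

At the focusing amplitude the width collapses to its minimum $2\sqrt{2\ln 2\, a}$, consistent with the width being controlled by $\Delta^{(n)}$ as discussed above.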
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=2.0in 0.2in 1.7in 0.0in,clip=true, width=82mm]{fig10}
\vspace{-1em}
\caption{Density plots of the different zones for $\chi_3=0.5$. The different zones correspond to different integer values of $n$ in Eq.~\eqref{q9}. The analytical solutions (black dashed lines) obtained from Eq.~\eqref{q8} (for (b) and (d)) and Eq.~\eqref{q13} (for (a) and (c)) are compared with the numerical results (pink shaded area) in the upper panels of each figure. }
\label{Figure10}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0.0in 1.1in 0.1in 0.7in,clip=true, width=83mm]{fig11_a}
\includegraphics[trim=0.1in 1.0in 0.1in 0.5in,clip=true, width=83mm]{fig11_b}
\includegraphics[trim=0.0in 1.0in 0.2in 0.7in,clip=true, width=83mm]{fig11_c}
\vspace{-1em}
\caption{The dynamics of the Airy pulse under different conditions: (a) $\bar{\delta}_{30}=-0.1$, $\chi_3=0.05$; (b) $\bar{\delta}_{30}=0.1$, $\chi_3=0.05$; (c) $\bar{\delta}_{30}=0.1$, $\chi_3=0.25$. The variation of $\delta_3$ is provided in the insets. The flipping of the pulse depends on the geometrical variation of $\delta_3$ and on its initial value. In each case flipping occurs when the transcendental equation (Eq.~\eqref{eq15}) has a real solution, as indicated in the figure. }
\label{Figure11}
\end{center}
\end{figure}
It should be noted that the value of $\xi_{fj}^{(n)}$ depends on the integer $n$ (see Eq.~\eqref{q9}), which determines the position of the flipping zone through $\chi_3$. Selective focusing of a particular zone (say the $n^{th}$ zone) is achieved for the particular $\chi$ determined by Eq.~(\ref{q12}). We illustrate this complex phenomenon graphically in Fig.~\ref{Figure9} through density plots. We can notice that a flipping zone can be selectively merged to a point (marked by a black circle). Here we take the first flipping position ($n=0$) in Eq.~\eqref{q9} for $j=1$ and use this value to find the parameter $\chi_f$ for which tight focusing happens at $\xi_{f1}^{(0)}$ (Fig.~\ref{Figure9}(a)). Similarly, for the second position (Fig.~\ref{Figure9}(b)) we consider $\xi_{f2}^{(0)}$ in Eq.~\eqref{q9} with $n=0$ and $j=2$ and calculate $\chi_f$ from Eq.~\eqref{q12}. Using this technique we can selectively focus the FEAP according to our requirements. We can see that the presence of periodic TOD complicates the dynamics of the FEAP and divides its propagation into many distinct zones. Under static TOD the propagating FEAP experiences a single singularity, and the entire propagation length can be divided into three distinct zones: (i) zone-I, before flipping; (ii) zone-II, flipping; and (iii) zone-III, after flipping. In zone-I the Airy pulse moves along its usual ballistic trajectory. In zone-II the pulse experiences a singularity and tries to confine itself in a finite region. In zone-III the Airy pulse is temporally flipped. When the TOD varies periodically with distance, the FEAP flips periodically at each focusing point. For a detailed investigation we require the analytical solution of the FEAP in each zone. In Eq.~\eqref{q8} we obtained the solution of the propagating FEAP under periodic TOD. This solution works well in zone-I and remains valid in the zones after the flippings at the second set of flipping points $\xi_{f2}^{(n)}$ in Eq.~\eqref{q9} (i.e., after the $2^{nd}$, $4^{th}$, etc. flippings).
We also derive the solution at the flipping positions (zone-II), where the FEAP completely loses its characteristics and is converted into a pure Gaussian pulse, as given in Eq.~\eqref{q10}. The solution of the pulse in the zones beyond the first set of flipping points ($\xi_{f1}^{(n)}$ in Eq.~\eqref{q9}) can be obtained by defining the variable transformation $\xi'=\xi-\xi_{f1}^{(n)}$. In terms of this new variable the solution can be expressed as
\begin{equation}\label{q13}
\begin{aligned}
u_c(\xi',\tau)=\frac{1}{c'}\exp\left({\frac{a^3}{3}-\widetilde{\alpha}\xi}\right)Ai\left(\frac{b'}{c'}-\frac{n'^2}{c'^4}\right)\\
\exp i\left(\frac{2n'^3}{3c'^6}-\frac{n'b'}{c'^3} \right),
\end{aligned}
\end{equation}
where $c'=[3\chi_3(\sin\xi-\sin\xi_{f1}^{(n)})]^\frac{1}{3}$, $b'=-\tau$ and $n'=ia-\Delta^{(n)}-\frac{\xi'}{2}+\frac{\chi}{2}[\sin(\xi)-\sin(\xi_{f1}^{(n)})]$. In Fig.~\ref{Figure10} we illustrate, zone by zone, the dynamics of the FEAP shown in Fig.~\ref{Figure7}(a). The aim here is to check the validity of the analytical solutions obtained in Eq.~\eqref{q8} and Eq.~\eqref{q13}. In Fig.~\ref{Figure10}(a) we demonstrate the dynamics of the FEAP experiencing its first flipping at $\xi=\xi_{f1}^{(n=0)}$, whereas in Fig.~\ref{Figure10}(b) the pulse moves forward and encounters the next singularity at $\xi=\xi_{f2}^{(n=0)}$. In both cases the derived analytical solution (dashed lines) agrees well with the numerical output (pink shaded area). We extended our investigation to $\xi>\xi_{f1}^{(n=1)}$ (Fig.~\ref{Figure10}(c)) and $\xi>\xi_{f2}^{(n=1)}$ (Fig.~\ref{Figure10}(d)) and again find good agreement between the numerical and theoretical results.
Finally, we conclude our work by investigating the Airy pulse dynamics for a general $\delta_3 (\xi)$ variation with $\bar{\delta}_{30} \neq 0$. In such a case we do not expect any periodic focusing. The general transcendental equation that governs the focusing is
\begin{equation}\label{eq15}
\sin(\mu\xi)=\frac{\mu}{3 \chi_3}(1-3\bar{\delta}_{30} \xi)
\end{equation}
Note that when $\delta_3(\xi)<0$ throughout, we must have $\bar{\delta}_{30} <0$ and $|\bar{\delta}_{30}|/\chi_3 >1$. It is easy to show that if $|\bar{\delta}_{30}|/\chi_3 >1$, Eq.~\eqref{eq15} has no solution. In other words, when the TOD coefficient is negative throughout ($\delta_3(\xi)<0$) there is no temporal focusing of the FEAP. To illustrate this feature, in Fig.~\ref{Figure11}(a) we plot the dynamics of a FEAP under a modulated TOD coefficient that is negative throughout. The situation is different when $\delta_3(\xi) >0$, where Eq.~\eqref{eq15} has exactly one solution, defining the flipping of the FEAP. In Fig.~\ref{Figure11}(b) we demonstrate the flipping of the Airy pulse at the precise location $\xi_c$ that satisfies Eq.~\eqref{eq15}. However, more than one flipping is possible when the sign of $\delta_3$ changes from positive to negative: in such a case Eq.~\eqref{eq15} admits multiple solutions, each defining a flipping, as illustrated in Fig.~\ref{Figure11}(c).
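The root structure of Eq.~\eqref{eq15} can be checked numerically. The Python sketch below is our own scan-and-bisect helper; $\mu=1$, the scan range, and the tolerances are illustrative choices, not values from the text:

```python
import math

def flip_points(chi3, d30, mu=1.0, xi_max=40.0, n_scan=4000):
    """Real roots of sin(mu*xi) = mu/(3*chi3) * (1 - 3*d30*xi) on (0, xi_max],
    found by a sign-change scan followed by bisection."""
    f = lambda xi: math.sin(mu * xi) - mu / (3.0 * chi3) * (1.0 - 3.0 * d30 * xi)
    roots = []
    xs = [xi_max * i / n_scan for i in range(1, n_scan + 1)]
    for lo, hi in zip(xs, xs[1:]):
        if f(lo) == 0.0:
            roots.append(lo)
        elif f(lo) * f(hi) < 0.0:
            for _ in range(60):                 # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

print(flip_points(0.05, -0.1))  # throughout-negative TOD: no flipping
print(flip_points(0.05, 0.1))   # positive d30: a single flipping point
```

With $\bar{\delta}_{30}=-0.1$, $\chi_3=0.05$ (so $|\bar{\delta}_{30}|/\chi_3=2>1$) no root is found, while for $\bar{\delta}_{30}=0.1$ exactly one flipping point appears, mirroring Figs.~\ref{Figure11}(a) and (b).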
\section {Conclusion}
In this report we investigated the dynamics of a finite energy Airy pulse (FEAP) in media with varying dispersion. We propose realistic waveguide structures that offer linear and oscillatory GVD profiles as a function of propagation distance. A detailed analysis reveals that a linear variation of the GVD affects the usual ballistic trajectory of an Airy pulse. By suitably adjusting the modulation strength parameter one can even achieve an unusual quasi-linear trajectory for the FEAP. It is also found that the power carried by the primary lobe can be manipulated through the varying dispersion parameters. We theoretically estimate a critical value of the modulation strength of the varying GVD parameter for which the power attenuation of the main lobe is minimal. Our theoretical results agree well with numerical simulations. The dynamics of the FEAP is found to be very interesting under oscillatory second- and third-order dispersion (TOD). The presence of oscillatory TOD gives rise to multiple singularity zones where the wings of the Airy pulse flip temporally. The dynamics of an Airy pulse near the singular points is very rich and demands special investigation. We meticulously solve the propagation equation and identify the flipping positions in the ($\xi-\tau$) plane. The theoretical calculation reveals the physical condition for selective absolute focusing, at which the peak power of the propagating Airy pulse reaches its maximum. All the analytical findings are supported by adequate numerical simulations throughout the report. The manipulation of the Airy trajectory and power level using the concept of varying dispersion might be useful for practical purposes.
\section*{Acknowledgements}
A.B. acknowledges the Ministry of Human Resource Development (MHRD), India, for a research fellowship.
\section{Introduction}
The closest starburst region, 30~Doradus in the Large Magellanic Cloud
(LMC), allows one to study the star formation (SF) process on a
variety of scales, from the dense cluster R~136, possibly a newborn
globular cluster, via the massive star-forming region NGC~2070, the
``old'' (20 to 25 Myr, \citealt{grebel00}) populous cluster Hodge~301,
to the many diffuse star-forming regions like NGC~2060. Deciphering
the history of 30~Doradus is therefore a unique opportunity to
understand how star formation originates and propagates. In previous
work \citep{cigno15} we studied NGC~2070, a giant region already
active 7 Myr ago and possibly up to 20 Myr ago. Here we analyse Hodge
301 ($\alpha_{2000} = 05^h38^m16^s, \delta_{2000} = -69\degr03'58''$),
one of the oldest structures in 30~Doradus, approximately located at 3
arc-minutes ($\sim 44$ pc, assuming a distance of 50 kpc) to the
northwest of R~136 (\citealt{ho88}).
Hodge~301 (hereinafter H301) was previously studied by
\cite{mendoza73}, \cite{mcgregor81}, \cite{melnick85},
\cite{lortet91}, \cite{walborn97} (hereinafter WB97) and
\cite{grebel00} (hereinafter GC00), among others. As a part of
an extensive optical spectral classification effort of the stellar
populations within the 30~Doradus Nebula, WB97 classified H301 as a
cluster in the \emph{h} and $\chi$ Persei phase (namely containing A-
and M-type supergiants), with an age $\sim 10$ Myr, the oldest in the
30~Doradus complex. GC00 used a synergy between spectroscopy,
Ultraviolet Imaging Telescope (UIT) photometry and deep WFPC2/HST
photometry reaching V$\approx 24$ to study age, initial mass function
(IMF), and reddening of H301. By comparing the loci of main-sequence
turn-off (MSTO) and post-MS stars with Padova and Geneva stellar
models, they derived an age of 20-25 Myr. Concerning the IMF, for the
mass range 1.26$-$10$\,$M$_{\odot}$ they derived a slope close to
Salpeter. Finally, using the UIT photometry they found a mean
reddening E(B$-$V)=$0.28\pm 0.05$.
More recently, using spectroscopy and a qualitative comparison with
the evolutionary models of \cite{brott11}, \cite{evans15} found an age
of 15$\pm$5 Myr.
In this paper we rederive H301's age using the photometric
capabilities of the survey HTTP\footnote{The HTTP Photometric Catalog
can be downloaded at
https://archive.stsci.edu/prepds/30dor/Preview/observations.html. }
(\citealt{sabbi12, sabbi15}). Figure \ref{chart} shows a F555W-band
inverted grey-scale image of the cluster. The depth of our CMDs
(V$\approx 26$) allows us for the first time in this cluster to reach
the magnitude of the PMS turn-on (TOn; V$\approx 24-25$), the point of
the color-magnitude diagram (CMD) where the PMS stars join the
MS. This stellar feature is used to measure the cluster age in a way
that is independent of the previous analysis, mostly based on the MSTO
and post-MS stars. For this task we used the synthetic CMD approach
which allows us to fit several crucial features of the CMD (MSTO, PMS
TOn and field contamination) simultaneously. The advantage with
respect to the classical isochrone fitting is the full consideration
of evolutionary times, magnitude and color spreads from photometric
errors, incompleteness, and unresolved binaries.
\begin{figure*}[t]
\centering \includegraphics[width=13cm]{fig1.pdf}
\caption{Inverted grey-scale image (F555W) of Hodge~301. Blue
(F555W-F775W$<1$) and red (F555W-F775W$>1$) supergiant stars
(F555W$<15$) are indicated with blue and red filled starred symbols,
respectively. The blue arrows point to the two BSGs discussed in the
present analysis (see text). The open green circle indicates a
spectroscopic binary (\citealt{walborn97}) that may have a compact
companion. }
\label{chart}
\end{figure*}
The structure of the present paper is as follows. In Section 2 we
present the data and we discuss the relevant stellar phases of H301's
CMD. In Section 3 we construct a library of synthetic CMDs based on
stellar isochrones and we use them to locate the MSTO and TOn in the
data. In Section 4 we recover the most likely history for
H301. Conclusions close the paper.
\section{Data}
The survey HTTP has gathered unprecedented photometric data with the
Hubble Space Telescope (HST) over the entire Tarantula Nebula in the
near UV (WFC3/UVIS F275W and F336W), optical (ACS/WFC F555W and
F658N), and near IR (WFC3/IR F110W and F160W). Here we focus on the
optical CMD of H301, the only one deep enough to reach 30 Myr old PMS
stars.
Figure \ref{map} shows the density map of the 10 pc region around
H301. The cluster centroid was chosen as the point minimizing the
distance to all stars.
\begin{figure}[t]
\centering \includegraphics[width=9cm]{fig2.pdf}
\caption{Spatial distribution of H301. 1 pc corresponds to $\approx 4$
arcseconds. The central hole is an artifact of the incompleteness,
due to the severe central crowding. The dashed circle indicates a 4
pc radius.}
\label{map}
\end{figure}
In order to determine the cluster members, we found the radius at
which the stellar density drops to a value indistinguishable from the
field. Stars within this radius are assumed to belong to the cluster,
and those outside are treated as belonging to the field population.
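This density-versus-field criterion can be sketched numerically. In the toy Python example below, the star positions, bin width, and the Poisson one-sigma comparison are all our own invented stand-ins for the actual HTTP catalog and procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical star positions (pc, relative to the cluster centroid):
# a centrally concentrated cluster on top of a uniform field
cluster = rng.normal(0.0, 1.5, size=(300, 2))
field_r = 12.0 * np.sqrt(rng.random(2000))
field_t = 2.0 * np.pi * rng.random(2000)
field = np.column_stack([field_r * np.cos(field_t), field_r * np.sin(field_t)])
stars = np.vstack([cluster, field])

# binned radial surface density (stars per pc^2)
r = np.hypot(stars[:, 0], stars[:, 1])
edges = np.arange(0.0, 12.0, 1.0)
counts, _ = np.histogram(r, bins=edges)
area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
density = counts / area

# field level estimated from the outer annuli
field_level = density[-3:].mean()

# membership radius: first annulus whose density is consistent with the
# field within its Poisson one-sigma uncertainty (our simplification)
sigma = np.sqrt(counts) / area
member_radius = edges[1:][density - sigma <= field_level][0]
print(member_radius)
```

The same comparison applied per quadrant would yield profiles analogous to those of Fig.~\ref{profiles}.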
Figure \ref{profiles} shows the radial profile (number of stars per
pc$^2$) of H301 as calculated in the quadrants of Fig. \ref{map} (the
top-left, top-right, bottom-right, and bottom-left profiles are
indicated with green, blue, red, and black slopes,
respectively). Despite the slight asymmetry (the top-left profile
shows an excess of stars between 1 and 2 pc from the cluster center),
we found that a 4 pc radius encloses about 85\% of the stars of H301,
minimizing at the same time field contamination. Figure \ref{cmd}
shows the corresponding CMD. Photometric errors, as derived from
artificial stars tests (see further down in this Section), are also
shown on the right side. Important features of this CMD are:
\begin{figure}[t]
\centering \includegraphics[width=9cm]{fig3.pdf}
\caption{Radial profiles of H301 in four quadrants (see
Fig. \ref{map}). The shadowed areas show the Poissonian
uncertainties. The differences in one quadrant (green) may indicate
that the cluster is not spherical.}
\label{profiles}
\end{figure}
\begin{figure}[t]
\centering \includegraphics[width=8cm]{fig4.pdf}
\caption{F555W vs F555W-F775W CMD of H301 within 4 pc from the cluster
center. Relevant stellar species and contamination are
highlighted. Red circles indicate stars brighter than F555W$\sim 19$
characterized by H$\alpha$ excess.}
\label{cmd}
\end{figure}
\begin{itemize}
\label{bes}
\item An extended MS ranging from F555W$\approx$16 to
F555W$\approx$26. For stars in the magnitude range $16<$F555W$<18$,
the MS is broader than expected on the basis of photometric errors
(see figure). As already discovered by GC00 (see their Fig. 6), the
MS is likely broadened toward the red by the presence of Be
stars. Concerning these objects, GC00 also showed that they are most
common among the early B-type stars and show the largest Balmer and
IR excess for the early types (see also \citealt{grebel1997}). In
our photometry, 17 stars above F555W$\sim 19$ show an H$\alpha$
excess\footnote{Measured with respect to the median color
F555W-F658W in the F555W-F658W vs F555W-F775W diagram (see
\citealt{demarchi10} for details on this approach).} (filled red
circles in the figure), as expected in Be-stars. Despite this
spread, the clear drop of star counts brighter than F555W$\sim$16
suggests that the MSTO (green dashed line) is not brighter than this
magnitude;
\item A group of stars up to three magnitudes brighter than the MSTO;
the two brightest and the three reddest are presumably He-burning
stars of intermediate mass on the blue (blue super giant, BSG) and
red (red super giant, RSG) side of the blue loop (BL),
respectively. Their membership is certain, as indicated by the plots
in Fig. \ref{field}, where the CMDs of stars in three different
annuli (panels from left to right show stars between 0 and 4 pc,
6.93 and 8 pc, 8 and 8.95 pc from the cluster center) of equal area
around the center of H301 are compared. The two outer annuli
represent pure field samples and only a few upper-MS stars populate
their CMDs above F555W$\sim 20$. In addition, no upper-MS star above
F555W$\sim 17$ is detected. On the other hand, below F555W$\sim 22$
field contamination increases significantly.
\item A group of stars at the right of the MS (indicated with ``FC'',
field contamination, in Figure \ref{cmd}). As suggested by the CMDs
of the two outer fields in Figure \ref{field}, the entirety of these
stars is compatible with being red giant (RGB) and red clump (RC)
stars from the field of the LMC. The elongated shape of the field RC
(visible in the CMD as the over-density around F555W$\approx 20$ and
color around 1.2$-$1.4) suggests the presence of some differential
reddening.
\end{itemize}
\begin{figure}[t]
\centering \includegraphics[width=9cm]{fig5.pdf}
\caption{From left to right, CMD of stars between 0 and 4 pc, 6.93 and
8 pc, 8 and 8.95 pc from the center of H301.}
\label{field}
\end{figure}
The presence of two blue stars above the MSTO around V$\sim$14-15
magnitudes reflects the well-known problem that, while theoretical CMDs
predict a post-main-sequence gap (PMSG) between the end of the main
sequence and the presumed core-He-burning A-supergiants (the two stars
near V$\sim$13 magnitudes), this gap is not observed. In clusters of
this approximate age the PMSG is populated by bright B-type giants and
supergiants (of similar color to the main-sequence dwarfs and
sub-giants); in the case of Hodge 301 these two stars are confirmed
cluster members with spectral types B3\,Ib and B2\,II-III(n)e (see
\citealt{evans15}). While the fact that the precursor of SN1987A was a
B3\,I star (\citealt{walborn89}) indicates that core-He-burning stars
can inhabit this part of the CMD/Hertzsprung--Russell diagram (HRD),
there is as yet no unambiguous method for distinguishing between core-H-
and core-He-burning blue stars (see \citealt{hunter08}, \citealt{vink10},
\citealt{grebel1996}, and \citealt{mcevoy15} for discussions of how
rotational velocity distributions, mass-loss considerations and binary
frequency might define the width of the main sequence for B-stars in
the LMC). For now we will assume that the MSTO is as shown in
Fig.~\ref{cmd}, but we will return to this point in the discussion.
In the next Sections we study the CMD of H301 using the synthetic CMD
technique (see, e.g., \citealt{cigno15}). A mandatory ingredient of
this approach is to accurately test photometric errors and
incompleteness of the data. Here these tests are conducted following a
two step procedure: 1) ``fake'' sources are injected (one at a time)
following a uniform distribution onto the actual images. The source
detection routine used for our science images is applied to the fields
containing the combined actual images and the fake sources. Counting
how many fake stars are lost as a function of magnitude and position
provides the map of the local incompleteness. Note that if the latter
is averaged over the entire cluster, the result does not represent the
``true'' average incompleteness, because the distribution of real
stars is not uniform; 2) the local incompleteness is used to restore
the real profile of the cluster (before the incompleteness). Fake
stars are now injected (one at a time) onto the actual images
following this profile and the source detection routine is applied
again. Although the resulting incompleteness is locally identical to
the incompleteness of step 1, its average over the entire cluster is
an unbiased estimate of the ``true'' average incompleteness. Figure
\ref{perc} shows the average completeness level for the filters F555W
and F775W.
\begin{figure}[t]
\centering \includegraphics[width=9cm]{fig6.pdf}
\caption{Average photometric completeness in F555W (red symbols) and
F775W (blue symbols).}
\label{perc}
\end{figure}
\section{Synthetic CMDs}
\label{synth}
Synthetic CMDs are generated using the latest PARSEC
(\citealt{bressan12,tang14}) isochrones, and assuming a Kroupa IMF and
metallicity Z=0.008 (from the mean [Fe/H] for LMC Cepheids of
\citealt{luck98}, referred to the updated \citealt{caffau11}
mixture). 30\% of synthetic stars are considered members of binary
systems and their flux is combined with a secondary star sampled from
the same IMF. To mimic the observational process, each synthetic CMD
is then convolved with photometric errors (derived from the cumulative
distribution of mag$_{\mathrm{out}}$-mag$_{\mathrm{input}}$ of fake
stars) and incompleteness as derived in the previous Section.
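The generation pipeline (IMF sampling, 30\% unresolved binaries with combined fluxes, photometric scatter) can be sketched as follows. The mass-luminosity relation below is a crude invented stand-in for a PARSEC isochrone lookup, and the error law is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_kroupa(n, rng, m_lo=0.1, m_b=0.5, m_hi=20.0, a1=1.3, a2=2.3):
    # two-segment power-law (Kroupa-like) IMF, inverse-transform sampled
    # per segment; the mass limits here are our own choice
    def seg_weight(a, b, alpha, k):
        return k * (b ** (1 - alpha) - a ** (1 - alpha)) / (1 - alpha)
    k2 = m_b ** (a2 - a1)                       # continuity at the break mass
    w1 = seg_weight(m_lo, m_b, a1, 1.0)
    w2 = seg_weight(m_b, m_hi, a2, k2)
    def inv(a, b, alpha, u):
        lo, hi = a ** (1 - alpha), b ** (1 - alpha)
        return (lo + u * (hi - lo)) ** (1.0 / (1 - alpha))
    u = rng.random(n)
    upper = rng.random(n) < w2 / (w1 + w2)
    return np.where(upper, inv(m_b, m_hi, a2, u), inv(m_lo, m_b, a1, u))

def toy_mag(m):
    # hypothetical mass-luminosity stand-in for an isochrone lookup
    # (L ~ M^3.5), shifted by the 18.5 distance modulus
    return 4.8 - 2.5 * 3.5 * np.log10(m) + 18.5

n = 10000
m1 = sample_kroupa(n, rng)
flux = 10 ** (-0.4 * toy_mag(m1))
binary = rng.random(n) < 0.30                   # 30% unresolved binaries
flux = flux + binary * 10 ** (-0.4 * toy_mag(sample_kroupa(n, rng)))
mag = -2.5 * np.log10(flux)                     # combined-flux magnitude
# convolve with a toy magnitude-dependent photometric error
mag_obs = mag + rng.normal(0.0, 0.05 + 0.02 * np.clip(mag - 20.0, 0.0, None))
```

An incompleteness draw (keep each star with probability given by the curves of Fig.~\ref{perc}) would complete the mimicking of the observational process.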
We have compared our synthetic CMDs with the massive stars and
low-mass ones of H301.
\emph{Massive stars:} Given the likely range of ages of H301, 10-30
Myr, MSTO and BL phases are populated by stars more massive than
$7\,$M$_{\odot}$. Figure \ref{4cmd} shows four synthetic populations
of the labeled ages (15, 20, 25 and 30 Myr) and a duration of star
formation of 1 Myr overlaid on the observations. In order to increase
the visibility of the models in the fastest evolutionary phases, the
synthetic CMDs are populated with a number of stars much larger than
in the observed counterpart.
The distance modulus is assumed equal to 18.5 (see
e.g. \citealt{panagia91,pietr13}) and the total reddening,
E(B$-$V)$\approx 0.22$, is chosen by fitting the average color of the
UMS in the magnitude range $18-20$. Note that through this paper the
total reddening is defined as the composition of the Milky Way
foreground reddening, kept fixed at E(B$-$V)$\approx 0.07$ with
R$_{\mathrm{V}}=3.1$, and the local reddening. For the latter we used
the A$_{\lambda}$ values from \cite{demarchi16}, which are
specifically derived for the 30~Doradus environment.
\begin{figure}[t]
\centering \includegraphics[width=9.5cm]{fig7.pdf}
\caption{Synthetic CMDs (colored contours) for populations of the
labeled ages, taking unresolved binaries and photometric errors into
account, overlaid on the F555W vs F555W$-$F775W CMD (dots) of
H301. }
\label{4cmd}
\end{figure}
An inspection of Figure \ref{4cmd} reveals that: 1) only ages in the
range 15-25 Myr allow one to match color and magnitude of the BL. The
synthetic BL of the 15 Myr isochrone fits the magnitude of the two
BSGs (although it is too blue), but it is clearly too bright on the
red side. On the other hand, the 20 Myr isochrone fits well the
magnitude of the three RSGs, but it fails to reproduce the ratio
RSGs/BSGs (the BL color extension is too short and its morphology
resembles a red clump); 2) Concerning the MSTO, even considering
photometric errors, none of the models is able to reproduce the
observed color spread in the magnitude range $16-18$. This is not
surprising given the presence of Be stars, which are thought to be
fast rotators surrounded by an out-flowing equatorial disk, likely
causing their infrared excess (see e.g. \citealt{pr03}); 3) only
isochrones between 25 Myr and 30 Myr allow one to reproduce the MSTO
luminosity (V$\sim\,16$; corresponding to a mass
$\approx \, 9\,$M$_{\odot}$ with PARSEC models), while the younger
ones are at least 1 mag brighter than the MSTO tip.
\emph{Low mass stars:} The PMS TOn is the point where the PMS joins
the MS. In H301 this phase is populated by low-mass stars below
$1.5\,$M$_{\odot}$. According to stellar evolution theory, at the
magnitude of the TOn, the luminosity function (LF) of a simple
population is characterized by a strong peak followed by a dip (see,
e.g., \citealt{cignoni10}). This behavior is clearly visible in the
top panel of Fig. \ref{LF_annuli} where synthetic populations of
different ages are shown (note that only photometric errors are
applied, while the incompleteness is not). The older the age of the
cluster, the fainter the LF TOn peak and the corresponding dip. Peak
and dip respectively reflect the steep dependence of stellar mass on
magnitude near the TOn and the following flattening below the TOn,
caused by the short evolutionary timescale of the PMS phase compared
to the MS. After the dip, the shape of the LF mimics the IMF, rising
with decreasing stellar mass. For comparison, the dashed line shows a
synthetic zero age MS where the PMS phase has been artificially
suppressed: as expected, without PMS there is no TOn peak/dip and LF
increases monotonically.
From a theoretical point of view, the dependence of TOn luminosity on
the PMS evolutionary times is a serious issue. However, while in the
first few Myr, PMS times are affected by several uncertainties like,
e.g., the initial conditions (in particular the initial radius), the
efficiency of convection, the initial abundance of deuterium and the
accretion rate (see, e.g., \citealt{baraffe02}, \citealt{tognelli11}),
at later times PMS tracks tend to converge. Indeed, the zero age MS
position for 1 M$_{\odot}$ does not show significant differences among
different authors (see Fig. 14 in \citealt{tognelli11}), with a
dispersion in $\log (\mathrm{L}/\mathrm{L}_{\odot})$ of $\approx 0.1$
dex. In terms of age, such an uncertainty corresponds to an age error
smaller than 3 Myr at 30 Myr.
Concerning the observational errors, two things limit the TOn
visibility in H301: incompleteness and field contamination. For ages
older than 15 Myr, at the distance of the LMC, a TOn is fainter than
23 in V magnitude, $\sim 24.5$ at 30 Myr. At these levels of
faintness, incompleteness can be severe, especially in the center of
H301. At odds with the massive stars, where the degree of
contamination is negligible, at faint magnitudes it can bias the age,
mimicking an older population.
The bottom panel of Figure \ref{LF_annuli} shows the observed LFs,
corrected for the incompleteness, of stars in five concentric annuli
of equal area around the center of H301 (in red, blue, magenta, green
and black stars between 0 and 2.91 pc, 2.91 and 4.11 pc, 4.11 and 5.04
pc, 5.04 and 5.82 pc, 5.82 and 6.51 pc, respectively). The hatched
area fainter than V=25.25 indicates where the incompleteness in the
innermost annulus drops below 50\%.
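Concretely, an incompleteness-corrected LF weights each detected star by the inverse of the completeness at its magnitude. A minimal sketch, where the logistic completeness curve (50\% at V$=25.25$) and the flat input LF are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def completeness(mag):
    # hypothetical completeness curve dropping through 50% at mag 25.25
    return 1.0 / (1.0 + np.exp(2.0 * (mag - 25.25)))

# toy "true" population with a flat LF, thinned by the completeness
true_mags = rng.uniform(20.0, 27.0, 30000)
observed = true_mags[rng.random(true_mags.size) < completeness(true_mags)]

edges = np.arange(20.0, 27.25, 0.25)
raw, _ = np.histogram(observed, bins=edges)
# 1/c weighting restores the intrinsic counts (where c is not too small)
corrected, _ = np.histogram(observed, bins=edges,
                            weights=1.0 / completeness(observed))
centers = 0.5 * (edges[:-1] + edges[1:])
```

Where the completeness is high the corrected histogram recovers the flat input LF, while below the 50\% level (hatched region in Fig.~\ref{LF_annuli}) the correction becomes noise-dominated, which is why that region is excluded from the analysis.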
The TOn feature is visible as the narrow peak near V $\sim 24.25$ in
the LF of the innermost region (red histogram). At fainter magnitudes
the LF increases following the IMF, then it drops again at V$> 25.25$.
This is because below this limit the LF becomes increasingly affected
by photometric blends, whose net effect is to brighten the sample and
deplete the faint end. This happens for all annuli, but increases at
progressively fainter magnitudes as the distance from the cluster
center decreases. Indeed, the blue and magenta LFs (second and third
innermost annuli, respectively) show a general increase up to V=26,
reflecting the much more favorable incompleteness (below 50\% only at
V$>26$). However, the higher field contamination reduces the TOn
visibility, which is only noticeable as a broad peak in the magnitude
range $24-24.75$.
Finally, the green and black LFs (the outermost annuli) show a smooth
increase, with no bumps in the range 24$-$25, as expected for the
average field of the LMC.
\begin{figure*}[t]
\centering \includegraphics[width=14cm]{fig8.pdf}
\caption{Top panel: synthetic LFs for the labeled ages. The thick
dashed line shows a synthetic zero age MS where the PMS phase has
been artificially suppressed. Bottom panel: Observed LFs of stars in
equal area annuli around H301's center (red for stars within 2.91
pc, blue between 2.91 and 4.11 pc, magenta between 4.11 and 5.04 pc,
green between 5.04 and 5.82 pc, black between 5.82 and 6.51 pc). The
grey histogram is the sum of the three innermost LFs. The magnitude
of the TOn is also indicated.}
\label{LF_annuli}
\end{figure*}
From a comparison by eye with the synthetic LFs we estimate the TOn
age to be between 26 and 30 Myr.
In the next Section we derive the most likely SFH compatible with the data.
\section{Quantitative derivation of H301's SFH}
To recover the most likely SFH we used the hybrid-genetic code SFERA
(Star Formation Evolution Recovery Algorithm), the statistical
approach described by \cite{cigno15}. Metallicity, binary fraction and
distance modulus are initially kept fixed at Z$=0.008$, 30\% and 18.5,
respectively, while age, reddening and field contamination are free
parameters. The SFH is parameterized in 40 contiguous steps of
duration 1 Myr between now and 40 Myr ago. The best combination of
synthetic CMDs and field contamination (a template field is taken at
radii larger than 6 pc from the center) is searched by SFERA. In
SFERA, observational and model CMDs are binned in color and magnitude,
and the binning scheme is changed randomly. The two 2D distributions
are then compared with a Poissonian $\chi^2$.
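The binned comparison can be sketched as follows. This is an illustrative reimplementation only: SFERA's exact statistic, its random re-binning scheme, and its field-contamination term are not reproduced here, and the function name and bin handling are our own.

```python
import numpy as np

def poisson_chi2(obs, model, color_edges, mag_edges):
    """Compare two (color, magnitude) point sets binned on a common 2D
    grid with a Poisson-motivated chi^2 statistic.  Illustrative only:
    SFERA's exact statistic and its random re-binning are not shown."""
    n_obs, _, _ = np.histogram2d(obs[:, 0], obs[:, 1],
                                 bins=[color_edges, mag_edges])
    n_mod, _, _ = np.histogram2d(model[:, 0], model[:, 1],
                                 bins=[color_edges, mag_edges])
    # Guard against empty model bins when forming (O - M)^2 / M.
    return float(np.sum((n_obs - n_mod) ** 2 / np.clip(n_mod, 1.0, None)))
```

Identical samples give exactly zero; in a full fit this statistic would be minimized over the synthetic-CMD parameters and averaged over binning schemes.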
In order to reduce as much as possible the field contamination we only
used stars within 4 pc from H301's center. As a first step, we
recovered the SFH using massive stars (MSTO) and low-mass stars (TOn
stars) independently. The former includes all stars brighter than
F555W$=19$, the latter all stars fainter than F555W$=22$. Figure
\ref{sfh} shows the results, with the MSTO/BL solution (given the
paucity of BL stars, the age is largely driven by the MSTO) in dashed
green and the PMS TOn one in solid blue. Finally, the red solid line shows
the SFH inferred using the entire CMD.
\begin{figure*}[t]
\centering \includegraphics[width=14cm]{fig9.pdf}
\caption{SFHs of H301. Solid red, blue and dashed green lines
corresponds to the best SFHs obtained using the entire CMD, TOn
alone and MSTO/BL alone, respectively. Shaded regions represent
the 1 $\sigma $ standard deviation from the best SFH. The inset
panel zooms around the main peak.}
\label{sfh}
\end{figure*}
In all cases the most relevant peak is located in the range
$28.5$-$31.5$ Myr. The MSTO/BL solution shows a clear peak at
$30.5^{+1}_{-2}$ Myr, while the TOn peak is slightly broader on the
young side, suggesting that the MSTO age is better defined (not
surprising given the much smaller photometric error affecting this
phase). The net result is that, within the errors, MSTO and TOn ages
are in excellent agreement. Combining the two features leads to the
red solid line solution, which more closely resembles the MSTO solution, but
with smaller uncertainties. Hereafter we refer to this solution as
the best SFH for H301.
At first glance our conclusions seem at odds with the results derived
by \cite{naylor09}, who studied the age for a selection of clusters
and associations younger than 100 Myr. They found that the ages based
on PMS isochrones are 1.5-2.0 times shorter than the ``nuclear'' MSTO
ages. However, our PMS age is mostly based on the TOn luminosity, and
not on the PMS stars, whose color is affected by several uncertainties
(see e.g. \citealt{henne}). Another possible source of error is the
use of different sets of models to study different phases: we do not
have this kind of problem because in this work we used the PARSEC
models which follow the entire evolution from the PMS to the post-MS.
Fig. \ref{lf2} shows the synthetic LF (blue line) corresponding to the
best SFH compared with the observed counterpart (red line). Error bars
in the data are the square root of the counts, while the $1\,\sigma$
uncertainty in the model is indicated with a blue band. The quality of
the fit is excellent and most of the differences are within the
errors.
\begin{figure*}[t]
\centering \includegraphics[width=12cm]{fig10.pdf}
\caption{Synthetic LF generated with the most likely SFH compared with
the observational LF. }
\label{lf2}
\end{figure*}
We found a best-fitting E(B$-$V) of 0.22-0.24 mag with a $1\,\sigma$
dispersion of 0.04 mag. In this case the dispersion is not the actual
error, but the spread (differential reddening) needed to match the MS
width.
It is worth noting that the inferred absolute ages are expected to
change according to the adopted distance modulus. Indeed, a shorter
distance would increase the cluster age. However, in the range 20-30
Myr, both the MSTO and TOn magnitudes change with age by
$\approx 0.1\, \mathrm{mag}/\mathrm{Myr}$, hence the relative age
MSTO/TOn remains unchanged.
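As a minimal numerical illustration of this degeneracy (assuming only the $\approx 0.1$ mag/Myr slope quoted above; the function name is ours), a change in the adopted distance modulus maps onto the same absolute-age shift for both clocks, leaving their difference intact:

```python
# Near 20-30 Myr both the MSTO and the TOn dim by roughly 0.1 mag per
# Myr (value taken from the text), so a distance-modulus change shifts
# both absolute ages equally.
DMAG_PER_MYR = 0.1  # mag/Myr

def age_shift(delta_dist_mod):
    """Age offset (Myr) induced by a distance-modulus change (mag)."""
    return delta_dist_mod / DMAG_PER_MYR

# A 0.1 mag shorter distance makes *both* clocks ~1 Myr older:
print(age_shift(0.1))  # -> 1.0
```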
In order to derive the initial total mass and number of supernovae
Type II exploded in H301 we ran 1000 Monte Carlo simulations
normalized to the observed number of stars (decontaminated with an
external field) brighter than F555W $=20$. Adopting the peak age
$30.5$ Myr, we calculated that the initial total mass of H301 was
$\approx 8800\,\pm 800$ M$_{\odot}$\footnote{Corrected for stars
residing outside the 4 pc radius (about 15\%).} and that $52\pm 9$
supernovae Type II\footnote{Assuming that all stars above about 20
M$_\odot$ produce black holes.} exploded. This result is also
supported by observations (see GC00 and references therein;
\citealt{lopez11}), that place H301 within a high-velocity expanding
shell, characterized by multiple centers and filled with diffuse X-ray
emission. Moreover, \cite{walborn97} classified an object, WB9 (Be1 in
GC00; indicated with a green circle in Fig. \ref{chart}), as a
spectroscopic binary with a compact companion, possibly the stellar
remnant of a supernova event.
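A stripped-down version of such a Monte Carlo can be sketched as follows. Every numerical choice here is an assumption for illustration: a pure Salpeter IMF (the paper's adopted IMF is not restated in this section), 6 M$_\odot$ as a stand-in for the mass at the F555W$=20$ cut, 9.5 M$_\odot$ as the present turnoff mass, and 20 M$_\odot$ as the black-hole threshold from the footnote. The outputs are indicative only and do not reproduce the quoted 8800 M$_\odot$ and 52 supernovae.

```python
import numpy as np

def sample_salpeter(n, m_lo=0.1, m_hi=120.0, alpha=2.35, rng=None):
    """Draw n stellar masses from a pure Salpeter power law dN/dm ~
    m^-alpha via the inverse CDF (hypothetical IMF choice)."""
    rng = rng or np.random.default_rng()
    a = 1.0 - alpha
    u = rng.random(n)
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

def cluster_realization(n_bright_obs, m_bright=6.0, m_sn_lo=9.5,
                        m_sn_hi=20.0, rng=None):
    """One realization: draw stars until the count above m_bright (an
    assumed mass at the F555W = 20 cut) matches the decontaminated
    observed count; report the total initial mass and the number of
    SN II progenitors (turnoff mass to black-hole threshold)."""
    rng = rng or np.random.default_rng()
    total_mass, n_bright, n_sn = 0.0, 0, 0
    while n_bright < n_bright_obs:
        for m in sample_salpeter(1000, rng=rng):
            total_mass += m
            if m >= m_bright:
                n_bright += 1
            if m_sn_lo <= m < m_sn_hi:
                n_sn += 1
            if n_bright == n_bright_obs:
                break
    return total_mass, n_sn
```

Repeating `cluster_realization` many times and taking the mean and scatter of the outputs mimics the 1000-realization procedure described in the text.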
Before closing this section, it is important to discuss the impact of
the assumed binary recipe and metallicity on the recovered SFH.
\subsection{Binaries}
Although our findings
do not critically depend on the adopted binary fraction and mass
ratio, a very high binary fraction (above 50\%; \citealt{sana13}) and
flatter mass ratio q (our binary prescription favors low q binaries),
as found for O-stars, can affect the recovered SFH. In practice, the
effect of equal mass binaries is to make stars appear 0.75 mag
brighter, and not taking it into account leads to an underestimate of
the age. Since our SFHs are derived with a prescription that disfavors
q$=$1 massive binaries, we may indeed underestimate the age if the
binaries are mostly of equal mass.
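The 0.75 mag figure is just the magnitude equivalent of doubling the flux of an unresolved pair; a one-line check (the function name is ours):

```python
import math

def binary_brightening(q_flux=1.0):
    """Magnitude offset of an unresolved binary whose secondary/primary
    flux ratio is q_flux (1.0 = equal-mass, equal-flux pair)."""
    return 2.5 * math.log10(1.0 + q_flux)

# Equal-mass pair: 2.5 log10(2) ~ 0.75 mag, as quoted in the text.
print(round(binary_brightening(), 2))  # -> 0.75
```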
However, B-type stars may behave differently. \cite{dunstall15}
studied the multiplicity of 408 B-type stars observed in different
regions (NGC~2070, NGC~2060, Hodge~301, SL~639) of 30~Doradus with
multi-epoch spectroscopy from the VLT-FLAMES Tarantula Survey
(VFTS). Although they found an average binary fraction of about 58\%,
close to the O-stars multiplicity by \cite{sana13}, the intrinsic
binary fraction in H301 was found to be remarkably low and around 20\%
(8\% detected, with a detection probability of $40 \pm
10$\%).
Concerning the mass ratio, they found a distribution of q favoring
low-mass companions (f(q)$\propto q^{-2.8}$).
Although our binary prescription is consistent with
\possessivecite{dunstall15} finding, we also tested the hypothesis of
a binary population similar to that found by \cite{sana13} for O-type
stars. Figure \ref{bin} shows the SFH recovered assuming a binary
fraction of 50\% and a uniform distribution of q. Overall, the result
is qualitatively similar to the standard SFH, except that the peak is
slightly broader.
\begin{figure*}[t]
\centering \includegraphics[width=14cm]{fig11.pdf}
\caption{Red line: SFH recovered adopting a 50\% binary fraction and
    a uniform mass-ratio distribution. }
\label{bin}
\end{figure*}
Finally, in the literature there is no evidence of a universal mass
ratio holding from massive to low-mass stars (see
e.g. \citealt{ward15}). However, the similarity of our inferred MSTO
and TOn ages suggests that binary populations of high and low-mass
stars are not dramatically different.
\subsection{Metallicity}
The empirical evidence of a metallicity spread in the young
populations of the LMC (see e.g. \citealt{luck98},
\citealt{romaniello08}) makes Hodge~301's metallicity inherently
uncertain. In order to test the impact of a lower metallicity on the
final age, we re-recovered the SFH using PARSEC models with Z=0.005
(about $1\,\sigma$ away from the mean value of Luck's sample). The
result is shown in Fig. \ref{sfr_feh}. The lower metallicity reduces
the peak age by about 3 Myr. In this case, the best age is $27.5\pm 1$
Myr.
\begin{figure*}[t]
\centering \includegraphics[width=14cm]{fig12.pdf}
\caption{Recovered SFH for the metallicity Z=0.005 (black line) compared
to the Z=0.008 one (red line). }
\label{sfr_feh}
\end{figure*}
This difference can be explained by the dependence of MSTO and TOn
luminosities on metallicity and helium abundance (PARSEC isochrones at
Z=0.005 have slightly lower helium abundance than isochrones at
Z=0.008). First, the lower the metallicity, the shorter the
evolutionary timescale of a star of a given mass during the major
evolutionary phases. For this reason, at a fixed age, MSTO and TOn
masses are lower at Z=0.005 than at Z=0.008, and stars of lower masses
are also fainter. However, as a secondary effect, a decrease of
metallicity shifts the isochrone to higher luminosities, whereas a
decrease of helium has the opposite effect. The combined effect of
these changes is to decrease MSTO and TOn luminosities by 0.1-0.2 mag,
mimicking an older isochrone. The resulting mean reddening is also
higher by about 0.02 mag (E(B-V)=0.24 mag), which causes a further
luminosity decrease of about 0.1 mag.
The new peak age, 27.5 Myr, leaves the initial total mass of H301
almost unchanged, whereas the number of supernovae Type II, $46\pm 8$,
is slightly lower than in the Z=0.008 case.
Another effect of the lower metallicity is to increase the luminosity
of the BL phase. This is clearly visible in Fig. \ref{4cmd_fe05}. In
contrast to Z=0.008 isochrones (see Fig. \ref{4cmd}), the 20 Myr BL is
long enough to reach the color of the two BSGs, but the red side is
now brighter than the three RSGs. On the other hand, the 25 Myr
isochrone plays the same role as the 20 Myr isochrone at Z=0.008,
fitting well the RSGs luminosity but failing to reproduce the observed
ratio RSGs/BSGs. The entire 30 Myr BL is too faint. In conclusion,
while no single model is able to reproduce the extension and luminosity of
the BL at the same time, the 25 Myr isochrone is the one that best represents
the observed BL, coming closer to the MSTO/TOn best ages, which are
now only 2.5 Myr apart.
\begin{figure}[t]
\centering \includegraphics[width=10cm]{fig13.pdf}
\caption{The same as in Fig. \ref{4cmd}, but the synthetic CMDs are computed for
Z=0.005 instead of Z=0.008.}
\label{4cmd_fe05}
\end{figure}
In summary, given the uncertainty in the metallicity of H301, we
conclude that the best fitting age is between 26.5 and 31.5 Myr, while
the predicted number of supernovae Type II is between 38 and 61.
\section{Comparison with the literature}
Compared to the literature, our lower limit on the age of H301,
$\sim 26.5$ Myr, is slightly older than the photometric estimate of
GC00, who found $20\pm 5$ and $25\pm 5$ Myr using Geneva
(\citealt{schaerer93}) and Padova models (\citealt{bertelli94})
respectively. GC00 required their isochrone fit to match the position
of the blue and red supergiants and omitted stars with H$\alpha$
excess. This difference is probably caused by the different versions
of the isochrones and different stellar phases used in our
analysis. Indeed, if we limit our analysis to the BL phase only (see
Fig. \ref{4cmd} and \ref{4cmd_fe05}), our best estimate of the age
would drop down to 20-25 Myr, in good agreement with GC00. GC00's
estimate of the total reddening (E(B-V)$=0.28\pm 0.05$ mag) is
compatible with our estimate.
Concerning the mass of H301, GC00 found a present day mass of
4882$\pm$247 M$_\odot$ between 0.4 and 12 M$_\odot$, corresponding to
an initial mass of $\approx 6000 \pm 525$ M$_{\odot}$ when
extrapolated with a Salpeter IMF above 12 and up to 120
M$_{\odot}$. Most of the difference from our estimate stems from the
different age and IMF. If we were to adopt a Salpeter IMF in our
models down to 0.4 M$_{\odot}$, our mass estimate would drop to
$\sim 7200 \pm 700$ M$_{\odot}$. In addition, if we were to adopt a
20 Myr isochrone (as adopted by GC00) instead of a 30.5 Myr one, our
result would drop to $6600\pm 400$ M$_{\odot}$, in excellent
agreement with GC00's estimate. Our estimated number of supernovae,
between 38 and 61, is in good agreement with GC00's rate
($41\pm 7$).
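GC00's extrapolation is easy to reproduce: scaling the 4882 M$_\odot$ measured between 0.4 and 12 M$_\odot$ by the ratio of Salpeter mass integrals over 0.4-120 and 0.4-12 M$_\odot$ recovers their quoted initial mass (this sketch assumes the standard slope $\alpha = 2.35$; the function names are ours).

```python
def salpeter_mass_integral(m_lo, m_hi, alpha=2.35):
    """Unnormalized mass integral of a Salpeter IMF:
    integral of m * m^-alpha dm from m_lo to m_hi."""
    a = 2.0 - alpha
    return (m_hi**a - m_lo**a) / a

# Present-day mass measured between 0.4 and 12 Msun (GC00) ...
m_measured = 4882.0
# ... extrapolated to an initial mass over 0.4-120 Msun:
scale = (salpeter_mass_integral(0.4, 120.0)
         / salpeter_mass_integral(0.4, 12.0))
m_initial = m_measured * scale
print(round(m_initial))  # ~6060, vs. GC00's quoted 6000 +/- 525
```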
Our best age is higher than that found by \cite{evans15} (15$\pm$5 Myr), who
used high resolution spectroscopy to reconstruct temperature and
luminosity for a sample of B-stars within 4.9 pc from H301's
center. Part of the discrepancy can be ascribed to the adopted
stellar models from \cite{brott11}, whose timing is faster than our
Padova models of the same metallicity: we found that a Padova
isochrone of 30 Myr has the same MSTO luminosity as a Brott et
al. isochrone of 25 Myr. The rest of the discrepancy might then be
attributed to two other potential effects: the choice of MSTO magnitude
in the present paper (Fig. \ref{cmd} and see the accompanying
discussion in section 2), and the transformation from a CMD to the
HRD in Evans et al. It is clear from Fig. 5 of Evans et al. that
they include the fainter of the two PMSG stars in their main
sequence group, and from our Fig.~6 one might argue that raising the
MSTO to V$=15$ would give an age of $\sim$20 Myr, in better agreement
with Evans et al. However, a more serious issue perhaps is that
Evans et al. derive luminosities from the stars' effective
temperatures that imply there are several stars in H301 with masses
between 12 and 15 solar masses, significantly in excess of the 9-10
M$_{\odot}$ implied by an age of 26.5-31.5 Myr in the PARSEC
models. The brightest of these B-stars, particularly those in the
PMSG, might be explained as blue stragglers, being the products of
binary evolution. The present low fraction of binaries in H301,
discussed in the previous section, may simply reflect that many of
these have already interacted and formed binary products as
discussed by \cite{schneider14}, producing blue straggler stars that
are among the brightest blue stars in H301.
Finally, it is important to remark that a 15 Myr age would be strongly
ruled out by the PMS TOn. As shown in Fig. \ref{LF_annuli}, a TOn of
15 Myr (grey line) would be at least 1 mag brighter than the present
TOn, and hence less affected by photometric errors and
incompleteness. If any significant excess of counts at
V$\approx 22.5-23$ were present, it would be clearly
detected. Moreover, the 15 Myr age is disfavored by the MSTO too: as
shown in Fig. \ref{4cmd}, the isodensity contours of the 15 Myr model
show a continuity of counts up to the two PMSG stars, while no stars
are observed in the magnitude range 15-16.
We further note that a search for PMS stars based on the H$\alpha$
excess emission (e.g. \citealt{demarchi10,demarchi11}) over the whole
HTTP area has revealed an overdensity of about 120 PMS stars within 4
pc of H301, with an average age of $\sim 28$\, Myr (De Marchi,
Panagia, Sabbi, et al., in preparation), hence in excellent agreement
with our estimate.
Although of low significance, a mild activity in the range 6-8 Myr is
also predicted by most of our solutions. Indeed, for V$> 23$, a few
stars up to 0.5 mag redder than the MS are visible at all radii (see
Fig. \ref{field}), suggesting that, if a more recent episode of
star formation took place, it did not necessarily stem from H301 but
involved a larger region, perhaps originating in a more diffuse
environment or from a dissolved cluster. An inspection of the
H$\alpha$ flux reveals that some of these red objects have H$\alpha$
excess, but their signal-to-noise ratio is too low ($1-2\,\sigma$)
for a firm conclusion. Another possibility is that a few stars in the
region suffer from much higher reddening, mimicking a PMS
population.
\section{Conclusions}
From comparison of the observed CMD with simulations based on stellar
evolutionary models we derive in a self-consistent way the age
distribution and reddening of Hodge~301, a young cluster located in
the 30~Doradus region. Thanks to the photometric capabilities of the
HTTP data-set we have detected the PMS TOn for the first time. The
peak age we derive from fitting this feature and the MSTO, between
26.5 and 31.5 Myr ($30.5^{+1}_{-2}$ Myr using Z=0.008, $27.5\pm 1$ Myr
using Z=0.005), confirms that Hodge~301 is much older than the bulk of
the stars in NGC~2070, the most active region of 30~Doradus, but only
slightly older than its oldest stars ($\approx 20$ Myr;
\citealt{cigno15}). For a fixed metallicity, the resulting age spread,
$\approx 1-3$ Myr, is of the order of the age uncertainty as expected
from photometric errors only, hence it is difficult to conclude
whether the spread reflects a real prolonged SF.
The inferred PMS TOn age is consistent with the age derived from the
MSTO and a few Myr older than the age derived from fitting the
luminosity of the post-MS stars. In particular, while none of the
models can reproduce BL extension and luminosity at the same time,
fitting the three RSGs leads to an age between 20 and 25 Myr. However,
post-MS theoretical models for intermediate/massive stars are very
uncertain. As shown in, e.g., \citealt{tan14} mass loss and
core-convective overshooting can greatly affect the BL color
extension. Indeed, models tend to predict a clear separation in the
CMD between MS and BL stars, while no such gap is seen in several
extragalactic young massive star clusters (see
e.g. \citealt{larsen11}) and dwarf galaxies (see
e.g. \citealt{tang14,tang16} ). Given this uncertainty, we are
inclined to favor a cluster age that is mainly derived from fitting
the MSTO and PMS TOn. More generally, while the absolute age of H301
does depend on the stellar models adopted, the age difference between
H301 and NGC2070 (\citealt{cigno15}) is robust, since the analysis is
done with the same technique, stellar models and clock (PMS TOn).
Finally, it is intriguing that the MSTO/PMS TOn age estimate is also
older than the spectroscopic age derived by \cite{evans15}. Part of
the discrepancy could be attributed to the presence of blue
stragglers. However, unless H301 hosts an unusual number of these
objects, the discrepancy could indicate problems in current stellar
evolutionary models of massive stars.
Other interesting results are: 1) H301's total stellar mass is
$\approx 8800\,\pm 800$ M$_{\odot}$; 2) the total reddening E(B$-$V)
is $\approx 0.22-0.24$ mag, with a dispersion of 0.04 mag; 3) between
38 and 61 supernovae Type-II exploded in the region.
From the point of view of the Tarantula Nebula, the old age of H301,
several Myr older than the nearby and massive NGC~2070, and its high
supernova activity, along with the fact that no older clusters are
visible in the region, could suggest that the onset of H301 sparked
the formation of NGC~2070.
\acknowledgments
We would like to thank Mario Gennaro, Chris Evans and Pier Giorgio
Prada Moroni for helpful comments and discussions. D.A.G. kindly
acknowledges financial support by the German Research Foundation (DFG)
through grant GO\,1659/3-2. EKG gratefully acknowledges funding from
Sonderforschungsbereich ``The Milky Way System'' (SFB 881) of the
German Research Foundation (DFG), especially via subproject B5.
% arXiv:1911.13190
\section{Perturbation theory}
In this section, we show in more detail how we obtained Eq.~\eqref{eq:ApproxSolNk} of the main text. We start from the
equation giving the steady state in the continuum limit [see also Eq.~\eqref{eq:ContinuousMod} of the main text],
\begin{equation}
0 = \int\displaylimits_{-2 J}^{2 J} \di{\omega_{k'}} D(\omega_{k'}) \left\{ S(\omega) n(\omega_{k'})[n(\omega_k)+1] - S(-\omega)
n(\omega_k) [n(\omega_{k'}) +1] \right\}.
\label{eq:ContinuousModApp}
\end{equation}
The first step consists in expanding both $S(\omega)$ and $S(-\omega)$ in powers of $\beta_0 \omega$. We have
\begin{equation}
\begin{aligned}
S(\omega) &= \abs{a_0}^2 \frac{\kappa}{(\omega + \Delta)^2 + \left(\frac{\kappa}{2}\right)^2}\\
&= \abs{a_0}^2 \frac{\kappa}{\Delta^2 + \left( \frac{\kappa}{2} \right)^2} \frac{1}{1 + \frac{4 \Delta}{\Delta^2 + \left(
\frac{\kappa}{2} \right)^2} \frac{\omega}{2} \left( 1 + \frac{\omega}{2\Delta} \right) } \\
&= \abs{a_0}^2 \frac{\kappa}{\Delta^2 + \left( \frac{\kappa}{2} \right)^2} \frac{1}{1 - \beta_0 \frac{\omega}{2} \left( 1 +
\frac{\omega}{2\Delta} \right) } \\
&= \abs{a_0}^2 \frac{\kappa}{\Delta^2 + \left( \frac{\kappa}{2} \right)^2} \sum_{l=0}^\infty \left[
\frac{\beta_0 \omega}{2} \left( 1 + \frac{\omega}{2\Delta} \right)\right]^l
\end{aligned}
\label{eq:ExpNoiseSpectrumPos}
\end{equation}
and similarly we have
\begin{equation}
S(-\omega) = \abs{a_0}^2 \frac{\kappa}{\Delta^2 + \left( \frac{\kappa}{2} \right)^2} \sum_{l=0}^\infty \left[-
\frac{\beta_0 \omega}{2} \left( 1 - \frac{\omega}{2\Delta} \right)\right]^l.
\label{eq:ExpNoiseSpectrumNeg}
\end{equation}
We also expand $n(\omega_{k'}) = n (\omega_k + \omega)$ in a Taylor series around $\omega = 0$, obtaining
\begin{equation}
n (\omega_k + \omega) = \sum_{l=0}^\infty \frac{n^{(l)} (\omega_k)}{l!} \omega^l,
\label{eq:ExpN}
\end{equation}
where $n^{(l)}$ denotes the $l$th derivative of $n(\omega_k)$.
By replacing Eqs.~\eqref{eq:ExpNoiseSpectrumPos}, \eqref{eq:ExpNoiseSpectrumNeg}, and \eqref{eq:ExpN} into
Eq.~\eqref{eq:ContinuousModApp}, we can carry out the integral over the density of states. We look for solutions of the resulting
equation in the form of a series expansion
\begin{equation}
n(\omega_k) = \sum_{l=0}^\infty n_l (\omega_k),
\label{eq:SeriesAssump}
\end{equation}
where we assume that $n_l \propto (\beta_0 \omega_k)^l$.
By expanding Eqs.~\eqref{eq:ExpNoiseSpectrumPos}, \eqref{eq:ExpNoiseSpectrumNeg}, and \eqref{eq:ExpN} to third order, i.e., up to
$l=3$, we can find first order differential equations for $n_l$ with $l\in\{0,1,2\}$ by grouping together terms proportional to
$(\beta_0 \omega_k)^l$. We note that we treat terms like $(\beta_0 \omega_k) \omega_k /\Delta$ as being higher order, i.e., for
this particular example this term would be grouped together with terms scaling like $(\beta_0 \omega_k)^2$.
Collecting terms proportional to $\beta_0 \omega_k$, we find that $n_0 (\omega_k)$ must obey the differential equation
\begin{equation}
n'_0 (\omega_k) + \beta_0 n_0 (\omega_k) [n_0 (\omega_k) + 1] = 0.
\label{eq:n0diff}
\end{equation}
The solution of Eq.~\eqref{eq:n0diff} is the Bose-Einstein statistics with temperature $\beta_0$,
\begin{equation}
n_0 (\omega_k) = \frac{1}{\exp\left[ \beta_0 (\omega_k - \mu) \right] -1},
\label{eq:n0}
\end{equation}
and the constant of integration $\mu$ is the chemical potential, which is fixed by requiring that $\sum_k n_0 (\omega_k) = N$.
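The normalization condition fixing $\mu$ can be solved numerically; a minimal bisection sketch (the finite mode list and function names are ours; $\mu$ must stay below the band edge $\min_k \omega_k$ for all occupations to remain positive):

```python
import math

def bose_einstein(omega, beta0, mu):
    """Bose-Einstein occupation n_0(omega) at inverse temperature beta0."""
    return 1.0 / (math.exp(beta0 * (omega - mu)) - 1.0)

def solve_mu(omegas, beta0, n_target):
    """Bisect for mu such that sum_k n_0(omega_k) = N.  The total
    occupation grows monotonically with mu, diverging as mu approaches
    the band edge from below."""
    hi = min(omegas) - 1e-9           # just below the band edge
    lo = hi - 100.0 / beta0           # far below: occupations -> 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(bose_einstein(w, beta0, mid) for w in omegas) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```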
Collecting terms proportional to $(\beta_0 \omega_k)^2$, we find that $n_1 (\omega_k)$ must obey the differential equation
\begin{equation}
n'_1 (\omega_k) + \beta_0 n_1 (\omega_k) [2 n_0 (\omega_k) + 1] = 0,
\label{eq:n1diff}
\end{equation}
whose solution is
\begin{equation}
n_1 (\omega_k) = c_1 \frac{\exp\left[ \beta_0 (\omega_k -2 \mu) \right]}{\left\{ \exp\left[ \beta_0 (\omega_k - \mu)
\right] -1 \right\}^2},
\label{eq:n1}
\end{equation}
with $c_1$ a constant of integration. To determine the constant $c_1$ we require that $ \sum_k [n_0 (\omega_k) + n_1 (\omega_k)] =
N$. Note that we have previously fixed $\mu$ by requiring that $\sum_k n_0 (\omega_k) = N$, which leads to $c_1 = 0$. We note that
this procedure to find the constants of integration ensures that our solution always describes a distribution with $N$ particles
at every order.
Finally, collecting terms proportional to $(\beta_0 \omega_k)^3$, we find that $n_2 (\omega_k)$ must obey the differential
equation
\begin{equation}
n'_2 (\omega_k) + \beta_0 n_2 (\omega_k) [ 2 n_0 (\omega_k) +1] + \frac{\beta_0^2 (3+\beta_0 \Delta)(6 J^2 +
\omega_k^2)}{12 \Delta} n_0 (\omega_k) [n_0 (\omega_k) + 1] = 0.
\label{eq:n2diff}
\end{equation}
The solution of Eq.~\eqref{eq:n2diff} is
\begin{equation}
n_2 (\omega_k) = - \frac{\exp\left[ \beta_0 (\omega_k - \mu) \right]}{\left\{ \exp\left[ \beta_0 (\omega_k -\mu) \right]-1
\right\}^2} \left[ \frac{\beta_0^2}{36 \Delta} (3 + \beta_0 \Delta) (18 J^2 + \omega_k^2)\omega_k - \exp(\beta_0
\mu) c_2 \right],
\label{eq:n2}
\end{equation}
with $c_2$ the constant of integration which is once more determined by the condition $\sum_k [n_0 (\omega_k) + n_2 (\omega_k)] =
N$.
Combining Eqs.~\eqref{eq:n0}, \eqref{eq:n1}, and \eqref{eq:n2} together with the result $c_1 =0$ leads to
Eq.~\eqref{eq:ApproxSolNk} of the main text. We note that the latter equation can predict negative occupation numbers, but this
only occurs outside of the perturbative regime where the theory is not valid anymore.
\section{Approximating the steady-state solution with the Bose-Einstein statistics}
In this section we show that the third order perturbative solution is a better approximation of the steady-state solution than the
leading order Bose-Einstein statistics. In Fig.~\ref{fig:sup01}~(a) we plot the Kullback–Leibler divergence between $p^{(\infty)}$ and
$p_\mm{BE}$. We indicate by a white dashed line when $\beta (\omega_k) = \beta_0$ [see Eq.~\eqref{eq:EffT2ndOrder}], i.e., $\Delta
= -\sqrt{3}\kappa/2$. When this last condition is met, we have $n_2 (\omega_k) = 0$ if $c_2 = 0$.
\begin{figure}[t!]
\includegraphics[width=0.75\columnwidth]{sup01}
\caption{(Color Online) (a) Kullback–Leibler divergence comparing the steady-state solution with the Bose-Einstein
statistics given by Eq.~\eqref{eq:n0}. (b) Ratio between $D_\mm{KL} (p^{(\infty)} \parallel p_\mm{BE})$ and $D_\mm{KL}
(p^{(\infty)} \parallel p)$. The white dashed line corresponds to $\Delta=-\sqrt{3}\kappa/2$ where the temperature $\beta
(\omega_k)$ is equal to $\beta_0$.}
\label{fig:sup01}
\end{figure}
In Fig.~\ref{fig:sup01}~(b), we plot the ratio
\begin{equation}
R = \frac{D_\mm{KL} (p^{(\infty)} \parallel p_\mm{BE})}{D_\mm{KL} (p^{(\infty)} \parallel p)},
\label{eq:rationDKL}
\end{equation}
which shows that the higher order perturbation approximates the steady-state solution more accurately than the leading order given
by the Bose-Einstein statistics. We note, however, that in the close vicinity of $\Delta=-\sqrt{3}\kappa/2$, the leading order is
more suitable to approximate the steady state.
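For reference, the two divergences entering Eq.~\eqref{eq:rationDKL} can be computed directly from the occupation distributions; this sketch assumes discrete distributions on a common grid, normalized internally, with function names of our own choosing:

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(p || q) in nats;
    both inputs are normalized to probability distributions first."""
    zp, zq = sum(p), sum(q)
    return sum((pi / zp) * math.log((pi / zp) / (qi / zq))
               for pi, qi in zip(p, q) if pi > 0)

def dkl_ratio(p_steady, p_be, p_pert):
    """Ratio R of Eq. (rationDKL): R > 1 means the perturbative
    solution p_pert tracks the steady state better than p_BE does."""
    return kl_divergence(p_steady, p_be) / kl_divergence(p_steady, p_pert)
```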
\end{document}
% arXiv:1808.06648
\section{INTRODUCTION}
The cosmic microwave background (CMB) has become one of the most powerful probes of the early universe. Measurements of temperature anisotropies at the level of approximately ten parts per million have brought cosmology into a precision era, and have placed tight constraints on the fundamental properties of the universe. Beyond temperature anisotropies, CMB polarization anisotropies not only enrich our understanding of our cosmological model, but could potentially provide clues to the very beginning of the universe via the detection (or non-detection) of primordial gravitational waves. A number of experiments have made and are continuing to refine measurements of the polarization anisotropy. However, these experiments are typically dedicated to a relatively restricted range of angular scales, e.g., large angular scales (tens of degrees) or high resolution/small angular scales (on the order of 1\,arcminute). To provide a complete picture of cosmology, both large and small angular scales are important. Ideally these measurements would be made from the same observing site so that the widest range of angular scales can be probed, at multiple frequencies, on the same regions of the sky. This is the goal of the Simons Observatory (SO). SO will field a 6-meter large aperture telescope (LAT) coupled to the large aperture telescope receiver (LATR). The LAT is designed with a large FOV capable of supporting a cryostat with up to 19 LATR-like optics tubes. To limit the development risk of the large SO cryostat, the LATR is designed to accommodate up to 13 optics tubes. In the initial SO deployment we plan to field 7 optics tubes with 3 detector wafers in each, for a total of roughly 35,000 detectors, primarily at 90/150~GHz.
We note that each optics tube could be upgraded to support 4.5 wafers for a $\sim$50\% increase in the number of detectors per optics tube. With this upgrade and the deployment of 19 optics tubes, the LAT could support roughly 145,000 detectors at 90/150~GHz.
SO will also have an array of half-meter large angular scale cameras coupled to an additional 30,000 detectors. The unique combination of telescopes in a single CMB observatory, which will be located in Chile's Atacama Desert at an altitude of 5190~m, will allow us to sample a wide range of angular scales over a common survey area.
In this paper we will provide an overview of the simulations done in support of the LATR design process. Finite element analysis (FEA) simulations give us critical feedback about the performance of our components, which we use to refine their design. In Sec.~\ref{sec:mech} we cover the suite of mechanical simulations that we did in support of our design using the Solidworks Simulation module.\footnote{Dassault Syst\`emes, 10, Rue Marcel Dassault, 78140, V\'elizy-Villacoublay, FRANCE, https://www.solidworks.com/} Then, in Sec.~\ref{sec:therm} we describe our combined thermal model and detail the thermal gradient simulations performed with the COMSOL software.\footnote{COMSOL, Inc., 100 District Avenue, Burlington, MA 01803, USA, https://www.comsol.com}
The challenges we faced when designing the LATR are unique in that we are trying to build the largest ground-based CMB receiver to date. However, the solutions we developed for these challenges with the assistance of FEA simulations will provide a critical stepping stone for the next generation of CMB experiments, in particular CMB-S4\cite{Abazajian2016,Abitbol2017}.
\section{MECHANICAL SIMULATIONS}
\label{sec:mech}
In order to ensure the structural stability of our receiver, a full suite of FEA simulations of all structurally critical components was performed. All structural simulations were done using the Solidworks Simulation software module. The core result in each simulation is the static simulation, which quantifies the linear strain of our receiver under static conditions. Our criterion for success is a minimum factor of safety (FoS) of four or higher, where the FoS is the elastic yield strength divided by the predicted stress. Additionally, we perform a buckling analysis. The buckling analysis examines the behavior of thin elements under compression, and quantifies whether those parts will collapse due to loss of stability. The criterion for a successful buckling simulation is a load factor greater than 4, where the load factor is the load at buckling divided by the maximum predicted load, analogous to the FoS. Finally, for some temperature stages, we perform a vibrational analysis, which determines the fundamental frequencies of vibrations for our various temperature stages. Vibrations present a number of challenges: they can put the optics out of alignment; they work harden the copper we use to conduct heat, thereby reducing its efficacy; and, in our coldest stages, they can cause appreciable heating. The G10 that we use to support our 80~K, 40~K, and 4~K stages is very stiff, and thus acts as a high frequency filter on vibrational modes coming from the 300~K stage. In particular, we require that the first vibrational mode of each stage is several times higher than the first vibrational mode of the telescope, at 3~Hz, so that the vibrations of the telescope do not couple to the receiver. Therefore, we consider our vibrational analysis a success if the first fundamental mode is at 20~Hz or higher.\\
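The three pass/fail criteria above can be collected in a small helper. The thresholds (FoS $\geq 4$, buckling load factor $\geq 4$, first mode $\geq 20$~Hz) come from the text; the function and the example inputs are hypothetical.

```python
def review_stage(yield_strength, max_stress, buckling_load_factor,
                 first_mode_hz):
    """Return (FoS, list of failed criteria) for one simulated stage.
    Stress and strength must share units (e.g. MPa)."""
    fos = yield_strength / max_stress
    failures = []
    if fos < 4.0:
        failures.append("static: FoS < 4")
    if buckling_load_factor < 4.0:
        failures.append("buckling: load factor < 4")
    if first_mode_hz < 20.0:          # well above the 3 Hz telescope mode
        failures.append("vibration: first mode < 20 Hz")
    return fos, failures

# 6061-T6 aluminum yields near 276 MPa; a stage seeing 50 MPa peak
# stress, with load factor 6 and a 25 Hz first mode, passes all checks:
fos, failed = review_stage(276.0, 50.0, 6.0, 25.0)
print(fos, failed)  # 5.52 []
```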
For all simulations, the key features are loads, fixtures, contacts, and material properties. In all simulations, we include the gravitational load from the masses involved. In any simulation where there is a mass load but the corresponding body is not in the solid model, we use a displaced load to simulate the force. An example is the 4~K stage, where the 1~K and 100~mK bus masses are suspended from the 4~K plate, but the buses themselves are not included in the simulation for simplicity. Each simulation has a fixture, a surface which is defined not to move or deform under load. Generally this is the flange from which the stage is supported. For example, in the 40~K simulation, the flange from which it is supported is the fixture. While these fixtures can deform slightly in reality, we have done simulations of the full structural path which confirm that the flanges deform minimally. Component contacts define the interface between parts. In general, we assume that parts are bonded, which causes our results to slightly overestimate the safety of our designs; this motivates the high FoS. Finally, material properties are the defaults from Solidworks, with two exceptions: we defined custom materials for G10\cite{Kasen1980} and for carbon fiber, for which we obtained technical specifications from the planned vendor, vanDjik Pultruded Products.\footnote{Aphroditestraat 24, NL-5047, TW TILBURG, The Netherlands, http://www.dpp-pultrusion.com/en/the-company/}
\subsection{Vacuum Shell}
\label{sec:shell}
The first challenge presented in designing the LATR is the construction of the vacuum shell. While numerous examples of vacuum vessels as large or larger than ours exist, our design presented several specific challenges, the combination of which to our knowledge has not previously been solved. At the top of this list of challenges is the front plate design. The front plate of the cryostat needs to contain 13 densely packed apertures while still maintaining its structural strength. Further, the front plate needs to flex by less than 2~cm under load in order to avoid the 80~K stage behind it.\cite{Zhu2018} Finally, the plate has to be consistent with our weight restrictions--some early designs we considered weighed nearly 1000~kg, a significant portion of our total mass budget of 6000~kg.\cite{Zhu2018} The closest analog we have identified is the SPIDER cryostat.\cite{SPIDER} Their front plate, however, is highly curved and would exceed the length restriction on our cryostat. Our simulations identified the material surrounding the inner seven optics tubes as by far the most critical for the structural integrity of the cryostat. Therefore we endeavored to strengthen that area as much as possible. The plate is 1~cm thicker in this area in order to reduce stress concentrations. Further, each of our windows is a hexagon rather than the typical circle: each optics tube projects a hexagonal optical beam, which allows us to make our windows hexagonal to match. We thus remove less material as compared to a circumscribed circle, as shown in Fig.~\ref{fig:hexwindow}. We also taper the walls of each window to match the beam divergence, so that the bottoms of our windows are slightly thicker than the tops. See Fig.~\ref{fig:Taper}. Since the front plate will be monolithic and will not require welding, we will manufacture it from 6061-T6 aluminum for optimum strength and machinability. 
Finally, we used FEA to identify areas of low stress and aggressively weight relieved them, resulting in the ``pockets'' visible on the front of the plate. The end result is a 7~cm thick front plate weighing 350~kg that deflects less than 1~cm while conforming to the American Society of Mechanical Engineers (ASME) VIII-2 guidelines.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{TaperImage2.PNG}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:Taper}
Cross section of the front plate showing the tapering of the windows with the beam convergence. Light enters the cryostat from the right, and the detector arrays are to the left. Since we taper the windows, the left hand side of the front plate is thicker than it would be if we had cut the entire window to the beam size at the right hand side of the window.}
\end{figure}
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{HexWindowAdvantage.PNG}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:hexwindow}
Closeup of front plate showing material saved by using hex windows. The gray circles are the size of the hole of equivalent minimum beam clearance at the front surface of the front plate. Making the windows inscribed hexagons instead of the circles which circumscribe them adds a significant amount of material at what is the weakest point in our front plate.}
\end{figure}
The body of our vacuum shell follows a more conventional design. The body is split into two parts, a front shell and a back shell, which allows easier access to the inside of the vacuum shell during assembly while also simplifying its manufacture. Along both halves, ribs are placed roughly every 50~cm in compliance with ASME VIII-1. The back half will be constructed from 0.25~in thick 6061 aluminum, while the front half will be 0.5~in thick. Our FEA, verified by PVEng\footnote{Pressure Vessel Engineering Ltd, 120 Randall Dr b, Waterloo, ON N2V 1C6, Canada, https://pveng.com/}, an engineering firm specializing in pressure vessels, indicates that 0.25~in is sufficient to resist the buckling mode. However, the inward flexing of the front plate stresses the front half of the shell; to resist this mode of failure, the front shell needs to be thicker, and our FEA validates 0.5~in as sufficient. Finally, the back plate of the shell is a non-standard design which minimizes the distance from the flange of the back plate to the apex of its curvature. It was validated via FEA, as shown in Fig.~\ref{fig:Vacuum FoS}. Note that in Fig.~\ref{fig:Vacuum FoS} the minimum factor of safety is 3.2. This is a non-physical artifact of the FEA, wherein stress concentrates in the corners of the model far more than it would in reality; meshing the fillets in the corners with sufficient density to remove the effect is computationally prohibitive. Accounting for the fillets, the lowest factor of safety on any surface of our vacuum shell is at least 5. \\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{Vacuum_Shell_Fos.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:Vacuum FoS}
Factor of Safety plot for the vacuum shell. The listed minimum of 3.2 is due to unphysical stress concentrations in corners. The actual minimum factor of safety on a surface is greater than 5 when fillets are accounted for. FEA allowed us to identify the correct shape for the weight relieving around the outside, as well as determine the maximum safe cut depth.}
\end{figure}
\subsection{80~K Stage}
The 80~K stage supports only its own weight, and given the need for high thermal conductivity, we will construct the 80~K plate from 6063-T5 aluminum (see Sec.~\ref{sec:80therm}). The 80~K stage is suspended from the front vacuum plate, so the latter was selected as the fixture. To isolate the stage thermally from 300~K, we construct the support structure from G10-CR, which is both thermally non-conductive and structurally strong. Our simulation yields a FoS of 62, a buckling load factor of 1700, and a first fundamental vibrational mode at 46.9~Hz, as seen in Fig.~\ref{fig:80K}, all of which greatly exceed our requirements.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{80K_Frequency.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:80K}
Frequency analysis plot for the 80~K filter plate. AMPRES is an arbitrary unit of resultant relative vibrational amplitude. The G10 tabs supporting the stage can be seen around the edge. The plot shows the drum-like vibration which is the first fundamental mode of this stage: the middle of the plate flexes in and out.}
\end{figure}
\subsection{40~K and 4~K Supports}
We performed combined 40~K and 4~K simulations in order to include the structural coupling between the stages. Our fixture was the 300~K flange of the vacuum shell which supports the 40~K stage. For thermal isolation, both the 40~K and 4~K standoffs are constructed from G10-CR. We find that the FoS is greater than 10, the load factor is 590, and the first vibrational mode is at 18.9~Hz. The FoS and load factor both meet our requirements, but the first vibrational mode is at a slightly lower frequency than desired. This mode involves the 4~K plate vibrating like a drum, and we are currently investigating thickening the 4~K plate to raise its frequency. In addition to our typical requirements, we also have a sagging requirement on our optics tubes: optics simulations show that the maximum tip-tilt of the detector arrays at the back of the optics tube which maintains our minimum Strehl ratio is 0.4 degrees\cite{Dicker2018}, corresponding to a maximum sag of the front of the optics tube relative to the back of 6~mm. We find that the tubes sag by approximately 0.3~mm, well within this requirement. Finally, due to the differential thermal contraction between the 300~K flange and the 40~K stage, the support tabs flex inwards radially by approximately 6~mm. While we have performed FEA showing that the tabs can safely flex inwards by this amount, we have not been able to perform an analysis which includes both the mass load and the inward flexing. As such, the 40~K tabs have been designed with an extremely high FoS, exceeding 10, as seen in Fig.~\ref{fig:40K Disp}.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{40K_4K_Disp.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:40K Disp}
Displacement plot for the combined 40~K and 4~K stages. The displacements shown in the model are exaggerated to illustrate their directions; the optics tubes do not actually collide. The G10 tabs can be seen around the outside, with the 40~K ring in between them. The 4~K plate is towards the left of the figure. The entire assembly sags towards the bottom of the figure, while the center 7 optics tubes additionally sag relative to the plate.}
\end{figure}
\subsection{Thermal Bus Support}
The supports for our 1~K and 100~mK thermal buses must be thermally isolating, like those of the other stages, in order to maintain the detectors and electronics at their operating temperatures. At these temperatures carbon fiber has very low thermal conductivity, lower than that of G10. To further reduce thermal conduction without sacrificing stiffness, the carbon fiber tubes are hollow. We simulated the 1~K and 100~mK supports together, fixing the 4~K plate to which the 1~K supports attach. We found that the current design meets our requirements, with a FoS of 89 and a buckling load factor of 58. Vibrational modes at these stages are particularly important, as vibration can cause appreciable heating and work harden the extensive amount of copper. We find that the first fundamental mode is at 28.6~Hz, as seen in Fig.~\ref{fig:1K Freq}, above our requirement.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{Thermal_Bus_Freq.jpg}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:1K Freq}
Frequency analysis plot for the 1~K and 100~mK thermal bus showing the first fundamental mode of vibration, a drum-like mode at about 47~Hz. AMPRES is an arbitrary unit of resultant relative vibrational amplitude. Since we do not include damping, there is no way to calculate the amplitude of vibration at resonance; AMPRES shows the relative vibration of different parts of the assembly.}
\end{figure}
\section{THERMAL SIMULATIONS}
\label{sec:therm}
In addition to structural simulations, we have also completed numerous thermal simulations in order to quantify thermal gradients over various components in our receiver. The performance of many of the pieces of our receiver depends on the temperature of those components. In particular, the efficiency of our detectors, electronics, and optical elements depends on their temperatures. While we know the temperature of each stage's corresponding pulse tube or dilution refrigerator stage, there are frequently significant distances between the cooling apparatus and the elements they are cooling. We have designed our cryostat to minimize these distances, but in an instrument of this size, there will inevitably be longer thermal paths than in previous experiments, leading to correspondingly larger thermal gradients. Therefore, it is imperative that we develop an estimate for these thermal gradients, so that we can ensure that the optics, detectors, and readout electronics will be sufficiently cooled to function effectively.\\
The starting point for our thermal simulations is the Simons Observatory LATR thermal model. The model combines estimates of conductive loading through supports and wiring from one stage to another, radiative loading from warmer stages onto colder ones, dissipation from electronics, and a simulation of the optical chain. To compute the conductive loading for a given part, we combine the cross sectional area $A$, length $l$, warm-side and cold-side temperatures, and a temperature-dependent conductivity model from the literature\cite{Woodcraft2009,NISTCryo} with the integral form of Fourier's Law:
\begin{equation}
P = \frac{A}{l}\int_{T_{\text{low}}}^{T_{\text{high}}} k(T)\, dT
\end{equation}
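As a concrete illustration of this conductive-load calculation, the sketch below numerically integrates a temperature-dependent conductivity; the power-law conductivity and the support-tab geometry here are placeholders chosen only to be roughly G10-like, not the fitted literature models or dimensions we actually use.

```python
import numpy as np

def conductive_load(area_m2, length_m, t_low, t_high, k_of_t):
    """Integral form of Fourier's law: P = (A/l) * integral_{T_low}^{T_high} k(T) dT."""
    t = np.linspace(t_low, t_high, 2001)
    k = k_of_t(t)
    # Trapezoidal integration of k(T) over the temperature span.
    integral = float(np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(t)))
    return area_m2 / length_m * integral

# Placeholder power-law conductivity roughly shaped like G10 at cryogenic
# temperatures (W/m/K); the real model comes from literature fits.
k_g10 = lambda t: 0.07 * (t / 40.0) ** 0.6

# Hypothetical support tab: 10 cm^2 cross section, 10 cm long, 300 K -> 40 K.
p = conductive_load(10e-4, 0.10, 40.0, 300.0, k_g10)
print(f"{p:.2f} W")
```

The same routine applies to any support or wire once its geometry and conductivity model are substituted.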
For the radiative loading between stages, we use the Stefan-Boltzmann law, given in the following equation, conservatively assuming that aluminum surfaces have an emissivity of $5\%$\cite{Bartl2004} and that surfaces covered in multilayer insulation (MLI) have an emissivity of $0.2\%$.\cite{Ross2015}
\begin{equation}
P = A\,\epsilon\,\sigma\left(T_{\text{high}}^{4}-T_{\text{low}}^{4}\right)
\end{equation}
Here $A$ is the absorbing area, $\epsilon$ is the emissivity, $\sigma$ is the Stefan-Boltzmann constant, and $T_{\text{low}}$ and $T_{\text{high}}$ are the colder and warmer stage temperatures. We also estimate and include the power dissipated by our readout electronics. To estimate the power deposited on each stage by the optical elements at that stage, we developed a thermal-optical model, which is described in a separate paper in these proceedings.\cite{Zhu2018} \\
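The gray-body estimate above also makes clear why MLI is worthwhile; a minimal sketch, with an illustrative 1~m$^2$ of shell area (not an actual surface area from our model):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_load(area_m2, emissivity, t_high, t_low):
    """Gray-body estimate: P = A * eps * sigma * (T_high^4 - T_low^4)."""
    return area_m2 * emissivity * SIGMA * (t_high**4 - t_low**4)

# Illustrative 1 m^2 of shell area facing the 300 K -> 40 K gap.
bare_al = radiative_load(1.0, 0.05, 300.0, 40.0)    # bare aluminum, eps = 5%
with_mli = radiative_load(1.0, 0.002, 300.0, 40.0)  # MLI-covered, eps = 0.2%
print(f"{bare_al:.1f} W vs {with_mli:.2f} W")
```

With the assumed emissivities, MLI reduces the radiative load on this surface by a factor of 25.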
We then use the COMSOL\footnote{COMSOL, Inc., 100 District Avenue, Burlington, MA 01803, USA, https://www.comsol.com} software suite to estimate thermal gradients across our 80~K, 40~K, 4~K, 1~K, and 100~mK stages. COMSOL is an FEA program that supports the simulation of various physical processes, including heat transfer. For a typical thermal simulation, we first import a computer aided design (CAD) model of our receiver into COMSOL. We then apply a material to each component, which sets its temperature-dependent thermal conductivity from models we have collected from the literature\cite{Woodcraft2009,NISTCryo}. We then apply the loads calculated with our thermal model to the most appropriate locations on the model and fix the surface corresponding to that stage's cooling system to its operating temperature. Currently we use the manufacturer's guaranteed cooling power at our target temperature, but when we take delivery of the coolers we plan to measure their cooling curves, that is, their cooling power as a function of temperature. From the loading on a given stage, we will then compute the temperature that the refrigerator will reach, allowing us to refine our simulations. From there we mesh our model and run the simulation. Critically, our model does not take into account thermal resistance between material interfaces, which we call thermal joint resistance. COMSOL does have a function for computing this resistance, but we have not been able to verify its accuracy. Instead, we will measure the thermal joint resistance over the most critical of our thermal interfaces in a test environment and combine these gradients with the ones computed in COMSOL to form a revised estimate of the total gradient. We then use the results of the simulations to inform our design decisions. For example, an earlier design for our 1~K and 100~mK thermal buses was much larger, nearly the size of our 4~K plate with only small cutouts. 
Via our thermal analysis, we were able to identify which regions were critical for conduction and which were not. We then removed material in the non-critical regions to arrive at our current design, which is significantly lighter and simpler to manufacture. \\
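The planned refinement, intersecting a stage's total load with the measured cooling curve to predict its operating temperature, amounts to a one-dimensional root find. A minimal sketch, in which the linearized cooling curve is a placeholder rather than measured data:

```python
def stage_temperature(load_w, cooling_power, t_min=2.0, t_max=20.0, tol=1e-4):
    """Find T where cooling_power(T) == load_w by bisection.

    Assumes cooling_power is monotonically increasing in T, as is the
    case for pulse tubes over their normal operating range.
    """
    lo, hi = t_min, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cooling_power(mid) < load_w:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder linearized cooling curve: 2.0 W at 4.2 K, slope 0.5 W/K.
cooler = lambda t: 2.0 + 0.5 * (t - 4.2)

t_op = stage_temperature(2.5, cooler)
print(f"{t_op:.2f} K")  # 2.0 + 0.5*(T - 4.2) = 2.5 at T = 5.20 K
```

The resulting stage temperature would then be fed back into the COMSOL boundary condition in place of the nominal operating temperature.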
\subsection{80~K Plate}
\label{sec:80therm}
The LATR will feature 80~K stage infrared (IR) filters that will be used to reduce the thermal loading on colder stages. These filters will consist of a double sided IR blocker and an alumina absorber.\cite{Zhu2018} We have determined that if the alumina filters are actually at 120~K, our sensitivity will be reduced by less than $0.1\%$ as compared to being at 80~K, which is an acceptable level.\cite{Hill2018} From our optical simulations, we expect the center of the alumina filters to be no more than a few Kelvin warmer than the edges.\cite{Zhu2018} Once we obtain these filters we will perform several tests, one of which will be to measure their thermal gradients. For now, we conservatively budget 20~K for the gradient between the center of the filter and its edge, so that our goal is to keep the gradient across the 80~K plate under 20~K. From our simulations, we find that a plate made of 6061-T6 aluminum would have a gradient of 27~K, while a plate made of 6063-T5 aluminum would have a gradient of 16~K. Since we need further overhead reserved for thermal joint resistance, we have decided to construct our 80~K plate from 6063-T5 aluminum. See Fig.~\ref{fig:80K Thermal}.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{Thermal_80K.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:80K Thermal}
Predicted thermal gradient on the 80~K stage for the 6063-T5 filter plate. The PT-90 pulse tubes will attach to the thermal straps located on the right of the figure, at the coldest points.}
\end{figure}
\subsection{40~K Stage}
The 40~K stage presents a unique challenge in that the sources of thermal loading are far from the cryogenic attachment points. Our current design calls for a 40~K structural ring, supported from 300~K and supporting 4~K, to which are also attached a 40~K radiation shield, a standoff, and the 40~K filter plate.\cite{Zhu2018} These shields and the filter plate absorb most of the power on the 40~K stage. Therefore, we would ideally make the shields thick to reduce thermal gradients along the 40~K stage to the filter plates and readout. However, to meet our mass budget, these shields must be rather thin, at 0.125~in. Our FEA shows that making shields of this size with our predicted loading out of 6061-T6 aluminum would result in unacceptably high gradients of 25~K. Therefore, we will construct the 40~K shield and extension tube from 6063-T5 aluminum, which offers nearly an order of magnitude higher thermal conductivity than 6061-T6 at 40~K\cite{Woodcraft2010} while still maintaining more than half the strength.\footnote{From Solidworks Simulation materials library} \setcounter{footnote}{0} We will still make the 40~K structural ring out of 6061-T6 aluminum. This combination of materials allows us to maintain a factor of safety in excess of 4 over the 40~K stage, while achieving a thermal gradient of only 7~K, ensuring that our filters and electronics operate as expected. See Fig.~\ref{fig:40K Thermal}. \\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{Thermal_40K.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:40K Thermal}
Predicted thermal gradient on the 40~K stage for the 6063-T5 extension tube, radiation shield, and filter plate. The pulse tubes are not shown but connect to the coldest points. Note that even with 6063-T5, the filters are nearly at 50~K at their edges.}
\end{figure}
\subsection{4~K and Optics Tubes}
For the 4~K stage, we quickly determined from structural FEA that the plate would have to be made of 6061-T6 aluminum. Our structural analysis also determined that the deflection of the optics tubes would be acceptable if they were made of either 6061-T6 or 1100-H14 aluminum, though the deflection is somewhat worse with 1100-H14. We therefore analyzed both materials in our thermal FEA. Our simulations show that the total gradient for a 6061-T6 aluminum plate coupled to 6061-T6 optics tubes would be 5.2~K, while for a 6061-T6 aluminum plate coupled to 1100-H14 aluminum optics tubes it would be 2.9~K (see Fig.~\ref{fig:4K Thermal}). Since 6061-T6 aluminum gives only marginal gains in tube sag, we will construct our tubes out of 1100-H14 aluminum. Since we rely on our flanges to hold the shape of the optics tube, we will construct the flanges from 6061-T6 aluminum and then glue the 1100-H14 aluminum tubes into the flanges. For these interfaces we are investigating the use of a cryogenic epoxy designed for good heat conduction.\footnote{Epotek T7110. See https://www.epotek.com} This epoxy is weaker than the epoxy we use for more structurally critical components, like the 40~K support tabs, but still strong enough for the optics tubes. When we receive our first optics tube, we plan to test both the strength of the epoxy and its thermal conductivity. This will allow us to precisely evaluate the thermal joint resistance along much of our 4~K thermal path, refining our thermal gradient estimate. If we find the thermal gradients are excessive, we will attach strips of 4N high purity aluminum to the outside of our optics tubes to reduce their gradients.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{Thermal_4K.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:4K Thermal}
Predicted thermal gradient on the 4~K stage for 1100-H14 optics tubes. The PT-420s are not shown but attach to the thermal straps, one of which can be seen at the middle, at the coldest point. Note that the gradients on the radiation shield are relatively unimportant--while they increase the radiative loading on the stages inside the shield, there is no other performance degradation.}
\end{figure}
\subsection{Thermal Bus}
To conduct heat away from our detectors to our dilution refrigerator (DR), we employ two thermal buses constructed of oxygen free high conductivity (OFHC) copper, one for the 1~K stage and one for the 100~mK stage. Minimizing the gradients on these stages is particularly important, as the sensitivity of our instrument depends strongly on the temperature of the Lyot stop, which is nominally at 1~K, and of the detector arrays, which are nominally at 100~mK. Since OFHC copper is heavy and expensive, we worked to minimize the size of the buses while retaining adequate thermal performance. We started with an oversized design, used FEA to identify areas that were not conducting large amounts of heat, and removed those areas to arrive at our current design. This design is also relatively easy to manufacture and work with: the arms and core can be machined separately and then welded together, while the spacing between the arms provides ample room to reach through and work on interfaces behind it. We also included rough designs for the heat straps in our model to achieve the most realistic results. Excluding thermal joint resistance, we found that the maximum gradient over the 1~K thermal bus and straps is 0.6~mK and the maximum gradient over the 100~mK bus and straps is 4.4~mK (Fig.~\ref{fig:1K 100mK Thermal}). While the loading on the 1~K stage is higher than on the 100~mK stage, the thermal conductivity of OFHC copper is also much higher there, resulting in the 1~K gradients being smaller than the 100~mK gradients. While we have not yet received our DR, we do have cooling curves from a similar but less powerful DR from the same manufacturer, which we used as an estimate of the 1~K and 100~mK base temperatures. 
We took the base temperatures to be 1~K and 80~mK, respectively, so that the warmest temperature predicted on the 1~K stage is 1.0006~K and the warmest on the 100~mK stage is 84.4~mK. We do not yet have a model for the detector arrays, and thus we do not yet have a model of their gradients. However, the thermal loading on the detectors is much lower than the other loads on the 100~mK stage, so the gradient across the detector arrays should be small compared to the gradient across the bus to the array. In general, from our experience on previous experiments,\cite{2016ApJS..227...21T} we expect thermal joint resistance to dominate at these temperatures, so the simulations above constitute a lower bound on the expected thermal gradients at these stages. We therefore plan to weld each braid to the thermal bus, and to weld each braid running between our detector arrays and the 100~mK thermal bus to the thermal coupling coming from the detector array. We will also measure these thermal joint gradients during cryogenic acceptance. \\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width = .75\linewidth]{Thermal_1K_100mK.png}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:1K 100mK Thermal}
Predicted gradient on the 1~K and 100~mK thermal buses. The 100~mK bus is on top, with the 1~K bus below. The thermal bus distributes cooling power to the detector arrays and optical elements, so minimizing the gradients across the bus is critical. Our current best simulations show a gradient of about 2~mK across the 1~K stage and 8~mK across the 100~mK stage.}
\end{figure}
\section{SUMMARY}
Computer assisted design and FEA simulations have been key for the design of the Simons Observatory LATR. As we iterated through designs of various components, FEA provided critical feedback on the performance of those components. This feedback allowed us to evaluate the performance of our design, determine what could be improved, and design the next iteration. Through this process, we were able to converge on our current and final design for the LATR. Lastly, FEA provided the final validation of each component, ensuring for us that they would meet the performance specifications that we set. The result of this process is a design which is validated and slated for manufacture in the near future, and which can inform the design of a future CMB-S4 instrument.\cite{Abitbol2017}\cite{Abazajian2016}
% cond-mat/9806219
\section{Semiclassical diagrammatic approach}
We begin by presenting some more details concerning our semiclassical
evaluation of the disorder correlation function, $K^d$, and
corresponding susceptibility $\left<\chi^d (H)\right>$.
We consider non-interacting electrons in a
weak, perpendicular magnetic field.
In terms of retarded and advanced single particle Green functions,
${\cal G}^{+(-)}({\bf r}_1 , {\bf r}_2 ; \varepsilon ;H)$, $K^d$
may be written as
$K^d(\varepsilon_1,\varepsilon_2 ;H) \approx
\left( \Delta^2/2\pi^2\right) {\cal R} \left<\!\left<
{\rm tr} \,{\cal G}^{+}(\varepsilon_1 ;H) {\rm tr} \,
{\cal G}^{-}(\varepsilon_2 ;H)
\right>\!\right>_d$,
where $\left<\!\left< \ldots \right>\!\right>_d$ implies the inclusion
of connected diagrams only. Using a diagrammatic approach
introduced by Altland and Gefen \cite{A+G:95}
the field sensitive part of $K^d$ can be expressed as a sum including
Cooperon type diagrams ${\cal S}_{n}^{(C)}$ defined by
\begin{equation}
{\cal S}_{n}^{(C)}(\omega ;H) = {\rm Tr} \left[\zeta^{(C)}\right]^n =
\int \prod_{j=1}^{n} d^dr_j\prod_{m=1}^{n}
\zeta^{(C)} ({\bf r}_m,{\bf r}_{m+1};\omega ;H) \quad ; \quad
{\bf r}_{n+1} \equiv {\bf r}_1 \; .
\label{sn}
\end{equation}
Here
$\zeta^{(C)} ({\bf r}_1 , {\bf r}_2 ;\omega ;H)
= \left( 1/2\pi\bar{\nu}\tau \right)
G^{+}({\bf r}_1 , {\bf r}_2 ; \varepsilon_1;H )
G^{-}({\bf r}_1 , {\bf r}_2 ; \varepsilon_2;H )$,
$G^{+(-)} = \left< {\cal G}^{+(-)} \right>_d$ is the disorder
averaged single particle Green function,
$\omega = \varepsilon_1 - \varepsilon_2$, and $\tau = \ell/v_F$.
Semiclassically, $G^+({\bf r}_1, {\bf r}_2)$ can be expressed as a
sum over classical trajectories $t$ between ${\bf r}_1$ and ${\bf r}_2$
\cite{Ric:96},
\begin{equation}
G^{+}({\bf r}_1 , {\bf r}_2) \simeq
\sum_t D_t ({\bf r}_1 , {\bf r}_2)
\exp{\left[\frac{i}{\hbar} S_t({\bf r}_1,{\bf r}_2)
-\frac{L_t({\bf r}_1 , {\bf r}_2)}{2\ell}\right]} \; .
\end{equation}
The prefactor $D_t$ includes the classical phase space density,
$S_t$ stands for the classical action along an orbit $t$
(in the absence of disorder)
including the Maslov index, and
$L_t$ is the orbit length.
$\zeta^{(C)}({\bf r}_1,{\bf r}_2;\omega ;H)$ is then given
in terms of pairs of classical paths which explicitly include
the effect of boundary scattering.
However most pairs (of different paths) produce oscillating
contributions which
we assume to vanish after energy or size averaging\footnote{
In a ballistic system additional
oscillatory terms will however remain upon pure disorder average
for fixed size \cite{A+G:95,M+R:98}.}.
The main contribution to the field sensitive part of
$\left<\, K^d \,\right>_L$ arises from diagonal terms
(otherwise known as the Cooperon channel) obtained by pairing paths with
their time reverse.
Assuming that the magnetic field affects
the phase of the particles but not their trajectories we can write
$\zeta^{(C)} ({\bf r}_1, {\bf r}_2 ; \omega ;H)
= \sum_{t:{{\bf r}_1} \rightarrow {{\bf r}_2}}
\tilde\zeta_t^{(C)} ({\bf r}_1, {\bf r}_2 ; \omega ;H)$
where
\begin{equation}
\tilde\zeta_t^{(C)} ({\bf r}_1 , {\bf r}_2 ; \omega ;H)
\simeq
\frac{v_F |D_t|^2}{2\pi\bar{\nu} \ell} \exp \left[
- \frac{L_t}{L_\phi} - \frac{L_t}{\ell} + i\omega T_t
+ i \frac{4\pi}{\varphi_0}
\int_{{\bf r}_1}^{{\bf r}_2} {\bf A.dr} \right] \; .
\label{zsc}
\end{equation}
Here, $T_t$ is the period of the trajectory, $\bf A$ is the vector potential,
$\varphi_0 = hc/e$, and the level broadening was introduced via
$\omega \rightarrow \omega + i\gamma$.
Eq.~(\ref{zsc}) depends, besides $\ell$, only on the system
without disorder and holds for both integrable and chaotic geometries.
The disorder induced contribution to the average susceptibility is given by
($\varphi = H L^2/ \varphi_0$):
\begin{equation}
\frac{\left< \chi^d (\varphi) \right>}{\left| \chi_L \right|} \approx
- \frac{6}{\pi^2} \frac{\partial^2}{\partial \varphi^2} \
\sum_{n=1}^{\infty} \frac{1}{n} {\cal S}_{n}^{(C)} (0 ;\varphi) \; ,
\label{stos}
\end{equation}
where the bulk Landau susceptibility is
$\chi_L = -e^2/24\pi mc^2$ for spinless electrons.
In Eq.~(\ref{stos}) the ${\cal S}_n^{(C)}$ are now assumed to contain
diagonal terms only and their contributions
(Eq.~(\ref{sn}))
can be calculated by diagonalising $\zeta^{(C)}$ which in general cannot
be done analytically. However we can use the fact that all the
variations of $\zeta^{(C)}$ occur on classical lengthscales; rapid
oscillations on the scale of $\lambda_F$ cancel out.
It is thus possible to discretise the ``classical'' operator
$\zeta^{(C)}$ on a lattice in
space with grid size greater than $\lambda_F$. By summing over
trajectories between lattice cells one can compute
$\zeta^{(C)}$, and thereby $\left< \chi^d (H) \right>$,
efficiently by numerical means\cite{M+R:98}\footnote{
A similar calculation has been applied to a different
``classical'' operator, describing interaction effects in
ballistic quantum dots, in Ref.~\cite{Ull:97}.}.
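Once $\zeta^{(C)}$ is discretised to a matrix, the sum $\sum_n {\cal S}_n^{(C)}/n$ entering Eq.~(\ref{stos}) can be resummed through the eigenvalues, since $\sum_{n\ge 1} {\rm Tr}\,[\zeta^{(C)}]^n / n = -{\rm Tr}\ln(1-\zeta^{(C)})$ whenever all eigenvalues lie inside the unit disk. A schematic numerical sketch, in which a random symmetric matrix stands in for the discretised operator (the real kernel comes from the path sum of Eq.~(\ref{zsc}), and its damping by $e^{-L_t/\ell}$ keeps the spectrum inside the unit disk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the discretised Cooperon operator zeta^(C): a real symmetric
# matrix scaled so that all eigenvalues lie inside the unit disk.
n_cells = 200
a = rng.normal(size=(n_cells, n_cells))
zeta = 0.4 * (a + a.T) / np.sqrt(2 * n_cells)

eigs = np.linalg.eigvalsh(zeta)

# Truncated direct sum of Tr[zeta^n]/n over n ...
direct = sum(np.sum(eigs**n) / n for n in range(1, 200))
# ... versus the closed form -Tr ln(1 - zeta).
closed = -np.sum(np.log(1.0 - eigs))

print(np.isclose(direct, closed))
```

In practice the diagonalisation is performed once per field value, after which the field derivative in Eq.~(\ref{stos}) is taken numerically.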
The propagator $\zeta^{(C)}$ (above Eq.~(\ref{zsc})) is made up
of a summation over all diagonal pairs of paths (including boundary
scattering) between any two given impurities situated at
${\bf r}_1$ and ${\bf r}_2$.
On taking the trace over $n$ propagators $\zeta^{(C)}$, one sees
that the field sensitive part of $S_n$, Eq.~(\ref{sn}),
consists of a summation over flux-enclosing closed pairs of paths
(in position space) involving $n$ impurities and an arbitrary number of
boundary scattering events.
However this summation does not include closed pairs of paths
which follow periodic
orbits of the corresponding clean system.
Such paths involve zero momentum transfer between
the Green functions at the impurity positions; they actually represent
disconnected diagrams which are included in $\left<\chi^L (H)\right>$
and must not be counted again in the determination of
$\left<\chi^d (H)\right>$.
It is the presence of these periodic orbits in the determination of
$\left<\chi^L (H)\right>$
\footnote{A similar sensitivity
with respect to the system geometry occurs in interacting
ballistic quantum dots due to the presence of off-diagonal periodic orbit
terms \cite{Ull:97}.}
that leads to strong sensitivity
with respect to the system geometry \cite{Ull:95,Ric:96a} (see below).
In the following we apply the above formalism to the case of an ensemble
of disordered square billiards.
Gefen, Braun, and Montambaux (GBM) \cite{Gef:94} considered
the contribution of trajectories longer than $\ell$ to
$\left<\chi^d (H)\right>$ in an approximate way,
while Richter, Ullmo, and Jalabert (RUJ) \cite{Ric:96}
calculated $\left<\chi^L (H)\right>$ for a square by assuming that
the disorder perturbs the phase, but not the trajectory, of
semiclassical paths of the corresponding clean geometry.
We first compute the ${\cal S}_{n}^{(C)}$ (Eq.~(\ref{sn})) for
the square geometry by employing the extended zone scheme\cite{Ric:96}
to write $\zeta^{(C)} ({\bf r}_1 , {\bf r}_2 ; \omega ;H)$
as a sum of propagators along straight line paths.
We then perform a complete calculation of
$\left<\chi^d (H)\right>$, Eq.~(\ref{stos}),
in the elastic and inelastic regimes and
compare the results with those by GBM and RUJ.
\begin{figure}
\centerline{\epsfxsize=0.5\hsize \epsffile{fig1.eps}
\epsfxsize=0.52\hsize \epsffile{fig2.eps}}
\label{elastic}
\caption{Disorder-induced average susceptibility $\left<\chi^d (0)\right>$ for
a square geometry in the elastic regime ($L < \ell < L_{\phi}$)
as a function of the elastic mean free path $\ell$ for $k_FL=60$
and two strengths of inelastic scattering,
$\gamma /\Delta = 1$ (lower), which corresponds to
$L_{\phi} /L \approx 9.5$, and $\gamma /\Delta = 0.392$ (upper).
The dashed horizontal line indicates the result by GBM
for $\gamma /\Delta = 1$.
The inset shows $\left<\chi^d (0)\right>$ as a function of $k_FL$
for $\gamma /\Delta = 1$.
From the top, the five curves are for values of
$\ell /L = 2$, $4$, $5$, $1$ and $10$.
}
\caption{$\left<\chi^d (0)\right>$ in
the inelastic regime ($L, L_{\phi} < \ell$)
as a function of $\ell$ for $k_FL=60$ and $\gamma /\Delta = 10$
which corresponds to $L_{\phi} /L \approx 0.95$.
Circles are our numerical results and the triangles are
numerical results including only the contribution of $S_1$.
The solid and dashed lines are both fits (see main text).
The inset shows $\left<\chi^d (0)\right>$ as a function
of $k_FL$ for $\gamma /\Delta = 10$. From the top, the curves are for
values of $\ell /L = 2$, $4$ and $8$.
The symbols correspond to our numerical results and the solid lines to a fit.
}
\end{figure}
\section{Elastic regime: $L < \ell < L_\phi$}
Fig.~1 shows $\left<\chi^d (0)\right>$ as a function
of $\ell$ for a typical experimental value of $k_FL=60$.
The lower curve is for $\gamma /\Delta = 1$
(i.e.\ $L_{\phi} /L \approx 9.5$ at $k_FL=60$)
and the upper is for $\gamma /\Delta = 0.392$ \cite{note2}.
Note that in the diffusive regime, $\ell <L$, there is a linear increase
with $\ell$, in agreement with Ref.~\cite{Oh:91}.
For $L < \ell < L_{\phi}$, we find a weak dependence of
$\left<\chi^d (0)\right>$ on $\ell$. Our result
is on the whole close to the prediction by GBM
\cite{Gef:94} who found a paramagnetic $\ell$-independent contribution,
$ \left< \chi^d (0) \right>/\left| \chi_L \right| \approx
0.23 \ k_FL (\Delta/\gamma) $,
shown as the dashed horizontal line in Fig.~1 for $\gamma /\Delta = 1$.
As $\ell$ increases further, $\left<\chi^d (0)\right>$ decreases;
we discuss this behaviour in more detail later when considering the
inelastic regime.
The inset of Fig.~1 shows $\left<\chi^d (0)\right>$ as a function of
$k_FL$ for $\gamma /\Delta = 1$.
From the top, the five curves are for values of
$\ell /L = 2$, $4$, $5$, $1$ and $10$.
For all disorder strengths, $\left<\chi^d (0)\right>$
is paramagnetic and it increases linearly with $k_FL$.
For $L < \ell < L_{\phi}$ the gradient of the curves
is approximately independent of $\ell$ and we find it to be
$\approx 0.18 \Delta /\gamma$.
However, there is a $k_FL$-independent offset to the curves
which is $\ell$-dependent and not described by GBM.
\section{Inelastic regime: $L, L_\phi < \ell$}
Fig.~2 shows $\left<\chi^d (0)\right>$ as a function
of $\ell$ for $k_FL=60$ and $\gamma /\Delta = 10$
(i.e.\ $L_{\phi} /L \approx 0.95$).
Circular points correspond to our full numerical results, while
triangular points represent the contribution of $S_1$ only
(minus the disconnected part) in Eq.~(\ref{stos}).
The solid line is the equation
$\left<\chi^d (0)\right> = (a_0L/\ell)
\exp\left( -a_1L/\ell\right)$
where $a_0$ and $a_1$ are fitting parameters.
The dashed line displays an equation of the same type
but with $a_1 = 2\sqrt{2}$ and $a_0$ as the only fitting parameter.
For $\ell \gg L$ the susceptibility is dominated by the contribution
with the lowest number of impurity scatterings, $S_1$.
As $\ell /L$ is reduced, progressively more
terms in the summation of Eq.~(\ref{stos}) become relevant,
and there is good agreement with the solid line fit for
$\ell \geq L$.
The inset of Fig.~2 shows $\left<\chi^d (0)\right>$ as a function of
$k_FL$ for $\gamma /\Delta = 10$. From the top, the three curves are
for values of $\ell /L = 2$, $4$ and $8$.
The symbols correspond to our results and the solid lines to
$\left<\chi^d (0)\right> = b_0\exp
\left[ -b_1(\gamma /\Delta )/k_FL \right]$
where $b_0$ and $b_1$ are fitting parameters.
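Both fits are simple two-parameter least squares. As a sketch (with synthetic data standing in for our numerical results), the form $\left<\chi^d (0)\right> = a_0(L/\ell)\exp(-a_1 L/\ell)$ log-linearises: ordinary least squares on $\ln(\chi\,\ell/L)$ versus $L/\ell$ recovers both parameters.

```python
import math

def fit_damped_linear(x, chi):
    """Fit chi = a0 * x * exp(-a1 * x) by ordinary least squares on
    ln(chi/x) = ln(a0) - a1*x.  Returns (a0, a1)."""
    ys = [math.log(c / xi) for xi, c in zip(x, chi)]
    n = len(x)
    mx, my = sum(x) / n, sum(ys) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ys))
             / sum((xi - mx) ** 2 for xi in x))
    return math.exp(my - slope * mx), -slope

# Synthetic data with a0 = 3.0 and a1 = 2*sqrt(2); x plays the role of L/ell.
a0_true, a1_true = 3.0, 2 * math.sqrt(2)
xs = [0.1 * k for k in range(1, 11)]
data = [a0_true * x * math.exp(-a1_true * x) for x in xs]
a0_fit, a1_fit = fit_damped_linear(xs, data)
```

The fixed value $a_1 = 2\sqrt{2}$ used for the dashed line corresponds to the length $L_t/L$ of the shortest flux-enclosing orbit of the square (see the comparison section below).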
\begin{figure}
\centerline{\epsfxsize=0.48\hsize \epsffile{fig3.eps}
\epsfxsize=0.48\hsize \epsffile{fig4.eps}}
\label{compare}
\caption{Comparison of $\left< \chi^d (0) \right>$
and $\left< \chi^L (0) \right>$ for the square.
The solid line shows our numerical results for
$\left<\chi^d (0)\right>$ and
the dashed line is an analytical expression for
$\left<\chi^L (0)\right>$ (see main text) as a function of
$L_\phi/ L$ for $k_FL=60$ and $\ell /L =2$.
Inset: value of $L_\phi /L$ at which the two
contributions are equal as a function of $\ell / L$.
The solid line is the analytical estimate Eq.~(\protect\ref{cgam})
and the circles are
obtained by comparing our numerical results for
$\left<\chi^d (0)\right>$ with
the analytic expression for $\left<\chi^L (0)\right>$.}
\caption{Comparison of the (normalized)
semiclassical estimates for $\left< \chi^d (0) \right>$
(solid line) and $\left< \chi^L (0) \right>$ (dashed)
for a generic chaotic geometry for $k_F L = 60$ (see main text).
Inset: the (straight) line shows the values in the $(\ell,L_\phi)$-plane
where both contributions are equal.
}
\end{figure}
\section{Comparison with the contribution of clean correlations}
We compare the magnitude of $\left<\chi^d (0)\right>$ with
that of $\left<\chi^L (0)\right>$ for squares.
It has been shown \cite{Ull:95,Ric:96a} that the low field
susceptibility of an ensemble of {\em clean} squares is dominated by the
shortest flux enclosing periodic orbits of length $L_t = 2 \sqrt{2}L$
and their repetitions over a
broad range of temperature (and thus inelastic scattering strengths).
For the ballistic white noise case considered here,
the effect of disorder averaging
on the susceptibility was described by an additional damping
$\exp ( - L_t /\ell )$ of the response of the clean system \cite{Ric:96}.
This result corresponds to $\left<\chi^L (0)\right>$ including
the disorder damping $\exp ( - L_t /2\ell )$ of the
single particle Green functions.
We use the results of
RUJ \cite{Ric:96,Ull:95} at zero temperature and introduce
the level broadening $\gamma$ in the same way as for the disorder
correlations above \cite{note1}.
It is then possible to sum the
contribution of all repetitions of the fundamental orbit
explicitly, which gives
$
\left< \chi^L (0) \right>/\left| \chi_L \right| \simeq
(\sqrt{2}/5\pi) k_FL/\sinh^2[\sqrt{2}(L/\ell + L/L_\phi)]
$.
Fig.~3 shows this expression for $\left<\chi^L (0)\right>$ (dashed line) and
our numerical results for $\left<\chi^d (0)\right>$ (solid line)
as a function of $L_\phi / L $ for $k_FL=60$ and $\ell /L =2$.
Although $\left<\chi^d (0)\right> < \left<\chi^L (0)\right>$
for small $L_\phi / L$ and vice versa for
large $L_\phi / L$, it is clear that both contributions are
relevant over a broad range of $L_\phi / L$.
We make an estimate for the value of $L_\phi / L$ at
which the contributions of $\left<\chi^L (0)\right>$
and $\left<\chi^d (0)\right>$ are equal by comparing the above
analytic approximation with that given by GBM.
We find for $\ell >L$ that $\left<\chi^d (H)\right>$ is larger
than $\left<\chi^L (H)\right>$ for
$L_\phi$ greater than a crossover value, $L^{*}_\phi$, given by
\begin{equation}
\frac{L^{*}_\phi}{L} \sim \frac{(k_F L)^2}{2\pi^2}\left[
\sqrt{2}k_F L \left( \frac{L}{\ell} \right)^2
+ 8\pi^2\left( \frac{L}{\ell} \right)^3\right]^{-1} \; .
\label{cgam}
\end{equation}
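Eq.~(\ref{cgam}) is a direct formula; the short Python transcription below evaluates it for the parameters used in the figures (nothing beyond the equation itself is assumed).

```python
import math

def crossover_Lphi_over_L(kFL, ell_over_L):
    """Crossover value L*_phi / L of Eq. (cgam), above which the
    disorder-induced contribution dominates (direct transcription)."""
    r = 1.0 / ell_over_L                       # L / ell
    denom = math.sqrt(2) * kFL * r**2 + 8 * math.pi**2 * r**3
    return kFL**2 / (2 * math.pi**2) / denom

# For k_F L = 60 the crossover grows rapidly with the mean free path:
values = {e: crossover_Lphi_over_L(60, e) for e in (1, 2, 4, 8)}
```

For $k_FL=60$ and $\ell/L = 2$ this gives $L^{*}_\phi/L \approx 5.9$, rising quickly as $\ell/L$ increases.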
The inset of Fig.~3 shows this estimate (solid line)
compared to points (circles) obtained by comparing our numerical
results for $\left<\chi^d (0)\right>$ with
the analytic expression for $\left<\chi^L (0)\right>$.
The experiment\cite{Lev:93} on the orbital magnetism of ensembles of
squares had estimated values for the elastic mean free path
of $\ell /L \sim 1-2$, for the phase-coherence length
of $\sim (3-10) L $ and for the thermal cutoff length
of $L_T /L \sim 2$.
Hence, the lengthscale $L_\phi$ (Eq.~(\ref{Lphi})) is determined by
the shorter length $L_T$. Fig.~3 shows that for these experimental
parameters (and for white noise disorder)
both the disorder and size-induced
correlations are relevant; however, the latter contribution is dominant.
The measured value of the susceptibility at {\em low}
temperature was $\chi(0) \sim 100 |\chi_L|$,
with an uncertainty of about a factor of four.
After including a spin factor of 2,
the combined contributions
$\langle\chi^d\rangle$ and $\langle\chi^L\rangle$
calculated
above, together with an interaction contribution of the same order
\cite{Ull:97}, are in broad agreement with the experimental result.
We note, however, that a theoretical explanation of
the temperature dependence of the measured susceptibility
is still lacking.
Experimental ballistic structures such as those in Ref.~\cite{Lev:93}
are usually characterised by smooth disorder potentials.
The effect of smooth disorder on $\langle \chi^L \rangle$ has been analysed
in Ref.~\cite{Ric:96} showing that the reduction of the
clean contribution is less strong than for white noise disorder
and no longer exponential.
Smooth disorder effects can be incorporated
into the present calculation by introducing an angle-dependent
cross section for the impurity scattering between two
trajectory segments.
\section{Chaotic geometries}
For systems with a generic chaotic, clean counterpart we obtain
an analytical estimate for $\left<\chi^d (0)\right>$ in the elastic regime
after transforming the
sum over densities $|D_t|^2$ in Eq.~(\ref{zsc})
into probabilities $P({\bf r},{\bf r}';t|A)$ to
propagate classically from ${\bf r}$ to ${\bf r}'$ at time $t$ accumulating
an ``area'' $A$ \cite{M+R:98}. Assuming a Gaussian ``area'' distribution
with a variance $\sigma$, which is taken to be
$\ell$ independent, we find
\begin{equation}
{\cal S}_{n}^{(C)}(\omega ;H) \approx \left\{1 +
\frac{8\pi^2 H^2 \ell \sigma}{\varphi_{0}^2} + (\gamma- i\omega)\tau
\right\}^{-n} \; .
\label{snch}
\end{equation}
Summation of the ${\cal S}_{n}^{(C)}$, Eq.~(\ref{stos}), leads to
\begin{equation}
\frac{\left<\chi^d(0) \right>}{|\chi_L|}
\simeq \frac{96 \sigma L_\phi^2}{L^4(L_\phi+\ell)} \; .
\end{equation}
$\left<\chi^d (0)\right>$ is shown as the full line in
Fig.~4 which is rather similar
to the corresponding curve in Fig.~3 for the square geometry.
A corresponding approximation for
$\langle \chi^L(0) \rangle$ \cite{Ric:96a} is shown
as the dashed line in Fig.~4.
Both can be shown to add up to an $\ell$-independent response
$\left<\chi(0)\right>/|\chi_L| \simeq 96 \sigma L_\phi /L^4$
for generic chaotic geometries.
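Since the two contributions are stated to add up to an $\ell$-independent total, the quoted forms fix the clean-correlation term by subtraction; as a consistency check (an algebraic inference from the two expressions above, not a formula quoted from Ref.~\cite{Ric:96a}),
\begin{equation}
\frac{\left<\chi^L(0) \right>}{|\chi_L|}
\simeq \frac{96 \sigma L_\phi \ell}{L^4(L_\phi+\ell)} \; ,
\qquad
\frac{\left<\chi^d(0) \right>+\left<\chi^L(0) \right>}{|\chi_L|}
\simeq \frac{96 \sigma L_\phi}{L^4} \; .
\end{equation}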
\section{Conclusion}
Disorder-induced spectral correlations and
their effect on the magnetic
susceptibility in the non-diffusive regime $\ell > L$ were considered.
We focused on the square billiard, whose corresponding clean
geometry is integrable, and showed that there are two distinct regimes of
behaviour depending on the relative magnitudes of $\ell$ and $L_{\phi}$.
This approach enabled us to study the complete crossover from diffusive
to clean systems for arbitrary values of $\ell$ and $L_{\phi}$
(smaller than $v_F t_H$).
Note that it may be possible to calculate the susceptibility for
values of $L_{\phi}$ and $\ell$
greater than $v_F t_H$ using a non-perturbative
approach such as the ballistic $\sigma$ model \cite{M+K:95}.
\stars
We are grateful to Y.~Gefen, S.~Kettemann, D.~E.~Khmelnitskii,
M.~Leadbeater, I.~V.~Lerner and P.~Walker for useful discussions.
We thank the Isaac Newton Institute for Mathematical Sciences, Cambridge,
where part of this research was performed. |
hep-th/9806171 | \def\section{\@startsection {section}{1}{\z@}{3.ex plus 1ex minus
 .2ex}{2.ex plus .2ex}{\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{2.75ex plus 1ex
minus
 .2ex}{1.5ex plus .2ex}{\bf}}
\def\subsect#1{\par\penalty1000{\noindent \bf #1}\par\penalty500}
\def |
nucl-th/9806100 | \section*{\bf 1. Introduction}
The newly established electron and photon facilities have made
it possible to investigate the mechanism of vector meson
photoproductions on nucleons with much improved experimental accuracy.
This has been motivated in part by
the puzzle that the NRQM ~\cite{NRQM,capstick} predicts
a much richer resonance spectrum than has been observed
in $\pi N\to \pi N$ scattering experiments.
Quark model studies have suggested that those resonances
missing in the $\pi N$ channel may couple strongly to,
for example, the $\omega N$ and $\rho N$ channels.
Experiments have been performed at ELSA~\cite{saphir} and will be done
at TJNAF in the near future~\cite{cebaf}.
Therefore, a theory on the reaction mechanism
that highlights the dynamical role of
s-channel resonances is crucial in order to establish the ``missing
resonances'' in the forthcoming experimental data for vector meson
photoproductions, in particular $\omega$ and $\rho$.
The experimental and theoretical studies of vector meson
photoproduction have a long history, dating back to the first
experiment carried out at the Cambridge Electron Accelerator
in 1964. Experimental data exist for vector meson
photoproduction in the threshold region, where the s-channel
resonances are expected to play an important role; for instance,
the data from Refs.~\cite{saphir,ABHMC,benz,hilpert} for $\omega$ and
$\rho$ photoproduction ($E^{thres}_\gamma\simeq 1.11$ GeV),
and the data from Ref.~\cite{phidata} for $\phi$
photoproduction ($E^{thres}_\gamma\simeq 1.57$ GeV).
Historically, these studies have concentrated on the diffractive
behavior in the small $t$ region for the neutral meson ($\omega$,
$\rho^0$ and $\phi$) productions, in which it has been shown that the
Vector Meson Dominance Model (VMD)~\cite{Bauer} gives a good description
in the low energy region, while the Pomeron exchange becomes more
important as the energy increases\cite{pomeron}. The focus of this
paper is not on the large diffractive behavior in the small $t$ region,
but rather on the non-diffractive s- and u- channel contributions in
the large $t$ region in which the t-channel Pomeron
exchange becomes less significant. Furthermore, we will attempt to
provide a unified framework for both neutral and charged vector meson
productions. Since the t-channel exchanges responsible for the large
diffractive behavior in the small $t$ region, such as the Pomeron exchange,
do not contribute to the charged meson productions, the non-diffractive
s- and u-channel resonances become more dominant; they thus provide a
crucial test to any model that concentrates on the role of the s- and
u-channel resonances in the vector meson photoproductions. The quark model
approach provides an ideal framework to investigate the dynamical role of
the s- and u-channel resonances in the vector meson photoproductions. The
studies in the pseudoscalar photoproductions have shown\cite{pseudoscalar}
that every s- and u-channel resonance, particularly those high partial
wave resonances such as $F_{15}$ and $F_{37}$, can be taken into account,
and this has been proven to be very difficult for the traditional approach
at the hadronic level. Moreover, it introduces the quark degrees of freedom
directly into the reaction mechanism, and thus gives a very good description
of the pseudoscalar meson productions with far fewer parameters than the
models at the hadronic level. It is therefore natural to extend this
approach to the vector meson photoproductions in the resonance region.
A major difference for the vector meson production from the
pseudoscalar meson case in the quark model is that the
interaction between the vector mesons and the quarks inside
the baryon is largely unknown. Although phenomenological models
have been developed to evaluate baryon resonance
decaying into a nucleon and a vector meson,
such as the quark pair creation model~\cite{yaouanc}
or the $^3P_0$ model,
these approaches are unsuitable for the description of
vector meson photoproductions. This is due to the fact that they
only yield transition amplitudes for s-channel resonances, but
contain no information on how to derive the non-resonant terms
in the u- and t-channels. Therefore, we choose an effective
Lagrangian here as our starting point that satisfies the fundamental
symmetries and determines the transition amplitudes not only
in the s-channel but also in the u- and t- channels.
Even though the effective Lagrangians are different from each other for
pseudoscalar and vector meson photoproductions,
the implementation follows the same guidelines.
The transition amplitudes for each resonance in the s-channel
below 2 GeV will be included explicitly,
while the resonances above 2 GeV for a given quantum number $n$
in the harmonic oscillator basis of the quark model
are treated as degenerate, so that their transition amplitudes can be written
in a compact form. Similarly, the excited resonances in the u-channel are
assumed to be degenerate as well. Only the mass splitting between the
spin 1/2 and spin 3/2 resonances with $n=0$ in the harmonic oscillator basis,
such as the splitting between nucleon and $\Delta$ resonance, is
found to be significant, thus,
the transition amplitudes for the spin 3/2 resonance with $n=0$ in
the u-channel will be included separately.
The effective Lagrangian employed here generates not only the s- and u-channel
exchanges but also a t-channel term containing the vector meson exchange.
For charged vector mesons, gauge invariance also mandates a seagull term.
Although, in principle, all the contributions from resonances have
been included in the effective Lagrangian, we do not expect such an
approach to reproduce the data in the small
$t$ region for the neutral vector meson photoproductions,
since additional t-channel contributions, such as the Pomeron exchange,
are responsible for the strong diffractive behavior
in the small $t$ region. As has been shown
in Ref.\cite{collins} and also in Ref.\cite{Freund} and Ref.\cite{Harari}
about the diffraction duality, there can be a non-resonant
imaginary background amplitude in neutral vector meson,
such as $\omega$ and $\rho^0$ photoproductions,
but not in the charged vector meson $\rho^\pm$ photoproductions.
Therefore, the large difference of the cross section
between the neutral and charged $\rho$ meson photoproduction,
observed in the direct channel resonance region,
is due to such a background amplitude, which should arise
from the large contribution of the Pomeron singularity in the
neutral photoproduction from high energies down to the threshold.
This has been one of our concerns in the numerical
investigations. Therefore, we add a t-channel $\pi^0$ exchange to the
amplitude for $\omega$ photoproduction and a $\sigma$ exchange term to
the amplitude for $\rho^0$ photoproduction
as suggested by Friman and Soyeur~\cite{FrimanSoyeur},
who showed that these two terms play dominant roles in $\omega$ and $\rho^0$
productions, respectively, over other meson exchange processes near the
threshold.
With the above considerations, we apply our model to the five
isospin channels, $\gamma p\to \omega p$, $\gamma p\to \rho^0 p$,
$\gamma n\to \rho^- p$, $\gamma p\to \rho^+ n$ and
$\gamma p\to \phi p$.
With the same set of parameters introduced in our model, we
obtained an overall agreement with the differential cross sections
in the large $t$ region for the first four channels,
while
with relatively smaller parameters in the $\phi$ photoproduction,
we predict the behavior of the differential cross section in the
large $t$ region.
With the additional t-channel $\pi^0$ and $\sigma$ exchanges
included in the $\omega$ and $\rho^0$ photoproduction respectively,
we obtain an overall agreement with the available data from
small $t$ to large $t$ region.
The overall agreement between the theoretical predictions and the data
available not only for the neutral meson $\omega$ and $\rho^0$ but also
for the charged meson $\rho^\pm$ productions in which the s- and u-channel
contributions become more dominant is remarkable. It is even more
remarkable that both $\omega$ and $\rho$ productions can be described by
the same set of parameters, which is by no means trivial.
It suggests that the
quark model approach provides a very good framework to investigate the
resonance structure in the vector meson photoproductions. Our results also
show that polarization observables are crucial in determining the resonance
structure, which has been shown to be the case in the pseudoscalar meson
photoproductions.
In the reaction $\gamma p\to \phi p$,
since the threshold energy of the $\phi$ production is above the resonance
region, the primary focus here is the $\phi NN$ coupling constant. Because
the production of $s\overline{s}$ from the nucleons should be suppressed
under the Okubo-Zweig-Iizuka (OZI) rule, the $\phi NN$ coupling constant
is expected to be smaller than the $\omega NN$ or $\rho NN$ couplings.
The recent experiment on the
$p\overline{p} \to \phi X$ ($X=\pi, \eta, \omega, \rho, \pi\pi, \gamma$)
\cite{OZI} has shown a significant violation of the OZI rule, which
cannot be explained by diffractive processes such as Pomeron
exchange. Thus, large contributions from the non-diffractive processes
are expected to contribute to the $\phi$ photoproduction near the threshold.
This has been the subject of many studies, such as the recently developed
quantum hadrodynamical (QHD) model approach\cite{williams}.
In our framework, it could be
achieved by fitting the s- and u- channel contributions
to the differential cross sections of $\phi$ productions in the large $t$
region\cite{phidata}, where the contributions from the Pomeron exchange
become less significant. The initial results show that the $\phi NN$
couplings in the quark model are small but significant, which is consistent
with those in the QHD approach.
In Section 2, we briefly discuss some of the observables
used in our approach, which have been developed extensively in Ref.
\cite{tabakin}. The framework for the vector meson
photoproductions with an effective Lagrangian for the quark-meson
interaction is presented in Section 3. In Section 4, we show our
numerical studies of the $\omega$, $\rho$ and $\phi$
photoproductions in the five isospin channels.
Finally, conclusions will be presented in Section 5.
\section*{\bf 2. Observables and Helicity Amplitudes }
Before presenting our quark model approach we
introduce some general features
of vector meson photoproduction on the nucleon.
The basic amplitude $\cal F$ for $\gamma + N \to V+N^\prime$ is defined as
\begin{equation}
{\cal F}=\langle {\bf q}\lambda_V\lambda_2|T|{\bf k}
\lambda\lambda_1\rangle,
\label{1}
\end{equation}
where ${\bf k}$ and ${\bf q}$ are the momenta of the incoming
photon and outgoing vector meson.
The helicity states are denoted by $\lambda=\pm 1$
for the incident photon,
$\lambda_V=0,\pm 1$ for the outgoing vector meson,
and $\lambda_1=\pm 1/2$,$\lambda_2=\pm 1/2$
for the initial and final state nucleons, respectively. Following
Ref.~\cite{tabakin}, the amplitude $\cal F$
can be expressed as a $6\times 4$ matrix in the helicity space,
\begin{equation}
{\cal F}=\left( \begin{array}{cccc}
H_{21} & H_{11} & H_{3-1} & -H_{4-1}\\
H_{41} & H_{31} & -H_{1-1} & H_{2-1}\\
H_{20} & H_{10} & -H_{30} & H_{40}\\
H_{40} & H_{30} & H_{10} & -H_{20}\\
H_{2-1} & H_{1-1} & H_{31} & -H_{41}\\
H_{4-1} & H_{3-1} & -H_{11} & H_{21}\\
\end{array} \right ).
\label{2}
\end{equation}
Because of parity conservation,
\begin{equation}
\langle {\bf q}\lambda_V\lambda_2|T|{\bf k} \lambda\lambda_1\rangle=
(-1)^{\Lambda_f-\Lambda_i}\langle {\bf q} -\lambda_V -\lambda_2|T|{\bf k}-
\lambda -\lambda_1\rangle,
\end{equation}
where $\Lambda_i=\lambda -\lambda_1$ and $\Lambda_f=\lambda_V -\lambda_2$
in the Jacob-Wick (JW) convention,
the $H_{a\lambda_V}(\theta)$ in Eq.(\ref{2}) reduce to 12 independent
complex helicity amplitudes:
\begin{eqnarray} \label{helicity}
H_{1\lambda_V}&= &\langle \lambda_V, \lambda_2=+1/2|T|\lambda=1,
\lambda_1=-1/2\rangle\nonumber\\
H_{2\lambda_V}&= &\langle \lambda_V, \lambda_2=+1/2|T|\lambda=1,
\lambda_1=+1/2\rangle\nonumber\\
H_{3\lambda_V}&= &\langle \lambda_V, \lambda_2=-1/2|T|\lambda=1,
\lambda_1=-1/2\rangle\nonumber\\
H_{4\lambda_V}&= &\langle \lambda_V, \lambda_2=-1/2|T|\lambda=1,
\lambda_1=+1/2\rangle.
\end{eqnarray}
Each experimental observable $\Omega$ can be written in the general
{\sl bilinear helicity product} (BHP) form,
\begin{eqnarray}
\check{\Omega}^{\alpha\beta}&=&\Omega^{\alpha\beta}{\cal T}(\theta)\nonumber\\
&=&\pm\frac 12\langle H|\Gamma^\alpha\omega^\beta|H \rangle\nonumber\\
&=&\pm\frac 12\sum_{a,b,\lambda_V,\lambda^\prime_V}
H^*_{a\lambda_V}\Gamma^\alpha_{ab}\omega^\beta_{\lambda_V\lambda^\prime_V}
H_{b\lambda^\prime_V}.
\end{eqnarray}
For example, the differential cross
section operator is given by:
\begin{eqnarray}
\check{\Omega}^{\alpha=1,\beta=1}&=&{\cal T}(\theta)\nonumber\\
&=&\frac 12\langle H|\fbox{$\Gamma^1$}\fbox{$\omega^1$}|H\rangle\nonumber\\
&=&\frac 12\sum^4_{a=1}\sum_{\lambda_V=0,\pm 1} |H_{a\lambda_V}|^2,
\end{eqnarray}
where the box
frames denote the diagonal structure of the matrices. The
$\Gamma$ and $\omega$ matrices labeled by different $\alpha$ and $\beta$
correspond to different spin observables. With the phase space factor, the
differential cross section has the expression,
\begin{eqnarray}
\frac{d\sigma}{d\Omega_{c.m.}}&=&\mbox{(phase space factor)}\;{\cal T}(\theta)\nonumber\\
&=&\frac{\alpha_e \omega_m(E_f+M_N)(E_i+M_N)}{8\pi s}{|{\bf q}|}\frac 12
\sum^4_{a=1}\sum_{\lambda_V=0,\pm 1} |H_{a\lambda_V}|^2
\end{eqnarray}
in the center of mass frame, where $\sqrt{s}$ is the total energy of the
system, $E_i$ and $E_f$ are the energies of the nucleons in the
initial and final states, respectively.
$M_N$ represents the mass of the nucleon,
and $\omega_m$ denotes the energy of the outgoing meson.
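As a numerical sketch, the phase-space formula above transcribes directly into code; nothing below is specific to our model, and any actual kinematic inputs would come from the reaction under study.

```python
import math

def dsigma_domega(H, alpha_e, omega_m, E_i, E_f, M_N, s, q_mag):
    """Differential cross section in the c.m. frame, transcribing
    d(sigma)/d(Omega) = [alpha_e*omega_m*(E_f+M_N)(E_i+M_N)/(8 pi s)] |q|
                        * (1/2) sum_{a,lam_V} |H_{a lam_V}|^2 .
    H maps (a, lam_V) -> complex helicity amplitude."""
    T = 0.5 * sum(abs(v) ** 2 for v in H.values())      # (1/2) sum |H|^2
    prefactor = (alpha_e * omega_m * (E_f + M_N) * (E_i + M_N)
                 / (8 * math.pi * s))
    return prefactor * q_mag * T
```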
These helicity amplitudes are usually related to the density matrix
elements $\rho_{ik}$~\cite{schilling}, which are measured
by the experiments~\cite{Ballam}.
They are defined as:
\begin{eqnarray}
\rho^0_{ik}&= &\frac{1}{A}\sum_{\lambda\lambda_2\lambda_1}H_{\lambda_{V_i}
\lambda_2,
\lambda\lambda_1} H^*_{\lambda_{V_k}\lambda_2, \lambda\lambda_1},
\nonumber\\
\rho^1_{ik}&= &\frac{1}{A}\sum_{\lambda\lambda_2\lambda_1}
H_{\lambda_{V_i}\lambda_2,
-\lambda\lambda_1} H^*_{\lambda_{V_k}\lambda_2, \lambda\lambda_1},
\nonumber\\
\rho^2_{ik}&= &\frac{i}{A}\sum_{\lambda\lambda_2\lambda_1}\lambda
H_{\lambda_{V_i}\lambda_2, -\lambda\lambda_1} H^*_{\lambda_{V_k}\lambda_2,
\lambda\lambda_1},\nonumber\\
\rho^3_{ik}&= &\frac{i}{A}\sum_{\lambda\lambda_2\lambda_1}\lambda
H_{\lambda_{V_i}\lambda_2, \lambda\lambda_1} H^*_{\lambda_{V_k}\lambda_2,
\lambda\lambda_1},
\end{eqnarray}
where
\begin{equation}
A=\sum_{\lambda_{V_i}\lambda\lambda_2\lambda_1} H_{\lambda_{V_i}\lambda_2,
\lambda\lambda_1} H^*_{\lambda_{V_i}\lambda_2, \lambda\lambda_1},
\end{equation}
where $\rho_{ik}$ stands for $\rho_{\lambda_{V_i}\lambda_{V_k}}$,
and $\lambda_{V_i}$, $\lambda_{V_k}$ denote the helicity of the
produced vector mesons.
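The definitions above translate directly into code. The sketch below (with hypothetical amplitude values standing in for model output) computes $\rho^0_{ik}$; by construction the resulting matrix is Hermitian with unit trace.

```python
# Sketch of the rho^0 density-matrix elements from helicity amplitudes
# H[(lam_V, lam2, lam, lam1)]; the amplitude values are hypothetical.
LAMV, LAM2, LAM, LAM1 = (-1, 0, 1), (-0.5, 0.5), (-1, 1), (-0.5, 0.5)

H = {(lv, l2, l, l1): complex(lv + 2 * l2, l * l1)
     for lv in LAMV for l2 in LAM2 for l in LAM for l1 in LAM1}

# Normalisation A = sum over all helicities of |H|^2.
A = sum(abs(H[(lv, l2, l, l1)]) ** 2
        for lv in LAMV for l2 in LAM2 for l in LAM for l1 in LAM1)

def rho0(i, k):
    """rho^0_{ik} = (1/A) * sum_{lam, lam2, lam1} H_i H_k^* ."""
    return sum(H[(i, l2, l, l1)] * H[(k, l2, l, l1)].conjugate()
               for l2 in LAM2 for l in LAM for l1 in LAM1) / A
```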
For example, the angular distribution
for $\rho^0$ decaying into $\pi^+\pi^-$
produced by linearly polarized photons can be expressed
in terms of nine independent measurable spin-density matrix elements
\begin{eqnarray}
W(\cos\theta, \phi,\Phi)&= &\frac 3{4\pi}[\frac 12(1-\rho^0_{00})
+\frac 12(3\rho^0_{00}-1)\cos^2\theta-\sqrt2\,{\rm Re}\,\rho^0_{10}\sin 2
\theta \cos\phi\nonumber\\
&&-\rho^0_{1-1}\sin^2\theta \cos 2\phi
-P_\gamma \cos 2\Phi(\rho^1_{11}\sin^2\theta+\rho^1_{00}\cos^2
\theta\nonumber\\
&&-\sqrt2\, {\rm Re}\,\rho^1_{10}\sin 2\theta \cos\phi-\rho^1_{1-1}\sin^2
\theta \cos 2\phi)\nonumber\\
&&-P_\gamma \sin 2\Phi(\sqrt2\, {\rm Im}\,\rho^2_{10}\sin 2\theta \sin\phi
+{\rm Im}\,\rho^2_{1-1}\sin^2\theta \sin 2\phi)],
\end{eqnarray}
where $P_\gamma$ is the degree of the linear polarization of the
photon, $\Phi$ is the angle of the photon electric polarization
vector with respect to the production plane measured in the c.m.
system, and $\theta$ and $\phi$ are the polar and azimuthal angles
of the $\pi^+$ from the $\rho^0$ decay, measured in the
$\rho^0$ rest frame.
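For the unpolarized part ($P_\gamma=0$) the distribution can be transcribed and checked numerically; the sketch below (density-matrix values hypothetical) integrates $W$ over $\cos\theta$ and $\phi$, where the $\phi$-dependent terms average out, so the result should be unity.

```python
import math

def W_unpolarized(ct, phi, r00, re_r10, re_r1m1):
    """P_gamma = 0 part of the decay distribution W(cos theta, phi)."""
    st2 = 1.0 - ct * ct                      # sin^2(theta)
    s2t = 2.0 * ct * math.sqrt(st2)          # sin(2 theta)
    return (3.0 / (4.0 * math.pi)) * (
        0.5 * (1.0 - r00)
        + 0.5 * (3.0 * r00 - 1.0) * ct * ct
        - math.sqrt(2.0) * re_r10 * s2t * math.cos(phi)
        - re_r1m1 * st2 * math.cos(2.0 * phi))

def integrate(r00=0.4, re_r10=0.05, re_r1m1=-0.1, n=200):
    """Midpoint integration over cos(theta) in [-1,1], phi in [0, 2 pi]."""
    total, dct, dphi = 0.0, 2.0 / n, 2.0 * math.pi / n
    for i in range(n):
        ct = -1.0 + (i + 0.5) * dct
        for j in range(n):
            total += W_unpolarized(ct, (j + 0.5) * dphi,
                                   r00, re_r10, re_r1m1)
    return total * dct * dphi
```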
\section*{\bf 3. Quark Model Approach for Vector Meson Photoproduction}
The starting point of the quark model approach is the effective Lagrangian,
\begin{equation} \label{3.0}
L_{eff}=-\overline{\psi}\gamma_\mu p^\mu\psi+\overline{\psi}\gamma_
\mu e_qA^\mu\psi +\overline{\psi}(a\gamma_\mu +
\frac{ib\sigma_{\mu\nu}q^\nu}{2m_q}) \phi^\mu_m \psi,
\end{equation}
where the quark field $\psi$ is expressed as
\begin{equation}
\psi =\left( \begin{array}{c}
\psi (u)\\ \psi (d) \\ \psi (s) \end{array} \right ),
\end{equation}
and the meson field $\phi^\mu_m $ is a 3$\otimes$3 matrix,
\begin{equation}
\phi_m =\left( \begin{array}{ccc}
\frac{1}{\sqrt{2}}\rho^{0}+\frac{1}{\sqrt{2}}\omega & \rho^{+} & K^{*+}\\
\rho^{-} & -\frac{1}{\sqrt{2}}\rho^{0}+\frac{1}{\sqrt{2}}\omega & K^{*0}\\
K^{*-} & \overline{K}^{*0} &\phi
\end{array} \right )
\end{equation}
in which the vector mesons are treated as point-like particles.
At tree level, the transition matrix element based on the effective
Lagrangian in Eq.(\ref{3.0}) can be written as the sum of contributions
from the s-, u- and t- channels,
\begin{equation}
M_{fi}=M^s_{fi}+M^u_{fi}+M^t_{fi},
\label{3.1}
\end{equation}
where the s- and u-channel contributions in Eq.(\ref{3.1})
have the following form,
\begin{eqnarray}
M^s_{fi}+M^u_{fi}&= &\sum_{j} \langle N_f|H_m|N_j\rangle\langle
N_j|\frac{1}{E_i+\omega-E_j}H_e|N_i\rangle\nonumber\\
&&+\sum_{j} \langle N_f|H_e\frac{1}{E_i-\omega_m-E_j}
|N_j\rangle\langle N_j|H_m|N_i\rangle,
\end{eqnarray}
where the electromagnetic coupling vertex is
\begin{equation}
H_e=-\overline{\psi}\gamma_\mu e_qA^\mu\psi,
\end{equation}
and the quark-meson coupling vertex is
\begin{equation} \label{Hm}
H_m=-\overline{\psi}(a\gamma_\mu +\frac{ib\sigma_{\mu\nu}
q^\nu}{2m_q}) \phi^\mu_m \psi,
\end{equation}
where $m_q$ is the quark mass and the constants $a$
and $b$ in Eq.(\ref{3.0})
and (\ref{Hm}) are the vector and tensor
coupling constants, which will be treated as free
parameters in our approach. The initial and final
states of the nucleon are denoted by $|N_i\rangle$ and $|N_f\rangle$,
respectively, and $|N_j\rangle$
is the intermediate resonance state while $E_i$ and $E_j$
are the energies of the initial nucleon and the intermediate resonance.
An important test of the transition matrix elements
\begin{equation}
M_{fi}=\langle \lambda_2|J_{\mu\nu}\epsilon^\mu
\epsilon^\nu_m|\lambda_1\rangle,
\label{2.1}
\end{equation}
is gauge invariance:
\begin{equation}
\langle \lambda_2|J_{\mu\nu}k^\mu|\lambda_1\rangle=\langle
\lambda_2|J_{\mu\nu} q^\nu|\lambda_1\rangle=0,
\label{2.2}
\end{equation}
where $\epsilon^\mu$ and $\epsilon^\nu_m$ are the polarization
vectors of the photon and the vector meson, respectively.
However, we find that the condition $\langle
\lambda_2|J_{\mu\nu} q^\nu|\lambda_1\rangle=0$ is not satisfied for
the t-channel vector meson exchange term,
\begin{equation}
M^t_{fi}=-a\langle N_f|\sum_{l}\frac{e_m}{2q\cdot k}
\{2q\cdot\epsilon\gamma\cdot\epsilon_m-\gamma\cdot
q\epsilon\cdot\epsilon_m+k\cdot\epsilon_m\gamma\cdot\epsilon\}
e^{i({\bf k}-{\bf q})\cdot{\bf r}_l}| N_i\rangle,
\label{3.3}
\end{equation}
based on the Feynman rules for the photon-vector meson coupling, and
the relation
\begin{equation}
\langle N_f|\gamma\cdot(q-k) e^{i({\bf k}-{\bf q})
\cdot{\bf r}_l}| N_i\rangle=0.
\end{equation}
To remedy this problem, we add a gauge fixing term, so that
\begin{equation}
M^t_{fi}=-a\langle N_f|\sum_{l}\frac{e_m}{2q\cdot k}
2\{q\cdot\epsilon\gamma\cdot\epsilon_m-\gamma\cdot q\epsilon\cdot\epsilon_m+k\cdot\epsilon_m\gamma\cdot\epsilon\}
e^{i({\bf k}-{\bf q})\cdot{\bf r}_l}| N_i\rangle.
\end{equation}
The techniques of deriving the transition amplitudes have been developed
for Compton scattering~\cite{Compton}. Following the same procedure as given
in Eq.(14) of Ref.~\cite{pseudoscalar}, we can divide the photon
interaction into two parts, so that the contributions
from the s- and u-channels can be rewritten as:
\begin{eqnarray}
M^{s+u}_{fi}&=&i\langle N_f|[g_e,H_m]|N_i\rangle \nonumber\\
&&+i\omega\sum_{j}\langle N_f|H_m|N_j\rangle\langle
N_j|\frac{1}{E_i+\omega-E_j}h_e|N_i\rangle\nonumber\\
&&+i\omega\sum_{j} \langle N_f|h_e\frac{1}{E_i-\omega_m-E_j}
|N_j\rangle\langle N_j|H_m|N_i\rangle,
\label{3.2}
\end{eqnarray}
where
\begin{equation}
g_e=\sum_{l}e_l{{\bf r}_l\cdot\mbox{\boldmath $\epsilon$ \unboldmath}}e^{i{\bf k\cdot r}_l},
\end{equation}
\begin{equation}
h_e=\sum_{l}e_l{{\bf r}_l\cdot\mbox{\boldmath $\epsilon$ \unboldmath}}(1-\mbox{\boldmath $\alpha$ \unboldmath}\cdot
{\hat{\bf k}})e^{i{\bf k\cdot r}_l},
\end{equation}
and
\begin{equation}
{\bf{\hat k}}=\frac{\bf k}{\omega}.
\end{equation}
The first term in Eq.(\ref{3.2}) can be identified as a seagull term;
it is proportional to the charge of the outgoing vector meson.
The second and third terms in Eq.(\ref{3.2}) represent the s- and u-channel
contributions. Adopting the same strategy as in the pseudoscalar case,
we include a complete set of helicity amplitudes for each
of the s-channel resonances below 2 GeV in the $SU(6)\otimes O(3)$
symmetry limit. The resonances above 2 GeV
are treated as degenerate in order to express the contribution from
all resonances with a given quantum number $n$ in a compact form.
The contributions from the resonances with the largest spin for
a given quantum number $n$ were found to be the most
important as the energy
increases~\cite{pseudoscalar}.
This corresponds to spin $J=n+1/2$ with $I=1/2$ for the reactions
$\gamma N\to K^*\Lambda$ and $\gamma N\to \omega N$,
and $J=n+3/2$ with $I=3/2$ for the reactions $\gamma N\to K^*\Sigma$ and
$\gamma N\to \rho N$.
Similar to the pseudoscalar case, the contributions from
the u-channel resonances are divided into two parts as well. The
first part contains the resonances with
the quantum number $n=0$, which includes the spin 1/2 states,
such as the $\Lambda$, $\Sigma$ and the nucleons,
and the spin 3/2 resonances, such as the $\Sigma^*$ in $K^*$ photoproduction
and the $\Delta(1232)$ in $\rho$ photoproduction. Because
the mass splitting between the spin 1/2 and spin 3/2 resonances
for $n=0$ is significant, they have to be treated separately.
The transition amplitudes for these u-channel resonances will
also be written in terms of the helicity amplitudes.
The second part comes from the excited
resonances with quantum number $n\ge 1$. As the contributions
from the u-channel resonances are not sensitive to the precise
mass positions, they can be treated as degenerate as well,
so that the contributions from these resonances can again
be written in a compact form.
\subsection*{\bf 3.1. The Seagull term }
The transition amplitude is divided into the transverse
and longitudinal amplitudes according to the polarization of the outgoing
vector mesons. The longitudinal polarization vector for a vector meson with
mass $\mu$ and momentum ${\bf q}$ is,
\begin{equation}
\epsilon^\mu_L =\frac{1}{\mu} \left( \begin{array}{c}
|{\bf q}|\\ \omega_m \frac{\bf q}{|{\bf q}|} \end{array} \right )
\end{equation}
where $\omega_m=\sqrt{{\bf q}^2+\mu^2}$ is the energy of the outgoing
vector meson.
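As a quick numerical sanity check of this definition (a sketch with arbitrary sample values, not numbers from the text), one can verify that $\epsilon^\mu_L$ is normalized to $-1$ and orthogonal to the meson four-momentum:

```python
import math

def eps_L(mu, q):
    """Longitudinal polarization 4-vector (t, x, y, z) for a vector meson of
    mass mu moving along z with momentum q: (|q|, omega_m q_hat)/mu."""
    omega_m = math.sqrt(q**2 + mu**2)      # energy of the meson
    return (q / mu, 0.0, 0.0, omega_m / mu)

def minkowski(a, b):
    # metric signature (+, -, -, -)
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

mu, q = 0.770, 1.2                         # sample rho-like mass and momentum (GeV)
e = eps_L(mu, q)
p = (math.sqrt(q**2 + mu**2), 0.0, 0.0, q) # meson 4-momentum

# eps.eps = -1 and eps.p = 0, up to floating-point error:
print(abs(minkowski(e, e) + 1.0) < 1e-12, abs(minkowski(e, p)) < 1e-12)  # -> True True
```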
Thus, the longitudinal interaction at the quark-meson vertex
can be written as
\begin{equation}
H^L_m=\epsilon^\mu_L J_\mu=\epsilon_0J_0-\epsilon_3J_3
\end{equation}
where $\epsilon_3$ corresponds to the direction of the momentum ${\bf q}$.
The transition amplitudes of the s- and u-channel for the
longitudinal quark-meson coupling become,
\begin{eqnarray}
M^{s+u}_{fi}(L)&=& i\langle N_f|[g_e,H^L_m]|N_i\rangle \nonumber \\
&&- i\omega \langle N_f|[h_e,\frac{\epsilon_3}{q_3}J_0]|N_i\rangle
\nonumber \\
&&+i\omega\sum_{j}\langle N_f|(\epsilon_0
-\frac{\omega_m}{q_3}\epsilon_3)J_0|N_j\rangle
\langle N_j|\frac{1}{E_i+\omega-E_j}h_e|N_i\rangle\nonumber \\
&&+i\omega\sum_{j} \langle N_f|h_e\frac{1}{E_i-\omega_m-E_j}|N_j
\rangle\langle N_j|(\epsilon_0-\frac{\omega_m}{q_3}\epsilon_3)J_0|N_i
\rangle,
\label{4.2}
\end{eqnarray}
where the first two terms are seagull terms which can be rewritten
as,
\begin{equation}
\label{seagull}
M^{Seagull}_{fi}(L)=-\frac{a\omega_m e_m}{\mu |{\bf q}|}\langle N_f|\mbox{\boldmath $\alpha$ \unboldmath}\cdot\mbox{\boldmath $\epsilon$ \unboldmath} e^{i({\bf k}-{\bf q})\cdot{\bf r}_l}|N_i\rangle -\frac{ia\mu e_m}{|{\bf q}|}\langle N_f|\sum_l {\bf r}_l\cdot\mbox{\boldmath $\epsilon$ \unboldmath} e^{i({\bf k}-{\bf q})\cdot{\bf r}_l}|N_i\rangle .
\end{equation}
The first term will be cancelled by a corresponding term from
the t-channel to preserve gauge invariance, while the second term
has a more explicit expression:
\begin{equation}
\label{seagull-longi}
M^{Seagull}_{fi}(L)=-\frac{a\mu e_m}{|{\bf q}|\alpha^2}g^t_v {\bf q}
\cdot\mbox{\boldmath $\epsilon$ \unboldmath} e^{i({\bf k}-{\bf q})\cdot{\bf r}_l},
\end{equation}
where the $g$-factor $g^t_v$ is defined as follows,
\begin{equation}
\label{gtv}
g^t_v=\langle N_f|\sum_j{\hat I}_j|N_i\rangle.
\end{equation}
The values of $g^t_v$ for every channel are presented in Table 1.
The corresponding expressions for the t-channel amplitudes
are given in the Appendix. The last two terms in (\ref{4.2})
will be discussed in the following sections.
The nonrelativistic expansion of the transverse meson-quark
interaction vertex gives,
\begin{equation}
H^T_m=\sum_{l} \{i\frac{b^\prime}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}_l\cdot({\bf q}
\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
+a{\bf A}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v+\frac{a}{2\mu_q}{\bf p}^\prime_l
\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v\}{\hat I}_le^{-i{\bf q\cdot r}_l}
\end{equation}
where $b^\prime\equiv b-a$, ${\bf p}^\prime_l$ is the internal
motion of the $l$th quark in the c.m. system, and
\begin{equation}
{\hat I}_l =\left\{ \begin{array}{ccc}
a^\dagger _l(s)a_l(u) & \qquad\mbox{for}\qquad & K^{*+}\\
a^\dagger_l(s)a_l(d) & \qquad\mbox{for}\qquad & K^{*0}\\
a^\dagger_l(d)a_l(u) & \qquad\mbox{for}\qquad & \rho^+\\
-\frac{1}{\sqrt 2}(a^\dagger _l(u)a_l(u)
-a^\dagger _l(d)a_l(d)) & \qquad\mbox{for}\qquad & \rho^0\\
1 &\qquad\mbox{for}\qquad & \omega (\phi )
\end{array} \right.
\end{equation}
The vector ${\bf A}$ has the general form,
\begin{equation}
{\bf A}=\frac{{\bf P}_f}{E_f+M_f}+\frac{{\bf P}_i}{E_i+M_i},
\end{equation}
which comes from the center-of-mass motion of the quark system. In
the s- and u-channels, ${\bf A}$ has the following expressions:
\begin{eqnarray}
\mbox{s-channel:}&&{\bf A}=-\frac{{\bf q}}{E_f+M_f},\\
\mbox{u-channel:}&&{\bf A}=-(\frac{1}{E_f+M_f}
+\frac{1}{E_i+M_i}){\bf k}-\frac{1}
{E_f+M_f}{\bf q}.
\end{eqnarray}
The transverse transition amplitude for the s- and u-channel is,
\begin{eqnarray}
M^{s+u}_{fi}(T)&=&i\langle N_f|[g_e,H^T_m]|N_i\rangle \nonumber\\
&&+i\omega\sum_{j}\langle N_f|H^T_m|N_j\rangle\langle N_j|
\frac{1}{E_i+\omega-E_j}h_e|N_i\rangle\nonumber\\
&&+i\omega\sum_{j} \langle N_f|h_e\frac{1}{E_i-\omega_m-E_j}|N_j
\rangle\langle N_j|H^T_m|N_i\rangle
\label{4.3}
\end{eqnarray}
The nonrelativistic expansion of the first term gives,
\begin{eqnarray}
M^{Seagull}_{fi}(T)&=&-i\langle N_f|[g_e, H^T_m]|N_i\rangle\nonumber\\
&=&-iae_m g^t_v\langle N_f|\{ \frac{{\bf P}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v}{E+M},
{\bf R}\cdot\mbox{\boldmath $\epsilon$ \unboldmath} \}e^{-\frac{({\bf k-q})^2}{6\alpha^2}} |N_i\rangle\nonumber\\
&&+ae_m g_A\langle N_f| [\frac{\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf P}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)}{E+M},
{\bf R}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}] e^{-\frac{({\bf k-q})^2}{6\alpha^2}}|N_i\rangle,
\end{eqnarray}
where $\{A,B\}=AB+BA$ is the anticommutator.
${\bf P}$ and ${\bf R}$
are the momentum and coordinate of the center-of-mass
motion of the three-quark system.
The seagull terms in the
transitions are proportional to the charge of the
outgoing mesons and, therefore,
vanish in the photoproduction of the neutral vector mesons
$\omega$, $\rho^0$ and $\phi$.
\subsection*{\bf 3.2. U-channel transition amplitudes}
The last term in Eq.(\ref{4.2}) is the longitudinal transition amplitude
in the u-channel. We find
\begin{eqnarray}
M^u_{fi}(L)&=&i\omega\sum_{j}
\langle N_f|h_e\frac{1}{E_i-\omega_m-E_j}|N_j\rangle\langle N_j
|-\frac{\mu}{|{\bf q}|}J_0|N_i\rangle\nonumber\\
&=& (M^u_3+M^u_2)e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}
\end{eqnarray}
in the harmonic oscillator basis, where
\begin{eqnarray}
M^u_3&=&g^u_3 \frac{a\mu}{|{\bf q}|}
\{\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})F^0
(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2},P_f\cdot k)\nonumber\\
&&-g_v\frac{\omega}{3\alpha^2}{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath} F^1
(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2},P_f\cdot k)\},
\label{5.2}
\end{eqnarray}
which corresponds to incoming photons and outgoing vector mesons
being absorbed and emitted by the same quark, and
\begin{eqnarray}
M^u_2&=&g^u_2 \frac{a\mu}{|{\bf q}|}
\{g^\prime_a\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
F^0(-\frac{{\bf k}\cdot {\bf q}}{6\alpha^2},P_f\cdot k)\nonumber\\
&&+g^\prime_v\frac{\omega}{6\alpha^2}{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}
F^1(-\frac{{\bf k}\cdot {\bf q}}{6\alpha^2},P_f\cdot k)\}
\label{5.3}
\end{eqnarray}
in which the incoming photons and outgoing vector mesons are absorbed and
emitted by different quarks. $P_f$ in Eq.(\ref{5.2}) and Eq.(\ref{5.3})
denotes the four momentum of the final state nucleon.
The function $F^l$ in Eq.(\ref{5.2}) and Eq.(\ref{5.3}) is defined as,
\begin{equation}
F^l(x,y)=\sum_{n\ge l}\frac{M_n}{(n-l)!(y+n\delta M^2)}x^{n-l},
\end{equation}
where $n\delta M^2=(M^2_n-M^2_f)/2$ represents
the average mass difference between the ground state and excited states
with the total excitation quantum number $n$ in the harmonic
oscillator basis. The parameter $\alpha^2$ in the above equation is commonly
used in the quark model and is related to the harmonic oscillator strength.
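The truncated series below is a minimal numerical sketch of this definition. The shell masses in `M_n`, the mass-difference parameter `dM2`, and the arguments `x, y` are illustrative stand-ins, not fitted values from this work:

```python
import math

def F(l, x, y, M, dM2):
    """Truncated evaluation of
    F^l(x, y) = sum_{n >= l} M_n x^(n-l) / ((n-l)! (y + n*dM2)),
    where M is a list of shell masses M_n indexed by n (hypothetical values)."""
    total = 0.0
    for n in range(l, len(M)):
        total += M[n] * x**(n - l) / (math.factorial(n - l) * (y + n * dM2))
    return total

# Illustrative numbers only (GeV units), not fitted values.
M_n = [0.94, 1.44, 1.72, 2.0, 2.2]   # assumed degenerate mass per shell n
dM2 = 0.55                            # assumed (M_n^2 - M_f^2)/(2n)
x, y = 0.3, 1.1                       # stand-ins for k.q/(3 alpha^2) and P_f.k
print(F(0, x, y, M_n, dM2), F(1, x, y, M_n, dM2))
```

In practice the sum is cut off at the highest shell retained in the model, since the Gaussian prefactors suppress high-$n$ contributions.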
Similarly, the transverse transition in the u-channel is given by,
\begin{eqnarray}
M^u_{fi}(T)&=&i\omega\sum_{j} \langle N_f|
h_e\frac{1}{E_i-\omega_m-E_j}|N_j\rangle\langle N_j|H^T_m|N_i\rangle\nonumber\\
&=& (M^u_3+M^u_2)e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}
\end{eqnarray}
where
\begin{eqnarray}
M^u_3/g^u_3&=&\frac{b^\prime}{4m^2_q}\{g_v(\mbox{\boldmath $\epsilon$ \unboldmath}\times
{\bf k})\cdot({\bf
q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v) +i\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\times({\bf q}
\times\mbox{\boldmath $\epsilon$ \unboldmath}_v) \}F^0(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2},P_f\cdot k)
\nonumber\\
&&-\frac{ia}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k}){\bf A}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v
F^0(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2},P_f\cdot k)\nonumber\\
&&+\{\frac{ia}{12m^2_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf
k}+\frac{ib^\prime\omega}{6m_q\alpha^2}\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
\mbox{\boldmath $\epsilon$ \unboldmath}\cdot{\bf q}\nonumber\\
&&+g_v\frac{a\omega}{3\alpha^2}\mbox{\boldmath $\epsilon$ \unboldmath} \cdot{\bf q}{\bf A}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v-
g_v\frac{a\omega}{6m_q}\mbox{\boldmath $\epsilon$ \unboldmath} \cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v \} F^1(\frac{{\bf k\cdot
q}}{3\alpha^2},P_f\cdot k)\nonumber\\
&&-g_v\frac{a\omega}{18m_q\alpha^2}\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf k}\mbox{\boldmath $\epsilon$ \unboldmath}
\cdot{\bf q} F^2(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2},P_f\cdot k)
\label{amp3}
\end{eqnarray}
and
\begin{eqnarray}
M^u_2/g^u_2&= &\frac{b^\prime}{4m^2_q}\{g^\prime_v(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
\cdot({\bf
q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\nonumber\\
&& -ig^\prime_a\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
\times({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v) \}F^0(-\frac{{\bf k}\cdot {\bf q}}
{6\alpha^2},P_f\cdot k)
\nonumber\\
&&-\frac{ia}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k}){\bf A}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v
F^0(-\frac{{\bf k}\cdot {\bf q}}{6\alpha^2},P_f\cdot k)\nonumber\\
&&+\{-\frac{ia}{24m^2_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf
k}-\frac{ib^\prime\omega}{12m_q\alpha^2}\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
\mbox{\boldmath $\epsilon$ \unboldmath}\cdot{\bf q}\nonumber\\
&&-g^\prime_v\frac{a\omega}{6\alpha^2}\mbox{\boldmath $\epsilon$ \unboldmath} \cdot{\bf q}{\bf A}
\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v- g^\prime_v\frac{a\omega}{12m_q}\mbox{\boldmath $\epsilon$ \unboldmath} \cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v \}
F^1(-\frac{{\bf k}\cdot {\bf q}}{6\alpha^2},P_f\cdot k)\nonumber\\
&&-g^\prime_v\frac{a\omega}{72m_q\alpha^2}\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf k}
\mbox{\boldmath $\epsilon$ \unboldmath}\cdot{\bf q} F^2(-\frac{{\bf k}\cdot {\bf q}}{6\alpha^2},P_f\cdot k)
\label{amp2}
\end{eqnarray}
The $g$-factors in Eqs.(\ref{5.2})-(\ref{amp2})
are defined as
\begin{equation}
\langle N_f|\sum_{j} {\hat I}_j \mbox{\boldmath $\sigma$ \unboldmath}_j|N_i\rangle=g_A \langle N_f|
\mbox{\boldmath $\sigma$ \unboldmath}|N_i\rangle,
\end{equation}
\begin{equation} g^u_3=\langle N_f|\sum_{j} e_j {\hat
I}_j\sigma^z_j|N_i\rangle/g_A,
\end{equation}
\begin{equation} g^u_2=\langle N_f|\sum_{i\neq j} e_j {\hat
I}_i\sigma^z_j|N_i\rangle/g_A,
\end{equation}
\begin{equation} g_v=\langle N_f|\sum_{j} e_j{\hat I}_j
|N_i\rangle/g^u_3g_A,
\end{equation}
\begin{equation}
g^\prime_v=\frac{1}{3g^u_2g_A}\langle N_f|\sum_{i\neq j}
{\hat I}_ie_j\mbox{\boldmath $\sigma$ \unboldmath}_i\cdot\mbox{\boldmath $\sigma$ \unboldmath}_j|N_i\rangle,
\end{equation}
\begin{equation}
g^\prime_a=\frac{1}{2g^u_2g_A}\langle N_f|\sum_{i\neq j}
{\hat I}_ie_j(\mbox{\boldmath $\sigma$ \unboldmath}_i\times\mbox{\boldmath $\sigma$ \unboldmath}_j)_z|N_i\rangle.
\end{equation}
The numerical values of these $g$-factors have been derived in
Ref.~\cite{pseudoscalar} in the $SU(6)\otimes O(3)$ symmetry limit;
they are listed in Table 1 for completeness.
The first terms of Eq.(\ref{amp3}) and Eq.(\ref{amp2}) correspond
to the correlation between the magnetic transition and the c.m.
motion of the meson transition operator and they contribute to the
leading Born term in the u-channel. The second terms are due to
correlations between the internal and c.m. motion of the photon
and meson transition operators, and they only contribute to the
transitions between the
ground and $n\ge 1$ excited states in the harmonic oscillator
basis. The last terms in both equations represent the correlation
of the internal motion between the photon and meson transition
operators, which only contribute to transitions between the
ground and $n\ge 2$ excited states.
As pointed out before, the mass splitting between
the ground state spin 1/2 and spin 3/2 resonances is significant, so the
transition amplitudes for the $\Delta$ resonance in $\rho$ production or the
$\Sigma^*$ resonance in $K^*$ production have to be computed separately.
The transition amplitude with $n=0$ corresponding to the correlation
of magnetic transitions is,
\begin{eqnarray}
M^u(n=0)&=&-\frac{1}{2m_q}\frac{M_f e^{-\frac{{\bf q}^2+{\bf k}^2}
{6\alpha^2}}}{P_f\cdot k+\delta M^2/2}
\frac{b^\prime}{2m_q}\{(g^u_3g_v+g^u_2g^\prime_v)({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})
\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\nonumber\\
&&+i(g^u_3-g^u_2g^\prime_a)\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})
\times({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\}.
\end{eqnarray}
The amplitude for spin 1/2 intermediate states in the total $n=0$
amplitudes is,
\begin{eqnarray}
& &\langle N_f|h_e |N(J=1/2)\rangle\langle N(J=1/2)|H_m|N_i \rangle
\nonumber\\
&= &\frac{\mu_N b^\prime}{2m_q}\frac{M_f
e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}}
{P_f\cdot k+\delta M^2/2}
\{ ({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\nonumber\\
&&+i\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})\times({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\}
\end{eqnarray}
where $\mu_N$ is the magnetic moment, which has the following
values for different processes,
\begin{equation}
\mu_N =\left\{ \begin{array}{ccc}
\mu_\Lambda+\frac{g_{K^*\Sigma N}}{g_{K^*\Lambda N}}\mu_{\Lambda\Sigma}
&\qquad\mbox{for}\qquad & \gamma N
\to K^*\Lambda\\
\mu_{\Sigma^0}+\frac{g_{K^*\Lambda N}}{g_{K^*\Sigma N}}
\mu_{\Lambda\Sigma} &\qquad\mbox{for}\qquad &
\gamma N\to K^*\Sigma \\
\mu_{N_f}& \qquad\mbox{for}\qquad &\gamma N
\to \rho N_f
\end{array} \right.
\end{equation}
Thus, we obtain the spin 3/2 resonance contribution to the transition
amplitude by subtracting the spin 1/2 intermediate state contributions
from the total $n=0$ amplitudes as follows:
\begin{eqnarray}
M^u&= &-\frac{b^\prime}{2m_q}\frac{M_fe^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}}
{P_f\cdot k+\delta M^2/2}
\{[(g^u_3g_v+g^u_2g^\prime_v)/2m_q-\mu_N]({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})
\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\nonumber\\
&&+i[(g^u_3-g^u_2g^\prime_a)/2m_q-\mu_N]\mbox{\boldmath $\sigma$ \unboldmath}\cdot[({\bf
k}\times\mbox{\boldmath $\epsilon$ \unboldmath})\times({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)]\}.
\end{eqnarray}
Substituting the $g$-factor coefficients into the above equation
gives the following general expression for the spin 3/2 resonances with $n=0$,
\begin{eqnarray}
M^u&= &-\frac{b^\prime}{2m_q}\frac{M_fg_se^{-\frac{{\bf q}^2
+{\bf k}^2}{6\alpha^2}}}
{M_N(P_f\cdot k+\delta M^2/2)}
\{2\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})\nonumber\\
&&-i\mbox{\boldmath $\sigma$ \unboldmath}\cdot[({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\times({\bf k}\times\mbox{\boldmath $\epsilon$ \unboldmath})]\}
\end{eqnarray}
where the value of $g_s$ is given in Table 1.
Note that the transition amplitudes here are generally written as
operators that are similar to the CGLN amplitudes in pseudoscalar
meson photoproduction. They have to be transformed into the helicity
amplitudes defined in Eq.(\ref{helicity}). In Tables 2 and 3,
we show the relations between the operators presented here and
the helicity amplitudes; they
are generally related by the Wigner $d$-function.
\subsection*{\bf 3.3. S-channel transition amplitudes }
The third term in Eq.(\ref{4.2}) and second term in Eq.(\ref{4.3})
are the s-channel longitudinal and transverse transition amplitudes.
Following the derivation for Compton scattering
in Ref.~\cite{Compton}, we obtain the general transition amplitude for
excited states in the s-channel,
\begin{equation}
H^J_{a\lambda_V}=\frac{2M_R}
{s-M_R(M_R-i\Gamma({\bf q}))}
h^J_{a\lambda_V},
\label{6.1}
\end{equation}
where $\sqrt{s}=E_i+\omega=E_f+\omega_m$ is the total energy
of the system, and $H^J_{a\lambda_V}$ are the helicity amplitudes
defined previously. $\Gamma({\bf q})$ in Eq.(\ref{6.1}) denotes the
total width of the resonance, which is
a function of the final state momentum ${\bf q}$. For a resonance
decaying into a two-body final state with relative angular momentum $l$,
the decay width $\Gamma({\bf q})$ is given by:
\begin{equation}\label{40}
\Gamma({\bf q})= \Gamma_R \frac {\sqrt {s}}{M_R} \sum_{i} x_i
\left (\frac {|{\bf q}_i|}{|{\bf q}^R_i|}\right )^{2l+1}
\frac {D_l({\bf q}_i)}{D_l({\bf q}^R_i)},
\end{equation}
with
\begin{equation}\label{41}
|{\bf q}^R_i|=\sqrt{\frac
{(M_R^2-M_N^2+M_i^2)^2}{4M_R^2}-M_i^2},
\end{equation}
and
\begin{equation}\label{42}
|{\bf q}_i|=\sqrt{\frac
{(s-M_N^2+M_i^2)^2}{4s}-M_i^2},
\end{equation}
where $x_i$ is the branching ratio of the resonance decaying into a
meson with mass $M_i$ and a nucleon, and $\Gamma_R$ is the total
decay width
of the resonance with mass $M_R$. The
function $D_l({\bf q})$ in Eq.(\ref{40}), called the fission barrier~\cite{bw},
is wavefunction dependent and has the following form
in the harmonic oscillator basis:
\begin{equation}\label{43}
D_l({\bf q})=\exp\left (-\frac {{\bf q}^2}{3\alpha^2}\right ),
\end{equation}
which is independent of $l$. In principle, the branching ratio
$x_i$ should also be evaluated in the quark model.
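A minimal numerical sketch of the momentum-dependent width defined above; the resonance mass, total width, branching ratios and channel masses below are illustrative assumptions, not fitted values from the text. Note that $|{\bf q}^R_i|$ is simply $|{\bf q}_i|$ evaluated at $s=M_R^2$:

```python
import math

def q_cm(s, M_N, M_i):
    """CM momentum of a meson (mass M_i) plus nucleon pair at total energy sqrt(s)."""
    return math.sqrt((s - M_N**2 + M_i**2)**2 / (4 * s) - M_i**2)

def D(q, alpha2):
    """Fission-barrier factor in the harmonic oscillator basis."""
    return math.exp(-q**2 / (3 * alpha2))

def total_width(s, M_R, Gamma_R, M_N, channels, alpha2):
    """Momentum-dependent total width Gamma(q).
    channels: list of (x_i, M_i, l) -- branching ratio, meson mass, relative L."""
    total = 0.0
    for x_i, M_i, l in channels:
        qi = q_cm(s, M_N, M_i)            # momentum at the running energy sqrt(s)
        qiR = q_cm(M_R**2, M_N, M_i)      # same expression evaluated at s = M_R^2
        total += x_i * (qi / qiR)**(2 * l + 1) * D(qi, alpha2) / D(qiR, alpha2)
    return Gamma_R * math.sqrt(s) / M_R * total

# Illustrative inputs only (GeV units); branching ratios and masses are assumed.
M_N, alpha2 = 0.938, 0.410**2
width = total_width(s=2.0**2, M_R=1.9, Gamma_R=0.25, M_N=M_N,
                    channels=[(0.6, 0.138, 2), (0.4, 0.782, 1)], alpha2=alpha2)
print(width)
```

By construction, the width reduces to $\Gamma_R$ on resonance ($s=M_R^2$) when the branching ratios sum to one.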
For a given intermediate resonance state with spin $J$, the twelve
independent helicity amplitudes $h^J_{a\lambda_V}$ in Eq.(\ref{6.1})
are combinations of the meson and photon helicity amplitudes
together with the Wigner $d$-functions,
\begin{equation}
h^J_{a\lambda_V}=\sum_{\Lambda_f}d^J_{\Lambda_f,\Lambda_i}(\theta)
A^V_{\Lambda_f}A^\gamma_{\Lambda_i},
\label{59}
\end{equation}
where $\Lambda_f=\lambda_V-\lambda_2$, $\Lambda_i=\lambda-\lambda_1$
and ${\bf k}\cdot{\bf q}=|{\bf k}||{\bf q}|\cos\theta$.
The $A^\gamma_{1/2}$ and $A^\gamma_{3/2}$ in Eq.(\ref{59})
represent the helicity amplitudes in the s-channel for the photon
interactions; their explicit expressions have been given
in Ref.~\cite{thesis}.
More explicitly, the 12 independent helicity amplitudes are related to
the photon helicity amplitudes $A^\gamma_{\frac 12}$, $A^\gamma_{\frac 32}$
and vector meson helicity amplitudes $S^V_{\frac 12}$,
$A^V_{\frac 12}$ and $A^V_{\frac 32}$ through the following relations
\begin{eqnarray}
h^J_{11}&=&d^J_{\frac{1}{2},\frac{3}{2}}(\theta)A^V_{\frac{1}{2}}
A^\gamma_{\frac{3}{2}},\nonumber\\
h^J_{10}&=&d^J_{-\frac{1}{2},\frac{3}{2}}(\theta)S^V_{-\frac{1}{2}}
A^\gamma_{\frac{3}{2}},\nonumber\\
h^J_{1-1}&=&d^J_{-\frac{3}{2},\frac{3}{2}}(\theta)A^V_{-\frac{3}{2}}
A^\gamma_{\frac{3}{2}}
\end{eqnarray}
for $a=1$, and $\lambda_V=1,0,-1$,
\begin{eqnarray}
h^J_{21}&=&d^J_{\frac{1}{2},\frac{1}{2}}(\theta)A^V_{\frac{1}{2}}
A^\gamma_{\frac{1}{2}},\nonumber\\
h^J_{20}&=&d^J_{-\frac{1}{2},\frac{1}{2}}(\theta)S^V_{-\frac{1}{2}}
A^\gamma_{\frac{1}{2}},\nonumber\\
h^J_{2-1}&=&d^J_{-\frac{3}{2},\frac{1}{2}}(\theta)A^V_{-\frac{3}{2}}
A^\gamma_{\frac{1}{2}}
\end{eqnarray}
for $a=2$, and $\lambda_V=1,0,-1$,
\begin{eqnarray}
h^J_{31}&=&d^J_{\frac{3}{2},\frac{3}{2}}(\theta)A^V_{\frac{3}{2}}
A^\gamma_{\frac{3}{2}},\nonumber\\
h^J_{30}&=&d^J_{\frac{1}{2},\frac{3}{2}}(\theta)S^V_{\frac{1}{2}}
A^\gamma_{\frac{3}{2}},\nonumber\\
h^J_{3-1}&=&d^J_{-\frac{1}{2},\frac{3}{2}}(\theta)A^V_{-\frac{1}{2}}
A^\gamma_{\frac{3}{2}}
\end{eqnarray}
for $a=3$, and $\lambda_V=1,0,-1$,
and
\begin{eqnarray}
h^J_{41}&=&d^J_{\frac{3}{2},\frac{1}{2}}(\theta)A^V_{\frac{3}{2}}
A^\gamma_{\frac{1}{2}},\nonumber\\
h^J_{40}&=&d^J_{\frac{1}{2},\frac{1}{2}}(\theta)S^V_{\frac{1}{2}}
A^\gamma_{\frac{1}{2}},\nonumber\\
h^J_{4-1}&=&d^J_{-\frac{1}{2},\frac{1}{2}}(\theta)A^V_{-\frac{1}{2}}
A^\gamma_{\frac{1}{2}}
\end{eqnarray}
for $a=4$, and $\lambda_V=1,0,-1$.
The amplitudes with negative helicities in the above equations
are not independent
of those with positive helicities; they are related by
an additional phase factor according to the Wigner-Eckart
theorem,
\begin{equation}
A^V_{-\lambda}= (-1)^{J_f-J_i-J_V}A^V_{\lambda}
\end{equation}
where $J_f$ and $J_i$ are the final nucleon and initial
resonance spins, and $J_V$ is the angular momentum of the vector meson.
The angular distributions of the helicity amplitudes in terms of the multipole
transitions have been discussed in Ref.~\cite{yang}; the expressions
here are consistent with their analysis.
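Since the Wigner $d$-functions enter all of the relations above, a small self-contained implementation may be useful; this is a sketch based on the standard factorial sum formula, and the amplitude values in the example are purely illustrative:

```python
import math

def wigner_d(two_j, two_mp, two_m, beta):
    """Wigner small-d function d^j_{m',m}(beta), with spins passed doubled
    (2j, 2m', 2m) so that half-integer values stay exact integers."""
    def fact(two_n):                       # factorial of n, given 2n (even, >= 0)
        return math.factorial(two_n // 2)
    pre = math.sqrt(fact(two_j + two_mp) * fact(two_j - two_mp)
                    * fact(two_j + two_m) * fact(two_j - two_m))
    c, s = math.cos(beta / 2), math.sin(beta / 2)
    total = 0.0
    for k in range(two_j + 1):             # summation index s = k
        a = two_j + two_m - 2 * k          # 2(j + m - s)
        b = two_mp - two_m + 2 * k         # 2(m' - m + s)
        d = two_j - two_mp - 2 * k         # 2(j - m' - s)
        if a < 0 or b < 0 or d < 0:
            continue
        total += ((-1) ** (b // 2)
                  / (fact(a) * fact(2 * k) * fact(b) * fact(d))
                  * c ** ((a + d) // 2) * s ** (b // 2 + k))
    return pre * total

# Example in the spirit of the relations above:
# h^J_{21} = d^J_{1/2,1/2}(theta) A^V_{1/2} A^gamma_{1/2}, here with J = 3/2
# and purely illustrative amplitude values.
AV, Agamma, theta = 0.8, 1.3, 0.6
h21 = wigner_d(3, 1, 1, theta) * AV * Agamma
print(h21)
```

The doubled-spin convention avoids floating-point half-integers; e.g. `wigner_d(2, 0, 0, beta)` is $d^1_{0,0}(\beta)=\cos\beta$ and `wigner_d(1, 1, 1, beta)` is $d^{1/2}_{1/2,1/2}(\beta)=\cos(\beta/2)$.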
The evaluation of the vector meson helicity amplitudes is
similar to that of the photon amplitudes. The transition operator for
a resonance decaying into a vector meson and a nucleon is,
\begin{equation}
H^T_m=\sum_{l} \{i\frac{b^\prime}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}_l\cdot({\bf q}
\times\mbox{\boldmath $\epsilon$ \unboldmath}_v) +\frac{a}{2\mu_q}{\bf p}^\prime_l
\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v\}{\hat I}_le^{-i{\bf q\cdot r}_l},
\end{equation}
for transverse transitions and
\begin{equation}
H^L_m=\frac{a\mu}{|{\bf q}|}\sum_{l} {\hat I}_le^{-i{\bf q\cdot r}_l}
\end{equation}
for longitudinal transitions. Thus, $H^T_m$ and $H^L_m$ have
the group structure,
\begin{equation}
H^T_m={\hat I}_3(A L^-_{(3)} +B \sigma^-_{(3)}),
\label{htm}
\end{equation}
and
\begin{equation}
H^L_m={\hat I}_3 S,
\label{hlm}
\end{equation}
where
\begin{equation}
\label{A}
A=\frac{3 a}{2\sqrt{2}m_q}\langle\psi_f|p^-_3
e^{-i{\bf q}\cdot{\bf r}_3}|\psi_R\rangle,
\end{equation}
\begin{equation}
B=\frac{-3 b^\prime}{2m_q}
|{\bf q}|\langle\psi_f|e^{-i{\bf q}\cdot{\bf r}_3}|\psi_R\rangle,
\end{equation}
\begin{equation}
S=-\frac{3\mu a}{|{\bf q}|}\langle\psi_f|
e^{-i{\bf q}\cdot{\bf r}_3}|\psi_R\rangle,
\end{equation}
where $p^-_3=p_x-ip_y$.
In Eq.(\ref{htm}), $L^-_{(3)}$ and $\sigma^-_{(3)}$
denote orbital and spin flip operators.
The helicity amplitudes $A^V_{\frac 12}$, $A^V_{\frac 32}$
and $S^V_{\frac 12}$ are the matrix elements of Eq.(\ref{htm})
and Eq.(\ref{hlm}).
We list the angular momentum and flavor parts of $A^V_{\frac 12}$,
$A^V_{\frac 32}$
and $S^V_{\frac 12}$ for $\omega$ and $\rho$ photoproduction
in Tables 4-6 in the
$SU(6)\otimes O(3)$ limit with $A$, $B$ and $S$ in the
second row to denote the corresponding spatial integrals,
which are given in Table 7.
The resonances with $n\ge 3$ are treated as degenerate since there is
little information available about them. Their longitudinal
transition in the s-channel is given by:
\begin{equation}
h^J_{a\lambda_V=0}=(M^s_3(L)+M^s_2(L))
e^{-\frac{{\bf q}^2+{\bf k}^2}
{6\alpha^2}}
\end{equation}
where
\begin{eqnarray}
M^s_3(L)&=&g^s_3 \frac{a\mu}{|{\bf q}|}
\{-\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
\frac{1}{n!}(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2})^n\nonumber\\
&& +g_v\frac{\omega}{3\alpha^2}{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}\frac{1}{(n-1)!}
(\frac{{\bf k\cdot
q}}{3\alpha^2})^{n-1}\},
\label{s-l-3}
\end{eqnarray}
and
\begin{eqnarray}
M^s_2(L)&=&-g^u_2 \frac{a\mu}{|{\bf q}|}
\{g^\prime_a\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
\frac{1}{n!}(\frac{-{\bf k\cdot
q}}{6\alpha^2})^n\nonumber\\
&&+g^\prime_v\frac{\omega}{6\alpha^2}
{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}\frac{1}{(n-1)!}(\frac{-{\bf k\cdot
q}}{6\alpha^2})^{n-1}\}.
\label{s-l-2}
\end{eqnarray}
The $g$-factors in Eq.(\ref{s-l-3}) and (\ref{s-l-2})
have been defined previously, and
\begin{equation}
g^s_3=\langle N_f|\sum_{j} {\hat I}_j e_j\sigma^z_j|N_i\rangle/g_A=
e_m+g^u_3,
\end{equation}
where $e_m$ is the charge of the outgoing vector meson.
The transverse transition amplitudes at the quark level are:
\begin{equation}
h^J_{a\lambda_V=\pm 1}=(M^s_3(T)+M^s_2(T))
e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}
\end{equation}
where
\begin{eqnarray}
M^s_3(T)/g^s_3&=&\frac{b^\prime}{4m^2_q}\{g_v({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
+i\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
\times(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\}\frac{1}{n!}(\frac{{\bf k\cdot
q}}{3\alpha^2})^n\nonumber\\
&&+\{-\frac{ia}{12m^2_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\mbox{\boldmath $\epsilon$ \unboldmath}_v
\cdot{\bf k}
+\frac{ib^\prime\omega}{6m_q\alpha^2}\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\mbox{\boldmath $\epsilon$ \unboldmath}
\cdot{\bf q}\nonumber\\
&&+g_v\frac{a\omega}{6m_q}\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot\mbox{\boldmath $\epsilon$ \unboldmath}\}\frac{1}{(n-1)!}
(\frac{{\bf k}\cdot {\bf q}}{3\alpha^2})^{n-1}\nonumber\\
&&+g_v\frac{a\omega}{18m_q\alpha^2}\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf k}\mbox{\boldmath $\epsilon$ \unboldmath}
\cdot{\bf q} \frac{1}{(n-2)!}
(\frac{{\bf k}\cdot{\bf q}}{3\alpha^2})^{n-2},
\end{eqnarray}
and
\begin{eqnarray}
M^s_2(T)/g^u_2&=&\frac{b^\prime}{4m^2_q}\{g^\prime_v({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf
k})+ig^\prime_a\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
\times(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\}\frac{1}{n!}(\frac{-{\bf
k\cdot q}}{6\alpha^2})^n\nonumber\\
&&+\{\frac{ia}{24m^2_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf
k}-\frac{ib^\prime\omega}{12m_q\alpha^2}
\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\mbox{\boldmath $\epsilon$ \unboldmath}\cdot{\bf q}\nonumber\\
&&-g^\prime_v\frac{a\omega}{12m_q}\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot\mbox{\boldmath $\epsilon$ \unboldmath} \}
\frac{1}{(n-1)!}(\frac{-{\bf k\cdot
q}}{6\alpha^2})^{n-1}\nonumber\\
&&+g^\prime_v\frac{a\omega}{72m_q\alpha^2}\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot{\bf k}\mbox{\boldmath $\epsilon$ \unboldmath}
\cdot{\bf q} \frac{1}{(n-2)!}(\frac{-{\bf k}\cdot {\bf q}}{6\alpha^2})^{n-2}.
\end{eqnarray}
Qualitatively, we find that the resonances with larger partial waves
have larger decay widths into the vector meson and nucleon, though this is
not as explicit as in the pseudoscalar case~\cite{pseudoscalar,eta}.
Thus, we could use the masses
and decay widths of the high-spin states, such as $G_{17}(2190)$ for the $n=3$
states and $H_{19}(2220)$ for the $n=4$ states, in the $\omega$ photoproduction.
The relations between these operators and the helicity amplitudes
$h_{a\lambda_V}$ have been given in Tables 2 and 3.
\section*{\bf 4. The numerical results}
Before discussing the details of the $\omega$, $\rho$ and $\phi$
productions, it should be pointed out that the nonrelativistic
wave function in the quark model becomes increasingly inadequate
as the energy of the system increases.
A procedure to partly remedy this problem is to introduce
the Lorentz boost factor in the spatial integrals
that involve the spatial wavefunctions of nucleons
and baryon resonances,
\begin{equation}
R(q,k)\to \gamma_q\gamma_k R(q\gamma_q, k\gamma_k),
\end{equation}
where $\gamma_q=\frac{M_f}{E_f}$ and $\gamma_k=\frac{M_i}{E_i}$.
A similar procedure has been used in the numerical evaluation of
pseudoscalar meson photoproduction~\cite{eta}. There are two overall
parameters in the quark model formalism: the quark mass $m_q$ and the
parameter $\alpha$ related to the harmonic oscillator strength. We
adopt the values commonly used in the quark model approach
for these parameters,
\begin{eqnarray}
m_q&=&330\quad\hbox{MeV},\nonumber\\
\alpha &=& 410\quad\hbox{MeV}.
\end{eqnarray}
Now, we turn our attention to the details of
the $\omega$, $\rho^0$, $\rho^\pm$ and $\phi$ photoproductions.
\subsection*{\bf 4.1. The $\omega$ photoproduction}
The t-channel exchange term $M^t_{fi}$ in Eq.(\ref{3.1}) would correspond to
$\omega$ exchange, which is absent since the amplitude is proportional
to the charge of the outgoing $\omega$ meson.
As discussed in Ref. \cite{FrimanSoyeur},
the $\pi^0$ exchange dominates over other meson exchanges
in the small $t$ region and is largely responsible for the
diffractive scattering behavior near threshold.
The Lagrangian for the
$\pi^0$ exchange model has the following form~\cite{FrimanSoyeur},
\begin{equation}\label{3}
L_{\pi NN}=-i g_{\pi NN}\overline\psi \gamma_5(\vtau\cdot\vpi)\psi
\end{equation}
for the $\pi NN$ coupling vertex, and
\begin{equation}\label{4}
L_{\omega \pi^0 \gamma}=e_N\frac{ g_{\omega\pi\gamma} }{M_\omega}
\epsilon_{\alpha\beta\gamma\delta}\partial^\alpha A^\beta
\partial^\gamma\omega^\delta\pi^0
\end{equation}
for the $\omega\pi\gamma$ coupling vertex, where the $\omega^\delta$
and $\pi^0$ represent
the $\omega$ and $\pi^0$ fields, the $A^\beta$ denotes the electromagnetic field,
and $\epsilon_{\alpha\beta\gamma\delta}$ is the Levi-Civita tensor, and $M_\omega$
is the mass of $\omega$ meson. The $ g_{\pi NN}$ and $ g_{\omega\pi\gamma}$ in Eqs.
(\ref{3}) and (\ref{4}) denote the coupling constants at the two
vertices, respectively. The transition amplitudes for t-channel $\pi^0$
exchange then take the following form,
\begin{equation}
M^t_T(\pi^0)= \frac{e_Ng_{\pi NN} g_{\omega\pi\gamma}}{2M_\omega(t-m^2_\pi)}
\{\omega\mbox{\boldmath $\epsilon$ \unboldmath}\cdot({\bf q}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)
+\omega_m{\bf k}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times\mbox{\boldmath $\epsilon$ \unboldmath}_v)\}
\mbox{\boldmath $\sigma$ \unboldmath}\cdot {\bf A}
e^{-\frac {({\bf q}-{\bf k})^2}{6\alpha_\pi^2}}
\label{t}
\end{equation}
for the transverse transition, and
\begin{equation}
M^t_L(\pi^0)= -\frac{e_Ng_{\pi NN} g_{\omega\pi\gamma}}{2M_\omega(t-m^2_\pi)}
\frac{ M_\omega}{|{\bf q}|}(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\cdot{\bf q} \mbox{\boldmath $\sigma$ \unboldmath}\cdot {\bf A}
e^{-\frac {({\bf q}-{\bf k})^2}{6\alpha_\pi^2}}
\label{l}
\end{equation}
for the longitudinal transition, where
$\omega$ in the transition amplitudes
denotes the energy of the photon with momentum ${\bf k}$, and
${\bf A}=-\frac{{\bf q}}{E_f+M_N}+\frac{{\bf k}}{E_i+M_N}$,
and $t=(q-k)^2=M_\omega^2-2k\cdot q$.
The factor $e^{-\frac {({\bf q}-{\bf k})^2}{6\alpha_\pi^2}}$ in Eqs.
(\ref{t}) and (\ref{l}) is the form factor for both $\pi NN$ and
$\omega \gamma \pi$ vertices, if we assume
that the wavefunctions for nucleon, $\omega$ and $\pi$ have a Gaussian form.
The constant
$\alpha_\pi$ in this form factor is treated as a parameter.
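As a rough numerical check (a sketch with assumed example values for $E_\gamma$ and the scattering angle, not from the text), the size of $t$ and of this Gaussian form factor at forward angles can be estimated from two-body c.m. kinematics:

```python
import math

# Estimate t = M_w^2 - 2 k.q and the form factor exp(-(q-k)^2/(6 alpha_pi^2))
# at a forward c.m. angle (values in GeV; E_gamma and cos_theta are assumed).
M_N, M_V = 0.938, 0.782
E_gamma, alpha_pi = 1.7, 0.300

W = math.sqrt(M_N**2 + 2.0 * M_N * E_gamma)       # total c.m. energy
k = (W**2 - M_N**2) / (2.0 * W)                   # photon c.m. momentum (= energy)
E_V = (W**2 + M_V**2 - M_N**2) / (2.0 * W)        # omega c.m. energy
q = math.sqrt(E_V**2 - M_V**2)                    # omega c.m. momentum

cos_theta = 1.0                                   # forward direction
k_dot_q = k * E_V - k * q * cos_theta             # four-vector product k.q
t = M_V**2 - 2.0 * k_dot_q                        # Mandelstam t

qk_sq = q**2 + k**2 - 2.0 * q * k * cos_theta     # three-vector (q - k)^2
ff = math.exp(-qk_sq / (6.0 * alpha_pi**2))       # pi-exchange form factor
print(t, ff)
```

At forward angles $|t|$ is small (a few times $10^{-2}$ GeV$^2$) and the form factor stays close to unity, consistent with the dominance of $\pi^0$ exchange in the small $t$ region.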
The coupling constants $ g_{\pi NN}$ and $ g_{\omega\pi\gamma}$
take the values used in Ref.~\cite{FrimanSoyeur},
\begin{eqnarray}
\frac{g^2_{\pi NN}}{4\pi}&=& 14,\nonumber\\
g^2_{\omega\pi\gamma}&=&3.315 \ \ .
\end{eqnarray}
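For reference (a trivial numerical conversion, not part of the original text), the couplings themselves follow from the quoted squared values:

```python
import math

# Convert the quoted squared couplings to the couplings themselves:
# g_piNN^2 / (4 pi) = 14 and g_{omega pi gamma}^2 = 3.315.
g_piNN = math.sqrt(4.0 * math.pi * 14.0)
g_opg = math.sqrt(3.315)
print(g_piNN, g_opg)
```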
Note that the
values of $g_{\pi NN}$ and $g_{\omega\pi\gamma}$ were fixed by separate
experiments
and, therefore, are not free parameters in Ref.\cite{FrimanSoyeur}.
Qualitatively, we would
expect $\alpha_{\pi}$ to be smaller than the
parameter $\alpha=410$ MeV, since it represents
the combined form factors for both $\pi NN$ and
$\omega\pi\gamma$ vertices while the parameter
$\alpha$ only corresponds to the form factor for
the $\pi NN$ or $\omega NN$ vertex alone.
Following the same procedure
as in Section 2, the explicit expressions for the
operators in terms of the helicity
amplitudes can be obtained. They are listed in Tables 2
and 3 for the transverse and
longitudinal amplitudes, respectively.
As shown in Table 4, the Moorhouse
selection rule\cite{moor} eliminates the states
belonging to the $[{\bf 70}, 1^-]_1$ and $[{\bf 70}, 2^+]_2$ representations with
symmetric spin structure from contributing to
$\omega$ photoproduction on the proton
target, so that the s-channel states
$S_{11}(1650)$, $D_{13}(1700)$ and $D_{15}(1675)$ are not
present in our numerical evaluations.
Of course, configuration mixing will lead to additional
contributions from these resonances which,
however, cannot be determined at present due
to the poor quality of data.
Only the resonances $P_{13}(1900)$ and
$F_{15}(2000)$, at present classified as 2-star resonances
in the 1996 PDG listings, have masses above the $\omega$ decay threshold
and therefore have a branching ratio into the $\omega N$ channel.
We have not performed a rigorous numerical fit to the available
data because of their poor quality. However,
the numerical results show that the resonance $F_{15}(2000)$
plays a very important role in $\omega$ photoproduction.
Fig. 1 shows our calculations for the differential cross section
at the average photon energies of $E_\gamma=$1.225, 1.45, 1.675
and 1.915 GeV, in comparison with the data~\cite{saphir}. The results for
the t-channel $\pi^0$ exchange and contributions from only
the s- and u-channel processes are also shown separately. We find that
the remaining parameters in our model are
\begin{eqnarray}\label{omega}
a & = & -1.7 \nonumber \\
b^\prime & = & 2.5 \nonumber \\
\alpha_{\pi}&=& 300 \quad \hbox{MeV} .
\end{eqnarray}
These values give a good overall agreement with the data, particularly in
the large $t$ region. Our results with the $\pi^0$ exchange are consistent
with the findings of Ref. \cite{FrimanSoyeur} though the form factor in
our calculation is different. Fig. 1 clearly demonstrates that
the t-channel $\pi^0$ exchange is dominant in the small $t$ region,
while the s- and u-channel resonance contributions become more important
as the momentum transfer $t$ increases.
To test the sensitivity of the differential
cross section to s-channel resonances, the differential cross section
without the contribution from the resonance $F_{15}(2000)$
at 1.675 GeV (near the threshold of $F_{15}(2000)$)
is presented in Fig. 1-(c) as well. The results indicate that
the differential cross section data alone are not sufficient to
determine the presence of this resonance considering the theoretical
and experimental uncertainties. Since our numerical calculation shows
that the resonance couplings of the
$F_{15}(2000)$ are larger than those of other resonances in this
mass region, the sensitivity of the differential cross section to
other resonances around 2 GeV is even smaller.
In contrast to the differential cross section,
the polarization observables show a much more dramatic dependence
on the presence of the s-channel resonances. We present
results for four single polarization observables at 1.7 GeV in Fig. 2. The absence
of the resonance $F_{15}(2000)$ leads to a sign change in the
target polarization, and the variations in the recoil
and meson polarization observables are also very significant.
The absence of the resonance
$P_{13}(1900)$, also shown in Fig. 2,
leads to very significant changes in the recoil
polarization. Although we
do not expect our numerical results to give a quantitative
prediction of
polarization observables at the present stage,
since the calculations are limited
to the $SU(6)\otimes O(3)$ symmetry limit, which is broken in
more realistic quark model wavefunctions, our results clearly suggest that the
polarization observables may be the best place to determine s-channel
resonance properties.
Our results for the total cross section are shown in Fig. 3,
in which the contributions from the s- and
u-channel resonances alone are compared to the full calculation.
Our results indicate
an increasing discrepancy between theory and
the data~\cite{saphir,ABHMC,olddata}
with increasing energy $E_{\gamma}$. This discrepancy
comes mainly from the small angle region where the $\pi^0$
exchange alone is not sufficient to describe the
diffractive behavior at higher energies. One might expect
that Pomeron exchange\cite{pomeron,wolf}
plays a more important role in the higher energy region.
However, Fig. 1 shows that our results for
the differential cross section in the large angle region are
in good agreement with the data, suggesting
that contributions from the s- and u-channel resonances,
which are the main focus of our study, give
an appropriate description of the reaction mechanism.
It is interesting to note that the small bump around 1.7 GeV in the total
cross section
comes from the contributions of the resonance $F_{15}(2000)$.
As discussed above, our calculations
find that the resonance $F_{15}(2000)$
has a strong coupling to the $\omega N$ channel. Thus,
this resonance is perhaps the best candidate
whose existence as a ``missing" resonance can be
established through $\omega$ photoproduction.
\subsection*{\bf 4.2. The $\rho^0 $ photoproduction}
$\rho^0$ meson photoproduction shares several features
with $\omega$ photoproduction. The most significant one is that it also shows
strongly forward-peaked diffractive behavior in the
differential cross section. The t-channel vector meson exchange
is proportional to the meson charge in (\ref{3.1}),
and therefore does not contribute to the transition amplitudes.
However, since the $\rho$ meson is an isovector, its photoproduction
also differs from $\omega$
photoproduction in some respects: more s-channel resonances, such as the $\Delta$
resonances, contribute to $\rho^0$ production due to the isospin
couplings.
With the Lagrangian introduced in our quark model approach,
the amplitudes from the s- and u-channels alone are not sufficient
to reproduce the diffractive behavior in the small $t$ region.
Following what we have discussed at the beginning of this
section, it is reasonable to include an additional t-channel
meson exchange term in the transition amplitude\cite{collins}.
As Friman and Soyeur\cite{FrimanSoyeur} discussed in their
work, the $\sigma$ meson plays a dominant role over other
t-channel processes in $\rho^0$ photoproduction. As a
phenomenological approach, $\sigma$ exchange is included to
give the small $t$ diffractive behavior, which can be understood
as the still sizable contribution of Pomeron exchange
from high energies down to the threshold\cite{Freund,Harari}.
In terms of
Regge phenomenology, it corresponds to the nontrivial background
integral of the Regge trajectory expansion\cite{collins},
which contributes to the diffractive behavior in the small
$t$ region. We note, however, that only when the resonance
contributions are taken into account consistently can
the proper large $t$ behavior be described.
The transverse and longitudinal
transition amplitudes for $\sigma$ exchange are
\begin{equation}
M^t_T(\sigma)=\frac{ie_N g_{\sigma NN}g_{\rho^0\sigma\gamma}}
{M_\rho(M^2_\rho-2q\cdot k-M^2_\sigma)}
\{-\omega\omega_m\mbox{\boldmath $\epsilon$ \unboldmath}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v+{\bf k}\cdot{\bf q}
\mbox{\boldmath $\epsilon$ \unboldmath}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v-{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath} {\bf k}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v \}
e^{-\frac{({\bf q}-{\bf k})^2}{6\alpha^2_\sigma}},
\end{equation}
\begin{equation}
M^t_L(\sigma)=\frac{i e_N g_{\sigma NN}g_{\rho^0\sigma\gamma}}
{M^2_\rho(M^2_\rho -2q\cdot k-M^2_\sigma)}
\omega |{\bf q}|{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}
e^{-\frac{({\bf q}-{\bf k})^2}{6\alpha^2_\sigma}},
\end{equation}
where $ e^{-\frac{({\bf q}-{\bf k})^2}{6\alpha^2_\sigma}}$
is the form factor for t-channel $\sigma$ exchange and
$\alpha_\sigma $ is treated as a parameter.
$ g_{\sigma NN}$ and $ g_{\rho^0\sigma\gamma}$
are the coupling constants of the
$\sigma NN$ and $\rho^0\sigma\gamma $ vertices, respectively,
which are determined by experimental analyses.
As a phenomenological freedom, the mass of the $\sigma$ meson
takes the same value as in \cite{FrimanSoyeur},
$M_\sigma=500$ MeV. $ g_{\sigma NN}$ and
$ g_{\rho^0\sigma\gamma}$ have the following values,
\begin{eqnarray}
\frac{g^2_{\sigma NN}}{4\pi}&=&8,\nonumber\\
g^2_{\rho^0\sigma\gamma}&=&7.341\quad .
\end{eqnarray}
Numerical investigation indicates that the $\pi^0$ exchange
contribution in $\rho^0$ photoproduction is so small that it
can be neglected to first order.
With the same values for the parameters $a$ and $b^\prime$,
we fit the differential cross sections, total cross section and
the single spin polarizations in $\gamma p\to \rho^0 p$.
For $n\le 2$, there are 27 resonances given by the quark model.
However, the Moorhouse selection rule eliminates those belonging
to the representations
$[{\bf 70},1^-]_1$, $[{\bf 70}, 0^+]_2$ and $[{\bf 70}, 2^+]_2$;
among these, $S_{11}(1650)$, $D_{13}(1700)$
and $ D_{15}(1675)$ belong to $[{\bf 70},1^-]_1$,
while the states belonging to $[{\bf 70}, 0^+]_2$
and $[{\bf 70}, 2^+]_2$ have not been well determined
experimentally.
The experimental data from \cite{saphir} are
reproduced in our model
at $E_\gamma =$ 1.225, 1.305, 1.4, 1.545, 1.730 and 1.925 GeV for
the differential cross sections. We find that the same set of
parameters as in the $\omega$ case can also be used to describe the $\rho$
production. The additional parameter $\alpha_{\sigma}$ is found
to be
\begin{equation}
\alpha_{\sigma}=250 \quad \hbox{MeV}.
\end{equation}
The fact that both $\omega$ and $\rho^0$ productions can be described by
the same set of parameters is by no means trivial. It shows the
advantage of introducing quark degrees of freedom, so that the
$\omega$ and $\rho$ can be described within a unified framework; moreover, the
isospin mixing between the $\omega$ and $\rho$ is small and they have very
similar masses. Although the $\sigma$ exchange dominates in the small
$t$ region, the s- and
u-channel contributions clearly play important roles in the large $t$
region. In fact, it is the s- and u-channel contributions that
produce the backward peaking behavior, which is similar to
the Compton scattering phenomenon\cite{capstickcompton}.
In Fig.~4, the individual results from $\sigma$ exchange and
from the s- and u-channels are presented as well. They show that the cross
sections come mainly from the diffractive process. Moreover,
since the number of contributing resonances is large, the
effect of any individual resonance is not
significant; it is therefore very difficult to extract
resonance information from the differential cross sections.
The polarization results are provided in Fig.~5. Much attention
has been paid to the ``missing resonance" $F_{15}(2000)$.
The polarization observables are quite sensitive
to the presence of this state, especially the recoil
polarization and meson polarization. Double spin polarization
investigations are in progress in our framework and they should
be even more sensitive to the resonances.
In Fig.~6 all available data\cite{saphir,ABHMC,olddata} for
$\gamma p\to \rho^0 p$ are presented. With the same set of parameters,
the theoretical
result gives a good description of the experimental data.
In Fig.~6 the dotted line denotes the contributions from the s- and u-
channels, which shows that the cross section for producing $\rho^0$
through the resonance channels is quite small compared with
that through the diffractive process. This also explains why
the effect of an individual resonance is not significant
in the differential cross sections.
\subsection*{\bf 4.3. $\rho^\pm $ photoproduction}
In the charged meson productions, $\gamma n\to \rho^- p$ and
$\gamma p\to \rho^+ n$, all three channels, s-, u- and t-, contribute
to the
transition amplitudes. The charge exchange process
eliminates the diffractive contributions present
in the $\omega$ and $\rho^0$ photoproductions, while
the t-channel charged vector meson exchange and the seagull
term account for the small forward peaking shapes
of the differential cross sections; this is also required
by the duality hypothesis once we have taken
the contributions from all the s- and u-channel processes
into account. Therefore, the charged meson productions provide an
important test of this approach, as every term in these two reactions is
generated by the effective Lagrangian and there are no additional free
parameters.
In $\gamma p\to \rho^+ n$, owing to the Moorhouse selection rule
at the photon interaction vertex, the resonances belonging
to $[{\bf 70},1^-]_1$, $[{\bf 70}, 0^+]_2$
and $[{\bf 70}, 2^+]_2$ are eliminated from contributing to
the amplitudes. In $\gamma n\to \rho^- p$, however, without
the constraint from the Moorhouse selection rule, more resonances
contribute to the amplitudes.
As we have shown in the $\omega$ and $\rho^0$ photoproductions,
the same set of parameters $a$ and $b^\prime$ gives an overall
agreement with the available data. The challenge is
whether we can reproduce the data\cite{benz} in the $\rho^\pm$
channels with the same parameters,
since these channels possess the same isospin symmetry; this is
a crucial test of the model. As expected,
the data in the reaction $\gamma n\to \rho^- p$ are in
very good agreement
with the quark model predictions, indicating that the quark model
wave functions appear to provide the correct relative strengths
and phases among
the terms in the s-, u- and t-channels. In Fig.~7(a),
the experimental data for $\gamma n\to \rho^- p$
from Ref.\cite{benz} are presented at $E_\gamma=1.85$ GeV,
the average energy of the measured range. In Fig.~7(b),
we present the predicted differential cross section
for $\gamma p\to \rho^+ n$, which shows a similar behavior
to that in $\gamma n\to \rho^- p$.
In Fig.~8, the total cross sections for $\rho^\pm$ \cite{benz}
are presented. In Fig.~8(a), the cross section of $\rho^-$
photoproduction
with $0<|t|<1.1\ \hbox{GeV}^2$ is well reproduced,
and the total cross section is also consistent with
the experimental estimate\cite{hilpert}. The prediction
of the total cross section of $\rho^+$ photoproduction is
given in Fig.~8(b).
While the shapes and magnitudes of the differential cross sections are
well reproduced within our approach, we find little sensitivity to
individual resonances. For example,
in the energy region of $E_{\gamma}\sim 1.7$ GeV, removing the
$F_{15}(2000)$ state --- one of the ``missing" candidates ---
changes the cross section very little, indicating that
the differential cross section may not be the
ideal experimental observable for studying the structure
of the baryon resonances.
In contrast to the cross sections, the polarization observables
show a more dramatic dependence on the presence of the s-channel
resonances. To illustrate their effects, as an example,
the target polarizations for $\rho^-$ and $\rho^+$
production with and without the contribution from
the $F_{15}(2000)$ resonance are shown in Fig. 9.
We do not expect the quark model in the
$SU(6)\otimes O(3)$ limit to provide a good description of these observables.
However, it demonstrates the sensitivity of these observables
to the presence of s-channel resonances.
This shows that polarization observables are essential
in analyzing the role of s-channel resonances.
\subsection*{\bf 4.4. The $\phi$ photoproduction}
Because the isospin of the $\phi$ is the same as that of the
$\omega$, the formalism for the s- and u-channel contributions
to $\phi$ production is the same as that for $\omega$
production except for the different threshold energies. The major
difference between the $\omega$ and $\phi$ productions lies in the
mechanism of generating the quark pairs: $u$ ($d$) and $\overline u$ ($\overline d$)
quarks are created for $\omega$ production, while creation of the
$s$ and $\overline s$ quarks
for $\phi$ production is suppressed by the OZI rule. This
difference is reflected in the
coupling constants for the
$\omega$ and $\phi$ productions, with
the coupling constant for the $\phi NN$
vertex expected to be much smaller than that for the $\omega NN$ vertex.
Thus, we shall concentrate on the non-diffractive effects generated
from the effective Lagrangian in the s- and u-channel reactions.
Since we have not included the Pomeron exchange term that gives the strong
diffractive behavior in the small $t$ region, our results can
only be regarded as an estimate.
The numerical results with the following two sets of parameters
are shown in comparison with the data\cite{phidata} in Fig.10.
They represent the $\phi qq$ coupling constants in the non-diffractive
processes of the $\phi$ production,
\begin{eqnarray}\label{phi1}
a&=&-0.35, \nonumber\\
b^\prime &=& 0.7\quad ,
\end{eqnarray}
and
\begin{eqnarray}\label{phi2}
a&=&-0.6, \nonumber\\
b^\prime &=& 1.2 \quad .
\end{eqnarray}
It should be noted that the same value of $\alpha$ as
in the $\omega$ production
has been used in the evaluations at the present stage.
Our results show that the differential cross sections are very
sensitive to the parameters $a$ and $b^\prime$ in the large $t$
region, where the Pomeron exchange is expected to be small. Thus,
the data in the large $t$ region could provide an important
constraint on the $\phi NN$ coupling constants. Moreover, the
coupling constants in Eqs. (\ref{phi1}) and (\ref{phi2}) are indeed
significantly smaller than those for the $\omega$ photoproductions
in Eq.(\ref{omega}). These results are also consistent with those
obtained in the QHD approach\cite{williams}.
\section*{\bf 5. Discussion and conclusion}
In this paper we have developed the framework and formalism
for the description of the vector meson photoproductions in the
constituent quark model. The application
of this approach to the
$\omega$ and $\rho$ meson photoproductions has
produced very encouraging results.
The use of an effective Lagrangian allows the gauge invariance to be
satisfied straightforwardly.
The advantage of using the quark model approach is that
the number of free parameters is greatly reduced in comparison
with hadronic models which introduce each resonance as a new
independent field with unknown coupling constants.
In our approach, only three parameters appear
in the $SU(6)\otimes O(3)$ symmetry limit,
the coupling constants $a$ and $b$ (or $b^\prime$) which determine the
coupling strengths of the vector meson to the quark,
and the harmonic oscillator strength $\alpha$.
With $\pi^0$ and $\sigma$ exchange taken into account,
an overall description of the $\omega$,
$\rho^0$, $\rho^+$ and $\rho^-$ photoproduction
with the same set of parameters has been obtained
in this framework. It shows that intermediate
resonance contributions have played
important roles in the $\omega$ and $\rho$
meson photoproductions especially in large $t$ regions.
This shows that our effective Lagrangian approach in the quark model
has provided an ideal framework to investigate the reaction
mechanism and the underlying quark structure of the baryon
resonances. The crucial role played by the polarization
observables in determining
the s-channel resonance properties is demonstrated.
Data on these observables, expected from TJNAF in the near future,
should therefore provide new insights into the
structure of the resonance $F_{15}(2000)$
as well as other ``missing" resonances.
The introduction of t-channel $\pi^0$ and $\sigma$ exchange
in the neutral productions can be qualitatively understood
in the picture of Regge phenomenology\cite{collins}
or the diffraction duality
picture of Freund\cite{Freund} and Harari\cite{Harari}.
In such a picture,
the large difference in the cross sections
between $\rho^0$ and $\rho^\pm$ is due to a background
amplitude originating from a sizable contribution
of the Pomeron singularity in $\rho^0$ photoproduction
from high energies down to the threshold.
Our numerical investigation indeed shows this to be the case
and therefore suggests, to some extent, that the
duality hypothesis\cite{duality}
also constrains the vector meson photoproductions.
In $\phi$ photoproduction, the sizable non-diffractive
contributions can be phenomenologically interpreted as the
s- and u-channel contributions generated from the effective Lagrangian.
Further studies that include the Pomeron exchange in this approach
will be pursued later.
One significant approximation inherent in the presented approach
is the treatment of the vector mesons as point particles, thus,
the effects due to the finite size
of the vector mesons that were important in the $^3P_0$ model
are neglected here.
A possible way that may partly compensate for this problem
is to adjust the parameter $\alpha^2$, the harmonic
oscillator strength.
In general, the question of how to include the finite
size of vector mesons
while maintaining the gauge invariance is
very complicated and has not yet been resolved.
Moreover, as configuration mixing effects for the
resonances in the second and third resonance
region are known to be very important, more precise
quantitative agreement with
the data cannot be derived from the current form.
But such effects could be investigated
in our approach by inserting a
mixing parameter $C_R$ in front of the transition
amplitudes for the s-channel
resonances, as has been investigated in Ref.~\cite{eta}.
The fact that the $\omega$ and $\rho$ productions
can be described by the same set of parameters shows the
success of the quark model approach. Thus, the model
presented here could provide, for the first time, a systematic
method to investigate
the resonance behavior in the vector meson photoproductions,
which will help us to identify the ``missing resonances".
The authors acknowledge useful discussions
with B. Saghai, F. Tabakin and P. Cole.
Helpful discussions with F. J. Klein regarding
the data are acknowledged gratefully.
Zhenping Li acknowledges the hospitality of the
Saclay Nuclear Physics Laboratory. Q. Zhao acknowledges helpful
discussions with Yang Ze-sen and Hongan Peng.
This work is supported in part by Chinese Education
Commission and Peking University, and the US-DOE
grant no. DE-FG02-95-ER40907.
\section*{\bf Appendix}
The matrix element for the nucleon pole term of transverse
excitations in the s-channel is,
\begin{eqnarray}
M^s_N(T)&=&-\frac{M_N
e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}}{P_N\cdot k}
\{g^t_v\frac{\omega ae_N}{E_f+M_f}
\mbox{\boldmath $\epsilon$ \unboldmath}_v\cdot\mbox{\boldmath $\epsilon$ \unboldmath} -g_A\mu_N\frac{b^\prime}{2m_q}
[(\mbox{\boldmath $\epsilon$ \unboldmath}_v\times {\bf q})\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}
\times {\bf k})\nonumber\\
&&+i\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}_v\times {\bf q})\times(\mbox{\boldmath $\epsilon$ \unboldmath}\times {\bf k})]\},
\end{eqnarray}
while the one for the u-channel is,
\begin{eqnarray}
M^u_N(T) & = & -\frac{M_fe^{-\frac{{\bf q}^2
+{\bf k}^2}{6\alpha^2}}}{P_f\cdot k}
\{ g^t_v\frac{\omega ae_f}{E_N+M_N}\mbox{\boldmath $\epsilon$ \unboldmath}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v
+g_A\mu_f\frac{b^\prime}{2m_q}
\big [(\mbox{\boldmath $\epsilon$ \unboldmath}\times {\bf k})\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}_v\times {\bf q})
\nonumber \\
& & + i\mbox{\boldmath $\sigma$ \unboldmath}\cdot((\mbox{\boldmath $\epsilon$ \unboldmath}\times {\bf k})\times(\mbox{\boldmath $\epsilon$ \unboldmath}_v\times {\bf q}))
\big ]\}\nonumber\\
&&+ \frac{ e_f e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}}}
{P_f\cdot k}\{\frac{-g^t_v a}{E_N+M_N}{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}{\bf k}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v
+ig_A\frac{b^\prime}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}_v\times {\bf q})
{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}\}.
\end{eqnarray}
The matrix element for the nucleon pole term of the longitudinal
excitations in the s-channel is,
\begin{equation}
M^s_N(L)=-g^t_v\frac{i\mu a }{|{\bf q}|}
\frac{(w+M_N)}{2 P_N\cdot k}\mu_N\mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})
e^{-\frac{{\bf q}^2
+{\bf k}^2}{6\alpha^2}},
\end{equation}
while the one for the u-channel is,
\begin{equation}
M^u_N(L)=g^t_v\frac{\mu a }{|{\bf q}|}\frac{1}{P_f\cdot k}\{ -e_f{\bf q}
\cdot \mbox{\boldmath $\epsilon$ \unboldmath}+i\mu_f \frac{(w+M_f)}{2w} \mbox{\boldmath $\sigma$ \unboldmath}\cdot(\mbox{\boldmath $\epsilon$ \unboldmath}\times{\bf k})\}
e^{-\frac{{\bf q}^2+{\bf k}^2}{6\alpha^2}},
\end{equation}
where $w=E_i+\omega=E_f+\omega_m$ is the c.m. energy and
the $g$-factor $g^t_v$ has been given in (\ref{gtv}).
The t-channel matrix element for the transverse transition is,
\begin{eqnarray}
M^t(T)&=&-\frac{a e_m}{q\cdot k}
\{-g^t_v[\omega_m+(\frac{{\bf q}}{E_f+M_f}
+\frac{{\bf k}}{E_N+M_N})\cdot{\bf q}]\mbox{\boldmath $\epsilon$ \unboldmath}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v
\nonumber\\
&&+g_A\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot({\bf k}
\times{\bf q})\mbox{\boldmath $\epsilon$ \unboldmath}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v\nonumber\\
&&-g^t_v(\frac{1}{E_f+M_f}+\frac{1}{E_N+M_N})
{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}{\bf k}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v \nonumber\\
&&+g_A\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(({\bf k}-{\bf q})
\times\mbox{\boldmath $\epsilon$ \unboldmath}_v){\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}\nonumber\\
&&+g_A\frac{i}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(({\bf k}-{\bf q})
\times\mbox{\boldmath $\epsilon$ \unboldmath}){\bf k}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}_v
\}e^{-\frac{({\bf k}-{\bf q})^2}{6\alpha^2}},
\end{eqnarray}
and for the longitudinal transition is,
\begin{equation}
M^t(L)=-\frac{\mu}{|{\bf q}|}
\frac{ae_m}{q\cdot k}\{g^t_v(1-\frac{\omega}{E_f+M_f})
{\bf q}\cdot\mbox{\boldmath $\epsilon$ \unboldmath}
+g_A\frac{i\omega}{2m_q}\mbox{\boldmath $\sigma$ \unboldmath}\cdot(({\bf k}-{\bf q})
\times\mbox{\boldmath $\epsilon$ \unboldmath}) \}e^{-\frac{({\bf k}-{\bf q})^2}{6\alpha^2}}.
\end{equation}
\section*{Introduction}
Organic charge-transfer complexes (CTCs) formed by electron-donor and -acceptor molecules are an intriguing and broad class of materials that can exhibit phenomena related to strong electron correlations and electron-phonon coupling such as
charge and spin density waves, Mott metal-insulator transitions, charge ordering, spin-liquid phases, and superconductivity.\cite{Jerome2004review,Enoki2004review,Seo2004review,Powell2006review,Clay2018RepProgPhys,Zhang2017_AccChemRes} In bulk CTC crystals, donor and acceptor molecules typically stack in rows that maximize $\pi-\pi$ electronic overlap along the rows only.\cite{Sing2003} This anisotropy in the overlap results in pseudo one-dimensional electronic dispersion, providing a suitable platform to investigate low-dimensional, as well as low-energy, physics.
Despite the broad spectrum of intriguing physical phenomena that have been reported in bulk CTCs, their two-dimensional (2D) films have been much less studied.\cite{Gonzalez-Lakunza2008,Fernandez-Torrente2008,Jackel2008,Clark2010,Rojas2013CTC,Jeon2016,Rodriguez-Fernandez2017,Hassanien2017,PhysRevB.81.155403} In particular, the studies have been confined to metal substrates, which strongly interact with the molecular layer and mask the intrinsic electronic properties of the CTCs.
The CTC formed out of tetrathiafulvalene (TTF) and tetracyanoquinodimethane (TCNQ) molecules is an archetypal example of a CTC. It possesses the highest bulk conductivity reported so far in a CTC and has been studied in detail.\cite{Jerome2004review,Nishiguchi1998CDW,Wang2003TTFTCNQ,Sing2003,PhysRevB.81.155403} Another widely studied system is formed by the Bechgaard salts consisting of small, planar organic molecules acting as an electron donor combined with an electron accepting small inorganic molecule. These materials are one of the most prominent examples of organic superconductors.\cite{Jerome2004review,Clark2010}
The properties of 2D films of these CTCs on metallic substrates can be strongly influenced by the underlying substrate.
For example, it is possible to form films with other than 1:1 stoichiometry.\cite{Rojas2013CTC,Jeon2016,Rodriguez-Fernandez2017} In some cases, the effect of the substrate can be limited to doping of the film, \textit{e.g.}~in the case of the organic superconductor BETS$_2$GaCl$_4$ monolayer on Ag(111).\cite{Clark2010,Hassanien2017} On the other hand, the substrate interaction can completely dominate the low-energy electronic properties. On Au(111), TTF-TCNQ molecular states of the CTC hybridize with the metal states to form dispersive interface states.\cite{Gonzalez-Lakunza2008} Further, the unpaired electron of TCNQ molecules on the Au(111) surface exhibits the many-body Kondo effect due to screening by the substrate conduction electrons.\cite{Fernandez-Torrente2008}
Thus, the electronic properties of a CTC, especially close to the Fermi energy, can be strongly perturbed by the metal substrate, prohibiting the study of intrinsic electronic properties of CTC. Therefore, preparing 2D films of CTCs on weakly interacting substrates is extremely desirable. Epitaxial graphene grown on Ir(111) has been shown to decouple the adsorbate layer from the underlying metal substrate allowing investigation of intrinsic electronic properties of the adsorbate layers.\cite{Kumar2017review, Kumar2018}
Here, we present a low-temperature scanning tunneling microscopy (LT-STM) study of a 2D CTC of TTF and fluorinated TCNQ (F$_4$TCNQ) self-assembled on the surface of oxygen-intercalated epitaxial graphene on Ir(111) (G/O/Ir(111)). Sequential deposition of the molecules on this surface leads to the formation of rotationally identical domains of CTCs with alternating rows of TTF and F$_4$TCNQ lying parallel to the surface. The frontier molecular orbitals of the molecular species in the CTC, as found from scanning tunneling spectroscopy (STS), indicate charge transfer between TTF and F$_4$TCNQ molecules. High-resolution tunneling spectra exhibit a dip at the Fermi energy, closing at a temperature of 20 K, that may be attributed to the formation of a correlated ground state in the CTC monolayer.
\section*{Results and Discussion}
Figure~\ref{fig:fig1} describes the assembly and structure of the TTF-F$_4$TCNQ CTC on a G/O/Ir(111) surface. The sample preparation is described in detail in the Methods section. Briefly, we grow a near monolayer coverage of graphene on Ir(111) by a combination of temperature programmed growth (TPG) and chemical vapour deposition (CVD), as described previously,\cite{NDiaye2008,coraux2009growth,Hamalainen2013} followed by oxygen intercalation to electronically decouple graphene from the underlying substrate.\cite{Martinez-Galera2016} Finally, the molecules are deposited at low temperatures ($\approx 100$ K), followed by annealing at room temperature for 15-45 min to allow the formation of highly ordered CTC islands.
Figure~\ref{fig:fig1}a shows an STM topography image of oxygen intercalated graphene on Ir(111). The surface contains the periodic moir\'e pattern of a G/Ir(111) surface with a periodicity of 25.4 \r{A}.
The additional superstructure visible on the surface is due to patches of ($2\times1$) reconstruction of subsurface oxygen, which is consistent with an earlier report.\cite{Martinez-Galera2016} Oxygen intercalation leads to decoupling of graphene from Ir, which is indicated by the short-range d$I$/d$V$-spectroscopy of the surface showing a phonon gap of $\sim$160 mV \cite{Zhang2008,Halle2018_NanoLett} (see Supporting Information (SI) Fig.~S1a). Oxygen intercalation also results in strong \textit{p}-doping of graphene by $\sim$0.5 eV,\cite{Ulstrup2014} which increases the work function to $\sim$5.1 eV. This can be independently verified by measuring d$I$/d$V$ spectra at high bias with the feedback loop on $-$ here, the field-emission resonances allow us to estimate the substrate work function \cite{Binnig1985_prl,Lin2007,Schulz2013,Schulz2014_PRB} (See SI Fig.~S1b).
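The work-function estimate from field-emission resonances can be illustrated with a short numerical sketch. In the field-emission (Gundlach) regime the resonance energies scale approximately as $E_n = \Phi + c\,n^{2/3}$, so a linear fit of $E_n$ against $n^{2/3}$ extrapolates to the work function $\Phi$. The resonance energies below are synthetic, chosen only to illustrate the procedure; they are not the measured values:

```python
import numpy as np

# Synthetic field-emission resonance energies (eV), generated from an assumed
# work function of 5.1 eV and an arbitrary field-dependent prefactor c.
phi_true, c = 5.1, 1.4
n = np.arange(1, 7)
E_n = phi_true + c * n ** (2.0 / 3.0)

# Linear fit of E_n vs n^(2/3): the intercept estimates the work function.
slope, intercept = np.polyfit(n ** (2.0 / 3.0), E_n, 1)
print(f"estimated work function: {intercept:.2f} eV")
```

In practice the peak positions $E_n$ would be read off from the high-bias d$I$/d$V$ spectra, and the lowest resonances are usually excluded from the fit since the $n^{2/3}$ scaling holds only asymptotically.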
\begin{figure}[h!]
\includegraphics[width=0.8\textwidth]{fig1.png}
\caption{Assembly and structure of the CTC on oxygen-intercalated graphene. (a) STM topography image of oxygen intercalated graphene on Ir(111). The additional superstructure apart from the moir\'e is due to reconstruction of subsurface oxygen. Scale bar is 3 nm. Imaging parameters: 1.2 nA and 10 mV. (b) Few large islands of CTC on the G/O/Ir(111) surface showing various domains and the domain boundaries. Scale bar is 30 nm. Imaging parameters: 0.4 pA and 0.75 V. (c) A zoomed-in STM image of the CTC shows the arrangement of TTF and F$_4$TCNQ molecules. Each molecule forms a row next to the row of the other molecule. A molecular structure along with a unit cell is overlaid to elucidate the molecular arrangement within the unit cell. Scale bar is 2 nm. Imaging parameters: $\sim$5 pA and 0.1 V. (d) A DFT simulated STM image of the CTC close to the Fermi energy resembles the recorded topography closely. Molecular structure and unit cells are overlaid for clarity.}
\label{fig:fig1}
\end{figure}
Figure~\ref{fig:fig1}b shows an STM topograph of large islands of ordered CTC assembled on a G/O/Ir(111) surface. The long-range ordering is the result of the post-deposition room-temperature annealing; directly after the low-temperature deposition, we observe disordered islands on the surface (see SI Fig.~S2). The CTC islands grow across the step edges in carpet-like fashion \cite{banerjee2016flexible,Yan2020_CuDCA} and contain various domains rotated with respect to each other. Analysis of several images reveals a total of six domain orientations rotated w.r.t.~each other in multiples of 30$^\circ$. Figure~\ref{fig:fig1}c shows a zoomed-in STM image to identify the arrangement of TTF and F$_4$TCNQ molecules within the CTC islands. As evident from the STM image, there are two different rows of molecules: one is composed of TTF and the other of F$_4$TCNQ molecules. Rows of TTF and F$_4$TCNQ alternate across the surface. The molecular structure obtained from density functional theory (DFT) calculations (see below) has been overlaid on the STM image for clarity. The molecular rows are found to be at an angle of $\pm$12$^\circ$ w.r.t.~graphene's zigzag direction for each domain. The unit cell of the CTC is shown by a parallelogram with lattice parameters \textit{a} = 18.5 ($\pm$0.5) \r{A}, \textit{b} = 9.5 ($\pm$0.5) \r{A}, $\theta$ = 56 ($\pm$2)$^\circ$. This is the most common phase we observe for this stoichiometry ((F$_4$TCNQ$)_1$(TTF)$_1$) of the molecules. At a slightly different stoichiometry ((F$_4$TCNQ)$_x$(TTF)$_{1-x}$), we have observed a checkerboard phase of the CTC where only F$_4$TCNQ rows are present and TTF molecules are dispersed among them in a checkerboard fashion (see SI Fig.~S3).
In order to further elucidate the structure of the molecular layer, we carried out a broad structural search for different possible geometries using DFT (see Methods for details). We performed full structural relaxations of 300 CTC monolayers sampled by varying intermolecular distance, bond angles, and alignment with respect to the underlying graphene. The initial structures were generated systematically but ``by hand'', without any input from machine learning or structure-search algorithms.\cite{Egger/etal:2020,Jarvi/Rinke/Todorovic:2020} After relaxation, the structures are sorted by formation energy. One of the low energy conformations closely matches the experimental structure both in terms of the unit cell dimensions (\textit{a} = 17.78 \r{A}, \textit{b} = 8.89 \r{A}, $\theta$ = 60$^\circ$) and the relative orientation w.r.t.~the graphene lattice (13.89$^\circ$). A DFT simulated STM image (at Fermi energy) is shown in Fig.~\ref{fig:fig1}d for the optimized geometry; it closely resembles the STM image shown in Fig.~\ref{fig:fig1}c.
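As a minimal sketch (not the authors' actual scripts), the systematic ``by hand'' generation of initial geometries can be pictured as a grid scan over the three sampled degrees of freedom. The ranges and grid sizes below are hypothetical, chosen only so that their product gives the 300 starting structures mentioned above:

```python
import itertools
import numpy as np

# Hypothetical sampling grids (Angstrom and degrees); the actual ranges used
# in the structural search are not specified in the text.
distances = np.linspace(8.0, 11.0, 5)    # intermolecular distance
angles = np.linspace(0.0, 150.0, 6)      # relative molecular orientation
alignments = np.linspace(0.0, 27.0, 10)  # rotation w.r.t. graphene lattice

# One parameter set per initial structure, each to be fully relaxed with DFT
# and then ranked by formation energy.
initial_structures = [
    {"d": d, "angle": a, "alignment": g}
    for d, a, g in itertools.product(distances, angles, alignments)
]
print(len(initial_structures))  # 5 * 6 * 10 = 300
```

Each parameter set would then be turned into an atomic geometry and relaxed; sorting the relaxed structures by formation energy identifies the candidates to compare against the experimental unit cell.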
We have also looked at the assembly of single component F$_4$TCNQ and TTF layers on the G/O/Ir(111) surface. A sub-monolayer coverage of F$_4$TCNQ molecules forms chain-like structures (in contrast to non-planar adsorption on the G/Ir(111) surface \cite{Kumar2017}). On the other hand, TTF molecules tend to assemble in a close-packed geometry on the G/O/Ir(111) surface. The assembly of F$_4$TCNQ and TTF molecules is shown in SI Figs.~S4 and S5.
\begin{figure}[h]
\includegraphics[width=0.9\textwidth]{fig2}
\caption{Charge transfer across the molecules. (a) Long range d$I$/d$V$-spectra on F$_4$TCNQ molecules in a single-component chain on the G/O/Ir(111) surface (red line) and on the F$_4$TCNQ sites in the CTC (black line). (b) Long range d$I$/d$V$-spectra on TTF molecules in a single component assembly on G/O/Ir(111) (blue line) and on the TTF sites in the CTC (black line). (c) Bias dependent STM images of the CTC at the sample biases indicated in the figure. Size of each image is $4.7\times3.2$ nm$^2$.}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows the experimental verification of charge transfer between TTF and F$_4$TCNQ molecules in the CTC by d$I$/d$V$ spectroscopy and STM imaging. Fig.~\ref{fig:fig2}a compares long-range d$I$/d$V$ spectra recorded on F$_4$TCNQ molecules in single-component chains to those recorded in the CTC. The spectrum on the molecule in the chain shows a resonance corresponding to the lowest unoccupied molecular orbital (LUMO) at 0.64 V without any features at negative bias. This indicates that the F$_4$TCNQ molecules on G/O/Ir(111) are neutral, in contrast to F$_4$TCNQ molecules on a G/Ir(111) surface, where they are charged at lower sites of the moir\'e pattern.\cite{Kumar2017} This difference is likely due to the increased work function of graphene due to oxygen intercalation. The spectrum recorded on an F$_4$TCNQ molecule in the CTC, on the other hand, shows two peaks at -0.44 V and 1.2 V.
Fig.~\ref{fig:fig2}b compares the d$I$/d$V$ spectrum on TTF molecules from the pristine assembly on a G/O/Ir(111) surface to that of TTF molecules from the CTC. Here, the d$I$/d$V$ spectrum on a TTF molecule shows a peak at -0.8 V, corresponding to the highest occupied molecular orbital (HOMO) of a neutral TTF molecule. Despite the high work function of the surface ($\sim$5.0 eV), the TTF molecules stay neutral. In the CTC, the spectrum on TTF molecules shows two peaks at -0.9 V and 0.95 V (similar to the two peaks on an F$_4$TCNQ molecule). The assignment of these peaks is based on images recorded at sample biases of -0.5 and 0.8 V. The image at 0.8 V shows a relatively prominent TTF HOMO, while the image at -0.5 V shows a relatively prominent F$_4$TCNQ LUMO \cite{Kumar2017} (see Fig.~2c). Electron transfer from donor TTF to acceptor F$_4$TCNQ molecules results in splitting of the TTF HOMO (-0.8 V peak) into singly occupied (SOMO, -0.9 V peak) and singly unoccupied molecular orbitals (SUMO, 0.95 V peak). Similarly, the F$_4$TCNQ LUMO (0.64 V peak) splits into a SOMO (-0.44 V peak) and a SUMO (1.2 V peak) after accepting an electron. Consequently, the TTF molecules acquire a positive charge while the F$_4$TCNQ molecules become negatively charged in the CTC. The charge transfer between the molecules is also supported by DFT calculations; based on Hirshfeld charge analysis \cite{hirshfeld} it amounts to $\sim$0.55 e in this configuration. Each N atom gains $\sim$0.2 e and redistribution of the remaining charge makes up the difference. The calculated band structure of the monolayer CTC (Fig.~\ref{fig:fig1}d) is shown in SI Fig.~S6. From the band structure, it is evident that there is also a charge transfer from graphene to the CTC monolayer and a finite electronic coupling between the molecules along certain directions of reciprocal space ($\Gamma$-K and $\Gamma$-Y). However, the bandwidth is relatively small ($\sim100$ meV), indicating that the coupling is quite weak.
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{fig3}
\caption{Short-range d$I$/d$V$-spectroscopy on the CTC. (a) Short-range d$I$/d$V$-spectroscopy on the TTF and F$_4$TCNQ sites in the CTC show a dip at zero bias. (b) Magnetic field dependent d$I$/d$V$-spectra on a TTF site in the CTC shows that the shape and size of the zero-bias dip does not change with magnetic field up to 10 T. (c) Temperature-dependent d$I$/d$V$-spectra on a TTF site in the CTC show that the dip is washed away with increasing temperature and the asymmetric background is also decreased at higher temperatures. (d) Temperature dependence of the zero-bias conductance (ZBC, normalized at the d$I$/d$V$ at bias of 20 mV) shows saturation at 15-20 K.}
\label{fig:fig3}
\end{figure}
Interestingly, high-resolution d$I$/d$V$ spectra on both molecules contain a dip close to zero bias, which has a pronounced asymmetry on the TTF sites, as shown in Fig.~\ref{fig:fig3}a.
To investigate its origin, we have examined its dependence on temperature and on out-of-plane magnetic field. Care was taken to record these spectra on the same molecule and with the same microscopic tip apex. Fig.~\ref{fig:fig3}b shows magnetic-field-dependent d$I$/d$V$ spectra on the TTF sites of the CTC lattice in the range of 0 to 10 T. There is no measurable change in either the shape or size of the dip, or in the observed asymmetry, up to a magnetic field of 10 T. In contrast, a clear temperature dependence is observed in Fig.~\ref{fig:fig3}c, which shows the temperature-dependent d$I$/d$V$-spectroscopy recorded on TTF sites of the CTC from 2.7 K to 20 K (data on the F$_4$TCNQ site is shown in SI Fig.~S7a). The asymmetric dip is most prominent at the lowest temperature of 2.7 K. The dip amplitude decreases with increasing temperature, and at 20 K only a step at zero bias remains. The temperature dependence of the zero-bias conductance (ZBC) extracted from these spectra
clearly exhibits saturation of the ZBC at temperatures between 15 and 20 K. This change in the ZBC indicates the presence of a low-temperature correlated state, which we discuss in more detail below.
\begin{figure}[h]
\includegraphics[width=0.8\textwidth]{fig4}
\caption{Deconvoluting the low-bias features of the d$I$/d$V$ spectra. (a) Short-range d$I$/d$V$-spectrum on TTF molecules. The curve has been fitted with the sum of two Fano functions: Fano-1 (broken black line) represents the central dip and Fano-2 (red line) represents the step. The final fit is indicated by the blue line. (b) Temperature-dependent evolution of the HWHM extracted from the two Fano functions (Fano-1: left, Fano-2: right). (c) Short-range d$I$/d$V$-spectroscopy on CTC islands, recorded on an F$_4$TCNQ molecule, showing steps at energies of $\sim$2 (shown by arrow 1), $\sim$31 (arrow 2), $\sim$35 (arrow 3) and $\sim$52 meV (arrow 4). (d) Temperature-dependent evolution of the steps at $\sim$2 meV (Step-1: left) and at $\sim$52 meV (Step-4: right).}
\label{fig:fig4}
\end{figure}
The temperature-dependent spectroscopy shows that the overall asymmetry of the spectra and the amplitude of the dip reduce with increasing temperature. At 20 K, the dip feature is no longer visible, while the asymmetry (a step at the Fermi energy) is still present in the spectra. This suggests that the spectrum can be deconvoluted into a dip and a step: the dip vanishes at 20 K while the step remains visible at that temperature.
The deconvolution of a spectrum measured on a TTF site in the CTC is shown in Fig.~\ref{fig:fig4}a. The entire spectrum (note the wider bias range here compared to Fig.~\ref{fig:fig3}a) can be well fitted (details of the fitting are described in the Methods section) by a sum of two Fano lineshapes.\cite{fano1961effects} The effects of spectral broadening due to the bias modulation and temperature have been deconvoluted (see \textit{Methods} section) to obtain the intrinsic width of the lineshapes. Fig.~\ref{fig:fig4}b summarizes the temperature dependence of the half-width at half-maximum (HWHM) of the two Fano lineshapes used to fit the spectra on the TTF site. The HWHM of the Fano lineshape corresponding to the dip at zero bias (Fano-1) shows a clear scaling with temperature. On the other hand, the HWHM of the step-like Fano lineshape (Fano-2) has a weaker temperature dependence. While the Fano lineshape is taken here as a phenomenological description of the measured spectra, the choice is not completely arbitrary, as it typically arises in situations where two interfering tunneling pathways are present. For example, it is widely observed on Kondo impurities, where the interference occurs between a direct tip-sample tunneling path and a tunneling path \textit{via} the Kondo impurity.\cite{li1998kondo,madhavan1998tunneling,nagaoka2002temperature,Ternes2015_review} In fact, a spectral shape combining a step-like Fano lineshape with a smaller energy-gap-like feature, very similar to our measurements, has been observed on the heavy-fermion compound URu$_2$Si$_2$.\cite{Aynajian10383} There, the spectral response was explained by a combination of Kondo screening of the uranium $f$-electrons and a gap-like feature resulting from a transition to a hidden-order phase at low temperatures.
Intriguingly, the d$I$/d$V$-spectra recorded on the F$_4$TCNQ molecules of the CTC (Fig.~\ref{fig:fig4}c - the bias range is again wider than in Fig.~\ref{fig:fig3}a) show additional step-like features at higher biases, \textit{viz}.~at $\pm31$, $\pm35$ and $\pm52$ mV. These steps
can be attributed to inelastic electron tunneling processes that excite molecular vibrations of the negatively charged F$_4$TCNQ molecules.\cite{Garnica2014,Fernandez-Torrente2008}
The tunneling electrons can excite a molecular vibration once the sample bias matches the energy of the corresponding vibrational mode.\cite{Lorente2001_PRL,delaTorre2017_PRL,IETS_review} The inelastic process corresponds to opening of an additional tunneling channel and a sudden increase in the tunneling conductance. To corroborate this picture, we assess the phonon modes for the CTC monolayer with DFT (details in the Methods). There is good agreement between the energies of the measured steps and the calculated energies of certain CTC phonon modes with a high electron-phonon coupling strength. Additionally, the calculated modes with strong coupling strength near the energies of the inelastic steps are dominated by F$_4$TCNQ vibrations (see SI Fig.~S8 for details). This is consistent with our experiments, where we see the inelastic steps only on the F$_4$TCNQ sites of the CTC.
Although DFT calculations indicate the presence of intermolecular phonon modes with energies of a few meV, the temperature dependence of the dip close to zero bias does not fit with thermally broadened inelastic steps. If we force a fit with an inelastic step to the data (feature marked with ``1'' in Fig.~\ref{fig:fig4}c), the position of the fitted step is strongly temperature dependent (Fig.~\ref{fig:fig4}d, black symbols), which is not expected for inelastic features. The zero-bias feature also washes out more quickly with temperature than would be expected for a vibrational transition, and at 15 K or above, only an asymmetric step remains, supporting the notion that it is the result of a gap-closing transition. This is illustrated in Fig.~S7b, which shows the expected temperature dependence for an inelastic step using the parameters extracted from the experimental spectrum acquired at $T=2.7$ K. As can be clearly seen, the predicted trend does not match the experimental results in Fig.~S7a, which gives a strong indication that the zero-bias dip does not correspond to inelastic steps.
Considering the width of the dip, we should be able to resolve a possible magnetic-field-induced splitting if this feature arose from any spin-related phenomena such as the Kondo effect or spin-flip inelastic transitions.\cite{Ternes2015_review,Ternes2017_review} However, we do not observe any such changes with magnetic field up to 10 T, as shown in Fig.~\ref{fig:fig3}b. While the Kondo effect has earlier been observed in a TTF-TCNQ CTC monolayer on Au(111),\cite{Fernandez-Torrente2008} the Kondo coupling is expected to be generally weak on graphene.\cite{Fritz2013_RepProgPhys} Finally, experiments on CTCs deposited on graphene directly on Ir(111) show a very similar response (see SI Fig.~S9). The two substrates differ significantly in terms of the doping level of graphene, which is expected to have a marked influence on the Kondo temperature.\cite{Fritz2013_RepProgPhys,Chen2011,Jiang2018}
CTCs are also known to exhibit superconductivity. However, in light of the spectroscopy measurements in high magnetic fields, a superconducting origin of the dip at the Fermi energy is also very unlikely: one would expect either quenching of, or at least changes in, a superconducting gap under high field. We also do not observe the coherence peaks in the spectra that are usually associated with superconductivity.
\begin{figure}[h]
\includegraphics[width=0.65\textwidth]{fig5.png}
\caption{(a) STM topography image of the CTC at imaging parameters: 5 pA and -500 mV. The scale bar is 3 nm. (b) A contrast-optimized version of the topography in panel (a) shows periodic topography modulations (white lines are guides to the eye). (c,d) Two-dimensional fast-Fourier transform (2D-FFT) of panel
(a) shows features corresponding to the CTC rectangular lattice (marked by vectors \textbf{\emph{b$_{1}$}} and \textbf{\emph{b$_{2}$}}), spots due to the underlying graphene moir\'e (white hexagon), and charge-density-wave modulations marked by the vectors \textbf{\emph{u$_{1}$}}, \textbf{\emph{u$_{2}$}}, and \textbf{\emph{u$_{3}$}}. CDW wavelengths corresponding to \textbf{\emph{u$_{1}$}} and \textbf{\emph{u$_{2}$}} are approximately $3.25\times l_1$ and $3.25\times l_2$, while that corresponding to \textbf{\emph{u$_{3}$}} is $\sim$5 nm. The scale bar is 1 nm$^{-1}$.
}
\label{fig:cdw}
\end{figure}
The remaining explanations consistent with the spectral feature and its dependence on magnetic field and temperature include the formation of a charge-density wave (CDW) or Peierls instability at low temperatures; these correlated ground states have been commonly observed in bulk CTC materials.\cite{Jerome2004review,Nishiguchi1998CDW} The structure of this compound, both in the bulk and in our monolayer, is anisotropic: there is much stronger electronic coupling along a certain lattice direction than in the perpendicular direction. This is also evident in the calculated band structure shown in Fig.~S6. This kind of anisotropic band structure is favourable for the formation of a CDW state, as it naturally provides Fermi-surface nesting. This leads to a CDW driven by electron-phonon coupling, which is also in line with the picture for the bulk TTF-TCNQ phases.\mbox{\cite{Jerome2004review}}
The temperature dependence of the ZBC (Fig.~\ref{fig:fig3}d) clearly indicates a transition temperature of 15-20 K, which is close to the expected temperature range of a CDW or Peierls transition; for example, in the bulk TTF-TCNQ CTC this is 54 K.\cite{PhysRevB.16.5238} Finally, the ground state associated with CDW breaks the symmetry of the system and results in a superstructure arising from modulations in electron density or CTC atomic structure.
Fig.~\mbox{\ref{fig:cdw}}a shows an STM topography image of the CTC. A contrast-optimized version of the same image in Fig.~\mbox{\ref{fig:cdw}}b reveals a periodic modulation of the topography (white lines are guides to the eye), which can be better understood using a 2D FFT. Figs.~\mbox{\ref{fig:cdw}}c and d show the 2D-FFT images of the topography with the various spots identified.
The set of spots marked by the vectors \textbf{\emph{b$_{1}$}} and \textbf{\emph{b$_{2}$}} corresponds to the CTC rectangular lattice (see SI Fig.~S6), while the spots due to the underlying graphene moir\'e are indicated by a white hexagon. The spots marked by the vectors \textbf{\emph{u$_{1}$}}, \textbf{\emph{u$_{2}$}}, and \textbf{\emph{u$_{3}$}} indicate the presence of longer-wavelength charge-density-wave modulations. The CDW wavelengths corresponding to \textbf{\emph{u$_{1}$}} and \textbf{\emph{u$_{2}$}} are approximately $3.25 \times l_1$ and $3.25\times l_2$, while that corresponding to \textbf{\emph{u$_{3}$}} is $\sim$5 nm. Here, \textbf{\emph{l$_{1}$}} and \textbf{\emph{l$_{2}$}} are real-space lattice vectors perpendicular to and along the TTF/F$_4$TCNQ molecular rows, respectively.
This provides further evidence of the presence of a CDW/Peierls ground state in the TTF-F$_4$TCNQ CTC monolayer at low temperatures, causing a gap in the density of states at the Fermi energy.
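The way a CDW periodicity is read off from the FFT can be illustrated with a one-dimensional toy example: a lattice signal plus a weak modulation at $3.25\times$ the lattice period produces a satellite peak in the Fourier spectrum at a correspondingly smaller wavevector. All numbers here are illustrative and not taken from the measured topographs:

```python
import numpy as np

# Toy line profile: lattice period of 16 px plus a weak modulation at
# 3.25 * 16 = 52 px, mimicking the CDW superstructure. L is chosen as a
# common multiple of both periods so each peak falls on an exact FFT bin.
L, a = 416, 16
x = np.arange(L)
z = np.cos(2 * np.pi * x / a) + 0.2 * np.cos(2 * np.pi * x / (3.25 * a))

# The two largest non-DC peaks of the FFT magnitude give both periodicities.
f = np.abs(np.fft.rfft(z))
peaks = np.argsort(f[1:])[::-1][:2] + 1   # skip the DC bin
wavelengths = L / peaks
print(sorted(wavelengths))  # [16.0, 52.0]: lattice and CDW periods
```

In the 2D case the same logic applies along each reciprocal-lattice direction, which is how the satellite spots \textbf{\emph{u$_{1}$}}-\textbf{\emph{u$_{3}$}} translate into real-space modulation wavelengths.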
\section*{Conclusions}
In conclusion, we have synthesized a monolayer of the charge-transfer complex TTF-F$_4$TCNQ on a weakly interacting epitaxial graphene substrate and have investigated its intrinsic electronic properties. TTF and F$_4$TCNQ molecules assemble into close-packed islands with alternating rows of TTF and F$_4$TCNQ molecules in a 1:1 stoichiometry. Low-temperature STM and STS measurements confirm the formation of a charge-transfer complex, with d$I$/d$V$ spectra consistent with the presence of TTF cations and F$_4$TCNQ anions. High-resolution spectroscopy at low temperatures and high magnetic fields shows the formation of a correlated ground state related to a CDW or Peierls instability with a transition temperature of 15-20 K. This work establishes CTC monolayers as an intriguing example of two-dimensional materials with low-temperature correlated ground states.
\section*{Methods}
\textit{Sample preparation.} The experiments were carried out in ultra-high vacuum (UHV), low-temperature scanning tunneling microscopes (STMs) (Createc LT-STM and Unisoku USM-1300). Both STMs are equipped with a preparation chamber and operate at a base pressure lower than $1\times10^{-10}$ mbar. The sample was prepared by depositing F$_4$TCNQ and TTF molecules sequentially on an oxygen-intercalated graphene on Ir(111) substrate. The Ir(111) surface was cleaned by repeated cycles of sputtering with Ne ions at 1.5 kV and annealing at 900 $^\circ$C in an oxygen environment, followed by flashing to 1300 $^\circ$C. Epitaxial graphene was grown using ethylene gas with a combination of temperature programmed growth (TPG) and chemical vapour deposition (CVD) steps to achieve a nearly full monolayer coverage of graphene.\cite{NDiaye2008,coraux2009growth,michely_prl_2011,Hamalainen2013} In the TPG step, the cleaned Ir(111) substrate was exposed to the ethylene gas for one minute at a pressure of $1\times10^{-6}$ mbar, followed by heating the substrate to 1300 $^\circ$C. The CVD step was carried out at this temperature by exposing the substrate to ethylene gas at $3\times10^{-7}$ mbar for 60 s. This gives nearly a monolayer coverage of graphene on Ir(111) (G/Ir(111)). Oxygen intercalation of G/Ir(111) (G/O/Ir(111)) was carried out by exposure to $9\times10^4$ L of oxygen at 225 $^\circ$C, as reported in Ref.~\cite{Martinez-Galera2016}.
The charge-transfer complex (CTC) was synthesized by first depositing $\sim$0.25 monolayer of F$_4$TCNQ molecules on a G/O/Ir(111) surface at low substrate temperature ($\approx 100$ K), followed by deposition of a similar amount of TTF molecules at a similar substrate temperature. This resulted in disordered islands of CTC on the surface. The sample was annealed at room temperature for 15-45 min to allow the formation of highly ordered CTC islands. While F$_4$TCNQ molecules were evaporated using a Knudsen cell heated to 92 $^\circ$C, TTF molecules were evaporated from a home-made evaporator kept at 23 $^\circ$C. The deposited amounts of the two molecules were adjusted to 1:1 stoichiometry (each of them at less than a half monolayer coverage). Subsequently, the sample was transferred into the low-temperature STM housed within the same UHV system.
\textit{STM measurements.} The STM experiments were carried out at a temperature of 4.2 K unless otherwise stated. Temperature-dependent measurements were carried out in the Createc STM, while magnetic field dependent measurements were carried out in the Unisoku STM. For the measurements at 2.7 K, the LHe cryostat of the STM was pumped, while temperatures higher than 4.2 K were achieved by heating the STM with a Zener diode installed on the STM scanner. To avoid any ambiguity, the temperature-dependent measurements were carried out on the same F$_4$TCNQ and TTF molecules of the CTC assembly using the same tip. Similar precautions were taken for the magnetic field measurements as well, where the same molecules and tip were used for the full range of the magnetic field sweep. STM measurements were carried out using mechanically cut Pt/Ir tips. d$I$/d$V$-spectroscopy was performed using a standard lock-in technique, with voltage modulation amplitudes of 10-15 mV and 1-2 mV for long-range and short-range spectroscopy, respectively. WSxM \cite{wsxm} and Gwyddion (\url{http://gwyddion.net/})\cite{gwyddion2} software were used to process all the STM images.
\textit{Fitting of the d$I$/d$V$-spectra.} We use two Fano lineshape functions to fit the short-range d$I$/d$V$ spectrum in Fig.~\ref{fig:fig4}a. The Fano lineshape function is:
\begin{equation*}
f_\mathrm{Fano}(\epsilon) = A\frac{(q + \frac{\epsilon-\epsilon_0}{\Gamma})^2}{1 + (\frac{\epsilon-\epsilon_0}{\Gamma})^2} + c_1
\end{equation*}
where \textit{A} is the prefactor, $\epsilon$ is the energy, $\epsilon_0$ is the offset from zero, $\Gamma$ is the half width at half maximum, $q$ is the Fano parameter, and $c_1$ is a constant background term. We first fit the step-like Fano lineshape to capture the step of the spectrum (Fano-2), excluding the central dip during the fitting. We then subtract the step-like Fano fit (red line in Fig.~\ref{fig:fig4}a) from the spectrum to obtain the central dip, which is fitted with a dip-like Fano lineshape (Fano-1). The fitting process is repeated for all the recorded spectra at the indicated temperatures to extract the HWHM of the two Fano lineshapes as a function of temperature.
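The two-stage fit described above can be sketched numerically. The synthetic spectrum, parameter values, and mask width below are illustrative only, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(eps, A, q, eps0, gamma, c):
    """Fano lineshape A*(q + x)^2 / (1 + x^2) + c with x = (eps - eps0)/gamma."""
    x = (eps - eps0) / gamma
    return A * (q + x) ** 2 / (1.0 + x ** 2) + c

# Synthetic dI/dV: a broad step-like Fano plus a narrow dip-like Fano (q ~ 0).
rng = np.random.default_rng(0)
eps = np.linspace(-0.02, 0.02, 401)  # bias (V)
data = (fano(eps, 0.4, 1.2, 0.0, 0.008, 1.0)
        + fano(eps, 0.3, 0.05, 0.0, 0.002, 0.0)
        + rng.normal(0.0, 0.003, eps.size))

# Stage 1: fit the step-like Fano (Fano-2) with the central dip masked out.
mask = np.abs(eps) > 0.005
p_step, _ = curve_fit(fano, eps[mask], data[mask], p0=[0.4, 1.0, 0.0, 0.01, 1.0])

# Stage 2: subtract the step and fit the residual with a dip-like Fano (Fano-1).
residual = data - fano(eps, *p_step)
p_dip, _ = curve_fit(fano, eps, residual, p0=[0.3, 0.0, 0.0, 0.003, 0.0])
print("HWHM of central dip (V):", abs(p_dip[3]))
```

The HWHM values plotted in Fig.~\ref{fig:fig4}b correspond to $|\Gamma|$ of each fitted lineshape, extracted in this way at every temperature.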
To fit the temperature dependence of the four pairs of step features seen in Fig.~\ref{fig:fig4}c (four steps on each side of zero bias), we use a series of symmetric Fermi-Dirac distribution functions of the energy, $\epsilon$:
\begin{equation*}
\begin{split}
f_\mathrm{step}(\epsilon) & = \sum\limits_{i=1}^{4} \left(f_{FD}^+ + f_{FD}^-\right) + s\epsilon + c_2 \\
& = \sum\limits_{i=1}^{4} \left(a^+_i\left(1-\frac{1}{1+e^{\frac{\epsilon-\epsilon_i}{k_{B}T}}}\right) + a^-_i \frac{1}{1+e^{\frac{\epsilon+\epsilon_i}{k_{B}T}}}\right) + s\epsilon + c_2
\end{split}
\end{equation*}
where \textit{i} is the step number, $a^+_i$ is the amplitude of the $i^{th}$ step for $\epsilon>0$, $a^-_i$ is the amplitude of the corresponding step at $\epsilon<0$, $\epsilon_i$ is the position of the $i^{th}$ step, $k_B$ is the Boltzmann constant, $T$ is the temperature, $s$ is the slope, and $c_2$ is a constant background term.
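The step model can be implemented directly: each threshold $\epsilon_i$ opens a thermally broadened conductance step at both bias polarities. The amplitudes below are illustrative placeholders, and the step energies are the four values reported for the F$_4$TCNQ sites:

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic, 1/(1+exp(-x))

K_B = 8.617333e-5  # Boltzmann constant (eV/K)

def step_model(eps, amps_pos, amps_neg, positions, T, slope=0.0, c2=0.0):
    """Sum of thermally broadened step pairs plus a linear background.
    Each threshold e_i raises the conductance for eps > +e_i (amplitude a+)
    and for eps < -e_i (amplitude a-)."""
    eps = np.asarray(eps, dtype=float)
    out = slope * eps + c2
    for a_p, a_n, e_i in zip(amps_pos, amps_neg, positions):
        out = out + a_p * expit((eps - e_i) / (K_B * T))   # positive-bias step
        out = out + a_n * expit(-(eps + e_i) / (K_B * T))  # negative-bias step
    return out

# Example: four step pairs at 2, 31, 35 and 52 meV with unit amplitudes.
eps = np.linspace(-0.08, 0.08, 801)
g = step_model(eps, [1, 1, 1, 1], [1, 1, 1, 1],
               [0.002, 0.031, 0.035, 0.052], T=2.7)
```

Fitting this model to spectra at each temperature (with $T$ fixed to the measurement temperature) yields the step positions whose temperature dependence is plotted in Fig.~\ref{fig:fig4}d.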
The recorded spectra are broadened by thermal contributions as well as by the applied lock-in modulation. These effects have to be deconvoluted to obtain the intrinsic lineshape. To correct for the lock-in modulation voltage ($V_m$) we use the broadening function:
\begin{equation*}
f_{V_{m}}(\epsilon) = \frac{2}{\pi}\Re\frac{\sqrt{V_m^2 - \epsilon^2}}{V_m^2}
\end{equation*}
where $\Re$ denotes the real part. To account for thermal broadening due to
the temperature ($T$) of the tip, we use the derivative of the Fermi-Dirac distribution:
\begin{equation*}
f_{T}(\epsilon) = \frac{\partial}{\partial\epsilon}\left(\frac{1}{1+e^{\frac{\epsilon}{k_{B}T}}}\right)
\end{equation*}
Finally, the simulated LDOS is obtained by convolving the model lineshape with these broadening functions, either
\begin{equation*}
f_\mathrm{total}^\mathrm{Fano}(\epsilon) = f_\mathrm{Fano} * f_{V_{m}} * f_{T}
\end{equation*}
or,
\begin{equation*}
f_\mathrm{total}^\mathrm{step}(\epsilon) = f_\mathrm{step} * f_{V_{m}} * f_{T}
\end{equation*}
The simulated LDOS is fitted to the experimental d$I$/d$V$ spectra to obtain the intrinsic linewidth $\Gamma$ in the first case and the step-positions in the second case.
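The forward-convolution step can be sketched as follows; the grid, modulation amplitude, and temperature are illustrative, and the thermal kernel is written in a symmetric, numerically stable form of the Fermi-Dirac derivative:

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant (eV/K)

def lockin_kernel(eps, v_m):
    """Broadening by a sinusoidal lock-in modulation of amplitude v_m:
    (2/pi) Re[sqrt(v_m^2 - eps^2)] / v_m^2, nonzero only for |eps| < v_m."""
    out = np.zeros_like(eps)
    inside = np.abs(eps) < v_m
    out[inside] = (2.0 / np.pi) * np.sqrt(v_m**2 - eps[inside]**2) / v_m**2
    return out

def thermal_kernel(eps, T):
    """Magnitude of the Fermi-Dirac derivative, peak width ~3.5 k_B T."""
    x = np.abs(eps) / (K_B * T)
    return np.exp(-x) / (K_B * T * (1.0 + np.exp(-x)) ** 2)

def broaden(eps, ldos, v_m, T):
    """Convolve an intrinsic LDOS model with both broadening kernels."""
    d = eps[1] - eps[0]
    for kernel in (lockin_kernel(eps, v_m), thermal_kernel(eps, T)):
        ldos = np.convolve(ldos, kernel, mode="same") * d
    return ldos

eps = np.linspace(-0.05, 0.05, 2001)  # symmetric grid keeps kernels centered
smeared = broaden(eps, np.ones_like(eps), v_m=0.002, T=4.2)
```

Both kernels integrate to unity, so a featureless LDOS is left unchanged away from the grid edges; in the fit, the intrinsic model parameters are adjusted until the broadened curve matches the measured d$I$/d$V$ spectrum.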
\textit{DFT calculations.} Density functional theory calculations are performed with the full-potential, all-electron, numeric atom-centered orbital code FHI-AIMS.\cite{aims1,aims2,Xinguo/implem_full_author_list,Levchenko/etal:2015} We use the standard FHI-AIMS `light' pre-constructed basis sets of numeric atomic orbitals. Supercell calculations are performed with an $8\times4$ $\Gamma$-centered $\mathbf{k}$-point sampling. We use the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation to the exchange-correlation functional.\cite{pbe} Van der Waals interactions are included with the pair-wise Tkatchenko-Scheffler correction.\cite{ts_dispersion} Atomic forces are relaxed to less than $10^{-2}$ eV/\AA. Vibrations are calculated with the finite difference method. Electron-phonon coupling constants are based on the electronic friction approach.\cite{askerka_prl_2016,maurer_prb_2016} In pursuit of open materials science,\cite{Himanen2019} the DFT relaxed geometry of the monolayer is available in the NOvel MAterials Discovery (NOMAD) repository.\cite{NOMAD}
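For orientation, a minimal FHI-AIMS \texttt{control.in} fragment consistent with the settings above might look as follows. This is our illustrative sketch based on standard keywords from the public FHI-AIMS documentation, not the production input used for the calculations:

```text
# control.in -- illustrative fragment, not the production input
xc                        pbe          # PBE exchange-correlation functional
vdw_correction_hirshfeld               # pairwise Tkatchenko-Scheffler vdW correction
k_grid                    8 4 1        # 8x4 in-plane k-point sampling
relax_geometry            trm 1.e-2    # relax forces below 1e-2 eV/AA
# ... followed by the `light' species defaults for the elements present
```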
\begin{suppinfo}
Spectroscopy on oxygen-intercalated graphene on Ir(111); disordered islands and checker-board phases of CTC; assembly of single component F$_4$TCNQ and TTF molecules on G/O/Ir(111); electronic band structure of CTC with or without graphene; temperature-dependent spectra -- experiment and simulated; molecular vibrations of CTC; and assembly and spectroscopy of CTC grown on G/Ir(111).
\end{suppinfo}
\begin{acknowledgement}
We thank Jose Lado for discussions. This research made use of the Aalto Nanomicroscopy Center (Aalto NMC) facilities and was supported by the European Research Council (ERC-2017-AdG no.~788185 ``Artificial Designer Materials'') and Academy of Finland (Academy professor funding nos.~318995 and 320555, postdoctoral researcher nos.~309975 and 316347). We gratefully acknowledge high performance computing resources from the Aalto Science-IT project and the CSC-IT Center for Science, Finland.
\end{acknowledgement}
\section*{Introduction}
Organic charge-transfer complexes (CTCs) formed by electron-donor and -acceptor molecules are an intriguing and broad class of materials that can exhibit phenomena related to strong electron correlations and electron-phonon coupling such as
charge and spin density waves, Mott metal-insulator transitions, charge ordering, spin-liquid phases, and superconductivity.\cite{Jerome2004review,Enoki2004review,Seo2004review,Powell2006review,Clay2018RepProgPhys,Zhang2017_AccChemRes} In bulk CTC crystals, donor and acceptor molecules typically stack in rows that maximize $\pi-\pi$ electronic overlap along the rows only.\cite{Sing2003} This anisotropy in the overlap results in pseudo one-dimensional electronic dispersion, providing a suitable platform to investigate low-dimensional, as well as low-energy, physics.
Despite the broad spectrum of intriguing physical phenomena that have been reported in bulk CTCs, their two-dimensional (2D) films have been much less studied.\cite{Gonzalez-Lakunza2008,Fernandez-Torrente2008,Jackel2008,Clark2010,Rojas2013CTC,Jeon2016,Rodriguez-Fernandez2017,Hassanien2017,PhysRevB.81.155403} In particular, the studies have been confined to metal substrates, which strongly interact with the molecular layer and mask the intrinsic electronic properties of the CTCs.
The CTC formed out of tetrathiafulvalene (TTF) and tetracyanoquinodimethane (TCNQ) molecules is an archetypal example of a CTC. It possesses the highest bulk conductivity reported so far in a CTC and has been studied in detail.\cite{Jerome2004review,Nishiguchi1998CDW,Wang2003TTFTCNQ,Sing2003,PhysRevB.81.155403} Another widely studied system is formed by the Bechgaard salts consisting of small, planar organic molecules acting as an electron donor combined with an electron accepting small inorganic molecule. These materials are one of the most prominent examples of organic superconductors.\cite{Jerome2004review,Clark2010}
The properties of 2D films of these CTCs on metallic substrates can be strongly influenced by the underlying substrate.
For example, it is possible to form films with other than 1:1 stoichiometry.\cite{Rojas2013CTC,Jeon2016,Rodriguez-Fernandez2017} In some cases, the effect of the substrate can be limited to doping of the film, \textit{e.g.}~in the case of the organic superconductor BETS$_2$GaCl$_4$ monolayer on Ag(111).\cite{Clark2010,Hassanien2017} On the other hand, the substrate interaction can completely dominate the low-energy electronic properties. On Au(111), TTF-TCNQ molecular states of the CTC hybridize with the metal states to form dispersive interface states.\cite{Gonzalez-Lakunza2008} Further, the unpaired electron of TCNQ molecules on the Au(111) surface exhibits the many-body Kondo effect due to screening by the substrate conduction electrons.\cite{Fernandez-Torrente2008}
Thus, the electronic properties of a CTC, especially close to the Fermi energy, can be strongly perturbed by the metal substrate, prohibiting the study of intrinsic electronic properties of CTC. Therefore, preparing 2D films of CTCs on weakly interacting substrates is extremely desirable. Epitaxial graphene grown on Ir(111) has been shown to decouple the adsorbate layer from the underlying metal substrate allowing investigation of intrinsic electronic properties of the adsorbate layers.\cite{Kumar2017review, Kumar2018}
Here, we present a low-temperature scanning tunneling microscopy (LT-STM) study of a 2D CTC of TTF and fluorinated TCNQ (F$_4$TCNQ) self-assembled on the surface of oxygen-intercalated epitaxial graphene on Ir(111) (G/O/Ir(111)). Sequential deposition of the molecules on this surface leads to the formation of rotationally identical domains of CTCs with alternating rows of TTF and F$_4$TCNQ lying parallel to the surface. The frontier molecular orbitals of the molecular species in the CTC, as found from scanning tunneling spectroscopy (STS), indicate charge transfer between the TTF and F$_4$TCNQ molecules. High-resolution tunneling spectra exhibit a dip at the Fermi energy, closing at a temperature of 20 K, that may be attributed to the formation of a correlated ground state in the CTC monolayer.
\section*{Results and Discussion}
Figure~\ref{fig:fig1} describes the assembly and structure of the TTF-F$_4$TCNQ CTC on a G/O/Ir(111) surface. The sample preparation is described in detail in the Methods section. Briefly, we grow a near monolayer coverage of graphene on Ir(111) by a combination of temperature programmed growth (TPG) and chemical vapour deposition (CVD), as described previously,\cite{NDiaye2008,coraux2009growth,Hamalainen2013} followed by oxygen intercalation to electronically decouple graphene from the underlying substrate.\cite{Martinez-Galera2016} Finally, the molecules are deposited at low temperatures ($\approx 100$ K), followed by annealing at room temperature for 15-45 mins to allow the formation of highly ordered CTC islands.
Figure~\ref{fig:fig1}a shows an STM topography image of oxygen intercalated graphene on Ir(111). The surface contains the periodic moir\'e pattern of a G/Ir(111) surface with a periodicity of 25.4 \r{A}.
The additional superstructure visible on the surface is due to patches of a ($2\times1$) reconstruction of subsurface oxygen, which is consistent with an earlier report.\cite{Martinez-Galera2016} Oxygen intercalation leads to decoupling of graphene from Ir, which is indicated by short-range d$I$/d$V$-spectroscopy of the surface showing a phonon gap of $\sim$160 mV \cite{Zhang2008,Halle2018_NanoLett} (see Supporting Information (SI) Fig.~S1a). Oxygen intercalation also results in strong \textit{p}-doping of graphene by $\sim$0.5 eV,\cite{Ulstrup2014} which increases the work function to $\sim$5.1 eV. This can be independently verified by measuring d$I$/d$V$ spectra at high bias with the feedback loop on -- here, the field-emission resonances allow us to estimate the substrate work function \cite{Binnig1985_prl,Lin2007,Schulz2013,Schulz2014_PRB} (see SI Fig.~S1b).
\begin{figure}[h!]
\includegraphics[width=0.8\textwidth]{fig1.png}
\caption{Assembly and structure of the CTC on oxygen-intercalated graphene. (a) STM topography image of oxygen intercalated graphene on Ir(111). The additional superstructure apart from the moir\'e is due to reconstruction of subsurface oxygen. Scale bar is 3 nm. Imaging parameters: 1.2 nA and 10 mV. (b) A few large islands of the CTC on the G/O/Ir(111) surface showing various domains and domain boundaries. Scale bar is 30 nm. Imaging parameters: 0.4 pA and 0.75 V. (c) A zoomed-in STM image of the CTC shows the arrangement of TTF and F$_4$TCNQ molecules. Each molecule forms a row next to a row of the other molecule. A molecular structure along with a unit cell is overlaid to elucidate the molecular arrangement within the unit cell. Scale bar is 2 nm. Imaging parameters: $\sim$5 pA and 0.1 V. (d) A DFT simulated STM image of the CTC close to the Fermi energy resembles the recorded topography closely. Molecular structure and unit cells are overlaid for clarity.}
\label{fig:fig1}
\end{figure}
Figure~\ref{fig:fig1}b shows an STM topograph of large islands of ordered CTC assembled on a G/O/Ir(111) surface. The long-range ordering is the result of the post-deposition room-temperature annealing; directly after the low-temperature deposition, we observe disordered islands on the surface (see SI Fig.~S2). The CTC islands grow across the step edges in carpet-like fashion \cite{banerjee2016flexible,Yan2020_CuDCA} and contain various domains rotated with respect to each other. Analysis of several images reveals a total of six domain orientations rotated w.r.t.~each other in multiples of 30$^\circ$. Figure~\ref{fig:fig1}c shows a zoomed-in STM image to identify the arrangement of TTF and F$_4$TCNQ molecules within the CTC islands. As evident from the STM image, there are two different rows of molecules: one is composed of TTF and the other of F$_4$TCNQ molecules. Rows of TTF and F$_4$TCNQ alternate across the surface. The molecular structure obtained from density functional theory (DFT) calculations (see below) has been overlaid on the STM image for clarity. The molecular rows are found to be at an angle of $\pm$12$^\circ$ w.r.t.~graphene's zigzag direction for each domain. The unit cell of the CTC is shown by a parallelogram with lattice parameters \textit{a} = 18.5 ($\pm$0.5) \r{A}, \textit{b} = 9.5 ($\pm$0.5) \r{A}, $\theta$ = 56 ($\pm$2)$^\circ$. This is the most common phase we observe for this stoichiometry ((F$_4$TCNQ$)_1$(TTF)$_1$) of the molecules. At a slightly different stoichiometry ((F$_4$TCNQ)$_x$(TTF)$_{1-x}$), we have observed a checkerboard phase of the CTC where only F$_4$TCNQ rows are present and TTF molecules are dispersed in a checkerboard fashion (see SI Fig.~S3).
In order to further elucidate the structure of the molecular layer, we carried out a broad structural search for different possible geometries using DFT (see Methods for details). We performed full structural relaxations of 300 CTC monolayers sampled by varying the intermolecular distance, bond angles, and alignment with respect to the underlying graphene. The initial structures are systematically generated but done ``by hand'' without any input from machine learning or structure search algorithms.\cite{Egger/etal:2020,Jarvi/Rinke/Todorovic:2020} After relaxation, the structures are sorted by formation energy. One of the low-energy conformations closely matches the experimental structure both in terms of the unit cell dimensions (\textit{a} = 17.78 \r{A}, \textit{b} = 8.89 \r{A}, $\theta$ = 60$^\circ$) and the relative orientation w.r.t.~the graphene lattice (13.89$^\circ$). A DFT simulated STM image (at the Fermi energy) is shown in Fig.~\ref{fig:fig1}d for the optimized geometry; it closely resembles the STM image shown in Fig.~\ref{fig:fig1}c.
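The systematic ``by hand'' sampling can be pictured as a simple enumeration over candidate unit cells. The grids below are hypothetical placeholders (the actual ranges sampled are not specified beyond the text above), chosen only so that their product yields 300 starting cells:

```python
import itertools
import numpy as np

# hypothetical sampling grids for the CTC unit cell (Angstrom / degrees)
a_values = np.linspace(17.0, 19.5, 5)            # cell length a
b_values = np.linspace(8.5, 10.0, 5)             # cell length b
theta_values = np.radians([54.0, 57.0, 60.0])    # cell angle
rot_values = np.radians([0.0, 6.0, 12.0, 18.0])  # rotation w.r.t. graphene zigzag

candidates = []
for a, b, theta, rot in itertools.product(a_values, b_values, theta_values, rot_values):
    # in-plane lattice vectors of the candidate cell in the graphene frame
    v1 = a * np.array([np.cos(rot), np.sin(rot)])
    v2 = b * np.array([np.cos(rot + theta), np.sin(rot + theta)])
    candidates.append((v1, v2))
```

Each candidate cell would then be decorated with one TTF and one F$_4$TCNQ molecule and relaxed with DFT before ranking by formation energy.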
We have also looked at the assembly of single component F$_4$TCNQ and TTF layers on the G/O/Ir(111) surface. A sub-monolayer coverage of F$_4$TCNQ molecules forms chain-like structures (in contrast to non-planar adsorption on the G/Ir(111) surface \cite{Kumar2017}). On the other hand, TTF molecules tend to assemble in a close-packed geometry on the G/O/Ir(111) surface. The assembly of F$_4$TCNQ and TTF molecules is shown in SI Figs.~S4 and S5.
\begin{figure}[h]
\includegraphics[width=0.9\textwidth]{fig2}
\caption{Charge transfer across the molecules. (a) Long range d$I$/d$V$-spectra on F$_4$TCNQ molecules in a single-component chain on the G/O/Ir(111) surface (red line) and on the F$_4$TCNQ sites in the CTC (black line). (b) Long range d$I$/d$V$-spectra on TTF molecules in a single component assembly on G/O/Ir(111) (blue line) and on the TTF sites in the CTC (black line). (c) Bias dependent STM images of the CTC at the sample biases indicated in the figure. Size of each image is $4.7\times3.2$ nm$^2$.}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows the experimental verification of charge transfer between TTF and F$_4$TCNQ molecules in the CTC by d$I$/d$V$ spectroscopy and STM imaging. Fig.~\ref{fig:fig2}a compares long-range d$I$/d$V$ spectra recorded on F$_4$TCNQ molecules in single-component chains to those recorded in the CTC. The spectrum on the molecule in the chain shows a resonance corresponding to the lowest unoccupied molecular orbital (LUMO) at 0.64 V without any features at negative bias. This indicates that the F$_4$TCNQ molecules on G/O/Ir(111) are neutral, in contrast to F$_4$TCNQ molecules on a G/Ir(111) surface, where they are charged at the lower sites of the moir\'e pattern.\cite{Kumar2017} This difference is likely due to the increased work function of graphene upon oxygen intercalation. The spectrum recorded on an F$_4$TCNQ molecule in the CTC, on the other hand, shows two peaks at -0.44 V and 1.2 V.
Fig.~\ref{fig:fig2}b compares the d$I$/d$V$ spectrum on TTF molecules from the pristine assembly on a G/O/Ir(111) surface to that of TTF molecules from the CTC. Here, the d$I$/d$V$ spectrum on a TTF molecule shows a peak at -0.8 V, corresponding to the highest occupied molecular orbital (HOMO) of a neutral TTF molecule. Despite the high work function of the surface ($\sim$5.0 eV), the TTF molecules stay neutral. In the CTC, the spectrum on TTF molecules shows two peaks at -0.9 V and 0.95 V (similar to the two peaks on an F$_4$TCNQ molecule). The assignment of these peaks is done on the basis of images recorded at sample biases of -0.5 and 0.8 V. The image at 0.8 V shows a relatively prominent TTF HOMO, while the image at -0.5 V shows a relatively prominent F$_4$TCNQ LUMO \cite{Kumar2017} (see Fig.~2c). Electron transfer from the donor TTF to the acceptor F$_4$TCNQ molecules results in splitting of the TTF HOMO (-0.8 V peak) into singly occupied (SOMO, -0.95 V peak) and singly unoccupied molecular orbitals (SUMO, 0.95 V peak). Similarly, the F$_4$TCNQ LUMO (0.64 V peak) splits into a SOMO (-0.44 V peak) and a SUMO (1.2 V peak) after accepting an electron. Consequently, the TTF molecules acquire a positive charge while the F$_4$TCNQ molecules become negatively charged in the CTC. The charge transfer between the molecules is also supported by DFT calculations, and based on Hirshfeld charge analysis \cite{hirshfeld} it amounts to $\sim$0.55 e in this configuration. Each N atom gains $\sim$0.2 e and redistribution of the remaining charge makes up the difference. The calculated band structure of the monolayer CTC (Fig.~\ref{fig:fig1}d) is shown in SI Fig.~S6. From the band structure, it is evident that there is also a charge transfer from graphene to the CTC monolayer and a finite electronic coupling between the molecules along certain directions of reciprocal space ($\Gamma$-K and $\Gamma$-Y). However, the bandwidth is relatively small ($\sim100$ meV), indicating that the coupling is quite weak.
\begin{figure}[h]
\includegraphics[width=0.85\textwidth]{fig3}
\caption{Short-range d$I$/d$V$-spectroscopy on the CTC. (a) Short-range d$I$/d$V$-spectra on the TTF and F$_4$TCNQ sites in the CTC show a dip at zero bias. (b) Magnetic field dependent d$I$/d$V$-spectra on a TTF site in the CTC show that the shape and size of the zero-bias dip do not change with magnetic field up to 10 T. (c) Temperature-dependent d$I$/d$V$-spectra on a TTF site in the CTC show that the dip is washed out with increasing temperature and the asymmetric background is also reduced at higher temperatures. (d) Temperature dependence of the zero-bias conductance (ZBC, normalized to the d$I$/d$V$ at a bias of 20 mV) shows saturation at 15-20 K.}
\label{fig:fig3}
\end{figure}
Interestingly, high-resolution d$I$/d$V$ spectra on both molecules contain a dip close to zero bias, which has a pronounced asymmetry on TTF sites, as shown in Fig.~\ref{fig:fig3}a.
To investigate its origin, we have examined its dependence on temperature and on out-of-plane magnetic field. Care was taken to record these spectra on the same molecule and with the same microscopic tip apex. Fig.~\ref{fig:fig3}b shows magnetic field dependent d$I$/d$V$ spectra on the TTF sites of the CTC lattice in the range of 0 to 10 T. There is no measurable change in the shape or size of the dip, nor in the observed asymmetry, up to a magnetic field of 10 T. In contrast, a clear temperature dependence is observed in Fig.~\ref{fig:fig3}c, which shows the temperature-dependent d$I$/d$V$-spectroscopy recorded on TTF sites of the CTC from 2.7 K to 20 K (data on the F$_4$TCNQ site are shown in SI Fig.~S7a). The asymmetric dip is most prominent at the lowest temperature of 2.7 K. The dip amplitude decreases with increasing temperature, and at 20 K only a step at zero bias remains. The temperature dependence of the zero-bias conductance (ZBC) extracted from these spectra
clearly exhibits saturation of the ZBC at temperatures between 15 and 20 K. This change in the ZBC indicates the presence of a low-temperature correlated state, which we discuss in more detail below.
\begin{figure}[h]
\includegraphics[width=0.8\textwidth]{fig4}
\caption{Deconvoluting the low-bias features of the d$I$/d$V$ spectra. (a) Short-range d$I$/d$V$-spectrum on TTF molecules. The curve has been fitted with the sum of two Fano functions: Fano-1 (broken black line) represents the central dip and Fano-2 (red line) represents the step. The final fit is indicated by the blue line. (b) Temperature-dependent evolution of the HWHM extracted from the two Fano functions (Fano-1: left, Fano-2: right) from the fits. (c) Short-range d$I$/d$V$-spectroscopy on CTC islands, recorded on an F$_4$TCNQ molecule, showing steps at energies of $\sim$2 (shown by arrow 1), $\sim$31 (arrow 2), $\sim$35 (arrow 3) and $\sim$52 meV (arrow 4). (d) Temperature-dependent evolution of the steps at $\sim$2 meV (Step-1: left) and at $\sim$52 meV (Step-4: right).}
\label{fig:fig4}
\end{figure}
The temperature-dependent spectroscopy shows that the overall asymmetry of the spectra and the amplitude of the dip are reduced with increasing temperature. At 20 K, the dip feature is no longer visible, while the asymmetry (a step at the Fermi energy) is still present in the spectra. This suggests that the spectrum can be deconvoluted into a dip and a step: the dip vanishes at 20 K while the step remains visible at that temperature.
The deconvolution of a spectrum measured on a TTF site in the CTC is shown in Fig.~\ref{fig:fig4}a. The entire spectrum (note the wider bias range here compared to Fig.~\ref{fig:fig3}a) can be well fitted (details of the fittings are described in the Methods section) by a sum of two Fano lineshapes.\cite{fano1961effects} The effects of the spectral broadening due to the bias modulation and the temperature have been deconvoluted (see \textit{Methods} section) to obtain the intrinsic width of the lineshapes. Fig.~\ref{fig:fig4}b summarizes the temperature dependence of the half-width at half-maximum (HWHM) of the two Fano lineshapes used to fit the spectra on the TTF site. The HWHM of the Fano lineshape corresponding to the dip at zero bias (Fano-1) shows a clear scaling with temperature. On the other hand, the HWHM of the step-like Fano lineshape (Fano-2) has a weaker temperature dependence. While the Fano lineshape is taken here as a phenomenological description of the measured spectra, the choice is not completely arbitrary, as it typically arises in situations where two interfering tunneling pathways are present. For example, it is widely observed on Kondo impurities, where the interference occurs between a direct tip-sample tunneling path and a tunneling path \textit{via} the Kondo impurity.\cite{li1998kondo,madhavan1998tunneling,nagaoka2002temperature,Ternes2015_review} In fact, a spectral shape combining a step-like Fano lineshape with a smaller energy gap-like feature -- very similar to our measurements -- has been observed on the heavy fermion compound URu$_2$Si$_2$.\cite{Aynajian10383} There, the spectral response was explained by a combination of Kondo screening of the uranium $f$-electrons and the gap-like feature resulting from a transition to a hidden order phase at low temperatures.
Intriguingly, the d$I$/d$V$-spectra recorded on the F$_4$TCNQ molecules of the CTC (Fig.~\ref{fig:fig4}c -- the bias range is again wider than in Fig.~\ref{fig:fig3}a) show additional step-like features at higher biases, \textit{viz}.~at $\pm31$, $\pm35$ and $\pm52$ mV. These steps can be attributed to inelastic electron tunneling processes exciting molecular vibrations of the negatively charged F$_4$TCNQ molecules.\cite{Garnica2014,Fernandez-Torrente2008}
The tunneling electrons can excite a molecular vibration once the sample bias matches the energy of the corresponding vibrational mode.\cite{Lorente2001_PRL,delaTorre2017_PRL,IETS_review} The inelastic process corresponds to the opening of an additional tunneling channel and a sudden increase in the tunneling conductance. To corroborate this picture, we assess the phonon modes of the CTC monolayer with DFT (details in the Methods). There is good agreement between the energies of the measured steps and the calculated energies of certain CTC phonon modes with a high electron-phonon coupling strength. Additionally, the calculated modes with strong coupling strength near the energies of the inelastic steps are dominated by F$_4$TCNQ vibrations (see SI Fig.~S8 for details). This is consistent with our experiments, where we see the inelastic steps only on the F$_4$TCNQ sites of the CTC.
Although DFT calculations indicate the presence of intermolecular phonon modes with energies of a few meV, the temperature dependence of the dip close to zero bias does not fit with thermally broadened inelastic steps. If we force a fit with an inelastic step to the data (feature marked with ``1'' in Fig.~\ref{fig:fig4}c), the position of the fitted step would be strongly temperature dependent (Fig.~\ref{fig:fig4}d, black symbols), which is not expected for inelastic features. The zero-bias feature also washes out more quickly with temperature than what would be expected for a vibrational transition, and at 15 K or above, only an asymmetric step remains, supporting the notion that it is the result of a gap-closing transition. This is illustrated in Fig.~S7b, which shows the expected temperature dependence for an inelastic step using the parameters extracted from the experimental spectrum acquired at $T=2.7$ K. As can be clearly seen, the predicted trend does not match the experimental results in Fig.~S7a, which gives a strong indication that the zero-bias dip feature does not correspond to inelastic steps.
Considering the width of the dip, we should be able to resolve a possible magnetic-field-induced splitting if this feature arose from any spin-related phenomena such as the Kondo effect or spin-flip inelastic transitions.\cite{Ternes2015_review,Ternes2017_review} However, we do not observe any such changes with a magnetic field up to 10 T, as shown in Fig.~\ref{fig:fig3}b. While the Kondo effect has earlier been observed in a TTF-TCNQ CTC monolayer on Au(111),\cite{Fernandez-Torrente2008} the Kondo coupling is expected to be generally weak on graphene.\cite{Fritz2013_RepProgPhys} Finally, experiments on CTCs deposited on graphene directly on Ir(111) show a very similar response (see SI Fig.~S9). The two substrates differ significantly in terms of the doping level of graphene, which is expected to have a marked influence on the Kondo temperature.\cite{Fritz2013_RepProgPhys,Chen2011,Jiang2018}
Further, CTCs are also known to exhibit superconductivity. However, in light of the spectroscopy measurements in high magnetic fields, a superconducting origin of the dip at the Fermi energy is also very unlikely. One would expect either quenching of, or at least changes in, the superconducting gap under a high field. We also do not observe the coherence peaks in the spectra that are usually associated with superconductivity.
\begin{figure}[h]
\includegraphics[width=0.65\textwidth]{fig5.png}
\caption{(a) STM topography image of the CTC at imaging parameters: 5 pA and -500 mV. The scale bar is 3 nm. (b) A contrast-optimized version of the topography in panel (a) shows periodic topography modulations (white lines are guides to the eye). (c,d) Two-dimensional fast-Fourier transform (2D-FFT) of panel
(a) shows features corresponding to the CTC rectangular lattice (marked by vectors \textbf{\emph{b$_{1}$}} and \textbf{\emph{b$_{2}$}}), spots due to the underlying graphene moir\'e (white hexagon), and charge density wave modulations by vectors \textbf{\emph{u$_{1}$}}, \textbf{\emph{u$_{2}$}}, and \textbf{\emph{u$_{3}$}}. CDW wavelengths corresponding to \textbf{\emph{u$_{1}$}}, \textbf{\emph{u$_{2}$}} are approximately $3.25\times l_1$ and $3.25\times l_2$, while that corresponding to \textbf{\emph{u$_{3}$}} is $\sim$5 nm. The scale bar is 1 nm$^{-1}$.
}
\label{fig:cdw}
\end{figure}
The remaining explanations consistent with the spectral feature and its dependence on magnetic field and temperature include the formation of a charge-density wave (CDW) or Peierls instability at low temperatures; these correlated ground states have been commonly observed in bulk CTC materials.\cite{Jerome2004review,Nishiguchi1998CDW} The structure of this compound, both in the bulk and in our monolayer, is anisotropic: there is a much stronger electronic coupling along a certain lattice direction than in the perpendicular direction. This is also evident in the calculated band structure shown in Fig.~S6. This kind of anisotropic band structure is favourable for the formation of a CDW state as it naturally provides Fermi-surface nesting. This leads to a CDW driven by electron-phonon coupling, which is also in line with the picture for the bulk TCNQ-TTF phases.\mbox{\cite{Jerome2004review}}
The temperature dependence of the ZBC (Fig.~\ref{fig:fig3}d) clearly indicates a transition temperature of 15-20 K, which is close to the expected temperature range of a CDW or Peierls transition; for example, in the bulk TTF-TCNQ CTC this is 54 K.\cite{PhysRevB.16.5238} Finally, the ground state associated with CDW breaks the symmetry of the system and results in a superstructure arising from modulations in electron density or CTC atomic structure.
Fig.~\mbox{\ref{fig:cdw}}a shows an STM topography image of the CTC. A contrast-optimized version of the same image in Fig.~\mbox{\ref{fig:cdw}}b shows a periodic modulation of the topography (white lines are guides to the eye), which can be better understood using the 2D FFT. Figs.~\mbox{\ref{fig:cdw}}c and d show the 2D-FFT images of the topography with the various spots identified.
The set of spots marked by vectors \textbf{\emph{b$_{1}$}} and \textbf{\emph{b$_{2}$}} corresponds to the CTC rectangular lattice (see SI Fig.~S6), while the spots due to the underlying graphene moir\'e are indicated by a white hexagon. The vectors \textbf{\emph{u$_{1}$}}, \textbf{\emph{u$_{2}$}}, and \textbf{\emph{u$_{3}$}} indicate the presence of longer-wavelength charge-density wave modulations. The CDW wavelengths corresponding to \textbf{\emph{u$_{1}$}} and \textbf{\emph{u$_{2}$}} are approximately $3.25 \times l_1$ and $3.25\times l_2$, while that corresponding to \textbf{\emph{u$_{3}$}} is $\sim$5 nm. Here, \textbf{\emph{l$_{1}$}} and \textbf{\emph{l$_{2}$}} are real-space lattice vectors perpendicular to and along the TTF/F$_4$TCNQ molecular rows, respectively.
This provides further evidence for the presence of a CDW/Peierls ground state in the TTF-F$_4$TCNQ CTC monolayer at low temperatures, causing a gap in the density of states at the Fermi energy.
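The FFT analysis used to identify the modulation vectors can be sketched as follows. This is an illustrative reimplementation assuming a square image, verified against synthetic data rather than the actual topographs:

```python
import numpy as np

def fft_peaks(topography, pixel_size):
    """Power spectrum of a square STM topograph and the corresponding
    spatial-frequency axis (a modulation of wavelength L peaks at |q| = 1/L)."""
    z = topography - topography.mean()   # remove the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(z)))**2
    freqs = np.fft.fftshift(np.fft.fftfreq(topography.shape[0], d=pixel_size))
    return power, freqs
```

Peaks at the CTC reciprocal-lattice vectors, the moir\'e spots, and any longer-wavelength CDW satellites can then be read off directly from the power spectrum.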
\section*{Conclusions}
In conclusion, we have synthesized a monolayer of the charge-transfer complex TTF-F$_4$TCNQ on a weakly interacting epitaxial graphene substrate, and have investigated its intrinsic electronic properties. TTF and F$_4$TCNQ molecules assemble into close-packed islands with alternating rows of TTF and F$_4$TCNQ molecules in a 1:1 stoichiometry. Low-temperature STM and STS measurements confirm the formation of a charge-transfer complex with d$I$/d$V$ spectra consistent with the presence of TTF cations and F$_4$TCNQ anions. High-resolution spectroscopy at low temperatures and high magnetic fields shows the formation of a correlated ground state related to a CDW or Peierls instability with a transition temperature of 15-20 K. This work demonstrates CTC monolayers as an intriguing example of two-dimensional materials with low-temperature correlated ground states.
\section*{Methods}
\textit{Sample preparation.} The experiments were carried out in ultra-high vacuum (UHV), low-temperature scanning tunneling microscopes (STMs) (Createc LT-STM and Unisoku USM-1300). Both STMs are equipped with a preparation chamber and operate at a base pressure lower than $1\times10^{-10}$ mbar. The sample was prepared by depositing F$_4$TCNQ and TTF molecules sequentially on an oxygen-intercalated graphene on Ir(111) substrate. The Ir(111) surface was cleaned by repeated cycles of sputtering with Ne ions at 1.5 kV and annealing at 900 $^\circ$C in an oxygen environment, followed by flashing to 1300 $^\circ$C. Epitaxial graphene was grown using ethylene gas with a combination of temperature programmed growth (TPG) and chemical vapour deposition (CVD) steps to achieve a nearly full monolayer coverage of graphene.\cite{NDiaye2008,coraux2009growth,michely_prl_2011,Hamalainen2013} In the TPG step, the cleaned Ir(111) substrate was exposed to the ethylene gas for one minute at a pressure of $1\times10^{-6}$ mbar, followed by heating the substrate to 1300 $^\circ$C. The CVD step was carried out at this temperature by exposing the substrate to ethylene gas at $3\times10^{-7}$ mbar for 60 s. This gives nearly a monolayer coverage of graphene on Ir(111) (G/Ir(111)). Oxygen intercalation of G/Ir(111) (G/O/Ir(111)) was carried out by exposure to $9\times10^4$ L of oxygen at 225 $^\circ$C, as reported in Ref.~\cite{Martinez-Galera2016}.
The charge-transfer complex (CTC) was synthesized by first depositing $\sim$0.25 monolayer of F$_4$TCNQ molecules on a G/O/Ir(111) surface at low substrate temperature ($\approx 100$ K), followed by deposition of a similar amount of TTF molecules at a similar substrate temperature. This resulted in disordered islands of CTC on the surface. The sample was annealed at room temperature for 15-45 mins. to allow the formation of highly ordered CTC islands. While F$_4$TCNQ molecules were evaporated using a Knudsen cell heated to 92 $^\circ$C, TTF molecules were evaporated from a home-made evaporator kept at a temperature of 23 $^\circ$C. The deposited amounts of the two molecules were adjusted to 1:1 stoichiometry (each of them at less than a half monolayer coverage). Subsequently, the sample was transferred into the low-temperature STM housed within the same UHV system.
\textit{STM measurements.} The STM experiments were carried out at a temperature of 4.2 K unless otherwise stated. Temperature-dependent measurements were carried out in the Createc STM, while magnetic field dependent measurements were carried out in the Unisoku STM. For the measurements at 2.7 K, the LHe cryostat of the STM was pumped, while measurements at temperatures higher than 4.2 K were achieved by heating the STM with a Zener diode installed on the STM scanner. To avoid any ambiguity, the temperature-dependent measurements were carried out on the same F$_4$TCNQ and TTF molecules of the CTC assembly using the same tip. Similar precautions were taken for the magnetic field measurements, where the same molecules and the same tip were used for the full range of the magnetic field sweep. STM measurements were carried out using mechanically cut Pt/Ir tips. d$I$/d$V$-spectroscopy was performed using a standard lock-in technique, where voltage modulations with amplitudes of 10-15 mV and 1-2 mV were used for long-range and short-range spectroscopies, respectively. WSxM \cite{wsxm} and Gwyddion (\url{http://gwyddion.net/})\cite{gwyddion2} software were used to process all the STM images.
\textit{Fitting of the d$I$/d$V$-spectra.} We use two Fano lineshape functions to fit the short-range d$I$/d$V$ spectrum in Fig.~\ref{fig:fig4}a. The Fano lineshape function is:
\begin{equation*}
f_\mathrm{Fano}(\epsilon) = A\frac{(q + \frac{\epsilon-\epsilon_0}{\Gamma})^2}{1 + (\frac{\epsilon-\epsilon_0}{\Gamma})^2} + c_1
\end{equation*}
where $A$ is the prefactor, $\epsilon$ is the energy, $\epsilon_0$ is the offset from zero, $\Gamma$ is the half width at half maximum, $q$ is the Fano parameter, and $c_1$ is a constant background term. We first fit a step-like Fano lineshape to capture the step of the spectrum (Fano-2), excluding the central dip during the fitting. We then subtract the step-like Fano fit (red line in Fig.~\ref{fig:fig4}a) from the spectrum to obtain a central dip, which is fitted using a dip-like Fano lineshape (Fano-1). The fitting process is repeated for all the recorded spectra at the indicated temperatures to extract the HWHM of the two Fano lineshapes as a function of temperature.
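As an illustration, a minimal numerical sketch of such a Fano fit on synthetic data (not the measured spectra; all parameter values below are made up for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(eps, A, q, eps0, gamma, c1):
    """Fano lineshape: A*(q + x)^2 / (1 + x^2) + c1, with x = (eps - eps0)/gamma."""
    x = (eps - eps0) / gamma
    return A * (q + x) ** 2 / (1 + x ** 2) + c1

# Synthetic "spectrum" standing in for a measured dI/dV curve.
rng = np.random.default_rng(0)
eps = np.linspace(-50e-3, 50e-3, 401)          # bias (V)
true_params = (1.0, 0.1, 0.0, 5e-3, 0.2)       # A, q, eps0, Gamma (HWHM), c1
y = fano(eps, *true_params) + 0.005 * rng.normal(size=eps.size)

# Recover the parameters, in particular the HWHM Gamma.
popt, _ = curve_fit(fano, eps, y, p0=(0.5, 0.0, 1e-3, 3e-3, 0.0))
```

The same routine, applied at each temperature, yields the HWHM-versus-temperature data discussed in the text.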
To fit the temperature dependence of the four pairs of step features seen in Fig.~\ref{fig:fig4}c (four on each side of zero bias), we use a series of symmetric Fermi-Dirac distribution functions as a function of energy, $\epsilon$:
\begin{equation*}
\begin{split}
f_\mathrm{step}(\epsilon) & = \sum\limits_{i=1}^{4} \left(f_{FD}^+ + f_{FD}^-\right) + s\epsilon + c_2 \\
& = \sum\limits_{i=1}^{4} \left(a^+_i \frac{1}{1+e^{\frac{\epsilon+\epsilon_i}{k_{B}T}}} + a^-_i\left(1-\frac{1}{1+e^{\frac{\epsilon-\epsilon_i}{k_{B}T}}}\right)\right) + s\epsilon + c_2
\end{split}
\end{equation*}
where $i$ is the step number, $a^+_i$ is the amplitude of the $i^{th}$ step for $\epsilon>0$, $a^-_i$ is the amplitude of the corresponding step at $\epsilon<0$, $\epsilon_i$ is the position of the $i^{th}$ step, $k_B$ is the Boltzmann constant, $T$ is the temperature, $s$ is the slope, and $c_2$ is a constant background term.
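A hedged numerical sketch of this step model, with hypothetical amplitudes and step positions chosen only for illustration:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def fd_steps(eps, a_pos, a_neg, eps_i, T, s=0.0, c2=0.0):
    """Sum of symmetric Fermi-Dirac steps plus a linear background s*eps + c2."""
    out = s * eps + c2
    for ap, an, e0 in zip(a_pos, a_neg, eps_i):
        # First term ~ ap below -e0 and ~0 above; second term ~ an above +e0.
        out = out + ap / (1 + np.exp((eps + e0) / (K_B * T)))
        out = out + an * (1 - 1 / (1 + np.exp((eps - e0) / (K_B * T))))
    return out

# One illustrative pair of steps at +/-10 meV at 4.2 K.
eps = np.array([-0.03, 0.0, 0.03])   # energies (eV)
vals = fd_steps(eps, a_pos=[0.5], a_neg=[0.3], eps_i=[0.01], T=4.2)
```

Far below the negative step the model saturates at $\sum_i a^+_i$, far above the positive step at $\sum_i a^-_i$, and it is flat between the steps.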
The recorded spectra are broadened by thermal contributions as well as by the applied lock-in modulation voltage. These effects have to be deconvolved to obtain the intrinsic lineshape. To correct for the lock-in modulation voltage ($V_m$), we use the broadening function:
\begin{equation*}
f_{V_{m}}(\epsilon) = \frac{2}{\pi}\Re\frac{\sqrt{V_m^2 - \epsilon^2}}{V_m^2}
\end{equation*}
where $\Re$ denotes the real part. To account for thermal broadening due to
the temperature $T$ of the tip, we use the derivative of the Fermi-Dirac distribution:
\begin{equation*}
f_{T}(\epsilon) = \frac{\partial}{\partial\epsilon}\left(\frac{1}{1+e^{\frac{\epsilon}{k_{B}T}}}\right)
\end{equation*}
Finally, the simulated LDOS is obtained by convolving these functions, either
\begin{equation*}
f_\mathrm{total}^\mathrm{Fano}(\epsilon) = f_\mathrm{Fano} * f_{V_{m}} * f_{T}
\end{equation*}
or,
\begin{equation*}
f_\mathrm{total}^\mathrm{step}(\epsilon) = f_\mathrm{step} * f_{V_{m}} * f_{T}
\end{equation*}
The simulated LDOS is fitted to the experimental d$I$/d$V$ spectra to obtain the intrinsic linewidth $\Gamma$ in the first case and the step-positions in the second case.
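The two broadening kernels and their convolution can be sketched numerically as follows; this is a simplified stand-in for the actual fitting code, and the grid, modulation amplitude, and intrinsic lineshape below are illustrative choices only:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def lockin_kernel(eps, Vm):
    """Lock-in broadening: (2/pi) * Re{sqrt(Vm^2 - eps^2)} / Vm^2 (a semicircle)."""
    return (2 / np.pi) * np.sqrt(np.clip(Vm**2 - eps**2, 0.0, None)) / Vm**2

def thermal_kernel(eps, T):
    """Magnitude of the Fermi-Dirac derivative: a normalized peak of width ~k_B*T."""
    x = -np.abs(eps) / (K_B * T)   # symmetric, numerically stable form
    return np.exp(x) / (K_B * T * (1 + np.exp(x)) ** 2)

eps = np.arange(-10e-3, 10e-3, 2e-5)   # energy grid (eV)
de = eps[1] - eps[0]
k_mod = lockin_kernel(eps, Vm=2e-3)
k_th = thermal_kernel(eps, T=4.2)

# Simulated LDOS: an intrinsic lineshape (here a narrow Gaussian dip as a toy
# stand-in for f_Fano or f_step) convolved with both broadening kernels.
intrinsic = 1.0 - 0.5 * np.exp(-eps**2 / (2 * (1e-3) ** 2))
total = np.convolve(np.convolve(intrinsic, k_mod * de, mode="same"), k_th * de, mode="same")
```

Both kernels integrate to unity, so the convolution broadens the dip without changing the overall background level.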
\textit{DFT calculations.} Density functional theory calculations are performed with the full potential, all-electron, numeric atom-centered orbital code FHI-AIMS.\cite{aims1,aims2,Xinguo/implem_full_author_list,Levchenko/etal:2015} We use the standard FHI-AIMS `light' pre-constructed basis sets of numeric atomic orbitals. Supercell calculations are performed with a $8\times4$ $\Gamma$-centered $\mathbf{k}$-point sampling. We use the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation to the exchange-correlation functional.\cite{pbe} Van der Waals interactions are included with the pair-wise Tkatchenko-Scheffler correction.\cite{ts_dispersion} Atomic forces are relaxed to less than $10^{-2}$ eV/\AA. Vibrations are calculated with the finite difference method. Electron-phonon coupling constants are based on the electronic friction approach.\cite{askerka_prl_2016,maurer_prb_2016} In pursuit of open materials science,\cite{Himanen2019} the DFT relaxed geometry of the monolayer is available in the NOvel MAterials Discovery (NOMAD) repository \cite{NOMAD}.
\begin{suppinfo}
Spectroscopy on oxygen-intercalated graphene on Ir(111); disordered islands and checker-board phases of CTC; assembly of single component F$_4$TCNQ and TTF molecules on G/O/Ir(111); electronic band structure of CTC with or without graphene; temperature-dependent spectra -- experiment and simulated; molecular vibrations of CTC; and assembly and spectroscopy of CTC grown on G/Ir(111).
\end{suppinfo}
\begin{acknowledgement}
We thank Jose Lado for discussions. This research made use of the Aalto Nanomicroscopy Center (Aalto NMC) facilities and was supported by the European Research Council (ERC-2017-AdG no.~788185 ``Artificial Designer Materials'') and Academy of Finland (Academy professor funding nos.~318995 and 320555, postdoctoral researcher nos.~309975 and 316347). We gratefully acknowledge high performance computing resources from the Aalto Science-IT project and the CSC-IT Center for Science, Finland.
\end{acknowledgement}
% arXiv:2102.09249
\section{Introduction}
There have recently been impressive advances in \emph{deep generative modeling}
techniques, which have made new industrial applications possible.
These include new ways of collaborating on sensitive data with
strong privacy guarantees such as the release of synthetic microdata
by the \emph{US Census Bureau} \cite{benedetto2018creation}.
The idea behind this application of synthetic data is that training a generative model
with \emph{differential privacy} \cite{abadi2016deep, jordon2018pate}, and then
sharing the data generated from the model, enables the sharing of statistical properties
of privacy-sensitive data, without sharing the data itself.
There is a long line of work on \emph{differential privacy} \cite{dwork2006calibrating, dwork2014algorithmic},
which establishes it as a solid notion of privacy. This paper does not
develop this aspect further, focusing instead on generative modelling in this particular context.
A very common use-case for generating synthetic data is the need to share
privacy sensitive \emph{tabular data} mixing together various column types
such as numerical, categorical, or \emph{richer types} like sequences or images.
Furthermore, in training a model with \emph{differentially private} techniques such
as \emph{DP-SGD} \cite{abadi2016deep}, one generally wants to interact as little as
possible with the private data to minimize privacy loss, and therefore one wants to pre-train
the model with public datasets. In practice, these public datasets often have partially
overlapping sets of features.
To be pretrained on many of these heterogeneous datasets, a synthetic data model needs
to handle missing data properly, in a way similar to semi-supervised learning \cite{kingma2014semi}.
This paper proposes a novel deep generative model architecture (see figure~\ref{fig:1}), the
\emph{Composable Generative Model{}} (\emph{CGM}), meeting the requirements specific to practical
privacy preserving synthetic data generation, namely:
\begin{itemize}
\item to accommodate a variety of column types: continuous, discrete, and potentially richer types, by \emph{composing} them
\item to handle missing data properly
\item to show good performance in modelling the joint distribution of data
\end{itemize}
The proposed architecture is autoregressive in the sense that each feature
of the data is generated sequentially conditional on the features already
generated. The order in which features are generated does not matter.
During training, the features are shuffled and each of them is encoded in a separate
\emph{input representation} vector to effectively bypass the bottleneck of encoding a variable length input
into a fixed size vector \cite{cho2015describing}.
The input representations are then combined with \emph{column embeddings} and
passed through a causal transformer \cite{vaswani2017attention} that builds
a set of higher level representations in a potentially generalized many-to-many
cross-feature inference.
An \emph{output representation} of a feature is built from the input
representations of one or several other features and the \emph{embedding}
of the generated feature. The output representation for a feature sums up
the shared knowledge of one or more other features.
It is fed to a conditional generative model specialized
for this feature (e.g. CNNs for images, RNNs for sequences, etc).
These conditional generative models can be off-the-shelf pre-trained modules plugged
into our framework. An overview of the framework is pictured on
figure~\ref{fig:1}.
\textbf{This work provides the following contributions}:
\begin{itemize}
\item \emph{We describe a novel architecture} built upon the Transformer, using \emph{column embeddings}
to encode the feature position and \emph{shuffling features during training} to learn all
possible combinations of missing values.
\item Following \cite{xu2019modeling}, we evaluate our architecture on 13 datasets (6 standard datasets and 7 simulated),
compare it to 14 other generative models and demonstrate \emph{it beats the state of the art in tabular data
generation}
\item We specify a set of requirements any conditional generative model needs to implement
to be used as a component of the \emph{CGM}{}, \emph{enabling the use of richer datatypes} than numerical or categorical.
\end{itemize}
\section{Related Work}
\subsection{Tabular data generation}
Tabular data are very widespread in industry, yet relatively little data
generation research focuses on tabular data. Traditional models for tabular data
generation use decision trees, bayesian networks, or copulas to sample from a
data distribution.
Recent approaches have used GANs to generate tabular data. Some of them have
focused their efforts on generating Electronic Health Records (EHR).
MedGAN~\cite{choi2017generating} pretrains an autoencoder and then uses the
decoder as the final part of the generator, tableGAN~\cite{park2018data}
introduces convolutions in the generator, PATE-GAN~\cite{jordon2018pate} uses
the PATE framework to generate differentially private synthetic data while CTGAN
and TVAE~\cite{xu2019modeling} introduce mode-specific normalization and
conditional sampling to tackle mode collapse. The last paper also introduced a
useful benchmark framework called SDGym
\footnote{https://github.com/sdv-dev/SDGym} for comparing tabular data
generative models.
\subsection{Composable generative models}
Beyond the focus on tabular data, we designed our architecture so that it could be
trained on any subset of features without having to train an exponential number of models
(one for each possible subset).
Some early attempts in this direction were based on Restricted Boltzmann Machines (RBM)
\cite{srivastava2014multimodal}.
An important line of work is based on the variational auto-encoder (VAE).
Composable VAEs aim to provide a flexible mechanism to
compose generative models and adapt to arbitrary missing values patterns without having
to learn an exponential number of mapping networks.
MVAE~\cite{wu2018multimodal}, MMVAE~\cite{shi2019variational},
mmJSD~\cite{sutter2020multimodal} and MHVAE~\cite{vasco2020mhvae} propose to
have one specialized encoder per modality (called expert) and a rule for
combining the experts' outputs together to infer the latent variable.
Our proposed approach leverages the \emph{Transformer} architecture to combine
several conditional generator sub-models.
\subsection{Transformers}
The Transformer \cite{vaswani2017attention} has been shown to excel in several
modalities such as natural language \cite{devlin2018bert, radford2019language, brown2020language},
image \cite{parmar2018image,dosovitskiy2020image} or music
\cite{huang2018music}. As each self-attention layer has a global receptive
field, the network can give more importance to the input regions most useful for
predicting a given point. Thus the architecture may be more flexible at
generating diverse data types than networks with fixed connectivity patterns
\cite{child2019generating}.
At the core of the Transformer is the \emph{attention} operation which can be seen as a
list of queries on a set of key and value pairs. We say that the attention is
causal if the $k$th representation vector depends only on the $k-1$ first
values. Considering a sequence of length $l$, a (key, query) dimension $d_k$ and
a value dimension $d_v$, we can define the \emph{attention} and
\emph{causal attention} operations as follows:
\begin{equation} \label{eq2}
\begin{split}
\text{attention}(Q,K,V) & = \text{softmax}\left(\frac{Q K^T}{\sqrt{d_k}}\right) V \\
\text{causal\_attention}(Q,K,V) & = \text{softmax}\left(\frac{Q K^T}{\sqrt{d_k}}-M\right) V
\end{split}
\end{equation}
where $Q$ and $K$ have shape $(l, d_k)$, $V$ has shape $(l,d_v)$, and
the mask $M$ is a strictly upper triangular matrix with shape $(l, l)$ with
$M_{ij}=\infty \cdot 1_{j > i}$, which enforces causality by setting the
contribution of input $j$ to output $i$ to $0$ if $j>i$.
Following \cite{vaswani2017attention, radford2019language, brown2020language},
we will name \emph{self-attention} (or \emph{causal-self-attention}) the use of a multi-head \emph{attention}
(or \emph{causal-attention}) operation with linear transformations of the same vector passed as $Q$, $K$ and $V$ arguments.
And we will name \emph{cross-attention} (or \emph{causal-cross-attention}) the use of a multi-head \emph{attention}
(or \emph{causal-attention}) operation with linear transformations of the same vector passed as $K$ and $V$ arguments and
linear transformations of another vector as $Q$.
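A minimal NumPy sketch of these operations with the shapes above (single head, no learned projections):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def causal_attention(Q, K, V):
    l, d_k = Q.shape
    scores = Q @ K.T / np.sqrt(d_k)
    scores[np.triu(np.ones((l, l), dtype=bool), k=1)] = -np.inf  # mask j > i
    return softmax(scores) @ V

rng = np.random.default_rng(0)
l, d_k, d_v = 5, 4, 3
Q = rng.normal(size=(l, d_k))
K = rng.normal(size=(l, d_k))
V = rng.normal(size=(l, d_v))
out = causal_attention(Q, K, V)
```

Masking with $-\infty$ before the softmax zeroes the attention weights on future positions, so output $i$ depends only on values $j \leq i$.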
\section{The Composable Generative Model{}}
\subsection{Model definition}
Let us consider a tabular dataset consisting of categorical and real values.
Cells can also contain missing values. Throughout this section, we encode real
values as categorical variables by quantizing them using their quantiles. We could
also use \emph{mode aware encoding} techniques as described in
\cite{xu2019modeling}.
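A sketch of this quantile-based encoding; the midpoint de-quantization is one simple choice, assumed here for illustration:

```python
import numpy as np

def quantile_edges(x, n_bins):
    """Bin edges at evenly spaced quantiles of the column."""
    return np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))

def quantize(x, edges):
    """Map each value to a category index in [0, n_bins - 1] using interior edges."""
    return np.clip(np.digitize(x, edges[1:-1]), 0, len(edges) - 2)

def dequantize(cats, edges):
    """One simple decoding choice: the midpoint of each bin."""
    mids = 0.5 * (edges[:-1] + edges[1:])
    return mids[cats]

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
edges = quantile_edges(x, 10)
cats = quantize(x, edges)
```

Because the edges are quantiles, the resulting categories are roughly equally populated, which keeps the categorical cross-entropy well conditioned.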
Formally, we define our dataset $\mathcal{D}$ as a set of $n$ \emph{features}
(or columns) $\mathcal{F}_k$ where the $k$th feature is $d_k$ dimensional i.e.
$\mathcal{F}_k \in \mathbb{R}^{d_k}$. The $i$th training example consists of
values $\mathcal{F}^i = \{f_1^i, ..., f_n^i\}$. Assuming training examples are
i.i.d. realizations of a random variable with distribution $\mathcal{F}^i\sim
\mathcal{P_D}$, our objective is to train a model that can sample from an
estimator $\mathcal{\hat{P}_D}$ maximizing the likelihood of the data.
We introduce an architecture built around a Transformer stack. We use the same
transformer stack architecture as GPT-2 and GPT-3
\cite{radford2019language, brown2020language}. The Transformer builds a sequence
of representations from the input values.
For a training example $i$ and feature $k$, our model is formally defined by
\begin{equation} \label{eq1}
\begin{split}
\mathcal{R}^i & = \text{causal\_transformer}(E(\mathcal{F}^i) + \mathcal{X}) \\
y_k^i & = \text{cross\_attention}(Q=x_k, (K,V)=\mathcal{R}_{<k}^i)
\end{split}
\end{equation}
Since each feature $f_k^i$ has a priori a different dimension (i.e. number of
categories), we first need to project the values to a common latent subspace so
that they can then be fed into the transformer (in the same way as words are
represented as dense vector embeddings). For this purpose, each feature
$\mathcal{F}_k$ has an \emph{encoder} $E_k$ that converts its data space to the
latent space $E_k\left(f_k^i\right) \in \mathbb{R}^h$.
Furthermore, as there is no natural ordering of features, for each feature
$\mathcal{F}_k$ we learn a \emph{column embedding} $x_k \in \mathbb{R}^h$ that
is integrated to the input by adding it to the latent value. The column
embedding can be thought of as an origin point of the feature in the latent
space $\{E_1(f_1^i)+ x_1, ..., E_n(f_n^i) + x_n\} = E(\mathcal{F}^i) +
\mathcal{X}$. The column embedding is learned and identifies the columns.
The \emph{causal-Transformer} is composed of a causal-self-attention operation
and a position-wise feedforward network both preceded by a normalization layer
and followed by a residual connection. The feedforward network has the following
structure where, if $X \in \mathbb{R}^d$, $W_1$ has shape $(4d,d)$ and $W_2$ has
shape $(d,4d)$:
\begin{equation} \label{eq5}
\begin{split}
\text{feedforward}(X)=W_2.\text{gelu}(W_1.X)
\end{split}
\end{equation}
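In NumPy, this block might read as follows (single vector, tanh-approximate GELU; the weights are random stand-ins, not trained parameters):

```python
import numpy as np

def gelu(x):
    """Tanh approximation of the GELU activation."""
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def feedforward(X, W1, W2):
    """Position-wise feedforward: project d -> 4d, apply GELU, project back 4d -> d."""
    return W2 @ gelu(W1 @ X)

rng = np.random.default_rng(0)
d = 8
W1 = rng.normal(size=(4 * d, d)) * 0.1   # shape (4d, d), as in the text
W2 = rng.normal(size=(d, 4 * d)) * 0.1   # shape (d, 4d)
X = rng.normal(size=d)
out = feedforward(X, W1, W2)
```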
\subsection{Conditional distribution}
At the output of the Transformer, the representation vector $y_k$ summarizes the
distribution of the $k^{th}$ feature conditionally on the previous features:
\begin{equation} \label{eq6}
\mathcal{P}(f_k|f_1,...,f_{k-1}) = \mathcal{P}(f_k|y_k)
\end{equation}
For each feature, we add a decoder $D_k$ that could be any off-the-shelf
conditional generation model and trained accordingly. The decoder $D_k$ allows
us to compare the conditional distribution to the expected data distribution.
The provided error signal for each feature is backpropagated to train the whole
model.
In this paper, we only consider the case of the conditional categorical
generative model. For this purpose, we reuse the embedding matrix $E_k$ used to
transform the one-hot encoded vectors into dense latent vectors. We multiply the
conditional vector $y_k \in \mathbb{R}^h$ with the transpose of the embedding matrix $E_k \in \mathbb{R}^{(h, d_k)}$ to
produce a $d_k$-dimensional vector representing the logits of the categorical
distribution $\mathcal{P}(f_k|y_k)$, where $d_k$ is the number of categories (or
quantiles) of the $k^{th}$ feature.
\begin{equation} \label{eq7}
\mathcal{P}(f_k|y_k) = \text{softmax}(E_k^Ty_k)
\end{equation}
The model then minimizes the loss $\mathcal{L}$, the categorical
cross-entropy across all columns, as written in equation~\ref{eq8} where $f_k$
is a one-hot encoded vector.
\begin{equation} \label{eq8}
\mathcal{L} = - \sum_k f_k \log(\text{softmax}(E_k^T y_k))
\end{equation}
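A small numerical sketch of this tied-embedding decoding and loss, with random stand-ins for $E_k$ and $y_k$:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
h, d_k = 16, 5                      # latent dimension, number of categories
E_k = rng.normal(size=(h, d_k))     # embedding matrix, reused as the output projection
y_k = rng.normal(size=h)            # output representation from the transformer

probs = softmax(E_k.T @ y_k)        # P(f_k | y_k) = softmax(E_k^T y_k)
f_k = np.eye(d_k)[2]                # one-hot ground-truth category for the example
loss_k = -np.sum(f_k * np.log(probs))
```

Tying the decoder to the embedding matrix keeps the per-feature parameter count at one matrix per column.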
\subsection{Training \emph{CGM}{}s}
\paragraph{Feature order} One of the advantages of this model compared to a
traditional architecture \cite{uria2016neural} is the fact that \emph{the
autoregressive order does not need to be fixed}. For each training example, a
random permutation of the input features is chosen and the probability
distribution of the $k^{th}$ feature conditionally on a random subset of
the remaining features is computed. The training algorithm is summarized in
algorithm~\ref{algo:training}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{composability.pdf}
\caption{\emph{CGM}{} can use any set of columns to make predictions.}
\label{fig:composability}
\end{figure}
\paragraph{Weakly-supervised setting} Another advantage of our model is its
ability to be trained in a weakly-supervised setting, that is, when some columns
have missing values. This is possible since the columns are referred to by
explicitly learned column embedding as shown in figure~\ref{fig:composability}.
\begin{algorithm}[h]
\SetAlgoLined Initialize $\theta_E, \theta_T, \theta_D$\;
\While{not converged}{
$\mathcal{H} \leftarrow E(\mathcal{F}) + \mathcal{X}$\;
$\mathcal{H} \leftarrow \text{permute}(\mathcal{H})$\;
$\mathcal{R} \leftarrow \text{causal\_transformer}(\mathcal{H})$\;
$y_k \leftarrow \text{cross\_attention}(Q=x_k, (K, V)=\mathcal{R}_{<k}), \forall k$\;
$\mathcal{L} \leftarrow - \sum_k f_k \log(\text{softmax}(E_k^T y_k))$\;
$\theta_E, \theta_T, \theta_D \leftarrow (\theta_E, \theta_T, \theta_D) -
\lambda \nabla_{\theta_E, \theta_T, \theta_D} \mathcal{L}$\;}
\caption{Training algorithm}
\label{algo:training}
\end{algorithm}
\subsection{Generation with \emph{CGM}{}s}
Once trained, the model can be used for generation. The first representation
vector is generated using an empty set of features. The
first feature is then sampled from this representation.
The other features are generated conditionally on the
ones already generated:
\begin{equation} \label{generation}
\begin{split}
\mathcal{R} & = \text{causal\_transformer}(E(\mathcal{F}) + \mathcal{X}) \\
y_k & = \text{cross\_attention}(Q=x_k, (K,V)=\mathcal{R}_{<k}) \\
f_k & \sim \mathcal{P}(f_k|y_k)
\end{split}
\end{equation}
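To make the sampling loop concrete, here is a loose toy version with a trivial stand-in for the causal transformer (a causal running mean) and tied-embedding decoders. It illustrates only the sequential control flow, not the trained model or its cross-attention readout:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, d = 4, 8, 3                        # features, latent dim, categories (toy sizes)
E = rng.normal(size=(n, d, h)) * 0.1     # per-feature embedding tables
X = rng.normal(size=(n, h)) * 0.1        # learned column embeddings (random here)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def toy_causal_transformer(H):
    """Stand-in for the real stack: a running mean is trivially causal."""
    return np.cumsum(H, axis=0) / np.arange(1, H.shape[0] + 1)[:, None]

feats, H = [], np.zeros((0, h))
for k in range(n):
    # Query with the column embedding of feature k over the representations so far.
    R = toy_causal_transformer(np.vstack([H, X[k][None, :]]))
    y_k = R[-1]
    p = softmax(E[k] @ y_k)              # tied-embedding decoder
    f_k = int(rng.choice(d, p=p))
    feats.append(f_k)
    H = np.vstack([H, (E[k][f_k] + X[k])[None, :]])
```

Each sampled category is re-encoded and appended to the context, so feature $k$ is drawn conditionally on all features generated before it.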
\section{Composing a model with richer types}
Because it is trained with \emph{feature shuffling}, it is
possible to remove or add features to the \emph{CGM}{} at any stage of the training.
Those features can be numerical or categorical as described above, but they could
be of a richer type provided the type implements the interface specified below:
\begin{description}
\item[Encoder] the type of feature $\mathcal{F}_k$ should be equipped with an
encoder mapping a value $f_k^i$ from the data space to an \emph{input representation}:
$E_k: \mathcal{F}_k \to \mathbb{R}^h$.
The encoder can be a neural network, parametrized by weights and trained along with those of the
transformer.
\item[Decoder] the type of feature $\mathcal{F}_k$ should be equipped
with a decoder sampling a value $f_k \in \mathbb{R}^{d_k}$ from an \emph{output representation}
$y_k \in \mathbb{R}^h$: $f_k \sim \mathcal{P}(f_k|y_k)$.
The decoder can be a neural network, parametrized by weights and trained along with those of the
transformer.
\item[Loss] the type of feature $\mathcal{F}_k$ should provide the loss to minimize during training
of the weights of the transformer along with \emph{encoder weights} and \emph{decoder weights}.
The loss can also be parametrized as a neural network and trained against an adversarial loss, thus
permitting the integration of \emph{Generative Adversarial Networks} \cite{goodfellow2014generative}
as the decoder part of a rich type (like images or sound).
\end{description}
When considering a \emph{categorical} feature, the \emph{encoder} is simply parametrized by an
embedding matrix giving the \emph{input representation} vector corresponding to the $i^{th}$ modality
by a simple look-up operation.
The \emph{decoder} is also parametrized by an embedding matrix converting an \emph{output representation}
vector to a vectors of logits fed into a softmax. Then a value is sampled from the probabilities derived
from the softmax.
The \emph{loss} is simply the categorical cross-entropy.
\emph{Numerical} features are converted to categorical ones by quantization. The \emph{encoder} and \emph{decoder}
have the same form as those of a categorical feature, simply composed with a quantization /
de-quantization step.
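The interface above can be sketched for the categorical case as a small class; the names and the tied-embedding parametrization are hypothetical, chosen to match the description rather than the actual implementation:

```python
import numpy as np

class CategoricalFeature:
    """Encoder / Decoder / Loss interface for a categorical feature type."""

    def __init__(self, d_k, h, rng):
        # Single tied embedding matrix, used by both encoder and decoder.
        self.E = rng.normal(size=(h, d_k)) * 0.1

    def encode(self, f):
        """Encoder: category index -> input representation in R^h (a look-up)."""
        return self.E[:, f]

    def _probs(self, y):
        z = self.E.T @ y
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def decode(self, y, rng):
        """Decoder: sample a category from softmax(E^T y)."""
        return int(rng.choice(self.E.shape[1], p=self._probs(y)))

    def loss(self, y, f):
        """Loss: categorical cross-entropy for ground-truth category f."""
        return -np.log(self._probs(y)[f])

rng = np.random.default_rng(0)
feat = CategoricalFeature(d_k=5, h=16, rng=rng)
y = rng.normal(size=16)
```

A richer type (images, sequences) would implement the same three methods with a CNN or RNN in place of the embedding look-ups.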
If one wants to produce synthetic data with columns of images, one can simply implement the specifications
above. The \emph{encoder} can be a deep convolutional network, the \emph{decoder} the generator network
of a GAN, and the \emph{loss} the discriminator network of the GAN. In that case, two adversarial
objectives are optimized during training, the minimization of the common loss of all the encoders, decoders
and transformer networks, and the minimization of the adversarial loss of the discriminators of image features.
Beyond images, one can integrate sequences using RNNs or Transformers, NLP and more. The setting with image,
categorical, and numerical features has been tested and works well, and further types are being
integrated into the model.
\section{Application to tabular data}
In this section we present the experiments we conducted to test the performance
and applications of our model on tabular data.
\begin{figure*}
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\linewidth]{sdgym_overview_syn.pdf}
\caption{SDGym process for synthetic datasets.} \label{fig:sdgym_syn}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\linewidth]{sdgym_overview_real.pdf}
\caption{SDGym process for real datasets.} \label{fig:sdgym_real}
\end{subfigure}
\caption{Overview of the SDGym scoring process \cite{xu2019modeling}.}
\label{fig:sdgym_overview}
\end{figure*}
\begin{figure*}[!htb]
\centering
\input{tables/real.tex}
\caption{Real datasets leaderboard. Each synthesizer is trained on a training split $\mathcal{T}_{train}$ of the real data. Synthetic data $\mathcal{T}_{syn}$ are generated and a classifier is trained on it. The classifier's accuracy and f1-score are then measured on a test split $\mathcal{T}_{test}$ of the real data.}
\label{fig:real}
\end{figure*}
\begin{figure*}[!htb]
\centering
\input{tables/synthetic.tex}
\caption{Synthetic datasets leaderboard. A parametric model $\mathcal{M}$ is used to generate training data $\mathcal{T}_{train}$ and test data $\mathcal{T}_{test}$. Each synthesizer is trained and generates synthetic data $\mathcal{T}_{syn}$. The log-likelihood of the synthetic data wrt. $\mathcal{M}$ is computed: $\mathcal{L}_{syn} = \log(P(\mathcal{T}_{syn}|\mathcal{M}))$. The parametric model is also refitted using $\mathcal{T}_{syn}$ and yields a new model $\mathcal{M}'$. The log-likelihood of the test data wrt. $\mathcal{M}'$ is computed: $\mathcal{L}_{test} = \log(P(\mathcal{T}_{test}|\mathcal{M}'))$.}
\label{fig:synthetic}
\end{figure*}
\begin{figure*}[!htb]
\centering
\input{tables/bayesian.tex}
\caption{Bayesian datasets leaderboard. A parametric model $\mathcal{M}$ is used to generate training data $\mathcal{T}_{train}$ and test data $\mathcal{T}_{test}$. Each synthesizer is trained and generates synthetic data $\mathcal{T}_{syn}$. The log-likelihood of the synthetic data wrt. $\mathcal{M}$ is computed: $\mathcal{L}_{syn} = \log(P(\mathcal{T}_{syn}|\mathcal{M}))$. The parametric model is also refitted using $\mathcal{T}_{syn}$ and yields a new model $\mathcal{M}'$. The log-likelihood of the test data wrt. $\mathcal{M}'$ is computed: $\mathcal{L}_{test} = \log(P(\mathcal{T}_{test}|\mathcal{M}'))$.}
\label{fig:bayesian}
\end{figure*}
We trained the model on tabular data and benchmarked against the SDGym
leaderboard. SDGym evaluates the performance of synthetic data generators on
three dataset families: simulated data using gaussian mixtures, simulated data
using bayesian networks and real world datasets \cite{xu2019modeling}.
We evaluated our model using the publicly available SDGym benchmark
\cite{xu2019modeling}. The model's performance at generating synthetic data is
measured against several other models on different types of datasets.
The test procedure is different for datasets generated with a parametric model
$\mathcal{M}$ (synthetic and bayesian) and real datasets as seen on
figure~\ref{fig:sdgym_overview}. For each dataset, the data is split into a
training set $\mathcal{T}_{train}$ and a test set $\mathcal{T}_{test}$. The
generative model is trained on $\mathcal{T}_{train}$ and a synthetic data set
$\mathcal{T}_{syn}$ is generated.
\paragraph{Synthetic and bayesian datasets} For generated datasets (synthetic
and bayesian datasets), the probability distribution of the data is known. We
can thus evaluate the log-likelihood of the synthesized data with respect to the
parametric model $\mathcal{M}$, that is $\mathcal{L}_{syn} =
\log(P(\mathcal{T}_{syn}|\mathcal{M}))$. However, such a metric favors
mode-collapsed models. To measure whether mode collapse has occurred, the parametric model
$\mathcal{M}$ that was used to generate $\mathcal{T}_{train}$ is refitted using
$\mathcal{T}_{syn}$ instead. This yields a new parametric model $\mathcal{M}'$.
The log-likelihood of the test set $\mathcal{T}_{test}$ with respect to
$\mathcal{M}'$ is then computed: $\mathcal{L}_{test} =
\log(P(\mathcal{T}_{test}|\mathcal{M}'))$.
\paragraph{Real datasets} For real datasets, the machine learning efficacy is
used to measure the quality of the synthetic data. A classification model is
trained using $\mathcal{T}_{syn}$ and the accuracy $acc$ and $f1$ scores are
measured on the test set $\mathcal{T}_{test}$.
For all datasets, we used a hidden dimension of 64 and a transformer composed of
2 blocks with 8 heads. We trained the generative model for 15 epochs with a
batch size of $128$ and the Adam optimizer with $\beta_1=0.5$, $\beta_2=0.99$, a
learning rate of $0.001$. All computations were done on a machine with 8 CPUs
and 2 NVIDIA V100 GPUs. Benchmarking on all SDGym datasets took 4 hours.
\subsection{Results}
We compare our results with the public leaderboard provided on the SDGym web
page. The scores of several models have been pre-computed for comparison. We are
compared against the following models: \texttt{GCC} (Gaussian Copula
Categorical), \texttt{GCCF} (Gaussian Copula Categorical Fuzzy), \texttt{GCOH}
(Gaussian Copula One Hot), \texttt{CopulaGAN}, \texttt{CLBN}, \texttt{PrivBN},
\texttt{Medgan}, \texttt{Tablegan}, \texttt{CTGAN} and \texttt{TVAE}.
Trivial synthesizers are also benchmarked: the \texttt{Uniform} synthesizer
generates data uniformly and serves as a lower bound likelihood estimator, the
\texttt{Independent} synthesizer makes an independence assumption and the
\texttt{Identity} synthesizer simply returns the training data and serves as a
likelihood upper bound estimator. The actual scores are presented on figures
\ref{fig:real}, \ref{fig:synthetic} and \ref{fig:bayesian}.
For model comparison, we will focus on the $\mathcal{L}_{test}$ and f1-scores
since the other metrics (accuracy and $\mathcal{L}_{syn}$) do not reflect the
synthetic data quality as well. To easily visualize the results, we put in
\textbf{bold} the best result in each column.
As we can see, \textbf{our model outperforms the other models on the real
datasets}. On the credit dataset, we can see that the \texttt{CopulaGAN} model
outperforms us, but it also outperforms the \texttt{Identity} synthesizer by
a large margin. Such a high performance, better than the perfect
\texttt{Identity} synthesizer, is odd and could be attributed to the fact that
the synthesized data are better suited to train the subsequent classifier
whereas the real data may contain outliers that perturb the classifier's
training process.
On synthetic datasets, our model is the leader on the
\texttt{gridr} and \texttt{ring} datasets. It is slightly outperformed by the
trivial \texttt{Independent} synthesizer on the \texttt{grid} dataset but is the
second best model by a large margin.
On bayesian datasets, our model is the leader on the \texttt{alarm}
and \texttt{insurance} datasets. On the \texttt{asia} dataset, our model is very
close to the \texttt{PrivBN} synthesizer, though slightly
outperformed.
\section{The \emph{CGM}{} in production}
The \emph{CGM}{} is used in production at \href{https://sarus.tech}{Sarus Technologies}
for the generation of synthetic data.
\href{https://sarus.tech}{Sarus} is a technology company developing solutions
to analyze privacy-sensitive data with formal privacy guarantees.
In production, the \emph{CGM}{} is trained with \emph{DP-SGD} \cite{abadi2016deep} in order to provide
differential privacy guarantees to the data generator.
The solution has been applied in medical data sharing with the
\href{https://cesp.inserm.fr/}{French National Institute in Medical Research (INSERM)}
and it is being experimented in various institutions in finance, energy, and transportation.
\section{Conclusion}
Though we focus on categorical and numerical data in this paper, the \emph{CGM}{} can accommodate a wide range of types.
Besides, as demonstrated, the \emph{CGM}{} is able to be pre-trained on many public datasets before
it is trained on private data.
Both those improvements and the quality of the generated data
open new possibilities in privacy-preserving data analysis, enabling
new applications in health care, personal finance or public policy planning.
\bibliographystyle{alpha}